In light of potential risks such as the development of deepfakes and algorithmic bias, the Albanese government is considering a ban on “high-risk” uses of artificial intelligence and automated decision-making.
The National Science and Technology Council’s study on emerging technologies and a discussion paper on how to develop “safe and responsible” AI will both be made public on Thursday by industry and science minister Ed Husic.
The use of generative AI, in which AI generates new text, graphics, audio, and code, has increased recently. Examples include the “large language model” programmes ChatGPT, Google’s chatbot Bard, and Microsoft Bing’s chat feature.
The industry department’s discussion paper cautions that AI has a range of “potentially harmful purposes”, as universities and education authorities debate the new technology’s role in student cheating.
These include the production of deepfakes to sway democratic processes or engage in other forms of deceit, the dissemination of false information, the encouragement of self-harm, and others.
According to the paper, one of the main risks of AI is algorithmic bias, which could favour male candidates over female ones in hiring, or target racial minorities.
The paper also highlighted some of the beneficial ways AI is already being used, such as analysing medical images, improving building safety, and delivering legal services more cheaply. It was not intended to address the effects of AI on the labour market, national security, or intellectual property.
According to the NSTC report, the concentration of generative AI resources in a small number of large multinational technology companies, headquartered primarily in the US, could put Australia at risk.
Australia has some strengths in computer vision and robotics, but its core foundational capacity in large language models and related areas is relatively weak because of high barriers to entry.
The report outlines a variety of international solutions, from Singapore’s voluntary approaches to increased regulation in the EU and Canada.
It noted that a risk-based approach to AI governance is gaining popularity globally.
According to the paper, the government wants to ensure there are adequate safeguards, particularly for high-risk applications of AI and automated decision-making.
In a brisk eight-week consultation, the paper asks stakeholders whether some high-risk AI applications or technologies should be banned outright and, if so, what criteria should be used to decide.
However, the paper noted that Australia may need to harmonise its governance with that of its major trading partners, both to benefit from AI-powered systems supplied globally and to foster the development of AI in Australia.
The paper urges stakeholders to consider the effects on Australia’s domestic tech sector, and on its existing export and trade activities with other nations, were a more stringent approach taken to outlawing certain high-risk practices.
Husic said that deploying AI responsibly is a balancing act the whole world is now trying to work out.
He stated in a statement that the potential benefits are enormous, whether they involve preventing online fraud or using novel medications developed by AI to combat superbugs.
However, as he has been saying for several years, the proper protections must be in place to guarantee the safe and responsible use of AI.
“Today is about what we will do in the future to increase public trust and confidence in these vital technologies,” he said.
The National AI Centre, which is part of the science agency CSIRO, and a new programme called Responsible AI Adopt for small and medium businesses each received $41 million from the federal government as part of the budget.
Because Australia’s laws are “technology neutral”, the paper observed, AI is already partly governed by existing consumer protection, online safety, privacy, and criminal law statutes.
For instance, the hotel comparison site Trivago was fined for using an algorithm that misled customers into believing they were getting the best deals.
In April, a regional Australian mayor threatened to sue OpenAI unless it retracted ChatGPT’s false claims that he had served time in prison for bribery, in what would have been the first defamation action against the automated messaging service.
The eSafety commissioner issued a warning in May about the possibility that predators could automate child grooming through the use of generative AI programmes.
Julian Hill, a Labor MP, has called for the creation of a new Australian AI Commission to oversee AI, after warning in February about unmanageable military applications of the technology.