The first legally binding international treaty on artificial intelligence will be open for signature on Thursday by the countries that negotiated it, including the US, the UK, and members of the European Union, the Council of Europe human rights body said.
The long-awaited AI Convention, adopted in May after negotiations among 57 countries, addresses the risks AI can pose while promoting responsible innovation.
Britain’s justice minister, Shabana Mahmood, said in a statement that the Convention is a major step toward ensuring these new technologies can be harnessed without eroding fundamental values such as human rights and the rule of law.
The AI Convention is distinct from the EU AI Act, which entered into force last month. While the EU’s act lays down comprehensive rules for the development, deployment, and use of AI systems within the EU internal market, the Convention focuses chiefly on protecting the human rights of people affected by AI systems.
The Council of Europe, founded in 1949, is an international organization distinct from the EU with a mandate to safeguard human rights; its 46 member states include all 27 EU member states.
An ad hoc committee began examining the feasibility of an AI framework convention in 2019, and a committee on artificial intelligence was established in 2022 to draft and negotiate its text.
To give effect to the treaty’s provisions, signatories may adopt or maintain legislative, administrative, or other measures.
Francesca Fanucci, a legal expert at the European Center for Not-for-Profit Law Stichting (ECNL) who contributed to the treaty’s drafting alongside other civil society groups, said the agreement had been “watered down” into a broad set of principles.
The convention’s principles and obligations are so broadly formulated and hedged with caveats, she said, that they raise serious questions about their legal certainty and effective enforceability.
Fanucci pointed to flaws such as exemptions for AI systems used for national security and looser scrutiny of private companies relative to the public sector. “This double standard is disappointing,” she added.
The UK government said it would work with regulators, devolved administrations, and local authorities to ensure it can implement the treaty’s new requirements effectively.
Australia plans AI rules on human oversight and transparency
Australia’s center-left government said on Thursday it plans to introduce targeted artificial intelligence rules, including human oversight and transparency requirements, amid the rapid rollout of AI tools by businesses and in everyday life.
Industry and Science Minister Ed Husic unveiled ten new voluntary AI guidelines and launched a month-long consultation to determine whether they should be made mandatory in high-risk settings in the future.
Husic said in a statement that while Australians recognize AI’s potential benefits, they want to know safeguards are in place if things go wrong. “Australians want stronger protections on AI, we’ve heard that, we’ve listened,” he said.
The report accompanying the guidelines says human control must be possible where needed throughout an AI system’s lifecycle.
“Meaningful human oversight will let you intervene if you need to and reduce the potential for unintended consequences and harms,” the report said, adding that businesses should be transparent about AI’s role in generating content.
The growing adoption of generative AI platforms such as Google’s Gemini and Microsoft-backed OpenAI’s ChatGPT has made regulators worldwide increasingly concerned about misinformation and fake news produced by AI tools.
In May, the European Union passed landmark AI legislation that imposes strict transparency obligations on high-risk AI systems, going further than the light-touch, voluntary compliance approach several other countries have adopted.
“We believe that the right to self-regulation has been superseded. We’ve crossed that threshold,” Husic said.
Australia introduced eight voluntary principles for responsible AI use in 2019 but still has no specific laws governing the technology. A government report released this year found existing safeguards inadequate for high-risk scenarios.
Husic said only one-third of businesses using AI were doing so responsibly in terms of safety, fairness, accountability, and transparency.
With artificial intelligence forecast to create up to 200,000 new jobs in Australia by 2030, he said, it is critical that Australian businesses are equipped to develop and use the technology effectively.