In an effort to encourage businesses to develop AI systems that are safe by design, the United States, Britain, and more than a dozen other nations unveiled what a senior U.S. official called the first comprehensive international agreement on how to keep artificial intelligence safe from rogue actors.
In a 20-page document released on Sunday, the 18 nations agreed that companies designing and using AI must develop and deploy it in a way that keeps customers and the wider public safe from misuse.
The non-binding agreement's recommendations are mostly general, such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.
However, Jen Easterly, the director of the U.S. Cybersecurity and Infrastructure Security Agency, said it was significant that so many nations had endorsed the idea that safety must be the primary concern of AI systems.
Easterly said this is the first time there has been an affirmation that these capabilities should not just be about cool features, how quickly they can be brought to market, or how aggressively companies can compete to drive down costs. The guidelines, she said, represent an agreement that the most important thing to get right at the design phase is security.
The agreement is the latest in a series of initiatives by governments worldwide to shape the development of artificial intelligence, whose influence is increasingly being felt in industry and society at large.
The 18 countries endorsing the new guidelines include the United States, Britain, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
The framework addresses how to keep AI technology from being hijacked by hackers, with recommendations such as releasing models only after thorough security testing.
It steers clear of thornier questions, such as the appropriate uses of artificial intelligence or how the data used to train these models is gathered.
The rise of AI has fed a host of concerns, including fears that it could be used to disrupt the democratic process, turbocharge fraud, or trigger dramatic job losses, among other harms.
On AI regulation, lawmakers in Europe are ahead of the United States and are currently drafting rules. France, Germany, and Italy also recently reached an agreement on how artificial intelligence should be regulated, one that supports “mandatory self-regulation through codes of conduct” for so-called foundation models of AI, which are designed to produce a broad range of outputs.
The Biden administration has lobbied lawmakers for AI regulation, but a polarized United States Congress has made little progress in passing effective legislation.
In October, the White House issued a new executive order aimed at strengthening national security and mitigating the risks associated with AI for workers, consumers, and minority groups.