President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft, and other companies leading the development of artificial intelligence to adhere to a set of AI safeguards negotiated by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.
Biden said seven American companies have voluntarily agreed to ensure their AI products are safe before releasing them. Some of the commitments call for third-party oversight of the workings of the next generation of AI systems, though they don’t specify who will audit the technology or hold the firms accountable.
Biden said the nation must be clear-eyed about the threats emerging technologies can pose, adding that companies have a “fundamental obligation to make sure their products are safe.”
Social media, Biden said, has shown the harm that powerful technology can do without the right safeguards in place. He called the agreements a promising first step but said there is much more work to do.
Increased commercial investment in generative AI tools that can produce impressively human-like writing, fresh images, and other media has aroused public excitement as well as worries about the tools’ potential to spread misinformation and fool people, among other risks.
The four tech giants, along with ChatGPT maker OpenAI and the startups Anthropic and Inflection, have committed to security testing carried out in part by independent experts to guard against major risks such as those to biosecurity and cybersecurity, according to a statement from the White House.
Testing will also look at the potential for societal damages like bias and discrimination as well as more theoretical threats from powerful AI systems that could take over physical systems or “self-replicate” by creating copies of themselves.
The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images and audio from AI-generated deepfakes.
In a private meeting with Biden and other officials on Friday, executives from the seven companies vowed to uphold the criteria.
In an interview after the White House event, Inflection CEO Mustafa Suleyman said the president was extremely firm and clear that he wanted the companies to keep innovating, but also felt the technology needed a lot of attention.
Bringing all the labs and companies together is a big deal, said Suleyman, whose Palo Alto, California-based startup is the youngest and smallest of the firms. Given how competitive the field is, he said, they wouldn’t otherwise cooperate.
The agreement also states that the businesses would disclose any dangers and weaknesses in their technology, including how they may affect fairness and bias.
The voluntary commitments are meant to address risks immediately, ahead of a longer-term push to get Congress to pass laws regulating the technology.
Some proponents of AI legislation stated that while Biden’s action is a positive step, more must be done to hold the businesses and their products accountable.
Amba Kak, executive director of the AI Now Institute, said a closed-door deliberation with corporate actors resulting in voluntary safeguards isn’t enough. A much broader public debate is needed, she said, and it will raise issues that companies almost certainly won’t voluntarily commit to, because the consequences would more directly affect their business models.
Even if voluntary, Suleyman said, agreeing to submit their AI systems to probing “red team” tests is not an easy commitment to make. The promise to have red-teamers essentially try to break their models, surface weaknesses, and then share those methods with other large language model developers is a pretty significant commitment, he said.
Senate Majority Leader Chuck Schumer, a New York Democrat, has said he will introduce legislation to regulate AI and is working closely with the Biden administration and bipartisan colleagues to build on the commitments made Friday.
A number of tech executives have called for regulation, and several of them attended an earlier White House gathering in May.
Microsoft President Brad Smith said in a blog post Friday that his company is making commitments that go beyond the White House pledge, including support for legislation that would create a licensing regime for highly capable models.
Some experts and upstart rivals are concerned that the proposed regulations may benefit the well-funded pioneers OpenAI, Google, and Microsoft, while pushing out smaller businesses due to the high expense of modifying AI systems to comply with regulations.
The White House pledge notes that the commitments mostly apply only to models that are overall more powerful than the current industry frontier, set by recently released models such as OpenAI’s GPT-4 and image generator DALL-E 2, along with similar releases from Anthropic, Google, and Amazon.
A number of countries have been weighing how to regulate AI, with lawmakers in the European Union drafting sweeping AI rules for the 27-nation bloc that could restrict applications deemed to carry the highest risks.
U.N. Secretary-General Antonio Guterres recently said the United Nations is “the ideal place” to adopt global standards, and he has appointed a board that will report back on options for global AI governance by the end of the year.
In addition, Guterres stated that he supported demands from some nations for the establishment of a new U.N. body to help international efforts to regulate AI, with models like the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change as inspiration.
According to a statement released by the White House on Friday, other nations have been consulted about the voluntary commitments.
The pledge focuses mainly on safety risks but doesn’t address other worries about the latest AI technology, including its effect on jobs and market competition, the environmental resources required to build the models, and copyright concerns about using writings, artwork, and other human creations to train AI systems to produce human-like content.
Last week, OpenAI and The Associated Press announced a deal under which the AI company will license the AP’s archive of news stories. The amount it will pay for that content was not disclosed.