The urgent call from scientists to pause the development of powerful AI systems and chart a safe path forward for technologies that could profoundly alter, or even endanger, human existence has failed, according to its organizers.
As the six-month moratorium called for by the Future of Life Institute expired, the U.S.-based institute’s executive director, Anthony Aguirre, told Newsweek that AI labs are recklessly rushing to build ever more powerful systems without robust solutions to make them safe. The institute’s letter was signed by more than 33,000 people, including Elon Musk, CEO of SpaceX, Tesla and X, and Apple co-founder Steve Wozniak. The situation, Aguirre said, could lead to a “suicidal AI arms race in which everyone loses.”
The March 22 letter, titled “Pause Giant AI Experiments: An Open Letter,” warned that AI systems with “human-competitive intelligence” can pose serious risks to society and humanity. It urged AI labs around the world to pause the training of systems more powerful than OpenAI’s GPT-4.
The letter came about a week after the debut of GPT-4, which OpenAI said demonstrated “human-level performance on various professional and academic benchmarks.” OpenAI’s ChatGPT had already taken the world by storm late last year with its human-like ability to seemingly absorb any topic and write fluently about it. Google joined the race to catch up with its Bard chatbot, alongside other companies.
Some critics of the Future of Life Institute’s call for a pause said the organization was trying to erode the competitive lead that OpenAI and others had built. The institute strongly denied this.
“We wrote the letter to raise awareness of the risks posed by unrestrained, out-of-control AI development,” Aguirre said by email, noting that those risks have since made news around the world. He pointed out that the U.S. Congress had held hearings on the dangers, that China had approved a law governing some forms of AI and that the EU had moved to pass its first laws regulating the technology. The U.S., Aguirre said, needs to establish a federal agency to address the problem.
Surveys show that most Americans are concerned about the potential negative effects of AI and would like to see a pause, he said. The letter was not simply a warning, he added; it also proposed measures such as licensing and auditing to help develop AI properly and safely. A majority of Americans support establishing a federal oversight agency, he said, and more than 80 percent do not trust AI companies to regulate themselves.
AI labs themselves acknowledge the major hazards and safety concerns, he said. But they cannot, or will not, say when or how a slowdown might happen. Leaders must be able to direct AI for the good of all, Aguirre said, with the technical and legal capacity to steer development and to halt it when it becomes dangerous.
He urged leaders to attend the AI Safety Summit scheduled for Nov. 1 and 2 in the United Kingdom at Bletchley Park, the site of groundbreaking computer science work, where codebreakers cracked Enigma, the German cipher machine used by the Nazis during World War II. The summit, Aguirre said, offers an opportunity for progress while also highlighting the benefits of AI.
This is a global endeavor, and every concerned country must have a seat at the upcoming U.K. summit, Aguirre said, adding that China should be included.
The ongoing arms race risks global catastrophe and limits AI’s enormous potential for good, he argued, and our shared future must not be jeopardized by rivalry among a small number of companies. China has a security interest in reducing risks from non-state actors, he added, and should recognize that it, too, is endangered by a suicidal AI arms race that everyone would lose.
A spokesperson for the United Kingdom’s Department for Science, Innovation and Technology said the country wants to bring together key nations, leading companies, researchers and civil society to drive targeted, rapid international action on the safe and responsible development of the technology.
Asked who was invited, the spokesperson responded, “We’ve always said AI demands a collaborative approach, and we will be collaborating with international governments to ensure we can agree on safety measures that are needed to address the most significant risks emerging from the latest developments in AI technologies.” As is customary for summits of this type, the spokesperson added, the government would not speculate about who might be invited.
The U.K. government believes the summit will complement other forums working on responses to the challenges posed by AI, including the OECD, the Global Partnership on AI, the Council of Europe, the U.N., the G7 and the G20.
Liu Pengyu, a spokesman for the Chinese Embassy in Washington, D.C., made clear that China wants a role in shaping the technology’s future, telling Newsweek that, as a general principle, China believes the development of AI benefits all countries and that all countries should be able to take part extensively in its global governance.
In July, leading AI companies announced “voluntary commitments” brokered by the Biden administration, a sign that the White House favored a light-touch approach to regulation while seeking to ensure the technology is developed safely and responsibly.
The G7, comprising the United States, Canada, France, Germany, Italy, Japan and the United Kingdom, along with the European Union, declared in its Hiroshima Leaders’ Communiqué that it would work to advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to realize a shared vision and goal of trustworthy AI, in keeping with shared democratic values. Discussions on how to proceed with AI are ongoing, but some G7 members are reportedly reluctant to bring up China until the group reaches a consensus.