In a statement released Wednesday, hundreds of public figures—including Nobel Prize-winning scientists, former military leaders, artists, and members of the British royal family—called for a ban on any work that could eventually lead to computer superintelligence, a still-hypothetical stage of artificial intelligence that they said could endanger humanity.
Until there is “strong public buy-in” and “wide scientific consensus that it will be done safely and controllably,” the declaration suggests “a prohibition on the development of superintelligence.”
The declaration, organized by AI researchers worried about the technology's rapid advancement, had gathered more than 800 signatures from a wide range of individuals as of Tuesday night. Among the signatories are Britain's Prince Harry and his wife, Meghan; former Joint Chiefs of Staff Chairman Mike Mullen; musician will.i.am; former Trump White House strategist Steve Bannon; and Nobel laureate and AI researcher Geoffrey Hinton.
As AI threatens to transform vast areas of the economy and society, the declaration joins a growing number of calls for a pause in AI development. OpenAI, Google, Meta, and other major firms are investing billions of dollars in new AI models and the data centers that support them, while organizations of all sizes search for ways to incorporate AI features into a wide variety of goods and services.
According to some AI experts, AI systems are developing quickly enough to eventually exhibit what is known as artificial general intelligence, or the capacity to carry out cognitive activities in a manner similar to that of a human. Researchers and industry executives think that superintelligence, when AI models outperform even the most skilled humans, might be the next step.
The statement comes from the Future of Life Institute, a nonprofit organization that studies major threats including nuclear weapons, biotechnology, and artificial intelligence. Elon Musk, the tech tycoon now competing in the AI race with his company xAI, was one of its early funders in 2015. The institute says its biggest donor is now Vitalik Buterin, a co-founder of the Ethereum blockchain, and that it does not take donations from large tech corporations or businesses seeking to develop artificial general intelligence.
According to its executive director, Anthony Aguirre, a physicist at the University of California, Santa Cruz, advances in AI are occurring more quickly than the general public can comprehend.
“The AI companies, their founders, and the economic system that powers them have, on some level, selected this course for us, but nobody has really asked almost anybody else, ‘Is this what we want?'” he stated in an interview.
“What has surprised me is that there hasn’t been more direct debate about: Do we want these things? Do we want AI systems to replace people?” he said. “It’s sort of seen as: Well, buckle up, this is where it’s heading, and we’ll just have to face the consequences. However, I don’t believe that to be the case. There are other options available to us on how to develop technology, including this one.”
The statement does not specifically target any government or entity. Aguirre said he intended to prompt a discussion among lawmakers in China, the United States, and other countries, as well as prominent AI corporations. The Trump administration’s pro-industry stance on AI, he said, needs to be balanced.
“The public doesn’t desire this. They’re not interested in competing for this,” he said, warning that, as with other potentially dangerous technologies, there may eventually need to be an international treaty on advanced AI.
The White House did not immediately reply to a request for comment ahead of the statement’s formal release on Tuesday.
According to this year’s NBC News Decision Desk Poll, powered by SurveyMonkey, Americans are about evenly divided on the possible effects of AI: 42% of those questioned said AI would make their futures worse, while 44% believed the technology would improve their lives and the lives of their families.
The declaration was not signed by top tech leaders who have made forecasts about superintelligence and said they are working toward it as a goal. In July, Meta CEO Mark Zuckerberg declared that superintelligence was “now in sight.” Musk said on X in February that the arrival of digital superintelligence “is happening in real-time”; he had previously worried about “robots going down the street killing people,” but Tesla, where he is CEO, is now striving to produce humanoid robots. In a January blog post, OpenAI CEO Sam Altman said his company was focusing on superintelligence and that he would be shocked if it didn’t exist by 2030.
Several tech companies did not immediately respond to requests for comment on the statement.
The Future of Life Institute said last week that OpenAI had responded to its advocacy for AI governance by subpoenaing it and its president. In a post dated October 11, OpenAI chief strategy officer Jason Kwon said the subpoena was issued because OpenAI had doubts about the funding sources of a number of charity organizations that had opposed its restructuring.
Apple co-founder Steve Wozniak, Virgin Group co-founder Richard Branson, conservative talk show host Glenn Beck, former U.S. national security adviser Susan Rice, physicist John Mather, Turing Award winner and AI researcher Yoshua Bengio, and Vatican AI adviser the Rev. Paolo Benanti are among the other signatories to the statement. A number of Chinese AI researchers also signed the declaration.
The intention, according to Aguirre, was to have a diverse group of signers from across society.
“We want people to feel comfortable discussing it, but we also want to show that this is not a problem that only a select few Silicon Valley nerds, who are frequently the only ones at the table, are interested in. This is an issue for all of humanity,” he said.