On Monday morning, more than 200 prominent scientists and politicians, including ten Nobel laureates and leading AI researchers, issued an urgent appeal for legally binding international action to prevent dangerous applications of AI.
The declaration, known as the Global Call for AI Red Lines, warns that the “current trajectory of AI presents unprecedented dangers” and contends that “an international agreement on clear and verifiable red lines is necessary.” Given the speed at which AI capabilities are advancing, the open letter calls on governments to put such an agreement in place by the end of 2026.
Nobel Peace Prize laureate Maria Ressa unveiled the letter in her opening remarks Monday morning at the United Nations General Assembly’s High-Level Week, calling on countries to unite to “prevent universally unacceptable risks” from AI and to “define what AI should never be allowed to do.”
Signatories include Nobel laureates in chemistry, economics, peace, and physics; notable authors such as Stephen Fry and Yuval Noah Harari; and former heads of state, among them former Irish President Mary Robinson and former Colombian President Juan Manuel Santos, who won the 2016 Nobel Peace Prize.
The open letter was also signed by two of the three so-called “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio, both recipients of the Turing Award, widely regarded as the Nobel Prize of computer science. Two years ago, Hinton left a high-profile job at Google in order to speak out about the risks of uncontrolled AI development.
Signatories hail from dozens of countries, including the leading AI powers China and the United States.
“For thousands of years, humans have learned, sometimes the hard way, that powerful technologies can have dangerous as well as beneficial consequences,” Harari said. “Humans must agree on clear red lines for AI before the technology reshapes society beyond our comprehension and destroys the foundations of our humanity.”
The open letter arrives as AI comes under growing scrutiny. In the past week alone, AI has drawn global attention for its potential to spread false information and threaten our collective sense of reality, for its use in mass surveillance, and for its suspected role in a teenager’s suicide.
But the statement cautions that today’s AI harms may soon be eclipsed by far more catastrophic and widespread effects. It cites recent expert warnings that AI could soon contribute to mass unemployment, engineered pandemics, and systematic human rights abuses.
The letter stops short of prescribing specific red lines, stating instead that scientists and government officials must negotiate where those lines fall in order to reach global consensus. It does, however, suggest possible prohibitions, such as bans on lethal autonomous weapons, on AI systems that autonomously replicate themselves, and on the use of AI in nuclear warfare.
“We should act in our vital common interest to stop AI from causing significant and possibly irreversible harm to humanity,” said Ahmet Üzümcü, former director-general of the Organization for the Prohibition of Chemical Weapons (OPCW), which won the 2013 Nobel Peace Prize under his leadership.
As evidence of the effort’s viability, the statement points to international agreements that have drawn red lines in other high-risk domains, such as the bans on ozone-depleting chlorofluorocarbons and on biological weapons.
Concerns about the existential risks posed by AI are not new. In March 2023, more than 1,000 technology executives and researchers, including Elon Musk, called for a pause on the development of powerful AI systems. Two months later, the leaders of prominent AI labs, including Sam Altman of OpenAI, Dario Amodei of Anthropic, and Demis Hassabis of Google DeepMind, signed a one-sentence declaration urging that the existential threat AI poses to humanity be taken as seriously as those posed by pandemics and nuclear weapons.
The most recent letter was signed by Ian Goodfellow, a scientist at DeepMind, and Wojciech Zaremba, a co-founder of OpenAI, but not by Altman, Amodei, or Hassabis.
Leading American AI firms have repeatedly signaled in recent years that they intend to build safe and secure AI systems: they signed a safety-focused agreement with the White House in July 2023, for instance, and joined the Frontier AI Safety Commitments at the Seoul AI Summit in May 2024. Yet recent studies find that these companies honor only about half of those voluntary pledges, and world leaders have accused them of putting profit and technological advancement ahead of society’s wellbeing.
Companies such as OpenAI and Anthropic also voluntarily allow the UK’s AI Security Institute and the Center for AI Standards and Innovation, a government body focused on American AI initiatives, to test and evaluate AI models for safety before they are released to the public. Several commentators, however, have questioned the effectiveness and limits of this kind of voluntary cooperation.
While it echoes earlier efforts, Monday’s open letter goes a step further by advocating for legally binding restrictions. It is also the first such letter to include Nobel laureates from a range of scientific fields, among them biochemist Jennifer Doudna, economist Daron Acemoglu, and physicist Giorgio Parisi.
The letter’s release coincided with the start of the U.N. General Assembly’s High-Level Week, during which heads of state and government gather in New York City to discuss and set policy goals for the year ahead. On Thursday, Spanish Prime Minister Pedro Sánchez and U.N. Secretary-General António Guterres will headline the inauguration of the U.N.’s first diplomatic body on AI.
More than 60 civil society groups from around the world, including the Beijing Institute of AI Safety and Governance and the UK’s Demos think tank, also endorsed the letter.
The Global Call for AI Red Lines is organized by three nonprofit groups: the French Center for AI Safety, The Future Society, and the University of California, Berkeley’s Center for Human-Compatible AI.
