Experts in AI warn about the “risk of extinction” in a unified statement

Leading AI experts from around the world, many of whom have recently raised concerns about the existential dangers posed by their own research, issued a stern statement on Tuesday warning of a “risk of extinction” from advanced AI if its development is not properly controlled.

According to its authors, the goal of the joint declaration, which was signed by hundreds of professionals including the CEOs of OpenAI, DeepMind, and Anthropic, is to remove barriers to open discussion of the catastrophic threats that AI poses. It comes at a time when worries about AI’s effects on society are growing, even as businesses and governments push for significant advancements in the technology.

According to a statement released by the Center for AI Safety, AI specialists, journalists, policymakers, and the general public are increasingly debating a wide range of significant and pressing concerns about AI. Even so, voicing worries about some of the most serious threats associated with advanced AI can be challenging. The succinct statement below was written to overcome this barrier and open up the conversation.

Distinguished leaders are aware of issues

Among the signatories are some of the most important leaders in the AI sector, including Dario Amodei, CEO of Anthropic, Demis Hassabis, CEO of Google DeepMind, and Sam Altman, CEO of OpenAI. These businesses are widely regarded as pioneers in AI research and development, so it’s noteworthy that their CEOs are publicly acknowledging the dangers.

Yoshua Bengio, a pioneer in deep learning, Ya-Qin Zhang, a distinguished scientist and corporate vice president at Microsoft, and Geoffrey Hinton, known as the “godfather of deep learning,” who recently left his position at Google to “speak more freely” about the existential threat posed by AI, are among the notable researchers who have also signed the declaration.

Since his departure from Google last month, Hinton’s changing opinions about the capabilities of the computer systems he has devoted his life to studying have come to light. At 75 years old, the well-known academic has stated a wish to debate the possible risks posed by AI openly, without being constrained by business ties.

Call to action

In March, dozens of researchers signed an open letter calling for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4. The joint declaration builds on that campaign. Signatories of the “pause” letter included Elon Musk, Steve Wozniak, Bengio, and Gary Marcus.

Despite these warnings, there is still little agreement among business executives and legislators on how best to regulate and responsibly develop AI. Tech industry heavyweights, including Altman, Amodei, and Hassabis, met with President Biden and Vice President Harris earlier this month to discuss prospective regulation. In recent Senate testimony, Altman argued for government action, highlighting the gravity of the dangers posed by cutting-edge AI systems and the need for regulation to address possible harms.

OpenAI’s leaders recently presented a number of suggestions for effectively governing AI systems in a blog post. Among other things, they recommended greater cooperation among top AI researchers, more in-depth technical study of large language models (LLMs), and the creation of a global organization for AI safety. This declaration is another call to action, urging the larger community to have meaningful discussions about the future of AI and its potential social effects.