
AI-focused tech firms heading downhill

The scientist behind a landmark letter calling for a pause in the development of powerful AI systems says tech executives did not stop their work because they are locked in a “race to the bottom.”

Max Tegmark, co-founder of the Future of Life Institute, organized an open letter in March calling for a six-month pause on the development of massive AI systems.

More than 30,000 people signed the document, including Elon Musk and Apple co-founder Steve Wozniak, but it failed to secure a pause in the development of the most ambitious systems.

Six months later, Tegmark said he had not expected the letter to stop tech companies from working toward AI models more powerful than GPT-4, the large language model behind ChatGPT, because competition has become so intense.

He said many of the corporate leaders he spoke with privately wanted a pause but felt trapped in a race to the bottom against one another. As a result, he added, no company can pause on its own.

The letter urged governments to step in if leading AI companies such as Google, ChatGPT owner OpenAI, and Microsoft could not agree on a moratorium on developing systems more powerful than GPT-4. It also warned of an “out-of-control race” to create minds that no one could “understand, predict, or reliably control.”

Should we create nonhuman minds that could one day outnumber, outsmart, replace, and exist independently of us, the letter asked. Should we risk losing control of our civilization?

Tegmark, a physics professor and AI researcher at the Massachusetts Institute of Technology, said he considered the letter a success.

The letter has had a greater impact than he anticipated, he said, pointing to a political awakening on AI that has led to US Senate hearings with tech executives and the UK government convening a summit on AI safety in November.

Since the letter’s release, Tegmark said, expressing fear about AI has gone from being taboo to a mainstream viewpoint. In May, in the wake of the letter from his think tank, the Centre for AI Safety released a statement, endorsed by hundreds of tech executives and academics, declaring that AI should be treated as a societal risk on a par with pandemics and nuclear weapons.

He said there was a great deal of unspoken anxiety about pushing AI forward at full speed, which people around the world were reluctant to voice for fear of coming across as alarmist Luddites. The letter, he said, gave that concern legitimacy and made it socially acceptable to discuss.

As a result, such concerns are now being voiced by figures like [letter signatory] Yuval Noah Harari, and politicians are starting to ask difficult questions, according to Tegmark, whose think tank studies existential threats and potential benefits of cutting-edge technology.

Concerns about AI development range from the immediate, such as the ability to create deepfake videos and spread misinformation at scale, to the existential risk posed by highly intelligent AIs that evade human control or make irreversible decisions with far-reaching consequences.

Tegmark cautioned against dismissing the emergence of digital “god-like general intelligence” as a distant, long-term threat, citing AI experts who believe it could arrive within the next few years.

The Swedish-American scientist welcomed the upcoming UK AI safety summit, which will take place at Bletchley Park in November. His think tank has suggested the summit should focus on three outcomes: establishing a shared understanding of the severity of the risks posed by AI, acknowledging that a unified global response is needed, and accepting the need for urgent government intervention.

He added that development should be paused until globally agreed safety standards are met: building models more powerful than those that exist today must be put on hold until they can satisfy agreed-upon safety requirements. Agreeing on what those standards are, he said, would naturally bring about the pause.

Tegmark also urged governments to act on open-source AI models that the general public can access and modify. One UK expert warned that the recent release of the open-source Llama 2 large language model by Mark Zuckerberg’s Meta was akin to “giving people a template to build a nuclear bomb.”

Tegmark said that dangerous technology, whether it be bioweapons or software, shouldn’t be open source.
