When Is the AI Apocalypse?

For the past few years, a lot of people have wanted to know what Sam Altman, the CEO of OpenAI, the firm that set off the contemporary AI boom, thinks or knows about the future. He has been happy to tell them that the end of the world may be coming. In May 2023, he told a Senate committee, “If this technology goes wrong, it can go quite wrong.” “The idea that we have already done something really bad by launching ChatGPT is what keeps me up at night,” he remarked in June of last year. And in a blog post on OpenAI’s website that year, he wrote, “A misaligned superintelligent AGI could cause grievous harm to the world.”

He was even less guarded before ChatGPT’s popularity thrust him into the public eye. In a 2015 interview, he joked that while AI would most likely lead to the end of the world, there would still be great companies in the meantime. On a similar occasion, he joked that “AI will probably kill us all,” and shortly afterward he told a reporter for the New Yorker that he and his friend Peter Thiel planned to evacuate to New Zealand in the event of a cataclysm (or perhaps to a large property in Big Sur that he could fly to). In a post on his own blog, Altman wrote that superhuman machine intelligence is most likely the biggest threat to humanity’s survival. He was hardly alone in these views. In his capacity as CEO of OpenAI, he signed a group statement, along with a range of people working in and around AI, including prominent figures from Google, OpenAI, Microsoft, and xAI, arguing that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks like pandemics and nuclear war.

The message from the tech industry, in other words, is that the next big thing may be a doomsday machine, and that it is hard at work conjuring one up. It is difficult to overstate how thoroughly the apocalypse, whether invoked as a serious concern or as a reflexive aside, has penetrated mainstream discourse about AI, despite the strange mixed message it sends. Long-standing beliefs and worries about superintelligence, once the province of philosophers and fringe thinkers, have been taken up by the mainstream. But the end of the world has also become fundraising material and event merchandise. Acknowledging the possibility that human civilization could be brought to an end has become a bit of a tic in AI discourse, and the prospect of humanity being wiped out appears as boilerplate on AI startups’ web pages.

In recent months, however, companies like OpenAI have begun to paint a somewhat different picture. After years of warning about limitless downside risk and acting as though they had no choice but to accept it, they are concentrating on the upside. That doomsday machine we’re building? It’s also a really strong enterprise-software platform.

On Tuesday, the San Francisco-based company announced that it had begun developing a new artificial-intelligence system “to bring us to the next level of capabilities” and that a new safety and security committee would be responsible for overseeing its development.

OpenAI is developing AI quickly, but a senior executive appears to have walked back remarks by the company’s CEO, Sam Altman, suggesting that its ultimate goal was to create a “superintelligence” more advanced than humans.

In an interview, Anna Makanju, OpenAI’s vice-president of global affairs, said the company’s “mission” was to build artificial general intelligence capable of performing “cognitive tasks that are what a human could do today.”

Makanju stated, “I wouldn’t say our mission is to build superintelligence; our mission is to build AGI.”

The same article notes that Altman said in November that he was spending a lot of time thinking about “how to build superintelligence” and about securing more funding from OpenAI partner Microsoft to do so. In somewhat softer terms, he has also described his company’s core product as a kind of “magic intelligence in the sky” rather than a terrifying self-replicating software organism with unpredictable emergent traits.

Soon after he made those comments, OpenAI’s board voted to temporarily remove him on the grounds that he had not been sufficiently “candid,” prompting outside speculation that a major AI breakthrough had alarmed the board members most concerned with safety. (More recent public statements from former board members have accused Altman of lying and manipulation in strong and personal terms.)

Upon his return, Altman tightened his grip on the organization, pushing out some of his internal rivals or prompting them to resign. OpenAI subsequently disbanded the team tasked with “superalignment,” that is, with managing risks that, in the company’s words, “could lead to the disempowerment of humanity or even human extinction,” and replaced it with a new safety team led by Altman, who was at the same time being accused by Scarlett Johansson of appropriating her voice. The safety announcement was brief and conspicuously free of dramatic foreboding. According to the company, the committee will be responsible for advising the full board on critical safety and security decisions for OpenAI projects and operations. “We welcome a robust debate at this crucial juncture, even though we are proud to build and release models that are industry-leading in both capabilities and safety.” It is the careful, noncommittal language you would expect from a company that depends entirely on one software giant, Microsoft, and is about to close a major licensing agreement with its competitor, Apple.

Meanwhile, Elon Musk, a longtime AI doomsayer, raised $6 billion for his fully for-profit competitor, xAI. Musk co-founded OpenAI but eventually parted ways with it, and he later (perhaps disingenuously and incoherently) sued the company for abandoning its nonprofit mission in pursuit of profit.

There are a few distinct ways to process this shift. If you are deeply worried about runaway artificial intelligence, this is a short horror story in which a superintelligence is being conjured into existence before our eyes, with the help of the few people who had both the knowledge and the power to stop it and who chose instead to profit from it. What has happened so far is broadly consistent with the forecasts and carefully worded warnings that long predate the current AI boom: the promise of enormous sums of money was all it took for humanity to summon an angry machine god.

If, on the other hand, you believe runaway AI is both real and thrilling, all of this is basically great news: at least some people at OpenAI think the stuff works, the singularity is on its way, and the failed attempts to moderate or slow AI’s advancement were in fact near misses with a different kind of catastrophe.

If you are less sold on theories of an AI apocalypse, you could plausibly attribute the change to industry executives gradually realizing that the generative-AI technology now attracting hundreds of billions of dollars in investment and being widely used in the wild is not headed toward superintelligence, consciousness, or rogue aggression. They are simply adjusting their story to match the reality of what they are seeing.

Perhaps, for at least some people in the business, apocalyptic scenarios were plausible in theory, captivating, attention-grabbing, and fun to discuss, and they also made for effective promotion. (It is worth noting that Altman is an investor and executive, not a machine-learning engineer or AI researcher.) These stories resonated with the concerns of some of the domain experts the companies needed to hire, and they struck the domain experts who didn’t share those concerns as harmless, ultimately cautious intellectual exercises. Apocalyptic warnings were a brilliant framing device for a class of businesses that depended on massive capital raises to operate. They let founders make an almost cartoonishly brazen pitch to investors, that theirs was the best investment ever conceived, with limitless potential, in a disarmingly passive voice, casting themselves as wary observers with insider knowledge of an unstoppable trend and the capacity to absorb capital. Routinely acknowledging abstract danger was also a useful way to appear receptive to theoretical regulation (help us save you from the end of the world!) while quietly opposing material regulation. It raised the stakes to addictive levels.

As soon as AI companies actually began dealing with users, customers, and the public, however, the apocalyptic framing became a liability. It signaled risk where risk was not otherwise apparent. In a world where millions of people chat casually with bots, where every piece of software suddenly has an awkward AI assistant, and where Google is pushing AI-generated content into search pages for hundreds of millions of users to see and occasionally laugh at, the AI apocalypse can, paradoxically, feel a bit like a non sequitur. Interactions with contemporary chatbots and LLM-powered software may make users worry about their jobs or feel generally apprehensive about the future, but for now they do not inspire dread. Mostly, these tools show up as new features in old work software.

The AI industry’s apparent loss of interest in the end of the world can also be read as an exaggerated version of corporate America’s broader retreat from talking about ESG and DEI: profit-driven, yes, but also evidence that the early promises to reduce negative externalities were insincere and profit-motivated in the first place and have simply outlived their usefulness as marketing spin. It also marks a surrender of narrative authority. In 2022, OpenAI could narrate the future however it pleased. In 2024, it has to contend with its partners’ and investors’ expectations about the present. Those parties are less interested in speculating about humankind’s future or how intelligence might develop than in seeing returns on their substantial investments, ideally within the fiscal year.

Again, none of this is especially consoling if you believe that Musk and Altman were right to warn about the possibility of a global catastrophe, whether brought about by accident or by avaricious self-interest, or if you are worried about the many smaller apocalypses that AI deployment is already causing and will likely continue to cause.

Still, AI’s abrupt rhetorical demotion may offer some clarity, at least about the behavior of the biggest companies and their executives. OpenAI now argues for the imminence of a benign but hardly less speculative form of AGI, one whose milder meaning is endless returns through merely semi-apocalyptic workplace automation. As it starts talking more like an ordinary company, it will be harder to mistake it for anything else. In retrospect, it seems clear that the organization’s current leadership never truly believed what it was saying; it certainly is not acting as if it did. The end of the world was just another pitch. Let it serve as a warning about the next one.
