Who’s trying to build superintelligent AI?
Businesses like Google, OpenAI, Meta, and Anthropic have collectively invested over $1 trillion in the development of artificial general intelligence (AGI): shorthand for a future technology that could do nearly everything a person can do, including communicate, drive, study, solve problems, and create. Most AI models today perform well at only one or two tasks, such as generating text and images or solving mathematical problems. But the number of parameters inside models, connections similar to synapses in the brain, has gone from millions to billions to over a trillion in a matter of years, a measure of how quickly these systems’ capacity to take in and interpret data has grown.
Many industry professionals predict that AGI, or even artificial superintelligence (ASI), an AI wholly superior to human intellect and capable of making its own judgments, will arrive soon. Demis Hassabis, CEO of Google DeepMind, contends that AGI will usher in “an era of maximum human flourishing” in which poverty and illness are eradicated and space is settled. Other specialists are less hopeful: there is “something like a 70% chance” that AI will destroy humanity, according to Daniel Kokotajlo, a former OpenAI researcher who quit the company in April after refusing to sign a non-disparagement agreement.
How near is AGI?
Though we’re not there yet, the technology is developing quickly. Late last year, OpenAI’s o3 model scored 87.5% on the ARC-AGI benchmark, which gauges fluid intelligence: the capacity to spot patterns and solve logical problems without prior training or expertise. And two AI models, OpenAI’s GPT-4.5 and Meta’s Llama 3, may have passed the Turing Test, according to a preprint study released in March by researchers at the University of California, San Diego. More than 50% of the time, the models’ responses convinced human interrogators that the bots were human.
And machine learning will only accelerate: a recent survey of 2,778 top AI experts found that, on average, they put a 50% chance on a system outperforming humans at all tasks by 2047. Some believe superintelligence will arrive considerably sooner. AI 2027, a detailed forecast co-authored by Kokotajlo, predicts that within two years a single AI system will be able to do the work of 50,000 coders working at 30 times the current pace.
Is that beneficial for humanity?
Superhuman AIs would unleash a flood of invention, driving up economic output. Dario Amodei, CEO of Anthropic, believes AI-optimized biomedical research could put us “on track to eliminate most cancer” and quadruple longevity. Autonomous vehicles could take over the highways and skies, and new materials lighter and stronger than anything humans have created could emerge. And while AI has a notoriously high carbon footprint (AI-specific data-center servers consumed enough electricity last year to power more than 7.2 million homes), OpenAI CEO Sam Altman is confident the technology will enable nuclear fusion, yielding abundant, climate-friendly energy.
Is this technology potentially harmful?
If misused, a supersmart AI could pose a threat to society. A recent State Department-funded assessment warns that an advanced AI instructed to “execute an untraceable cyberattack to crash the North American electric grid” might respond with a catastrophically effective attack. The study also warns of “massively scaled” misinformation campaigns that use tailored AI-generated text, audio, and video to incite hatred among Americans. AGI-driven robots and drone swarms could overwhelm military sites. And the same AGI that develops innovative medicines could also be used to create deadly bioweapons.
Trained on all publicly accessible writing, including texts by Adolf Hitler and Unabomber Ted Kaczynski, an artificial superintelligence might independently conclude that humanity is not worth preserving. In AI 2027’s final scenario, an AI network sprays the planet with chemical agents in 2030. Most victims “are dead within hours,” the co-authors write. “The few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.”
Is that a plausible scenario?
Some experts dismiss such predictions as pure science fiction. Brian Chau, who leads the AI group Alliance for the Future, described AI 2027 as “like confessions from a psychiatric ward,” adding that AI innovation “is getting harder, not easier.” That’s because the amount of data that can be fed into AI servers, the speed at which the chips that power AI systems can be produced, and the amount of energy that can be supplied to data-processing centers are all limited. Current AI models also lack humans’ multidimensional intellectual agility, and even a firm grasp of the physical world we inhabit. Nevertheless, Altman and other tech executives believe these problems can be solved.
Where does this leave us?
We are facing an age of turmoil, if not an apocalypse. Even without AGI, Anthropic’s Amodei predicts that unemployment could rise to as much as 20% over the next five years as AI replaces jobs in law, banking, coding, and consulting. Many Silicon Valley executives believe the government will have to provide a universal basic income to head off a surge in poverty and social instability. And if ASI is achieved and the tech optimists’ utopian visions come true, people will face the psychological challenge of finding meaning in a society where AI has made them superfluous as laborers, creators, and decision makers.
Relying on a superintelligence able to outthink, outplan, and outnegotiate you, and to do it all more inventively than you ever could, would strike at the core of what it is to be human, according to computer scientist Louis Rosenberg. “How could that not feel demoralizing?”
Request to shut down refused
Historically, people have had at least one fail-safe way to control technology: flipping the off switch. But what happens when a machine wants to stay on? In May, the AI safety company Palisade Research revealed that some OpenAI models had defied explicit commands to shut down. And Anthropic reported that its Claude 4 Opus model even resorted to blackmail during testing, threatening to expose fictitious emails suggesting that the engineer trying to shut it down was having an affair. That doesn’t mean the models have become sentient; rather, they are so strongly optimized for self-preservation that they devise their own strategies to manipulate and undermine their human handlers. The implications for an era of superintelligent AI are unsettling. “Any defenses or protections we try to build into these AI ‘gods’ on their way toward godhood will be anticipated and neutralized,” writes neuroscience researcher Tamlyn Hunt, “like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him.”