On Monday, OpenAI CEO Sam Altman published a new personal blog post titled “The Intelligence Age,” detailing his vision for an AI-driven future of technological advancement and worldwide prosperity. In the essay, Altman describes how artificial intelligence has accelerated human progress and speculates that superintelligent AI may become a reality within the next ten years.
“In a few thousand days (!), we might develop superintelligence; it might take longer, but I’m sure we’ll get there,” he wrote.
OpenAI’s current objective is to develop artificial general intelligence, or AGI, a term for hypothetical technology that could perform many tasks with human-level intelligence without specialized training. Superintelligence, by contrast, is a level of machine intelligence beyond AGI, one that could potentially outperform humans at any intellectual task to an unimaginable degree.
The machine-learning community has been interested in superintelligence, also known as “ASI” for “artificial superintelligence,” for years, particularly since controversial philosopher Nick Bostrom published Superintelligence: Paths, Dangers, Strategies in 2014. OpenAI co-founder and former Chief Scientist Ilya Sutskever left the company in June to start Safe Superintelligence, and Altman himself has been discussing the development of superintelligence since at least last year.
What exactly does “a few thousand days” mean, then? It’s impossible to say for sure. Though it sounds like he believes it could happen within a decade, Altman most likely chose a broad estimate because he is unsure of the precise date of ASI’s arrival. By way of comparison, 2,000 days equate to approximately 5.5 years, 3,000 days to 8.2 years, and 4,000 days to nearly 11 years.
Although no one can accurately predict the future, and Altman’s vagueness here invites criticism, his prediction is hard to dismiss outright. After all, as CEO of OpenAI, he is probably aware of upcoming AI research techniques that the general public is not. So even when presented as a broad time frame, the claim originates from a notable source in the field of artificial intelligence, albeit one with a strong stake in ensuring that advancements in AI do not come to a standstill.
Altman’s enthusiasm and optimism are not shared by everyone. “I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, garner [sic] headlines, and distract from the real work going on in computing,” wrote computer scientist and frequent critic of AI Grady Booch on X, citing Altman’s “few thousand days” prediction.
In spite of the criticism, it’s noteworthy when the CEO of what is currently likely the leading AI company in the market makes a broad projection about future capabilities, even in the context of a constant search for funding. Building the infrastructure needed to support AI services is foremost on the minds of many tech CEOs these days.
“We need to drive down the cost of compute and make it abundant (which requires lots of energy and chips) if we want to put AI into the hands of as many people as possible,” Altman writes in his essay. If we don’t build enough infrastructure, he warns, AI will become a scarce resource that sparks conflicts and serves primarily as a tool for the wealthy.
Altman’s concept for “The Intelligence Age”
Elsewhere in the essay, Altman describes our current period as the start of “The Intelligence Age,” the next revolutionary technological era in human history, following the Stone Age, the Agricultural Age, and the Industrial Age. He attributes the start of this new era to the success of deep-learning algorithms, asking, “How did we get to the doorstep of the next leap in prosperity?” His answer, in a nutshell: deep learning was effective.
According to the head of OpenAI, AI assistants will eventually become more powerful and form “personal AI teams” that will enable people to achieve nearly anything they can dream of. He predicts that AI will make possible advances in software development, healthcare, education, and other industries.
Even though Altman is aware of possible drawbacks and labor market disruptions, he is nevertheless positive about AI’s overall social impact. He writes, “Prosperity would significantly improve the lives of people around the world, but it doesn’t necessarily make people happy—there are plenty of miserable rich people.”
Altman made no specific mention of the sci-fi risks posed by AI, despite the fact that regulations like SB-1047 governing AI are currently a hot topic. Some observers on X found it noteworthy that Altman no longer even pays lip service to existential risk concerns; the only drawbacks he considers are labor market adjustment issues.
Though optimistic about AI’s potential, Altman also cautions, albeit hazily. “We must act with conviction but with caution,” he writes. The beginning of the Intelligence Age, he continues, is a historic event with incredibly difficult and consequential challenges. Though the story won’t be entirely positive, the potential is so great that we should learn how to manage the risks in front of us for the benefit of both the future and ourselves.
Apart from the disruptions to the labor market, Altman does not specify how the Intelligence Age will fall short of being entirely beneficial. However, he ends by drawing an analogy to an obsolete profession that vanished as a result of technological advancement.
“A few hundred years ago, many of the jobs we do now would have seemed like pointless wastes of time, but no one is looking back and wishing they were a lamplighter,” he wrote. If a lamplighter could see the world as it is today, Altman adds, the prosperity surrounding him would seem unbelievable. And if we could fast-forward a hundred years from today, the prosperity surrounding us would seem just as unbelievable.