Sam Altman believes that next year, AI will have “novel insights.”

In a new essay titled “The Gentle Singularity,” which was published on Tuesday, OpenAI CEO Sam Altman offered his most recent predictions on how artificial intelligence will alter human life in the coming fifteen years.

The essay exemplifies Altman’s brand of futurism: hyping up the promise of artificial general intelligence (AGI) while downplaying its arrival, and claiming that his company is very close to achieving it. The OpenAI CEO regularly publishes pieces like this one, sketching a future in which AGI upends our modern conceptions of work, energy, and the social contract. But Altman’s essays also tend to hint at whatever OpenAI is working on next.

In the essay, Altman claimed that the world will “probably see the arrival of [AI] systems that can figure out novel insights” in 2026. While that phrasing is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to generate novel, interesting ideas about the world.

When announcing OpenAI’s o3 and o4-mini AI reasoning models in April, co-founder and president Greg Brockman said they were the first models that scientists had used to generate new, useful ideas.

Altman’s blog post suggests that OpenAI itself may step up its efforts to build AI capable of generating novel insights in the coming year. The company would hardly be alone in that pursuit: several of OpenAI’s competitors have shifted their focus toward AI models that can help scientists form new hypotheses, and thus make new discoveries about the world.

In May, Google published a paper on AlphaEvolve, an AI coding agent that the company says has generated novel approaches to complex math problems. FutureHouse, a startup backed by former Google CEO Eric Schmidt, claims its AI agent tool has the potential to produce a genuine scientific breakthrough. Also in May, Anthropic launched an initiative to fund scientific research.

If successful, these companies could automate a crucial step in the scientific process, potentially breaking into massive industries such as materials science, drug development, and other science-based fields.

This wouldn’t be the first time Altman has telegraphed OpenAI’s plans in a blog post. In a January post, he predicted that 2025 would be the year of AI agents; his company subsequently released its first three AI agents: Codex, Deep Research, and Operator.

But getting AI systems to produce novel insights may prove harder than making them agentic. The broader scientific community remains skeptical that AI can generate genuinely original findings.

Earlier this year, Thomas Wolf, Hugging Face’s Chief Science Officer, argued in an essay that modern AI systems cannot ask great questions, an ability essential to any significant scientific breakthrough. Today’s AI models are also unable to produce novel theories, as former OpenAI research lead Kenneth Stanley recently said.

Stanley is now building a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory dedicated to improving the hypotheses AI models generate. Stanley says this is a difficult problem because it involves giving AI models a sense of what is interesting and novel.

Whether OpenAI can actually build an AI model that produces novel insights remains to be seen. But if Altman’s essays share a recurring theme, it is this: they offer a sneak peek at where OpenAI is probably headed next.