According to Sam Altman, OpenAI “totally screwed up” with the release of GPT-5. Altman then went on to utter the B-word over a meal with reporters. “When bubbles happen, smart people get overexcited about a kernel of truth,” The Verge reported the OpenAI CEO as saying. Then came the widely circulated MIT study, which put a number to what so many people appear to be feeling: a staggering 95% of generative AI experiments at corporations fail.
A tech sell-off followed, with spooked investors wiping $1 trillion off the value of the S&P 500. It was a warning sign that the AI boom was turning into dot-com bubble 2.0, especially since that index is now dominated by tech firms that have largely become AI stocks. Concerns about the AI trade aren’t the only thing moving markets, though: the S&P 500 snapped a five-day losing streak on Friday following Jerome Powell’s quasi-dovish remarks at Jackson Hole, Wyoming. The Fed chair’s suggestion of openness toward a September rate cut sent markets into rally mode.
Gary Marcus has been warning about the limitations of large language models (LLMs) since 2019, and about a possible bubble and a troubled economy since 2023, so his statements carry particular weight. A cognitive scientist turned seasoned AI researcher, he has been involved in machine learning since he founded Geometric Intelligence in 2015. After Uber acquired that company in 2016, Marcus left to work at other AI firms and to publicly criticize what he sees as stagnating progress in the field.
In an interview, Marcus said he does not consider himself a “Cassandra” and is not trying to be one. Cassandra, the heroine of Greek tragedy, made prophecies that came true, but no one believed her until it was too late. “As a realist, I consider myself to have anticipated the issues and been right about them,” he said.
Above all, Marcus blames GPT-5 for the market swoon. While he acknowledged that it’s not a failure, he described it as “underwhelming” and a “disappointment” that has “really woken a lot of people up.” He went on to say, “You know, GPT-5 was sold, basically, as AGI, and it just isn’t,” referring to artificial general intelligence, a hypothetical AI capable of reasoning much as humans do. “The model isn’t bad,” he remarked, but it’s not the quantum leap many people had been led to expect.
Marcus said this shouldn’t come as a surprise to anyone paying attention, given that he declared back in 2022 that “deep learning is hitting a wall.” He has been freely speculating on his Substack about when the generative AI bubble will pop. In an interview, he said “crowd psychology” is undoubtedly at play, and that he thinks of the John Maynard Keynes line, “The market can stay irrational longer than you can stay solvent,” or of Looney Tunes’ Wile E. Coyote chasing the Road Runner off a cliff and hanging in midair before plummeting to earth.
“That’s how I feel,” Marcus says. “We have gone off the cliff. There is no logic to this. And in recent days, we’ve seen some indications that people are starting to pay attention.”
Mounting warning signs
Torsten Slok, chief economist of Apollo Global Management and a well-known and influential figure on Wall Street, made a startling observation in July, though he refrained from making a bubble call. He wrote that the difference between the AI bubble today and the IT bubble of the 1990s is that the top 10 companies in the S&P 500 are more overvalued now than they were then. He also cautioned that the startling market capitalizations and forward P/E ratios of companies like Nvidia, Microsoft, Apple, and Meta had “become detached from their earnings.”
While hardly the only shift in the weeks that followed, the disappointment of GPT-5 was a significant one. Another red flag is the enormous spending on data centers to meet potential future demand for AI applications. Slok has examined this as well, finding that data center investment contributed as much to GDP growth as consumer spending during the first half of 2025. That is significant because consumer spending accounts for about 70% of GDP. Christopher Mims of the Wall Street Journal had made a similar calculation weeks earlier. And lastly, in a much-discussed August 19 New York Times opinion piece, former Google CEO Eric Schmidt co-wrote that “it is uncertain how soon artificial general intelligence can be achieved.”
Political scientist Henry Farrell, who argued in the Financial Times in January that Schmidt had a major influence on the “New Washington Consensus,” which rested in part on the idea that artificial intelligence was “just around the corner,” called this a major about-face. On his Substack, Farrell wrote that Schmidt’s op-ed showed that his earlier set of assumptions was “visibly crumbling away,” while acknowledging that he had been relying on informal conversations with people he knew at the nexus of D.C. foreign policy and tech policy. Farrell titled that piece “The Twilight of Tech Unilateralism.” “A lot of the reasoning behind this consensus breaks down if the AGI bet is a bad one,” he concluded. And Eric Schmidt appears to be coming to that conclusion.
In the summer of 2025, the mood is finally shifting toward growing pushback against AI. In a May Brookings article, Darrell West cautioned that the tide of scientific and public opinion would soon turn against AI’s masters of the universe. Fast Company soon predicted a summer full of “AI slop.” By the beginning of August, Axios had found that the term “clunker” was being used widely to describe AI failures, especially in poor customer service.
History teaches us that short-term hardship leads to long-term benefit
John Thornhill of the Financial Times offered some perspective on the bubble question, advising readers to brace for a fall while also preparing for a future “golden age” of AI. He focuses on the data center buildout, which will cost Big Tech $750 billion across 2024 and 2025 as part of a worldwide deployment projected to reach $3 trillion by 2029. Thornhill turns to financial historians for solace and perspective: they have repeatedly shown that this sort of frenetic investment tends to produce bubbles, dramatic crashes, and creative destruction, but that lasting value is eventually realized.
He points out that this pattern was documented by Carlota Perez in her book Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. She saw artificial intelligence (AI) as the fifth technological revolution to follow a pattern that began in the late 18th century and that gave the modern economy, among other things, railroad infrastructure and the personal computer. Each brought a bubble and a crash. Although Thornhill did not mention him in this column, Edward Chancellor traced similar patterns in his seminal work Devil Take the Hindmost, renowned not just for its analysis of bubbles but also for anticipating the dot-com crash before it happened.
In November 2024, Owen Lamont of Acadian Asset Management quoted Chancellor as saying that a critical bubble milestone had been passed, with an abnormally high proportion of market participants saying that prices are too high yet maintaining that they are likely to climb further.
Wall Street is cautious but isn’t calling a bubble
Most Wall Street banks are not calling a bubble. A recent Morgan Stanley report projects that AI will generate $920 billion in annual efficiency gains for the S&P 500. UBS, for its part, pushed back on the warning sounded by the headline-making MIT study. While it cautioned investors that the data center buildout will bring a period of “capex indigestion,” it insisted that AI adoption is growing well beyond projections, pointing to rising revenue from OpenAI’s ChatGPT, Alphabet’s Gemini, and AI-driven CRM systems.
In early August, prior to the release of GPT-5, Bank of America Research released a note describing AI as part of a “sea change” in worker productivity that will continue to provide an “innovation premium” for S&P 500 companies. Savita Subramanian, its head of U.S. equity strategy, essentially argued that AI will accelerate businesses’ efforts to do more with less and to turn people into processes, lessons learned from the inflation wave of the 2020s. In an interview with Fortune, she said, “I don’t think it’s necessarily a bubble in the S&P 500,” though she added, “I think there are other areas where it’s becoming a little bit bubble-like.”
Subramanian cited smaller companies and perhaps private lending as areas “that may have re-rated too aggressively.” She is particularly concerned about firms spending so heavily on data centers, noting that this marks a return to an asset-heavy approach rather than the asset-light one that increasingly distinguishes top performers in the U.S. economy.
“This is new, really,” she remarked. “Tech used to be very asset-light and just spent money on R&D and innovation, and now they’re spending money to build out these data centers,” she said, adding that this might put an end to their high-margin, asset-light existence and essentially turn them into businesses that are more manufacturing-like and asset-intensive than they were before. In her view, that justifies a lower multiple in the stock market. Asked whether this amounts to a bubble, if not a correction, she replied, “It’s starting to happen in places,” and agreed with the analogy to the railroad boom.
The math and the ghost in the machine
Gary Marcus said he is worried about the basic math, pointing out that close to 500 AI unicorns are collectively valued at $2.7 trillion. “Given the amount of revenue coming in, that just doesn’t make sense,” he added. Marcus noted that OpenAI still isn’t profitable despite generating $1 billion in revenue in July, a pace of roughly $12 billion a year. He speculated that OpenAI has around half of the AI market, an estimate that would put industry revenue at about $25 billion annually. That is not insignificant, but it is expensive, and trillions of dollars are [invested].
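To make the back-of-envelope arithmetic explicit, here is a rough sketch; the naive annualization and the 50% market-share figure are assumptions drawn from Marcus’s remarks, not reported numbers.

```python
# Rough sketch of the valuation math Marcus gestures at.
# Assumptions: OpenAI's ~$1B July revenue is annualized naively (x12),
# and OpenAI is taken to hold ~half of the AI market, per his speculation.
openai_july_revenue = 1e9
openai_annual_revenue = openai_july_revenue * 12        # ~$12B per year
industry_annual_revenue = openai_annual_revenue / 0.5   # ~$24-25B per year

unicorn_valuations = 2.7e12   # ~500 AI unicorns, combined valuation
multiple = unicorn_valuations / industry_annual_revenue

print(f"Industry revenue estimate: ${industry_annual_revenue / 1e9:.0f}B per year")
print(f"Unicorn valuations vs. industry revenue: ~{multiple:.0f}x")
```

On those assumptions, the combined unicorn valuations come out to more than 100 times the industry’s estimated annual revenue, which is the mismatch Marcus is pointing to.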
If Marcus is right, why has no one been listening to him all these years? He has been warning about this, too, for years, calling it the “gullibility gap” in his 2019 book Rebooting AI and arguing in The New Yorker in 2012 that deep learning was a false hope. Marcus spent the first 25 years of his career as a cognitive scientist, studying the “anthropomorphization people do.” People end up treating these machines as companions, and because they attribute to them an intelligence and humanness that do not exist, they mistakenly believe the machines are closer to solving these challenges than they actually are. He believes the bubble has ballooned to its current size in large part because of the human urge to project ourselves onto objects, a tendency he spent decades studying as a cognitive scientist.
These machines may appear human, but they do not operate the way you do, Marcus explained, adding: “This entire business has been founded on people not understanding that, assuming that scaling will cure all of this, since they do not truly grasp the problem. It’s almost tragic.”
Subramanian, for her part, believes that “people love this AI technology because it feels like sorcery. It seems a bit mysterious and magical. Although the world hasn’t altered all that much as of yet, I don’t think it should be written off.” She has become rather fond of it herself: “My kids don’t use ChatGPT as much as I do. To be honest, this is sort of fascinating. These days, I use ChatGPT for everything.”






