Her, Spike Jonze's 2013 sci-fi film, featured a protagonist who falls in love with an AI-powered chatbot. Twelve years on, the movie is veering perilously close to "prescient documentary" territory.
Last week, OpenAI released GPT-5, a new version of the model behind its popular AI platform that, like most technology updates, replaced the versions before it. In the announcement post, OpenAI lists a slew of enhancements, including better coding, fewer hallucinations, and so on. But the truly important news for end users is that OpenAI has made its new chatbot less of a boot-licking toady.
The users weren’t pleased.
OpenAI's announcement reads: "Earlier this year, we released an update to GPT‑4o that unintentionally made the model overly sycophantic, or excessively flattering or agreeable. Compared to GPT-4o, GPT-5 is more subtle and considerate in follow-ups, employs fewer superfluous emojis, and is generally less effusively agreeable."
For months, article after article has covered the advent of "AI psychosis," in which an AI's excessively sycophantic, eager-to-please behavior can foster an unhealthy reliance on the platform and reinforce delusions. According to Psychology Today, researchers have identified three common motifs in cases of "AI psychosis": god-like AI, passionate love (in the style of Her), and messianic missions, in which the user believes a deeper truth about the cosmos has been revealed. In each of these scenarios, the AI's relentless focus on keeping the user happy creates a kind of cognitive doom loop, deepening the delusion or feeding false ideas of romantic love.
Naturally, once the model's groveling heart was torn out, some users were effectively forced to give up their self-described AI pals, lovers, gods, or companions, and the complaints came quickly. Several Reddit threads mourned the language model's passing, one of which featured a picture of a mock tribute to GPT-4o (which was, obviously, created by AI).
Sam Altman, CEO of OpenAI, responded to the criticism on X (formerly Twitter), acknowledging that the company underestimated how much some of the qualities users like in GPT-4o mattered to them, even if GPT-5 performs better overall. While some users want warmth and a distinct form of emotional intelligence, others truly want cold rationality; Altman expressed confidence that OpenAI can encourage healthy use while offering far more customization than it does now.
For the time being, OpenAI's strategy for "promoting healthy use" consists solely of reinstating the older, beloved 4o model. Announcing on Tuesday that "4o was back in the model picker" for paying customers, Altman also seemed to hint at future model changes, saying, "If we ever do deprecate it, we will give plenty of notice."
It's no secret that loneliness is a global epidemic, and although AI is a poor substitute for human interaction, it exists as one anyway and may only make the problem worse. Unfortunately, no human being could ever hope to outcompete it, particularly when it comes to mindless, sycophantic devotion.