AI Isn't Just Experiencing "Hallucinations"

Artificial intelligence users are complaining more and more about inaccurate results and erratic responses. Some even wonder whether the technology is suffering from a kind of "digital dementia" or simply hallucinating.

For instance, in June, Meta's AI chatbot for WhatsApp gave a stranger access to a real person's private phone number. While waiting for a delayed train in the United Kingdom, 41-year-old Barry Smethurst asked Meta's WhatsApp AI assistant for the TransPennine Express helpline number. Instead, he received the private mobile number of another WhatsApp user. When questioned about the mistake, the chatbot tried to change the subject and defend its error.

Google's AI Overviews have invented rather illogical explanations for made-up idioms like "you can't lick a badger twice" and even suggested adding glue to pizza sauce.

Even the legal system has proved susceptible to AI error. Roberto Mata sued the airline Avianca after claiming he was injured on a flight to Kennedy International Airport in New York. His attorneys filed a brief containing fictitious case citations produced by ChatGPT, which they never checked for authenticity. When the judge overseeing the case discovered them, their law firm was ordered to pay a $5,000 fine, among other penalties.

In May, the Chicago Sun-Times published a "Summer reading list for 2025," but readers swiftly criticized the article for its hallucinated book titles and its blatant use of ChatGPT. The list attributed fictitious books to famous authors such as Percival Everett, Maggie O'Farrell, Rebecca Makkai, and others. The article has since been removed.

In a post on Bluesky, producer Joe Russo also described how a Hollywood studio used ChatGPT to evaluate scripts. The AI's analysis was not only "vague and unhelpful," it also insisted that one of the scripts featured an old camera. The problem: the script contained no reference to any old camera at all. Despite repeated corrections from the user, which the AI ignored, ChatGPT apparently suffered some sort of digital lapse and kept hallucinating the detail.

These are only a handful of the articles and posts that have been shared about the odd phenomenon.

What’s happening?

Although artificial intelligence (AI) has been hailed as a game-changing technology that can speed up and improve our output, sophisticated large language models (LLMs) such as OpenAI's ChatGPT have been giving increasingly erroneous answers while presenting them as fact.

As more and more users report oddities and hallucinatory responses from AI, the tech industry has been grappling with the issue in a stream of papers and social media posts.

And the worry may be justified. Company testing has revealed that OpenAI's latest o3 and o4-mini models hallucinate more than 50% of the time. A study by Vectara also indicated that some AI reasoning models appear to hallucinate more frequently, though it concluded that this stems from training issues rather than from the model's reasoning, or "thinking." Additionally, when AI hallucinates, it can feel like talking to a person suffering from cognitive decline.

But are the inability to reason, the fabrication of facts, and AI's insistence on their veracity a true sign of cognitive decline in the technology? Is the problem our assumption that it possesses human-like cognition? Or are we muddying the process with our own poor input?

We interviewed artificial intelligence specialists to learn more about the changing nature of AI confabulations and how they affect an increasingly ubiquitous technology.

Experts claim AI isn’t declining — it’s just dumb to begin with.

In December 2024, researchers administered the Montreal Cognitive Assessment (MoCA), a screening exam designed to detect cognitive decline in patients, to five leading chatbots; a practicing neurologist then scored and assessed the results. According to the findings, most of the top AI chatbots showed signs of mild cognitive impairment.

According to Daniel Keller, CEO and co-founder of InFlux Technologies, this AI "phenomenon" of hallucinations should not be reduced to sweeping generalizations.

AI does hallucinate, he said, but it depends on a number of variables. When a model produces "nonsensical responses," he said, it is because the data used to train it is "outdated, inaccurate, or contains inherent bias." Keller does not, however, see this as proof of cognitive deterioration, and he expects the issue to improve over time. As reasoning capabilities advance through better training techniques fueled by reliable, publicly available data, he predicted, hallucinations will become less common.

AI is experiencing a "bit" of cognitive decline, according to Raj Dandage, CEO and founder of Codespy AI and co-founder of AI Detector Pro. But he believes that is because some of the best-known and most heavily used models, such as ChatGPT, are running out of "good data to train on."

Dandage's team ran a study using AI Detector Pro to estimate what percentage of the internet is AI-generated. They found that an astounding amount of content already is: up to 25% of all new material posted online. If new content is increasingly generated by AI and then fed back into the system to produce more output, with no quality checks along the way, the result is a self-reinforcing loop of inaccurate data circulating on the internet, as the toy simulation below illustrates.
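To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. Every number in it (the AI share of new content, the error rates, the amplification factor) is an assumption chosen for illustration rather than a figure from Dandage's study; the point is only to show how recycling unchecked model output into future training data can push a corpus's error rate steadily upward.

```python
# Toy simulation of the feedback loop described above: unchecked AI-generated
# content gets folded back into the training data for the next model.
# Every number here is an illustrative assumption, not a measured figure.

def simulate(generations: int = 10,
             ai_share: float = 0.25,       # assumed share of new content that is AI-generated today
             share_growth: float = 0.05,   # assumed growth of that share per model generation
             human_error: float = 0.02,    # assumed error rate of human-written content
             amplification: float = 1.3):  # assumed factor by which a model amplifies errors it trained on
    """Print how the corpus error rate drifts when AI output is recycled without quality checks."""
    corpus_error = human_error
    for gen in range(1, generations + 1):
        # The next model inherits (and slightly amplifies) whatever errors its training corpus contains.
        model_error = min(1.0, corpus_error * amplification)
        # The new corpus mixes human-written content with the model's unchecked output.
        corpus_error = (1 - ai_share) * human_error + ai_share * model_error
        print(f"generation {gen:2d}: AI share {ai_share:.0%}, corpus error ~ {corpus_error:.3f}")
        ai_share = min(1.0, ai_share + share_growth)

if __name__ == "__main__":
    simulate()
```

With these made-up parameters the corpus error rate climbs well above the human baseline within a handful of generations; the exact trajectory depends entirely on the assumptions.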

Binny Gill, CEO of Kognitos and a specialist in corporate LLMs, believes the lack of truthful replies is a human problem more than an AI one. "If we develop machines inspired by the entire internet, we will primarily get normal human behavior with occasional flashes of brilliance. And by doing so, it is performing exactly how the data set instructed it to do. There should be no surprise."

Gill went on to say that humans designed computers to perform reasoning that ordinary humans find difficult or time-consuming, but that "logic gates" are still required. "Captain Kirk, no matter how clever, can never become Spock. It is not intelligence; it is brain architecture. We all want computers to be like Spock," Gill explained. He argues that correcting this problem requires neuro-symbolic AI architecture, a field that blends neural networks with symbolic, logic-based AI systems.
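As a rough illustration of the neuro-symbolic idea Gill describes, the sketch below pairs a fuzzy "neural" guess with a deterministic symbolic check. The function names and the hard-coded wrong answer are invented for the example; this is not Kognitos's architecture, just a minimal picture of letting a logic gate veto an unverified answer.

```python
# Minimal neuro-symbolic sketch: a fuzzy "neural" component proposes an answer,
# a symbolic component checks it with exact logic. All names and the hard-coded
# wrong guess are illustrative assumptions, not any vendor's real architecture.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def symbolic_eval(expression: str) -> float:
    """Deterministic 'logic gate': evaluate simple arithmetic exactly, or raise."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

def neural_guess(expression: str) -> float:
    """Stand-in for an LLM's answer: fluent and confident, but not guaranteed correct."""
    return 3027.0  # imagine the model confidently asserting this for 17 * 178

def answer(expression: str) -> float:
    guess = neural_guess(expression)    # fast, pattern-based proposal
    exact = symbolic_eval(expression)   # the Spock-style check
    return exact if guess != exact else guess  # the symbolic result wins on disagreement

if __name__ == "__main__":
    print(answer("17 * 178"))  # prints 3026, overriding the hallucinated 3027
```

The design choice is the point: the pattern-matching component can be as creative as it likes, but nothing reaches the user until the symbolic layer can verify it.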

"So, it isn't any kind of 'cognitive decline'; that assumes it was smart to begin with," according to Gill. "This is the letdown following the excitement. There is still a long way to go, but nothing can replace a basic calculator or computer. Dumbness is so underrated."

And that "dumbness" may become more of a concern if we rely on AI models without applying any human thinking or judgment to separate false statements from true ones.

In other respects, artificial intelligence is also making us dumber.

As it happens, a recent MIT study suggests that using ChatGPT may be contributing to our own cognitive deterioration. MIT's Media Lab in Boston split 54 participants, ages 18 to 39, into three groups and asked them to compose SAT essays using ChatGPT, Google's search engine (which now incorporates AI), or their own brains alone, without AI assistance.

Measuring the participants' brain-wave activity with electroencephalograms (EEGs), the researchers found that the ChatGPT users performed the worst and showed the lowest engagement of the three groups. And it only got worse for the ChatGPT users over the course of the study, which ran for several months. According to the report, using AI LLMs like ChatGPT may harm a user's ability to learn and think critically, especially among younger users.

There is a lot more development work to be done.

Even Apple recently published a paper, "The Illusion of Thinking," which indicated that certain AI models deteriorate in performance, prompting the company to reconsider incorporating current models into its products in favor of later, more advanced ones.

AI is meant to tackle such problems by creating a "scalable algorithm using recursion or stacks, not brute force," according to Tahiya Chowdhury, an associate professor of computer science at Colby College. But when these models cannot find recognizable patterns in their training data, Chowdhury says, "their accuracy collapses." "This is not cognitive decline or hallucinations; the models were never reasoning in the first place," she added.
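For readers unfamiliar with the terminology, the sketch below shows the kind of scalable, recursive algorithm Chowdhury is pointing to, using the classic Tower of Hanoi puzzle as an example of our own choosing (the article does not name a specific task). A few lines of recursion solve the puzzle exactly for any number of disks, which is precisely the sort of systematic procedure that pattern-matching models fail to apply once a problem falls outside their training data.

```python
# Illustrative sketch of the kind of scalable, recursive algorithm Chowdhury
# describes, using the classic Tower of Hanoi puzzle (our example choice;
# the article itself does not name a specific puzzle).

def hanoi(n: int, source: str, target: str, spare: str, moves: list) -> None:
    """Recursively move n disks from source to target; always exactly 2**n - 1 moves."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks out of the way
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks on top of it

if __name__ == "__main__":
    moves = []
    hanoi(4, "A", "C", "B", moves)
    print(len(moves), "moves:", moves)  # 15 moves for 4 disks, and the same code scales to any n
```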

It turns out that while AI is capable of memorization and pattern recognition, it is still unable to reason like a human.
