Digital chatbots on smartphones access data and information in online networks.
Image Credit: Narumol - Adobe Stock

Insights into why chatbots hallucinate, spewing false information


At his Substack, AI analyst Gary Marcus offers some thoughts on why chatbots (technically, large language models or LLMs) hallucinate.

He means, for example, the kind of response Pomona College business prof Gary Smith got when he asked ChatGPT if the Russians had sent bears into space. Smith received detailed, entertaining — and wholly fictional — answers, even listing the fictitious bears’ names.

Marcus is no stranger to this issue because a bot claimed in 2023 that he had a pet chicken named Henrietta: “If I did own a pet chicken I rather doubt I would call it Henrietta.”

So why do these programs provide wholly false information?

Because LLMs statistically mimic the language people have used, they often fool people into thinking that they operate like people.

But they don’t operate like people. They don’t, for example, ever fact check (as humans sometimes, when well motivated, do). They mimic the kinds of things people say in various contexts. And that’s essentially all they do…

By sheer dint of crunching unthinkably large amounts of data about words co-occurring together in vast corpora of text, sometimes that works out. Shearer and Spinal Tap co-occur in enough text that the systems get that right. But that sort of statistical approximation lacks reliability. It is often right, but also routinely wrong…

And although I don’t own a pet chicken named Henrietta, another Gary (Oswalt) illustrated a book with Henrietta in the title. In the word schmear that is LLMs, that was perhaps enough to get an LLM to synthesize the bogus sentence with me and Henrietta.

“Why DO large language models hallucinate?,” May 5, 2025
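
To see how pure co-occurrence statistics can yield fluent nonsense, here is a minimal toy sketch (my illustration, not anything from Marcus’s article): a tiny bigram model trained on a made-up corpus. The corpus, the word counts, and the generate function are all invented for the example; real LLMs are vastly more sophisticated. The model picks each next word only by how often words followed one another in its training text, so nothing prevents it from splicing “bears” into a sentence about space.

```python
# Toy illustration only: a bigram "language model" that predicts each next
# word from co-occurrence counts in a tiny made-up corpus. It has no notion
# of facts, so statistically plausible but false sentences come out naturally.
import random
from collections import defaultdict, Counter

corpus = (
    "the soviets sent dogs into space . "
    "the soviets sent monkeys into orbit . "
    "the circus sent bears into the ring ."
)
tokens = corpus.split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def generate(start, length=6, seed=0):
    """Sample a continuation word by word, weighted by bigram counts."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break
        words, weights = zip(*counts.items())
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

# Different seeds give different, equally "confident" continuations.
# Nothing stops the chain from producing "soviets sent bears into space",
# because "sent bears" and "into space" each occur in the training text.
for s in range(5):
    print(generate("soviets", seed=s))
```

Scaled up by many orders of magnitude, with far richer context, that is still prediction from co-occurrence, which is why the “word schmear” can link Marcus to a Henrietta he never owned.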

He says he has been warning about this since 2001. He adds,

One recent study showed rates of hallucinations of between 15% and 60% across various models on a benchmark of 60 questions that were easily verifiable relative to easily found CNN source articles that were directly supplied in the exam. Even the best performance (15% hallucination rate) is, relative to an open-book exam with sources supplied, pathetic. That same study reports that, “According to Deloitte, 77% of businesses who joined the study are concerned about AI hallucinations”.

We should remember this when we hear pundits announce that AI is coming for our jobs. If our jobs consist of making up stuff despite correct answers being readily available, the pundits might have a point. Otherwise…

