A digital tablet casting a hologram of a chatbot icon, symbolizing advanced customer service technology.
Image credit: FantasyLand86/Adobe Stock

Why chatbots make so many false statements


At C2C Journal, AI management pro Gleb Lisikh warns in a long-form essay:

Amidst the flurry of new AI models, performance claims, capabilities, market implications and anxiety about what might come next, it is easy to overlook arguably the most important question: what quality of information and visual content are these AI engines actually providing to the user and, from there, the intended audience for whom the content is created? And how much of these AI engines’ prodigious and ever-growing output is actually true?

“Lies Our Machines Tell Us: Why the New Generation of ‘Reasoning’ AIs Can’t be Trusted,” April 16, 2025

It varies. Pomona business prof Gary Smith’s investigations have uncovered major problems in this area, as he has detailed here at Mind Matters News.

Given the deeply nested woke biases in Silicon Valley, Europe and Canada, the mere proliferation of AI offerings does not guarantee that the objectivity, balance and quality of information they generate will improve. The new competitors from Communist-run China only compound these concerns. Just because they have more choices, AI users and target audiences are nowhere near out of the woods; if anything, the threat is only growing, since AI is rapidly penetrating ever-more aspects of our professional and personal lives.

“Can’t be Trusted”

Indeed, people have come to depend on the bots. Smith reports one case in which every student in a class used a chatbot to compose an answer to a problem, and none of them realized that the bot’s answer was incorrect.

Lisikh argues that because chatbots (large language models, or LLMs) don’t really think, they are more likely than ordinary human beings to be ruled by biases.

A big difference between the human and the digital brain is that most humans have the sense of and desire for truth, and are aware – in some cases, painfully so – when reason leads them to different conclusions than their visceral convictions. In the best of us, the awareness of and desire for truth can overcome the most powerful of our emotions.

Machines don’t have that “problem”. On the contrary, the design of their “reasoning”, being probabilistic and completely lacking causality, is perfect for rationalizing anything set by their trainers/policy-setters as a priority – as opposed to coming to independent logical conclusions through evidence. So by design, a GenAI does “care” explicitly (through policies) or implicitly (through forced learning or bias acquired with training data) about set goals – but does not care at all about the truth.

“Can’t be Trusted”

That makes sense. Humans often overcome such biases as a result of life experience, which the chatbot by nature cannot have.
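To see roughly what “probabilistic and completely lacking causality” means in practice, consider the toy Python sketch below. The prompt, the candidate words, and their probabilities are all invented for illustration; a real chatbot estimates such probabilities from patterns in its training data, and nothing in the selection step checks the output against reality.

```python
import random

# Toy next-word distribution for the prompt "The capital of Australia is".
# The numbers are made up for this example; a real LLM derives them from
# statistical patterns in training text, not from any check of the facts.
next_word_probs = {
    "Sydney": 0.55,    # common in casual writing, but factually wrong
    "Canberra": 0.40,  # correct, yet less frequent in the training text
    "Melbourne": 0.05,
}

def sample_next_word(probs: dict) -> str:
    """Pick a word in proportion to its probability.
    Nothing here rewards truth; only statistical likelihood matters."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    completions = [sample_next_word(next_word_probs) for _ in range(10)]
    print(completions)  # e.g. ['Sydney', 'Canberra', 'Sydney', ...]
```

In this simplified picture, if the wrong answer happens to be the more common one in the training text, the sampler will confidently produce it most of the time – which is the point Lisikh is making about goals versus truth.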

Incidentally, when Lisikh tested the new Chinese model DeepSeek, he found that, while it is an improvement in some areas, it displays “an arsenal of logical fallacies and straight lies” when confronted with a complex topic. Use at your own risk.

