Chatbot offers info on a legal case — too bad it’s mostly fiction
In Chapter 7 of Stockholm Syndrome Christianity (2025), John West provides a telling example of how chatbots’ tendency to hallucinate makes them dangerous to trust.

The underlying programming of these systems can lead to the generation and dissemination of completely fictional statements. I discovered this firsthand when I experimented with Google’s Bard in 2023. I asked it questions about the legality of teaching evidence for intelligent design in public school classrooms. I had expected Bard to generate the kinds of inaccuracies and bias found in places like Wikipedia. After all, it probably drew on Wikipedia as one of its sources. What I didn’t expect was that Bard would invent completely false information, such as claiming (wrongly) that the United States Supreme Court had ruled that teaching intelligent design in public schools is unconstitutional in the case of Kitzmiller v. Dover.
In reality, Kitzmiller was merely a federal district court case, not a Supreme Court case, and its decision only applies to one part of Pennsylvania, not nationwide.
And when I asked it to justify specific claims or to cite its sources, Bard repeatedly invented court rulings that didn’t actually exist. It generated the names of the purported legal cases and even cited the courts that purportedly issued the rulings. But these court cases and rulings never happened.
Let that sink in. Bard provided fictional sources in support of false claims. Bard’s fanciful answers made Wikipedia’s biased entries appear to be paragons of accuracy.
I later learned that I had experienced a well-documented flaw of AI systems: They generate all sorts of imaginary facts. In one humorous example, ChatGPT “kept asserting confidently (and incorrectly) that the Russians had sent various numbers of bears into space.” (pp. 231-32)
Indeed. Pomona business prof Gary Smith wrote about those bears here at Mind Matters News. It’s one example of many. Chatbots can’t help hallucinating because they don’t actually think; they just generate and rework copy from internet sources.
West cautions fellow Christians (but the advice would apply to anyone), “Christians who aren’t aware that AI-based information services can be even more biased and inaccurate than other sources are asking for trouble.”
There is still no substitute for just doing our homework.
You may also wish to read: John West’s new book Stockholm Syndrome looks at Francis Collins. It’s not possible to reconcile Collins’s treatment of unborn children with a Christian view of them. The genome mapper is widely cited as a famous Christian in science, but West asks us to look a little deeper, at his actual policies.