
AI Will Keep Hallucinating, Says OpenAI

AI will always need human guidance.

OpenAI, the company behind ChatGPT, has acknowledged that there is no way AI will ever stop “hallucinating.” In fact, hallucinations, the plausible-sounding errors and fabricated claims these systems produce, are inevitable. According to Computerworld:

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech’s Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.
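For readers who want a taste of the math: as I read the paper, it reduces the task of generating valid text to a binary classification problem the authors call “Is-It-Valid” (IIV), and then shows, roughly, that

\[
\text{generative error rate} \;\ge\; 2 \times \text{IIV misclassification rate} \;-\; \text{small correction terms},
\]

so any statistical difficulty in telling valid outputs from invalid ones puts a hard floor under the hallucination rate. By the paper’s own illustrative example, if 20 percent of the birthday facts in a training set appear only once, a base model should be expected to hallucinate on at least roughly 20 percent of birthday questions.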

The researchers noted that AI, like a student taking a difficult exam, will sometimes guess, bluffing an answer rather than admitting uncertainty. This has been a persistent issue with large language models (LLMs). AI systems have been caught generating academic citations out of thin air, stating false facts, and recommending nonexistent books for summer reading. Even though LLMs draw on vast amounts of internet content, the researchers showed that this is no safeguard against fakery. In truth, AI’s dependence on pre-existing internet material is part of the problem. Iain Thomson explains in The Register:

In theory, AI model makers could eliminate hallucinations by using a dataset that contains no errors. But the paper admits such a scenario isn’t remotely possible, particularly since the huge volumes of data used in training likely contain mistakes.
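To put a number on the researchers’ exam analogy: under the binary right-or-wrong grading used on most benchmarks, guessing always earns at least as much expected credit as admitting uncertainty, so the incentives tilt toward confident fabrication. A minimal back-of-envelope sketch in Python (the probabilities are illustrative, not taken from the paper):

```python
# Expected benchmark score for one question under binary grading:
# 1 point for a correct answer, 0 points otherwise.
# Illustrative numbers only -- not from the OpenAI paper.

def expected_score(p_correct: float, guess: bool) -> float:
    """p_correct: chance a guess happens to be right.
    guess: True = answer anyway; False = say "I don't know"."""
    if guess:
        return p_correct * 1.0  # right p_correct of the time, 0 otherwise
    return 0.0  # "I don't know" earns nothing under binary grading

for p in (0.1, 0.3, 0.5):
    print(f"p={p:.1f}  guess: {expected_score(p, True):.2f}  "
          f"abstain: {expected_score(p, False):.2f}")
```

For any chance of being right above zero, guessing beats abstaining in expectation, which is exactly the incentive problem the researchers describe: a model is never rewarded for saying “I don’t know.”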

How will this change the way people talk about the AI industry? If anything, the study reminds us of one of AI’s fundamental flaws, one that shows no sign of going away. It all comes back to the fact that LLMs, however confident their answers sound, cannot reflect on what they are generating and so cannot reliably discern truth from falsehood.

Conclusion? AI will always need human guidance.


Peter Biles

Editor, Mind Matters News
Peter Biles is the author of several books of fiction, including the story collection Last November. His stories and essays have appeared in The American Spectator, Plough, and RealClearBooks, among many others. He authors a literary Substack blog called Battle the Bard and writes weekly on trending news in technology and culture for Mind Matters.