AI Will Keep Hallucinating, says OpenAI
AI will always need human guidance. OpenAI, the company responsible for ChatGPT, has said that AI will never stop "hallucinating." In fact, hallucinations, which amount to basic errors and generated falsehoods, are inevitable. According to Computerworld:
OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.
The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech’s Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.
The researchers likened AI to students taking a difficult exam: when uncertain, the models will guess and pretend to know the answer rather than admit they don't. This has been a persistent issue with large language models (LLMs). AI systems have been caught generating academic citations out of thin air, stating false facts, and recommending lists of nonexistent books for summer reading. Even though LLMs draw on vast amounts of internet content, the researchers showed that this does not make them immune to producing fakery. In truth, AI's dependence on pre-existing internet material is part of the problem. As Iain Thomas explains in The Register:
In theory, AI model makers could eliminate hallucinations by using a dataset that contains no errors. But the paper admits such a scenario isn’t remotely possible, particularly since the huge volumes of data used in training likely contain mistakes.
How will this change the way people talk about the AI industry? If anything, the study serves to remind us of one of AI's fundamental flaws, and one that does not appear to be going away. It all comes back to the fact that LLMs, however confident their answers may sound, cannot reflect on what they are generating and so cannot reliably discern truth from falsehood.
Conclusion? AI will always need human guidance.
