Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Hallucination in Large Language Models (LLMs)


From Data to Thoughts: Why Language Models Hallucinate

The limits of today’s language models and paths to real cognition
We’ll need an architectural approach that can handle propositions—thoughts, judgments, reasoning structures—as first-order objects.

Astrophysicist: Don’t Say That Chatbots “Hallucinate”

Adam Frank points out that human-type “hallucination” is not at all what drives a chatbot to claim that the Russians sent bears into space.
Frank, fellow physicist Marcelo Gleiser, and philosopher Evan Thompson argue in a new book that ignoring explicitly human experience is a blind spot for science.

Internet Pollution — If You Tell a Lie Long Enough…

Large Language Models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space...
Later, Copilot and other LLMs will be trained to say no bears have been sent into space, but many thousands of other misstatements will fly under their radar.