
Programmers admit fundamental limits to chatbot thinking


At Quanta, science writer Anil Ananthaswamy tells us that chatbot developers are beginning to face up to the fundamental limitations of their products:

LLMs are fundamentally matching the patterns they’ve seen, and their abilities are constrained by mathematical boundaries. Embedding tricks and chain-of-thought prompting simply extends their ability to do more sophisticated pattern matching. The mathematical results imply that you can always find compositional tasks whose complexity lies beyond a given system’s abilities. Even some newer “state-space models,” which have been touted as more powerful alternatives to transformers, show similar limitations.

On the one hand, these results don’t change anything for most people using these tools. “The general public doesn’t care whether it’s doing reasoning or not,” [Nouha] Dziri said. But for the people who build these models and try to understand their capabilities, it matters. “We have to really understand what’s going on under the hood,” she said. “If we crack how they perform a task and how they reason, we can probably fix them. But if we don’t know, that’s where it’s really hard to do anything.”

“Chatbot Software Begins to Face Fundamental Limitations,” January 31, 2025
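The “compositional tasks” the Quanta piece mentions can be made concrete with a small, purely hypothetical sketch (the rule and numbers below are illustrative; they are not taken from the article or the research it describes). Each individual step is trivial, but the answer requires chaining the step many times, and the rough claim of the cited mathematical results is that for any fixed pattern-matching system there is a chain deep enough to exceed its abilities, whereas an ordinary program applies the same rule no matter how long the chain is.

    # Hypothetical illustration of a compositional task (not from the cited work):
    # each step is easy, but the answer requires composing the step n times.

    def step(x: int) -> int:
        """One trivial step: double the number, then add its digit sum."""
        x = 2 * x
        return x + sum(int(d) for d in str(x))

    def compose(x: int, n: int) -> int:
        """Apply `step` n times; the reasoning depth needed grows with n."""
        for _ in range(n):
            x = step(x)
        return x

    print(compose(7, 2))   # short chain: easy to learn from a few examples
    print(compose(7, 25))  # long chain: same rule, much greater compositional depth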

Curiously, that’s not what the AI celebs are saying. As Pomona College business prof Gary Smith noted here at MMN yesterday, they say things like this:

In October 2024, Elon Musk declared that:

I certainly feel comfortable saying that it’s getting 10 times better per year… I think it will be able to do anything that any human can do possibly within the next year or two.

In November 2024, OpenAI’s Sam Altman predicted the arrival of AGI in 2025. A year earlier, in October 2023, Blaise Agüera y Arcas and Peter Norvig wrote a piece titled “Artificial General Intelligence Is Already Here.”

LLM cheerleaders claim that the effects will be immense. Wharton Professor Ethan Mollick asserted that the productivity gains from LLMs might be larger than the gains from steam power. Sundar Pichai, CEO of Alphabet and Google, proclaimed that LLMs are “more profound than fire,” and Turing Award winner Geoffrey Hinton declared, “I think it’s comparable in scale with the Industrial Revolution or electricity — or maybe the wheel.”

It is now clear that this is self-serving hyperbole. Plain-vanilla LLMs will not lead to AGI because they do not understand the text they input and output or how this text relates to the real world. They consequently cannot distinguish between fact and fiction or between correlation and causation — let alone engage in critical thinking. They are consequently prone to hallucination and flub simple questions. Pre-training on larger and larger databases won’t solve this problem.

“Why LLMs (chatbots) won’t lead to artificial general intelligence (AGI),” February 20, 2025

So we can believe either the hands-on programmers or the celebs.

You may also wish to read: Why LLMs (chatbots) won’t lead to artificial general intelligence (AGI). The biggest obstacle is seldom discussed: Most consequential real-world decisions involve uncertainty. LLMs can’t help when key decisions don’t feature objectively correct probabilities but rather subjective probabilities that need interpretation. (Gary Smith)

