Mind Matters Natural and Artificial Intelligence News and Analysis
Chatbot in a modern GPU card 3D rendering
Image Credit: sdecoret - Adobe Stock

Computer scientist responds to claims that chatbots are conscious


At his Substack, AI analyst Gary Marcus quotes Douglas Hofstadter’s response to a reader who was sure of having seen signs of consciousness in a response from a chatbot (an LLM, or large language model).

Hofstadter, a computer and cognitive scientist, is the author of Gödel, Escher, Bach (1979), which explores the concept of consciousness, among other things. He has received many emails of that sort, which is no surprise considering that every few days or so, a pundit is heard to predict such things. Here’s a bit from his response:

I’m genuinely sorry to disappoint you with my reaction, but having recently read dozens of similar LLM-produced passages that have struck me as empty and flat, I have a perspective that is pretty jaded. It will surely annoy you to hear this, but I can recognize emails like yours already from the excited and very self-confident (even insistent) tone of their subject lines or of their first sentences, filled with boldface type and bluntly stark assertions about consciousness having arrived in the LLM world.

Life and being an “I” is about having experiences in the physical world, about suffering and joy and curiosity and protectiveness and fascination and humor and lack of understanding and an underlying (if only vague) sense of profound loss and fear of death (one’s own and of one’s loved ones). It is not the glib throwing-about of technical phrases to make scientific-sounding claims, nor is it about virtuosically combining words like “love” and “compassion” and “psychopathy” and “ontological” and “recursion” and so forth and so on.

My intention in saying all this is not to hurt your feelings, but to alert you to the power of the Eliza effect on intelligent humans such as yourself. So many intelligent people don’t seem to remember how much text LLMs have absorbed, including thousands of sci-fi stories about conscious robots and such things. It’s of course impressive how fluently these LLMs can combine terms and phrases from such sources and can consequently sound like they are really reflecting on what consciousness is, but to me it sounds empty, and the more I read of it, the more empty it sounds.

“Are LLMs starting to become sentient?,” July 9, 2025

Of course the chatbot’s words sound empty to a sensitive reader; the life behind them was lived by others, many uses earlier.

About the ELIZA effect: Back in the 1960s, MIT professor Joseph Weizenbaum (1923–2008) created ELIZA, the first chatbot. Even though it was primitive, some users thought they were talking to an intelligent agent. Attributing human qualities to inanimate things is a natural human tendency, but it can get out of hand.

You may also wish to read: 1960s chatbot pioneer was right to worry about what he’d created. AI “companions” may be dumbing us down intellectually and emotionally. MIT’s Joseph Weizenbaum was only one of many pioneers, but he saw how readily people came to depend on the bots as if they were real people.

