Mind Matters Natural and Artificial Intelligence News and Analysis

Tech Hype Watch: Do Chatbots Really Understand Things?

Well-known author Robert Wright believes they do but he misunderstands how computers work

American science fiction author Philip K. Dick (1928–1982) titled his 1968 novel Do Androids Dream of Electric Sheep? (Doubleday). That novel's premise formed the basis for the influential 1982 film Blade Runner.

What does it mean to dream?

A dream is unique because it takes place entirely within an imagined world, a world inside a human mind. Dreams embody a contradiction: something that does not exist can feel utterly real, sometimes more real than the waking world. Why does this happen? Because of the meaning within our minds. A being that cannot grasp meaning cannot dream. Dreams, then, are the hallmark of beings with understanding.

Robert Wright, the author of Nonzero (Pantheon 2000), asks a very intriguing question about minds and meaning: can the recent AI phenomenon ChatGPT understand meaning?

At first glance, the reader may think not. How can a series of ones and zeros on transistor switches understand anything? This is the thrust of John Searle's famous Chinese Room argument: a machine can look like it understands without really understanding, because it operates entirely according to rules.

But Wright’s contention is that something very different is happening in AIs like ChatGPT, so that Searle’s argument no longer applies.

Is there a special sauce that makes the latest and greatest AI so different?

The basic mechanism behind large language models (LLMs) is not new. It was in fact described by Claude Shannon (1916–2001) in his 1948 paper “A Mathematical Theory of Communication.” Shannon showed that it is possible to generate semi-coherent text with a very simple probabilistic lookup table called a Markov generator. So there is nothing new about the fundamentals of AI; indeed, the fundamentals work exactly like the rule book in Searle's Chinese Room.
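Shannon's idea can be sketched in a few lines. The toy corpus and parameters below are invented for illustration; a first-order word-level Markov chain simply records which words follow which, then picks a follower at random.

```python
# A minimal sketch of Shannon-style text generation: a first-order,
# word-level Markov chain built from a tiny invented corpus.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat the cat ate the rat "
          "the dog sat on the log").split()

# Build the lookup table: each word maps to the words observed after it.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, length):
    word, out = start, [start]
    for _ in range(length - 1):
        if not table[word]:  # dead end: no recorded follower
            break
        word = random.choice(table[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 8))  # e.g. "the cat sat on the mat the dog"
```

Everything the generator "knows" is in that lookup table; it produces plausible-looking text purely by applying a probabilistic rule, with no grasp of what any word means.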

What Wright thinks makes recent AI unique is a coordinate system known as embeddings: vectors of numbers that represent words. Because embeddings form a coordinate system, the distance between words can be measured, and word vectors can be added to and subtracted from each other, providing a “geometry of meaning.”

One famous example: Subtracting “man” from “king” and adding “woman” results in a vector close to the word “queen.” This coordinate system seems to resemble meaning because we associate concepts with one another.
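The arithmetic can be illustrated with toy vectors. The two dimensions below (loosely, "royalty" and "maleness") and their values are hand-made for illustration; real models learn hundreds of dimensions from text, but the king − man + woman computation works the same way.

```python
# Toy illustration of embedding arithmetic: king - man + woman ≈ queen.
# The 2-D vectors are invented; real embeddings are learned from data.
import math

emb = {
    "king":  [0.9, 0.9],   # [royalty, maleness]
    "queen": [0.9, 0.1],
    "man":   [0.1, 0.9],
    "woman": [0.1, 0.1],
}

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Compute king - man + woman, component by component.
target = [k - m + w for k, m, w in
          zip(emb["king"], emb["man"], emb["woman"])]

# Find the vocabulary word whose vector lies closest to the result.
best = max(emb, key=lambda word: cosine(emb[word], target))
print(best)  # -> queen
```

The "analogy" falls out of plain coordinate geometry: subtracting "man" removes the maleness component, adding "woman" restores the rest, and the nearest stored vector happens to be "queen."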

These embeddings are learned by the AI from the training documents, and then used to associate and look up words in a complex architecture known as a “transformer.”
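The transformer's core "lookup" step is attention: a word's vector is compared, by dot product, against every other vector, and the similarity scores weight a blended result. The numbers below are invented toy values; this is a bare sketch of the scaled dot-product mechanism, not a full transformer.

```python
# Minimal sketch of scaled dot-product attention: score by similarity,
# softmax to weights, return a weighted average of the values.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    d = len(query)
    # Similarity of the query to every key (scaled dot product).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Blend the values, weighted by similarity to the query.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys   = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attend([1.0, 0.0], keys, values)  # leans toward the first value
```

Note that every step is arithmetic on numbers: compare, weight, average. However elaborate the architecture, it remains a rule-governed lookup.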

So was Searle wrong? Can chatbots and other advanced AIs really understand things?

The upshot, according to Wright, is that since AIs now possess a map of meaning, they have understanding. This would mean that Searle's argument no longer applies, notwithstanding the fact that coordinate geometry is nothing new.

However, just because the lookup process has become more sophisticated, does that mean the system is no longer rule-based, as Searle described? Imagine Searle's man in the room again. What has changed is that, instead of looking up Chinese characters in a book, he now looks up the 1s and 0s that AIs run on. He is still looking up symbols and applying rules; the process is just far more complicated than the Chinese characters were.

And this is the way it will always be, unless we come up with some new computer paradigm that does not rely on logic gates applied to 1s and 0s. Yet, to the best of our knowledge, such a paradigm is impossible.

The Church-Turing thesis, in its physical form, states that everything physical can be reduced to computation. Thus there is no way to build a computer that cannot be reduced to the logic of 1s and 0s. Therefore, no matter how sophisticated an AI becomes, ChatGPT or otherwise, Searle's argument still applies.

So, to answer both Philip K. Dick's and Robert Wright's questions: robots cannot dream of electric sheep. Due to the Church-Turing thesis, computers will always fundamentally be the sort of rule-based system described by Searle's Chinese Room argument. Consequently, they can never truly understand meaning.

You may also wish to read: AI and the Chinese Room Argument. Eighty years later, we are using the same paradigm, with much faster computers and vast data, but we still haven't cracked the mystery of human intelligence.

The feature photo is by Owen Beard on Unsplash.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
