Mind Matters Natural and Artificial Intelligence News and Analysis

If Consciousness Is Not Physical, How Can an AI Duplicate It?

Robert Lawrence Kuhn interviewed Berkeley philosopher Hubert Dreyfus on the question before his death in 2017

At Closer to Truth, Robert Lawrence Kuhn interviewed the late philosopher Hubert Dreyfus (1929–2017) on the question "Is consciousness entirely physical?" The interview was released May 18, 2022 (10:03 min).

Here's the big question about consciousness, our inner experience of what things feel like: Is consciousness a product of the physical world alone? If consciousness is the output of the physical brain by itself, however complex, then physicalism would defeat those who believe in, or hope for, the existence of nonphysical realities.

Some philosophers (physicalists) do maintain that consciousness is entirely physical or, more commonly, they dance around the point. For example, philosopher David Papineau said in 2020, “If only we could stop ourselves seeing things through dualist spectacles, we’d no longer feel that there is anything puzzling about consciousness.” Almost no one has found that to be an accurate summary of the situation.

Dreyfus, who wasn’t afraid to question vast claims for artificial intelligence, took on the physicalists here:

Selections from the transcript and some notes follow:

Robert Lawrence Kuhn: Bert, the question that has dominated my life has been the nature of consciousness. Is it material? Is there something mental or spiritual? What is the essence of consciousness? Now, the people who are involved with artificial intelligence, looking at the tremendous growth — exponential growth — of computer power, would say that, once you have a sufficient number of processors with improved software that represents the real world, etc., you will be able to wholly duplicate human consciousness on a machine and, in fact, in a few years do it very cheaply. (0:40)

You have pioneered a way of thinking about computers that may be a little bit different. Am I right? (0:46) …

Hubert Dreyfus: I think AI has failed … it isn’t even pointing in the direction of getting intelligent behavior, let alone consciousness. (1:25)

To begin with, when I came to MIT — and I taught at MIT for eight years — the AI people were doing their artificial intelligence thing nearby and they had been doing it for a while. They came into my class and said, well, you philosophers have had 2,000 years and you can't understand consciousness, intelligence, language learning, any of that. And we're beginning to understand it over here in the AI labs. (1:51)

Setting out to discover the truth of the matter, Dreyfus was hired as a consultant at the Rand Corporation.

Hubert Dreyfus: That was in [19]65. The big beginners of AI were at Rand and we had a kind of tense relationship. I wrote a paper called "Alchemy and Artificial Intelligence" which was to say that the way they were trying to achieve intelligence with computers couldn't be done. (2:29)


And it was sort of interesting how I knew it couldn’t be done and they didn’t know it couldn’t be done. Philosophers had developed a lot of very sophisticated ways of thinking about the mind. Namely that there must be elementary bits of knowledge — sort of primitive something or others. The concepts were rules. We had representations in our minds of the world and we made inferences from those representations of the world and that’s how we came to behave intelligently and understand things now. (3:02) …

But they didn’t care. They were trying to make computers at that point that just could behave — and behave intelligently — and what was interesting was that, whereas they came to my class and said “You know, you philosophers have wasted your time for 2000 years” once I saw what they were writing — reading Newell and Simon at Rand — I discovered they had inherited the whole philosophical story. (3:27)

The philosophers, like Descartes, believed in atomic ideas. Every one of these people since Descartes believed that we had internal mental representations of the world, and they bought all that and they turned it into a research program at the very same time — it was about 1957 — that Ludwig Wittgenstein published the [Philosophical] Investigations where he was destroying that whole view… (4:01)

Since I was teaching those guys, I knew that the AI people had inherited a lemon they had taken over in their research program — a 2,000-year failure. So I said okay, you guys are just behind the times. You're going to discover that this doesn't work. And they finally did. (4:30)

[Marvin] Minsky, who was head of the AI lab then, said all we needed was a few million more facts in the computer and then it would behave intelligently and have common sense knowledge. He has said in an interview with Wired that AI has been brain dead since the early 70s when they discovered the common sense knowledge problem. And that's what I talked about in my book, What Computers Can't Do. (4:52)

Note: Computer pioneer Marvin Minsky (1927–2016) had also said, “We’ll show you that you can build a mind from many little parts, each mindless by itself.” And “Common sense is not a simple thing. Instead, it is an immense society of hard-earned practical ideas — of multitudes of life-learned rules and exceptions, dispositions and tendencies, balances and checks.” (Society of Mind, Simon and Schuster 1986). He apparently believed that common sense could be replicated by a large enough computer.

[ … ]

Hubert Dreyfus: So now, what’s going to help them to get the missing thing, which is learning, or (the) even more missing thing, which is consciousness? All this has been done on the behaviorist level: We’ll just get it to behave like people. And it fails to behave like people and darn well must fail to be conscious. (7:28)

Everybody agrees to that. So what would it take? Well, the latest idea, which sounds to me like sheer madness and desperation, is to say, computer chips get more and more and more powerful. And once they get powerful enough, so that there are as many bits on them as there are stored in the brain, then we'll be able to do this. (7:50) …

Nobody has any idea and they should just keep quiet until they do. Because, I mean, I think it is the hardest question — how in the world matter, which is this third-person material stuff, could ever produce consciousness. And AI and the use of computers is not helping us understand it one bit. (10:01)

You may also wish to read: Brain scientist: Consciousness didn’t evolve. It creates evolution. With a tremor in his voice, Donald Hoffman tells Robert Lawrence Kuhn that even the Big Bang must be understood in a universe where consciousness is fundamental. Cognitive scientist Hoffman starts by trying to align his consciousness theory with standard evolution theory and then just chucks that and says what he thinks.


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Human Soul: What Neuroscience Shows Us about the Brain, the Mind, and the Difference Between the Two (Worthy, 2025). She received her degree in honors English language and literature.
