
A Short Argument Against the Materialist Account of the Mind

You can simply picture yourself eating a chocolate ice cream sundae.
John R. Searle

John Searle’s Chinese Room scenario is the most famous argument against the “strong AI” presumption that computation-writ-large-and-fast will become consciousness:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. (1999)

His argument shows that computers work at the level of syntax, whereas human agents work at the level of meaning:

I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker. (2010)

John R. Searle, “The Chinese Room Argument” at Stanford Encyclopedia of Philosophy
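
For readers who think in code, here is a minimal sketch of the purely syntactic symbol shuffling Searle describes: a rule book maps incoming symbol strings to outgoing ones, and the "room" answers correctly without anything in the system understanding a word of Chinese. The rule table, example phrases, and function name below are invented purely for illustration; they are not drawn from Searle.

```python
# Illustrative sketch only: a "Chinese Room" as a rule-lookup table.
# The room follows the rules flawlessly, yet nothing in it knows what
# any symbol means -- pure syntax, no semantics.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(input_symbols: str) -> str:
    """Return an output string by rule lookup alone."""
    # Default reply: "Please say that again."
    return RULE_BOOK.get(input_symbols, "请再说一遍。")

if __name__ == "__main__":
    for question in ["你好吗？", "你会说中文吗？"]:
        print(question, "->", chinese_room(question))
```

The point of the sketch is simply that the mapping from question to answer can be specified, and followed, without any meaning attaching to the symbols at any step.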

I still find Searle’s argument persuasive, despite decades of attempts by other philosophers to poke holes in it.

But there’s another, shorter and more intuitive argument against a materialist account of the mind. It has to do with intentional states. Michael Egnor and others have offered versions of this argument here at Mind Matters and elsewhere, but I’d like to boil it down to its bare bones. Then you can commit it to memory and pull it out the next time your office mate starts to worry about Skynet or denies that he has free will.


Here goes:
Imagine a scenario where I ask you to think about eating a chocolate ice cream sundae while a doctor takes a real-time MRI scan of your brain. We assume that the following statements are true:

  1. You’re a person. You have a “first person perspective.”
  2. You have thoughts.
  3. I asked you to think about eating a chocolate ice cream sundae.
  4. You freely chose to do so, based on my request.
  5. Those thoughts caused something to happen in your brain and perhaps elsewhere in your body.

Notice that the thought in question—your first person, subjective experience of thinking about the chocolate sundae—would not be the same as the pattern in your brain. Nor would it be the same as an MRI picture of the pattern. One glaring difference between them: Your brain pattern isn’t about anything. Your thought is. It’s about a chocolate sundae.

We have thoughts and ideas—what philosophers call “intentional” states—that are about things other than themselves. We don’t really know how this works, how it relates to the brain or chemistry or the laws of physics or the price of tea in China. But whenever we speak to another person, we assume it must be true. And in our own case, we know it’s true. Even to deny it is to affirm it.

Points (1) through (5) above are common sense. In other words, everyone who hasn’t been persuaded by skeptical philosophy assumes them to be true. But it’s not merely that everyone assumes them. They are basic to pretty much any other intellectual exercise, including arguing.

That’s because you have direct access to your thoughts and, by definition, to your first-person perspective. You know these things more directly than you could conclude, let alone know, any truth of history or science. You certainly know them more directly than you could possibly know the premises of an argument for materialism.

That matters because (1) through (5) defy materialist explanation.

The materialist will want to say one of three things to avoid the implication of a free agent whose thoughts cause things to happen in the material world:

A) Your “thoughts” are identical to a physical brain state.

B) Your “thoughts” are determined by a physical brain state.

or

C) You don’t really have thoughts.

And if any one of (A), (B), or (C) is true, then most or all of (1) through (5) are false.

So here’s the conclusion: What possible reason could we have for believing (A), (B), or (C) and doubting (1) through (5)? Remember that if you opt for (A), (B), or (C), you can’t logically presuppose (1) through (5). Surely this alone is enough to conclude that we can have no good reason for believing the materialist account of the mind.

Jay Wesley Richards

Jay Richards is a research assistant professor at the Busch School of Business and author of The Human Advantage: The Future of American Work in an Age of Smart Machines.

See also: Jay Richards asks, can training for an AI future be trusted to bureaucrats?

and

Will AI lead to mass joblessness and social unrest?
