Man in a maze (Photo by John T on Unsplash)

Has Neuroscience Disproved Thinking?

A philosopher argues that Nobel Prize-winning research shows that the theory of mind is just another illusion, useful for survival and success

Duke University philosopher Alex Rosenberg recently claimed that modern neuroscience shows that we don’t really “decide” to do anything. Philosophers would call his approach “the demise of personal agency.”

For evidence, Rosenberg points to recent Nobel Prize-winning neuroscience. In “Is neuroscience a bigger threat than artificial intelligence?” he tells us why the discoveries of Eric Kandel (2000) and John O’Keefe and Edvard and May-Britt Moser (2014) undermine the quaint view that our conscious beliefs and desires control our actions. On his reading, the brain does not contain commands or statements, only neurons firing in seemingly arbitrary patterns.

This, he says, is the false way we represent reality to ourselves:

Consciousness tells us that we employ a theory of mind, both to decide on our own actions and to predict and explain the behavior of others. According to this theory there have to be particular belief/desire pairings somewhere in our brains working together to bring about movements of the body, including speech and writing.

But the researchers, he says, show that the brain does not contain beliefs and desires:

This research program began with HM, the patient famous for being unable to acquire or store beliefs because of a lobotomy that went wrong and destroyed his hippocampus. The irony of this research is that it ended up showing that no one’s brain acquires, stores, and uses information in the form of beliefs and desires.

The researchers demonstrated this by showing that the brain is driven only by the firing of neurons. For example, they could not find a map of a rat’s surroundings in its brain. All they found were neural firings. Rosenberg deduces,

But note, neither the rat nor any part of its brain constructs a map from the neural firings. It’s not giving the neural circuits content, treating them as containing statements about where the rat is. Experimenters decode firing patterns. Rats don’t. They’re just driven by them. Firings are all the same, all over the brain—rat and human.

Therefore, he concludes, the rat doesn’t find its way around by any kind of mapping system but because certain neurons are making its muscles twitch:

The neurons don’t represent to the rat the way its world is arranged. So they don’t work anything like the way beliefs have to work, pairing up with desires via shared content about means and ends.

Rattus norvegicus, the brown rat

And, he argues, the same applies to the human mind: “That goes for our neuronal circuits, assemblies, modules, regions, too.” The only reason we think that there are maps, beliefs, or desires is the way we decode our observations. “Experimenters decode firing patterns. Rats don’t. They’re just driven by them.”

Following this logic, Rosenberg finds that there is no real agency in the human mind. Rather, we’re driven by the chemicals and electrical circuitry in our brains. Our illusion of making choices is just a vestigial mental organ left over from our evolutionary prehistory:

The theory of mind is just another one of these illusions, useful for survival and success in the Pleistocene, but a blunt instrument of limited predictive and explanatory power. It emerged out of the more fundamental mind-reading ability we share with other species and use to track predators and prey, threats and opportunities. That undoubtedly inborn ability combined with our unique gift, language, to generate the theory of mind. By colonizing consciousness, spoken language turned it into a monologue of silent speech, tricking us that the meaning of spoken words is given by thoughts’ content when it’s just silent sounds passing through consciousness. Neuroscience shows that in our brains the neural circuits neither have nor need content to do their jobs.

There is a problem with this sort of reasoning. One could make the same argument about computer code, as follows:

There is no code. It’s all just assembly language.

Or, there is no assembly; it’s all just machine code.

Or, there is no machine code; there are just voltage levels on transistors.

One could continue following this chain of reasoning to the point where the transistors don’t exist. It’s just a bunch of electrons doing their thing.

Of course, the electrons don’t really exist either. They’re just a bunch of quarks and leptons.

In which case, the program your computer is running so that you can read this article doesn’t exist.

Have we taken a wrong turn somewhere…?
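The layering is easy to see in working code. Here is a minimal sketch (in Python, purely for illustration; the function and its name are mine, not anything from Rosenberg’s article). The standard library’s dis module displays the very same function one level down, as bytecode. Nothing at the bytecode level mentions a “greeting,” yet it would be strange to conclude that greetings therefore don’t exist:

```python
import dis

def greet(name):
    """A 'high-level' description: build and return a greeting."""
    return "Hello, " + name

# The same artifact described one level down: CPython bytecode.
# Nothing at this level mentions a "greeting", yet greetings still happen.
dis.dis(greet)

print(greet("reader"))  # the high-level behavior has not gone anywhere
```

Each level of description is accurate; none of them abolishes the levels above it.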

I think we’ve all seen this sort of argument before in many other guises. It is commonly called “reductionism.” The reductionist claims that, because an object can be construed as made up of parts, the object is just the parts. It is like saying that because an article like this one is constructed from letters of the alphabet, the article is only rows of letters and any meaning we get from it is our own subjective interpretation.

Why do some people find reductionism a compelling argument?

There appear to be three main reasons: First, reductionism can be useful. By breaking down an object into its parts, we can reuse the parts for our own purposes. If neuroscience can isolate the neurons that enact a desired function, then scientists can control the behavior of a rat by inducing the right neural firing pattern. However, this rationale may provide little information about the natural state or purposes of the rat.

Suppose, for example, we took the same approach to a brand new computer. Instead of using it for writing, browsing, gaming, streaming and so on, we throw away the innards and use the casing as a flowerpot. Because we used the computer in a manner drastically at odds with its purpose, we extracted minimal value from our expensive purchase and have minimal understanding of it. The same reasoning applies to nature. If we fail to identify the purposes of natural entities, we run the risk of dramatically reducing our ability to understand them.

The second reason is more subtle. It is due to how scientific hypothesis testing is conducted. Currently, a hypothesis is tested using Fisher’s method of rejection regions. In order for a test to be valid, the hypothesis and rejection region must be specified before the test is carried out.

There is a certain intuitive justification for this approach. Imagine you are lying on a grassy meadow, looking up at clouds in the sky. You’re about to make a big decision, whether to purchase a brand new sports car. You’ve wanted this car for a long time, but are worried that it is not a sound financial decision. Suddenly, you spot a cloud that looks a bit like a wheel, next to another long cloud that could be the hood of a car. It must be a sign! Filled with confidence, you rush off to Joe’s Only Slightly Used Cars to purchase the car of your dreams, hoping this time things will turn out better.

The problem above is known as selection bias: We look for patterns in the data that confirm our preconceived ideas. Fisher’s approach to hypothesis testing is meant to guard against selection bias by making experimenters state what they are looking for up front, before performing an experiment.
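As a toy illustration of Fisher’s procedure (the coin-flip scenario and every number below are my own assumptions, not anything from the neuroscience research), here is a test where the hypothesis and rejection region are committed to before the data is examined:

```python
from math import comb

# A Fisher-style test: the hypothesis and rejection region are fixed
# BEFORE any data is examined.
# Null hypothesis: the coin is fair. Test: flip 20 times; reject the
# null if we observe 15 or more heads.
n, reject_at = 20, 15

# Size of the rejection region under the null (a fair coin):
alpha = sum(comb(n, k) for k in range(reject_at, n + 1)) / 2 ** n
print(f"Probability of falsely rejecting a fair coin: {alpha:.4f}")  # ~0.0207

heads = 16  # the data, observed only after the test was specified
print("Reject the null" if heads >= reject_at else "Fail to reject the null")
```

Because the rejection region was fixed in advance, a rejection cannot be the product of hunting through the data for a congenial pattern.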

While Fisher’s approach is reasonable, it is at odds with the way we tend to detect patterns in everyday life. Let’s say we are reading an article. If we used Fisherian hypothesis testing to read an article, we would first form a hypothesis about it and then read the article to try to falsify that hypothesis. If we were able to falsify the hypothesis about the article with high certainty, then Fisher would say we have learned something from the article. However, we do nothing of the sort when reading. Instead, we gain information directly from the article without formulating hypotheses before the fact.

A better description of what we are doing is captured by Bill Dembski’s concept of complex specified information (CSI). In Dembski’s formulation, we are able to identify patterns after examining the data. Instead of trying to reject a single hypothesis, we compare two means of describing the data. One hypothesis is called the chance hypothesis; it represents the probability that the data assumed its configuration due to the internal mechanics of the data’s environment. If we go back to the “sports car” cloud for a moment, it is probable that clouds form circular and oblong shapes under normal atmospheric conditions due to chance. The second hypothesis is called the specification; it measures how concisely the data can be described according to a knowledge base that is independent of the chance hypothesis. If the data configuration has low probability under the chance hypothesis and a concise description, then the data exhibits CSI and the pattern is real.

An example of an independent knowledge base is a set of words in English. Clouds do not naturally form words in English, so a sequence of words in the clouds (BUY THAT CAR!) has a very low probability. At the same time, it can be concisely described with reference to English. Consequently, we are right to conclude that there is a real pattern in the clouds that cannot be explained by standard wind dynamics. It might, of course, be explained by the fact that Joe’s other business, an outdoor publicity firm, owns a skywriter.
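A rough sketch of that comparison can be put in code. Everything quantitative below is an illustrative assumption (the per-letter probability, the size of the phrase list), not a figure from Dembski’s work; the point is only the shape of the inference, weighing improbability under the chance hypothesis against the conciseness of the specification:

```python
from math import log2

# Toy sketch of the CSI comparison; all numbers are illustrative assumptions.
# Chance hypothesis: each letter-shape in the clouds forms independently,
# with (generously) a 1-in-100 chance of a recognizable letter appearing.
letters = len("BUYTHATCAR")           # 10 letter-shapes spelling BUY THAT CAR
chance_bits = letters * log2(100)     # improbability under chance, in bits

# Specification: cost of describing the pattern with an independent
# knowledge base, here an assumed list of 2**20 common English phrases.
description_bits = 20                 # bits needed to single out one phrase

print(f"chance: {chance_bits:.1f} bits vs. description: {description_bits} bits")
print("Exhibits CSI" if chance_bits > description_bits else "No CSI detected")
```

When the improbability under chance far exceeds the cost of the description, the pattern is credited as real rather than as a lucky accident of our interpretation.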

Bringing this discussion full circle back to Rosenberg’s argument from neuroscience, we can apply Dembski’s complex specified information (CSI) theory to refute reductionism. The original argument states that, because the rat’s intentionality is represented by neural firings, there is no intentionality; there are only neural firings. However, the researchers showed that they could summarize the complex firing patterns with concise descriptions, such as a location map paired with goals. As we saw with the words in the clouds, this is an instance of CSI. Thus, we can infer that a higher-level cause induces the neural firing.

To make this clearer, if we saw the rat suddenly stand up on its hind paws and dance a ballet, we could infer that something other than typical ratty beliefs and desires was governing the rat’s behavior. In the same way, we can infer that typical ratty beliefs and desires usually govern rat behavior. These beliefs and desires are transmitted to the rat’s muscles through the neural firing pattern, resulting perhaps in the rat chewing a hole in a plastic container to get at the contents (food waste). The rat is not an automaton programmed to seek food; he is a living animal who suffers from hunger and knows where humans leave unattended scraps overnight. But we are left with a question: Where do these beliefs and desires reside if they are not explicitly encoded in the rat brain?

This brings us to the third reason that reductionism seems compelling. Science has reduced causal explanations to physical mechanisms. Physical mechanisms are inherently incapable of the kind of top-down behavior modification that is implied by the belief-desire model. As such, Rosenberg is actually begging the question with his neuroscience argument. He is using a hidden premise that only physical processes can govern the rat’s behavior, and therefore it is only the neural firing pattern that is controlling the rat’s muscles.

If we take away Rosenberg’s hidden premise, then we can interpret the neuroscience evidence in a very different way. Because the neurons cannot originate the top-level beliefs and desires that we know are governing the rat’s behavior, there must be an ulterior cause. This cause could be something analogous to software code: a high-level command structure is encoded in the rat brain, just as compiled code is derived from a high-level programming language. If our metaphysics allows, we could ascribe the rat’s behavior to an Aristotelian animal soul. However, the discussion of what exactly a soul is and how it differs from mechanism is for another article.

The main takeaway here is that Rosenberg’s argument has a hidden premise of reductionism. When we realize that complex causal phenomena are not reductive if they have concise descriptions, then we see the flaw in his argument. In fact, the evidence put forth could very well support the traditional concept of a soul in animals, as well as a soul in human beings. Whether there are souls or not, neuroscience evidence cannot eliminate agency and the narrative mind without engaging in circular arguments.

Eric Holloway has a Ph.D. in Electrical & Computer Engineering from Baylor University. He is a Captain in the United States Air Force and has served in the US and Afghanistan. He is the co-editor of the book Naturalism and Its Alternatives in Scientific Methodologies. Dr. Holloway is an Associate Fellow of the Walter Bradley Center for Natural and Artificial Intelligence.

Also by Eric Holloway: Human Intelligence as a Halting Oracle

and

Does information theory support design in nature?

