
Can Free Will Really Be a Scientific Idea?

Yes, if we look at it from the perspective of information theory

One compelling argument for genuine artificial intelligence (machines that really think) goes like this:

Such machines can exist in principle. If all of the finite and discrete reality we experience can be reduced to chance and necessity, then the human mind can be reduced to chance and necessity too. Everything that can be reduced to chance and necessity (or, to use technical terms, randomness and determinism) can be simulated on your average laptop computer, or even on a pocket calculator, at least in principle. The smart (or stupid) phone in your pocket could run the entire universe (given enough memory and time).

So, if all of our universe could, in principle, be run on a smartphone, then every part of the universe could certainly be run on a smartphone, including the human mind. This is why many highly educated and intelligent people think that the mind itself must reduce to a computer algorithm, at least in theory. In their view, there is just no other possibility.

However, their assumption is provably false. There are other possibilities. Chance and necessity are a very narrow restriction on the range of possibilities. They are a very successful restriction because we’ve been able to explain and control much of our world by reducing it to models of chance and necessity. But the point remains that chance and necessity are not the only way things can be.

Why not? To understand what the phrase “chance and necessity” means, we need a brief detour through probability theory. In everyday life, we casually throw around the terms “confidence,” “chance,” and “likely.” We sometimes attach numbers too. We say that an event has a 90% chance of occurring. But, what do we mean by a “90% chance”?

There are several ways of interpreting our notion of a 90% chance, and they do not all mean the same thing. Originally, it meant that, as we observe more and more outcomes, the proportion in which our event occurs approaches 9/10. Then Andrei Kolmogorov (1903–1987) came up with the axiomatic approach we currently use, in which every possible outcome is assigned a specific probability value and all those values add up to 1. The first approach is empirical (what we observe), and the second is mathematical.
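To put the two readings side by side in standard textbook notation: the frequentist reading says the probability of an event $A$ is the limiting value of its observed frequency,

$$P(A) \;=\; \lim_{N \to \infty} \frac{\text{number of times } A \text{ occurs in } N \text{ trials}}{N},$$

while Kolmogorov’s axiomatic reading simply posits a function that assigns every possible outcome $\omega$ a number $p(\omega) \ge 0$, subject to the requirement that $\sum_{\omega} p(\omega) = 1$.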

The first approach does not make any claims about the nature of reality whereas Kolmogorov’s approach asserts there is a fixed state to reality about which our observations accumulate information. His approach gives one reason for the first approach’s empirical observation. If there is a fixed state to reality, then it is provably the case that our observations will converge on that state.

The “chance and necessity” viewpoint adopted by science is that Kolmogorov’s definition of probability, in which a fixed probability can be assigned to every possible outcome, holds for absolutely everything in our universe. In that case, the view that the mind itself is a sort of artificial intelligence operating according to chance and necessity becomes, as we saw earlier, a foregone conclusion.

Once we understand the fundamental assumption that guides the quest for artificial intelligence, we are in a position to see its flaw. Simply, why assume that a fixed probability must be assigned to everything in reality? There is no reason why that must be true. Furthermore, we can, to some degree, empirically distinguish between the two scenarios.

If a fixed probability is assigned to everything, then everything we observe will eventually converge to a fixed probability. This concept of convergence is called the “law of large numbers.” On the other hand, if not everything is fixed, then some things we observe will never converge to a probability. So the straightforward response to the hypothesis that chance and necessity govern everything is that the law of large numbers may not apply to all physical phenomena.
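Here is a minimal sketch in Python of what the contrast looks like. The two sources below are my own illustrative stand-ins, not anything prescribed by the argument: one has a single fixed probability, the other keeps changing its probability behind the scenes.

```python
# A toy contrast between a source with a fixed probability and a source whose
# underlying probability keeps changing. The running frequency of the first
# settles near 0.9 (the law of large numbers); the second keeps swinging.
import random

def running_frequency(samples, checkpoints):
    """Running proportion of 1s at each checkpoint index."""
    ones, out = 0, {}
    for i, s in enumerate(samples, start=1):
        ones += s
        if i in checkpoints:
            out[i] = round(ones / i, 3)
    return out

random.seed(0)
n = 200_000
checkpoints = {1_000, 10_000, 100_000, 200_000}

# Source A: every trial has the same fixed probability, 0.9.
fixed = [1 if random.random() < 0.9 else 0 for _ in range(n)]

# Source B: the probability alternates between 0.9 and 0.1 in blocks of
# doubling length, so no single number describes the whole sequence.
drifting, block, p = [], 1, 0.9
while len(drifting) < n:
    drifting += [1 if random.random() < p else 0 for _ in range(block)]
    block, p = block * 2, 1.0 - p
drifting = drifting[:n]

print("fixed source:   ", running_frequency(fixed, checkpoints))
print("changing source:", running_frequency(drifting, checkpoints))
```

The second source is contrived so that each new block is as long as everything that came before it, which is enough to keep the running frequency oscillating indefinitely rather than converging.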

How does this response apply to the original question, whether there are alternatives to chance and necessity for thinking about the human mind? One characteristic commonly ascribed to the human mind is free will. The defining quality of free will is that it is not fixed: We humans can always have done otherwise, have chosen differently. Nothing ultimately forces us to choose the way we chose. Insofar as a choice is our own, it is not determined by anything. If this concept of free will is true, then our will does not follow the law of large numbers. Given an arbitrarily long run of choices, the choices will never converge to a fixed assignment of probabilities.

One implication of the non-fixed nature of the human mind is that human activities cannot be modeled as if they had a fixed distribution. Consequently, any model based on the assumption of a fixed distribution is bound to fail, and possibly fail catastrophically.

That is what we see in the stock market. Theoreticians have faithfully applied the fixed state model to the stock market, and come up with sophisticated and deductively sound mathematical models. However, these models have failed spectacularly, as evidenced by the numerous crashes throughout the life of the market. This observation is consistent with the view that the human mind cannot be assigned a fixed probability.

‘Wait! Hold up just a second!’ a skeptic may say. ‘How do you propose to apply this theory scientifically? Haven’t you just substituted meaningless woo about free will for mathematical rigor? Even if our math doesn’t always match reality, isn’t it better to have a model we can at least improve on?’

As it happens, we can bring information theory to bear on the concept of free will. Information theory turns on the concept of entropy, which is essentially a measure of how many choices there are. The important point is that if there is a lot of information, then there is a lot of entropy. However, the reverse is not true. A lot of entropy does not mean that there is a lot of information.

Entropy is defined as the expected surprisal of an event. Surprisal is measured as the negative log of the event’s probability. Rare events, which are more surprising, have a larger amount of surprisal than common events. This matches our intuition that, if an information source has a lot of information, then it provides us with many messages that are unexpected, i.e. the messages tell us something new and are informative.
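In standard notation, the surprisal of an outcome $x$ with probability $p(x)$ is $-\log_2 p(x)$, and entropy is the expected surprisal over all outcomes:

$$H(p) = -\sum_x p(x) \log_2 p(x).$$

For instance, a one-in-a-thousand event carries about 10 bits of surprisal, while a fair coin flip carries exactly 1 bit.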

A related concept is cross-entropy. Cross-entropy is also an expected surprisal, but the surprisal is computed from one probability distribution while the expectation is taken over another. An important relationship between cross-entropy and regular entropy is that cross-entropy is never lower than regular entropy.
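In the same notation, the cross-entropy of a distribution $q$ measured against a distribution $p$ is

$$H(p, q) = -\sum_x p(x) \log_2 q(x),$$

and a standard result (Gibbs’ inequality) guarantees that $H(p, q) \ge H(p)$, with equality exactly when $q$ matches $p$.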

How does the concept of entropy relate to free will? If an entity has a fixed probability assignment, then it will have different entropy characteristics than an entity that does not have a fixed assignment, i.e. an entity with free will. It follows that, by empirically measuring the frequency of events from an entity, we can infer whether the entity has a fixed or unfixed probability assignment.

How do we make these measurements? Imagine that we have a long sequence of events sampled from the entity. We then take two different segments from the sequence and calculate two different empirical probability distributions. We then use the two distributions to calculate cross-entropy and regular entropy. If the entity has a fixed probability assignment, then the cross-entropy will converge to the regular entropy, with a large enough sample size. On the other hand, an entity that changes its probability assignment continually throughout time will never converge.
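Here is a minimal sketch of that test in Python for a binary event stream. The half-and-half split, the small smoothing constant, and the two example sources are my own choices for illustration, not a prescribed procedure.

```python
# Compare cross-entropy against entropy for two halves of an event stream.
# For a source with a fixed distribution, the gap between the two shrinks
# toward zero as the sample grows; for a source whose distribution changes,
# a persistent gap remains.
import math
import random
from collections import Counter

def empirical_dist(samples, alphabet, smoothing=1e-9):
    """Empirical probability of each symbol, lightly smoothed so nothing is exactly zero."""
    counts = Counter(samples)
    total = len(samples) + smoothing * len(alphabet)
    return {a: (counts[a] + smoothing) / total for a in alphabet}

def entropy(p):
    """Expected surprisal under p."""
    return -sum(px * math.log2(px) for px in p.values())

def cross_entropy(p, q):
    """Expected surprisal when the surprisal comes from q but the events follow p."""
    return -sum(p[x] * math.log2(q[x]) for x in p)

def convergence_gap(seq, alphabet):
    """Cross-entropy between the two halves of seq, minus the entropy of the first half."""
    half = len(seq) // 2
    p = empirical_dist(seq[:half], alphabet)
    q = empirical_dist(seq[half:], alphabet)
    return cross_entropy(p, q) - entropy(p)

random.seed(1)
alphabet = (0, 1)
n = 100_000

# A source with one fixed probability for the whole run.
fixed = [1 if random.random() < 0.7 else 0 for _ in range(n)]

# A source that switches to a different probability halfway through.
changing = ([1 if random.random() < 0.9 else 0 for _ in range(n // 2)] +
            [1 if random.random() < 0.2 else 0 for _ in range(n // 2)])

print("gap for the fixed source:   ", convergence_gap(fixed, alphabet))
print("gap for the changing source:", convergence_gap(changing, alphabet))
```

For the fixed source the gap settles toward zero as the sample grows; for the source that switches distributions midway through, the gap stays large no matter how much more data we collect.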

One significant problem with this approach is that convergence is not something we can absolutely disprove. It is always possible we have not observed the sequence long enough to see it converge. However, this is where the connection between entropy and information becomes relevant. A finite, discrete entity with a fixed probability assignment also has a fixed amount of information it can possibly contain. The information capacity is a limit on how long the entity can avoid convergence. Thus, based on the length of the non-converging sequence, we can infer the information needed to prevent convergence for that length of time.

Information capacity is limited by an entity’s physical mass. So, if we know an entity’s mass, we can establish a limit on its information capacity. By comparing the mass to the length of the non-converged sequence, we can determine if the sequence could be explained by a fixed probability assignment with that amount of mass, or if some other explanation is needed.
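For a sense of scale only (the argument here does not depend on any particular figure), one commonly cited physical limit is the Bekenstein bound, which caps the information a system of mass $m$ and radius $R$ can hold at roughly

$$I \;\lesssim\; \frac{2 \pi m c R}{\hbar \ln 2} \;\approx\; 2.6 \times 10^{43} \left(\frac{m}{1\ \text{kg}}\right)\!\left(\frac{R}{1\ \text{m}}\right) \text{bits}.$$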

At this point, we can falsify the notion that the entity’s behavior can be accounted for by its physical matter. However, we cannot identify with certainty the actual source of information. It could be free will, or it could be some source beyond the physical plane of reality. A sequence that stubbornly declines to converge offers three possibilities:

1) the entity has free will and is increasing net information
2) the entity has a very large source of non-physical information
3) the entity has an infinite source of non-physical information

Here empirical data cannot decide the matter for us and we must resort to philosophical conjecture. If we use Occam’s razor, that is, choose the simplest coherent explanation that makes sense of the data, then option #1 is preferred. The concept of free will is not a logical impossibility (a square circle or a married bachelor would be logical impossibilities).

In conclusion, it is possible to empirically distinguish an entity with free will from an entity that runs according to chance and necessity alone, while staying entirely within the methodology of modern science. We’ve identified the key assumption that prevented such an inference, namely that everything has a fixed probability assignment, and we have shown that an entity without that fixed assignment produces a measurably different sequence of behavior. Thus, it is possible to work scientifically with entities that do not operate according to chance and necessity. It is possible to accept the concept of a mind with free will as a scientifically testable hypothesis.

Another outcome is that genuine artificial intelligence is not a foregone conclusion.


Further reading on free will:

Why do atheists still claim that free will can’t exist? Sam Harris reduces everything to physics but then ignores quantum non-determinism (Eric Holloway)

Was famous old evidence against free will just debunked? The pattern that was thought to prove free will an illusion may have been noise

and

Younger thinkers now argue that free will is real. The laws of physics do not rule it out, they say.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
