
The Flawed Logic behind “Thinking” Computers, Part III

No program can discover new mathematical truths outside the limits of its code

I am publishing, in three parts and with his permission, an exchange with Querius, who is looking for answers as to whether computers can someday think like people. In the first part, we discussed why human thinking cannot be indefinitely compressed. In the second part, we looked at how it is possible, in principle, to prove a negative without enumerating all the possibilities. Here is the third and final part:

We know that there are limiting laws of computation. One of the most fundamental limits is the halting problem, discovered by Alan Turing. The halting problem states there is no program that can determine in general whether an arbitrary program halts or not. If the human mind surpasses the limit created by the halting problem, it is a “halting oracle.” Then by definition, the human mind is not computable.
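
To make the contradiction concrete, here is a minimal Python sketch of Turing’s diagonal argument; the `halts` function below is hypothetical, and the point of the argument is that it can never actually be written:

```python
# Sketch of Turing's diagonal argument. Suppose, for contradiction,
# that someone hands us a total function halts(program, argument)
# that returns True exactly when program(argument) would halt.

def halts(program, argument):
    """Hypothetical halting decider -- the argument shows it cannot exist."""
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever halts() predicts the program
    # does when fed its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Does diagonal(diagonal) halt?
#  - If halts(diagonal, diagonal) returns True, diagonal loops forever.
#  - If it returns False, diagonal halts.
# Either answer contradicts the decider, so no such halts() can exist.
```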

Three counter-arguments to this line of thought are the following:

  1. It is impossible for anything to be a halting oracle.
  2. A halting oracle could exist but the human mind cannot be one because a halting oracle can solve many problems that humans cannot solve.
  3. Even if the human mind could be a halting oracle, it is impossible to detect whether it is or not.

Here are some responses:

1. It is impossible for anything to be a halting oracle. On the contrary, a halting oracle is logically possible: an infinite lookup table that lists every finite program and its halting status.

I agree with this.

Querius

2. A halting oracle could exist, but the human mind cannot be one, because a halting oracle can solve many problems that humans cannot solve. However, we can subtract a finite, and even an infinite, number of entries from the lookup table and it remains uncomputable.

Can you explain this further? Also, human minds can be halting oracles, because we can validate halting in our own creations. I differed with Wikipedia on this one:

“A machine with an oracle for the halting problem can determine whether particular Turing machines will halt on particular inputs, but they cannot determine, in general, whether machines equivalent to themselves will halt. This creates a hierarchy of machines, each with a more powerful halting oracle and an even harder halting problem. This hierarchy of machines can be used to define the arithmetical hierarchy” (Börger 1989).

I extend that to ask you, what makes us so different?

Querius

I’m not saying we are different with respect to what the Wikipedia article says. I agree there is a hierarchy, and I am saying that the human mind is higher in the hierarchy than algorithms.

It is consistent with the article to say the human mind is a halting oracle with regard to algorithms, while at the same time saying that there are halting problems beyond the ability of the human mind. And the fact that there are halting problems we cannot solve does not stop the human mind from being a halting oracle for the algorithmic halting problem defined on a finite Turing machine.

3. Even if the human mind could be a halting oracle, it is impossible to detect whether it is or not. The human mind is much more likely to be a halting oracle than an algorithmic intelligence is, and with enough tests we can reduce the probability of the alternative to an arbitrarily small number. Likewise, because every program requires a certain amount of storage space, if we could demonstrate that the potential range of actions of a single mind requires a program that exceeds the storage capacity of the universe, then we would know that the human mind cannot be a physical program.

But we don’t know for sure, do we? This does not prove that strong AI is impossible. Nor does it rule out an algorithmic intelligence, which, trying to deduce your definition, could be many narrow AIs each doing a human task. But I’d love for you to clarify your points if I’m wrong.

Querius

Well, nothing is 100% in this world. According to quantum physics, there is a slight probability that I could turn into a fire truck. But we can experimentally reduce the probability until the possibility that strong AI is true is as small as the probability that I’ll turn into a fire truck in the next second.

Is there evidence from experience that humans are halting oracles? One piece of evidence accessible to software developers is the act of programming. In order to create high-quality software, programmers must, with great reliability, select programs whose halting status they know. Additionally, no one has yet figured out how to completely automate the programming task, although there have been numerous attempts. These observations are easy to make sense of if the human mind is a halting oracle, but much more difficult to explain if the mind is computational.

Perhaps, but it still does not mean we can’t put higher-level human functions in a machine. It sounds as if your position is that because humans are halting oracles and machines are not, AI can’t exist. How do you know that? Is this a theory as to why AI may not be possible? Or is this an argument that AI is not possible under these circumstances?

Querius

Halting oracles cannot be reduced to finite Turing machines, because a halting oracle requires an infinite lookup table. However, any form of AI is at best a finite Turing machine; finite Turing machines are the upper limit on the capabilities of AI. It follows that AI cannot be a halting oracle. This is just a logical necessity of computer science.

That in turn means that if the human mind is a halting oracle, it can never be copied by an algorithm. So AI cannot exist.

On the theoretical side, another piece of evidence is Leonid Levin’s Law of Independence Conservation. His law states that no combination of random and computational processing is expected to increase the mutual algorithmic information between two different objects. In less technical terms, Levin’s law implies that no program should be able to generate information that was not already written into the program. For example, no program can discover new mathematical truths outside the limits of its code.
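
For readers who want the formal shape of this claim, here is a rough statement in standard notation (a sketch only; Levin’s published result carries precise logarithmic error terms). With the mutual algorithmic information between two bitstrings defined from Kolmogorov complexity K as

    I(x : y) = K(x) + K(y) − K(x, y),

independence conservation says that for any computable, possibly randomized, transformation A,

    I(A(x) : y) ≤ I(x : y) + O(log),

so no mixture of computation and chance is expected to increase the information one object carries about another.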

Computers accidentally discover things all the time. They discover new, more effective ways to play a game like Go. Are they not “generating” a new way to play a game that a human never thought of? How do you define “generate”? Computers seem to generate new things all the time.

Granted, it is by accident and brute-force computation. Even so, one of the things we want to imbue artificial intelligence with is human-level understanding.

Querius

Yes, the only ability that AI has is to do things through randomness and brute force. And even with an unlimited number of attempts, there is a limit to accidental discovery. That is what Levin’s law states, and it is again due to the halting problem. In this case, it shows up in a concept known as Kolmogorov complexity. The Kolmogorov complexity of a bitstring is the length of the shortest program that generates the bitstring. A basic theorem states that no program can generate a bitstring with Kolmogorov complexity greater than the program’s own size; otherwise we arrive at a contradiction.
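
Here is a minimal Python sketch of that contradiction, in the style of the Berry paradox; the function `K` below is hypothetical, and the sketch shows why no such computable function can exist:

```python
from itertools import count, product

def K(s):
    """Hypothetical computable Kolmogorov complexity -- cannot exist."""
    raise NotImplementedError

def first_complex_string(n):
    # Enumerate bitstrings in order and return the first one whose
    # complexity exceeds n.
    for length in count(1):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if K(s) > n:
                return s

# first_complex_string(1_000_000) is itself a short program, far under
# a million bits, yet it would output a string s with K(s) > 1,000,000.
# That string would then be generated by a program much shorter than its
# own complexity -- a contradiction. So K is not computable, and no
# program can output bitstrings with complexity far beyond its own size.
```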

If AI were able to generate genuinely new discoveries, that would be equivalent to saying that AI can generate bitstrings with Kolmogorov complexity greater than the AI’s own size. And since AI is an algorithm, this is logically impossible.

More practically, no program can apply mathematics to invent new mechanical devices that are not implicit in its code. On the other hand, the history of human progress shows many mathematical, scientific, and mechanical innovations that are hard to explain as implicit in the human brain or the environment. This creativity indicates that the human mind can access a capability that surpasses randomness and computation.

“More practically, no program can apply mathematics to invent new mechanical devices that are not implicit in its code.” What does this mean? Technically, there is no implicit code in the human brain to invent math either and yet we do.

Also, when a machine learns a new task, it is a tabula rasa (blank slate). Okay, not completely: you feed it some “rules,” inputs, and outputs, and let the data do the rest. However, as I said, when an AI does something surprising and unexpected, it’s because we never explicitly programmed that behavior into it. You could say it’s an emergent behavior not defined by its code. If that’s not what you meant, you might need to explain this better.

Querius

Regarding humans, that is my point. There is no implicit code in the human brain, yet we invent many things. However, as with Kolmogorov complexity, all algorithms are limited by their implicit code. So, if an algorithm does not have the implicit code, it cannot invent.

One simple example of this is the question “What is the largest number?” Without an implicit answer in the code, an algorithm will never be able to answer this question.
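
As a toy sketch of the point, a brute-force search for the largest number can never halt with an answer, because every candidate is beaten by its successor:

```python
def find_largest_number():
    # Without an answer implicit in its code, this search runs forever:
    # every candidate n is immediately beaten by n + 1.
    n = 0
    while True:
        n += 1   # no exit condition can ever be reached
```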

In the case of machine learning, there is no general-purpose learning algorithm. This is another result, known as the No Free Lunch Theorem. The algorithm selection, dataset grooming, and feature selection all have a big impact on what exactly the machine is able to “learn.” All this setup cannot be automated away. Ironically, machine learning requires a whole lot of human learning in order to work.
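
The flavor of the No Free Lunch result can be checked directly on a toy problem. The following sketch (an illustration, not a proof of the full theorem) averages over every possible target function on a tiny three-bit domain and finds that two different learners score identically on the inputs they were not trained on:

```python
from itertools import product

domain = list(product([0, 1], repeat=3))   # all 8 three-bit inputs
train = domain[:4]                         # inputs seen during "learning"
test = domain[4:]                          # off-training-set inputs

def guess_zero(labels, x):
    # Memorizes the training labels; predicts 0 on anything unseen.
    return labels.get(x, 0)

def guess_one(labels, x):
    # Same memory, but predicts 1 on anything unseen.
    return labels.get(x, 1)

def average_test_accuracy(learner):
    # Average accuracy on the test inputs over ALL 2^8 possible targets.
    total = 0.0
    targets = list(product([0, 1], repeat=len(domain)))
    for bits in targets:
        target = dict(zip(domain, bits))
        labels = {x: target[x] for x in train}
        hits = sum(learner(labels, x) == target[x] for x in test)
        total += hits / len(test)
    return total / len(targets)

print(average_test_accuracy(guess_zero))   # 0.5
print(average_test_accuracy(guess_one))    # 0.5
```

Both learners average exactly 50% on unseen inputs once every possible target is counted, which is the theorem’s point: superiority on some targets is paid for by inferiority on others.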

So, circling back to the original question of whether strong AI is possible, what have we learned?

If we define artificial intelligence as a very trivial form of algorithmic intelligence, which we have called regurgence, then it necessarily exists as a theoretical construct, although it may be practically impossible.

To reiterate, I’d like to see this example of a hardcore regurgence, because codes are merely representations of the world. Simulations, imperfect models. Take your pick.

Querius

On the other hand, if we rely on a compression interpretation of intelligence, then it is no longer necessarily true. It may still not be practically possible, although it may seem the best hypothesis.

I would definitely say we lean more on the compression interpretation, as far as teaching an AI any narrow task goes. Maybe, maybe not. We don’t know.

Querius

Yes, the compression is what we are seeking, but the creative aspect of the mind is incompressible.

Then we examined whether the idea is falsifiable, and it turns out that algorithmic intelligence can be falsified via the limitations of algorithms, such as the halting problem.

“Can be falsified”? Or “has been falsified”? It seems as if you’re not sure. You’re at least open to it.

Querius

It has been falsified by the history of human innovation, computer programmers, mathematicians, and the like. On the other hand, there haven’t been a lot of people putting numbers to this falsification because the field is under-researched.

In conclusion, if the human mind surpasses the limitations of algorithms, then the mind cannot be an algorithm, and artificial intelligence is impossible. A couple of pieces of evidence offered in this regard are the issues in software development and the history of human innovation. Not only is it valid to ask whether artificial intelligence is impossible, but the argument can be pursued on a scientific basis with quantifiable, empirical evidence.

“In conclusion, if the human mind surpasses the limitations of algorithms, then the mind cannot be an algorithm, and artificial intelligence is impossible.” This is the question of our time. It seems as if it’s still a big if.

However, if you can slam my arguments down, then so help me God, I hope you do. You will be either ridiculed, exalted, or, like Einstein, both.

Querius

I would disagree with its being a big if. The conclusion appears pretty obvious. It is just the media hype machine that makes AI seem plausible, not any actual data-driven plausibility.

“Not only is it valid to ask whether artificial intelligence is impossible, but the argument can be pursued on a scientific basis with quantifiable, empirical evidence.”

I would definitely try to double down on:

  1. Whether we are in fact Turing Oracles.
  2. If we are Turing Oracles, can we still imbue AI to function at a human level?
  3. Clear examples of what is non-computable.
  4. Whether we have a secret sauce that is non-computable.

Querius
  1. #1 is obviously true, but I am still working to put numbers to it.
  2. Logical necessity makes #2 false.
  3. Regarding #3, fundamental mathematical axioms are non-computable. For example, the concept of infinity cannot be derived from anything finite.
  4. Regarding #4, the secret sauce contains at least these elements: rationality, free will, intentionality. None of these is computable.

Also, final thoughts: I feel like the crux of AI could be consciousness. Science tries to handwave consciousness away as a “byproduct” of unconscious algorithms and claims that it uses up only perhaps 5% of our brain. But I’m not completely convinced. If consciousness holds the missing link to true reasoning, reflection, and deliberate decision-making based on the former two, then that could be something that gives weight to the Turing Oracle: the fact that humans are Turing complete, but not Turing machines. One could say much more about this.

Querius

One more parting thought along those lines. Even if AI were possible, it could never be more powerful than a human mind. This is because all algorithms can be reduced to a Turing machine, and a Turing machine consists of five simple actions that just about any human can do. The only difference is that a human will take much longer to execute the code than an AI.
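
As a sketch of how little machinery is involved, the toy interpreter below runs a complete Turing machine using only read, write, move, and change-of-state steps, each simple enough to carry out by hand:

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    # rules maps (state, symbol) -> (symbol_to_write, move, next_state).
    # "_" stands for a blank cell.
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")            # 1. read the cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write                      # 2. write a symbol
        head += 1 if move == "R" else -1         # 3. move the head
                                                 # 4. change state (above)
    return "".join(cells[i] for i in sorted(cells))

# Example: a machine that flips every bit, then halts at the first blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_rules, "10110"))   # prints 01001_
```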

Once again, I respect the amount of time you’ve put into this exchange and your intent to find out the truth.

I hope that you can be a force that saves human exceptionalism and proves me wrong. I’ve never wanted to be so wrong in my life. Nonetheless, I’m not one to engage in confirmation bias and “wish” the issue away. I look forward to your further responses and the further refinement of your arguments.

Querius

Yes, we cannot wish for our ideas to be true. Facts and truth first. As things stand, my opinion is that AI is a lot of wishful thinking with no facts or truth behind it. And this is coming from someone who used to believe that all of reality could be a giant algorithm.

See also:
Part I: A program that is intelligent must do more than reproduce human behavior

and

Part II: There is another way to prove a negative besides exhaustively enumerating the possibilities

Note: “Querius” is a pseudonym

Also by Eric Holloway: Will artificial intelligence design artificial superintelligence?

Artificial intelligence is impossible

and

Human intelligence as a Halting Oracle


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
