Mind Matters Natural and Artificial Intelligence News and Analysis

Is True Artificial Intelligence Possible? A Science Perspective

Many will conclude that true AI — AI that thinks like humans — must be possible because we hear so much about it from so many sources

One important criterion for a scientific theory is falsifiability. In this case, could we show that true AI is not possible?

At first, falsifiability might seem like a surprising criterion for science. But consider this: Most theories can be supported by at least some evidence. Some people believe in witchcraft, astrology, or invisible aliens, for example. They can all offer some evidence. But if they say that what they believe cannot be disproven by scientists (not falsifiable), the topic isn’t science. To be science, the claim must be disprovable in principle.

At first glance, the theory of true AI is not falsifiable

It is commonly said that science can’t prove a negative, so disproving true AI is impossible. That is often phrased as, “Just because we do not have an algorithm today that gives human intelligence to AI does not mean we won’t discover one tomorrow. There is no way to know for sure.”

However, the common saying is wrong. Science can prove a negative. For example, the conservation of energy rules out perpetual motion machines.

Similarly, the physicist John Bell (1928–1990) proved that classical physics is not the whole story when it comes to explaining the behavior of the elementary particles of our universe. Bell’s inequality shows mathematically how entangled particles must behave according to classical physics — and also shows an alternative behavior better described by quantum physics.

The key distinction is whether the particles can affect each other without being close to each other. Classical physics says no, there is no action at a distance, while quantum physics says yes, there is action at a distance. The classical position is known as local realism.

Experiments demonstrated that entangled particles do indeed show action at a distance, rather than only the local interactions that classical theory allows. Consequently, local realism has been scientifically proven false. In other words, Bell’s inequality does prove a negative. You can test Bell’s inequality yourself with source code that runs on IBM’s quantum computers.
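To see the violation numerically without any quantum hardware, here is a minimal sketch of the CHSH form of Bell’s inequality. It assumes only the standard quantum prediction for entangled spin pairs, E(a, b) = −cos(a − b); local realism bounds |S| at 2, while the quantum prediction reaches 2√2 at the angle settings below.

```python
import math

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Any local hidden-variable (classical) model must satisfy |S| <= 2.
def E(a, b):
    # Quantum-mechanical correlation for an entangled singlet pair.
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2           # Alice's two measurement angles
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement angles

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
# |S| = 2*sqrt(2) ~ 2.828, exceeding the classical bound of 2.
```

Running a real Bell test on IBM’s machines requires a quantum-circuit framework such as Qiskit; the sketch above only computes the theoretical quantum prediction that those experiments confirm.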

How does this apply to the algorithms needed for artificial intelligence?

The same approach applies to the world of algorithms. We know of many limits on algorithms. A few such limits are:

  1. the no free lunch theorem: no computation method offers a shortcut to solutions
  2. conservation of information: “Computers, despite their speed in performing queries, are completely inadequate for resolving even moderately sized search problems without accurate information to guide them.”
  3. the halting problem: no general algorithm can determine, for every deterministic computer program, whether that program will halt
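The first of these limits can be demonstrated exactly on a toy search space. The sketch below (my own illustrative example, not from the no free lunch literature) enumerates every possible binary objective function on a four-point domain and shows that two different fixed query orders have identical total search cost when averaged over all functions — no strategy gets a shortcut without information about the function.

```python
from itertools import product

# All 2**4 = 16 binary functions on a domain of size 4.
ALL_FUNCTIONS = list(product([0, 1], repeat=4))

def queries_to_find_one(f, order):
    """Queries a searcher using `order` needs before seeing a 1
    (len(order) if f is all zeros)."""
    for k, x in enumerate(order, start=1):
        if f[x] == 1:
            return k
    return len(order)

def total_cost(order):
    # Sum the search cost over every possible objective function.
    return sum(queries_to_find_one(f, order) for f in ALL_FUNCTIONS)

# Two different deterministic search strategies (query orders):
cost_a = total_cost([0, 1, 2, 3])
cost_b = total_cost([3, 1, 0, 2])
# Averaged over all functions, neither order outperforms the other.
```

Because the set of all functions is symmetric under permutations of the domain, any visiting order yields the same total, which is the core of the no free lunch result.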

It follows that anything that surpasses even one of these limits must be something other than an algorithm.

Just as Bell’s inequality draws a line limiting classical physics, we can draw many lines that limit algorithms. If we discover that the human mind can cross even a single one of these lines, then we have proven the theory of true artificial intelligence false.

What the ARC–AGI test shows

Recently, François Chollet introduced a problem set known as ARC-AGI. It consists of many puzzles that humans find easy to solve, but to date no algorithmic approach has achieved great success, despite a $1 million prize incentive.
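To give a flavor of the format, here is a toy ARC-style task — a simplified sketch of my own, not the actual ARC-AGI harness. Training pairs demonstrate a hidden rule (here, mirroring the grid left-to-right), and a solver must derive that rule from the examples alone, with no special domain knowledge.

```python
def identity(g):
    return [row[:] for row in g]

def flip_h(g):  # mirror each row left-to-right
    return [row[::-1] for row in g]

def flip_v(g):  # mirror the rows top-to-bottom
    return g[::-1]

# A small, hand-picked hypothesis space of candidate transformations.
CANDIDATE_RULES = [identity, flip_h, flip_v]

# Training pairs: (input grid, output grid).
train = [
    ([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
    ([[2, 3], [0, 1]], [[3, 2], [1, 0]]),
]

def derive_rule(pairs):
    """Return the first candidate consistent with every training pair."""
    for rule in CANDIDATE_RULES:
        if all(rule(inp) == out for inp, out in pairs):
            return rule
    return None

rule = derive_rule(train)           # discovers flip_h
result = rule([[5, 6], [7, 8]])     # apply the derived rule to a test grid
```

Real ARC-AGI puzzles are much harder precisely because the space of possible rules is open-ended, so no fixed hypothesis list like `CANDIDATE_RULES` suffices.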

You can try the ARC-AGI tests yourself (but humans, who pass easily, are not eligible for the prize).

A recent study showed that humans can solve about 85% of the puzzles, while the best algorithmic approach achieves only about 30%.

What distinguishes these puzzles from most other AI tests is twofold:

  1. the puzzles require no special knowledge to solve
  2. solving the puzzles requires deriving new principles

This means humans and AI solve the puzzles on a level playing field. As such, it is very surprising that humans consistently perform so much better than AI.

Are you a halting oracle?

We can also connect the puzzles back to the limits of algorithms, particularly the halting problem. A halting oracle is a non-mechanical entity that can tell whether any given program will halt. Only a halting oracle can surpass the limits of the halting problem.
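The classic diagonalization argument shows why no algorithm can be a halting oracle. The sketch below assumes a hypothetical decider `halts(f)` and constructs a program that does the opposite of whatever the decider predicts, so any actual decider must be wrong on it. (The names here are illustrative; no real `halts` function can exist.)

```python
def make_contrarian(halts):
    """Given any claimed halting decider, build a program it misjudges."""
    def contrarian():
        if halts(contrarian):
            while True:        # decider said "halts" -> loop forever
                pass
        return "halted"        # decider said "loops" -> halt immediately
    return contrarian

# A decider that always answers "does not halt" is immediately refuted:
claims_no_halt = lambda f: False
c = make_contrarian(claims_no_halt)
outcome = c()  # the program halts, contradicting the decider's answer
```

The symmetric case — a decider that answers “halts” — cannot be run to completion here, since the constructed program then loops forever, but by inspection that decider is wrong too. Either way, no algorithm decides halting for all programs.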

Because solving an ARC-AGI puzzle requires deriving a new principle, we can model the puzzle solver as a halting oracle. Halting oracles cannot be computer algorithms, by definition. Since humans consistently solve the ARC-AGI problems, and consistently doing so indicates access to a halting oracle, this result strongly suggests that the human mind is non-algorithmic.

It remains to be seen how well these results hold up. But what we can say definitively is that experiments like ARC-AGI have finally made true computer intelligence (artificial general intelligence, or AGI) a scientifically respectable theory, because true AI can now be falsified by showing that there are things it cannot do in principle.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
