
We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment

Once upon a time there lived a tribe on the plains. They were an adventurous tribe, always wanting to explore. At night they would watch the moon drift lazily overhead, and they grew curious.

How could they reach the moon? The moon was obviously higher than their huts. Standing on even the highest hut, no one could reach it.

At the same time, standing on the hut got them closer to the moon.

So, they decided to amass all their resources and build a gigantic hut.

Their reasoning was that if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them closer still.

Eventually the tribe ran out of mud and rocks, and though the gigantic hut did get them even closer, the moon still drifted tantalizingly out of reach. The moral of the story: just because climbing on a hut gets us closer to the moon doesn’t mean we can reach the moon by building ever bigger huts.

The Hype and Disappointment Cycle

This is a story about AI, and about why the field undergoes a constant hype/disappointment cycle. There are many problems related to human intelligence that computers can solve on a small scale. In the 70s and 80s, AI expert systems were all the rage. These programs could apply logic rules to a database of facts and answer questions, as well as a human could and sometimes better. But expert systems never ended up replacing human experts. Their databases were too labor intensive to fill with facts, and the algorithms turned out not to be scalable.

Then, in the 90s, computational intelligence became prominent, using more statistical and approximate approaches to work around the perceived problems of expert systems. Neural networks, the current AI darling, are a progeny of the computational intelligence revolution. Yet they, too, have failed to replace humans.

Many more AI revolutions and fizzles have come and gone, always with the same pattern: initial excitement over an algorithm that exceeds human performance on small problems, followed by major disappointment when the algorithm turns out not to scale. This phenomenon has been dubbed the AI “winter.” But why does it occur?

AI and Real-World Problems

In 1973, the UK Parliament commissioned Sir James Lighthill to evaluate AI. His takeaway was that AI is stymied by a fundamental problem of combinatorial explosion: algorithms work well on small problems, but quickly grind to a halt as the problem grows to real-world size.

The combinatorial explosion can be illustrated by counting how many numbers we can write as we add significant digits. With one digit we have 10 numbers. With two digits we have 100 numbers. With three digits we have 1,000 numbers. Each added digit multiplies the number of possibilities tenfold, an exponential increase.

“Combinatorial explosion” is just another way of saying that as the size of a problem increases, the number of possibilities increases exponentially. The explosion cannot be escaped, because everything a computer deals with is a string of symbols, such as 0s and 1s, and, just as with numerical digits, the number of possible strings of a given length grows exponentially as the string gets longer.
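A few lines of Python make the growth concrete. This is only an illustrative sketch; the alphabet sizes and string lengths are arbitrary choices, and the counts are reported as powers of ten to keep them readable:

```python
from math import log10

# For an alphabet of k symbols, there are k**n distinct strings of
# length n: every added symbol multiplies the possibilities by k.
for k, name in [(2, "bits"), (10, "digits"), (26, "letters")]:
    for n in (1, 2, 3, 10, 50):
        print(f"{name:>7}, length {n:>2}: ~10^{n * log10(k):.0f} possible strings")
```

Doubling the string length squares the count; no clever encoding avoids this, because the count depends only on the alphabet size and the length.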

It’s easy to see that this problem applies to ChatGPT. With every additional character ChatGPT generates, the space of possible texts it is implicitly choosing among grows by at least a factor of 26. So ChatGPT does phenomenally well writing short essays and holding short discussions, but its coherence falls off as the generated text grows longer and the space it must navigate expands exponentially.
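To put rough numbers on it, here is a back-of-the-envelope sketch, assuming a bare 26-letter alphabet with no spaces or punctuation:

```python
from math import log10

# Number of possible texts of a given length over a 26-letter alphabet,
# reported as a power of ten. For comparison, the observable universe
# holds roughly 10^80 atoms.
for length in (10, 280, 1000):
    exponent = length * log10(26)  # 26**length == 10**exponent
    print(f"length {length:>4}: about 10^{exponent:.0f} possible texts")
```

Even a tweet-length text of 280 characters already lives in a space of about 10^396 possibilities.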

However, this is not just a problem for ChatGPT. Everything in a computer is ultimately a string of 0s and 1s, even pictures and sounds, and so all AI systems, regardless of medium, face the combinatorial explosion.

In the end, the problem with AI is fundamental mathematics. The constant hype/disappointment cycle sits at the intersection of the combinatorial explosion with human excitement and the misconception that we can build a hut to the moon.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
