Mind Matters Natural and Artificial Intelligence News and Analysis
Large cog wheels in the motor.

Can Computers — and People — Learn To Think From the Bottom Up?

That’s the big promise made in a recent article at Aeon

Tufts University biologist Michael Levin and Columbia University neuroscientist Rafael Yuste have an ambitious project in hand: To explain how evolution “‘hacked’ its way to intelligence from the bottom up,” that is, from nothing. They base their thesis on computer science:

This is intelligence in action: the ability to reach a particular goal or solve a problem by undertaking new steps in the face of changing circumstances. It’s evident not just in intelligent people and mammals and birds and cephalopods, but also cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Across all these scales, living things solve problems and achieve goals by flexibly navigating different spaces – metabolic, physiological, genetic, cognitive, behavioural.

But how did intelligence emerge in biology? The question has preoccupied scientists since Charles Darwin, but it remains unanswered. The processes of intelligence are so intricate, so multilayered and baroque, no wonder some people might be tempted by stories about a top-down Creator. But we know evolution must have been able to come up with intelligence on its own, from the bottom up.

Michael Levin and Rafael Yuste, “Modular Cognition” at Aeon (March 8, 2022)

Can that really work? The big problem for evolution is assembling a lot of components in a particular pattern. The probability of correct assembly drops exponentially as the pattern grows. Richard Dawkins proposed in his book The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe Without Design (1986) that maybe evolution can produce things that look intelligently designed if, instead of assembling all components in one fell swoop, they can be assembled piecemeal.
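The exponential drop can be shown with a toy calculation. Suppose each of n components independently lands in its correct position with probability p; then the whole assembly is correct with probability p^n. The numbers below are illustrative only, not drawn from the article:

```python
# Toy illustration: if each of n components is placed correctly with
# independent probability p, the chance that all n are correct is p**n,
# which shrinks exponentially as n grows.
def assembly_probability(p: float, n: int) -> float:
    return p ** n

for n in (1, 10, 100):
    print(n, assembly_probability(0.9, n))
```

Even with a generous 90% chance per component, 100 components assemble correctly less than once in 37,000 tries.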


Piecemeal assembly reduces the problem to a series of linear choices. Ever since the early pre-Socratic philosophers proposed materialism and the evolution of modern organisms through variation and selection, thinkers have puzzled over how variation and selection alone could produce highly complex and specified organisms that include independent problem-solving intelligence.

The only realm in which we see complex specified artifacts such as the James Webb Space Telescope created on a regular basis is intelligent design by humans. That has led many thinkers throughout the millennia to conclude that organisms are also the product of intelligent design.

But, in “Modular cognition,” Levin and Yuste disagree. They take the same approach as Dawkins. They claim that, if evolution can proceed by piecemeal variation on individual modules, the plasticity observed in stem cells, tadpoles, and mental cognition will arise.

They go one step further and propose that the higher order modules can emerge from the lower level variations of modules. The process envisioned is similar to the way in which words can vary and form sentences that can in turn vary to form paragraphs, and so on. They call this process “modular cognition.”

In doing so, they make a very important implicit assumption. They assume that, as we move to higher and higher levels of modularity, intermediary steps do not become dramatically harder, or even impossible, to find. That is a key assumption, and worth keeping in mind as we proceed.

Let’s try out the authors’ idea with word ladder puzzles. In word ladder puzzles, one word is transformed into another word by varying one letter at a time. The catch is that each intermediary step must also be a valid word. This rule is analogous to the common-sense assumption in evolutionary biology that, if one type of creature is to evolve into another type of creature, each intermediary type must survive and reproduce.

So let’s try to turn a “cat” into a “dog” with modular variation.

  1. CAT
  2. COT
  3. COG
  4. DOG
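The word-ladder rule can be sketched as a breadth-first search over a dictionary. The four-word dictionary below is only illustrative; a real ladder solver would load a full word list:

```python
from collections import deque

# Breadth-first search for a word ladder: each step changes exactly one
# letter and must land on a word in the dictionary. This tiny word list
# is illustrative only.
WORDS = {"cat", "cot", "cog", "dog"}

def word_ladder(start, goal, words):
    """Return the shortest ladder from start to goal, or None if none exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        for i in range(len(word)):
            for c in "abcdefghijklmnopqrstuvwxyz":
                nxt = word[:i] + c + word[i + 1:]
                if nxt in words and nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
    return None  # no ladder exists under the one-letter rule

print(word_ladder("cat", "dog", WORDS))  # ['cat', 'cot', 'cog', 'dog']
```

Run against a full English dictionary, the same search simply returns None for pairs such as “electrical” and “transcends”: no chain of valid one-letter steps connects them.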

Pretty easy, huh? On this showing, the idea that modular cognition can explain the origin of creative intelligence, from monad to man, looks plausible.

But things get wobbly when we must tackle longer words — equivalent, perhaps, to more complex organisms. For example, there is no word ladder from “electrical” to “transcends.”

We run into the same problem with sentences. Turning “The cat chases the dog.” into “The dog chases the cat.” by modifying one letter at a time, while keeping each intermediate step a meaningful sentence (the equivalent, in biology, of keeping the organism alive), becomes much more difficult. What does “the cot chases the cog” mean? Beats me.

How about if we can swap out whole words in a sentence? This does result in a direct path:

  1. the cat chases the dog
  2. the dog chases the dog
  3. the dog chases the cat

Unfortunately, the solution of swapping words creates new problems. One problem is that, to swap out words, we now need a variation mechanism that uses a dictionary to store and look up words that actually mean something in the context, as opposed to mere strings of letters.

Another problem is that the number of options for swapping each word now grows exponentially. Thus the probability of hitting coherent sentences likewise drops exponentially. So, we’ve solved one problem only at the expense of introducing two new and much more difficult problems.
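The combinatorics behind that claim can be made concrete. With a vocabulary of V words and a sentence of n word slots, the space of possible word sequences is V^n, while a single swap can only reach n × (V − 1) neighbors. The vocabulary size and sentence length below are illustrative:

```python
# Toy combinatorics for word swapping (illustrative numbers).
def sentence_space(vocab_size: int, length: int) -> int:
    """Number of possible word sequences of a given length."""
    return vocab_size ** length

def swap_neighbors(vocab_size: int, length: int) -> int:
    """Sentences reachable from one sentence by swapping exactly one word."""
    return length * (vocab_size - 1)

V, n = 10_000, 5  # a modest vocabulary, a short sentence
print(sentence_space(V, n))   # 10**20 possible five-word sequences
print(swap_neighbors(V, n))   # only 49,995 one-swap neighbors
```

The sequence space grows exponentially with sentence length, so unless coherent sentences keep pace (they do not), the chance that a random swap lands on one collapses.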

What we’ve just seen — that new problems are introduced by trying to solve the original problem at a higher level — is known as the vertical no free lunch theorem (VNFLT). The VNFLT was developed by Dr. William Dembski and Dr. Robert J. Marks.

They prove in “The Search for a Search” that, as we try to solve a problem at higher and higher levels — as the “Modular cognition” authors propose — the difficulty increases exponentially instead of decreasing. So we see that the key assumption made by the authors is false.

Modular cognition will not work as a bottom-up substitute for intelligent design. It is yet one more proposal that is unable to climb the steep ladder of the VNFLT.


You may also wish to read:

To what extent does life simply invent itself as it goes along? The evidence may surprise us. It does not seem that all life arose simply by common descent. But maybe it can’t just invent itself without an inventor either. Human inventions illustrate the point. (Eric Holloway)

and

Can AI really evolve into superintelligence all by itself? We can’t just turn a big computer over to evolution and go away and hope for great things. Perpetual Innovation Machines tend to wind down because there is no universally good search. Computers are powerful because they have limitations.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
