
Can computers simply evolve greater intelligence?

Maybe it sounds attractive, but nature doesn’t seem to work quite that way

A recent paper on the evolution of learning “explores how computers could begin to evolve learning in the same way as natural organisms did.” The authors use Avida, a software program for simulating evolution, to support their claim.

Avida was originally intended to demonstrate how Darwinian evolution, which could occur without design in nature, is supposed to work. However, as many have shown, the program actually ended up demonstrating quite conclusively the need for design. This latest paper on using Avida to simulate the evolution of learning has shown the same thing.

First, some history. Avida first came to public notice in a May 2003 paper, “The Evolutionary Origin of Complex Features” (paywall), in the journal Nature. The paper, by Richard E. Lenski, Charles Ofria, Robert T. Pennock and Christoph Adami, attempted to show that Darwinian evolution (natural selection acting on random mutations) could evolve what they considered “complex” features.

Avida’s operations were largely determined by a “genetic code” (a software instruction set that somewhat mimics the form of the natural genetic code). The researchers then created an “environment” that consisted of “feeding” their digital organisms digital logic problems, and giving them “energy” if they were successful at solving them. The more successful organisms could breed more; thus the fittest digital organisms were selected. Or so it seemed.
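
To make the setup concrete, here is a minimal sketch in Python, not Avida’s actual code: the function names, single-bit inputs, and reward values are my own illustrative assumptions. The idea is the one described above: organisms earn “energy” for each rewarded logic function their outputs match, and higher-energy organisms are proportionally more likely to leave offspring.

```python
import random

# Hypothetical reward table: harder logic functions earn more "energy".
# (Real Avida feeds organisms 32-bit inputs; single bits keep the sketch short.)
REWARDS = {
    "NOT": 2,
    "NAND": 2,
    "AND": 4,
    "XOR": 16,
    "EQU": 32,   # the "complex" feature studied in the 2003 paper
}

def energy_earned(organism_outputs, a, b):
    """Grant energy for every rewarded logic function the organism's outputs match."""
    targets = {
        "NOT": ~a & 1,
        "NAND": ~(a & b) & 1,
        "AND": a & b,
        "XOR": a ^ b,
        "EQU": 1 - (a ^ b),
    }
    return sum(bonus for name, bonus in REWARDS.items()
               if targets[name] in organism_outputs)

def select_parents(population, energies, n):
    """Higher-energy organisms are proportionally more likely to leave offspring."""
    weights = [e + 1 for e in energies]   # +1 so zero-energy organisms can still breed
    return random.choices(population, weights=weights, k=n)
```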

Even in that first paper, the researchers admitted,

Some readers might suggest that we “stacked the deck” by studying the evolution of a complex feature that could be built on simpler functions that were also useful. However, that is precisely what evolutionary theory requires, and indeed, our experiments showed that the complex feature never evolved when simpler functions were not rewarded.

In other words, if they didn’t design exactly the right environment, nothing complex evolved. Now, the interesting part is that saying the environment must be of a specific type in order for the organism to evolve is another way of saying that the environment must be designed. Far from proving that design is unnecessary, then, they merely shift the design to another part of the system.

This is consistent with Bill Dembski’s notion of displacement. Put simply, to develop complex functional systems, you can shift design around but you can’t actually get rid of it. The original Avida program simply shifted the design of the organism to that of the environment. As the paper itself said, if the environment was not properly designed, the organism didn’t evolve the traits needed for survival. (More detailed information here.)

Another interesting aspect of Avida is that, despite the researchers’ claims that design has been avoided, every Avida organism contains designed code. Additionally, that designed code is detectable using standard notions of irreducible complexity adapted to computational systems. It turns out that, to get started, the Avida organism requires a replication loop, which must be pre-programmed (i.e., designed) into the organism itself. Additionally, the nature of the loop allows it to be detected as a designed artifact.
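
A toy illustration of that last point, using my own made-up instruction names rather than Avida’s real instruction set: the pre-programmed copy loop behaves like an irreducibly complex core, in that knocking out any one of its steps stops replication entirely, and that all-or-nothing signature is what a computational design-detection test keys on.

```python
# Illustrative only: these instruction names are assumptions, not Avida's.
CORE_COPY_LOOP = ["allocate", "copy-instruction", "loop-until-done", "divide"]

def replicates(program):
    """Replication succeeds only if every step of the copy loop is present, in order."""
    start = 0
    for step in CORE_COPY_LOOP:
        try:
            start = program.index(step, start) + 1
        except ValueError:
            return False
    return True

ancestor = ["sense", "allocate", "copy-instruction", "loop-until-done", "divide", "move"]
print(replicates(ancestor))   # True: the hand-written ancestor replicates

# Remove any single core instruction and replication fails outright,
# the all-or-nothing pattern that flags the loop as a designed artifact.
for i, step in enumerate(ancestor):
    knockout = ancestor[:i] + ancestor[i + 1:]
    if step in CORE_COPY_LOOP:
        assert not replicates(knockout)
```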

So far, we have seen that Avida:

– demonstrates that design can be moved, but not removed
– demonstrates that design detection systems work in the real world

The Avida researchers have now released a new paper (open access) on organismal learning (summarized here). They report that they have adapted the Avida system to work with the idea of genetic associative learning. They have changed some things around, but the core ideas are the same.

First, they have removed the replication loop and moved replication into a single instruction. Second, instead of solving logic problems, the organism must successfully navigate a digital environment. The instruction set is given instructions for sensing, processing, and moving. The organisms are then mutated so that they try out different combinations of these instructions to move around in their environment.
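
Here is a rough sketch, under my own simplifying assumptions rather than the paper’s actual code, of that setup: programs built from sense/turn/move instructions are point-mutated each generation and scored by how close they bring the organism to a goal cell in a toy grid environment.

```python
import random

# Hypothetical instruction set for a navigating digital organism.
INSTRUCTIONS = ["turn-left", "turn-right", "move-forward", "sense-ahead", "reproduce"]
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]    # four compass directions as (dx, dy)

def run(program, goal=(3, 3)):
    """Execute a program on a toy grid; fitness is closeness to a goal cell."""
    x = y = heading = 0
    for op in program:
        if op == "turn-left":
            heading = (heading - 1) % 4
        elif op == "turn-right":
            heading = (heading + 1) % 4
        elif op == "move-forward":
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy
        # "sense-ahead" and "reproduce" are left as no-ops in this toy version
    return -(abs(goal[0] - x) + abs(goal[1] - y))   # higher (less negative) is better

def mutate(program, rate=0.05):
    """Point mutation: each instruction may be swapped for a random one."""
    return [random.choice(INSTRUCTIONS) if random.random() < rate else op
            for op in program]

def next_generation(population, size=100):
    fitness = [run(p) for p in population]
    weights = [f - min(fitness) + 1 for f in fitness]   # shift so every weight is positive
    parents = random.choices(population, weights=weights, k=size)
    return [mutate(p) for p in parents]
```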

Now, there are a lot of issues with applying this system to evolution among life forms. Chief among them: processes that, in a natural organism in a real environment, require multiple genes with hundreds of amino acids working together are reduced here to one or two instructions (roughly the informational size of a single amino acid, which can be mutated in a single step). Additionally, these built-in capabilities are not themselves subject to mutation, so the model excludes the problems created by pleiotropy (a single gene affecting more than one system).

But let’s leave that aside for now and see what they did find out. According to the paper,

Associative learning is only one of many successful behaviors to evolve, and its origin depends on the environment possessing certain information patterns that organisms can exploit. (emphasis mine)

So it turns out that, even after 16 years of research, the core fact remains: as Dembski has shown, you can move design around, but you can’t get rid of it. I doubt that this paper will lead us to new knowledge about how natural organisms learned to learn. But it does reinforce certain truths about the nature of complex functionality and the requirements for achieving it.


Further reading: Can we evolve self-driving cars? The new method may be an advance, but thinking of it as “evolution” at work risks misconceptions

and

Could one single machine invent everything? (Eric Holloway)

Featured image: Evolution of computers/Claudio Schwartz-Purzlbaum, Unsplash


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
