
Can AI Really Evolve into Superintelligence All by Itself?

We can’t just turn a big computer over to evolution and go away and hope for great things

Earlier this year, it was claimed at Science that Darwinian evolution alone can make computers much smarter. The researchers hoped their approach would eventually “discover something really fundamental that will take a long time for humans to figure out”:

Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.

Edd Gent, “Artificial intelligence is evolving all by itself” at Science (April 30, 2020)

How does that work?

The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck.

Edd Gent, “Artificial intelligence is evolving all by itself” at Science (April 30, 2020)

The program then creates thousands of such candidates and weeds them out, using sheer computing power.
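For readers curious about the mechanics, here is a minimal sketch of that kind of evolutionary loop: a population of randomly assembled programs is scored on a toy task, the weakest are weeded out, and the survivors are mutated. The instruction set and the little regression task below are illustrative stand-ins of our own, not the researchers’ actual AutoML-Zero setup.

```python
import random

# Illustrative instruction set: each "program" is a short list of unary
# operations applied in sequence to an input number. (A hypothetical
# stand-in for the paper's much richer instruction set.)
OPS = {
    "nop":    lambda x: x,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
    "half":   lambda x: x / 2,
    "inc":    lambda x: x + 1,
}

def random_program(length=4):
    return [random.choice(list(OPS)) for _ in range(length)]

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program, samples):
    # Toy task: approximate f(x) = x**2 + 1; lower error means higher fitness.
    return -sum(abs(run(program, x) - (x * x + 1)) for x in samples)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

def evolve(generations=200, pop_size=100):
    samples = [i / 10 for i in range(-20, 21)]
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda prog: fitness(prog, samples), reverse=True)
        survivors = population[: pop_size // 2]          # "survival of the fittest"
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return population[0]

print(evolve())  # e.g. ['square', 'inc', 'nop', 'nop']
```

Even in this toy version, the pattern is visible: the computer does not invent the task, the instruction set, or the scoring rule. It searches a space the programmer has already laid out.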

In a preprint paper published last month on arXiv, the researchers show the approach can stumble on a number of classic machine learning techniques, including neural networks. The solutions are simple compared with today’s most advanced algorithms, admits Le, but he says the work is a proof of principle and he’s optimistic it can be scaled up to create much more complex AIs.

Edd Gent, “Artificial intelligence is evolving all by itself” at Science (April 30, 2020)

We caught up with computer programmer and philosopher Jonathan Bartlett and asked what he thought of the chances of somehow stumbling on complex artificial intelligence this way. He said,

The interesting thing about this paper is that it shows that the principles demonstrated in the 1990s by Wolpert and Macready still have not really sunk in yet. As their “No Free Lunch” theorems point out, there is no universally good search through any search space. This is similar to the question of perpetual motion machines, but the results are actually much more solid as they rely purely on mathematics.

If someone were to invent a universally good search through a search space, it would have to be done on something that isn’t a computer. Computers are powerful because they have limitations. It is unsurprising that they found some problems that their search mechanism was able to solve. The generalization of that to larger and broader problems shows a deep misunderstanding of the problem itself.
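The No Free Lunch result Bartlett refers to can be seen even in a toy setting. In the sketch below (our illustration, not Bartlett’s), every possible objective function on a five-point search space is enumerated, and three different non-revisiting search strategies (two fixed visiting orders and one adaptive greedy searcher) are averaged over all of them. The averages come out identical.

```python
from itertools import product

# Tiny search space: 5 candidate points, objective values drawn from {0, 1, 2}.
POINTS = list(range(5))
VALUES = (0, 1, 2)
BUDGET = 3  # each searcher may evaluate three points

def fixed_order_search(f, order):
    """Evaluate points in a fixed, predetermined order; report the best value."""
    return max(f[p] for p in order[:BUDGET])

def greedy_search(f):
    """Adaptive, non-revisiting searcher: always move to an unvisited point
    next to the best point seen so far (wrapping around the ends)."""
    visited = [0]
    best_point, best_value = 0, f[0]
    while len(visited) < BUDGET:
        for candidate in ((best_point + 1) % 5, (best_point - 1) % 5):
            if candidate not in visited:
                nxt = candidate
                break
        else:
            nxt = next(p for p in POINTS if p not in visited)
        visited.append(nxt)
        if f[nxt] > best_value:
            best_point, best_value = nxt, f[nxt]
    return best_value

totals = {"fixed [0,1,2]": 0, "fixed [4,2,0]": 0, "greedy": 0}
all_functions = list(product(VALUES, repeat=len(POINTS)))  # every possible objective
for f in all_functions:
    totals["fixed [0,1,2]"] += fixed_order_search(f, [0, 1, 2])
    totals["fixed [4,2,0]"] += fixed_order_search(f, [4, 2, 0])
    totals["greedy"] += greedy_search(f)

for name, total in totals.items():
    print(name, "average best value:", round(total / len(all_functions), 4))
```

Averaged over every possible problem, no strategy beats any other; a search only looks clever when the problems it faces happen to match its built-in assumptions.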

He pointed out that Eric Holloway’s recent post here at Mind Matters News on the word ladder game illustrates the problem very well. In the word ladder game, we turn one word into another by changing one letter at a time. But the rule is that each change must result in a word in English (or in another language chosen for the game). So to turn CAT into DOG, we can go from CAT to COT to COG to DOG.
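The game is easy to mechanize for short words. Here is a minimal breadth-first-search sketch (our illustration, using a tiny made-up dictionary) that finds the CAT-to-DOG ladder and returns nothing when no chain of valid words connects two endpoints:

```python
from collections import deque

# Tiny illustrative dictionary; the real game uses a full English word list.
WORDS = {"CAT", "COT", "COG", "DOG", "DOT", "CAR", "COO", "LOG"}

def neighbours(word):
    """All dictionary words that differ from `word` by exactly one letter."""
    return {
        w for w in WORDS
        if len(w) == len(word) and sum(a != b for a, b in zip(w, word)) == 1
    }

def word_ladder(start, goal):
    """Breadth-first search for the shortest ladder; None if no ladder exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(word_ladder("CAT", "DOG"))   # ['CAT', 'COT', 'COG', 'DOG']
```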

But what about longer, more complex words? There is no such simple, systematic route from TRANSMUTATION to PERAMBULATION:

It is not a scientific argument to merely assume that such a transformation is possible, especially when we can see with our example from the word ladder game how quickly a transition becomes impossible merely by changing one word into another. That is many orders of magnitude (a great understatement!) simpler than transforming one DNA sequence into another.

Eric Holloway, “Evolution and artificial intelligence face the same basic problem” at Mind Matters News

Some things that appear very simple if we look only at small tasks or small numbers prove immensely more complex when we ask for big results. The difficulties increase by orders of magnitude.

Holloway added,

The key thing here is novelty. As Dr. Marks and I proved in our paper “Observation of Unbounded Novelty in Evolutionary Algorithms is Unknowable,” it is impossible to generate endless novelty through algorithmic techniques. Randomized techniques like evolutionary algorithms and simulated annealing can help search for better tweaking of parameters, but the search space is just too large to depend on randomized searching to find brand new solutions. If we look at solutions found by algorithmic programming, such as genetic programming, these solutions are filled with random meaninglessness and tend to optimize for quick hacks instead of fundamental insights.
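The “tweaking of parameters” Holloway allows for is worth making concrete. The sketch below (our illustration, not from his paper) uses simulated annealing to tune a single numeric knob of a model whose structure a human has already chosen. The randomized search polishes the setting, but the design itself came from outside the algorithm.

```python
import math
import random

# Toy objective: tune one parameter of a fixed, human-designed model.
# The model's form (a simple quadratic loss) is given; annealing only
# nudges a number around until the loss is small.
def loss(theta):
    return (theta - 3.0) ** 2   # the best setting happens to be theta = 3.0

def simulated_annealing(steps=5000, temp=1.0, cooling=0.999):
    theta = random.uniform(-10, 10)
    best = theta
    for _ in range(steps):
        candidate = theta + random.gauss(0, 0.5)        # small random tweak
        delta = loss(candidate) - loss(theta)
        # always accept improvements; accept worse moves with shrinking probability
        if delta < 0 or random.random() < math.exp(-delta / temp):
            theta = candidate
            if loss(theta) < loss(best):
                best = theta
        temp *= cooling
    return best

print(round(simulated_annealing(), 3))   # settles near 3.0
```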

There’s a broad reason that all these Perpetual Innovation Machines tend to wind down, he says: Goodhart’s Law, the observation that once a measure becomes a target, it ceases to be a good measure:

These algorithms all proceed by essentially Goodhart’s law: optimizing for a specific objective function. There is no way to achieve fundamental insights by optimizing for objective functions. Even making novelty itself the objective function cannot work due to the limits placed by Turing’s halting problem and Kolmogorov complexity.

Yet it is the fundamental insights that are necessary to drive machine learning and artificial intelligence forward. The only known source of such fundamental insights is humans. This is why human-in-the-loop approaches are necessary.

Holloway also notes that one interesting type of research is to combine a novelty search with human interaction, as for example in “A Novel Human-Computer Collaboration: Combining Novelty Search with Interactive Evolution” (open access) by Brian G. Woolley and Kenneth O. Stanley:

The experimental results demonstrate that combining human insight with novelty search not only finds solutions significantly faster and at lower genomic complexities than fully-automated processes guided purely by fitness or novelty, but it also finds solutions faster than the traditional IEC approach. Such results add to the evidence that combining human users and automated processes creates a synergistic effect in the search for solutions.
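In spirit, the combination looks something like the sketch below (a schematic of our own, not the paper’s algorithm): candidates are ranked by how novel their behaviour is relative to an archive of what has already been seen, and a human periodically chooses which of the most novel candidates to pursue. The human_pick function here is a placeholder for that interactive step.

```python
import random

# Each candidate is reduced to a 2-D "behaviour"; novelty is the average
# distance to the nearest behaviours already stored in an archive.
def novelty(behaviour, archive, k=5):
    if not archive:
        return float("inf")
    dists = sorted(
        ((behaviour[0] - a[0]) ** 2 + (behaviour[1] - a[1]) ** 2) ** 0.5
        for a in archive
    )
    return sum(dists[:k]) / min(k, len(dists))

def human_pick(candidates):
    # Placeholder for the interactive step: in the paper a person inspects
    # the candidates and chooses one; here we just take the first so the
    # sketch runs unattended.
    return candidates[0]

def mutate(behaviour):
    return (behaviour[0] + random.gauss(0, 0.3),
            behaviour[1] + random.gauss(0, 0.3))

archive, parent = [], (0.0, 0.0)
for generation in range(20):
    children = [mutate(parent) for _ in range(10)]
    children.sort(key=lambda b: novelty(b, archive), reverse=True)
    shortlist = children[:3]            # the most novel behaviours this round
    parent = human_pick(shortlist)      # the human steers among them
    archive.append(parent)

print([tuple(round(x, 2) for x in b) for b in archive])
```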

So, it turns out that we can’t just turn a big computer over to evolution and go away and hope for great things. But life never really was that simple, was it?

Here and here are open-access papers on the topic.



