
The Search for the Universal Algorithm Continues

Why does machine learning always seem to be rounding a corner, only to eventually hit a wall?

DeepMind, a part of Alphabet (i.e., Google), has made many headlines in the past. The biggest came from its development of AlphaGo, which used reinforcement learning to beat the world’s top-ranked Go player at the time (2017). DeepMind then generalized the approach into AlphaZero, which is supposedly able to master any two-player game of perfect information.

DeepMind has recently returned to the headlines with an attempt to build an AI that can generate any algorithm. While the team is starting with map data, the goal is to generalize the approach so that it can produce any desired algorithm.

The search for such a “universal algorithm” is essentially the software equivalent of the physicist’s search for a perpetual motion machine. The allure of both is obvious. In physics, if you can create a perpetual motion machine, you can remove the world’s need for energy. In software, if you can develop a “universal algorithm,” you can remove the world’s need for programmers.

Robert Fludd’s 1618 “water screw” perpetual motion machine, from a 1660 wood engraving. It is widely credited as the first recorded attempt to describe such a device producing useful work, in this case driving millstones.

The problem with perpetual motion machines is that they violate the law of conservation of energy. You can’t generate more energy than you have; that is simply a limitation of physics. Interestingly, information theory has similar limiting laws, known as “conservation of information” laws. The best-known incarnations of these are the “No Free Lunch” theorems of Wolpert and Macready, which state that there is no universally good optimization algorithm: every optimizer relies on some form of expert knowledge about the problems it will face.
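
To get an intuition for what “no free lunch” means, here is a minimal sketch (my own toy example, not Wolpert and Macready’s proof). It brute-forces every possible objective function on a four-point search space and averages how well two fixed search strategies do. Averaged over all possible problems, the two strategies come out exactly even.

```python
# Toy No Free Lunch check: enumerate every objective function on a tiny
# search space and compare the average performance of two fixed strategies.
from itertools import product

DOMAIN = range(4)   # four candidate solutions
VALUES = range(4)   # four possible objective values

def best_found(strategy, f):
    """Best objective value seen after querying the strategy's fixed points."""
    return max(f[x] for x in strategy)

# Two arbitrary strategies, each examining two distinct candidates.
strategy_a = (0, 1)
strategy_b = (2, 3)

total_a = total_b = count = 0
for values in product(VALUES, repeat=len(DOMAIN)):   # all 4**4 = 256 functions
    f = dict(zip(DOMAIN, values))
    total_a += best_found(strategy_a, f)
    total_b += best_found(strategy_b, f)
    count += 1

print("Strategy A average:", total_a / count)   # 2.125
print("Strategy B average:", total_b / count)   # 2.125
```

An advantage for one strategy only appears once we assume something about which objective functions are likely to show up; that assumption is the “expert knowledge” the theorems refer to.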

Here at Mind Matters, we’ve talked about Turing’s Halting Problem as an example of this sort of limitation. Simplified, algorithms are built from axioms (fundamental assumptions) and theorems (consequences derived from those axioms). Computers are essentially very fast theorem generators, but they are fundamentally unable to generate axioms. Advances in machine learning generally come from the introduction of new axioms into the mix. If you add an axiom, you may be able to generate thousands of new theorems, but no amount of theorem generation will give you a new axiom.
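
As a rough illustration (my own toy example, not a description of any DeepMind system), here is a tiny forward-chaining “theorem generator.” It derives everything it can from the axioms it is handed, but it never produces a fact outside the deductive closure of those axioms; only supplying a new axiom expands what is reachable.

```python
# Toy forward-chaining theorem generator: starting facts are "axioms",
# rules map a set of premises to a conclusion (a "theorem").
def derive(axioms, rules):
    """Apply the rules repeatedly until no new conclusions appear."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

rules = [
    ({"A"}, "B"),        # if A then B
    ({"B"}, "C"),        # if B then C
    ({"C", "D"}, "E"),   # if C and D then E
]

print(derive({"A"}, rules))        # {'A', 'B', 'C'}: 'E' is out of reach
print(derive({"A", "D"}, rules))   # adding the axiom 'D' unlocks 'E'
```

No matter how long the loop runs, nothing outside the closure of the supplied axioms ever appears; the new result “E” shows up only when the new axiom “D” is handed to the program from outside.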

This is, ultimately, the problem with universal algorithms: they are always limited by the axioms supplied to them. This is why machine learning always seems to be rounding a corner, only to eventually hit a wall. Each added axiom unlocks a whole raft of solutions, but the next set of solutions requires new axioms.

As George Montanez pointed out in a 2017 paper, “The Famine of Forte,” few search problems greatly favor any particular algorithm. Essentially, Montanez was pointing out that any search algorithm performs well on only a bounded set of problems. This means that a universal algorithm is simply not possible.
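
A back-of-the-envelope sketch (my own toy numbers, not a calculation from the paper) gives the flavor of the result. Fix a search algorithm that can afford to examine only two of sixteen candidate solutions, then enumerate every possible placement of the target and count how many of those problems the algorithm actually solves.

```python
# Toy "famine" count: how many possible problems does one fixed algorithm solve?
SEARCH_SPACE = range(16)
QUERIES = [3, 7]   # the two candidates this particular algorithm examines

solved = sum(1 for target in SEARCH_SPACE if target in QUERIES)
print(f"Problems solved: {solved} of {len(SEARCH_SPACE)} "
      f"({solved / len(SEARCH_SPACE):.1%})")   # 2 of 16 (12.5%)
```

Only a small slice of the possible problems happens to line up with this algorithm’s choices, and the same count holds for any other fixed pair of queries. Without side information about where the target is likely to be, most problems stay out of reach.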

So, while the search for the universal algorithm continues, don’t bet on anyone finding it.


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software engineer at McElroy Manufacturing. Jonathan is part of McElroy's Digital Products group, where he writes software for their next generation of tablet-controlled machines as well as develops mobile applications for customers to better manage their equipment and jobs. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
