The Google-backed AI company DeepMind made headlines in March 2016 when its AlphaGo game engine defeated Lee Sedol, one of the top Go players in the world. DeepMind followed up this achievement with the AlphaZero engine in 2017, which soundly beat AlphaGo at Go as well as one of the world’s best chess engines at chess.
The interesting difference between AlphaGo and AlphaZero is that AlphaGo used databases of top human games for learning, while AlphaZero learns only by playing against itself. Using the same engine to dominate two different games, while dispensing with human games entirely, suggests that DeepMind has found an algorithm that is intrinsically superior to the human mind. But is that really the case?
Since we (or at least Google) know the exact configurations and resource usage necessary to produce these achievements, we can use this information to measure the efficiency of these AI engines. My claim is that it is the machine’s efficiency that is most interesting, not its winning abilities.
Merely winning at chess and Go is actually not that interesting, since it is theoretically possible to create a lookup table of perfect moves. While not practically possible for an entire game of chess, such tables do exist for endgames: tablebases can play perfectly with up to seven pieces on the board. If we knew that we were playing against a perfect lookup table, we would not be very impressed when it won. After all, if we could routinely beat the perfect lookup table, it would not be perfect, would it?
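To make the "perfect lookup table" idea concrete, here is a minimal sketch for a game far simpler than chess: single-pile Nim, where players alternately take 1–3 stones and whoever takes the last stone wins. The function name and structure are my own illustration, not anything from DeepMind's work; the point is only that perfect play can be tabulated in advance, with no "intelligence" at play time.

```python
# A toy "perfect lookup table": for single-pile Nim (take 1-3 stones,
# last stone wins), precompute the optimal move for every pile size.
def build_table(max_pile):
    # win[n] is True if the player to move wins with n stones left.
    win = {0: False}  # no stones left: the player to move has already lost
    best = {}         # best[n] = optimal number of stones to take
    for n in range(1, max_pile + 1):
        win[n] = False
        for take in (1, 2, 3):
            # A move wins if it leaves the opponent in a losing position.
            if take <= n and not win[n - take]:
                win[n], best[n] = True, take
                break
        if not win[n]:
            best[n] = 1  # losing position: every move is equally bad
    return best

table = build_table(20)
# From a pile of 7, perfect play takes 3, leaving a multiple of 4.
print(table[7])  # -> 3
```

At play time the "player" is just a dictionary lookup: unbeatable from any winning position, yet obviously nothing to be impressed by. Chess endgame tablebases are the same idea scaled up enormously.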
Instead, what we are actually impressed by is “efficiency.” Just as a person who wins chess games after a lifetime of study is less impressive than a young prodigy with a natural knack, we are impressed by these chess engines because they appear to beat humans with much greater efficiency. As the AlphaZero paper states, it took only 9 hours of training before AlphaZero could beat one of the best chess engines in the world. If a child could learn chess and beat a world-class grandmaster in 9 hours, purely self-taught and only playing against themselves, we would be very impressed.
However, we all know that computer processing time is not the same as human thinking time. For example, according to the paper, AlphaZero evaluates 63 thousand positions per second, whereas a human can evaluate at most a couple, and often fewer than one. Not even in the same ballpark! So once we start looking at the raw processing numbers for the AI engine, it becomes very clear that comparing AI and the human mind is a case of apples and oranges.
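The gap is worth making explicit with some back-of-the-envelope arithmetic, using the figures above (63,000 positions per second for AlphaZero, and a generous 2 per second for a human):

```python
# Rough comparison of search rates; the human figure is a generous
# estimate, not a measured value.
engine_rate = 63_000   # positions per second (from the AlphaZero paper)
human_rate = 2         # positions per second, generous human estimate

ratio = engine_rate / human_rate
print(ratio)  # -> 31500.0

# One second of engine search is hours of human-speed evaluation:
hours = engine_rate / human_rate / 3600
print(round(hours, 2))  # -> 8.75
```

In other words, a human would need most of a working day to evaluate what the engine churns through in a single second.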
If we were to limit AlphaZero to human-level processing capabilities, it would completely flounder. What is actually remarkable is the sheer amount of processing power needed to bring computers up to the level of even the most basic human player! This indicates the human mind is doing something totally different, and extraordinarily more efficient, than the best AI algorithms we have today.
Rather than demonstrating the superiority of algorithms over thinking, these AI game engines instead show the ever-widening gap between computation and cogitation, with the advantage clearly in the human court.