In a recent article, I argued that computers are not smart and can never become smarter. An insightful reader wrote to ask, “What if smartness is defined by speed?”
This is a good point. The debate revolves around the definition of “smart,” and if we define “smart” as “fast,” then, since computers are certainly getting faster, they will necessarily become smarter.
Such a definition has intuitive appeal. Think of the world’s best chess player versus a beginner. One of the big distinctions is that the chess expert will choose a good move more quickly than a beginner and, in general, will play faster. As such, play speed demonstrates a certain level of intelligence on the part of the player. A computer chess program like Deep Blue (pictured), which beat chess champion Garry Kasparov in 1997, plays much faster than a human ever could. As a consequence, it seems much smarter than a human player. Doesn’t it make sense to say smartness is speed, since smart things are fast?
It is certainly clear that greater smarts implies greater speed. A implies B. However, B does not necessarily imply A. Assuming B implies A is a logical fallacy known as “affirming the consequent.” The fact that all ravens are black does not mean all black things are ravens.
Let’s say we have two vehicles driving towards a cliff. One is a sports car and the other is a tractor. Would we say the sports car driver is smarter than the tractor driver because he will reach the cliff and drive over it first? Probably not.
How about a numerical example? Let’s say we have two computers trying to solve x + 2 = 5. Computer A runs its program on the world’s fastest supercomputer but only checks negative numbers. Computer B runs its program on a pocket calculator, starts x at 0, and increments by 1 for each attempt at a right answer. Clearly, computer A will never find the right answer, and computer B will find it in four attempts (0 through 3). So computer B must be smarter than computer A, yet computer A is much faster than computer B. In this example, “smart” is not the same as “fast.”
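The two strategies above can be sketched in a few lines of Python. This is only an illustration of the argument, not any real program the article describes; the function name and the attempt cap are mine.

```python
def search(start, step, max_attempts=1000):
    """Guess-and-check solver for x + 2 == 5.

    Returns (answer, attempts) on success, or (None, max_attempts)
    if the answer was never found within the attempt budget.
    """
    x = start
    for attempt in range(1, max_attempts + 1):
        if x + 2 == 5:
            return x, attempt
        x += step
    return None, max_attempts

# Computer B: starts at 0 and counts up -- finds x = 3 on the 4th attempt.
answer_b, attempts_b = search(start=0, step=1)

# Computer A: checks only negative numbers -- no attempt budget (i.e., no
# amount of raw speed) will ever lead it to the answer.
answer_a, attempts_a = search(start=-1, step=-1)
```

No matter how large we make `max_attempts` for computer A, the result is the same: its starting state and direction are mismatched with the goal, so extra speed only produces wrong guesses faster.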
But what if computer A used computer B’s program? Aha! It would arrive at the answer much more quickly! So computer A is indeed smarter than computer B if speed is our only criterion. On the other hand, if computer B used computer A’s program, it would diverge from the right answer at a much slower rate. In that case, computer B’s speed does nothing to help its smarts, and instead contributes to its stupidity. So, even if there is a head-to-head competition where both computers start from the same point, speed does not entail smartness.
Our numerical example shows that the crucial element that makes a computer smart or stupid has nothing to do with speed. It all comes down to whether the computer’s starting state is well matched with the desired end state. Only when the beginning is well matched with the end does speed contribute to the computer’s smartness. But the well-matched principle has nothing to do with the computer’s speed; it has everything to do with the programmer’s skill and knowledge.
Here’s an argument we sometimes hear: If we can determine the ultimate start and end state for a computer program (or at least a decent approximation of it) then, as computers get faster, they will indeed get smarter. If we can achieve a general enough setting, then perhaps as computers get faster we may even reach the point where the computer achieves broad-ranging humanlike intelligence. The Deep Blue example makes this approach plausible. A really fast computer was able to play chess better than a human because its start state was well matched with the desired end state.
Yet even in this very limited sense, where computational speed improves smartness when the start and end state are well matched, there are zero returns for humanlike intelligence.
In the most general case, we have the halting problem: “The halting problem, commonly applied to Turing-complete programs and models, is the problem of finding out whether, with the given input, a program will halt at some time or continue to run indefinitely.” (Techopedia) No computer, regardless of how fast it is, is logically capable of solving the halting problem. In this case, we have zero returns.
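A small sketch can show why speed alone cannot buy a solution here. Running a program for N steps can confirm “it halts within N steps,” but a non-answer after N steps proves nothing: a faster machine merely raises N; it never turns “don’t know” into “never halts.” The code below is my own illustration (using Python generators as toy step-by-step programs), not a real halting checker.

```python
def halts_within(program, arg, max_steps):
    """Run a generator-based `program` for up to max_steps steps.

    Returns True if it halted, or None ("don't know") -- it can never
    honestly return False, because the program might halt on step
    max_steps + 1.
    """
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True  # the program halted within the budget
    return None  # inconclusive, no matter how large max_steps was


def counting_down(n):
    # A toy program that halts: counts down from n, then stops.
    while n > 0:
        yield
        n -= 1


def looping_forever(_):
    # A toy program that never halts.
    while True:
        yield
```

Here `halts_within(counting_down, 5, 100)` returns `True`, but `halts_within(looping_forever, 0, 100)` returns `None`, not `False`. Turing’s result says no amount of cleverness, let alone speed, can close that gap in general.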
Unfortunately, the halting problem is important for many problems we care about, such as
- automatically generating programs from a given set of requirements
- protecting our computers from all viruses and hackers
- understanding human language
- playing any given game better than a human
- completely autonomous driving
- replacing humans with autonomous robot workers
- producing new scientific theories
- compressing text, images, and sound as well as humans
- recognizing images as well as humans
Computers can solve the problems on the list above in very narrow cases, but general solutions always run up against the halting problem. Even if these problems can be mathematically defined at the start and end, so long as they depend on the halting problem, making computers faster cannot solve them.
Furthermore, because a general solution to many of the problems above would be necessary for computers to reach human-level intelligence, it follows that computers cannot ever become as smart as humans, to say nothing of surpassing human intelligence.
So we find ourselves back at our original answer. Can computers become smarter by becoming faster? No.
The only thing that can put smarts into computers is humans.
Note: The photo of one of two racks of Deep Blue, which beat chess champion Garry Kasparov in 1997, taken at the Computer History Museum, is courtesy of James the photographer (CC BY 2.0)