Harvard U Press Computer Science Author Gives AI a Reality Check
Erik Larson told COSM 2021 about real limits in getting machines that don’t live in the real world to understand it.

The speaker told the audience that although computers can do many impressive things, they will never achieve artificial intelligence.
Who is “they” in the sentence you just read? The audience or computers?
You immediately know the answer. It’s computers, because we know that researchers are struggling to figure out how to endow computers with AI. It makes no sense to talk about an audience having artificial intelligence. You intuitively understand the meaning of “they” in the sentence without even having to think about it.
What if the sentence had read:
The speaker told the audience that although computers can do many impressive things, they will be sorry if they bought one of this year’s models.
Again, it is obvious to a typical human that “they” refers to the audience. But surprisingly, computers have an incredibly hard time answering such basic questions. Hence, “they” will never achieve artificial intelligence.
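To see why such sentences are hard for machines, consider a toy “nearest antecedent” pronoun resolver, sketched below in Python. The resolver and its heuristic are hypothetical illustrations of a shallow, rule-based approach, not Larson’s method or any real NLP system. It picks whichever candidate noun appears closest before the pronoun, so it gives the same answer for both sentences, even though a human reads them differently:

```python
def resolve_pronoun(sentence: str, pronoun: str, candidates: list[str]) -> str:
    """Naively resolve `pronoun` to whichever candidate noun
    appears closest before it in the sentence."""
    words = sentence.lower().split()
    p_index = words.index(pronoun)
    best, best_dist = None, None
    for cand in candidates:
        for i, w in enumerate(words[:p_index]):
            if w.strip(",.") == cand:  # ignore trailing punctuation
                dist = p_index - i
                if best_dist is None or dist < best_dist:
                    best, best_dist = cand, dist
    return best

s1 = ("The speaker told the audience that although computers can do "
      "many impressive things, they will never achieve artificial intelligence.")
s2 = ("The speaker told the audience that although computers can do "
      "many impressive things, they will be sorry if they bought one of "
      "this year's models.")

# The word-position heuristic gives the same answer for both sentences...
print(resolve_pronoun(s1, "they", ["audience", "computers"]))  # computers
print(resolve_pronoun(s2, "they", ["audience", "computers"]))  # computers
# ...but a human reads "they" as "computers" in the first sentence and
# "audience" in the second. The disambiguation requires world knowledge
# and inference, not word positions.
```

The point of the sketch is that no amount of counting word positions distinguishes the two sentences; what changes the referent is what we know about audiences, computers, and buying products.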
That was the central message of a talk yesterday at COSM 2021 by Erik J. Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, published this year by Harvard University Press.
Larson, who studied the philosophy of computer science for his Ph.D. from the University of Texas at Austin, argues in his book that “The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time—that we have already embarked on the path that will lead to human-level AI, and then superintelligence. We have not. The path exists only in our imaginations.” He continues: “all evidence suggests that human and machine intelligence are radically different.”
For Larson, computers can do calculations, but the key missing ingredient in machine intelligence is the ability to appreciate context, do analysis, and make appropriate inferences.
“Calculation is connecting known dots; applying the rules of algebra, say. Analysis is making sense of the dots, making a leap or guess that explains them—and then, given some insight, using calculation to test it,” he writes.
This is why it is so difficult for computers to identify who “they” are in the sentence.
He provides example after example of sentences with ambiguously defined pronouns. Humans can look at the context of the sentence and instantly understand who the pronoun refers to. Computers lack this analytical inference-making ability, and get stuck.
During his talk, Larson explained that supercomputers can be trained to play Jeopardy, or get directions to the grocery store — and these accomplishments are often hyped in the media. Yet “in the meantime, nobody can talk to Alexa,” because computers lack the intuition or inference-making abilities to understand the meaning of human speech.
This imposes impassable limits on what computers can accomplish:
It is impossible to get a joke, discover a new vaccine, solve a murder [mystery] … or merely keep up with the sundry happenings and communications in the world without some inference capability.
Larson told the audience that, absent a fundamental innovation in our technology, the quest to create artificial intelligences that can do everything that humans can do (artificial general intelligence) is “doomed to failure.”
This view does not make Larson popular with people in his field. But he thinks a reality check is needed if the field of AI is to progress. He explains in his book how the myth of artificial intelligence is preventing us from making the breakthroughs that will enable progress in AI:
Mythology about AI is bad, then, because it covers up a scientific mystery in endless talk of ongoing progress. The myth props up belief in inevitable success, but genuine respect for science should bring us back to the drawing board. … [W]e are unlikely to get innovation if we choose to ignore a core mystery rather than face up to it. A healthy culture for innovation emphasizes exploring unknowns, not hyping extensions of existing methods—especially when these methods have been shown to be inadequate to take us much further.
Mythology about inevitable success in AI tends to extinguish the very culture of invention necessary for real progress—with or without human-level AI.
Thus Larson told the crowd: “The ultimate irony is we need human innovation to figure out how to compute, how to make general intelligence on a computer, if that’s what we want to do. But make no mistake, the AI we have today is inadequate for that task.”
You may also wish to read:
How Erik Larson hit on a method for deciding who is influential: The author of The Myth of Artificial Intelligence decided to apply an algorithm to Wikipedia — but it had to be very specific. Many measures of influence depend on rough measures like numbers of hits on pages. Larson realized that influence is subtler than that.