Mind Matters Natural and Artificial Intelligence News and Analysis

Why Can’t Machines Learn Simple Tasks?

They can learn to play chess more easily than to walk

Why is it comparatively easy to develop a program that plays chess, yet so hard to teach a robot to walk freely? Moravec’s Paradox offers one explanation:

In the 1980s, computer scientist Hans Moravec laid out this exact challenge — what has now been dubbed “Moravec’s paradox” — and explained why it’s just what we should expect from machines that are immune to the pressures of natural selection…

That is to say, the things humans find easiest are the very things that took millennia of evolution to refine. The things humans find hardest are only hard because they’re new — we’ve been thinking about chess strategy for a little over a thousand years, but we’ve been learning how to interact with our surroundings since our ancestors were single-celled organisms. The skills that are hardwired through evolution don’t take conscious thought, and when you don’t have to think about something, it’s harder to figure out how to teach a machine to do it.

Ashley Hamer, “Moravec’s Paradox Is Why the Easy Stuff Is Hardest for Artificial Intelligence” at Curiosity

Three decades later, the paradox holds:

The recent AI developments have underscored Moravec’s research with AI making staggering progress in narrow applications such as beating the reigning champion in Go, self-driving cars, virtual assistants and more. Research from Tractica shows that narrow AI techniques, used to solve specific problems, will dominate AI applications in the next 10 years, accounting for 99.5 percent of AI revenue.

Richa Bhatia, “Understanding Moravec’s Paradox And Its Impact On Current State Of AI” at Analytics India (July 7, 2018)

Although Moravec’s paradox captures robotic reality, the appeal to evolution is less certain. Is it true that the things we find hardest are hard only because they are new to history? Chess strategy, for example, is highly formal, but it derives from ancient strategies for warfare, as the names of the pieces imply. It is not likely that warfare came into existence with recorded history or that successful strategy was ever easy.

Most of us have forgotten how hard it was to learn to walk and, as physiotherapists will attest, the skill is not easy to teach even to humans, who are the only normally bipedal primates.

The paradox was developed by Hans Moravec, professor of robotics at Carnegie Mellon University and author of Mind Children: The Future of Robot and Human Intelligence (1988) and Robot: Mere Machine to Transcendent Mind (1998). In Robot, he argues that “robots will match human intelligence in less than fifty years.”

That prediction is worthy of reflection in the face of another, unrelated one:

Twenty years ago this week [1998], two young men sat in a smoky bar in Bremen, northern Germany. Neuroscientist Christof Koch and philosopher David Chalmers had spent the day lecturing at a conference about consciousness, and they still had more to say. After a few drinks, Koch suggested a wager. He bet a case of fine wine that within the next 25 years someone would discover a specific signature of consciousness in the brain. Chalmers said it wouldn’t happen, and bet against.

Per Snaprud, “Consciousness: How we’re solving a mystery bigger than our minds” at New Scientist

The bet has five years to run. The fruitfulness of the current field of consciousness studies may be inferred from the fact that positions ranging from “consciousness is an illusion” to “everything is conscious” are academically acceptable.

If specifically human intelligence is related to consciousness, robotics engineers might best leave consciousness out of the goals for their products and focus on more tangible ones.

Hat tip: Eric Holloway

See also: AI That Can Read Minds? Deconstructing AI Hype The source for the claims seems to be a 2018 journal paper, “Real-time classification of auditory sentences using evoked cortical activity in humans.” The carefully described results are indeed significant but what the Daily Mail article didn’t tell you sheds a rather different light on the AI mind reader.
