Photo by Jose Maria Garcia Garcia on Unsplash

Machines just don’t do meaning

And that, says a computer science prof, is a key reason they won't compete with humans

Possibly due to the maturing of the machine learning industry, we are beginning to see pushback against inflated claims about what machines can be taught to do. Here’s a recent example, from Melanie Mitchell, Professor of Computer Science at Portland State University:

The Facebook founder, Mark Zuckerberg, recently declared that over the next five to 10 years, the company will push its A.I. to “get better than human level at all of the primary human senses: vision, hearing, language, general cognition.” Shane Legg, chief scientist of Google’s DeepMind group, predicted that “human-level A.I. will be passed in the mid-2020s.”
Melanie Mitchell, “Artificial Intelligence Hits the Barrier of Meaning” at New York Times

Mitchell is certain that these forecasts “will fall short,” as she has seen so many others do in her decades-long career in AI, because machines do not understand what things mean, which precludes certain types of learning:

The errors made by such systems range from harmless and humorous to potentially disastrous: imagine, for example, an airport security system that won’t let you board your flight because your face is confused with that of a criminal, or a self-driving car that, because of unusual lighting conditions, fails to notice that you are about to cross the street.

Because such systems have no intrinsic sense of what their inputs and actions mean, she warns, they are also vulnerable to malicious attacks:

Numerous studies have demonstrated the ease with which hackers could, in principle, fool face- and object-recognition systems with specific minuscule changes to images, put inconspicuous stickers on a stop sign to make a self-driving car’s vision system mistake it for a yield sign or modify an audio signal so that it sounds like background music to a human but instructs a Siri or Alexa system to perform a silent command.
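The "minuscule changes" these studies describe are not drawn by hand; they are computed from the model's own gradients. Below is a minimal sketch of the best-known recipe, the fast gradient sign method, assuming PyTorch is installed; the toy classifier and random "image" are hypothetical stand-ins for a real recognition system:

```python
# Sketch of a gradient-based adversarial perturbation (the "fast
# gradient sign method"). The tiny linear model and random image
# below are illustrative stand-ins, not a real recognition system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: any differentiable image classifier would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

# A hypothetical 28x28 grayscale image and its correct label.
image = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])

# Ask the model how to make itself *more* wrong: compute the gradient
# of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# Step each pixel a tiny amount in the direction that raises the loss.
epsilon = 0.05  # small enough to be nearly invisible to a human eye
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0.0, 1.0)

# The pixel change is minuscule, yet it can flip the prediction.
print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Against real, trained classifiers, perturbations of roughly this size have repeatedly been shown to change the predicted label while leaving the image visually unchanged to a human observer.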

Human understanding is grounded, as Mitchell says, in common-sense knowledge about how the world works and why things matter. Researchers have not been able to impart this understanding to AI, yet she worries that many teams are moving ahead with projects whose safe operation depends on exactly that ability.

As she reminds us, A.I. researcher Pedro Domingos noted in The Master Algorithm, “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

All true, unfortunately. And it’s also true that we do not understand what human consciousness is or how human intelligence works, or for that matter how to make a machine care about outcomes. That alone means that the Singularity (the final machine takeover) is much less likely than a string of embarrassing setbacks and meltdowns.

Note: Mitchell’s forthcoming book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019).

Hat tip: Eric Holloway

See also: Should robots run for office? A tech analyst sees a threat to democracy if they don’t

Too late to prevent rule by The Algorithm? Dilbert’s creator, Scott Adams, tells Ben Shapiro why he thinks politicians soon won’t matter.

How AI could run the world: Its killer apps, in physicist Max Tegmark’s tale, include a tsunami of “message” films

Human intelligence as a halting oracle (Eric Holloway)

Meaningful information vs. artificial intelligence (Eric Holloway)

and

AI is indeed a threat to democracy. But not in quite the way historian Yuval Noah Harari thinks (Michael Egnor)

