
Researcher: Fear of AI Caused by Four Common Misconceptions

AI isn't panning out the way so many popular media articles predicted, and there are reasons for that, a computer science prof says

As part of the Demystifying AI series, software engineer Ben Dickson, founder of TechTalks, tackles some of the most serious misunderstandings he thinks the public has about AI. For example, while AI can win in board games, where the map is the territory, “some of the early goals of AI like housekeeper robots and self-driving cars continue to recede as we approach them.”

So why isn’t AI panning out the way so many popular media articles predicted? Dickson points to a recent article at arXiv by Professor Melanie Mitchell of the Santa Fe Institute, author of Artificial Intelligence: A Guide for Thinking Humans (2019):

The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a headline in The Guardian predicted that “From 2020 you will become a permanent backseat driver”. In 2016 Business Insider assured us that “10 million self-driving cars will be on the road by 2020”. Tesla Motors CEO Elon Musk promised in 2019 that “A year from now, we’ll have over a million cars with full self-driving, software… everything”. And 2020 was the target announced by several automobile companies to bring self-driving cars to market.

Melanie Mitchell, “Why AI is Harder Than We Think” at arXiv:2104.12871

And now even Elon Musk is walking back the self-driving claims, to as little fanfare as possible.

So what happened? Mitchell suggests four fallacies that routinely lead to wrong predictions:

Fallacy 1: Narrow intelligence is on a continuum with general intelligence

Indeed, if people see a machine do something amazing, albeit in a narrow area, they often assume the field is that much further along toward general AI. The philosopher Hubert Dreyfus (using a term coined by Yehoshua Bar-Hillel) called this a “first-step fallacy.” As Dreyfus characterized it, “The first-step fallacy is the claim that, ever since our first work on computer intelligence we have been inching along a continuum at the end of which is AI so that any improvement in our programs no matter how trivial counts as progress.” Dreyfus quotes an analogy made by his brother, the engineer Stuart Dreyfus: “It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon.”

Melanie Mitchell, “Why AI is Harder Than We Think” at arXiv:2104.12871

No such step is guaranteed. Mitchell goes on to note that the critical requirement of common sense remains an unsolved problem for computers: “Giving machines common sense will require imbuing them with the very basic ‘core,’ perhaps innate, knowledge that human infants possess about space, time, causality, and the nature of inanimate objects and other living agents, the ability to abstract from particulars to general concepts, and to make analogies from prior experience. No one yet knows how to capture such knowledge or abilities in machines.” If it’s not computable, it may not even be possible.

As one commentator puts the matter, “Computers don’t have any common sense and the problem doesn’t seem to go away, even when we throw big data, huge quad-core processing power, and fancy machine learning algorithms at it.”

Fallacy 2: Easy things are easy and hard things are hard

That is, the things that we humans do without much thought — looking out in the world and making sense of what we see, carrying on a conversation, walking down a crowded sidewalk without bumping into anyone — turn out to be the hardest challenges for machines…

As Gary Marcus pointed out, there are domains, including games, that, while easy for humans, are much more challenging than Go for AI systems. One example is charades, which “requires acting skills, linguistic skills, and theory of mind”, abilities that are far beyond anything AI can accomplish today.

Melanie Mitchell, “Why AI is Harder Than We Think” at arXiv:2104.12871

Mitchell notes that this problem is called Moravec’s Paradox, after roboticist Hans Moravec, who first posed it in the 1980s. Moravec himself believed that computers would rival humans by 2040 and thereafter render us extinct, so he was not raising fussy objections; he was bringing up an obstacle that he genuinely believed to exist.

Fallacy 3: The lure of wishful mnemonics


By “wishful mnemonics,” Mitchell means “terms associated with human intelligence that are used to describe the behavior and evaluation of AI programs” but probably don’t refer to an equivalent process. For example, “learning” doesn’t mean the same thing to a machine as to a human. Machine learning, unlike human learning, is not easily transferable, and how to make it so is unclear:

One could argue that such anthropomorphic terms are simply shorthand: IBM scientists know that Watson doesn’t read or understand in the way humans do; DeepMind scientists know that AlphaGo has no goals or thoughts in the way humans do, and no human-like conceptions of a “game” or of “winning.” However, such shorthand can be misleading to the public trying to understand these results (and to the media reporting on them), and can also unconsciously shape the way even AI experts think about their systems and how closely these systems resemble human intelligence.

Melanie Mitchell, “Why AI is Harder Than We Think” at arXiv:2104.12871

Of course, such a system is a feedback loop, with everyone reinforcing everyone else’s false picture. The problem isn’t new. David Watson of the Oxford Internet Institute and the Alan Turing Institute traces an early instance back to 1958: “The Navy has revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence” (New York Times, p. 25). He comments, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.” But if we are not sure what this crucial component is, how can we give it to them?

Fallacy 4: Intelligence is all in the brain

The assumption that intelligence is all in the brain has led to speculation that, to achieve human-level AI, we simply need to scale up machines to match the brain’s “computing capacity” and then develop the appropriate “software” for this brain-matching “hardware.” For example, one philosopher wrote a report on the literature that concluded, “I think it more likely than not that 10^15 FLOP/s is enough to perform tasks as well as the human brain (given the right software, which may be very hard to create)”. No body needed! …

Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It’s not at all clear that these attributes can be separated.

Melanie Mitchell, “Why AI is Harder Than We Think” at arXiv:2104.12871

Mitchell’s ideas can be compared with those of Erik J. Larson, author of the recent The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021). A panel discussion with both of them would be most interesting.


You may also wish to read:

Why did a prominent science writer come to doubt the AI apocalypse? John Horgan’s endorsement of Erik J. Larson’s new book critiquing AI claims stems from considerable experience covering the industry for science publications. Horgan finds that, despite the enormous advances in neuroscience, genetics, cognitive science, and AI, our minds remain “as mysterious as ever.”

and

No AI overlords? What is Larson arguing and why does it matter? As information theorist William Dembski explains, computers can’t do some things by their very nature. If a needed thought process is not computational, a bigger or better computer is not the answer.

