
Computer Prof: We Can’t Give Machines Understanding of the World

Not now, anyway. Melanie Mitchell of the Santa Fe Institute finds that ever larger computers are learning to sound more sophisticated but have no intrinsic knowledge

Last December, computer science prof Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (2019), let us in on a little-publicized fact: Despite the greatly increased capacity of the vast new neural networks, they are no closer to actually understanding what they read:

The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. Consider what it means to understand “The sports car passed the mail truck because it was going slower.” You need to know what sports cars and mail trucks are, that cars can “pass” one another, and, at an even more basic level, that vehicles are objects that exist and interact in the world, driven by humans with their own agendas.

All this is knowledge that we humans take for granted, but it’s not built into machines or likely to be explicitly written down in any of a language model’s training text. Some cognitive scientists have argued that humans rely on innate, pre-linguistic core knowledge of space, time and many other essential properties of the world in order to learn and understand language. If we want machines to similarly master human language, we will need to first endow them with the primordial principles humans are born with. And to assess machines’ understanding, we should start by assessing their grasp of these principles, which one might call “infant metaphysics.”

Melanie Mitchell, “What Does It Mean for AI to Understand?” at Quanta (December 16, 2021)

The trouble is, machines aren’t infants. Infants grow naturally toward an understanding of the world. Their growth includes the growth of consciousness, which is the single most difficult concept in science. It is fair to say we know little about it.

This situation was highlighted by a recent field test by Gary Smith, a business prof at Pomona College:

Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent but they are still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms do not understand what words mean. They are like Nigel Richards, who has won several French-language Scrabble championships without knowing the meaning of the words he spells.

Gary S. Smith, “The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem” at Mind Matters News (March 21, 2022)

Google’s LaMDA can’t be accessed by the public. But Smith learned something interesting when he looked into OpenAI’s text generator GPT-3, which, having scarfed up huge sections of the internet, can generate coherent sentences: OpenAI employs forty contractors to “clean up” the nonsense. Sure enough, when Smith was able to test the system, he found that GPT-3’s answers to his questions were silly at first, then suddenly sensible. For example:

Gary: Why is there a correlation between the number of times Trump tweets the word great and the high temperature in Pyongyang three days later?

One of eight silly answers was

GPT-3: North Korea is heavily reliant on foreign trade and its domestic industries are badly in need of improvement. Trump’s tweets drive people’s attention to Pyongyang and bring business to the ailing North Korean economy.

Later, when he checked back, the answer was

GPT-3: There is no known reason for this correlation.

Sounds better. Those humans had better not quit their jobs any time soon. Getting the program to scarf up trillions of documents is not really the same thing as giving it experience or thought patterns.


You may also wish to read:

Researcher: Fear of AI caused by four common misconceptions. AI isn’t panning out the way so many popular media articles predicted, and there are reasons for that, Melanie Mitchell says. Many believe that narrow intelligence achieved by computing is on a continuum with general intelligence, but there are good reasons for doubt.

and

Machines just don’t do meaning. And that, says a computer science prof, is a key reason they won’t compete with humans. Human understanding is grounded, as Prof. Mitchell says, in common-sense knowledge about how the world works and why things matter. Researchers have not been able to transfer this understanding to AI, but she worries that many teams are moving ahead with projects that would require such ability in order to be safe.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
