
How To Flummox an AI Neural Network

Kids can figure out the same-different distinction. So can ducklings and bees. But top AI can't.

Science writer John Pavlus identifies a key limitation of artificial intelligence:

The first episode of Sesame Street in 1969 included a segment called “One of These Things Is Not Like the Other.” Viewers were asked to consider a poster that displayed three 2s and one W, and to decide — while singing along to the game’s eponymous jingle — which symbol didn’t belong. Dozens of episodes of Sesame Street repeated the game, comparing everything from abstract patterns to plates of vegetables. Kids never had to relearn the rules. Understanding the distinction between “same” and “different” was enough.

Machines have a much harder time. One of the most powerful classes of artificial intelligence systems, known as convolutional neural networks or CNNs, can be trained to perform a range of sophisticated tasks better than humans can, from recognizing cancer in medical imagery to choosing moves in a game of Go. But recent research has shown that CNNs can tell if two simple visual patterns are identical or not only under very limited conditions. Vary those conditions even slightly, and the network’s performance plunges.

John Pavlus, “Same or Different? The Question Flummoxes Neural Networks” at Quanta Magazine (June 23, 2021)
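
To make the task concrete: each stimulus is a single image containing two small patterns that are either identical (“same”) or not (“different”). Below is a minimal sketch, in Python with NumPy, of how such pairs might be generated; it is illustrative only, and the helper names `make_pair` and `two_locations` are hypothetical, not from the study.

```python
# Minimal sketch of same/different stimuli (illustrative; not the SVRT code).
import numpy as np

rng = np.random.default_rng(0)

def two_locations(canvas: int, size: int):
    """Pick two non-overlapping top-left corners for size x size patches."""
    while True:
        ys = rng.integers(0, canvas - size, 2)
        xs = rng.integers(0, canvas - size, 2)
        if abs(int(ys[0] - ys[1])) >= size or abs(int(xs[0] - xs[1])) >= size:
            return (int(ys[0]), int(xs[0])), (int(ys[1]), int(xs[1]))

def make_pair(same: bool, canvas: int = 64, size: int = 8) -> np.ndarray:
    """Return a canvas x canvas image holding two random binary patterns."""
    img = np.zeros((canvas, canvas), dtype=np.float32)
    patch_a = rng.integers(0, 2, (size, size)).astype(np.float32)
    # "Same" trials reuse the first patch; "different" trials draw a fresh one.
    patch_b = patch_a if same else rng.integers(0, 2, (size, size)).astype(np.float32)
    (ya, xa), (yb, xb) = two_locations(canvas, size)
    img[ya:ya + size, xa:xa + size] = patch_a
    img[yb:yb + size, xb:xb + size] = patch_b
    return img
```

Enlarging the patches or spacing them farther apart at test time is exactly the kind of “superficial” change, described below, that the research found sends accuracy tumbling.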

It’s not just children, Pavlus tells us: ducklings and bees can figure it out too. Indeed, they would need to, just to live.

A convolutional neural network beat humans at Go, but that feat did not require solving the same-different question.

Part of the problem may be that “sameness” is an abstraction:

The researchers found that a CNN trained on many examples of these patterns could distinguish “same” from “different” with up to 75% accuracy when shown new examples from the SVRT image set. But modifying the shapes in two superficial ways — making them larger, or placing them farther apart from each other — made the CNNs’ accuracy go “down, down, down,” Ricci said. The researchers concluded that the neural networks were still fixated on features, instead of learning the relational concept of “sameness.”

John Pavlus, “Same or Different? The Question Flummoxes Neural Networks” at Quanta Magazine (June 23, 2021)
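
For a concrete picture of the training side, here is a minimal sketch, in PyTorch, of the kind of CNN binary classifier such a study uses: it maps one image containing two patterns to a single “same vs. different” output. The architecture and hyperparameters are illustrative assumptions, not those of the paper.

```python
# Minimal sketch of a same/different CNN classifier (illustrative assumptions).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # one logit: same (1) vs. different (0)
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step. images: (N, 1, 64, 64); labels: (N,) with 0/1 entries."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Feeding it stimuli like those from the make_pair sketch above:
# images = torch.stack([torch.from_numpy(make_pair(bool(y))) for y in ys]).unsqueeze(1)
```

The failure the researchers describe shows up only at evaluation: a model trained at one scale and spacing can fit its training distribution well yet drop toward chance when the test images use larger shapes or wider separations.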

Researchers are currently divided over whether the problem is fundamental to AI. One camp holds that “CNNs lack a fundamental reasoning capability that can’t be shored up with more data or cleverer training” (Firestone, Puebla); the other counters that “neural networks have been mathematically proved to be capable, in principle, of approximating any function” (Santoro).
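
For context, the claim Santoro invokes is the universal approximation theorem: for a fixed non-polynomial activation σ, one-hidden-layer networks can approximate any continuous function on a compact set K ⊂ ℝⁿ to any tolerance. Informally:

```latex
% Universal approximation theorem, informal statement:
\forall f \in C(K),\ \forall \varepsilon > 0,\ \exists N,\, \{v_i, w_i, b_i\} :
\quad \sup_{x \in K} \Bigl|\, f(x) - \sum_{i=1}^{N} v_i \, \sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon
```

Note what the theorem does not say: it guarantees that a suitable network exists, not that training on finite data will find one that generalizes. That gap is precisely where the two camps part ways.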

It comes down to a question about the nature of reality: Can every form of thought be reduced to a computation? If not, the same-different distinction may be a fundamental limit on computer-based AI.

There are, of course, other limitations to artificial intelligence as well, notably a lack of creativity: it cannot simply “evolve” intelligence or superintelligence, as in the sci-fi movies.


You may also wish to read: Sorry, AI can’t do our thinking for us. J. C. Derrick asked Robert J. Marks whether AI can outthink people or make humans immortal. (Robert J. Marks)

