(Photo: shovel, by Annie Spratt on Unsplash)

Do We Need To Learn from AI How To Think Better?

No, and a moment’s thought shows why not

This time, it’s not chess or Go but the video game StarCraft II: a DeepMind AI trounced a couple of the best professional players in January. The usual media narrative follows: AI marches on. Another game. Another defeat. Another advance. And so goes the same tired story.

But some, like technology writer Douglas Heaven, go a step further. He sees a coming meld between mind and machine leading to “superhuman” thought. After all, if we’ve taught AIs to the point where they can routinely crush us, perhaps we need to turn the tables and begin learning from them:

Many are startled by the ability of DeepMind’s AIs to make winning moves no human player would dream up, rewriting centuries-old playbooks. Tapping into these AIs can take players to a new level.

Douglas Heaven, “Mind meld: Artificial intelligence is improving the way humans think” at New Scientist

Do unexpected moves show that AI has achieved a level of understanding vaulting past the best of us?

Nope, not at all, and a moment’s thought makes clear why: Any interesting game includes a vast space of possible moves and play. Chess has roughly 10⁴⁶ legal board positions. Go, whose rules are simpler but whose play is more complex, has even more: over 10¹⁷⁰, which exceeds the number of atoms in the known universe.

Because the number of moves and positions is so large, any uninhibited search (that is, a search not restricted to strategies that we’ve devised) will come across unexpected moves. It is like throwing a dart at a barn: you will hit something, but not because you aimed.
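
A toy simulation makes the dart-at-a-barn point concrete. The numbers below are invented purely for illustration (real game spaces are astronomically larger); the point is only that random sampling from a vast space almost never lands inside the small region humans have mapped:

    import random

    # Toy numbers, purely illustrative: a "move space" far smaller than
    # Go's, and a "human playbook" covering only a sliver of it.
    TOTAL_POSITIONS = 10**12
    HUMAN_PLAYBOOK = set(range(10**6))  # positions humans have studied

    random.seed(0)
    samples = [random.randrange(TOTAL_POSITIONS) for _ in range(1000)]
    novel = sum(p not in HUMAN_PLAYBOOK for p in samples)
    # Almost certainly prints "1000 of 1000": nearly every random
    # sample is a "surprise" relative to the playbook.
    print(f"{novel} of {len(samples)} sampled positions lie outside the playbook")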

Game-playing AIs, including AlphaGo and AlphaZero, search willy-nilly (though they trim and evaluate their options as they proceed) and select moves that prior training suggests are likely to yield a win. But computers stumble across moves by chance, not by insight, and certainly not by strategy. (DeepMind acknowledges that AlphaGo used a “Policy Network” to pick the next move, based on statistical success, and a “Value Network” that, given a board position, predicts the winner.)
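
As a sketch of the mechanism, consider the hypothetical fragment below. It is not DeepMind’s implementation (the real systems embed these networks inside a Monte Carlo tree search), and the policy_net, value_net, and board.apply interfaces are assumptions invented for illustration. It only shows the idea of choosing a move by statistical promise rather than insight:

    def select_move(board, legal_moves, policy_net, value_net, top_k=5):
        """Pick a move by statistical promise, not insight (illustrative only).

        Assumed interfaces (hypothetical, not a real library):
          policy_net(board) -> dict mapping each move to a prior probability
          value_net(board)  -> estimated win probability for our side
          board.apply(move) -> the resulting board position
        """
        priors = policy_net(board)
        # Keep the few moves the policy network rates most promising...
        candidates = sorted(legal_moves, key=lambda m: priors.get(m, 0.0),
                            reverse=True)[:top_k]
        # ...then evaluate each resulting position with the value network
        # and return the move with the best predicted outcome.
        return max(candidates, key=lambda m: value_net(board.apply(m)))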

Humans cannot motor through huge numbers of possible moves. So we rely on historical strategies and insights to limit the possible moves we must consider to a much, much smaller number. We aim at targets. As a result, any move that a game-playing AI makes that is not drawn from approaches familiar to us will surprise us. And, sometimes, those unexpected moves can be good ones.

Heaven is correct to point out that such AI could help us, but not for the reason he supposes:

Assuming such techniques work and we can build ever better AIs, the most promising possibility is that they will become our collaborators.

Douglas Heaven, “Mind meld: Artificial intelligence is improving the way humans think” at New Scientist

AI can become our collaborator only in the sense that a shovel can collaborate with me to dig a hole: it amplifies my power to do things that are otherwise difficult for me.

Humans have limits; that’s no surprise. And we’ve built tools to amplify our powers past those limits: to dig canals, build hundred-story buildings, and create the machines we call AI.

So is AI useful? Yes; I would not dig a hole to plant a shrub without a shovel. We can use AI, properly vetted, to amplify our abilities.

Is AI becoming intelligent and ready to supersede us? Should we now sit at its feet to learn? No and no. I will not trade my hands for shovels; nor will I give up my mind for the mere calculation of a machine, no matter how useful it can sometimes be.


If you enjoyed this item, you may also be interested in these articles by Brendan Dixon:

DeepMind’s AlphaGo defeated a world-champion Go player, but further gains were hard-won at best. The question scientists must ask, especially about an unexpected finding, is this: If no one can reproduce your results, did you discover something new or did you just get lucky? With AI, that’s not easy to answer, because of its dependence on randomness.

Alexa really does NOT understand us. In a recent test, only 35 percent of the responses to simple questions were judged adequate. Actually, I am impressed that voice assistants work as well as they do, given the number of AI problems that had to be solved. But consider how much more complex the problems facing a self-driving car are.

and

Has Aristo broken bounds for thinking computers? The Grade 8 graduate improves on Watson, but we must still think for ourselves at school. Here’s why: Aristo combines questions and answers on a multiple-choice test to decide on the best answer without understanding any of the information.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders such as Microsoft and Amazon, as well as for numerous start-ups. While he has spent most of that time on other types of software, he has remained engaged with and interested in Artificial Intelligence.
