
The Game-Playing AI Does Not Always Win, It Turns Out

Enterprising researchers beat KataGo at Go by taking advantage of its known blind spots

At Vice, science writer Tatyana Woodall tells us that clever researchers developed a rival adversarial AI to trick KataGo into losing games:

Players have often used KataGo to test their skills, train for other matches, and even analyze past games, yet in a study posted recently on the preprint server arXiv, researchers report that by using an adversarial policy—a kind of machine-learning algorithm built to attack or learn weaknesses in other systems—they’ve been able to beat KataGo at its own game between 50 to 99 percent of the time, depending on how much “thinking ahead” the AI does. Funnily enough, the new system doesn’t win by trumping KataGo all out, but instead by forcing KataGo into a corner, essentially tricking it into offering to end the match at a point favorable to its adversary. “KataGo is able to recognize that passing would result in a forced win by our adversary, but given a low tree-search budget it does not have the foresight to avoid this,” co-author Tony Wang, a Ph.D. student at MIT, said of the study on the site LessWrong, an online community dedicated to “causing safe and beneficial AI.”

Tatyana Woodall, “Scientists Found a Way to Defeat a ‘Near-Superhuman’ Go-Playing AI” at Vice (November 10, 2022). The paper is open access.

The researchers were taking advantage of KataGo’s known blind spots, and they hope to learn more about how powerful AIs approach problems. According to one of the study’s authors, Adam Gleave, a powerful AI carries out tasks very differently from a human, so when it fails, it fails in “actually very surprising and alien ways.” A complete lack of foresight appears to be, in this case, one of them.

As tech philosopher George Gilder points out, the reason chess- and Go-playing machines can usually win is that, in those games, the map is the territory. So mastery of the map usually means mastery of the territory. Real life doesn’t work that way at all, which is why such machines don’t pull a Skynet and take over the world.

And then along come some humans who study the AI’s specific weaknesses and take advantage of them. That wasn’t on the map.

You may also wish to read: Are computers that win at chess smarter than geniuses? No, and we need to look at why they can win at chess without showing even basic common sense. AI succeeds where the skill required to win is massive calculation and the map IS the territory. Alone in the real world, it is helpless. (George Gilder)

