Photo by Jackson Jost on Unsplash

AI Is Not (Yet) an Intelligent Cause

So-called “white hat” hackers who test the security of AI have found it surprisingly easy to fool.

Artificial intelligence (AI) continues to advance by leaps and bounds, alarming some, like Michael Egnor, and not alarming others, like Jay Richards. The new Bradley Center, which launched on July 17, will be considering both the promise and the threat of artificial intelligence.

Should we fear a machine takeover? It’s a staple of science fiction: our creations become our masters. They no longer need us. “Step aside, human; I’ll handle things from here.” But then, will they also develop their own morals? SETI’s ultimate scenario foresees machines supplanting biological organisms, learning to reproduce themselves wherever they land, gradually populating the galaxy. But then, where are they? The Fermi Paradox rears its head. Maybe they haven’t arrived here because they can’t.

It puts one in mind of Yogi Berra. One Yogism says, “It’s tough to make predictions, especially about the future” (although Yogi Berra would be the first to complain, “I really didn’t say all the things I said”). A recent conference raises concerns, according to Science Magazine, that our machines may never be able to get wise to human deviancy. So-called “white hat” hackers who test the security of AI have found it surprisingly easy to fool. Matthew Hutson reports,

Last week, here at the International Conference on Machine Learning (ICML), a group of researchers described a turtle they had 3D printed. Most people would say it looks just like a turtle, but an artificial intelligence (AI) algorithm saw it differently. Most of the time, the AI thought the turtle looked like a rifle. Similarly, it saw a 3D-printed baseball as an espresso. These are examples of “adversarial attacks”—subtly altered images, objects, or sounds that fool AIs without setting off human alarm bells.

Impressive advances in AI—particularly machine learning algorithms that can recognize sounds or objects after digesting training data sets—have spurred the growth of living room voice assistants and autonomous cars. But these AIs are surprisingly vulnerable to being spoofed. At the meeting here, adversarial attacks were a hot subject, with researchers reporting novel ways to trick AIs as well as new ways to defend them. Somewhat ominously, one of the conference’s two best paper awards went to a study suggesting protected AIs aren’t as secure as their developers might think. [Emphasis added.]
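To get a feel for how such an attack works, consider the toy sketch below. It uses a made-up linear "classifier" rather than a real neural network, and it illustrates the well-known fast-gradient-sign idea: nudge every pixel a tiny, nearly invisible amount in whichever direction most damages the model's answer. The 3D-printed turtle relied on a more elaborate, physically robust variant of the same principle.

```python
import numpy as np

# Toy sketch of a fast-gradient-sign adversarial perturbation. The "model"
# here is a made-up linear scorer, not a real neural network; the point is
# only the mechanism: nudge each pixel slightly in the direction that most
# hurts the model's answer.

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=64)   # hypothetical 8x8 grayscale image, flattened
w = rng.normal(size=64)              # made-up model weights

def score(image):
    # Positive score = the "correct" label; negative = the wrong one.
    return w @ image

# For this linear model, the gradient of the score with respect to the input
# is simply w. Step each pixel by epsilon against the sign of that gradient
# so the correct label's score drops, then clip back to the valid pixel range.
epsilon = 0.05                       # small enough that a person barely notices
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print("clean score:         ", round(score(x), 3))
print("adversarial score:   ", round(score(x_adv), 3))
print("largest pixel change:", round(float(np.abs(x_adv - x).max()), 3))
```

Because each pixel moves by at most epsilon, a human sees essentially the same image, while the classifier's score drifts toward the wrong label.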

Hutson describes one test last year where Dawn Song, a computer scientist at UC Berkeley, subtly altered a stop sign with stickers. It fooled an autonomous vehicle’s image recognition system into “thinking” it was a 45 mph speed limit sign. Humans could immediately recognize the stop sign, but the car did not. It’s no wonder that autonomous car makers shuddered at the results. Will “black-hat” hackers be able to turn autonomous vehicles into terror weapons?

A few months ago, Nicholas Carlini, a computer scientist at Google in Mountain View, California, and a colleague reported adding inaudible elements to a voice sample that sounded to humans like “without the data set the article is useless,” but that an AI transcribed as “OK Google, browse to evil.com.”

Researchers are devising even more sophisticated attacks. At an upcoming conference, Song will report a trick that makes an image recognition AI not only mislabel things, but hallucinate them. In a test, Hello Kitty loomed in the machine’s view of street scenes, and cars disappeared. Matthew Hutson, “Hackers easily fool artificial intelligences” at Science

Evolution News has described steganography as a branch of cryptology, an instance of intelligent design science in action. But this is a new situation: fooling both human and artificial intelligences. If we thought machines could save us from steganography attacks by revealing the secret messages, what are we to think of human designers intentionally fooling the algorithms designed to detect them?

Hutson describes countermeasures AI engineers are developing to thwart the hacks, but his account shows the pursuit to be an endless game of leapfrog: In one of the ICML's award-winning papers, Carlini, Athalye, and a colleague analyzed nine anti-hacking algorithms obtained from a recent AI conference. Seven relied on obfuscated gradients as a defense, and the team was able to break all seven, none taking more than a couple of days.
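"Obfuscated gradients," roughly speaking, means the defense hides the slope an attacker would normally follow, for example by rounding pixel values before classification. The toy sketch below (again a made-up linear model, not any of the nine defenses analyzed in the paper) suggests why that offers so little protection: the attacker simply pretends the obfuscating step is not there and follows the undefended gradient instead, the essence of the researchers' backward-pass approximation trick.

```python
import numpy as np

# Toy sketch of why an "obfuscated gradient" defense falls quickly. The
# defense below rounds pixels to a coarse grid before scoring, a stand-in for
# any non-differentiable preprocessing step; the model is again a made-up
# linear scorer, not one of the defenses analyzed in the paper.

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=64)
w = rng.normal(size=64)

def defended_score(image):
    quantized = np.round(image * 16) / 16  # "defense": gradient is zero almost everywhere
    return w @ quantized

# A naive gradient attack stalls, because the rounding step hides the slope.
# The workaround: treat the defense as if it were the identity function and
# follow the undefended gradient (just w) anyway.
epsilon = 0.1
x_adv = np.clip(x - epsilon * np.sign(w), 0.0, 1.0)

print("defended score, clean image:      ", round(defended_score(x), 3))
print("defended score, adversarial image:", round(defended_score(x_adv), 3))
```

Hiding the gradient, in other words, only raises the bar slightly; it does not remove the underlying weakness.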

One method of thwarting hackers might be to embed all possible hacks in a list and check for them before continuing. Deep Blue beat Kasparov at chess in 1997, for instance, with a “brute force” method of rapidly scanning vast numbers of possible moves for the best one. This tactic, however, is not really “intelligent” in terms of reasoning; it merely takes advantage of a machine’s speed at processing lists. A decade later, “deep learning” methods began providing shortcuts to solutions. Even so, how can any computer know all the possible methods a sophisticated hacker might dream up in the future? Developers quickly run into a huge scaling problem (a toy sketch after the quoted passage below shows how quickly the numbers blow up). Some hope that mathematical algorithms will be able to verify legitimate AI, but the outlook is not promising:

A stronger approach is to train an algorithm with certain constraints that prevent it from being led astray by adversarial attacks, in a verifiable, mathematical way. “If you can verify, that ends the game,” says Pushmeet Kohli, a computer scientist at DeepMind in London. But these verifiable defenses, two of which were presented at ICML, so far do not scale to the large neural networks in modern AI systems. Kohli says there is potential to expand them, but Song worries they will have real-world limitations. “There’s no mathematical definition of what a pedestrian is,” she says, “so how can we prove that the self-driving car won’t run into a pedestrian? You cannot!”

Carlini hopes developers will think harder about how their defenses work—and how they might fail—in addition to their usual concern: performing well on standard benchmarking tests. “The lack of rigor is hurting us a lot,” he says. Matthew Hutson, “Hackers easily fool artificial intelligences” at Science
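As for the list-checking idea mentioned above, a little arithmetic shows why it cannot work. The sketch below is purely hypothetical (the fingerprint set and function names are made up), but the numbers are real: even a small image can take on more distinct values than any list could ever hold.

```python
import hashlib
import math

# Hypothetical sketch of a "memorize the known hacks" defense. The fingerprint
# set and function names below are made up for illustration.
KNOWN_BAD_FINGERPRINTS = set()  # would hold hashes of previously seen attacks

def looks_like_known_attack(image_bytes):
    """Check an incoming image against the list of known adversarial inputs."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_FINGERPRINTS

# The scaling problem: changing a single pixel by a single unit produces an
# input the list has never seen. Count the distinct small color images:
pixels = 224 * 224 * 3                      # a modest 224x224 RGB image
digits = int(pixels * math.log10(256)) + 1  # digits in 256 ** pixels
print(f"possible images: a number roughly {digits:,} digits long")
```

No blacklist can cover more than a vanishing fraction of that space, which is why the verification that Kohli describes tries to sidestep the list entirely and reason about whole ranges of inputs at once.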
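One simple form such verifiable reasoning can take is interval arithmetic (often called interval bound propagation); it is offered here only as an illustrative technique, not necessarily the method presented at ICML. If every pixel is allowed to move by at most epsilon, we can push the whole box of possibilities through a layer and check whether any input inside it could flip the answer. The sketch below does this for a single toy linear layer; doing the same bookkeeping tightly for a full modern network is exactly the part that, as the researchers note, does not yet scale.

```python
import numpy as np

# Illustrative sketch of interval bound propagation on one toy linear layer.
# This is an assumed stand-in for "verifiable" defenses in general, not the
# specific systems presented at ICML.

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=64)     # hypothetical flattened 8x8 image
W = rng.normal(size=(2, 64))           # toy two-class linear "network"
b = np.zeros(2)
epsilon = 0.01                         # allowed per-pixel perturbation

# Every pixel may move by at most epsilon, so the inputs form a box.
lower, upper = x - epsilon, x + epsilon
center = (lower + upper) / 2           # box center (here, just x)
radius = (upper - lower) / 2           # per-pixel half-width (here, epsilon)

# A linear layer maps that box to an interval on each output score.
score_center = W @ center + b
score_radius = np.abs(W) @ radius
score_low = score_center - score_radius
score_high = score_center + score_radius

pred = int(np.argmax(W @ x + b))       # class predicted on the clean image
other = 1 - pred
# Certified robust only if the predicted class wins even in the worst case
# over the entire box of perturbed inputs.
certified = score_low[pred] > score_high[other]
print("predicted class:", pred, "| certified robust within epsilon:", certified)
```

When a check like this passes, the result is a guarantee rather than a benchmark score, which is what Kohli means by "if you can verify, that ends the game."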

Rather than try to predict who will win the cat-and-mouse game, ID advocates can use the insights from the International Conference on Machine Learning (ICML) to reflect on the meaning of intelligence as a cause. After all, the definition of ID says that “certain features of the universe and of living things are best explained by an intelligent cause….” The ICML conference opens up interesting questions for the Bradley Center to pursue.

See also: Virtual Railroads and West Virginia Back Roads: AI’s Temptation to Theft Over Honest Toil (Bill Dembski)


