
Too Big to Fail Safe?

If artificial intelligence makes disastrous decisions from very complex calculations, will we still understand what went wrong?

A neuroscientist offers an example of the kind of thing that can go wrong while the AI system is still small and focused enough to be easily understood:

As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.

Take, for example, an episode recently reported by machine learning researcher Rich Caruana and his colleagues. They described the experiences of a team at the University of Pittsburgh Medical Center who were using machine learning to predict whether pneumonia patients might develop severe complications. The goal was to send patients at low risk for complications to outpatient treatment, preserving hospital beds and the attention of medical staff. The team tried several different methods, including various kinds of neural networks, as well as software-generated decision trees that produced clear, human-readable rules.

The neural networks were right more often than any of the other methods. But when the researchers and doctors took a look at the human-readable rules, they noticed something disturbing: One of the rules instructed doctors to send home pneumonia patients who already had asthma, despite the fact that asthma sufferers are known to be extremely vulnerable to complications.

Aaron M. Bornstein, “Is Artificial Intelligence Permanently Inscrutable?” at Nautilus

The machine’s job was to discover a true pattern in the data. It did. The pattern was that asthma sufferers rarely developed complications.

The reason for the pattern was hospital policy: asthma sufferers with pneumonia were routinely sent to intensive care, and as a result they seldom developed severe complications. But the artificial intelligence system did not “know” that. It did not “know” anything at all.
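To see how that kind of mistake arises, here is a minimal sketch, with entirely made-up numbers, of a model trained on outcomes that already reflect a hidden treatment policy. It is not the UPMC data or code, only an illustration of the mechanism: the model never sees the intensive-care decision, so it attributes the lower complication rate to asthma itself.

```python
# Toy illustration (hypothetical data): a hidden treatment policy can make
# the riskiest patients look like the safest ones in the training data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 10_000

# Feature: does the pneumonia patient also have asthma?
asthma = rng.random(n) < 0.15

# Hidden policy: asthma patients are routed to intensive care,
# which sharply reduces their chance of severe complications.
intensive_care = asthma

# Assumed true risk: asthma raises risk, intensive care lowers it.
base_risk = np.where(asthma, 0.30, 0.10)
observed_risk = np.where(intensive_care, base_risk * 0.2, base_risk)
complication = rng.random(n) < observed_risk

# The model sees only the patient feature, not the treatment decision.
X = asthma.reshape(-1, 1).astype(int)
y = complication.astype(int)

tree = DecisionTreeClassifier(max_depth=1).fit(X, y)
print("Predicted risk without asthma:", tree.predict_proba([[0]])[0, 1])
print("Predicted risk with asthma:   ", tree.predict_proba([[1]])[0, 1])
# The learned rule says asthma patients are *lower* risk -- exactly the
# "send them home" rule that alarmed the doctors.
```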

Bornstein goes on to discuss the problem of artificial systems becoming so large that they are “inscrutable,” so that no one really knows why a potentially disastrous output was produced. It’s good to know we are not there yet.

See also: Software Pioneer Says General Superhuman Artificial Intelligence Is Very Unlikely

Meaningful information vs. artificial intelligence (Eric Holloway)

