View of Lake Michigan from the Memorial Museum in Milwaukee, WI. Photo by Tom Barrett on Unsplash.

AI Winter Is Coming

Since the late 1960s, roughly every decade has seen a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.

Self-driving cars, robotic surgeons, and digital assistants seem to promise a world in which AI can do anything. And yet …

But it’s those very self-driving cars that are causing scientists to sweat the possibility of another AI winter. In 2015, Tesla founder Elon Musk said a fully-autonomous car would hit the roads in 2018. (He technically still has four months.) General Motors is betting on 2019. And Ford says buckle up for 2021. But these predictions look increasingly misguided. And, because they were made public, they may have serious consequences for the field. Couple the hype with the recent death of a pedestrian in Arizona, who was killed in March by an Uber in driverless mode, and things look increasingly frosty for applied AI.

Eleanor Cummins, “Another AI winter could usher in a dark period for artificial intelligence” at Popular Science

Cummins is not alone, nor is she the first to notice. Even venture capitalists have been asking, is AI doomed to yet another period of darkness?

What is an “AI Winter”? Researchers coined the term in the mid-1980s after observing that AI projects appeared to follow a boom-and-bust cycle, though you won’t hear much of that checkered history from boosters. Since the late 1960s, roughly every decade has seen a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.

So, is AI currently headed for another round of winter doldrums? Will it again retreat into relative hibernation as funding dries up and computer scientists take on other challenges? Always winter and never Christmas? I don’t believe so. Though AI has no hope of ever fulfilling the fever-dreams of Elon Musk or Ray Kurzweil, things really are different this time. But other dangers loom.

First, what caused previous AI winters? There was one straightforward reason: The technology did not work. Expert systems weren’t experts. Language translators failed to translate. Even Watson, after winning Jeopardy, failed to provide useful answers in the real-world context of medicine. When technology fails, winters come.

Nearly all of AI’s recent gains have been realized due to massive increases in data and computing power that enable old algorithms to suddenly become useful. For example, researchers first conceived neural networks—the core idea powering much machine learning and AI’s notable advances—in the late 1950s. The worries of an impending winter arise because we’re approaching the limits of what massive data combined with hordes of computers can do.
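To make that concrete, here is a minimal, hypothetical sketch of the perceptron learning rule, an algorithm dating to the late 1950s, written with today’s Python tooling. The data and numbers below are invented purely for illustration; the point is that only the cheap data and compute surrounding the old idea are new.

```python
import numpy as np

# A perceptron: Rosenblatt's late-1950s algorithm, written with modern tooling.
# The toy data below is synthetic, made up purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                          # 1,000 examples, 3 numeric features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)    # synthetic ground-truth labels

w = np.zeros(3)
b = 0.0
for _ in range(10):                                      # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        # Classic perceptron update: adjust weights only when the prediction is wrong
        w += (yi - pred) * xi
        b += (yi - pred)

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Scale that loop up to millions of examples and millions of weights and you have, in caricature, the recipe behind much of today’s deep learning boom.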

Though some among the current slate of so-called AI marvels are failures (I’m looking at you, Watson) and limits do exist, others are delivering as promised, in the real world, with live data—and therefore real problems. As long as an idea works, people will pay, funding will remain, and winter can be held off. But, as spectacular new results come less often, our attention will drift, and we’ll no longer “see” the AI around us, chugging away through unimaginable amounts of data to make decisions affecting us all. And that’s the problem.

A couple of months ago, Amazon “fired” an AI tool it had created to screen resumés when it learned the tool showed bias against women. If the company had read mathematician Cathy O’Neil’s book, Weapons of Math Destruction (available at Amazon, naturally), it might have been more wary. You see, she has raised the problem of data bias in AI, and in computer algorithms more generally, for years. While Amazon may have caught its misogynistic resumé reader in time, data-backed AI is quickly expanding into business and elsewhere; those systems, too, likely have hidden, embedded biases.

Here are a few other problems to watch for with modern, data-driven AI:

Data biases: This is Dr. O’Neil’s concern. The data used to train an AI system fails to include all the cases the system might encounter, leading to biased decisions of all kinds (such as denying loans or misinterpreting a medical condition). A toy sketch after this list shows how an underrepresented group can end up on the wrong side of a model’s decisions.

Plain ol’ mistakes: Even with sufficient, good data, AI remains an algorithm (of sorts). Algorithms can make mistakes. The problem with nearly all modern AI systems is that the “reasons” for a decision are opaque; that is, no one knows why the system chooses one thing over another. Mistakes, then, look just like right answers.

It happened, for example, when a hospital AI system informed doctors that patients with asthma would be at low risk for pneumonia complications if sent home. The problem is, asthma sufferers were at low risk for complications because they never were sent home; they were sent to intensive care instead. But the artificial intelligence system did not “know” that. It didn’t “know” anything. Fortunately, no one took its advice.

Failing off the edge: Last spring, Wei Huang’s Tesla drove head-first into a concrete highway barrier, causing the car to burst into flames and killing Huang. Tesla, naturally, reminded everyone that its “Autopilot” system was designed only to assist drivers. But what human would, under normal freeway conditions, suddenly, as if for no reason, decide to slam head-first into the lane dividers? Humans can make approximations. They can ease away from a decision. They can choose to go only partway. Computers, not so much. Because machines, at bottom, speak only zeros and ones, edges emerge in the system. Those edges can be fatal.

We have adversaries: Assuming, for the moment, that we can ameliorate the previous problems, modern AI remains fragile for other reasons. Last year, researchers fooled Google’s image recognition system into misidentifying a turtle as a gun. While that may be a humorous episode (as long as the system does not misidentify a gun as a turtle!), it points up a sober reality: Fooling modern AI systems is not that hard, especially when you’re determined to do so. For example, subtle changes to street signs could fool self-driving cars. Bad actors are well aware of these possibilities.

Alternate realities: We tend to believe what we see. Which is why, if we can make up what we see, we can affect what we believe. The same AI that can help us spot weapons in airplane luggage can be used to create fake images or video, indistinguishable from the real thing. So-called “deep fakes” can be so convincing that “even people who personally know the subject in question—President Obama, in one example—couldn’t tell it was fake” (BBC, 2017).
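To illustrate the data-bias problem flagged above, here is a small, hypothetical sketch (in Python, assuming numpy and scikit-learn are available): a toy “hiring” model is trained on data in which one group is heavily underrepresented, and qualified applicants from that group are then rejected far more often. All groups, features, and numbers are made up for illustration; this is a sketch of the failure mode, not a reconstruction of Amazon’s system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration only: the model is no fairer than its training data.
rng = np.random.default_rng(1)

def make_group(n, shift):
    """Generate n synthetic applicants: two skill features and a qualified/unqualified label."""
    X = rng.normal(loc=shift, size=(n, 2))
    y = (X.sum(axis=1) > 2 * shift).astype(int)   # what "qualified" looks like for this group
    return X, y

# Group A dominates the training set; group B's (different) pattern barely appears,
# so the model effectively learns only group A's notion of "qualified".
Xa_train, ya_train = make_group(5000, shift=1.0)
Xb_train, yb_train = make_group(100,  shift=0.0)
model = LogisticRegression().fit(
    np.vstack([Xa_train, Xb_train]), np.concatenate([ya_train, yb_train])
)

# At decision time, both groups show up in equal numbers.
for name, shift in [("A", 1.0), ("B", 0.0)]:
    X_test, y_test = make_group(2000, shift)
    qualified = y_test == 1
    hired = model.predict(X_test) == 1
    miss_rate = np.mean(~hired[qualified])         # qualified applicants wrongly rejected
    print(f"group {name}: qualified-but-rejected rate = {miss_rate:.2f}")
```

The point is not that the algorithm is malicious; it simply never saw enough of group B to learn what “qualified” looks like there. That is exactly the kind of hidden, embedded bias O’Neil warns about.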

So, what do we do? We should do what we do any time we have a technology that can both help and hurt. We should put boundaries in place to protect against errors. We should stop promoting and believing ridiculous hype. And we should never forget the absolute, stunning thing we call the human mind. Even if AI never experiences another winter.


Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Though he’s spent the majority of his career on other types of software, he’s remained engaged and interested in the field.

Also by Brendan Dixon: The “Superintelligent AI” Myth: The problem that even the skeptical Deep Learning researcher left out

and

There is no universal moral machine: Brendan Dixon’s view of MIT’s Moral Machine is featured.

