
The Real Threat AI Poses Is the “I” That Controls It

As AI becomes a part of everyday life, the science fiction glow fades; the constant high-tech surveillance intensifies

In 2014, I attended a talk by Oxford philosopher Nick Bostrom, who drew attention to his just-published book, Superintelligence: Paths, Dangers, Strategies, by warning us all about the threat of superintelligent AI. AI is an existential threat, he explained, a possible destroyer of worlds that we aren’t treating seriously enough. The audience, it seemed, shared his concern, and the Q&A featured plenty of serious questions and comments about what to do when AI becomes smarter than Einstein—in other words, smarter than any of us.

I treated the talk as a diversion, a lesson in, say, the sociology of technology, where strange futuristic theories get taken seriously without any evidence or basis. Later, I wrote an article in The Atlantic about AI hype, highlighting what I then called a counterculture of thinkers who question Bostrom-like worries about the imminent rise of machines.

The narrative is different today. It’s not that the countercultural voices won. So far as I can tell, we have the same cast of characters prognosticating about AI and the future today. Few people openly scoff at, say, Elon Musk’s declaration that we’re “summoning the demon” with true AI (the thinking kind that Bostrom warns us about). Futurist Ray Kurzweil, a director of engineering at Google, still commands large audiences eager to hear his divinations of the inevitable arrival of truly intelligent AI—AI with common sense, for one, but also AI with a mind and feelings. The future talk is largely the same.

If anything, futurism about the arrival of true AI has become de rigueur. It’s so mainstream that Forbes or the BBC might still run a story about the coming lifelike AI even as, weirdly, Wired magazine, the putative home of tech futurism, does so less and less. The old narrative hasn’t died so much as faded; the Promethean excitement it generated a few years ago has ebbed by degrees, and it rings hollower now.

The trend is welcome, in large part because discussion about AI is finally starting to sound more realistic. AI is actually in the world now—the world we live in. Pundits and the rest of us worry about surveillance states, rigged elections, and politically incorrect results (suddenly recognized as such) from mindless machines owned by billionaires. AI pressed into service by well-meaning companies can create unintended and not-so-funny blooper reels, providing a lesson about blind computation. Consider, for example, misclassified photos online (as when Google Photos tagged two African-Americans as gorillas), hate speech (as with Microsoft’s hapless Tay chatbot, which was pulled less than 24 hours after launch), and, more tragically, some very stupid fatal accidents involving self-driving cars (which, thankfully, have kept autonomous navigation in a research phase).

Embarrassing AI failures are reminders of what should be obvious, sans all the futurism: machines are designed by people and don’t have any intelligence themselves. “Trust” is now a meme among the AI crowd, another reminder that AI is not getting smarter so much as it’s becoming ubiquitous, and the failures we see now matter—they are part of our everyday lives.

Bostrom’s warning about the arrival of superhuman “intelligence” on supercomputers running statistical algorithms (or what have you) is great sci-fi, no doubt. But when all the buzz about AI lands someone behind the wheel of a real automobile that’s really moving, steered by a bunch of computer code, existential threats seem less pertinent. Trust seems more like what is expected of us. And now.

The point is, “AI” exemplifies an age-old problem for society: wisely assessing the use of a technology, which is often (if not always) a coin with two sides. In a very real sense, questions of trusting (or fixing) AI are on a par with discovering that old refrigerators could trap children inside. Technology has a downside, which requires human smarts to sort out. It’s not magical or demonic. It’s stuff that we build.

Trust is just the first early warning sign of what might become a new AI “winter,” the cyclical loss of confidence in AI as it fails to make good on promises. The 2010s—viewable in pretty clear retrospect already—were the ultimate AI party. But they’re over. It all happened quickly. Deep learning algorithms debuted at the well-known ImageNet competitions, a yearly test of “visual object recognition”—recognizing objects in Flickr photos, in other words. Deep learning was AI on steroids, winning the contest easily in 2012. That success accounts for the resurgence of interest in other tasks requiring “eyes” of sorts, like driving a car. There are about 1,500 self-driving cars on the road, thanks largely to AI (though, thankfully, they’re all in the testing phase).

Deep learning has also spurred next-generation voice recognition systems like Siri, and some small successes in text-based applications like Google’s Talk to Books. But Siri still doesn’t understand anything (and you can’t really talk to books, alas). Optimistically, we might argue that Siri and other AI-inside applications are more useful today because of the AI summer we’ve just enjoyed. Pessimistically, nothing really substantive happened. The world is just more chock-full of technology, all of it labeled “AI.”

Bostrom’s message at the height of the excitement was predictable, perhaps (unlike the super-AI itself). Also predictable was the way science and tech luminaries like Elon Musk, Bill Gates, and—famously—the late astrophysicist Stephen Hawking weighed in on the threat and promise of superintelligent AI—the kind that can hold a conversation with us, think like us, and then surpass us.

Listening to Musk, one imagines that an AI might wake up one day, scratch its silicon head, and decide that it’ll just take over the human world. But anyone familiar with even a potted history of the field of AI recognizes the same silly narrative here—and we should stop falling for it.

The real story, as we enter the next real decade, is the threat of mindless technology everywhere. And central to this new worry is a very human issue: who owns “AI”? Who’s using it, for what, and how? Along with trust issues, the other early warning sign we might have heeded instead of the hype is the coming surveillance state, made possible by—what else?—AI technology. There’s no futuristic AI in sight. But despots and unscrupulous governments and companies love AI all the same.

The Carnegie Endowment for International Peace has released a report, The Global Expansion of AI Surveillance (2019), which is a warning of sorts about summoning the devil of AI. But the report is about the human uses of AI, not the coming of HAL 9000. As I was chuckling self-assuredly at Bostrom’s warnings of a coming superintelligence (“Don’t people realize that AI has zero common sense?”), China started aggressively funding the development and deployment of AI to create an actual dystopia, a sci-fi world we should worry about: an increasingly terrifying and oppressive surveillance infrastructure that tracks the movements and behaviors of its citizens. Central governments with control issues love AI—and the more mindless (but effective), the better. Chinese and US AI technology has spread to countries around the globe, where it can be (and is) used with good or ill intent by its human owners. As the report indicates, “The most important factor determining whether governments will deploy this technology for repressive purposes is the quality of their governance.” The devil isn’t in the AI.

Thankfully, the Carnegie group, recognizing the very real and seismic threat, created an AI Global Surveillance (AIGS) Index, which tracks the adoption of AI technology for surveillance purposes in 176 countries. Surveillance is on the rise, it turns out. One of the key findings is that AI surveillance is spreading much faster than experts predicted, creating a host of threats to peace and privacy worldwide.

Unintended consequences and downright nefarious intentional uses accompany major AI projects like smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries). The adjective “smart” here doesn’t add much to our understanding. The technologies in question aren’t really “AI,” either; they’re supercomputers crunching volumes of citizen data. Virtual reality pioneer and AI gadfly Jaron Lanier summed things up nicely when he quipped that “whoever owns the largest computer, wins” when it comes to modern data-centric AI. That’s not typically private citizens. It’s governments. Here’s to the new existential threat.

We really do have something to worry about with AI, after all. It’s just not what we expected. Superintelligence isn’t really imminent, and may not even be inevitable. It’s little relief, though, if troubling surveillance and the erosion of privacy and human rights can happen—is happening—without it. The more important prediction about AI, the spread of surveillance technology, wasn’t aggressive enough: we failed to see the future that was actually coming. China, Russia, Saudi Arabia, and other countries with questionable human rights records have now achieved what amounts to a lock-in, with major AI systems already funded, developed (or purchased), and deployed. (Maybe Bostrom was onto something after all.) Someone might have raised the issue earlier, but in the West we were distracted, worried about AI “coming alive” rather than taking pictures of us and tracking us for the all-too-human reasons of its owners.

Congratulations, futurists and skeptics alike. We—all of us—are left in the real world, as it rapidly transforms and spreads capabilities that will cause real pain, summoning real demons.

Note: The photo of Nick Bostrom at Oxford is courtesy of the Future of Humanity Institute (CC BY-SA 4.0).


Further reading on global AI and surveillance (these articles are by Heather Zeiger):

China: Sophisticated surveillance decides who gets sent to Uyghur camps. The leak of documents from police in Karakax County in Xinjiang reveals the details of everyday life that can send a Uyghur to the camps.

Why China leans hard on Central Asia. The region is critical to China’s ambitions, hence the generous offers of state-of-the-art surveillance technology.

In China, high-tech racial profiling is social policy. For an ethnic minority, a physical checkup includes blood samples, fingerprints, iris scans, and voice recordings. The Chinese government seeks a database of everyone in the country, not only to track individuals but to determine the ethnicity of those who run up against the law.

and

The internet doesn’t free anyone by itself. China is testing 100% surveillance on the Uyghurs, a strategically critical minority.

