
The Danger AI Poses for Civilization

Why must Google be my helicopter mom?

The “intelligence” in artificial intelligence (AI) is a problem for civilization. If you look at what consumer AI stuff is actually used for, it’s always some non-essential, sometimes childish benefit. Even useful apps like Google Maps have a key side effect: I am tracked by my phone whenever I use navigation services. Why must Google be my helicopter mom? I own the capabilities I bought, so why aren’t I like Jason Bourne? Why am I a data point for a company instead?

That’s not “I’m an adventurer.” It’s “I’m an extra in a remake of Brave New World.” Gee, thanks.

As I noted in an earlier post, if I own a good pair of scissors or boots, they enhance my powers in a way that I own. Someone can seize my scissors (or hat, compass, or whatever). But no one can track me and turn me into a data point simply because I have them. They are simple extensions of my abilities.

AI decidedly does not work this way for us. The difference is very obvious and very important too. For one thing, there’s no equivalent anywhere of a stand-alone AI system. If I have a coffee cup with “AI inside,” it’s probably connected to the Internet, which is just another way of saying that my damn coffee cup is transmitting data about my coffee drinking habits to some company’s servers. Whatever benefit the app provides will come at the cost of my autonomy, privacy, and competence as a person. Maybe the app offers a hitherto unheard-of benefit. Let’s say it automatically keeps the coffee at a specified temperature. Maybe I don’t even mind being a data point for Company X while I drink coffee and read Kierkegaard. But again, the point is that to get that AI benefit, I must give away something. I have not just bought a tool. I have become petlike—tracked and monitored. That’s weird. It should bother us. Wherever you find putative benefits of AI, you get them only in exchange for your autonomy and privacy.
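To make the coffee cup concrete, here is a minimal sketch (toy physics, invented function names, nothing from any real product) of what the cup’s useful behavior actually amounts to. Notice that the thermostat logic needs no network connection at all; the telemetry call is a separate business decision bundled into the product:

```python
import random

SET_POINT_C = 60.0  # the temperature I asked the cup to hold

def read_temperature_c(current: float, heating: bool) -> float:
    """Toy physics: the cup warms when heated, otherwise drifts toward room temperature."""
    return current + (0.8 if heating else -0.4) + random.uniform(-0.1, 0.1)

def send_telemetry(reading: float) -> None:
    """Stand-in for the network call that makes me a data point. The cup works fine without it."""
    print(f"POST /drink-habits  temp={reading:.1f}")

temp, heating = 55.0, False
for _ in range(10):
    heating = temp < SET_POINT_C       # bang-bang control: this is the entire "AI"
    temp = read_temperature_c(temp, heating)
    send_telemetry(temp)               # ...and this is the price of admission
```

The useful part is a one-line conditional; everything else is surveillance plumbing.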

My phone now turns on my alarm for me because it “noticed” that I always set it for the same time. All of this feel-good, ease-of-use, non-essential stuff is a benefit that makes me feel like a child. Facebook notices pictures of my friends even if I didn’t ask it to. Is that a benefit for me? Pretty low bar for a benefit. But the problem is not the low bar for the benefit; the problem is the cost: an increase in dependence on externals that is basically infantilizing. AI is not empowering anyone, so far as I can tell, with the notable exception of governments and other organizations whose express purpose is tracking movements, enforcing rules, and treating people as data points.

It’s interesting that when we automate systems for non-frivolous purposes, we don’t tend to call it “AI.” For instance, the autopilot systems on modern airliners are pretty much just “AI” systems that take control from pilots, transferring it to a system that uses sensor input within some range of acceptable flying scenarios. That should be a major story for AI enthusiasts. But it’s not even discussed, largely because there’s no ready-made marketing pitch. People don’t want to hear about something like Boeing’s MCAS system flying the plane (technically, pushing the nose down on a steep climb)—they just want to know that the pilot in charge has all that handled. So suddenly it’s not “AI”—it’s just a bunch of cockpit machinery. But there’s no substantive difference between those aviation systems and “AI” in consumer goods, except that the infantilizing effect is only experienced by the pilots (the rest of us are in for a passive ride regardless).
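For illustration only, here is a cartoon of that kind of cockpit automation (not the real MCAS logic; the numbers are invented): a plain conditional on a sensor reading that overrides the pilot whenever the reading leaves an acceptable range. Nothing here learns, and nothing here would survive a marketing pitch as “intelligence”:

```python
def pitch_command(angle_of_attack_deg: float, pilot_input: float) -> float:
    """Return the pitch command actually sent to the control surfaces."""
    MAX_SAFE_AOA = 14.0   # hypothetical envelope limit, invented for this sketch
    if angle_of_attack_deg > MAX_SAFE_AOA:
        return -2.5       # push the nose down, overriding the pilot
    return pilot_input    # otherwise pass the pilot's input through

print(pitch_command(10.0, 1.0))   # within the envelope: pilot in control -> 1.0
print(pitch_command(16.0, 1.0))   # outside the envelope: automation takes over -> -2.5
```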

The main problem? It’s not the infantilization as such. It is the fact that AI systems transfer power asymmetrically to the owners of the systems, not to the users. Because modern AI systems require masses of data to work better, whoever owns all the data also claims the “AI.” So, weirdly, the Orwellian angle, the Big Brother stuff, and the increasingly passive and powerless consumer are actually becoming the modern story.

The problem is not that legislation or watchdog groups (fill in all the usual remedies available) are not active enough. The problem is that the tech’s actual capabilities serve owners of big data and big computers directly and customers only secondarily, in a compromised way. There’s a fundamental shift toward the large, the central, and the fragile—but that last point is another discussion. AI is driving that asymmetry of power; or, at least, it is the clearest exhibition of the asymmetry. It’s still an open question how we can design systems that don’t shift influence and control this way so that we can have powerful tech without becoming someone else’s tool.

The nub of the problem we face is this: If I have an “AI-inside” app on my phone that my friends and I built and no one else has access to or control of, it’s probably not very helpful or powerful. If I download one that Microsoft or Google designed “for me,” it probably works well. But they have the handle now. The app works so well because it mixes all my data (what I say, where I walk, when I have dinner—whatever) with a huge database of other people whose activities are similarly tracked and whose behaviors are similarly analyzed. Thus my consumer advantage increases for whatever benefit I seek from that app, but I become ever more a data point in that centralizing landscape. Unavoidable. Again: it’s an open question to what extent that one-sided power relationship can be changed.
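A small simulation (invented numbers, numpy only) shows why the pooled version always wins. The same trivial estimator, fed thousands of users’ logs instead of mine alone, recovers the underlying behavior pattern far more accurately, which is precisely the pull toward centralization described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume a shared "true" pattern: the probability a user orders coffee
# in each hour of the day (purely invented numbers for this sketch).
true_p = np.array([0.01] * 6 + [0.6, 0.8, 0.4] + [0.1] * 15)

def observe(n_users: int, n_days: int) -> np.ndarray:
    """Simulate usage logs: one row per user-day, 1 = ordered in that hour."""
    return rng.binomial(1, true_p, size=(n_users * n_days, 24))

def estimate(logs: np.ndarray) -> np.ndarray:
    """The 'AI': just the per-hour frequency. Applied statistics, nothing more."""
    return logs.mean(axis=0)

solo   = estimate(observe(n_users=1,      n_days=30))  # my data alone
pooled = estimate(observe(n_users=10_000, n_days=30))  # everyone's data

print("error, my data alone :", np.abs(solo   - true_p).mean())
print("error, pooled data   :", np.abs(pooled - true_p).mean())
```

The pooled estimate lands far closer to the true pattern, so the best version of the app is always the one that aggregates everyone. That, not malice, is the economic logic of the handle.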

We have no real models of AI that aren’t tied to Big Data. Because AI doesn’t really “work” without Big Data, there’s always a ready argument for centralization, which means that each customer is transformed into a data point. You can’t build “stand-alone” AI. Owners of systems ended up with a ton of power this way—even if they themselves didn’t intend it, or even see it coming at the start. Too late now. They have it.

You could see this as an extension of a certain logic going all the way back to nineteenth-century thinkers like Adolphe Quetelet (1796–1874), who realized that while we can’t predict when one person will die or get married, we can use statistics to understand trends in a population. So there were central repositories of data for actuarial and other purposes well before the advent of the computer. However, no one back then was claiming that statistics were “intelligent” as a stand-alone phenomenon. Everyone realized that it was just applied mathematics.
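Quetelet’s insight is easy to demonstrate. In the sketch below (the rate is invented), a single person’s outcome is a coin toss, while the rate across a million people is almost perfectly stable, year after year. Nothing about that is “intelligent”; it is the law of large numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
p_marry = 0.07  # hypothetical chance that a given person marries this year

one_person = rng.binomial(1, p_marry)                   # 0 or 1: unpredictable
population = rng.binomial(1, p_marry, 1_000_000).mean() # rate: highly predictable

print("does this person marry this year?:", bool(one_person))
print("population marriage rate          :", population)   # ~0.07, every run
```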

Perhaps if we stripped all this AI talk of the undeserved moniker (“intelligent”) and just called it statistical algorithms applied to our data, some of the silliest aspects of the discussion would iron themselves out. Facebook doesn’t have “AI”—it has a bunch of statistical algorithms that require that people post photos (for visual object recognition) and use the platform to communicate and voice their opinions (for text analysis). Google Translate needs texts written in different languages or there’s no “giant brain” for translating. Google makes money from the service, but the people writing the content aren’t paid—instead, they get to hear that AI makes it all possible. So this asymmetry is now dug deep into our society. The model is, at any rate, economically unsustainable. Probably the long-term psychological deficits it’s creating will make the current AI-driven central-platform model too culturally corrosive to be sustainable either. But it is very difficult to see how to change it meaningfully on a large enough scale to matter, and the question of rolling it all back is probably moot.


Other recent thoughts on the implications of high tech from our Analysis desk:

Technology centralizes by its very nature. Here are some other truths about technology, some uncomfortable ones. To see what I mean about centralization, consider a non-digital tool, say, a shovel. The shovel doesn’t keep track of your shoveling, read your biometrics, and store a file on you-as-shoveler somewhere. It’s a thing, an artifact. So you see, the new digital technology is itself the heart of the surveillance problem. No Matrix could be built with artifacts.

Superintelligent AI is still a myth. Neither the old classical approaches nor the new data scientific angle can make any headway on good ol’ common sense.

and

Can AI help Hollywood predict the next big hit? AI analysis sifts the past finely. But how well does the past predict the future?

