
What Mission Impossible Tells Us About AI Mythology

If you’re looking for an intelligent take on existential risk and superintelligent AI, the latest Mission Impossible movie is not for you.

Tom Cruise — I mean Ethan Hunt — likes to run. He likes to ride motorcycles. He's always down to race fast cars. He's all in on leaping out of windows, BASE jumping, and hand-to-hand combat. And he always wins the heart of the beautiful girl. You'll see all this in Mission Impossible: Dead Reckoning, Part One, the latest in the long-running film series and its foray into "existential risk" thinking about superintelligent AI. "The Entity," as it's called, is the nemesis that Hunt and the other members of the mum's-the-word spy organization IMF (Impossible Mission Force) must confront, against all odds, as it's smarter than any human and learning constantly.

The Entity cleverly captures personal information about Hunt's steadfast sidekick, Benji Dunn (played by English actor and comedian Simon Pegg), by planting a fake nuclear weapon in a train station and requiring Dunn to answer riddles and questions about his personal life before he can defuse it. There's no bomb. It's just a ploy to learn more about Dunn. We're dealing here with a vastly superior, ultra-witty superintelligent AI, bent on taking over the world by infiltrating the world's defense systems. How do we stop it? Who knows? It's, well, smarter than us.

The arch-villains in cinema’s other long-running action series, the Bond movies — Dr. No was released in 1962, with Sean Connery debuting as 007 — are quirky geniuses with visions of world domination, too. They chuckle at evil, elegantly stroke white cats in posh digs, and watch fighting fish duel it out in fish bowls for amusement. They’re human, if delusional and evil, and they are typically done in by their megalomania and mad mental blind spots. Bond seems to outwit them by having—let’s face it—more common sense. He’s just cooler, basically. The movies seem to warn us that being too smart and having grand visions of taking over the world must end in ruin. The AIs haven’t learned this lesson yet, apparently.

Back to Mission Impossible. "The Entity" is a mashup of villainy, and at times is a shadow unseen, like a looming Poltergeist, soon to terrify by entering the physical world. At other times it makes strange, scary noises like a demonic force, something out of The Ring, as it computes the best way to eliminate humankind. It's a ghost in the machine, and a horror movie monster to boot. As a plot device, it's annoying. It's a deus ex machina that serves mainly to keep Cruise running, shooting, riding, driving, and loving. And that's the problem. The most successful movies in the "AI" genre, loosely speaking, embody their AIs. Blade Runner had replicants. The Terminator had Schwarzenegger's abs and pipes — talk about "embodied"! The body-less Entity in the latest Mission Impossible movie, we're told, is running the whole show. Fine. Frankly, it's boring. The movie is only saved by the human action, where derring-do and heroism keep the film emotional and interesting enough. The Entity is a non-Entity.

The filmmakers handle this ho-hum by marrying The Entity to a mercurial human personage known as "Gabriel," who, we learn, once killed Marie, one of Hunt's early crushes, and has since haunted Hunt's dreams. A shadowy, steely-gazed evildoer, Gabriel is a real person — he's the human equivalent of the androids and serves their function on screen, but he's an actual dude. His dudeness allows The Entity to take on a body, so it can get a better seat for the show: the stream of turbocharged events that make the actual movie a movie. Superintelligence, it turns out, is boring. (As the critics say: then just unplug it.) Let's move on.

In an early scene, National Intelligence Director Denlinger announces in a top-secret meeting that the AI experiment, begun by US intelligence agencies and tested on a Russian submarine (no problem there), has rewritten its own code, started learning on its own, and since become "sentient." It's a creature. It's alive. It now wants to take over the world — who can say why, exactly? We're pigeons outside its window. We're the dumber entity. It does what it does for reasons we can't fathom.

Phlegm Theories

Ages ago, I spent a year studying at the University of Arizona in Tucson, where a young hipster called David Chalmers had arrived as a new professor. I wanted to study the philosophy of mind and consciousness, of course. I was on an exchange with my program at UT Austin. Chalmers has since become famous for making a simple but powerful point: consciousness isn't reducible to whatever the brain is doing, because the "functions and structures" of scientific explanation don't tell you what a red wine tastes like on the tongue, or an ice cube. A pain in the foot. Your mother-in-law. The philosophical "proof" of what's called qualitative or sentient or conscious experience (I'm running these together for our purposes here — in philosophy we would spend another four thousand hours teasing them apart) gets complicated, involving something called modal logic, which deals with possibility and necessity. The upshot of all the math logic scribbles is that a philosophical zombie — a creature that acts exactly like any other creature with consciousness but has none — is logically possible, and therefore there can't be a reduction of the mind to the functions and structures of the brain. Consciousness is a separate fact about the world. It isn't contained in the facts about the brain.

At the time (this was, to date myself, 1998), philosophy departments were drawing lines in the sand between incompatible stances on consciousness: it's just an illusion, and the brain as science describes it is all there is; it's something "epiphenomenal," emergent from the brain but genuinely different (a different property); or the traditional Cartesian view: we have matter (the brain), and we have mind (where consciousness is to be understood and where it resides). It was fun stuff. The debate drew in sharp minds not just from philosophy but from cognate fields like cognitive science, neuroscience, and psychology (a famous consciousness researcher, Stuart Hameroff, is an anesthesiologist; another, Roger Penrose, is a mathematical physicist).

I left philosophy long ago and went into computer science. When I returned to the old debates about consciousness, though, I discovered to my amusement that they'd hardly changed. There were additional theories over the years, integrated information theory (IIT), global neuronal workspace (GNW), and others. They all shared the dubious honor of not really answering Chalmers's pesky objection, that consciousness is provably a separate fact (the "proof" involves the modal logic, which of course can itself be disputed). Chalmers called the two problems — those of how the brain works, and those of what the mind is — the "easy" and "hard" problems of consciousness. Easy problems allow you to measure and experiment: take a phenomenon like attention. What do we pay attention to, and how does that mechanism work? The hard problems are about feels or qualia, how things seem to us internally — in our minds. It's hard to throw out the evidence that we're conscious, because we are conscious. That's the whole problem. We're not automatons; the "lights are on" inside. But how?

Fast forward to The Entity. Or rather, to the field of AI as it progressed into data science and then became a cornerstone of digital technology around the world. This is where it gets messy. How is The Entity "sentient"? The lights are on inside. How can that be? Computers are basically calculators — they literally add binary digits. Great for spreadsheets, word processors, cloning human speech (ChatGPT), classifying images, and flying drones (not so much driving cars yet). It's a powerful set of tools. But how can gargantuan adding machines "come alive"? And here's where the muddle of consciousness research in philosophy helps all those billionaire entrepreneurs bent on having a worldview. It's a phlegm theory.

Phlegm. Ahem.

In 2016, Princeton psychologist and neuroscientist Michael Graziano wrote a fantastic article in The Atlantic about the state of consciousness research, titled "Most Popular Theories of Consciousness Are Worse Than Wrong: They play to our intuitions, but don't actually explain anything." He had a great hook: phlegm. "Phlegm theories," he explained, "resonate[] seductively with our intuitions and biases, but [don't] explain anything." The oozy reference to phlegm is from the Middle Ages:

According to medieval medicine, laziness is caused by a build-up of phlegm in the body. The reason? Phlegm is a viscous substance. Its oozing motion is analogous to a sluggish disposition.

Graziano's point — quite a downer — is that our theories of consciousness today are still phlegm theories. We really have no clue how consciousness is connected to our brains, or more specifically what gives rise to it, under what conditions, and how it comes to be subjective rather than just another objective fact in science. How can we have subjective facts in a mature science, or technoscience? What do you mean, subjective? I think it was the great German thinker Gottfried Leibniz who once remarked that if we could shrink a person to the size of a — biological cell? — and have him or her walk around in the brain, they wouldn't find any consciousness there. Leibniz was an Enlightenment figure, obsessed with God and calculus — he and Isaac Newton invented the calculus independently of each other — but he made a beguiling and simple point. Consciousness isn't a "function or structure." It's not something you can write down and capture. You can't see it. You experience it. What do we do, then? Simple. Throw in some phlegm.

Ahh, phlegm. This brings me to Bostrom. Superintelligence boosters like Nick Bostrom, who in 2014 wrote the now classic (but very flawed) Superintelligence: Paths, Dangers, Strategies, like to play, on one hand, a pure "science" view of AI that excludes talk of sentience or consciousness. We hear from Brian Christian about "the alignment problem," where a possibly mindless but very clever AI learns independently to the point where it's "out of alignment" with its human creators and the rest of us hapless fools, who will no doubt soon be in its crosshairs. It might just be math, but it's dangerous math. Whether it's truly sentient or not, it's too darn smart to ignore.

On the other hand, playing that hand is not much fun — no one likes math but math geeks — and ever since Kubrick's 1960s masterpiece 2001: A Space Odyssey introduced us to the inimitable HAL 9000 ("I'm sorry, Dave. I'm afraid I can't do that…"), on up through the Terminators and Replicants and SkyNets and all the rest, sentience is what's sexy, and complicated-enough intelligent machinery ought to have it. You can't be evil if you don't have a mind — no one thinks a fly is evil for landing on your waffle. Flies are the sorts of creatures that land on waffles. That's it. So sentient superintelligence is the real prize here.

If pressed, the serious scientific types (Bostrom is actually a philosopher) can fall back on the argument that sentience isn't required to create an existential risk. (Berkeley computer scientist Stuart Russell also plays the "on one hand, on the other hand" card, as I've explained in my book at some length. Maybe it's a mindless program that still learns to kill us. Maybe it's sentient. Who can say?) This practiced ambivalence about the question of sentience is the superintelligence enthusiast's version of ducking and weaving in boxing. It keeps the discussion going while slipping the direct questions intended to clarify it. It sidesteps well-meaning attempts to pin down exactly what's being claimed. It's good work, if you can get it. Ducking and weaving is a good strategy.

Phlegm theories. They've found quite a home in imaginings and brain droppings about future AI. No one bothers to explain how a gadget that runs on electricity and adds binary numbers incomprehensibly fast might become like an organism, with sentience. No one has to bother with it. Why? Because the science of consciousness is mostly phlegm theories even about human minds, about us. If we don't have a clue about consciousness, everything can be consciousness. Why not?

In other words, The Entity IS sentient. Get over it.

So. If you're looking for an intelligent take on existential risk and superintelligent AI, the latest Mission Impossible movie is not for you. There are no insights into sentient AI. It's reasonably clever writing about what we guess would be a super-smart impersonal force. Like a demonic force in a horror movie, but built of electronics in Silicon Valley. On the other hand, if you like to watch humans be heroic and overcome seemingly impossible odds, this just might be your film. And if you like to watch Tom Cruise run, jump, ride, drive, dive, fight, and get the girl, it's definitely for you.

Cross-posted at Colligo.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
