
If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?

With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions, and neither do they.

In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal of getting them to think like people elusive. It also brings us to a second problem, one that ended up spawning an entire field known as “Explainable AI.”

Neural networks not only don’t know what they’re doing when they do it, they also can’t, in general, explain to their designers or users why they made such-and-such a decision. They’re black boxes; in other words, they are obstinately opaque to any attempt at a conceptual understanding of their decisions or inferences. How does that play out?

It means, for example, that with image recognition tasks like facial recognition, the network can’t explain why it flagged someone as a criminal (say, because his face resembled a photo in a crime database), or why a self-driving car classified a bicyclist as an unfamiliar and harmless object.

With neural networks we gained an immense tool for important tasks, but with a Faustian bargain: We generally don’t count the systems as actually knowing anything (the point of AI). Even if they do, we can’t ask them what, or why. We have a world of powerful, useful, but entirely opaque systems.

The current top tech may feature a worse problem

Back to inference. An even thornier problem confronts our current tip-of-the-spear AI tech, neural networks. All machine learning, of which neural networks are a part, involves supplying prior examples from which to learn. OpenAI and Microsoft have seemingly solved this “prior examples” problem with ChatGPT by cleverly linking it to Bing, a search engine. But there’s a confusion here: the core large language model makes use of search engine results but isn’t trained on them. As a result, it is not “learning” constantly like a human mind; it is applying a previously trained model to information culled from the web.

Enthusiasts suggest that “in-context learning” solves this problem, but a closer look reveals its limits: the trained model’s weights never get updated. It’s a bit technical for this discussion, but readers can read about it here and in many other technical sources on the web.
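To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. The toy “model” below is just a fixed weight vector, not a language model; the point is only that in-context learning changes the input the frozen model sees, while genuine learning changes the weights themselves.

```python
import numpy as np

# Toy stand-in for a trained network: a fixed set of weights.
weights = np.array([0.5, -0.2, 0.1])

def predict(w, features):
    """Apply the already-trained model to an input; w is never modified."""
    return float(w @ features)

# "In-context learning": the extra examples are folded into the *input*
# (the prompt), while the trained weights above stay frozen.
context = np.array([0.3, 0.3, 0.3])   # stand-in for examples placed in the prompt
query = np.array([1.0, 0.0, 2.0])
in_context_answer = predict(weights, query + context)

# Genuine learning, by contrast, updates the parameters themselves.
def gradient_step(w, features, target, lr=0.01):
    error = predict(w, features) - target
    return w - lr * error * features   # the weights actually move here

updated_weights = gradient_step(weights, query, target=1.0)
print(in_context_answer, updated_weights)
```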

The difference between this technology and human brains is most obvious when considering, broadly, innovation—inventing something new, or coming up with a genuinely novel theory or idea. Training on prior examples means the knowledge available to the system is, in some very real sense, already discovered and written down. How can it come up with something—anything—new? The data dependency problem haunts ChatGPT in the same way that it haunts machine learning generally.

Though computer scientists typically don’t refer to this data dependency problem as induction, that’s what it is. And the problem, again, is that scientists, mathematicians, and philosophers have known for centuries that induction is not adequate, by itself, for true intelligence. It needs to be combined with other forms of inference, like deduction, and a much lesser-known type referred to as abduction, or hypothesis generation.

Can AI do without reasoning from effects back to causes?

Abduction has a lineage reaching back to Aristotle, who also gave us the deductive syllogism: all people are mortal, Socrates is a person, therefore Socrates is mortal. Abduction is different. It is roughly causal inference, where we reason from an observed effect back to plausible causes. Since most of the world we see is linked causally somehow (the car didn’t just stop; the brakes were applied, and the brakes generate hydraulic pressure that travels to the brake caliper at the wheel), that’s generally how we human minds make sense of everything. Neural networks don’t have a clue about these other types of inference, so they can’t possibly be on a path to general intelligence. We already know this, though, strangely, it’s rarely if ever communicated to the broader public.
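For readers who like to see the idea concretely, here is a tiny, purely illustrative Python sketch in the spirit of the braking example above. The rule table and observation are made up; the point is that abduction runs causal rules backwards, from an observed effect to candidate causes, and typically leaves more than one hypothesis on the table.

```python
# Illustrative only: a handful of made-up causal rules, mapping causes to effects.
causal_rules = {
    "brakes_applied": ["car_stops"],
    "engine_stalls": ["car_stops"],
    "driver_accelerates": ["car_speeds_up"],
}

def deduce(cause):
    """Deduction runs the rules forward: given a cause, what effects follow?"""
    return causal_rules.get(cause, [])

def abduce(observed_effect):
    """Abduction runs the rules backward: which causes could explain the observation?"""
    return [cause for cause, effects in causal_rules.items()
            if observed_effect in effects]

print(deduce("brakes_applied"))   # ['car_stops']
print(abduce("car_stops"))        # ['brakes_applied', 'engine_stalls'] -- hypotheses, not certainties
```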

ChatGPT is Cool. Sort of.

In 2017, a group of Google Research and Google Brain scientists, along with (again) a scientist from the University of Toronto, delivered an ingenious paper, “Attention Is All You Need,” at the Conference on Neural Information Processing Systems (NIPS) in Long Beach, California. The paper, quietly at first and then very loudly, paved a path to what we now know as Large Language Models: very large neural network systems that chug through massive amounts of text in order to generate new text.

The innovation was called “self-attention,” or just “the attention mechanism,” and the full details are a bit too hairy to delve into here. The attention mechanism they described made possible the new generation of language translation, text classification, text summarization, and chatbot or conversational AI systems we see and use today. It may even significantly improve web search, which Google has owned for essentially the entire century so far. Google has, predictably, launched its own answer to ChatGPT, called Bard.
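For the curious, the core computation can at least be sketched. What follows is a bare-bones, illustrative version of the scaled dot-product attention at the heart of that 2017 paper; real Transformers add learned projection matrices, multiple attention heads, masking, and many stacked layers.

```python
import numpy as np

def self_attention(X):
    """Minimal scaled dot-product self-attention: every token attends to every token.

    In a real Transformer, queries, keys, and values are learned linear
    projections of X; here they are simply X itself, to keep the idea visible.
    """
    Q, K, V = X, X, X
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # how strongly each token attends to each other token
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row of weights sums to 1
    return weights @ V                              # each output is a weighted blend of the values

# Three toy "tokens," each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
print(self_attention(tokens))
```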

Stagnation on the journey to true intelligence

The upshot of all this discussion is that, on the surface, AI has progressed by leaps and bounds; dig deeper, though, and you see that it’s actually stuck. The innovations in neural networks are laudable, but the broader vision of AI reaching AGI is dead on arrival. In fact, the entire 21st century so far can be read as a period of bold innovation in one specific part of AI (machine learning), alongside stagnation on our journey to true intelligence.


To take one obvious example, the focus on deep neural networks is why driverless cars, which were all the rage circa 2016, have largely disappeared from discussion today. It’s one thing to misclassify an image or a face, or to get an AI “hallucination” from a large language model, as when ChatGPT makes up a ridiculous or nonsensical answer because there’s some weird gap in its training data, or simply because it doesn’t actually know what it’s saying in the first place. It’s quite another when a fully autonomous vehicle weighing over a ton rams into a school bus, thinking it’s an overpass, or kills a bicyclist, or mistakes a speed limit sign for a stop sign.

Self-driving car ballyhoo died out precisely because, as Elon Musk himself put it in 2021, “Generalized self-driving is a hard problem, as it requires solving a large part of real-world AI. Didn’t expect it to be so hard, but the difficulty is obvious in retrospect.” He added, tellingly: “Nothing has more degrees of freedom than reality.” There’s the rub.

If nothing has more degrees of freedom than reality …

The question is what to do next. The answer, at one level, is obvious. As a community, computer scientists need to start thinking beyond further innovations for neural network systems. We’ve already done that. OpenAI, which made ChatGPT, released its latest large language model, GPT-4, reportedly with over a trillion parameters. It was trained on hundreds of billions of words (tokens), amounting to hundreds of gigabytes of text.

This means, among other things, that just training a model of this size using the attention mechanism and the underlying neural network requires scores of GPU-equipped computers, putting it outside the reach of scientists and engineers who are not hyper-funded by venture capital. The pursuit of AI has become the sole domain of very rich institutions. But innovations typically emerge from diverse places: historically, universities and labs, or lone tinkerers with new ideas, or groups of scientists with no money yet but common pursuits and passions. This isn’t the game we AI scientists are playing anymore.

Fortunately, some iconoclasts have begun speaking out, like Gary Marcus, formerly a cognitive science professor at NYU, Ernest Davis, who is also at NYU, and Hector Levesque at the University of Toronto. They’re all pointing out that neural networks aren’t enough. And, encouragingly, even pioneers of neural networks like Yann LeCun, currently Chief AI Scientist at Meta, have begun admitting limitations. Last year, in a published interview, LeCun conceded that the current approach isn’t enough, and he later put out a much-discussed paper on OpenReview.net outlining a different approach (though it still uses neural networks). LeCun’s complaint is with the lack of common sense in current AI approaches, a lament that stretches far back into the annals of AI research.

It’s refreshing that these scientists are discussing limitations and speaking out. It gives hope that new and diverse ideas might start flowing into the field from myriad sources. And, as LeCun correctly remarks, such innovations may finally give computers common sense, no doubt by expanding the types of inference they can perform. That was the vision of progress in AI all along. Not “Big Data AI,” but true AI. It’s time now to get on with it.

Here’s Part 1: Why, despite all the hype we hear, AI is not one of us. It takes an imaginative computer scientist to believe that the neural network knows what it’s classifying or identifying. It’s a bunch of relatively simple math. The AI scientist’s dream of general intelligence, often referred to as Artificial General Intelligence (AGI), remains as elusive as ever.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
