Mind Matters Natural and Artificial Intelligence News and Analysis

Why You Are Not — and Cannot Be — Computable

A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence.

An excerpt from Chapter 1 of Non-Computable You (2022) by Walter Bradley Center director Robert J. Marks (Discovery Institute Press, June 2022)

The Non-Computable Human

If you memorized all of Wikipedia, would you be more intelligent? It depends on how you define intelligence.

Consider John Jay Osborn Jr.’s 1971 novel The Paper Chase. In this semi-autobiographical story about Harvard Law School, students are deathly afraid of Professor Kingsfield’s course on contract law. Kingsfield’s classroom presence elicits both awe and fear. He is the all-knowing professor with the power to make or break every student. He is demanding, uncompromising, and scary smart. In the iconic film adaptation, Kingsfield walks into the room on the first day of class, puts his notes down, turns toward his students, and looms threateningly.

“You come in here with a skull full of mush,” he says. “You leave thinking like a lawyer.” Kingsfield is promising to teach his students to be intelligent like he is.

One of the law students in Kingsfield’s class, Kevin Brooks, is gifted with a photographic memory. He can read complicated case law and, after one reading, recite it word for word. Quite an asset, right?

Not necessarily. Brooks has a host of facts at his fingertips, but he doesn’t have the analytic skills to use those facts in any meaningful way.


Kevin Brooks’s wife is supportive of his efforts at school, and so are his classmates. But this doesn’t help. A tutor doesn’t help. Although he tries, Brooks simply does not have what it takes to put his phenomenal memorization skills to effective use in Kingsfield’s class. Brooks holds in his hands a million facts that, because of his lack of understanding, are essentially useless. He flounders in his academic endeavor. He becomes despondent. Eventually he attempts suicide.

This sad tale highlights the difference between knowledge and intelligence. Kevin Brooks’s brain stored every jot and tittle of every legal case assigned by Kingsfield, but he couldn’t apply the information meaningfully. Memorization of a lot of knowledge did not make Brooks intelligent in the way that Kingsfield and the successful students were intelligent. British journalist Miles Kington captured this distinction when he said, “Knowing a tomato is a fruit is knowledge. Intelligence is knowing not to include it in a fruit salad.”

Which brings us to the point: When discussing artificial intelligence, it’s crucial to define intelligence. Like Kevin Brooks, computers can store oceans of facts and correlations; but intelligence requires more than facts. True intelligence requires a host of analytic skills. It requires understanding; the ability to recognize humor, subtleties of meaning, and symbolism; and the ability to recognize and disentangle ambiguities. It requires creativity.

Artificial intelligence has done many remarkable things, some of which we’ll discuss in this book. AI has largely replaced travel agents, tollbooth attendants, and mapmakers. But will AI ever replace attorneys, physicians, military strategists, and design engineers, among others?

The answer is no. And the reason is that, as impressive as artificial intelligence is — and make no mistake, it is fantastically impressive — it doesn’t hold a candle to human intelligence. It doesn’t hold a candle to you.

And it never will. How do we know? The answer can be stated in a single four-syllable word that needs unpacking before we can contemplate the non-computable you. That word is algorithm. If not expressible as an algorithm, a task is not computable.

Algorithms and the Computable

An algorithm is a step-by-step set of instructions to accomplish a task. A recipe for German chocolate cake is an algorithm. The list of ingredients acts as the input for the algorithm; mixing the ingredients and following the baking and icing instructions will result in a cake.
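The recipe analogy can be made concrete in code. Here is a toy sketch in Python (the ingredient names and steps are invented for illustration, not a real recipe): a fixed sequence of steps transforms the inputs into the output, and the same inputs always yield the same cake.

```python
# A minimal sketch of an algorithm: step-by-step instructions
# that turn inputs (ingredients) into an output (a cake).
# Ingredients and steps are illustrative placeholders.

def bake_cake(ingredients):
    """Follow fixed steps in order; identical input gives identical output."""
    batter = "batter from " + ", ".join(sorted(ingredients))  # step 1: mix
    baked = f"baked({batter})"                                # step 2: bake
    return f"iced({baked})"                                   # step 3: ice

cake = bake_cake(["flour", "sugar", "eggs", "chocolate", "coconut"])
print(cake)
```

Because every step is explicit and deterministic, the whole task is computable in exactly the sense used in this chapter.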


Likewise, when I give instructions to get to my house, I am offering an algorithm to follow. You are told how far to go and which direction you are to turn on what street. When Google Maps returns a route to go to your destination, it is giving you an algorithm to follow.

Humans are used to thinking in terms of algorithms. We make grocery lists, we go through the morning procedure of showering, hair combing, teeth brushing, and we keep a schedule of what to do today. Routine is algorithmic. Engineers algorithmically apply Newton’s laws of physics when designing highway bridges and airplanes. Construction plans captured on blueprints are part of an algorithm for building. Likewise, chemical reactions follow algorithms discovered by chemists. And all mathematical proofs are algorithmic; they follow step-by-step procedures built on the foundations of logic and axiomatic presuppositions.

Algorithms need not be fixed; they can contain stochastic elements, such as descriptions of random events in population genetics and weather forecasting. The board game Monopoly, for example, follows a fixed set of rules, but the game unfolds through random dice throws and player decisions.
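A stochastic algorithm of the Monopoly sort is easy to sketch. In this assumed toy version (board size and rules simplified for illustration), the procedure is fixed, but a random dice throw shapes each outcome:

```python
import random

# A stochastic algorithm: the rules never change, but the result
# incorporates randomness, like a Monopoly dice throw.
def monopoly_move(position, board_size=40, rng=random):
    roll = rng.randint(1, 6) + rng.randint(1, 6)  # two six-sided dice
    return (position + roll) % board_size, roll

pos, roll = monopoly_move(0)
print(f"rolled {roll}, moved to square {pos}")
```

The randomness does not make the procedure non-algorithmic; the dice throw is itself just another well-defined step.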

Here’s the key: Computers only do what they’re programmed by humans to do, and those programs are all algorithms — step-by-step procedures contributing to the performance of some task. But algorithms are limited in what they can do. That means computers, limited to following algorithmic software, are limited in what they can do.

This limitation is captured by the very word “computer.” In the world of programmers, “algorithmic” and “computable” are often used interchangeably. And since “algorithmic” and “computable” are synonyms, so are “non-computable” and “non-algorithmic.”

Basically, for computers — for artificial intelligence — there’s no other game in town. All computer programs are algorithms; anything non-algorithmic is non-computable and beyond the reach of AI.

But it’s not beyond you.

Non-Computable You

Humans can behave and respond non-algorithmically. You do so every day. For example, you perform a non-algorithmic task when you bite into a lemon. The lemon juice squirts on your tongue and you wince at the sour flavor.

Now, consider this: Can you fully convey your experience to a man who was born with no sense of taste or smell? No. You cannot. The goal is not a description of the lemon-biting experience, but its duplication. The lemon’s chemicals and the mechanics of the bite can be described to the man, but the true experience of the lemon taste and aroma cannot be conveyed to someone without the necessary senses.

If biting into a lemon cannot be explained to a man without all his functioning senses, it certainly can’t be duplicated in an experiential way by AI using computer software. Like the man born with no sense of taste or smell, machines do not possess qualia — experiential sensory perceptions such as pain, taste, and smell.

Qualia are a simple example of the many human attributes that escape algorithmic description. If you can’t formulate an algorithm explaining your lemon-biting experience, you can’t write software to duplicate the experience in the computer.

Or consider another example. I broke my wrist a few years ago, and the physician in the emergency room had to set the broken bones. I’d heard beforehand that bone-setting really hurts.

But hearing about pain and experiencing pain are quite different.

To set my broken wrist, the emergency physician grabbed my hand and arm, pulled, and there was an audible crunching sound as the bones around my wrist realigned. It hurt. A lot. I envied my preteen grandson, who had been anesthetized when his broken leg was set. He slept through his pain.

Is it possible to write a computer program to duplicate — not describe, but duplicate — my pain? No. Qualia are not computable. They’re non-algorithmic.

By definition and in practice, computers function using algorithms. Logically speaking, then, the existence of the non-algorithmic suggests there are limits to what computers and therefore AI can do.

Here are all of the excerpts in order:

Why you are not — and cannot be — computable. A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence. In this excerpt from his forthcoming book, Non-Computable You, Robert J. Marks shows why most human experience is not even computable.

The Software of the Gaps: An excerpt from Non-Computable You. In his just-published book, Robert J. Marks takes on claims that consciousness is emerging from AI and that we can upload our brains. He reminds us of the tale of the boy who dug through a pile of manure because he was sure that … underneath all that poop, there MUST surely be a pony!

Marks: Artificial intelligence is no more creative than a pencil. You can use a pencil — but the creativity comes from you. With AI, clever programmers can conceal that fact for a while. In this short excerpt from his new book, Non-Computable You, Robert J. Marks discusses the tricks that make you think chatbots are people.

Machines with minds? The Lovelace test vs. the Turing test. The answers computer programs give sometimes surprise me too — but they always result from their programming. When it comes to assessing creativity (and therefore consciousness and humanness), the Lovelace test is much better than the Turing test.


and

AI: The shadow of Frankenstein lurks in the Uncanny Valley. The fifth and final excerpt from Non-Computable You (2022), from Chapter 6, focuses on the scarier AI hype. Mary Shelley’s “Frankenstein” monster (1818) wasn’t strictly a robot. But she popularized the idea — now AI hype — of creating a human-like being in a lab.

