
The Linda Problem Revisited, As If Reality Matters

Part 2: AI enthusiasts use false claims about humans’ “natural stupidity” to bolster claims for machine intelligence

Yesterday, we looked at the “Linda problem”:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations.

Which of the following is more likely?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

As I noted then, the most common answer (given by 85% to 90% of college undergraduates at major universities) was Option 2. This is wrong according to probability theory, because the probability of two events A and B occurring together cannot exceed the probability of either event occurring alone. Yet Nobel laureate psychologist Daniel Kahneman (1934–2024) and his longtime collaborator Amos Tversky (1937–1996) were surprised to discover that the error persisted among the majority of participants even when it was pointed out.
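To see why probability theory rules out Option 2, here is a minimal sketch in Python. The numbers are hypothetical, chosen only for illustration: even if the description makes “feminist” nearly certain, the joint probability of “bank teller and feminist” is a product of two factors and so can never exceed the probability of “bank teller” alone.

```python
# A minimal numerical sketch of the conjunction rule.
# The probabilities below are hypothetical, chosen only for illustration.

p_teller = 0.05                  # assumed chance that Linda is a bank teller
p_feminist_given_teller = 0.95   # assumed chance she is a feminist, given she is a teller

# P(teller AND feminist) = P(teller) * P(feminist | teller),
# which can never exceed P(teller) because the second factor is at most 1.
p_both = p_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_teller:.4f}")  # 0.0500
print(f"P(bank teller AND feminist) = {p_both:.4f}")    # 0.0475
```

Whatever numbers you plug in, the conjunction comes out smaller. That arithmetic is not in dispute; what is in dispute, as argued below, is whether the participants were doing arithmetic at all.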

But the resolution to the Linda problem is not that 90% of the educated population exhibits the “conjunction fallacy,” but rather that most people (90%!) interpret the description as a story rather than a math problem. They are thinking about what is likely, given the information provided, not about axiomatic rationality and probability theory. So why should they trot out the “law” of conjunction in probabilities? Likelihood is more about what we expect to see and much less about the outcome of a calculation. As economists John Kay and Mervyn King put it:

The subjects were not asked about probabilities but about likelihood, and they answered the examination question Kahneman asked rather than the one he thought he was asking.

Kay and King’s analysis is spot on, as far as I’m concerned, and we can add a further piece of evidence that they didn’t discuss, known in linguistics and philosophy as “Grice’s maxims.” The idea is summed up by Grice’s cooperative principle and, in particular, his maxim of quantity:

Make your contribution as informative as is required (for the current purposes of the exchange) but not more so.

Grice’s cooperative principle (maxim of quantity)

Applied to the Linda problem, Grice’s cooperative principle implies that there was some point to mentioning that Linda was deeply concerned with issues of discrimination and social justice. Readers then naturally expect this additional description to enter into their judgment of what’s likely or not about her. Given that the two options differ precisely in whether they include that additional information, it’s natural for participants to interpret the second as the more complete and plausible description of Linda’s life. If no one is thinking about doing probability calculations (and they were not instructed to do so), it’s hardly fair to tag them with a “conjunction fallacy.”

Similar remarks can be made about many of the other biases in the list provided in Part 1 yesterday. They too assume some implausible standard like axiomatic rationality when people are presented with “large world” problems, that is, realistic or “real world” problems. Facing such problems, people not only don’t rely on formal mathematical reasoning, they can’t, because the problems can’t be reduced to a set of rules or principles with known or discoverable outcomes. So much for four decades of bias research and the conclusion that humans are naturally stupid.


To be fair, research on cognitive bias can illuminate how we think and make judgments. But by assuming a standard like axiomatic rationality, researchers on bias tend to miss the real reasons we use heuristics, or shortcuts, rather than do calculations in the real or “large” world, which features lots of “unk-unks,” or unknown unknowns. Kay and King have a term for how we “muddle through” this world of radical uncertainty: evolutionary or ecological rationality, and it is well suited to the task. In the large world, outcomes are either not envisioned at all (Nassim Nicholas Taleb’s “black swans,” for example), or we understand the range of possible outcomes but additional ratiocination or data gathering won’t resolve what happens. For instance, loss aversion is a bias in Kahneman and Tversky’s world, but it’s pretty useful if you’re crossing a busy street or walking around an African savannah. As Kay and King put it, “A disposition to avoid large losses is a useful attribute.”

Not always, though. In business and sports, risk tolerance is also useful. Indeed, insight and creativity aren’t part of axiomatic rationality and can’t be, since by their nature they can’t be predicted. It would seem to follow that examples of insight and creativity are, by the standards of axiomatic rationality, weirdly irrational: “On what basis did you believe you could ‘bend reality to your will,’ Mr. Jobs?”

In fact, a good case can be made that entrepreneurship and capitalism in the vein of Adam Smith are based on a bunch of cognitive biases. Business icons like Steve Jobs and Richard Branson are known to have taken huge risks, not only in pursuit of business dreams but in their personal lives. Branson reportedly started his business empire by making huge bets at a gambling table. Such a move, more often than not (by the probabilities!), is foolish.

But the broader point, as Kay and King note, is that people like Branson often have huge (read: irrational) risk tolerances, and this fact becomes a factor in their success. One wonders whether Einstein, who lived a comfortable but certainly not rich life as a patent clerk, was entirely rational when he chose to take on the entire edifice of Newtonian physics with an untested theory. Where would we be without these biases? And where would we be, if anywhere at all, if we adhered to Kahneman and Tversky’s injunction to be (axiomatically) rational? Which brings us to AI.

How AI enthusiasts use “natural stupidity” to bolster claims for machine intelligence

Techno-futurists of all stripes attack human thinking as hopelessly biased, slow, and stupid, but the true believers in a coming “singularity” or superintelligence do so most egregiously. It’s no wonder they trumpet a coming machine intelligence with such ease: they’ve lowered the standard so much to begin with. Kay and King point out that belief in machine intelligence assumes that radical uncertainty can be tamed, that it can somehow be reduced or translated into small worlds where the rules can be known and more data equals more insight into likely outcomes. Echoing my own use of these terms, they describe this assumption as a desire to convert mysteries into puzzles with known solutions:

Artificial Intelligence (AI) engages computers which can learn from experience. It is the means by which many believe that eventually all mysteries will become soluble puzzles.

Yet the AI crowd has things exactly backward. It’s not that AI will convert mysteries into puzzles that can then be solved, thereby establishing the superiority of machine intelligence over our own. No. It’s that, as Kay and King are at pains to point out, mysteries can’t be converted into puzzles. This in turn means that machine intelligence is at an extreme disadvantage when it comes to exhibiting the sort of intelligence we value in the real world. That point is central to understanding what is wrong with AI and to thinking clearly about its future.

Mysteries are marked by the question, “What’s going on here?”

Axiomatic rationality, in other words ratiocination, won’t suffice to “solve” a mystery. We need insights and abductive inferences rather than more data and faster processing. We need a way to make plausible inferences and judgments in the absence of assignable probabilities. This is a game that computers don’t play particularly well (if at all). Because mysteries can’t be treated as puzzles, and puzzle-solving strategies are generally unfit for penetrating mysteries, we have a case of endemic artificial stupidity. Language models may have demonstrated that huge-scale puzzle-solvers can convert the cyber world of words into intelligent output, but in the real world where we live, words in cyberspace aren’t enough. That’s why self-driving cars have been stuck at a stoplight since roughly 2016. You can’t drive a ton of metal and plastic around in the real world using tricks in cyberspace.

Small worlds vs. large worlds

Again, Kay and King (correctly) point out that small worlds and large worlds (or axiomatic versus ecological rationality, or puzzles versus mysteries) inhabit what we might call different epistemic spaces. For humans exercising ecological rationality (what the bias literature writes off as cognitive biases), the different epistemic spaces of small-world and large-world problems mean that the two are in some sense “non-convertible” to one another. The entire history of AI is one long and largely unsuccessful attempt to convert thought into a set of puzzles that can then be solved by various computational techniques. No one builds an AI to “do abductive reasoning” because no one has the faintest clue how. It’s great that we have built such large induction machines with LLMs that we can simulate some of the wonders of our human cognitive powers, if only by generating sequences of words in cyberspace. But this is still the very definition of puzzle-solving, since one of the conditions of a puzzle (or a small world) is that more information reveals more of the solution. Or: more data and compute means better models. That is puzzle thinking.

The thought leaders in AI rarely admit that the larger plan here is something we already know is impossible: converting small worlds into large worlds, or vice versa. The AI industry would rather trumpet successes on games, deepfakes, and text generation than confront the harsher reality of building something like Rosie the Robot or a fully autonomous vehicle. Were it to try, the news would be full of endless reports of failures and deeper and deeper problems.

But it’s not silly for them to subtly downplay human potential with a full-throated embrace of “natural stupidity” and cognitive bias. All this psychology and economics research props up their goals perfectly. Humans are clearly bad puzzle solvers compared to AI; look at chess, or now Go. We’re slow calculators, too, and we need to do things like eat and sleep. What we’re seeing here is an entire scientific paradigm, in the “soft sciences,” that buttresses the idea that humans are slow, crappy computers made of meat. How do we know? Look at the irrational responses to the Linda problem! Look at all the biases!

Kay and King’s treatment of radical uncertainty in large (not small) worlds, and of our use of a more (not less) powerful form of thinking which they call evolutionary or ecological rationality, is important. It gives the lie to the techno-futurists’ arrogant and misinformed insistence that “it’s only a matter of time” before computers can do anything we can do, and better. Computers will keep calculating in the small worlds for which they are built. And we’ll keep “screwing up” the Linda problem and other silly tests as long as the research community is intent on denigrating human thinking rather than understanding it.

What is “natural intelligence”? No one really knows. AI futurists will claim either that it doesn’t exist or that it will quickly end up in the rearview mirror. As usual, they’re wrong. How we get out of this mess is an interesting mystery indeed.

Kahneman’s last book

I’d like to close by mentioning Kahneman’s last book, Noise: A Flaw in Human Judgment (Little, Brown Spark, 2021), written with Olivier Sibony of HEC Paris and Cass R. Sunstein. The Amazon page highlights a quote: “Human judgment can therefore be described as measurement in which the instrument is a human mind.” Wrong-o. This must be a typo, because human judgment precisely cannot be described as such.

Not to pick on Kahneman, but his last salvo seems to have been another gross misunderstanding of ecological rationality. “Noise” here is a statistical term that means random variability around a target. Judges who mete out different sentences for the same crimes depending on whether it’s before or after lunch are generating legal “noise.” But Kahneman again presupposes (really, sneaks in) small worlds where inconsistency can be calculated and a measure of noise versus signal makes sense. Many who read this latest book likely concluded, as Kahneman did, that wherever possible we should replace people making judgments with algorithms that generate less noise. But that conclusion holds only in small worlds. In the large world of radical uncertainty, humans will need to be out in front if we are to have any sort of future worth fighting for.
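For readers unfamiliar with the statistical usage, here is a minimal sketch, using made-up sentencing numbers, of how “noise” (random scatter around a target) differs from “bias” (systematic deviation from it). The sketch of course presupposes exactly the kind of small world, with a well-defined target, that I am arguing the law and much of real life do not supply.

```python
import random
import statistics

random.seed(0)
target = 24  # hypothetical "correct" sentence in months, assumed only for illustration

# Noise: judgments scatter randomly around the target (mean error near 0, high spread).
noisy_judges = [target + random.gauss(0, 6) for _ in range(1000)]

# Bias: judgments deviate from the target systematically (shifted mean, low spread).
biased_judges = [target + 8 + random.gauss(0, 1) for _ in range(1000)]

for label, judgments in [("noisy", noisy_judges), ("biased", biased_judges)]:
    mean_error = statistics.mean(j - target for j in judgments)
    spread = statistics.stdev(judgments)
    print(f"{label:>6}: mean error = {mean_error:5.2f}, spread = {spread:5.2f}")
```

The arithmetic only makes sense once a target exists and deviations from it can be measured; that is the small-world assumption being smuggled in.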

Note: Kahneman describes the Linda problem in his book Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011). I’m following his description in that later account rather than the original papers.

Here’s the first part of this two-part essay: Humans aren’t that biased — and machines aren’t that smart. Part 1: At an upcoming conference on AI, I will be puncturing that particular AI enthusiast’s fantasy. Via the “Linda Problem,” Daniel Kahneman and Amos Tversky convinced generations that we don’t grasp probability. They’re wrong.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
