Humans Aren’t That Biased — and Machines Aren’t That Smart
Part 1: At an upcoming conference on AI, I will be puncturing that particular AI enthusiast’s fantasy

Later this month, I’ll be on a panel in Savannah, Georgia, for a Liberty Fund colloquium on “AI and Technosocialism.” This will be my second time participating — my first was nearly twenty years ago at a scenic retreat outside Tucson, Arizona. This year’s discussion centers on an intriguing book: Radical Uncertainty: Decision-Making Beyond the Numbers (Little, Brown 2020) by economists John Kay and Mervyn King. Their focus on how we make judgments under “radical uncertainty” has important implications for AI.
First, let’s look at the experts and their own biases — particularly in how they label human reasoning as “cognitive bias.” An obsession with the supposed precision of math and computation has left multiple fields, such as economics, psychology, and cognitive science, with reams of research purportedly establishing our “natural stupidity.” Not so. Read on.
Why the concept of cognitive bias is itself biased: the “Linda problem”
I first encountered the “Linda problem” in a class on epistemology, part of the philosophy department’s curriculum. It is one of a number of arresting examples of human stupidity offered over the decades by Nobel laureate Daniel Kahneman (1934–2024) and his longtime collaborator Amos Tversky (1937–1996), both Israeli psychologists working in America. From the 1970s into the 21st century, Kahneman and Tversky left such a large footprint on thinking about thinking that you’re likely to encounter them not just in philosophy but in economics, psychology, and even classes on math and statistics.

The two went mainstream in culture, too, notably when writer Michael Lewis, author of Moneyball and The Big Short (both were turned into major motion pictures), brought Kahneman and Tversky to bookshelves with his The Undoing Project (Norton 2016).
The “undoing project” is an apt description of what Kahneman and Tversky were up to. They were intent on undoing the idea that humans think rationally and aren’t generally plagued by cognitive bias. Au contraire, according to K & T we don’t think rationally even when we think we do. Worse, our mental shortcuts amount to distortions — cognitive bias — whose results we later rationalize to fit the facts rather than face our own “natural stupidity.”
I haven’t read The Undoing Project, but knowing Lewis’s work, I suspect it’s a compelling narrative. I also have no doubt that K & T didn’t accomplish what they’d hoped. That is because we’re not hopelessly biased and irrational in the way they meant.
There is a serious problem with their presupposition: They assume that axiomatic rationality, of the sort that is appropriate for problems like games — with known rules and constraints, and a well-defined optimal outcome — should be the goal. Think of poker, for example, where there’s no ambiguity about the best possible hand. Such problems are what economists call “small world” problems.
Back to the Linda problem. In 1983, Kahneman and Tversky published in the journal Psychological Review a paper titled “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.” In it, K & T reported a study in which they presented participants with the following scenario*:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in anti-nuclear demonstrations.
Which of the following is more likely?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
The most common answer (given by 85% to 90% of college undergraduates at major universities) was Option 2. This is wrong according to probability theory, because the probability of two events A and B occurring together cannot exceed the probability of either event occurring alone. Yet K & T were surprised to discover that the error persisted among the majority of the participants even when it was pointed out. Assuming that “axiomatic rationality” requires adherence to the laws of probability, they concluded that people have a “conjunctive bias.”
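To make the arithmetic concrete: every bank teller who is active in the feminist movement is, by definition, also a bank teller, so the conjunction can never be the more probable of the two. Here is a minimal sketch of that rule in Python, using an invented toy population; the trait frequencies are arbitrary placeholders, not claims about anyone real.

```python
import random

random.seed(0)

# Build an invented toy population. The 5% and 40% figures are
# arbitrary placeholders chosen only to illustrate the conjunction rule.
population = []
for _ in range(100_000):
    bank_teller = random.random() < 0.05   # 5% are bank tellers
    feminist = random.random() < 0.40      # 40% are active in the feminist movement
    population.append((bank_teller, feminist))

p_teller = sum(1 for t, f in population if t) / len(population)
p_both = sum(1 for t, f in population if t and f) / len(population)

# The conjunction picks out a subset of the bank tellers,
# so its relative frequency can never exceed theirs.
print(f"P(bank teller)              = {p_teller:.4f}")
print(f"P(bank teller AND feminist) = {p_both:.4f}")
assert p_both <= p_teller
```

Whatever numbers you plug in, the people satisfying both conditions are a subset of those satisfying the first condition alone; that subset relation is all the conjunction rule asserts.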
The conjunctive bias result led, in turn, to the cataloguing of entire classes of biases that have now been broadly absorbed into society. Here is a list:
Probability & Statistical Biases
- Conjunction Fallacy – Assuming that specific conditions are more likely than a single general one (e.g., the Linda Problem).
- Base Rate Neglect – Ignoring general probability rates in favor of specific details.
- Gambler’s Fallacy – Believing past events affect the likelihood of independent future events (e.g., “I’ve lost five times, so I must win the next round”).
Risk & Loss Perception Biases
- Loss Aversion – Feeling the pain of losses more strongly than the pleasure of equivalent gains.
- Optimism Bias – Believing that negative events are less likely to happen to oneself than to others.
- Negativity Bias – Giving more importance to negative experiences than positive ones.
Decision-Making & Belief Biases
- Confirmation Bias – Tendency to search for, interpret, and remember information that confirms pre-existing beliefs.
- Availability Heuristic – Overestimating the importance of information that is most readily available (e.g., fearing plane crashes over car accidents due to media coverage).
- Anchoring Bias – Relying too heavily on the first piece of information encountered (the “anchor”) when making decisions.
- Overconfidence Bias – Overestimating one’s knowledge, ability, or predictions.
- Hindsight Bias – Believing, after an event has occurred, that it was predictable (“I knew it all along”).
- Framing Effect – Making different decisions based on how the same information is presented (e.g., “90% survival rate” vs. “10% mortality rate”).
Memory & Perception Biases
- Peak-End Rule – Judging an experience based on its peak (most intense moment) and its end rather than the total experience.
- Recency Bias – Giving more weight to recent events than earlier ones.
- Serial Position Effect – Remembering the first and last items in a list better than the middle ones.
Cataloguing these ostensible biases certainly has some value — presumably we would want hapless gamblers to internalize the idea of the sunk cost fallacy. But the broader point here is negative: Kahneman and Tversky, and the concept of cognitive bias itself, assume a picture of rationality — called, again, “axiomatic rationality” because it is derived from math and statistics — that works only when we know the rules of the game. Only then can we determine the outcomes, perhaps with more information.
Probability theory, a relatively modern concept in the history of thought, traces back to 1654, when a French gambler, the Chevalier de Méré, sought the help of mathematician Blaise Pascal (1623–1662) in resolving a paradox in a popular dice game.

The details would take us too far afield, but suffice it to say that de Méré’s confusion anticipated several centuries of more recent confusion about the role of models as applied to the real world. Pascal was able to (quite brilliantly) resolve de Méré’s complaint that his expectations of winning didn’t match reality, but he succeeded because he had a small world to consider, rather than the large world we inhabit outside of card games and gambling.
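For readers who want a taste of what Pascal could actually compute, here is a minimal sketch of the calculation usually attributed to the episode, as it is commonly retold: de Méré found that betting on rolling at least one six in four throws of a die paid off over time, while betting on at least one double six in twenty-four throws of two dice did not, even though naive proportional reasoning suggests the two bets should be equivalent.

```python
# Classic de Méré calculation: the chance of at least one success
# equals 1 minus the chance of no successes at all.

# Bet 1: at least one six in 4 throws of a single die
p_bet1 = 1 - (5 / 6) ** 4         # about 0.5177, slightly better than even

# Bet 2: at least one double six in 24 throws of a pair of dice
p_bet2 = 1 - (35 / 36) ** 24      # about 0.4914, slightly worse than even

print(f"At least one six in 4 throws:         {p_bet1:.4f}")
print(f"At least one double six in 24 throws: {p_bet2:.4f}")
```

The trick works only because dice define a completely known sample space, exactly the sort of small world Pascal was free to consider.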
Still, the notion that we can, in effect, reduce the real world to a card game and compute probabilities for outcomes holds an almost irresistible charm for modern researchers hoping to look clever by applying probability and math to big-picture problems in social, psychological, environmental, and other contexts. Indeed, Kahneman and Tversky and others of their ilk have convinced generations of scholars, students, and the rest of us that we’re walking dunderheads. Let’s go back to the Linda problem and see why they — not we — are wrong.
Next: The Linda problem revisited — as if reality matters
*Note: Kahneman describes the Linda problem in his 2013 book Thinking, Fast and Slow. I’m following his description in that more recent work.