Mistaking the World for a Puzzle Risks a New McCarthyism
In Part 2, I look at the imaginary world we invent when we succumb to puzzle thinking. AI boosters promise an imaginary future where the big machine brain will solve our deepest human challenges: world hunger, climate change, war, poverty. But these are not puzzles waiting to be solved with more data. They are mysteries: layered, shifting, full of conflicting incentives and moral trade-offs. They demand insight and judgment, not just scale. The clues that matter (why people starve, why peace fails, why the planet overheats) are not hidden in larger and larger datasets. They're entangled in history, politics, psychology, and human meaning.
Framing the world as a puzzle lets us justify an AI-first agenda. But it comes at a cost: we downgrade real intelligence — the kind that grasps nuance, context, and paradox. We pretend that more data means deeper understanding. In doing so, we commit ourselves to an impossible game: one where tech-centric solutions lean on data-driven systems but trap us in simplicity and puzzle-solving logic. Eventually, the real mysteries break through.
21st century mysteries
September 11 offered a puzzle in hindsight — data could be pieced together after the fact. But it was a mystery in real time. We had no frame for the idea that hijackers might use passenger planes as missiles. We were watching the wrong dashboard.
In his 2009 book What the Dog Saw, Malcolm Gladwell draws on the same distinction I use here, national security analyst Gregory Treverton's division between puzzles and mysteries, to illuminate another case of catastrophic misreading: the Enron scandal. Enron's collapse in the early 2000s wasn't the result of hidden fraud or secret documents; the evidence was all there in the company's public filings. The problem wasn't that we lacked information. It was that we failed to interpret it. Enron wasn't a puzzle to be cracked; it was a mystery hiding in plain sight, buried under too much data and too little judgment.

The same could be said of the 2007–2008 financial collapse: we had more data than ever, but no shared lens to make sense of it. Our modern world is awash in puzzle thinking, even as the problems we face increasingly resemble open-ended mysteries. Yet our tools remain stubbornly unchanged — data-driven, obsessed with scale, and built atop ever-larger towers of computational iron.
Today, critics of AI and centralized Big Data are often dismissed as backward-looking, even anti-scientific. But they are not Luddites. They are, more often, the last defenders of human judgment — trying to preserve the possibility of meaning in a world that increasingly sees cognition as just computation, and forgets that mind is not only needed, but irreplaceable.
Sherlock vs. Deep Blue
Solving mysteries requires a distinctively human capacity to sleuth, an act that fires the imagination not through mechanistic data crunching masquerading as mind, but through human insight and intuition. Sherlock Holmes captivates us — not because he sees more, but because he sees differently. He claims merely to observe, but we know he’s performing a kind of magic no computer could replicate: drawing meaning from the same facts that leave others — Lestrade, the bumbling inspector, for example — completely stumped.
Sleuths transform what others treat as raw data into clues. They infer what lies behind appearances. They see what isn’t hidden in complexity, like patterns in data, but often sits in plain view — if only we had the eyes to see it. In short, they treat problems not as puzzles to be completed, but as mysteries to be unraveled.
Culture tends to celebrate the Herculean insight of sleuths because we all recognize in the sleuth powers that are distinctively human: powers that deserve to be held in high regard, protected, and fostered through education and training, and through our media and entertainment.

Even in the modern world, detectives, the real-world sleuths, don't look to databases or algorithms to find clues. They use them to test hypotheses first born of intuition, experience, and hunch. Researchers like Gerd Gigerenzer, author of the 2007 book Gut Feelings, study not cognitive bias (a.k.a. "human stupidity") but intuition and insight: gut instinct, in other words. He and others in this field point out that humans often depart from sequential machine logic to solve problems. In this frame, the machine is properly only a tool. The mystery is left for the real Big Brain: the mind.
Mysteries as puzzles in disguise?
AI boosters insist that mysteries are merely puzzles in disguise — problems of scale solvable with more data and faster machines. This attitude diminishes the mind by trying to replace mystery with puzzle. We’re hard at work squaring that circle today, as large language models and generative AI spread rapidly across the world and are pressed into service everywhere. AI has its place, to be sure. There are still puzzles to solve, and data to crunch. But all the data in the world doesn’t make a mystery disappear. And pretending otherwise only deepens our confusion.
Our fetish for data crunching and analytical precision would have been perfectly suited to the Cold War, when the world really was a puzzle. The broad contours of our standoff with the Soviet bloc were stable and legible. Strategy meant assembling the right information, slotting it into a known framework, and outmaneuvering a predictable adversary.
Today's world is anything but stable or tractable; it no longer resembles a game with two opposing sides. It's not a closed system but a tangle of opaque alliances, overlapping supply chains, and temporary arrangements with shifting friends and enemies. We fight asymmetric battles against actors who don't share our playbook. The data is there, often overwhelming in quantity, but the context to make sense of it isn't. We've wandered into a multidimensional mystery with tools built for solving board games.
AI systems like DeepMind's AlphaZero may have conquered chess and Go, but the world is not a game with bounded rules and optimal strategies. It's not closed, it's not knowable in advance, and no amount of computing power will make it so. The fantasy that AI will usher in a new age of peace, prosperity, and post-scarcity coordination is just that: a fantasy.
AI has its uses, but it won't excuse us from sleuthing. It won't rescue us from history. Believing otherwise isn't just naive; it's a symptom of our recursive modern condition. We've revived a puzzle world we once outgrew, convinced it still fits. But it doesn't. And the cost of getting it wrong keeps climbing.
This mindset, puzzle-first, data-obsessed, and enthralled by fantasies of tech saviors, doesn't just misread the world. It leaves us exposed. And it risks trapping us in a new McCarthyism.
Here's Part 1: AI's not "The Answer" in our world of mysteries, not puzzles. A focus on AI encourages us to see problems that are really mysteries as puzzles, which makes addressing them much harder. Humans are terrible at crunching data but remarkably good at seeing clues. That's why we still outmatch our machines in domains where meaning matters.
