Mind Matters Natural and Artificial Intelligence News and Analysis
Detective board with crime scene photos, stickers, clues and red thread, closeup
Image Credit: New Africa - Adobe Stock

AI’s Not “The Answer” In Our World of Mysteries, Not Puzzles

A focus on AI encourages us to see problems that are really mysteries as puzzles, which makes addressing them much harder

The Cold War (1947‒1991) was scary, but it wasn’t particularly complicated. America and its allies needed more information to respond to threats and to chart effective strategies for containing the Soviet Union. The concepts that emerged from this global standoff are the stuff of James Bond films and nuclear holocaust PSAs: covert ops, pens that secretly snapped photos, satellite surveillance, nuclear escalation, and the slow-motion debacle of the Bay of Pigs.

To win this new kind of war, the West consolidated treaties into NATO, monitored communist insurgencies in the jungles of Central America, and recruited a generation of patriotic citizens who dreamed of peace on earth but drew bright lines between the free world and the enemy.

Analysts at the CIA and other agencies compiled reconnaissance and intelligence briefings. Weapons manufacturing was delegated to a growing constellation of defense contractors, and the “military-industrial complex” that Eisenhower warned against took shape — producing artillery, vehicles, and long-range missiles to counter an adversary whose own strategic logic mirrored ours.

The Soviets played by the same general rules, even if their moves were cloaked in secrecy and deception. It was a dangerous game, but it was still a game. The Cold War, in other words, was a puzzle. The board was fixed; the challenge was to find the missing pieces.

AI to the rescue! …or not

Ironically, our obsession today with artificial intelligence — hailed as the spearpoint of modernity — actually marks a return to puzzle thinking. By embracing AI, we’ve remade the world as a problem space ideally suited to computational logic, but deeply out of sync with the nature of our own time, which differs markedly from the Cold War.

Detective mystery still life: Sherlock Holmes detective hat, smoking pipe, retro magnifying glass, and book on a wood table. Image Credit: Victor Moussa - Adobe Stock

The problems that confront us now are not puzzles. They are mysteries. And AI, for all its power, is pulling us in the wrong direction.

Despite the hype around “Artificial Super Intelligence” (ASI) or “Artificial General Intelligence” (AGI), today’s AI is most useful when we already know the direction and can define the boundaries. We set objectives, curate datasets, and let massive computational systems — “big iron” — detect patterns and optimize outputs.

This is not thinking in any meaningful sense. It’s a statistical game. If we want a model to detect troop movements from satellite imagery, we feed it scores of annotated images until it can classify with confidence. If we want it to generate a face, we give it millions of examples of faces. These are puzzles solved with data, so it’s no wonder that AI has been used to play games.

The role of AI in games

Discrete board games like chess have been part of AI since the beginning. In the 1950s, Alan Turing (1912–1954) and Claude Shannon (1916–2001) designed game-playing algorithms using minimax logic — computing the move that maximizes your chance of winning while minimizing your opponent’s. When IBM’s Deep Blue defeated Garry Kasparov in 1997, it didn’t out-think him. It out-computed him.
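The minimax logic that Turing and Shannon described can be sketched in a few lines. Below is an illustrative Python toy for Tic-Tac-Toe (my own encoding and scoring convention, not their original programs): it exhaustively scores every continuation, assuming both sides play perfectly, which is exactly the kind of closed, fixed-rule search that makes games tractable for machines.

```python
# Minimax for Tic-Tac-Toe: score +1 if X wins, -1 if O wins, 0 for a draw.
# X picks the move that maximizes the score; O picks the move that minimizes it.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for the side to move, searching the full tree."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                               # try the move
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "                                  # undo it
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# With perfect play from both sides, Tic-Tac-Toe is a draw:
score, move = minimax([" "] * 9, "X")
```

The point of the sketch is how little “thinking” is involved: the program simply enumerates the whole game tree. That brute-force strategy only works because the board, the rules, and the win condition are fixed in advance.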

Today, superhuman chess engines run on smartphones. More complex games like Go have also fallen. In 2016, DeepMind’s AlphaGo beat world champion Lee Sedol, stunning experts with moves no human had considered. Its successor, AlphaZero, trained itself from scratch to master chess, shogi, and Go without human data, given only the rules.

AI now plays poker, negotiates in diplomacy simulators, and writes code. It continues to extend its reach into domains once thought uniquely human. But the field rarely stops to ask a basic question: do deterministic games tell us anything about how life works? They don’t. They never did.

As philosopher Jerry Fodor (1935–2017) once put it, lamenting the difficulties AI researchers had making AI resemble human thinking, we’re playing three-dimensional chess with tools built for Tic-Tac-Toe. Except today’s world isn’t even a game of three-dimensional chess. It’s unstable, opaque, and full of paradox. AI, by contrast, is ironically rooted in Cold War logic — born to serve closed-world problems. One of its earliest applications? Calculating missile trajectories.

Puzzles vs. mysteries

The common feature of a puzzle is that more information helps solve it. In a mystery, more information often keeps it unsolved. We already live with too much information. What we have is often contradictory. Worse, it’s polluted — fakes, forgeries, noise. So we turn to computation to manage the deluge, not realizing that our own minds might make data-crunching unnecessary, or even unwise. The mystery we’re trying to solve may not be buried in the dataset at all. But we’re stuck crunching through it anyway, feeding it to AI systems built to detect patterns in puzzles, not meaning in mysteries.

Aspect: Nature
· Puzzle: Closed system with fixed rules and boundaries
· Mystery: Open-ended, ambiguous, often lacking clear structure

Aspect: Goal
· Puzzle: Find the missing piece or correct solution
· Mystery: Interpret clues to uncover meaning or hidden dynamics

Aspect: Method
· Puzzle: Data collection, logic, computation
· Mystery: Intuition, insight, inference

Aspect: AI Strength?
· Puzzle: Yes; excels at rule-based, bounded problems
· Mystery: No; lacks contextual judgment or understanding of clues

Aspect: Human Edge
· Puzzle: Low (especially at scale or in pattern-heavy domains)
· Mystery: High, especially in meaning-making and context-sensitive insight

Aspect: Examples
· Puzzle: Chess, logistics, route planning, image recognition
· Mystery: Espionage, diplomacy, counterterrorism, love, grief

The importance of clues, not just data

The distinction between data and a clue is crucial. Computers can’t see clues, because a clue only exists in a dynamically changing world: something counts as a clue when it is noticed in the context of a large, unstructured problem that demands explanation.

Detective board with crime scene photos, stickers, clues and red thread, closeup. Image Credit: New Africa - Adobe Stock

They’re not just isolated signals; they’re signs of something, and that something depends on purpose and perspective. A half-drunk soda can in my kitchen might be data if I’m tracking my own behavior: how often I leave cans unfinished. But it becomes a clue if I’m trying to figure out who left it there (and it wasn’t me). The object hasn’t changed. The frame has.

Data just sits there. Clues reveal themselves. In other words, clues are data that point. They become significant through narrative inference — through the act of noticing, of interpreting, of asking: what does this mean, and why is it here?

Computers, by contrast, are trained to generalize across fixed datasets. They don’t know what matters unless the information backdrop is held stable — like a board game with fixed rules. No wonder AI is so good at playing games. The core assumption behind scaling AI to solve problems requiring more “intelligence” is that if we feed the system enough data, it will eventually infer what we need. That’s true for puzzles. It’s irrelevant for mysteries. Mysteries are never solved by data crunching, which means they’re never candidates for solutions by AI.

Humans are terrible at crunching data but remarkably good at seeing clues. That’s why we still outmatch our machines in domains where meaning matters. But instead of building tools to elevate this strength, we keep designing systems that bypass it. We treat our own judgment as a liability, and outsource it to engines trained only to sort what’s already been labeled. And so we end up less like detectives — and more like assistants to the machine, dutifully organizing fragments in the hope that the answer will assemble itself.

Next: Mistaking the world for a puzzle risks a new McCarthyism


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.