
Why Human Intelligence Thrives Where Machines Fail

We're worried about AI and trust. We should be worried about something deeper

I’m increasingly concerned about where AI-driven automation is heading. Every week, I get bombarded with job offers on LinkedIn, WhatsApp, and email. Some are from big-name companies; others from startups with “the perfect role” for me. Lately, it’s harder to tell if the offers are genuine. Are real people behind this? Or AI?

Fat Tails and the problem with AI optimization

Much of today’s discussion simply assumes that AI is smart and getting smarter in a way that will either replace us or make us superhuman. The problem is, well, that’s not what’s happening. While we worry about (very real) issues with trust and bias, we’re ceding huge philosophical and cognitive space to the systems that we, after all, built. It’s frankly stupid. That’s why I’m writing about it here—to clear it up.

The concept of fat tails—distributions in which rare, extreme events occur far more often than the familiar bell curve predicts—should be at the center of our conversation about AI. Yes, you’ve likely heard “bell curve” objections to machine learning-based AI before. But merely grasping the idea of statistical averages isn’t enough.

I have colleagues who, hearing this, immediately start in on discussions about new AI that will capture outliers. But they’re missing the point about intelligence, so their theories will likewise be somewhat facile (sorry, it’s true). Outliers—the ones in the fat tails—aren’t just occasional serendipity, though they’re that too.

They’re precisely where intelligence actually happens. The world isn’t an average, and those weird distributions create the very environment in which natural intelligence operates. It’s a bit ironic, and sad, that we’re looking to “bell curve” machines for the future of intelligence, when optimizing on the bell curve is the one “sure bet” on which we will surely fail.
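
To make the fat-tail point concrete, here is a minimal sketch (my own illustration; the distributions and the cutoff are arbitrary choices) of how much more often a fat-tailed Student-t distribution produces extreme events than the thin-tailed bell curve does:

```python
# Minimal, illustrative sketch: how often do a bell curve and a fat-tailed
# distribution produce extreme "outlier" events? (Distributions and cutoff
# are arbitrary choices for illustration.)
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

normal = rng.standard_normal(n)      # thin-tailed bell curve
fat = rng.standard_t(df=2, size=n)   # Student-t with 2 degrees of freedom: fat tails

threshold = 5.0                      # an arbitrary "extreme event" cutoff
print("share beyond 5, normal:    ", np.mean(np.abs(normal) > threshold))
print("share beyond 5, fat-tailed:", np.mean(np.abs(fat) > threshold))
# The fat-tailed share comes out orders of magnitude larger: the "rare" isn't so rare.
```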

Fortuna isn’t an algorithm

Here’s a quick and probably well-known example from the history of science. In 1928, the Scottish bacteriologist Alexander Fleming (1881-1955) returned to his lab after a vacation to find something unexpected growing in one of his Petri dishes. Mold had contaminated the culture of Staphylococcus bacteria he’d been working on. But instead of discarding it, Fleming noticed something unusual: the bacteria around the mold were being destroyed. In one of the great moments of serendipity in modern science, he inferred a causal interaction from what appeared to be a mistake and discovered penicillin, the first true antibiotic.

The reason we humans can feel bullish about such proud moments is simple, yet we almost never zero in on it: machines don’t interact dynamically with their environment the way biological intelligence does. Fleming’s discovery wasn’t just solving a problem—it was the result of constant interaction with his surroundings and inferences based on wholly unexpected observations. The whole point of LLMs is to give the statistically best answer, which is to say, the expected one.
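
To see what “the expected answer” means mechanically, here is a toy sketch of a single next-token step (the vocabulary and scores are invented; no real model is involved): the model’s scores are turned into probabilities, and greedy decoding simply picks the most probable, least surprising continuation.

```python
# Toy next-token step (all numbers invented): softmax the model's scores,
# then pick the highest-probability continuation, i.e., the expected one.
import numpy as np

vocab = ["the", "a", "mold", "serendipity"]
logits = np.array([2.1, 1.4, -0.5, -3.0])   # made-up model scores for each word

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax over the toy vocabulary

print(vocab[int(np.argmax(probs))])         # prints "the": the safest, most expected choice
```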

Fortuna, or chance, is embedded in human cognition in a way that machines, reliant on predetermined data, simply can’t replicate. This point is of enormous significance, because it suggests we are thinking about intelligence so linearly that far and away the most important observation is the one most obscured.

Let me state this point in a different way. We don’t have a “black box” intelligence that simply replays prior training. Our brain learns dynamically, interacting with the environment in ways that lead to constant, unpredictable opportunities for insight. Einstein is a shopworn example but still makes the point grandly: he thought about physics while contemplating time on the back of a train, gazing at a clock behind him. What’s the point of that if you’re optimizing some function on data?

These moments—what the Romans called Fortuna, or what we might call luck or chance—are not just nice-to-have but integral to our intelligence. Sometimes major insights come from dreams—as with Kekulé’s realization that the benzene molecule is a ring—and sometimes they come because someone dropped some milk on the floor or your mom is visiting, or what have you. Almost nothing of any consequence comes from regurgitating a dataset.

Be wary of false prophets: AI isn’t heading for “AGI”

The cognitive difference is difficult to overstate. Human intelligence emerges because we are deeply embedded in our environments—a constant feedback loop of interaction that gives us a perpetual advantage. The machine model, no matter how well trained, doesn’t operate within this dynamic system. Not only is it not learning in real time; it infers from best fit, not from outliers. I use LLMs to spit out facts and figures that don’t come to mind—I’m not a calculator. I don’t use them to say something interesting. The more I interact with today’s AI, the more I realize we’re not much further along in thinking about real intelligence than we were decades ago. We’re still messing around with machines and shit-talking ourselves.

Abductive Reasoning and Dynamic Thinking

A similar dynamic played out during London’s cholera outbreak of 1854. At the time, most believed cholera spread through “miasma”—bad air. The physician John Snow saw that the outbreak in Soho was clustered around a single water pump. Hmmm. Snow made an abductive leap, inferring that the water, not the air, was spreading the disease—the dataset everyone else was using, as it were, was focused on the air. His investigation led to the removal of the pump’s handle, halting the outbreak and drastically improving our understanding of disease transmission.

John Snow (1813-1858)/public domain

Snow’s breakthrough didn’t come from data alone. Indeed, except in the most trivial of senses, it didn’t come from data at all.

Bottom Line

I’m perpetually having conversations about AI—about how it works now and what it might become. The discussion typically assumes AI is on some unstoppable cognitive trajectory, so that we need to turn our gaze to things like bias, trust, and data ethics. Sure, I get that. We need systems we can trust.

But we’re all missing the eight-hundred-pound gorilla in the room: true intelligence is found by moving away from larger datasets and away from statistical norms. Yes, there are statistical norms, and we make use of them in inference. It’s not that such inferences are non-existent but rather that they tell us very little about what we’re trying to understand: intelligence. We know neural networks can handle patterns that crystallize in large enough datasets. Unfortunately, that entire exercise has very little to do with AGI in the first place. Good luck with that. Silver lining: since people are pretty disastrously bad at discerning patterns in mountains of data, AI will always play a role in our broader cognitive story.

We’ve built these systems to optimize the world as we know it. But the world we know is just the start. When will researchers stop obsessing over training data and start talking about the one thing that makes us us: the ability to handle what we haven’t seen before? Until then, AI systems are playing catch-up—or rather, pretending to catch up—to a game we’ve been playing since day one.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
