

Erik J. Larson


Stranger Things: Why Mad Scientists Are Mad
At the highest levels, creativity seems to bypass the deliberate, structured thought process altogether
The Slow Decline of a Key Aspect of Creativity
The mechanization of mind is changing how we think about creativity — and not in a good way
Part 2: The Fiction of Generalizable AI: How to Game the System
Progress toward real generalization, by any substantive measure, is nil. Perhaps we should reexamine the very concept of the “I” in AI
The Fiction of Generalizable AI: A Tale in Two Parts
Why intelligence isn’t a linear scale — and why true generalization remains unsolved
The Linda Problem Revisited, As If Reality Matters
Part 2: AI enthusiasts use false claims about humans' “natural stupidity” to bolster claims for machine intelligence
Humans Aren’t That Biased — and Machines Aren’t That Smart
Part 1: At an upcoming conference on AI, I will be puncturing that particular AI enthusiast’s fantasy
The Last Humanist: What the Digital Age Can’t Replace
My review of Superbloom by Nicholas Carr, perhaps the last critic standing, with thoughts on some of his earlier books
Why Smart Cities Are a Dumb Idea
Seattle, in its current state, feels like the ghost of a promise — an uncanny blend of technological optimism and visible collapse
How Fruit Flies, Bees, and Squirrels Beat Artificial Intelligence
AI researchers assume they are on the path to intelligence, yet intelligence itself remains a mystery, and many animals outperform current AI
AI in Biology: The Future AI Didn’t Predict
It doesn’t look like the past. Physical systems that evolve over time but don’t follow a fixed formula have always presented a deep challenge to AI
AI in Biology: The Disease Connection — When Proteins Go Wrong
Some of the most crucial proteins for human health—the ones we need to understand most urgently—are the very ones that AI has the hardest time modeling
AI in Biology: So Is This the End of the Experiment? No.
But a continuing challenge is that many of the most biologically important proteins don’t adopt a single stable structure. Their functions depend on structural fluidity
AI in Biology: What Difference Did the Rise of the Machines Make?
AI works very well for proteins that lock into a single configuration, as many do. But intrinsically disordered ones don’t play by those rules
AI in Biology: AI Meets Intrinsically Disordered Proteins
Protein folding — the process by which a protein arrives at its functional shape — is one of the most complex unsolved problems in biology
Why Humans Aren’t That Biased, and Machines Aren’t That Smart
Claims about the cognitive biases that supposedly overwhelm our judgment should be taken with a helping of salt
Machine Intelligence and Reasoning: We Are Not on a Path to AGI
AI guru François Chollet’s Abstraction and Reasoning Corpus (ARC) proves we’re not on a path to AGI
From Data to Thoughts: Why Language Models Hallucinate
The limits of today’s language models and paths to real cognition
Why Human Intelligence Thrives Where Machines Fail
We're worried about AI and trust. We should be worried about something deeper
The Left Brain Delusion: Are We Steamrolling Human Agency?
The two hemispheres of our brain really do see the world differently
Techno-futurists love to dream up visions of the future. Invariably, these are worlds where everything is under control—where every problem has a solution, and the future unfolds exactly as planned. We do seem to be moving toward some sort of centralized loss of agency. But what’s distinctive about the techno-futurist vision is the belief that this is not only inevitable but wonderful. Self-driving cars eliminate wasted time in traffic; smart cities like Songdo or Masdar City adjust every streetlight and service in real time to optimize efficiency. AI-driven healthcare, like the tools developed by Google’s DeepMind, promises to pinpoint diagnoses. Automated finance uses algorithms to manage our money and secure our futures. Everything works, all the time. But doomsday visions flip…