

Erik J. Larson


Machine Intelligence and Reasoning: We Are Not on a Path to AGI
AI guru François Chollet’s Abstraction and Reasoning Corpus (ARC) proves we’re not on a path to AGI
From Data to Thoughts: Why Language Models Hallucinate
The limits of today’s language models and paths to real cognition
Why Human Intelligence Thrives Where Machines Fail
We’re worried about AI and trust. We should be worried about something deeper
The Left Brain Delusion: Are We Steamrolling Human Agency?
The two hemispheres of our brain really do see the world differently.
Techno-futurists love to dream up visions of the future. Invariably, these are worlds where everything is under control—where every problem has a solution, and the future unfolds exactly as planned. We do seem to be moving toward some sort of centralized loss of agency. But what’s distinctive about the techno-futurist vision is the belief that this is not only inevitable but wonderful. Self-driving cars eliminate wasted time in traffic; smart cities like Songdo or Masdar City adjust every streetlight and service in real time to optimize efficiency. AI-driven healthcare, like the tools developed by Google’s DeepMind, promises to pinpoint diagnoses. Automated finance uses algorithms to manage our money and secure our futures. Everything works, all the time. But doomsday visions flip …

The New Tower of Babel
The old ways of arguing and understanding each other are on the decline, if not on life support.
We all know Babel (no, not the language-learning company). It’s the Biblical story in Genesis about God making so many languages and dialects and (let’s add) opinions that no one could understand each other or communicate effectively. One legacy of the triumph of digital technology and AI in every corner of our existence is that we’ve recreated this Babel. Let me try to unpack this, and bear with me if it seems I’m saying something derogatory about one belief or another — my aim is to avoid that game and try to explain the mechanism, the social and cultural story, by which our new Babel is ascendant and the old ways of arguing and understanding each other are on …

If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?
With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions, and neither do they.
In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal — to make them think like people — elusive. This brings us to the second problem, which ended up spawning an entire field, known as “Explainable AI.” Neural networks not only don’t know what they’re doing when they do it, they can’t in general explain to their designers or users why they made such-and-such a decision. They’re a black box; in other words, they are obstinately opaque to any attempt at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that with image recognition tasks like facial recognition, the network can’t explain …
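A minimal sketch of the point, not any real system: the two-layer network below uses made-up random weights standing in for a trained classifier. It emits a confident label but no rationale, and the only way to get anything like an “explanation” is to probe it from the outside, which is roughly what Explainable AI methods attempt.

```python
# Toy illustration of the "black box" problem: a small feed-forward network
# produces a confident label, but nothing in its parameters reads as a reason.
# The random weights stand in for a trained model (an assumption for the demo);
# the occlusion loop at the end is the kind of post-hoc probe that
# explainability methods rely on.
import numpy as np

rng = np.random.default_rng(0)

# Pretend this two-layer network was trained to sort 8x8 "images" into 3 classes.
W1, b1 = rng.normal(size=(64, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 3)), rng.normal(size=3)

def predict(x):
    """Forward pass: matrix products and squashing functions, nothing more."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()          # class probabilities

image = rng.normal(size=64)
probs = predict(image)
label = int(np.argmax(probs))
print(f"predicted class {label} with p={probs[label]:.2f}")

# The network offers no rationale, only numbers. To get an "explanation,"
# we occlude one pixel at a time and watch how the winning probability moves:
# a crude saliency estimate, imposed from outside the model.
saliency = np.zeros(64)
for i in range(64):
    occluded = image.copy()
    occluded[i] = 0.0
    saliency[i] = probs[label] - predict(occluded)[label]
print("pixels the probe says mattered most:", np.argsort(saliency)[-5:])
```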

Why, Despite All the Hype We Hear, AI Is Not “One of Us”
It takes an imaginative computer scientist to believe that the neural network knows what it’s classifying or identifying. It’s a bunch of relatively simple math.
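The “relatively simple math” claim can be made concrete. The sketch below uses hypothetical values and plain Python to show what one artificial neuron computes: a weighted sum pushed through a squashing function. Real networks do this millions of times over, but nowhere in the arithmetic is there a concept of a face, a cat, or anything else.

```python
# One artificial "neuron": a weighted sum plus a squashing function.
# The input and weight values are made up for illustration.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # logistic squashing

pixels  = [0.2, 0.9, 0.4]          # stand-in for image features
weights = [1.5, -0.7, 0.3]         # what training would have adjusted
score = neuron(pixels, weights, bias=0.1)
print(f"activation = {score:.3f}")  # a number, not an idea of what was seen
```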
The Present Shock We’re Experiencing
Our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking.
This is Digital McCarthyism
Far from being liberated by these technologies, we have been plunged back into the worst abuses of surveillance and privacy violation.
The notion that we’re getting somewhere, making progress, is remarkably durable. It survives wars, financial collapse, riots, scandals, stagnating wages, and climate change (to name a few). Though techno-futurists are also fond of AI apocalypse scenarios, where artificial intelligence somehow “comes alive,” or at any rate uses its superior intelligence to make an autonomous decision to wipe out humanity, much more ink has been spilled this century prognosticating indomitable technical progress, which somehow stands in for human progress generally. But sanguine belief in progress is belied by the actual events of the twenty-first century. Computers have gotten faster and AI more powerful, but digital technology has also been used to spread misinformation, make deepfakes, and conduct relentless cyberwarfare. Financial …

The Modern World’s Bureaucracy Problem
The Iron Law states that any market reform or government initiative aimed at shrinking bureaucracy ends up expanding it.
Is ChatGPT a Dead End?
There is still no known path to Artificial General Intelligence, and ChatGPT is not one.
What Mission Impossible Tells Us About AI Mythology
If you’re looking for an intelligent take on existential risk and superintelligent AI, the latest Mission Impossible movie is not for you.
Why ChatGPT Is Killing Off Traditional AI
We’re living in another AI “winter”
Don’t Expect AI to Revolutionize Science
Data science is a downstream phenomenon. Thinking isn’t.