Artificial Intelligence, Science and the Limits of Knowledge
In Part 3, I show that AI, like science, has limits. It depends on narrowing a problem: making it specific, discarding most possibilities, sealing it inside a representation and a specification.

One of science’s quiet strengths is its ability to chart the boundaries of the knowable.
The laws of thermodynamics reveal entropy’s inevitability. Einstein showed that nothing outruns light. Quantum mechanics gave us the uncertainty principle; chaos and complexity theory uncovered inherent unpredictability; mathematical logic proved that some truths can never be decided. Good science doesn’t just tell us what we can do — it tells us where we must stop.
AI belongs in this lineage of limits. Its successes depend on narrowing the problem: making it specific, discarding most possibilities, sealing it inside a representation and a specification. Deep Blue mastered chess, AlphaGo conquered Go, and LLMs like GPT-4 and GPT-5 dazzle with conversation — yet all are automation by definition: mechanical simulations of tasks that once required human intelligence. In LLMs, the representation is tokens; the specification is the transformer’s attention-and-prediction loop. However wide the surface domain is, the machinery is fixed.
Here’s the rub: asking how to make these systems “general” is asking how to remove the very constraints that made them work in the first place. That’s the futurist’s dilemma. Futurists point to Narrow AI triumphs as if they foreshadow AGI, but those triumphs — from chess engines to self-driving cars to LLMs — are demonstrations of boundaries, not breakthroughs toward mind.
Seen this way, all usable AI today is a form of engineered reduction: problems stripped down to what a machine can represent and compute. And the real question for AGI isn’t how much bigger we can make those reductions, but whether there are domains of intelligence that resist reduction altogether.
This, in the end, is why AI is still automation — and why the dream of machines that think like us remains just that: a dream. The real question we face isn’t whether AI will “wake up,” but how much of our human world we’re willing to hand over to machines.
Here are Parts 1 and 2 of my article:
Surprise: Artificial intelligence is still just automation
I wrote this in 2016, and it is still true in 2025. A reflection in three parts. In the projected but impossible AI future, we are “Orga,” organic life forms, and they “Mecha,” mechanical life forms that will improve and take over.
and
Wide AI, while still just automation, is a genuine advance
Part 2: Truly general intelligence is still a mystery. In fact, it’s more mysterious now than it was in 2016. Even when we get a quantum leap forward, it quickly signals that it too is a dead end for the bolder ambitions of true AGI.
