Wide AI, While Still Just Automation, Is a Genuine Advance
Part 2: Truly general intelligence is still a mystery. In fact, it’s more mysterious now than it was in 2016.

Why doesn’t Wide AI escape the “still just automation” label? Its outputs are still prone to illogical, mindless “hallucinations” that expose the fundamentally statistical nature of its inference. Wide AI models are induction engines, not minds. And their uses track perfectly with what we expect from automation: replacing cognitive work.
They raise familiar fears about deskilling, erosion of expertise, and job loss — as industries adopt LLMs to eliminate or simplify entire categories of work. We’ve simply traded the spinning frames of Luddite England for clusters of GPUs in cloud data centers.
Looking at the matter from a philosophical perspective ten years on, Wide AI is an obvious innovation. It was made possible by Moore’s Law and by OpenAI’s early championing of the “scaling hypothesis” — the idea that bigger models, trained on ever-larger datasets, will yield better results. That hypothesis has driven a gold rush for massive datasets and GPU megacampuses. But scaling is an engineering achievement, not a new kind of mind, which is why the automation argument I made nearly a decade ago has only grown stronger.
AI is exactly automation — and seeing it clearly as such is the first step toward figuring out how to use it well, and what to guard against.
What Narrow AI and Wide AI really are
Strip away the hype and here’s how today’s “intelligent” systems — from Deep Blue to AlphaGo to GPT-5 — actually work:
Every AI system, no matter how dazzling, begins with two engineering moves:
Representation. Translate the problem into a form a computer can work on — a chessboard into a game tree, speech into a sequence of phonemes, language into tokens and embeddings.
Specification. Define a finite sequence of computational steps that can operate on that representation to produce outputs — minimax search in chess, convolutional layers in vision, transformer attention in language.
If you can’t represent and specify the problem, you can’t automate it. If you can, you’ve already narrowed it.
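To make the two moves concrete, here is a toy sketch in Python (my own illustration, not any real system’s code): the representation is a tic-tac-toe board encoded as a tuple of nine cells, and the specification is an exhaustive minimax search over that encoding.

```python
# Toy illustration of the two engineering moves (invented for this
# article, not any production system's code).
# REPRESENTATION: a tic-tac-toe board as a tuple of nine cells.
# SPECIFICATION: exhaustive minimax search over that representation.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; "X" maximizes, "O" minimizes."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        child = board[:m] + (player,) + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        results.append((score, m))
    return max(results) if player == "X" else min(results)

# The program is sealed inside its representation: hand it anything
# other than a nine-cell board and the "intelligence" evaporates.
print(minimax((" ",) * 9, "X"))  # (0, 8): perfect play ends in a draw
```

Feed this program anything the encoding doesn’t anticipate and it has nothing to say; the narrowing is built in.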
That’s why Deep Blue could crush Garry Kasparov at chess but couldn’t play checkers. It’s why AlphaGo could beat Lee Sedol but couldn’t hold a conversation. Even the latest large language models still live inside these constraints. Their breadth of conversation comes from scaling statistical token prediction over a massive training distribution, not from breaking free of representation and specification.
In fact, if Wide AI had introduced something truly new that escaped the “narrowness trap” of AI, we’d see a path opening up to generalize language models and make continuing progress toward the Holy Grail of AGI. We don’t see that. The limits of the scaling hypothesis, so fecund initially, are now evident in the struggles of deep-pocketed companies like OpenAI to continue the Wide AI story with newer releases like GPT-5.
Truly general intelligence is still a mystery. In fact, it’s more mysterious now than it was in 2016. Even when we get a quantum leap forward, it quickly reveals itself as another dead end for the bolder ambitions of true AGI. In 2025, it has been made clear that the more things change, the more they stay the same.
The illusion that AI is “getting smarter”
Image Credit: ARdesignAi

Narrow AI — and now Wide AI — doesn’t naturally expand into general intelligence. It gets more powerful inside its domain. But the very process that makes it work — reducing the world to a representation and running a specification on it — strips away the open-ended flexibility that a mind would need.
In the case of large language models (LLMs), often called chatbots, the representation is a token sequence, and the specification is the prediction loop that must cover the syntax, semantics, and pragmatics of generating and interpreting natural language. The barrier to AGI here is really a restatement of the engineering problem: endless sequences of tokens, embedded into vectors to compute similarity and simulate meaning, still hit up against the basic fact that gleaning probabilistic information about language is not the same as understanding it. Let this be yet another lesson for us.
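To see what “embedded into vectors to compute similarity” amounts to, here is a toy sketch with made-up four-dimensional vectors (real models learn embeddings with thousands of dimensions; these numbers are invented for illustration):

```python
import numpy as np

# Made-up 4-d embeddings (real models learn thousands of dimensions);
# cosine similarity between vectors stands in for "meaning".
emb = {
    "king":   np.array([0.9, 0.8, 0.1, 0.0]),
    "queen":  np.array([0.9, 0.7, 0.9, 0.0]),
    "banana": np.array([0.1, 0.0, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["king"], emb["queen"]))   # high: statistically "similar"
print(cosine(emb["king"], emb["banana"]))  # low: statistically "dissimilar"
# The numbers encode co-occurrence statistics, not comprehension.
```

The arithmetic is real; the “understanding” is a label we project onto it.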
Why does AI keep succeeding at ever more complex tasks?
It succeeds because engineers find clever ways to represent narrow slices of the world as computational problems, then write algorithms to operate within those boundaries. That’s how checkers, chess, and Go went from human pastimes to AI showcases.
But that same reduction seals the system in. The chess program doesn’t play Go; the Go program doesn’t hold a conversation. The representation and specification are tuned to a single domain, excluding almost everything else. Narrow AI wins by narrowing the world.
Image Credit: jittawit.21

Large language models don’t change this. Yes, Wide AI can simulate conversation across many topics, but its “width” comes from scaling statistical token prediction over vast datasets, not from escaping representation and specification. The representation is text-as-tokens; the specification is the transformer’s attention and prediction loop. Broader imitation is not deeper generality. Their telltale flaws — hallucinated facts, brittle reasoning — are symptoms of that narrowing.
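For readers who want to see what “the transformer’s attention and prediction loop” means mechanically, here is a minimal sketch of scaled dot-product attention with random toy weights (sizes and values are invented; no real model is being run):

```python
import numpy as np

# Minimal sketch of scaled dot-product attention, the core of the
# transformer "specification" (toy sizes, random weights; no real model).
rng = np.random.default_rng(0)
seq_len, d = 5, 8                      # 5 tokens, 8-d embeddings
X = rng.normal(size=(seq_len, d))      # the REPRESENTATION: text as vectors

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)          # how strongly each token attends to others
mask = np.triu(np.ones((seq_len, seq_len)), k=1).astype(bool)
scores[mask] = -np.inf                 # causal mask: only look backward
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
out = weights @ V                      # mix values by attention weights

# Everything downstream is more of the same matrix arithmetic, ending in
# a probability distribution over the next token: prediction, not understanding.
print(out.shape)  # (5, 8)
```

Nothing in this loop leaves the representation; it is matrix arithmetic over token vectors, all the way down.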
IBM’s Watson was no different: human engineers reworked Jeopardy! into a fixed pipeline of text parsing, clue interpretation, and retrieval. Once built, it wasn’t going to start reading novels for fun. Wide AI’s architecture is likewise fixed; its breadth is a trick of scale, not a step toward general intelligence.
Given this, believing these systems will “wake up” as sentient minds is a leap of faith. The closer you look, the wider the gap between narrow automation and open-ended thought.
So what other paths exist? Let’s broaden the discussion.
Next: Artificial intelligence, science, and the limits of knowledge
Here’s the first part of my three-part article: “Surprise: Artificial intelligence is still just automation.” I wrote it in 2016, and it is still true in 2025.
