Mind Matters Natural and Artificial Intelligence News and Analysis
Woman approaches giant digital face futuristic technology
Image Credit: MdBakul - Adobe Stock

Wide AI, While Still Just Automation, Is a Genuine Advance

Part 2: Truly general intelligence is still a mystery. In fact, it’s more mysterious now than it was in 2016

Why doesn’t Wide AI escape the “still just automation” label? Its outputs are still prone to illogical, mindless “hallucinations” that expose the fundamentally statistical nature of its inference. Wide AI models are induction engines, not minds. And their uses track perfectly with what we expect from automation: replacing cognitive work.

They raise familiar fears about deskilling, erosion of expertise, and job loss — as industries adopt LLMs to eliminate or simplify entire categories of work. We’ve simply traded the spinning frames of Luddite England for clusters of GPUs in cloud data centers.

Here’s the first part of my three-part article on AI as automation.

Looking at the matter from a philosophical perspective, ten years on, Wide AI is a genuine innovation. It was made possible by Moore’s Law and by OpenAI’s early championing of the “scaling hypothesis” — the idea that bigger models, trained on ever-larger datasets, yield better results. That idea has driven a gold rush for massive datasets and GPU megacampuses. In other words, the automation argument I made nearly a decade ago has only grown stronger.

AI is exactly automation — and seeing it clearly as such is the first step toward figuring out how to use it well, and what to guard against.

What Narrow AI and Wide AI really are

Strip away the hype and here’s how today’s “intelligent” systems — from Deep Blue to AlphaGo to GPT-5 — actually work:

Every AI system, no matter how dazzling, begins with two engineering moves:

Representation. Translate the problem into a form a computer can work on — a chessboard into a game tree, speech into a sequence of phonemes, language into tokens and embeddings.

Specification. Define a finite sequence of computational steps that can operate on that representation to produce outputs — minimax search in chess, convolutional layers in vision, transformer attention in language.

If you can’t represent and specify the problem, you can’t automate it. If you can, you’ve already narrowed it.
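The two moves can be made concrete with a toy example. Below is a minimal sketch of my own (not code from any system discussed here): tic-tac-toe represented as a tuple of nine cells, with minimax search as the specification. Within this tiny domain the program plays perfectly; outside it, the program has nothing to say at all, which is exactly the narrowing described above.

```python
# Representation: a tic-tac-toe board as a tuple of 9 cells ("X", "O", or None).
# Specification: minimax search over that representation.

from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best achievable score for 'X': +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full with no winner: a draw
    scores = []
    for i in moves:
        nxt = board[:i] + (player,) + board[i+1:]
        scores.append(minimax(nxt, "O" if player == "X" else "X"))
    return max(scores) if player == "X" else min(scores)

empty = (None,) * 9
print(minimax(empty, "X"))  # perfect play from both sides yields a draw: 0
```

The point of the sketch is negative as much as positive: nothing in this program generalizes. Hand it a checkers position and it fails, because checkers was never part of the representation.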

That’s why Deep Blue could crush Garry Kasparov at chess but couldn’t play checkers. It’s why AlphaGo could beat Lee Sedol but couldn’t hold a conversation. Even the latest large language models still live inside these constraints. Their breadth of conversation comes from scaling statistical token prediction over a massive training distribution, not from breaking free of representation and specification.

In fact, if Wide AI had introduced something truly new to the “narrowness trap” of AI, we’d see a path opening up to generalize language models and make continuing progress toward the Holy Grail of AGI. We don’t see that. The limits of the scaling hypothesis, so fecund initially, are now evident in the failure of deep-pocketed companies like OpenAI to continue the Wide AI story: newer releases like GPT-5 have not delivered the next leap.

Truly general intelligence is still a mystery. In fact, it’s more mysterious now than it was in 2016. Even when we get a quantum leap forward, it quickly proves to be a dead end for the bolder ambitions of true AGI. In 2025, it has become clear that the more things change, the more they stay the same.

The illusion that AI is “getting smarter”

AI chatbot with educational icons connected to a person using a laptop
Image Credit: ARdesignAi - Adobe Stock

Narrow AI — and now Wide AI — doesn’t naturally expand into general intelligence. It gets more powerful inside its domain. But the very process that makes it work — reducing the world to a representation and running a specification on it — strips away the open-ended flexibility that a mind would need.

In the case of large language models (LLMs), often called chatbots, the representation is a token sequence, and the specification is the statistical machinery that generates and interprets natural language, standing in for syntax, semantics, and pragmatics. The barrier to AGI here is a restatement of the engineering problem: endless sequences of tokens, embedded into vectors to compute similarity and simulate meaning, still hit up against the basic problem that gleaning probabilistic information about language is not the same as understanding it. Let this be yet another lesson for us.
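The phrase “embedded into vectors to compute similarity” can be shown in miniature. The sketch below uses invented four-dimensional vectors (real models learn embeddings with thousands of dimensions); the words and numbers are illustrative assumptions, not real model weights. Cosine similarity scores “king” and “queen” as near neighbors and “sofa” as distant, yet nothing in the arithmetic involves meaning.

```python
# Toy word embeddings: hypothetical 4-dimensional vectors invented for
# illustration. Cosine similarity measures the angle between vectors,
# which models use as a proxy for relatedness.

import math

embeddings = {
    "king":  [0.90, 0.80, 0.10, 0.30],
    "queen": [0.88, 0.82, 0.15, 0.28],
    "sofa":  [0.10, 0.20, 0.90, 0.70],
}

def cosine(u, v):
    """Cosine of the angle between vectors u and v (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(embeddings["king"], embeddings["queen"]))  # high: near 1.0
print(cosine(embeddings["king"], embeddings["sofa"]))   # much lower
```

The similarity scores fall out of dot products and square roots alone; the system “knows” nothing about royalty or furniture.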

Why does AI keep succeeding at ever more complex tasks?

It succeeds because engineers find clever ways to represent narrow slices of the world as computational problems, then write algorithms to operate within those boundaries. That’s how checkers, chess, and Go went from human pastimes to AI showcases.

But that same reduction seals the system in. The chess program doesn’t play Go; the Go program doesn’t hold a conversation. The representation and specification are tuned to a single domain, excluding almost everything else. Narrow AI wins by narrowing the world.

Digital chatbot, robot application, conversation assistant, AI artificial intelligence concept
Image Credit: jittawit.21 - Adobe Stock

Large language models don’t change this. Yes, Wide AI can simulate conversation across many topics, but its “width” comes from scaling statistical token prediction over vast datasets, not from escaping representation and specification. The representation is text-as-tokens; the specification is the transformer’s attention and prediction loop. Broader imitation is not deeper generality. Their telltale flaws — hallucinated facts, brittle reasoning — are symptoms of that narrowing.
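What “statistical token prediction” means can be seen in a deliberately tiny form: a bigram model that counts which word follows which in its training text and predicts the most frequent successor. This is my own toy sketch, orders of magnitude simpler than a transformer, but the prediction-loop character is the same in kind: output is driven by frequencies in the training distribution, and anything outside that distribution draws a blank.

```python
# A minimal bigram "language model": count successor frequencies in a
# training text, then predict the most common next word.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    """Most frequent successor seen in training, or None if never seen."""
    if token not in follows:
        return None  # outside the training distribution: nothing to say
    return follows[token].most_common(1)[0][0]

print(predict("the"))  # 'cat' ("the cat" occurs twice, "the mat" once)
print(predict("dog"))  # None: 'dog' was never represented
```

Scaling this idea up to trillions of tokens and billions of parameters widens the coverage enormously, but it does not change what the loop is doing.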

IBM’s Watson was no different: human engineers reworked Jeopardy! into a fixed pipeline of text parsing, clue interpretation, and retrieval. Once built, it wasn’t going to start reading novels for fun. Wide AI’s architecture is likewise fixed; its breadth is a trick of scale, not a step toward general intelligence.

Given this, believing these systems will “wake up” as sentient minds is a leap of faith. The closer you look, the wider the gap between narrow automation and open-ended thought.

So what other paths exist? Let’s broaden the discussion.

Next: Artificial intelligence, science, and the limits of knowledge

Here’s the first part of my three-part article: “Surprise: Artificial intelligence is still just automation.” I wrote it in 2016, and it is still true in 2025.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
