
Artificial Intelligence, Science and the Limits of Knowledge

In Part 3, I show that AI, like science, has limits. Its successes depend on narrowing a problem: making it specific, discarding most possibilities, sealing it inside a representation and a specification.

One of science’s quiet strengths is its ability to chart the boundaries of the knowable.

The laws of thermodynamics reveal entropy’s inevitability. Einstein showed that nothing outruns light. Quantum mechanics gave us the uncertainty principle; chaos and complexity theory uncovered inherent unpredictability; mathematical logic proved that some truths can never be decided. Good science doesn’t just tell us what we can do — it tells us where we must stop.

AI belongs in this lineage of limits. Its successes depend on narrowing the problem: making it specific, discarding most possibilities, sealing it inside a representation and a specification. Deep Blue mastered chess, AlphaGo conquered Go, and LLMs like GPT-4 and GPT-5 dazzle with conversation — yet all are automation by definition: mechanical simulations of tasks that once required human intelligence. In LLMs, the representation is tokens; the specification is the transformer’s attention-and-prediction loop. However wide the surface domain is, the machinery is fixed.
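The fixed token-in, token-out machinery described above can be sketched in a few lines. This is a toy illustration, not a real transformer: a hand-made bigram table stands in for the attention-and-prediction layers, and all names here are hypothetical. The point is that however the prediction table is filled in, the surrounding loop, the "specification," never changes.

```python
# Toy sketch of the token-prediction loop (illustrative only).
# A real LLM replaces this lookup table with learned attention
# layers, but the loop around it is the same fixed machinery.

BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def generate(prompt_tokens, n_steps):
    """Greedy next-token loop: representation is a token sequence."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        nxt = BIGRAMS.get(tokens[-1])  # predict the next token from context
        if nxt is None:                # nothing to predict: the machinery stops
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"], 4))  # ['the', 'cat', 'sat', 'on', 'the']
```

Widening the table widens the surface domain, but the loop itself, predict, append, repeat, is the sealed specification the essay describes.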

Here’s the rub: asking how to make these systems “general” is asking how to remove the very constraints that made them work in the first place. That’s the futurist’s dilemma. Futurists point to Narrow AI triumphs as if they foreshadow AGI, but those triumphs — from chess engines to self-driving cars to LLMs — are demonstrations of boundaries, not breakthroughs toward mind.

Seen this way, all usable AI today is a form of engineered reduction: problems stripped down to what a machine can represent and compute. And the real question for AGI isn’t how much bigger we can make those reductions, but whether there are domains of intelligence that resist reduction altogether.

This, in the end, is why AI is still automation — and why the dream of machines that think like us remains just that: a dream. The real question we face isn’t whether AI will “wake up,” but how much of our human world we’re willing to hand over to machines.

Here are Parts 1 and 2 of my article:

Surprise: Artificial intelligence is still just automation I wrote this in 2016. And it is still true in 2025. A reflection in three parts. In the projected but impossible AI future, we are “Orga,” organic life forms, and they “Mecha,” mechanical life forms that will improve and take over.

and

Wide AI, while still just automation, is a genuine advance Part 2: Truly general intelligence is still a mystery. In fact, it’s more mysterious now than it was in 2016. Even when we get a quantum leap forward, it quickly signals that it too is a dead end for the bolder ambitions of true AGI.


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.
