Just When Human Reason Is Most Productive — AI Makes Things Up
In Part 2, we see how the ability to handle only one type of truth limits AI. AI models are fundamentally untethered from reality.

Psychologist Martin Seligman described chatbot Ask Martin’s confabulation about his involvement with the Reeve Foundation (discussed in Part 1) as a “deep falsehood.” It’s most certainly that. But it’s also a revealing kind of error.
As I put it in a recent Substack Note:
This got me thinking about a deeper asymmetry between AI and human cognition. When humans venture past the boundaries of established knowledge, we’re seeking truth — often redefining prior knowledge in the process. When an LLM ventures past its training data, it’s producing what seems most plausible within the logic of its existing corpus. In one case, the goal is correspondence with reality; in the other, it’s coherence with prior patterns. That’s a crucial distinction.
In fact, AI’s probability of confabulation rises precisely at the point where human cognition is most productive — when we step beyond what’s been established to discover or innovate.
A physicist proposing relativity in a Newtonian world isn’t seeking coherence with prior knowledge but rather redefining the correspondence links themselves. But when an LLM is pushed into unknown territory, its best move is to produce something that seems right rather than something that is right. The gap between those two — between human and machine ways of knowing (epistemics) — is what I’m thinking of as the correspondence horizon.
Where AI ends and human creativity begins
There’s a deep link between the past and the future. Nearly everything we do — learning, reasoning, decision-making — is, in some sense, an act of conjuring the past. We draw upon knowledge, experience, facts, and figures and use them to make sense of the present and anticipate what’s next.

LLMs are supreme conjurers of the past: they generalize from their training data to produce plausible continuations. What makes them powerful is that they don’t simply parrot back the sentences they’ve seen; rather, they generate novel combinations of words that are statistically derived from their training data.
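To make that mechanism concrete, here is a minimal sketch (assuming the Hugging Face transformers library and the publicly available GPT-2 checkpoint, neither of which is mentioned above) of how a language model ranks possible next words purely by probability, with no check against the world:

```python
# Minimal sketch: a language model scores possible continuations by how well
# they fit the statistical patterns of its training corpus, not by whether
# they are true. (Assumes the Hugging Face `transformers` library and GPT-2.)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Martin Seligman is best known for his work on"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)        # convert scores to probabilities

# The top-ranked continuations are the most *plausible* ones given prior text,
# whether or not they correspond to anything true about the world.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r:>12}  p={p.item():.3f}")
```

The point of the sketch is simply that the ranking is driven by pattern fit; nothing in the loop consults reality.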
Outspoken linguist Emily Bender, in her famous critique of language models as “stochastic parrots,” was right to highlight their limitations — but she went too far. They are not mere parrots; they are statistical synthesizers, capable of producing text that appears fluid and intelligent. This is one reason Seligman was rightly impressed: whatever problem you bring to the model, someone (perhaps Seligman himself) has likely written or spoken about it before. If it’s in the training data of a foundation model, the system may well generate sound advice in response to an inquiry.
Untethered from reality
Yet, for all their fluency, LLMs are not truth-seeking mechanisms. This isn’t just because they are probabilistic — probabilities and truth are distinct concepts — but because they are fundamentally untethered from reality. They do not perceive, experiment, or refine their understanding in response to new evidence. Their outputs are constrained to the internal logic of language, rather than the external logic of the world.
A purely linguistic system is necessarily detached from direct experience — words are placeholders for things, not the things themselves. Such a system swims in words that relate to other words through webs of probability gleaned from training.
AI lacks correspondence in its epistemic toolkit. When it crosses the correspondence horizon, it produces plausible bullshit. And this is the key to understanding the conditions and the nature of hallucinations.
I should say here too that this is why human coherentists (of the philosophical kind) struggle to explain Kuhnian scientific revolutions: a paradigm shift often requires breaking free from prior coherence altogether, fitting new observations to an emerging, radically different conceptual structure. It is also why the correspondence theory of truth — the idea that a belief is true because it matches how the world actually is — can’t be dismissed, despite decades, indeed centuries, of positivist philosophers wishing it could be.
I found Seligman’s discussion here productive and refreshing — sometimes what we need isn’t more computer scientists but more cross-disciplinary thinking. I suspect we’ll find that teaming up with AI enhances creative output just as he suggests — perhaps two is indeed better than one. But the caveat here is important. The AI knows more, a lot more, but it sees less, a lot less.
Rather than treating hallucinations as random glitches, we should recognize them as the system wandering out of coherence — the crossing of a correspondence horizon. This is the point where AI’s generalization ability reaches its limit. It is also, crucially, the point where human intelligence shines brightest.
I find this line of thought fascinating. We cross this line and open up new vistas of discovery. AI, by contrast, crosses it and ramps up the bullshit.
That distinction, more than anything else, should shape how we think about the future of AI and humanity.
Here’s the first part of this two-part series: Why AI breaks down where human creativity begins. Part 1: AI can handle statements that are internally coherent, but that is not the same thing as correspondence with reality. In short, philosophers distinguish between two fundamental theories of truth, correspondence and coherence, and AI does only coherence.