Glowing Light Bulb in the Dark
Image Credit: MrHamster - Adobe Stock

Why AI Breaks Down Where Human Creativity Begins

Part 1: AI can handle statements that are internally coherent, but that is not the same thing as correspondence with reality

I recently read an interesting chapter in a forthcoming book by Martin Seligman. I was not familiar with Seligman’s work, though I knew his name; he is a well-known figure in positive psychology and quite prolific. According to his Wikipedia page, he has written over 30 books and some 350 papers.

He and I are both participating in a discussion group, and in our last session, he discussed at length his latest book on human agency. One chapter, “Beyond Human Agency,” takes up the question of AI: can it help counsel and coach people? Can it write compelling narratives about patients and “get to know” them (in some loose sense) from their journals and discussions? Can it be creative, that is, solve problems creatively or work with therapists to help patients solve their problems?

I should mention that therapy, per se, is not of much intellectual interest to me. But I found myself drawn to his approach to empowerment psychology. His book is, after all, about enhancing one’s agency to live a fuller, better life. So when he offered to let me read and comment on his AI chapter, I gladly accepted. I’ve now read it and want to pass along some thoughts on AI, creativity, and the confabulation problem.

Talking to the Dead: AI, Memory, and Creativity

Seligman opens the chapter with thoughts on agency and its natural decline as we age (or fall ill or lose close friends and mentors). He then asks whether AI might step in to help us out. Given the conversational nature of large language models, he wonders whether a machine could be a better therapist than he is.

Now 82, he reflects on how much of his own work — and that of others — he remembers only in broad strokes. He also notes that many of his closest intellectual companions have passed away.

One of them was Tim Beck, Seligman’s onetime mentor and a close friend over the years, who lived to be 100. Against this backdrop, he raises the AI question: What if he could keep talking to Tim? He’d like to ask, for instance, whether Beck believes coaching is destined to become more popular than psychotherapy (isn’t it already?) and how much clinical training coaches should have.

That leads him to talk about the chatbot AskMartin, modeled on his own writings. It is much like Joseph Weizenbaum’s ELIZA, except that instead of the passive-aggressive Rogerian therapist routine, AskMartin is fine-tuned to provide positive psychology insights. Trained on Seligman’s voluminous books, articles, and speeches, it provides generally solid advice. And because it is built on a foundation model, it performs well enough that those who use it feel like they’re engaging with something meaningful.
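For readers who never met ELIZA: it worked by shallow keyword matching and canned templates, with no model of the user at all. Here is a minimal sketch of that style; the two rules and their replies are my own invention for illustration, not Weizenbaum’s actual DOCTOR script.

```python
import re

# Toy ELIZA-style responder. Each rule pairs a keyword pattern with a canned
# Rogerian template; these two rules are invented for illustration and are
# far cruder than Weizenbaum's actual script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # stock reply when nothing matches

print(eliza_reply("I feel anxious about my recovery"))
# -> "Why do you feel anxious about my recovery?"
```

Nothing here understands anything; the program just echoes fragments back. A fine-tuned foundation model like AskMartin is vastly more fluent, but, as we will see, fluency is not the same as knowledge.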

This raises a deeper question: if AskMartin can coach and console people, why not extend the idea to a broader set of lost voices? Why not revive the ideas of deceased mentors and colleagues, preserving their intellectual presence beyond the grave? Could AI extend the agency of the departed by continuing their conversations?

Creativity: One or two?

Seligman then shifts the discussion toward creativity. He poses a question that I’d never quite thought of in these terms: Is creativity a 1 or a 2? Is it the lone genius — Michelangelo sculpting David, Einstein working through relativity? Or is it more often a pair — Lennon and McCartney, the Wright brothers?

This sets up the AI question: if creativity flourishes in twos, could we form a creative pair with an AI? Instead of the AI replacing human agency, might it augment it — helping us create in ways we wouldn’t have otherwise? It’s a compelling idea.

But we have to balance the enthusiasm of positive psychology, with its sense that the future is wide open (perhaps more so when we team up with knowledge-packed AI), against an obvious limitation of the technology that many of us on Planet Earth are now privy to: confabulation. Or, if you prefer, hallucination.

The hallucination problem: What happens when the AI doesn’t know?

As AI-generated advice and AI-assisted creativity become more common, so does a familiar issue: hallucination. Large language models, as Steven Pinker bluntly puts it, are “sophisticated bullshitters.” They don’t know anything; they just predict sequences of words that are statistically likely given their training data. Seemingly by the hour, a cacophony of voices on social media reminds us not of the creativity but of the idiocy of LLMs. Got it.
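To see what “predicting statistically likely words” means in practice, consider a toy bigram model, the crudest possible ancestor of an LLM. The corpus and code below are invented for illustration; real models use transformers over billions of documents, but the training objective is similar in spirit.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: tally which word follows which in a tiny corpus,
# then always emit the statistically likeliest continuation.
corpus = "the cat is on the mat the cat is happy the dog is on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, steps: int = 5) -> str:
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # -> "the cat is on the cat"
```

Note the output: perfectly fluent, confidently wrong about any actual cat. Scale the same objective up enormously and you have the makings of the hallucination problem.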

This issue came up in an unexpected way when AskMartin was advising someone who had broken her back and was seeking psychological guidance. “In the middle of sound advice about the positive effects of optimism and how to regain it,” notes Seligman, the AI mentioned that he, Seligman, had worked with the Reeve Foundation. The foundation was co-founded by Christopher Reeve (1952–2004), the actor who played Superman in 1978 and who broke his back in a 1995 horseback riding accident.

“This was false,” Seligman writes. “I have never worked with them and indeed I did not know about them until I saw AskMartin’s post.” Seligman calls this a “deep falsehood.” He notes that we don’t really know how the bots work and that even their designers are hard-pressed to explain why they succeed when they succeed and fail when they fail. All true.

Enter philosophy.

Correspondence and coherence: Two paths to truth

Image: Decision making and choosing between two paths with different outcomes (generative AI; Image Credit: Artur/Adobe Stock)

Philosophers distinguish between two fundamental theories of truth: correspondence and coherence. Under the correspondence theory, a statement is true if it accurately reflects reality: “The cat is on the mat” is true if, in fact, the cat is on the mat. The coherence theory, by contrast, holds that a statement is true if it fits within a system of other accepted truths. For example, “I am the only son of my parents” is true under coherence if it aligns with other known facts: I have no brothers, I’m not adopted, and so on.

In both cases, the question is the same: Under what conditions is the proposition ‘p’ true? But the answers are radically different. The correspondence theory demands external verification — truth is discovered by checking against the world itself. The coherence theory, on the other hand, stays internal — truth emerges from consistency with prior knowledge. One seeks reality; the other seeks logical harmony.
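A toy sketch may make the difference vivid. Everything below, the miniature “world,” the belief set, and the crude consistency test, is invented for illustration; the point is only that a claim can pass a coherence-style test while flunking a correspondence-style one, exactly the shape of the Reeve Foundation error above.

```python
# Claims are (subject, relation, object) triples. The "world" and the belief
# set below are invented stand-ins for illustration.

# Correspondence: a claim is true only if the world itself contains it.
WORLD = {
    ("cat", "on", "mat"),
}

def true_by_correspondence(claim):
    return claim in WORLD  # verified against external reality

# Coherence: a claim passes if it fits what is already accepted, with no
# look at the world. Here "fits" is crudely approximated as "mentions only
# entities the belief set already knows about."
BELIEFS = {
    ("seligman", "is", "psychologist"),
    ("reeve_foundation", "funds", "spinal_cord_research"),
}

def coheres_with_beliefs(claim):
    known = {s for s, _, _ in BELIEFS} | {o for _, _, o in BELIEFS}
    subj, _, obj = claim
    return subj in known and obj in known

claim = ("seligman", "worked_with", "reeve_foundation")
print(true_by_correspondence(claim))  # False: no such fact in the world
print(coheres_with_beliefs(claim))    # True: it fits what is already "known"
```

The crude “fits” test is, of course, a placeholder; real coherence reasoning checks logical consistency, not mere familiarity. But the asymmetry it exposes is the one that matters here.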

This philosophical distinction has been debated in academic circles for centuries, but Seligman instinctively grasped its relevance to AI. When he asked Anthropic’s Claude which of these theories better describes how it “knows” things, it responded — quite plausibly — that it operates on coherence. This is unsurprising: a large language model (LLM) exists entirely in cyberspace, disconnected from the world. It cannot directly verify anything against external reality; it can only produce statements that fit within the linguistic and statistical patterns it has absorbed.

This is already interesting, but the implications are downright fascinating.

Next: Just when human reason is most productive — AI makes things up


Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.