
Podcast: The Challenge of Proving Creativity in AI

AI researcher Mappouras discusses the limitations of the Lovelace test for AI creativity with host Robert Marks

In a recent episode of the Mind Matters podcast, host Robert J. Marks sat down with Dr. Georgios Mappouras to discuss his new paper, The Turing Test 2.0: The General Intelligence Threshold. In it, Mappouras proposes a modern update to the Turing Test, Alan Turing's classic 1950 test for machine intelligence. His goal is a more precise way to determine whether an artificial intelligence (AI) system can truly think, and, more importantly, whether it can be creative.

Turing’s original test judged intelligence by competence in conversation: if a human could not tell whether a conversation partner was a person or a machine, the AI would “pass.” But as language models like ChatGPT grow more sophisticated, the line between imitation and genuine thought becomes blurry. Mappouras’s Turing Test 2.0 aims to go deeper by focusing on functionality and creativity, not just successful imitation.

The Lovelace Test and the problem of surprise

In the third part of the conversation, Marks compared Mappouras’s proposal to the Lovelace Test, introduced by cognitive scientist Selmer Bringsjord. The Lovelace Test says that a computer is creative only if its output cannot be explained by its programming — in other words, if it produces something its programmer did not intend or foresee.

But as Marks and Mappouras discussed, this approach raises serious difficulties. To prove that an AI’s output wasn’t hidden somewhere in its training data, researchers would need to check every source the model learned from. Doing so would be a monumental task. Mappouras also sees a contradiction in the Lovelace Test: if a designer intentionally builds an AI to be creative, then surprise cannot be the measure of success. The designer shouldn’t be shocked when it works as planned.

Marks agreed that surprise and creativity are not the same. A program can surprise its maker simply because its inner workings are complex, not because it has original thought. True creativity, both agreed, involves producing new functionality — something genuinely new that wasn’t derivable from what already existed.

The flash of genius

Both scholars connected creativity to what Marks calls a “flash of genius.” A sudden moment of insight — like Isaac Newton seeing the apple fall and conceiving the law of gravity — is central to human invention. Mappouras's Turing Test 2.0 seeks measurable ways to detect whether an AI could ever experience something like that.

In humans, these flashes often emerge without clear cause. People sometimes solve problems in entirely new ways that they can’t fully explain. Mappouras’s test, by contrast, examines whether an AI can take known information and produce a new kind of function — something it was never explicitly trained to do.  

Can AI truly be creative?

Marks asked whether it’s even possible to design an AI to be creative. Mappouras admitted he doesn’t see a path to it but leaves the question open. He wants to avoid defining intelligence in a way that automatically rules out the possibility. If creativity could exist in machines, he wants a test that could recognize it.

He uses examples from sports and art to illustrate how creativity appears in living systems. The evolution of basketball strategy — from the physical style of the 1960s to today’s emphasis on long-range three-point shooting — shows how new functionality emerges over time. Similarly, human art continually reinvents itself through new forms and styles. Animals, by contrast, rarely build upon what they’ve learned. A dog can learn tricks but doesn’t invent new ones. For Mappouras, this difference marks the boundary between adaptation and genuine creativity.

Why human minds still stand apart

Marks and Mappouras agreed that AI lacks the creative dimension humans possess. Marks marveled that everything humans have built — from cars to computers — was made from materials that have existed alongside us on Earth for thousands of years. The difference is our ability to recognize new uses and relationships among those materials. Creativity, he argued, is about seeing new possibilities, not just processing data.

This ability might even point to something beyond the physical brain. Marks noted that while scientists can grow human organs, even brain tissue, from cells, such biological material would not have a mind or a soul. The human intellect, he suggested, may transcend pure computation, a view shared by Nobel laureate physicist Roger Penrose, who argues that some forms of human understanding are non-algorithmic and therefore cannot be captured by any computer program.

Algorithmic limits and the nature of intelligence

Mappouras emphasized that every algorithm — including those running modern AI — proceeds through a finite sequence of steps and terminates with a determinate result. If creativity is algorithmic, then given enough time, machines should eventually achieve it. But if creativity is non-algorithmic, it lies forever beyond their reach. That is why his test aims to define precisely what creativity and general intelligence mean, so that researchers can determine whether they are even achievable in principle.

Marks connected this to famous results in mathematics, such as Alan Turing's halting problem and Gregory Chaitin's unknowable number (the uncomputable constant Omega), both of which show that there are truths no algorithm can ever compute. If creativity involves such uncomputable insights, then AI will never fully replicate it.
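Turing's argument can be sketched in a few lines of code. The snippet below is purely illustrative (it is not from the paper or the podcast) and assumes a hypothetical function halts() that could decide whether any program halts on a given input; the construction shows why no such function can exist.

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical oracle;
# the whole point of the construction is that it can never actually be written.

def halts(program, argument):
    """Pretend oracle: True if program(argument) would eventually halt."""
    raise NotImplementedError("Turing proved no general halting decider exists")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:        # oracle says it halts, so loop forever
            pass
    return                 # oracle says it never halts, so halt immediately

# Asking the oracle about paradox(paradox) traps it: if it answers "halts,"
# paradox loops forever; if it answers "never halts," paradox halts at once.
# Either answer is wrong, so no correct halts() can be programmed.
```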

The risk of false prophets

In closing, Mappouras warned about the “false prophet” problem — society's tendency to trust authority figures uncritically. In earlier eras, people believed whatever the news anchor or the scientist said; today, we risk believing whatever AI says. If we start feeding AI's own generated output back into its training, we create a feedback loop of imitation, not innovation, a phenomenon researchers already call model collapse. The result would be intellectual stagnation rather than discovery.
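The dynamic is easy to demonstrate with a toy simulation. The sketch below is only an illustration (nothing from the podcast): a tiny “model” is refit, generation after generation, solely to samples of its own output. Any idea that fails to show up in a sample gets probability zero and can never return.

```python
# Toy illustration of model collapse: each generation is "trained" only on a
# finite sample of the previous generation's output, so any idea missing from
# that sample drops to probability zero and can never come back.
import random
from collections import Counter

random.seed(1)
vocab = list("abcdefghij")            # ten distinct "ideas" in the original data
weights = [1.0] * len(vocab)          # generation 0 covers all of them equally

for generation in range(8):
    sample = random.choices(vocab, weights=weights, k=15)   # small training set
    counts = Counter(sample)
    surviving = sorted(counts)
    print(f"generation {generation}: {len(surviving)} ideas survive: {''.join(surviving)}")
    weights = [counts.get(idea, 0) for idea in vocab]        # refit to own output
```

Because nothing outside the model's own echoes ever re-enters the loop, the count of surviving ideas can only fall, which mirrors the stagnation Mappouras warns about.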

Keeping an open but critical mind

While machines can mimic human reasoning with astonishing speed, genuine creativity — the ability to generate truly new ideas — remains uniquely human. Until an AI passes Mappouras’s Turing Test 2.0, we should remember that these systems can only reflect what humanity already knows. For now, the spark of creative genius still belongs to the human mind.

Additional Resources

Podcast Transcript Download

