AGI, the Halting Problem and the Human Mind
Pat Flynn continues the conversation with Dr. Winston Ewert

In the most recent episode of our Mind Matters podcast, Pat Flynn engaged in a thought-provoking discussion with Dr. Winston Ewert about the nature of human cognition and artificial intelligence, drawing on Ewert's contribution to the book Minding the Brain (Discovery Institute Press, 2023).
Their conversation focused on Ewert’s argument that human cognition, while computational in nature, cannot be reduced to mere algorithms in the way that some materialist perspectives suggest.
Ewert's central claim rests on the halting problem, which Turing proved cannot be solved: no algorithm can universally determine whether an arbitrary algorithm will halt or run forever. Ewert extends this result to argue that an algorithm can only create an algorithm less sophisticated than itself.
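For readers unfamiliar with the result, the following is a minimal sketch of the standard diagonal argument behind the halting problem. It is purely illustrative and not drawn from the podcast or from Ewert's chapter; the function names halts and diagonal are hypothetical, and halts stands in for the universal decider that Turing showed cannot exist.

```python
# Illustrative sketch of Turing's diagonal argument (not from the podcast).
# 'halts' is the hypothetical universal decider; no such total, always-correct
# function can exist, which is exactly what the contradiction below shows.

def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts."""
    raise NotImplementedError("No such universal decider exists (Turing, 1936)")

def diagonal(program):
    """If 'halts' existed, running diagonal on itself would contradict it."""
    if halts(program, program):   # Would the program halt when run on itself?
        while True:               # ...then loop forever instead,
            pass
    return "halted"               # ...otherwise halt immediately.

# Consider diagonal(diagonal):
# - if halts(diagonal, diagonal) returns True, diagonal(diagonal) loops forever;
# - if it returns False, diagonal(diagonal) halts.
# Either way the oracle is wrong, so no universal halting decider can exist.
```

The same self-reference trick is what Ewert leans on when he argues that an algorithm cannot fully anticipate, and therefore cannot deliberately construct, something more sophisticated than itself.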
This has significant implications for AI research and for the idea of a technological Singularity, the point at which artificial intelligence would continuously improve itself beyond human intelligence. According to Ewert, the Singularity is not possible because an AI cannot create an intelligence greater than itself.
In this week's episode, Flynn and Ewert explored why, even though the halting problem is well known in computer science, many researchers remain optimistic about AI surpassing human intelligence. Ewert acknowledged that while the mathematical proof is widely accepted, his interpretation, that an algorithm can only generate lesser intelligences, is a novel conclusion. He referenced Roger Penrose's similar argument but noted that Penrose concluded that human cognition must be non-computational, whereas Ewert maintains that it is computational but subject to inherent limitations.
The implications of Ewert’s argument
The discussion then shifted to the implications of Ewert's argument for evolutionary theories of intelligence. If an algorithm can only create lesser intelligence, then human intelligence cannot be the product of a purely algorithmic process; it must ultimately trace back to a greater intelligence. But this raises a deeper question: where does that intelligence originate? Ewert suggested that such an intelligence must transcend algorithmic processes, a notion that aligns with classical theistic perspectives.
Regarding AI advances, Flynn asked whether recent developments, particularly in language models like ChatGPT, have challenged Ewert’s framework. Ewert acknowledged being impressed by the progress in AI but maintained that these advancements align with his predictions: AI will continue to grow more sophisticated but will not surpass human intelligence. The fundamental limitations he outlined remain intact.
Ewert's argument challenges both materialist and overly optimistic AI perspectives while offering a thought-provoking view of the nature of intelligence and its origins.