Recently, design theorist William Dembski wrote a long essay on artificial general intelligence at his site, billdembski.com. The article is also available as a series of shorter pieces at Evolution News. Last week, we offered some highlights here. Here are highlights from two segments published since then:
Kurzweil is a technophile in that he regards building and inventing technology, and above all machines, as the greatest thing humans do. But he’s also a technobigot in that he regards people of the past, who operated with minimal technology, as vastly inferior and less intelligent than we are. He ignores how much such people were able to accomplish through sheer ingenuity given how little they had to work with. He thus minimizes the genius of a Homer, the exploration of the Pacific by South Sea Islanders, or the knowledge of herbs and roots of indigenous peoples captured in oral traditions, etc.

William A. Dembski, “Artificial General Intelligence: Machines vs. Organisms,” Evolution News, February 2, 2024
By way of refutation, he references and quotes biochemist Michael Denton, who writes about even comparatively simple living systems:
Living things possess abilities that are still without any significant analogue in any machine which has yet been constructed. These abilities have been seen since classical times as indicative of a fundamental division between the [organismal] and mechanical modes of being.
To begin with, every living system replicates itself, yet no machine possesses this capacity even to the slightest degree… Every second countless trillions of living systems from bacterial cells to elephants replicate themselves on the surface of our planet. And since life’s origin, endless life forms have effortlessly copied themselves on unimaginable numbers of occasions … (from his essay “Organism and Machine” in Jay Richards, ed., Are We Spiritual Machines: Ray Kurzweil vs. The Critics of Strong A.I., 2001)

Dembski, “Machines vs. Organisms”
Neuroscientific research further complicates the picture. The brain is increasingly showing itself to be not just a complex information processor but an organ characterized by endogenous activity — spontaneous, internally driven behaviors independent of external stimuli. This perspective portrays the brain as an active seeker of information, as is intrinsic to organic systems. Such spontaneous behavior, found across all of life, from cells to entire organisms, raises doubts about the capacity of machines to produce these intricate, self-directed processes.

Dembski, “Machines vs. Organisms”
In the sixth essay in the series, “Artificial General Intelligence: The Oracle Problem,” Dembski notes that “Our most advanced artificial intelligence systems, which I’m writing about in this series, require input of external information to keep them from collapsing.”
Unlike the Aristotelian God, LLMs do not thrive without human oracles continually feeding them novel information. There are sound mathematical reasons for this. The neural networks that are the basis for LLMs reside in finite dimensional vector subspaces. Everything in these spaces can therefore be expressed as a linear combination of finitely many basis vectors. In fact, they are simplexes and the linear combinations are convex, implying convergence to a center of mass, a point of mediocrity. When neural networks output anything, they are thus outputting what’s inherent in these predetermined subspaces. In consequence, they can’t output anything fundamentally new. Worse yet, as they populate their memory with their own productions and thereafter try to learn by teaching themselves, they essentially engage in an act of self-cannibalism. In the end, these systems go bankrupt because intelligence by its nature requires novel insights and creativity, which is to say, an oracle.
Research backs up this claim that LLMs run aground in the absence of oracular intervention, and specifically external information added by humans.

William A. Dembski, “Artificial General Intelligence: The Oracle Problem,” Evolution News, February 6, 2024
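The geometric claim in the passage above — that convex combinations of existing data can only drift toward a center of mass — can be illustrated with a toy simulation. This is purely an illustrative sketch, not Dembski’s own mathematics or any actual LLM training procedure; the function name and parameters are hypothetical. It repeatedly replaces each data point with a random convex combination of other points, mimicking a system retraining on its own outputs, and shows the spread of the data collapsing while every point stays inside the original convex hull.

```python
import random

def convex_resample(points, k=3, rounds=30, seed=0):
    """Replace each point with a random convex combination of k others,
    repeated for several rounds.

    Illustrative only: convex weights (nonnegative, summing to 1) keep every
    new point inside the simplex spanned by the data, so iterated
    self-sampling contracts toward the center of mass.
    """
    rng = random.Random(seed)
    for _ in range(rounds):
        new = []
        for _ in points:
            picks = rng.sample(points, k)
            weights = [rng.random() for _ in range(k)]
            total = sum(weights)
            weights = [w / total for w in weights]  # normalize: convex weights
            new.append(sum(w * p for w, p in zip(weights, picks)))
        points = new
    return points

data = [0.0, 1.0, 2.0, 3.0, 4.0]
out = convex_resample(data)
spread = max(out) - min(out)
print(spread)  # far smaller than the original spread of 4.0
```

The spread shrinks toward zero while no point ever escapes the interval [0, 4]: nothing "fundamentally new" (outside the original hull) is ever produced, which is the collapse behavior the quoted passage describes.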
His next and final essay in this series will be on how to destroy the idol that AGI has become.
Here are all the highlights from the series:
Dembski: Does the squawk around AI sound like the Tower of Babel? Well then, maybe that’s just what it is. He sees the breathless and implausible claims for computers that think like people as the modern equivalent of ancient idols. Here are some highlights.
Human intelligence is fundamentally different from machine intelligence. Dembski discusses the problems we will encounter when we try to integrate the two when, say, sharing the road with self-driving cars. He also touches on Ray Kurzweil’s quest for digital immortality and how it falls short of the original quest and its religious expressions.
William Dembski: When is transhumanism a form of technobigotry? In his further essays in the current series, he explains why AI cannot avoid collapse without the input of novel information from humans. AI systems alone go bankrupt, Dembski argues, because intelligence by nature requires novel insights and creativity, which is to say, an oracle from outside.
William Dembski: Destroy the AI idol before it destroys us. Design theorist Dembski points to the way that chess adapted to computers to become better than ever as a way forward in the age of AI. He warns that the promoters of AI as “taking over” have a vested interest in claims that keep them at the top of society’s intellectual and social order.