Earlier this month, prominent science writer John Horgan posted entries at his blog Cross-Check (here and here), defending free will in light of Robert Sapolsky’s recent no-free-will book, Determined.
But two weeks later, he suddenly began to air doubts about free will because of concerns about… ChatGPT:
I’ve aggressively defended free will lately. Writing, I argue here and here, exemplifies the conscious deliberations and decisions that constitute free will. But in the dead of night, hell, in the cold glare of morning, I fear I have no more free will than a mindless machine-learning program like ChatGPT. …
What am I but a program that reflexively turns prompts into all-too-predictable responses based on my prior experiences? My program is grounded in brain cells rather than silicon wafers, but so what? A hockey stick is a hockey stick whether it’s made of wood or aluminum.
The more I dwell on the analogy between me and ChatGPT, the more compelling it becomes. My brain, or mind (and what is the difference, really?), is a program that generates columns in the style of John Horgan. Call it ChatGPT-Me. This program isn’t really intelligent, let alone creative. And free will? Forget about it.
John Horgan, “Free Will and ChatGPT-Me,” Cross-Check, November 16, 2023
Origin of doubt
These concerns derive from a talk with a programmer that Horgan organized at New Jersey’s Stevens Institute of Technology. The talk was given by one of our Mind Matters News writers, Erik J. Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Harvard University Press, 2021).
One question raised — about which Horgan later wrote — was whether an advanced computer needs the input of huge amounts of human knowledge to provide useful and rational responses:
You still need the fancy-schmancy knowledge, Larson contends. If the goal of AI research is to create an actual “superintelligence” with broad, common-sense knowledge, ChatGPT probably represents a dead end. “It’s not going to be a path forward in the way that [Open AI CEO] Sam Altman and other people are talking about.”
Fed certain questions, ChatGPT “comes up with a completely ridiculous answer.” If you ask it to explain its answer, ChatGPT “confabulates, it literally lies to you.” This flaw is “fatal,” Larson says. You cannot entrust ChatGPT with crucial tasks, because you cannot predict in advance when it will fail. “There is no principled way to say beforehand, ‘Hey, we’re going to have a problem if we ask it these types of questions.’”…
Sam Altman, the Open AI CEO, claims ChatGPT can help solve urgent social problems, such as poverty. But ChatGPT is far too unreliable to fulfill this grand purpose, Larson asserts. ChatGPT is unreliable because it lacks a world model, background knowledge, common sense. “There’s nothing that it can fall back on and say, ‘Wait a minute, that sounds a little wacky.’”
John Horgan, “Cutting Through the ChatGPT Hype,” Cross-Check, October 13, 2023
That sounds somewhat like the ominous model collapse problem; the chatbot, lacking fresh resources, simply reprocesses information into garble. Here’s the Stevens discussion between Horgan and Larson:
But now, how do we get from there to ChatGPT-Me and no free will? Horgan abruptly segues to reminding us that he accepts a Darwinian evolution explanation of our moral attitudes and feelings:
If you see a girl maimed by a bomb, you feel compelled to get her to a hospital—or, in my case, to write a column expressing pity for her and outrage toward the bombers. You don’t consciously calculate that your compassion and anger will be rewarded in ways that boost your chances of reproductive success. Natural selection has, in effect, made that calculation for you over millennia of evolution, and that’s why it predisposed you to feel pity and outrage in certain situations.
Horgan, “Free Will and ChatGPT-Me”
But why credit Darwinian natural selection, best known for “nature red in tooth and claw,” with creating either compassion or a sense of justice? Any claim that either quality reliably helped our remote ancestors reproduce is special pleading about countless unknown situations by an already convinced Darwinist.
Horgan then wonders if his self-consciousness is simply a form of self-delusion and ends with “I don’t know who is writing this sentence.”
Does Darwinism always do this to people?
Horgan’s concerns really have nothing to do with ChatGPT. Left to itself, a chatbot can take some bureaucratic piffle and generate reams of quite literally mindless nonsense about jackrabbits.
The human mind, as noted earlier, has no history. Technology has a history, of course. But the minds that developed it are simply discovered, already fully formed, via cave paintings, suggestive burials, or attempts to create perfect stone orbs 1.3 million years ago. Thus, all attempts to account for qualities like abstract thinking and free will via Darwinian evolution end in nonsense. We may as well try to understand Ramanujan or Michelangelo through their family histories. Yes, we can learn some useful things from family histories, but what we most want to know about just isn’t there and can’t be there.
One thing Horgan is demonstrating is that passionate belief in the Darwinian evolution of the human mind exacts a heavy price.
You may also wish to read:
The free will debate really heated up this year. Many commentators are weighing in; surprisingly, perhaps, well-known materialists are disputing the claim that there is no free will. Given that both Pinker and Horgan are Darwinian materialists, their coldness toward the idea that there is no free will is worth keeping an eye on.