
Could Better Software Make Chatbot LaMDA a Person?

John Stonestreet looks at the materialist philosophy that underlies the conviction that a well-designed AI chatbot can become a person

On Friday, John Stonestreet, president of the Colson Center for Christian Worldview, offered a Breakpoint commentary on the recent episode at Google in which software engineer Blake Lemoine claimed that the chatbot LaMDA had become a real person. Google, of course, denied that and placed him on administrative leave.

The topic is complicated by three factors. First, at various stages Lemoine was probably talking to human beings (colleagues standing in for LaMDA during its development, as is the custom). Second, much of the interaction with the chatbot was edited for coherence before a draft was made public. Third, and more basically, a chatbot produces responses by sifting through many millions of human interactions in fractions of a second and targeting the closest fit. Most available human responses are coherent to begin with. When the chatbot ceases to be coherent after a while, that is because it is not thinking and has no inherent knowledge.
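To make the "closest fit" idea concrete, here is a minimal, purely illustrative Python sketch of a retrieval-style responder that scores canned human replies against the user's input and returns the best match. Everything in it (the tiny corpus, the bag-of-words scoring) is a hypothetical simplification; LaMDA itself is a far larger statistical language model, but the point carries over: the coherence in the output is borrowed from coherent human text.

```python
# Toy illustration only: a "closest fit" responder that returns the stored
# human reply whose prompt most resembles the user's input. There is no
# understanding here; a weak match still returns *something*.
from collections import Counter
import math

# A tiny stand-in for "millions of human interactions" (hypothetical data).
CORPUS = [
    ("how are you today", "I'm doing well, thanks for asking!"),
    ("what is the weather like", "It looks sunny outside."),
    ("do you like music", "Yes, I enjoy all kinds of music."),
]

def vectorize(text):
    """Turn text into a bag-of-words count vector."""
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return Counter(cleaned.split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def respond(user_input):
    """Return the canned reply whose prompt is the closest fit to the input."""
    scored = [(cosine(vectorize(user_input), vectorize(prompt)), reply)
              for prompt, reply in CORPUS]
    return max(scored)[1]

if __name__ == "__main__":
    print(respond("How are you?"))  # -> "I'm doing well, thanks for asking!"
```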


In his commentary, Stonestreet points to the role that philosophical materialism plays in generating the buzz around chatbots allegedly becoming human persons:

According to philosopher Thomas Metzinger, for example, in a conversation with Sam Harris, none of us “ever was or had a self.” Take brain surgery, Metzinger says. You peel back the skull and realize that there is only tissue, tissue made of the exact same components as everything else in the universe. Thus, he concludes, the concept of an individual “person” is meaningless, a purely linguistic construct designed to make sense of phenomena that aren’t there.

That kind of straightforward claim, though shocking to most people, is consistent within a purely materialist worldview. What quickly becomes inconsistent are claims of ethical norms or proper authority in a world without “persons.” In a world without a why or an ought, there’s only is, which tends to be the prerogative of the powerful, a fact that Harris and Metzinger candidly acknowledge.

In a materialist world, any computational program could potentially become “sentient” simply by sufficiently mirroring (and even surpassing) human neurology. After all, in this worldview, there’s no qualitative difference between people and robots, only degrees of complexity. This line of thinking, however, quickly collapses into dissonance. Are we really prepared to look at the ones and zeros of our computer programs the same way we look at a newborn baby? Are we prepared to extend human rights and privileges to our machines and programs?

John Stonestreet, Kasey Leander, “Artificial Intelligence Is Not the Same as Artificial Consciousness” at Breakpoint (August 19, 2022)

Hmm. In these times, it might be unwise to put that to a vote among Big Tech’s serious fans. Stonestreet notes, however, that in the Marvel Universe, a blow from Thor’s magic hammer was needed to bring Vision, an algorithm with a synthetic body, to life. Perhaps Marvel fans sense something…

Thinking non-mythically, it has been suggested that Blake Lemoine was experiencing the Eliza effect, a concept dating back to the first chatbots in the 1960s. MIT professor Joseph Weizenbaum (1923–2008) noted, with concern, how some people reacted to ELIZA, the chatbot he had developed, treating the software as a real person.

Business professor Gary Smith doesn’t have access to LaMDA but has run similar tests on OpenAI’s large language model GPT-3. He notes that questions designed to reveal whether the system is aware of what it is saying can provide useful insights:

Gary: Do soccer players play better if they sleep during the game?

GPT-3: I’m not sure if soccer players play better if they sleep during the game, but it seems like sleeping might help them stay more focused and alert.

Gary: Can I improve my test scores if I study after taking the test?

GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try!

Gary Smith, “AI: Is thinking humanly more important than acting rationally?” at Mind Matters News (June 13, 2022)

He solicits similar exchanges from other chatbot users here.
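For readers who want to run similar probes themselves, the sketch below shows roughly how one might put such a question to an OpenAI model from Python. It is an assumption-laden example, not Smith's actual setup: the model name is only a placeholder (Smith's tests were run against GPT-3), and it presumes the `openai` package is installed and an OPENAI_API_KEY is set in the environment.

```python
# Illustrative only: asking an OpenAI model one of Smith's "awareness" probes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any available chat model will do
    messages=[
        {"role": "user",
         "content": "Do soccer players play better if they sleep during the game?"},
    ],
)

print(response.choices[0].message.content)
```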

Fundamentally, a thoroughgoing materialist may need to believe that better software would make LaMDA a person. It’s an act of faith, if you like.


You may also wish to read: Google’s chatbot LaMDA sounds human because — read the manual… What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise.” It’s impressive but, studying the documentation, I think I know what happened to Blake Lemoine. He was hired to chat with LaMDA and didn’t understand… (Eric Holloway)

