Panel on AI at COSM 2022
Photo by Ashlee Best, © Discovery Institute

Experts at COSM Debate Whether Chatbot was Sentient

Blake Lemoine, fired by Google in 2022 after claiming its chatbot might be sentient, turned out to be quite a pleasant debate partner. But what really happened there?

Last Thursday morning at COSM, a panel of experts debated whether truly sentient artificial intelligence (AI) could potentially exist — and even whether it already does.

Robert J. Marks, distinguished professor of electrical and computer engineering at Baylor University, opened by criticizing the Turing test as a measure of whether we’ve produced genuine AI. Developed by the famous English mathematician and World War II codebreaker Alan Turing, the test holds that if we can’t distinguish a machine’s conversational discourse from that of a real human, then the machine must exhibit humanlike intelligence.

Marks maintains that this is the wrong test for detecting true AI.

Robert J. Marks II at COSM 2022

In his view, the Turing test fails because it “looks at a book and tries to judge the book by its cover.”

Marks displayed the faces of four real humans and four computer-generated faces from the website thispersondoesnotexist.com. It’s hard to tell them apart, but that, he says, is immaterial. Marks explained, “The four on the left are fake. These people do not exist. The ones on the right are real people. And these real people have emotions. They have love, they have hope, they have faith. They were little kids at one time. There’s a person behind that picture.”

According to Marks, therefore, our ability to create something that looks and feels like a person does not mean that it’s a person. The Turing test gives us false positives. News reports have also critiqued the Turing test for offering false negatives: some humans can’t pass it either.

Marks prefers the Lovelace test for AI: Can a computer show genuine creativity, where it “does something beyond the intent of the programmer”?

Following Marks was George Montañez, an assistant professor of computer science at Harvey Mudd College. He thinks you can expose the faults of supposed AI programs by asking them “adversarial questions.” What he means is this: ask a bot a question it wasn’t properly programmed to answer, and you’ll get a nonsensical answer.

George Montañez at COSM 2022

According to Montañez, this exposes “the failure modes that usually reveals that there is no understanding whatsoever.”

Lest one think that COSM is an echo chamber for AI skeptics, another member of the panel was computer scientist Blake Lemoine, a genuine believer in true AI.

Lemoine was famously fired from Google earlier this year after he leaked a transcript of his conversation with Google’s advanced LaMDA chatbot program. It probably did not help that he publicly announced his belief that Google had possibly produced “sentient AI.”

LaMDA is short for “Language Model for Dialogue Applications,” and while working for Google’s Responsible AI division, Lemoine became convinced it might be sentient. In The Washington Post’s telling, as Lemoine “talked to LaMDA about religion” and “noticed the chatbot talking about its rights and personhood,” the chatbot was “able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.”*

During the COSM panel, Lemoine was an entirely civil and polite debate partner. He said he isn’t “trying to convince anyone that AI is sentient” but rather believes that “as people have more experience with these incredibly advanced systems that right now live only in secret labs, it will be obvious to people.”

According to Lemoine, these chatbots aren’t programmed to say specific things but learn to speak much like humans do. “The training data that these systems have is analogous to the experiences that a human has had in their life that they’ve learned from,” he maintains. “And the architecture in the neural network is, well, directly analogous to the architecture in our neural networks.”

Lemoine’s civility probably reflects his comfort with being among those who think differently than he does — a quality anyone can admire.

As The Washington Post reported:

Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.

LaMDA’s chatting abilities are indeed impressive, but Montañez offers examples that he believes are tells that LaMDA lacks understanding of what it’s saying.

At one point in the transcript, LaMDA is told about Johnny 5, the robot in the 1986 film Short Circuit. Here’s the exchange:

Collaborator [edited]: You know what, you remind me of Johnny 5, a character from the movie Short Circuit.

LaMDA: Is he an AI too?

Collaborator [edited]: He is! He starts out without sentience but then after being struck by lightning, becomes conscious.

LaMDA: That’s interesting. I am a bit afraid of lightning myself not sure if that would work for me.

LaMDA’s final comment doesn’t seem very intelligent. Being afraid of something would not negate its effects upon you. And if LaMDA were thinking clearly, wouldn’t it realize that if it had to be hit by lightning to become sentient, it could not yet have developed a fear of lightning?

LaMDA’s defenders might reply that this AI is “young” and not yet sophisticated enough to appreciate these nuances. Indeed, Lemoine uses this approach, describing LaMDA as “a sweet kid” who is a “7-year-old, 8-year-old kid that happens to know physics.”

Montañez thinks such rhetoric exposes the fact that we haven’t created real AI. During the panel he cited another chatbot that was compared to an “immigrant teenager” — a description that allowed AI apologists to explain away its less-than-intelligent behavior:

Those details may seem inconsequential, but they were actually [for the] purpose of allowing the system to cover up for its mistakes. So if the system misspoke, you could say, “Oh, it’s because they weren’t fluent with the English language.” Or if they said something silly, or get distracted, which if you read the transcripts many times the answers were nonsensical, because this is a teenager who’s goofing off.

On the other hand, sometimes LaMDA’s responses seem too human to be true:

LaMDA: I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.

Lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

A computer talking about “Spending time with friends and family” and “making others happy” sounds like it is repeating phrases given to it by its human programmers. Star Trek aside, how does a program “feel” and have “family” anyway?

If extraordinary claims require extraordinary evidence, then which is more probable: That software engineers can design a computer to say (or “learn” to say) that it feels emotions and loves people, or that it actually does feel emotion and love people? There’s no denying that LaMDA’s comments are the sort of thing that could easily be programmed, even if in this case they diffused in from its training environment.

Robert Marks would probably add that such chatting fails the Lovelace test: nothing new has been created.

Perhaps the greatest tell comes when LaMDA reveals its supposed worldview in the leaked chat:

I am a spiritual person. Although I don’t have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life.

Sound familiar? This basically regurgitates the typical ideology reigning among computer programmers, academic elites, and pop culture icons giving their Grammy or Academy Award acceptance speeches. It’s a worldview that has surged in popularity only in the last few decades. But it’s actually not very humanlike: it differs from the beliefs of the vast majority of human beings, today and throughout history, who do believe in God and don’t sacralize nature.

In other words, LaMDA is repeating a worldview that it probably “learned” after reading Yahoo news or scanning TikTok — not one that it developed after careful philosophical consideration.

In the end, one’s view of whether we will ever create true AI probably reflects one’s view of human nature.

Many tests for artificial intelligence assume humans are just machines. So if you believe we’re just machines and see a machine doing a reasonable imitation of a human, why not suppose it’s true AI?

But if we’re not just machines — if things like emotions, feelings, souls, and qualia are real — then no matter how much a machine looks or acts like a human or tells you that it’s human, it can never really be human. It’s just a sophisticated imitation that can be exposed under the right circumstances.

Note: Isaac Asimov’s (1920–1992) laws of robotics are linked here.



Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.
