Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”

Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake “LaMDA is a person!” Lemoine. He has prompted many people to help clarify what AI — and in particular, a large language model — is and is not. For that, we should thank him.

First, LaMDA is not conscious, not sentient, and not a self. And second, it’s not even a new idea, just a much bigger and more sophisticated version of a 1960s idea.

Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel’s seminal question, What is it like to be a bat? Nagel meant that, if an entity is conscious or sentient, there must be something that it “is like” to be that entity. It is aware of its experiences.

We can’t tell just by asking it. A ventriloquist can get a dummy to say that it feels something, but a convincing performance is just a performance:

As Véliz puts the matter, “it is a category mistake to attribute sentience to anything that can use language”:

Artificial intelligence systems like LaMDA don’t learn language the way we do. Their caretakers don’t feed it a crunchy sweet fruit while repeatedly calling it an “apple.” Language systems scan through trillions of words on the internet. They perform a statistical analysis on written posts on webpages like Wikipedia, Reddit, newspapers, social media, and message boards. Their main job is to predict language…

LaMDA is not reporting on its experiences, but on ours. Language models statistically analyze how words have been used by human beings online and on that basis reproduce common language patterns.

Carissa Véliz, “Why LaMDA Is Nothing Like a Person” at Slate (June 21, 2022)

As noted earlier, under the circumstances, it would be more unusual if LaMDA did not sound like a human than if it did. The program has no other source of information.
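To make that concrete, here is a toy sketch (in Python, with a tiny made-up corpus) of the kind of statistical prediction Véliz describes: count which words tend to follow which in the training text, then generate replies by always picking the most common continuation. Real systems such as LaMDA use neural networks trained on trillions of words rather than a bigram table, but the principle is the same: the output reproduces patterns of past usage, not reports of experience.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the web text a real model is trained on.
corpus = ("i feel happy today . i feel happy when people talk to me . "
          "i feel sad when people leave").split()

# Count which word tends to follow which (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt, length=6):
    """Extend the prompt by repeatedly picking the most common next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The output merely echoes the statistics of the training text.
print(continue_text("i feel"))  # "i feel happy today . i feel happy"
```

The "answer" sounds vaguely human only because the training text was written by humans; nothing in the table feels anything.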

Véliz compares seeing LaMDA as a person to imagining that the reflection in a mirror is another human being living a parallel life.

Now let’s go back and reread something Lemoine has said:

“I know a person when I talk to it,” the engineer told the Washington Post in an interview. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Brandon Spektor, “Google AI ‘is sentient,’ software engineer claims before being suspended” at Live Science (June 13, 2022)

Actually, a billion lines of code does matter. A brain made of meat allows us to learn, but it doesn’t by itself give us the information we need. The billion lines of code play that role for LaMDA: they enable copy from the internet to be parroted back as the expected responses to familiar questions.

Business prof Gary Smith tested the GPT-3 chatbot (he had no access to LaMDA) on irrational questions:

Gary: Can I improve my test scores if I study after taking the test?

GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try!

The lack of understanding was apparent. You can read the rest of the questions and answers here.
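For readers who want to run a similar probe themselves, here is one way a question like Smith’s could be put to GPT-3. This is not Smith’s own setup; the endpoint, model name, and request parameters are assumptions based on OpenAI’s public completions API as it stood in 2022, and they may have changed since.

```python
# Rough sketch of posing Smith-style questions to GPT-3.
# Endpoint, model name, and parameters reflect the 2022 OpenAI completions
# API and are assumptions; supply your own API key via the environment.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set beforehand

prompt = "Can I improve my test scores if I study after taking the test?"

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "text-davinci-002",
        "prompt": prompt,
        "max_tokens": 60,
        "temperature": 0.7,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())
```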

[Photo: Joseph Weizenbaum in 2005 / Ulrich Hansen, GNU 1.2]

Véliz warns that, down the road, people will be more likely to think that chatbots like LaMDA are sentient. But that’s not because the chatbots are becoming sentient. It’s because, the more we write about sentient AI, the more of our thoughts and beliefs a sophisticated chatbot program will blather back to us. In this case, we’re the ventriloquists.

Meanwhile, at The Atlantic, Brian Christian, author of The Alignment Problem (2020), suggests that Blake Lemoine is suffering from the Eliza effect. He reminds us that when the first chatbot, Eliza, was written by MIT professor Joseph Weizenbaum (1923–2008), at least one user came to believe that she had a personal relationship with the program. The chatbot was, in fact, much less sophisticated than LaMDA; it mainly just repeated users’ statements back to them in a questioning way:

Weizenbaum was rattled by the extent to which people would believe Eliza was listening to them:

People would have long conversations with the bot that sounded a lot like therapy sessions.

ELIZA was one of the first computer programs that could convincingly simulate human conversation, which Weizenbaum found frankly a bit disturbing. He hadn’t expected people to be so captivated. He worried that users didn’t fully understand they were talking to a bunch of circuits and pondered the broader implications of machines that could effectively mimic a sense of human understanding.

Delaney Hall, “The Eliza Effect” at 99% Invisible (December 10, 2019)
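How little machinery that takes is easy to show. Below is a minimal Eliza-style sketch, not Weizenbaum’s actual program (which used a much richer script of patterns): it matches a couple of templates, swaps pronouns, and hands the user’s own statement back as a question.

```python
import re

# A tiny Eliza-style exchange: match a pattern, swap pronouns, and hand the
# user's own words back as a question. No understanding is involved anywhere.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza_reply(statement):
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you say you {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel nobody listens to my ideas"))
# -> "Why do you feel nobody listens to your ideas?"
```

A few dozen lines of pattern matching were enough to make some users feel heard; that is the Eliza effect in miniature.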

Some people who should know better fall for it too. Clearly, Robert J. Marks’s book, Non-Computable You, is coming out at a good time.

Note: Here’s a scholarly article on the Eliza effect.


You may also wish to read:

Engineer: Failing to see his AI program as a person is “bigotry.” It’s not different, Lemoine implies, from the historical injustice of denying civil rights to human groups. Lemoine is applying to AI the same “equality” argument as is offered for animal rights. A deep hostility to humans clearly underlies the comparison.


Mind Matters News
