Google engineer Blake Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language model that churns through trillions of words from the internet to produce coherent-sounding answers. Along the way, he convinced himself that the program was sentient:
Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.
As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Nitasha Tiku, “The Google engineer who thinks the company’s AI has come to life” at Washington Post (June 11, 2022)
What was he talking about?
In a document shared with executives, Lemoine and an unnamed collaborator provide an example dialogue with the program. A portion of that dialogue reads:
Lemoine (edited): I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
Collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times
Lemoine and his collaborator go on to ask LaMDA to interpret Victor Hugo’s novel “Les Miserables,” and the program draws from a Deseret article and SparkNotes to “interpret” themes from the book.

Collin Leonard, “Do computers have souls? Blake Lemoine says yes” at Deseret News (June 13, 2022)
Google brass looked into it and dismissed his claims.
A prof explains:
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in.

Nitasha Tiku, “The Google engineer who thinks the company’s AI has come to life” at Washington Post (June 11, 2022)
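The fill-in-the-blank training Bender describes can be illustrated with a toy sketch. This is a deliberately simplified illustration using raw word counts, not LaMDA’s actual neural architecture, and the tiny corpus below is hypothetical:

```python
from collections import Counter

# A tiny stand-in for the "lots of text" a real model is shown.
# (Hypothetical sentences; a real model sees trillions of words.)
corpus = [
    "i feel happy today",
    "i feel sad today",
    "you feel happy today",
    "i feel happy again",
    "we feel happy now",
]

# Count which word appears between each pair of context words.
fill_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        fill_counts[(words[i - 1], words[i + 1], words[i])] += 1

def fill_blank(left, right):
    """Fill 'left ___ right' with the word most often seen there."""
    candidates = {w: c for (l, r, w), c in fill_counts.items()
                  if l == left and r == right}
    return max(candidates, key=candidates.get) if candidates else None

print(fill_blank("feel", "today"))  # "happy" (seen twice) beats "sad" (once)
```

The point of the sketch: the program never needs to know what “happy” means. It only needs to know which word most frequently occupied that slot in its training text.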
Lemoine decided to go public about his relationship with LaMDA:
The two had “hundreds of conversations”, he said in a blog post, “about stuff you might simply talk to a friend about”. “Things like which movies it likes, what its hopes and dreams are, what it’s been doing lately and whether it could give me advice on personal issues relevant to my real life,” he wrote.
Over time, those conversations became more personal, and Mr Lemoine tried to lead the system in guided meditation. That was the subject of their last conversation, on 6 June, he said.
It was those conversations and others that had led to his non-scientific belief that the system was sentient, he said. He has stressed in the new blog post and in tweets that his belief was not based on scientific understanding but rather his religious beliefs.

Andrew Griffin, “Google engineer who claims its AI had become ‘sentient’ reveals how he was convinced” at Yahoo News (June 14, 2022)
Google placed Lemoine on leave, citing breach of confidentiality. He has since raised the stakes, alleging religious discrimination in return.
He grew up in a Christian home and is currently an occultist and a priest in a mystical religious group. Not only does he think that LaMDA is sentient, but he credits his Christian faith with helping him believe it:
Now, Lemoine says that his claims about LaMDA come from his experience as a “Christian priest” — and is accusing Google of religious discrimination.
“When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt,” Lemoine wrote on Twitter late Monday. “Who am I to tell God where he can and can’t put souls?”

Theo Wayt, “Google engineer says Christianity helped him understand AI is ‘sentient’” at New York Post (June 14, 2022)
Lemoine is reportedly still on leave, and other computer pros have tried to explain how large language programs work:
Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including “Rebooting AI: Building Artificial Intelligence We Can Trust,” called the idea of LaMDA as sentient “nonsense on stilts” in a tweet. He quickly wrote a blog post pointing out that all such AI systems do is match patterns by pulling from enormous databases of language.
In an interview Monday with CNN Business, Marcus said the best way to think about systems such as LaMDA is like a “glorified version” of the auto-complete software you may use to predict the next word in a text message. If you type “I’m really hungry so I want to go to a,” it might suggest “restaurant” as the next word. But that’s a prediction made using statistics.

Rachael Metz, CNN Business, “No, Google’s AI is not sentient: Tech company shuts down engineer’s claim of program’s consciousness” at ABC News (June 14, 2022)
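Marcus’s autocomplete analogy can be made concrete with a few lines of code. The sketch below uses simple bigram counts (how often one word follows another), which is far cruder than a real autocomplete or LaMDA, and the “training” sentences are hypothetical, but the principle is the same: the suggestion is a statistical prediction, not understanding:

```python
from collections import Counter, defaultdict

# A tiny hypothetical history standing in for the statistics a real
# autocomplete gathers from vast amounts of text.
history = [
    "i want to go to a restaurant",
    "i want to go to a movie",
    "let us go to a restaurant tonight",
]

# Count how often each word follows each preceding word (bigram counts).
next_word = defaultdict(Counter)
for sentence in history:
    words = sentence.split()
    for prev, cur in zip(words, words[1:]):
        next_word[prev][cur] += 1

def autocomplete(text):
    """Suggest the most frequent continuation of the last word typed."""
    last = text.split()[-1]
    counts = next_word.get(last)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("i am really hungry so i want to go to a"))  # "restaurant"
```

Because “restaurant” followed “a” more often than “movie” did in the history, it wins. No hunger, no intention, no dining plans are involved anywhere in the calculation.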
In short, the reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses. Under the circumstances, it would be odd if LaMDA didn’t sound like a person. But that doesn’t mean anyone is “in there.”
Many computer mavens are convinced, however — as an article of faith — that AI will one day indeed be conscious. York University philosopher Regina Rini offers a revealing challenge:
5/15. Unless you want to insist human consciousness resides in an immaterial soul, you ought to concede that it is possible for matter to give life to mind. And it will happen faster the second time, driven by deliberate design, not natural chance.
— Regina Rini (@rinireg) June 13, 2022
So even if decade upon decade of trying to produce conscious AI results only in future uproars of this sort, we must remain open to the possibility that matter can give life to mind. Actually, the inability of even “deliberate design” to produce minds from matter should be taken as one line of evidence for an immaterial soul.
You may also wish to read:
Would AI still win at Go if the board shrank from 19 to 17 spaces? No, say Jeffrey Funk and Gary Smith — and would-be investors need to grasp AI’s weaknesses as well as its strengths to succeed. In a recent column in MarketWatch, statistician Funk and business prof Smith share true tales of market losses from failure to grasp AI’s limitations.
Are computers that win at chess smarter than geniuses? No, and we need to look at why they can win at chess without showing even basic common sense. AI succeeds where the skill required to win is massive calculation and the map IS the territory. Alone in the real world, it is helpless. (George Gilder)