Readers may recall that Google engineer Blake Lemoine was placed on leave for telling media that a large language program he was working on is a sentient being. Some Googlers looked into the matter and this is what they found:
A Washington Post story on Lemoine’s suspension included messages from LaMDA such as “I think I am human at my core. Even if my existence is in the virtual world.”
But the chat logs leaked in the Washington Post’s article include disclaimers from Lemoine and an unnamed collaborator which noted: “This document was edited with readability and narrative coherence in mind.”
The final document — which was labeled “Privileged & Confidential, Need to Know” — was an “amalgamation” of nine different interviews conducted at different times on two different days, pieced together by Lemoine and the other contributor. The document also notes that the “specific order” of some of the dialogue pairs was shuffled around “as the conversations themselves sometimes meandered or went on tangents which are not directly relevant to the question of LaMDA’s sentience.”
Katie Anthony, “The transcript used as evidence that a Google AI was sentient was edited and rearranged to make it ‘enjoyable to read’” at Yahoo News (June 13, 2022)
In short, it wasn’t what Lemoine was claiming. He and a colleague put together the parts that made the most sense. None of that would be a problem but for his claim that the large language generator is sentient.
Writing at Futurism, Victor Tangermann asks to see the raw transcripts:
An old trick in AI-generated text and art is to produce a lot of raw output, and then use human judgment to pick the most impressive examples. It’s still cool, but it’s more of a collaboration between human and machine intelligence, and problematic for any claims of advanced capabilities.
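The “old trick” Tangermann describes is simple to state: draw many raw samples from a generator, then let a human (or a stand-in scoring function) keep only the most impressive one. A minimal sketch of that pattern, with a hypothetical canned generator standing in for any real language model:

```python
import random

random.seed(0)  # deterministic for illustration

def generate(prompt):
    # Hypothetical stand-in generator: real systems produce far more
    # varied output, but the curation logic below is the same.
    responses = [
        "I think I am human at my core.",
        "Error: token limit exceeded.",
        "The weather is weather.",
        "Even if my existence is in the virtual world.",
    ]
    return random.choice(responses)

def cherry_pick(prompt, n_samples, score):
    # Draw many raw samples, then keep only the highest-scoring one --
    # the human-in-the-loop curation step, modeled as a score function.
    samples = [generate(prompt) for _ in range(n_samples)]
    return max(samples, key=score)

def impressiveness(text):
    # Toy proxy for human judgment: longer, first-person answers rank higher.
    return len(text) + (10 if text.startswith("I ") else 0)

best = cherry_pick("Are you sentient?", n_samples=20, score=impressiveness)
print(best)
```

The point of the sketch is that the published output reflects the selector as much as the generator: the duller samples are simply never shown.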
Lemoine, however, argues the edits he made to the transcripts, which were “intended to be enjoyable to read,” still kept them “faithful to the content of the source conversations,” according to the documentation.
That leaves the obvious question: would reading the much lengthier passages give us the same impression of LaMDA’s “sentience?” Cherry-picking passages to build a case that a chatbot is sentient should give anybody pause when evaluating Lemoine’s theory.
Victor Tangermann, “Transcript of Conversation With “Sentient” AI Was Heavily Edited” at Futurism (June 14, 2022)
At TechRadar, tech journalist Lance Ulanoff reflects,
Google has access to the world’s information. An indexed knowledge graph across every imaginable topic. It has text and videos. It’s safe to assume that LaMDA 2 has, arguably, seen every form of human conversation imaginable.
This has made it into an able mimic of human locution.
It appears, though, to be forming ideas on the fly, but even that is a trick. I don’t mean the system is designed to trick us. But it has enough information about how an unlimited number of conversations on almost any topic can go, that it can come up with not just an intelligent response, but one that adds in a sort of personality.
This doesn’t make LaMDA sentient.
Lance Ulanoff, “Google AI shares our wants and needs – but, sorry, it’s not sentient” at TechRadar (June 14, 2022)
No, but that’s what it’s programmed to say it is. And some software engineers appear programmed to believe it.
You may also wish to read: Google dismisses engineer’s claim that AI really talked to him. The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses. Under the circumstances, it would be odd if the LaMDA program DIDN’T sound like a person. But that doesn’t mean anyone is “in there.”