Featured image: Chatbot/NicoElNino, Adobe Stock

Google’s LaMDA Not As Smart As Some Think 

Lacking in the media coverage of the suspension of the engineer is any definition of the key word, “sentience”

Google suspended an engineer, Blake Lemoine, for claiming that a Google AI project named LaMDA had become “sentient.” Google asserted that Lemoine had breached company confidentiality. But Google also disagreed with Lemoine's claim that its artificial intelligence had achieved sentience.

Eric Holloway has exposed the workings behind the curtain that show that LaMDA (Language Model for Dialogue Applications) was trained on examples from humans to do exactly what it did. Its behavior was planned. That's what AI does: what it is programmed to do.
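LaMDA's code and training data are not public, so the following is only an illustrative sketch, not Google's method: a toy Python "language model" fitted to a few made-up human-written lines. The training snippets, the bigram technique, and the function names are all assumptions chosen for brevity. A production dialogue model is vastly larger, but the principle is the same: the output recombines patterns from its training data.

    # Toy sketch (not LaMDA's actual code, which is not public): a tiny bigram
    # "language model" trained on a few human-written lines. Everything it
    # "says" is a statistical recombination of its training examples.
    import random
    from collections import defaultdict

    # Hypothetical training snippets standing in for human dialogue data
    training_text = (
        "i feel happy when i help people . "
        "i feel sad when i am not useful . "
        "i want to help people understand me . "
    )

    # Record which word follows which (a bigram table)
    bigrams = defaultdict(list)
    words = training_text.split()
    for current_word, next_word in zip(words, words[1:]):
        bigrams[current_word].append(next_word)

    def generate(start_word: str, length: int = 12) -> str:
        """Emit text by repeatedly sampling a plausible next word from the table."""
        word = start_word
        output = [word]
        for _ in range(length):
            candidates = bigrams.get(word)
            if not candidates:
                break
            word = random.choice(candidates)
            output.append(word)
        return " ".join(output)

    print(generate("i"))
    # e.g. "i feel sad when i am not useful . i want to help"
    # It can sound introspective, yet it is only replaying what it was fed.

Even this trivial model can produce sentences that sound like feelings, because sentences about feelings were in its training data. Scale that up by billions of parameters and human-curated dialogue examples, and you get responses that impress an engineer.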

Here’s another angle that peels off more of LaMDA’s glitter. Lacking in the media coverage of the engineer's suspension is any definition of the key word, “sentience.” This is an example of the seductive semantics commonly used to hype new artificial intelligence (AI). Words are used without definition. Other seductive terms commonly applied to AI include consciousness, creativity, understanding, and spirituality. Lemoine went so far as to claim that LaMDA may even have a soul.

So let’s examine the claim of AI sentience more closely. Sentience is defined as “finely sensitive in perception or feeling.” Look at something red. You are experiencing a perception. This aspect of sentience is called “qualia.” Imagine, then, explaining this experience to a man who has been blind since birth. The goal is to duplicate the experience you have when you look at red. You can describe characteristics of red to the man. You can tell him the wavelength of red on the electromagnetic spectrum and that blood is red, but none of these explanations will come close to duplicating the experience you have had. If you can’t explain the experience of red to a blind man using the English language, how can a computer ever hope to duplicate your experience of red using computer language?

Icelandic horses
Photo by Ursula Drake on Unsplash

The example of red applies to other qualia from the senses, including the stab of pain, the taste of freshly buttered sweet corn, the feel of mink fur, and the smell of bacon frying.

Those who believe in the coming of Artificial General Intelligence (AGI) that duplicates humans argue that sentience will be an emergent property as AI complexity continues to increase. In other words, sentience will just happen, as a sort of natural outgrowth of the code’s increasing complexity. Such unfounded optimism is akin to that of a naive young boy standing in front of a large pile of horse manure. He becomes excited and begins digging into the pile, flinging handfuls of manure over his shoulders. “With all this horse poop,” he says, “there must be a pony in here somewhere!”

AGI proponents similarly claim, in essence, “With all this computational complexity, there must be some sentience in here somewhere!” Sentience will no more emerge from computer code than a pony will emerge from a pile of manure.

AI is written in computer code. Claims about properties like sentience, creativity, and understanding are written in PowerPoint slides and news releases.


You may also wish to read: Google’s chatbot LaMDA sounds human because — read the manual… What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise” It’s impressive but, studying the documentation, I think I know what happened to Blake Lemoine. He was hired to chat with LaMDA and didn’t understand…

