Blake Lemoine, who formerly worked for Google, has doubled down on his claim that AI systems like LaMDA and ChatGPT are “sentient.”
Lemoine went public with his bold claim about sentience in The Washington Post last June, and since parting ways with Google he has not backed down. He posits that because AI can appear anxious and stressed, it can be assumed to be sentient. Maggie Harrison writes at Futurism,
An interesting theory, but still not wholly convincing, considering that chatbots are designed to emulate human conversation — and thus, human stories. Breaking under stress is a common narrative arc; this particular aspect of machine behavior, while fascinating, seems less indicative of sentience, and more just another example of exactly how ill-equipped AI guardrails are to handle the tendencies of the underlying tech.

- Maggie Harrison, Fired Google Engineer Doubles Down on Claim That AI Has Gained Sentience (futurism.com)
Lemoine does think that Large Language Models like ChatGPT and Bing’s new chatbot have the potential to reconfigure the technological landscape, even though we aren’t close to understanding AI’s full implications for society.