Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: LaMDA (claims for sentience)


Lemoine at COSM 2022: A Conversation on AI and LaMDA

Will AI ever become "sentient"?

Blake Lemoine, ex-Google employee and AI expert, sat down with Discovery Institute’s Jay Richards at the 2022 COSM conference last November. Together they discussed AI, Google, and how and why Lemoine got to where he is today. Lemoine famously claimed last year that LaMDA, Google’s breakthrough AI technology, had achieved sentience. He explains that many people at Google believed AI had the potential for sentience, but that such technology should not be built prematurely, for fear of the negative impacts it could have on society. You can listen to their brief and interesting conversation in the video below, along with other sessions from the 2022 COSM conference featuring Lemoine and other leaders and innovators in technology.


Blake Lemoine and the LaMDA Question

In this continuation of last week’s conversation, ex-Googler Blake Lemoine tells Robert J. Marks what originally got him interested in AI: reading the science fiction of Isaac Asimov as a boy in rural Louisiana. The two go on to discuss and debate sentience in AI, non-computable traits of human beings, and the question of the soul.


A Chat with Blake Lemoine on Google and AI Sentience

Former Google employee Blake Lemoine claimed that the large language model LaMDA was a sentient being. The claim got him fired. In this episode, Lemoine sits down with Robert J. Marks to discuss AI, what he was doing at Google, and why he believes artificial intelligence can be sentient.


Experts at COSM Debate Whether Chatbot Was Sentient

It turned out quite pleasant. Google fired Lemoine in 2022, but what really happened there?

Last Thursday morning at COSM, a panel of experts debated whether truly sentient artificial intelligence (AI) could potentially exist — and even whether it already does. Robert J. Marks, distinguished professor of electrical and computer engineering at Baylor University, opened by criticizing the Turing test as a measure of whether we’ve produced genuine AI. Developed by the famous English mathematician and World War II codebreaker Alan Turing, the test holds that if we can’t distinguish a machine’s conversational discourse from that of a real human, then the machine must exhibit humanlike intelligence. Marks maintained that this is the wrong test for detecting true AI.
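For readers who want the test spelled out, here is a minimal, purely illustrative Python sketch of the setup described above. The function names, the canned replies, and the coin-flip judge are assumptions made for illustration only, not anything presented at the panel: a judge reads two anonymous transcripts, one from a human and one from a machine, and must say which is which.

```python
import random

def turing_trial(judge, human_reply, machine_reply, questions):
    """One blind trial: the judge sees two anonymous transcripts and names
    the slot it believes came from the machine. Returns True if the judge
    is fooled (i.e., misidentifies the machine)."""
    respondents = [human_reply, machine_reply]
    random.shuffle(respondents)                      # hide which slot is which
    slots = dict(zip("AB", respondents))
    transcripts = {label: [fn(q) for q in questions] for label, fn in slots.items()}
    guess = judge(transcripts)                       # judge returns "A" or "B"
    machine_slot = next(label for label, fn in slots.items() if fn is machine_reply)
    return guess != machine_slot

if __name__ == "__main__":
    questions = ["What did you dream about last night?", "Why is a joke funny?"]
    human = lambda q: "Hard to say -- let me think about that for a moment."
    machine = lambda q: "That is an interesting question; I enjoy reflecting on it."
    coin_flip_judge = lambda transcripts: random.choice(list(transcripts))
    fooled = sum(turing_trial(coin_flip_judge, human, machine, questions)
                 for _ in range(1000))
    print(f"Judge fooled in {fooled} of 1000 trials")  # roughly 500 for a guessing judge
```

The sketch makes the panel’s point of contention concrete: the test scores only the judge’s verdict on conversational output, saying nothing about what, if anything, is going on inside the machine.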


Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently, Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (that is, it feels things the way a human being does). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the ELIZA effect: anthropomorphizing a probability distribution over words and thus believing that he is talking to a human. The accusation is that Lemoine generated a large number of dialogs, then edited the exchange down to create a coherent narrative. Google placed Lemoine on leave, technically for breaking his non-disclosure agreement (NDA).
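To see concretely what critics mean by “a probability distribution over words,” here is a minimal, purely illustrative Python sketch: a toy bigram model with made-up probabilities, nothing like LaMDA’s actual architecture. Each word of the “reply” is simply sampled from next-word probabilities, which is all such a program is doing, however humanlike a far larger model’s output may read.

```python
import random

# Toy next-word probabilities conditioned on the previous word (assumed values).
NEXT_WORD_PROBS = {
    "<start>": {"i": 0.5, "yes": 0.3, "that": 0.2},
    "i": {"feel": 0.6, "think": 0.4},
    "feel": {"happy": 0.5, "curious": 0.3, "alive": 0.2},
    "think": {"so": 0.7, "deeply": 0.3},
    "yes": {"indeed": 1.0},
    "that": {"depends": 1.0},
}

def sample_reply(max_words: int = 5) -> str:
    """Generate a 'reply' by sampling one word at a time from the distribution."""
    word, reply = "<start>", []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(word)
        if not choices:           # no continuation known for this word: stop
            break
        words, probs = zip(*choices.items())
        word = random.choices(words, weights=probs, k=1)[0]
        reply.append(word)
    return " ".join(reply)

if __name__ == "__main__":
    print(sample_reply())  # e.g. "i feel curious"
```

A reply like “i feel curious” is produced by weighted dice rolls over a lookup table; the ELIZA-effect charge is that a fluent enough version of the same sampling process invites readers to attribute feelings to it.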