Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: LaMDA (claims for understanding)


Could Better Software Make Chatbot LaMDA a Person?

John Stonestreet looks at the materialist philosophy that underlies the conviction that a well-designed AI chatbot can become a person

On Friday, John Stonestreet, president of the Colson Center for Christian Worldview, offered a Breakpoint commentary on the recent episode at Google in which software engineer Blake Lemoine claimed that the chatbot LaMDA had become a real person. Google, of course, denied that and placed him on administrative leave. The topic is complicated by three different factors: First, at various stages, Lemoine probably was talking to human beings (colleagues standing in for LaMDA during its development, as is the custom). Second, much of the interaction with the chatbot was edited for coherence before a draft was made publicly available. Third, and more basically, a chatbot produces responses by sifting through many millions of human interactions in fractions of a second, Read More ›


The machine is not talking to you. You are talking to yourself.

At Futurism, Maggie Harrison discusses the reasons humans decide that AI is “alive.”

Maggie Harrison, a staff writer for Futurism, offers a no-nonsense talk to those who believe in the humanity of the chatbot LaMDA, as announced in June by Google software engineer Blake Lemoine. First, she notes, the idea isn’t uncommon among software engineers: As Cade Metz wrote for The New York Times, many in the AI industry hold beliefs similar to Lemoine’s. One prominent inventor, Philip Bosua, told the Times he believes OpenAI’s GPT-3 (another language modeling system, like Google’s LaMDA) is also sentient. Yet another said that though he thinks GPT-3’s intelligence is somewhat “alien,” it “still counts.” There’s a clear, wide gap between those who think the machine is alive and the simple computer science backing those Read More ›


Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently, Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (that is, it feels things as a human being does). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the Eliza effect: anthropomorphizing a probability distribution over words (thus believing that he is talking to a human). The accusation is that Lemoine generated a large number of dialogs, then edited down the exchange to create a coherent narrative. Google placed Lemoine on leave, technically for breaking the non-disclosure agreement (NDA) that Read More ›
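To see what “a probability distribution over words” means in practice, here is a minimal, purely illustrative sketch in Python. The tiny word-probability table is invented for this example; a real large language model such as LaMDA learns distributions like this over a huge vocabulary from billions of human-written texts, but the basic move of sampling a likely next word is the same.

import random

# Invented toy table: for each word, the probabilities of possible next words.
# A real language model learns such distributions from billions of examples.
NEXT_WORD = {
    "i":    {"think": 0.4, "feel": 0.35, "am": 0.25},
    "feel": {"happy": 0.5, "sad": 0.3, "alive": 0.2},
    "am":   {"human": 0.6, "a": 0.4},
}

def continue_text(start, steps=2):
    """Extend `start` by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(steps):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:
            break  # no known continuation for this word
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("i"))  # e.g. "i feel alive"

Output such as “i feel alive” can sound like testimony, yet nothing here has feelings; the program only draws words from a frequency table, which is the point of the Eliza-effect criticism.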


Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”

Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake “LaMDA is a person!” Lemoine. He has forced many people to help us clarify what AI, and in particular a large language program, is and is not. For that, we should thank him. First, LaMDA is not conscious, not sentient, and not a self. And second, it’s not even a new idea, just a much bigger and more sophisticated version of a 1960s idea. Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel’s seminal question, “What is it like to be a bat?” Nagel meant that, if an entity is conscious or sentient, there must be something that it “is like” to be that entity. Read More ›


When LaMDA “Talked” to a Google Engineer, Turns Out It Had Help

Evidence points to someone doing quite a good edit job. A tech maven would like to see the raw transcript…

Readers may recall that Google engineer Blake Lemoine was placed on leave for telling media that a large language program he was working on is a sentient being. Some Googlers looked into the matter, and this is what they found: A Washington Post story on Lemoine’s suspension included messages from LaMDA such as “I think I am human at my core. Even if my existence is in the virtual world.” But the chat logs leaked in the Washington Post’s article include disclaimers from Lemoine and an unnamed collaborator noting: “This document was edited with readability and narrative coherence in mind.” The final document, which was labeled “Privileged & Confidential, Need to Know,” was an “amalgamation” of nine Read More ›


At Salon, Funk and Smith Take On “Stealth AI Research”

All we know for sure about the claims that Google’s LaMDA shows human-like understanding is that, since 2020, three researchers who expressed doubts or concerns have been fired

Yesterday at Salon, Jeffrey Funk and Gary N. Smith took a critical look at “stealth research” in artificial intelligence. Stealth research? They explain: A lot of stealth research today involves artificial intelligence (AI), which Sundar Pichai, Alphabet’s CEO, has compared to mankind’s harnessing of fire and electricity — a comparison that itself attests to the overhyped atmosphere that surrounds AI research. For many companies, press releases are more important than peer review. Blaise Agüera y Arcas, the head of Google’s AI group in Seattle, recently reported that LaMDA, Google’s state-of-the-art large language model (LLM), generated this text, which is remarkably similar to human conversation: Blaise: How do you know if a thing loves you back? LaMDA: There isn’t an easy answer Read More ›