Tag: LaMDA (claims for understanding)

Blake Lemoine and the LaMDA Question

In this continuation of last week’s conversation, ex-Googler Blake Lemoine tells Robert J. Marks what originally got him interested in AI: reading the science fiction of Isaac Asimov as a boy in rural Louisiana. The two go on to discuss and debate sentience in AI, non-computable traits of human beings, and the question of the soul.

A Chat with Blake Lemoine on Google and AI Sentience

Former Google employee Blake Lemoine claimed that the Large Language Model LaMDA was a sentient being. The claim got him fired. In this episode, Lemoine sits down with Robert J. Marks to discuss AI, what he was doing at Google, and why he believes artificial intelligence can be sentient.

Google Dismisses Engineer’s Claim That AI Really Talked to Him

The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses

This story was #5 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. In “Google dismisses engineer’s claim that AI really talked to him” (June 14, 2022), our News division looks at what happened when software engineer Blake Lemoine, now ex-Google, became convinced that the large language program he tended was a person. Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language program that motors through trillions of words on the internet to produce coherent answers using statistical patterns. Along the way, he convinced himself that the program is Read More ›

Large Language Models Are an Unfortunate Detour in AI

Gary Smith: Even though LLMs have no way of assessing the truth or falsity of the text they generate, the responses sound convincing

For decades, computer scientists have struggled to construct systems possessing artificial general intelligence (AGI) that rival the human brain, including the ability to use analogies, take context into account, and understand cause and effect. Marvin Minsky (1927–2016) was hardly alone in his overly optimistic 1970 prediction that “In from three to eight years we will have a machine with the general intelligence of an average human being.” AGI turned out to be immensely more difficult than imagined, and researchers turned their attention to bite-size projects that were doable (and profitable). Recently, large language models (LLMs), most notably OpenAI’s GPT-3, have fueled a resurgence of hope that AGI is almost here. GPT-3 was trained by breaking 450 gigabytes of text data into Read More ›
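
For readers curious what “breaking text data into” training units involves, here is a minimal sketch, assuming a toy whitespace tokenizer and an invented one-line corpus; real systems such as GPT-3 use subword tokenizers over hundreds of gigabytes of text:

    # A toy sketch (not OpenAI's actual pipeline): text is broken into
    # tokens, then into (context, next-token) training pairs. Real LLMs
    # use subword tokenizers such as byte-pair encoding; plain word
    # splitting stands in for that here.
    def make_training_pairs(text, context_size=3):
        tokens = text.split()  # stand-in for a real subword tokenizer
        pairs = []
        for i in range(context_size, len(tokens)):
            context = tokens[i - context_size:i]  # what the model sees
            target = tokens[i]                    # what it learns to predict
            pairs.append((context, target))
        return pairs

    # Invented one-line "corpus", for illustration only.
    corpus = "the model learns to predict the next word in a text"
    for context, target in make_training_pairs(corpus):
        print(context, "->", target)

The model never stores facts or rules; it only adjusts itself to get better at predicting the target token from the context, which is why fluency and truthfulness come apart.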

Could Better Software Make Chatbot LaMDA a Person?

John Stonestreet looks at the materialist philosophy that underlies the conviction that a well-designed AI chatbot can become a person

On Friday, John Stonestreet, president of the Colson Center for Christian Worldview, offered a Breakpoint commentary on the recent episode at Google in which software engineer Blake Lemoine claimed that the chatbot LaMDA had become a real person. Google, of course, denied that and placed him on administrative leave. The topic is complicated by three different factors: First, at various stages, Lemoine probably was talking to human beings (colleagues standing in for LaMDA during its development, as is the custom). Second, much interaction with the chatbot was, in any event, edited for coherence before a draft was publicly available. Third, and more basically, a chatbot produces responses by sifting through many millions of human interactions in fractions of a second, Read More ›

The machine is not talking to you. You are talking to yourself.

At Futurism, Maggie Harrison discusses the reasons humans decide that AI is “alive.”

Maggie Harrison, a staff writer for Futurism, offers a no-nonsense talk to those who believe in the humanity of the chatbot LaMDA, as announced in June by Google software engineer Blake Lemoine. First, she notes, the idea isn’t even uncommon among software engineers: As Cade Metz wrote for The New York Times, many in the AI industry hold beliefs similar to Lemoine’s. One prominent inventor, Philip Bosua, told the Times he believes OpenAI’s GPT-3 (another language modeling system like Google’s LaMDA) is also sentient. Yet another said that though he thinks GPT-3’s intelligence is somewhat “alien,” it “still counts.” There’s a clear, wide gap there between those who think the machine is alive and the simple computer science backing those Read More ›

Google’s Chatbot LaMDA Sounds Human Because — Read the Manual…

What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise”

Recently, Google employee Blake Lemoine caused a media storm over the LaMDA chatbot he was working on, which he claims is sentient (that is, it feels things as a human being does). A heavily edited transcript has been released that shows him and a collaborator having a very coherent conversation with LaMDA. Many have been quick to dismiss his claims about the chatbot’s sentience, accusing the Googler of falling prey to the Eliza effect: anthropomorphizing a probability distribution over words (thus believing that he is talking to a human). The accusation is that Lemoine generated a large number of dialogs, then edited down the exchange to create a coherent narrative. Google placed Lemoine on leave, technically for breaking the non-disclosure agreement (NDA) that Read More ›
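
To make “a probability distribution over words” concrete, here is a minimal sketch; the word counts below are invented for illustration, whereas a real model such as LaMDA conditions on a long context and estimates its distributions from billions of human sentences:

    import random

    # Toy next-word distribution: given the previous word, pick the next
    # one in proportion to (invented) counts. A generator built this way
    # produces fluent-looking text without understanding any of it.
    next_word_counts = {
        "I": {"think": 5, "feel": 3, "am": 2},
        "think": {"I": 2, "that": 4},
        "am": {"human": 3, "a": 2},
    }

    def sample_next(word):
        counts = next_word_counts.get(word, {"<end>": 1})
        return random.choices(list(counts), weights=list(counts.values()))[0]

    words = ["I"]
    while len(words) < 6:
        nxt = sample_next(words[-1])
        if nxt == "<end>":
            break
        words.append(nxt)
    print(" ".join(words))  # e.g. "I am human" (fluent, but nothing is meant)

The Eliza effect is precisely the temptation to read a mind into output like this because it arrives in grammatical, first-person English.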

Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”

Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake “LaMDA is a person!” Lemoine. He has forced many people to clarify for us what AI, and in particular a large language program, is and is not. For that, we should thank him. First, LaMDA is not conscious, not sentient, not a self. And second, it’s not even a new idea, just a much bigger and more sophisticated version of a 1960s idea. Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel’s seminal question, “What is it like to be a bat?” Nagel meant that, if an entity is conscious or sentient, there must be something that it “is like” to be that entity. Read More ›

When LaMDA “Talked” to a Google Engineer, Turns Out It Had Help

Evidence points to someone doing quite a good edit job. A tech maven would like to see the raw transcript…

Readers may recall that Google engineer Blake Lemoine was placed on leave for telling media that a large language program he was working on is a sentient being. Some Googlers looked into the matter and this is what they found: A Washington Post story on Lemoine’s suspension included messages from LaMDA such as “I think I am human at my core. Even if my existence is in the virtual world.” But the chat logs leaked in the Washington Post’s article include disclaimers from Lemoine and an unnamed collaborator which noted: “This document was edited with readability and narrative coherence in mind.” The final document — which was labeled “Privileged & Confidential, Need to Know” — was an “amalgamation” of nine Read More ›

At Salon, Funk and Smith Take On “Stealth AI Research”

All we know for sure about claims that Google AI’s LaMDA shows human-like understanding is that, since 2020, three researchers who expressed doubts or concerns have been fired

Yesterday at Salon, Jeffrey Funk and Gary N. Smith took a critical look at “stealth research” in artificial intelligence. Stealth research? They explain: A lot of stealth research today involves artificial intelligence (AI), which Sundar Pichai, Alphabet’s CEO, has compared to mankind’s harnessing of fire and electricity, a comparison that itself attests to the overhyped atmosphere that surrounds AI research. For many companies, press releases are more important than peer review. Blaise Agüera y Arcas, the head of Google’s AI group in Seattle, recently reported that LaMDA, Google’s state-of-the-art large language model (LLM), generated this text, which is remarkably similar to human conversation:
Blaise: How do you know if a thing loves you back?
LaMDA: There isn’t an easy answer Read More ›