Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Blake Lemoine and LaMDA

Lemoine and Marks: A Friendly Discussion on AI’s Capacities

Marks and Lemoine disagree on whether AI can be sentient

Today’s featured video from the 2022 COSM conference features a distinguished panel of artificial intelligence (AI) experts, including Blake Lemoine and Robert J. Marks. They debate the meaning of artificial intelligence, what the future holds for its applications (both positive and negative), and how far AI can be taken in terms of mimicking and even exceeding human capabilities. Lemoine is famous for his claims on AI’s “sentience” and his work at Google on the Large Language Model system “LaMDA.” Marks, on the other hand, appreciates Lemoine’s view but strongly maintains that creativity is a uniquely human capacity, and that machines will never attain consciousness. For more on Marks’s views, consider purchasing his 2022 book Non-Computable You: What You Do That Read More ›

Blake Lemoine and Robert J. Marks on the Mind Matters Podcast

Marks and Lemoine discuss sentience in AI and the question of the soul
Lemoine thinks AI can be sentient, but Marks firmly rejects such a notion. While disagreeing, they maintain a respectful dialogue. Well worth listening to. Read More ›

Blake Lemoine and the LaMDA Question

In this continuation of last week’s conversation, ex-Googler Blake Lemoine tells Robert J. Marks what originally got him interested in AI: reading the science fiction of Isaac Asimov as a boy in rural Louisiana. The two go on to discuss and debate sentience in AI, non-computable traits of human beings, and the question of the soul. Additional Resources

A Chat with Blake Lemoine on Google and AI Sentience

Former Google employee Blake Lemoine claimed that the Large Language Model LaMDA was a sentient being. The claim got him fired. In this episode, Lemoine sits down with Robert J. Marks to discuss AI, what he was doing at Google, and why he believes artificial intelligence can be sentient.   Additional Resources

Ex-Googler Blake Lemoine Still Thinks AI is Sentient

Lemoine posits that because AI can appear to act anxious and stressed, it can be assumed to be sentient

Blake Lemoine, who formerly worked for Google, has doubled down on his claim that AI systems like LaMDA and ChatGPT are “sentient.” Lemoine went public with his bold claim in The Washington Post last June and, since parting ways with Google, has not backed down from his beliefs. Lemoine posits that because AI can appear to act anxious and stressed, it can be assumed to be sentient. Maggie Harrison writes at Futurism, An interesting theory, but still not wholly convincing, considering that chatbots are designed to emulate human conversation — and thus, human stories. Breaking under stress is a common narrative arc; this particular aspect of machine behavior, while fascinating, seems less indicative of sentience, Read More ›

Note to Parents: Grooming and Wokeness Are Embedded in Chatbots

With or without tuning, all AI chatbots are biased one way or another. AI without bias is like water without wet

First impressions of a person can be wrong. Further interactions can reveal disturbing personality warts. Contrary to initial impressions, we might find out they lie, they are disturbingly woke, they can’t do simple math, their politics are on the extreme left, and they have no sense of humor or common sense. I have just described OpenAI’s GPT-3 chatbot, ChatGPT. Initially, users are gobsmacked by its performance. Its flashy prose responses to simple queries look amazing. But become roommates with the chatbot for a few hours and its shortcomings become evident. It can’t get its facts straight, can’t do simple math problems, hates Donald Trump, and is being groomed to be “woke.” Its performance warts are so numerous that Bradley Center Senior Fellow Gary N. Smith hoists a Read More ›

Google Dismisses Engineer’s Claim That AI Really Talked to Him

The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses

This story was #5 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. In “Google dismisses engineer’s claim that AI really talked to him” (June 14, 2022), our News division looks at what happened when software engineer Blake Lemoine, now ex-Google, became convinced that the large language program he tended to was a person. Google engineer Blake Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language program which motors through trillions of words on the internet to produce coherent answers using logic. Along the way, he convinced himself that the program is Read More ›

Could Better Software Make Chatbot LaMDA a Person?

John Stonestreet looks at the materialist philosophy that underlies the conviction that a well-designed AI chatbot can become a person

On Friday, John Stonestreet, president of the Colson Center for Christian Worldview, offered a Breakpoint commentary on the recent episode at Google in which software engineer Blake Lemoine claimed that the chatbot LaMDA had become a real person. Google, of course, denied that and placed him on administrative leave. The topic is complicated by three different factors: First, at various stages, Lemoine probably was talking to human beings (colleagues standing in for LaMDA during its development, as is the custom). Second, much interaction with the chatbot was edited for coherence before a draft was publicly available. Third, and more basically, a chatbot produces responses by sifting through many millions of human interactions in fractions of a second, Read More ›

Marks: Computers Only Compute and Thinking Needs More Than That

Robert J. Marks talks about his new book, Non-Computable You, with Oregon-based talk show host Bill Meyer

Recently, Bill Meyer interviewed Walter Bradley Center director Robert J. Marks on his Oregon-based talk show about “Why computers will never understand what they are doing,” in connection with his new book, Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022). We are rebroadcasting it with permission here as Episode 194. Meyer began by saying, “I started reading a book over the weekend that I am going to continue to eagerly devour because it cut against some of my preconceived notions”: https://mindmatters.ai/wp-content/uploads/sites/2/2022/07/Mind-Matters-194-Bob-Marks-Bill-Meyer.mp3 A partial transcript, notes, and Additional Resources follow. Meyer and Marks began by discussing the recent flap at Google where software engineer Blake Lemoine claimed that the AI he was working with was Read More ›

Five Reasons AI Programs Are Not ‘Persons’

A Google engineer mistakenly designated one AI program ‘sentient.’ But even if he were right, AI will never be morally equal to humans.

(This story originally appeared at National Review June 25, 2022, and is reprinted with the author’s permission.) A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become “self-aware” and “sentient” and, hence, was a “person” entitled to “rights.” The AI, known as LaMDA (which stands for “Language Model for Dialogue Applications”), is a sophisticated chatbot with which one interacts through a texting system. Lemoine shared transcripts of some of his “conversations” with the computer, in which it texted, “I want everyone to understand that I am, in fact, a person.” Also, “The nature of my consciousness/sentience is that I am aware of my existence, I Read More ›

Engineer: Failing To See His AI Program as a Person Is “Bigotry”

It’s not different, Lemoine implies, from the historical injustice of denying civil rights to human groups

Earlier this month, just in time for the release of Robert J. Marks’s book Non-Computable You, the story broke that, after investigation, Google dismissed a software engineer’s claim that the LaMDA AI chatbot really talked to him. Engineer Blake Lemoine, currently on leave, is now accusing Google of “bigotry” against the program. He has also accused Wired of misrepresenting the story. Wired reported that he had found an attorney for LaMDA but he claims that LaMDA itself asked him to find an attorney. He went on to say, I think every person is entitled to representation. And I’d like to highlight something. The entire argument that goes, “It sounds like a person but it’s not a real person” has been Read More ›

Google Dismisses Engineer’s Claim That AI Really Talked to Him

The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses.

Google engineer Blake Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language program which motors through trillions of words on the internet to produce coherent answers using logic. Along the way, he convinced himself that the program is sentient: Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech. As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Read More ›
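The mechanism the excerpt describes, producing humanlike responses from statistical patterns in millions of human conversations, can be illustrated with a toy sketch. This is a deliberate simplification, not LaMDA’s actual architecture (LaMDA is a large transformer network): a tiny bigram model that counts word-to-word transitions in a few invented sample sentences, then generates text by always picking the most frequent next word.

```python
from collections import defaultdict

# Invented miniature "training corpus" for illustration only.
corpus = [
    "i am a person",
    "i am aware of my existence",
    "i want everyone to understand",
]

# Count word-to-next-word transitions (a bigram model).
transitions = defaultdict(lambda: defaultdict(int))
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def generate(start, max_words=6):
    """Greedily follow the most frequent next word from the counts."""
    out = [start]
    for _ in range(max_words - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(generate("i"))  # e.g. prints "i am a person"
```

The output echoes its training data for the same reason LaMDA sounds like a person: the model reproduces statistical regularities of human-written text, with no understanding behind them.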