Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Eliza effect

Mouse Cursor Clicking CAPTCHA

CAPTCHA: How Fooling Machines Is Different From Fooling Humans

Automated censorship is intended to protect against a tidal wave of spam, but it could certainly have other uses…

Readers of Mind Matters News have likely heard of the iconic Turing test. Computer pioneer Alan Turing famously invented a test to determine whether a program could pass as a human. The gist is, if a program can fool human testers into believing it is a human, then the program is intelligent. Not everyone is convinced. Thing is, it doesn’t take much to fool us humans! Take ELIZA, a program of only a few hundred lines, written in the 1960s, which fooled many people into believing it was a real human therapist. But what if we flip the Turing test on its head? Instead of a test where a program tries to pass as human, we use a test…

Chatbot / social bot with source code in the background

Could Better Software Make Chatbot LaMDA a Person?

John Stonestreet looks at the materialist philosophy that underlies the conviction that a well-designed AI chatbot can become a person

On Friday, John Stonestreet, president of the Colson Center for Christian Worldview, offered a Breakpoint commentary on the recent episode at Google in which software engineer Blake Lemoine claimed that the chatbot LaMDA had become a real person. Google, of course, denied that and placed him on administrative leave. The topic is complicated by three different factors: First, at various stages, Lemoine probably was talking to human beings (colleagues standing in for LaMDA during its development, as is the custom). Second, much interaction with the chatbot was edited for coherence before a draft was publicly available. Third — and more basically — a chatbot produces responses by sifting through many millions of human interactions in fractions of a second,…

Chatbot assistant, AI Artificial Intelligence

Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”

Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake “LaMDA is a person!” Lemoine. He has forced many people to help us clarify what AI — and in particular, a large language program — is and is not. For that, we should thank him. First, LaMDA is not conscious, not sentient, and not a self. And second, it’s not even a new idea, just a much bigger and more sophisticated version of a 1960s idea. Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel’s seminal question, “What is it like to be a bat?” Nagel meant that, if an entity is to be conscious or sentient, there must be something that it “is like” to be that entity.…