Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Eliza effect


Computers Still Do Not “Understand”

Don't be seduced into attributing human traits to computers.

The subtitle of a recent New Yorker article was: “Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours.” I respectfully disagree. As I’ve repeatedly argued, the real danger today is not that computers are smarter than us but that we think computers are smarter than us. Hinton is extremely intelligent, but he is not the first, and will not be the last, extremely intelligent person to be seduced by a full-blown Eliza effect, attributing human traits to computers. Consider Hinton’s argument about large language models (LLMs): “People say, ‘It’s just glorified autocomplete’ . . . Now, let’s analyze that. Suppose you want to be really good at predicting the . . .” Read More ›
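The “glorified autocomplete” description refers to next-word prediction: a language model is trained to guess the most likely next word given the words so far. A toy bigram counter — a deliberately tiny sketch of that idea, nothing like an actual LLM in scale or architecture, with a made-up example corpus — shows the bare mechanism:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then "autocomplete" by choosing the most frequent successor.
# It has no notion of meaning, only co-occurrence counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" twice, vs. once each for "mat" and "fish"
```

An LLM does something analogous over billions of parameters and contexts far longer than one word, which is exactly why its output can sound fluent without the model meaning anything by it.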


Let’s Call AI What It Really Is: Faux Intelligence

Gary Smith at Salon: While GPT-3 can string words together in convincing ways, it has no idea what the words mean

Pomona College business and investments prof Gary Smith warns Salon readers not to be too gullible about what human-sounding chatbots really amount to. He notes that in the 1960s, a pioneer chatbot called ELIZA convinced many psychiatric patients that they were interacting with a real psychiatrist. The machine simply repeated their statements back to them as questions, a popular psychiatric technique at the time because it generated more and more discussion from the patient. The patients’ belief that they were interacting with a human being came to be called the Eliza effect. Has much changed? If you play around with GPT-3 (and I encourage you to do so), your initial response is likely to be astonishment — a full-blown Eliza effect. Read More ›
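The repeat-it-back-as-a-question trick is simple enough to fit in a few lines. Here is a minimal illustrative sketch in that spirit — not Weizenbaum’s original script, and the word table and phrasing are assumptions for illustration — that swaps first- and second-person words and turns the statement into a question:

```python
# ELIZA-style reflection sketch (illustrative only): swap person words,
# then hand the patient's own statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Mirror a statement back as a therapist-style question."""
    words = statement.lower().rstrip(".!?").split()
    swapped = [REFLECTIONS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job"))
# -> Why do you say you are unhappy with your job?
```

Nothing here understands anything; the program is pure string substitution, which is precisely Smith’s point about mistaking fluent output for comprehension.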


CAPTCHA: How Fooling Machines Is Different From Fooling Humans

Automated censorship is intended to protect against a tidal wave of spam but it could certainly have other uses…

Readers of Mind Matters News have likely heard of the iconic Turing test. Computer pioneer Alan Turing famously invented a test to determine whether a program could pass as a human. The gist is, if a program can fool human testers into believing it is a human, then the program is intelligent. Not everyone is convinced. Thing is, it doesn’t take much to fool us humans! Take Eliza, a program of only a few hundred lines, written in the 1960s, which fooled many people into believing it was a real human therapist. But what if we flip the Turing test on its head? Instead of a test where a program tries to pass as human, we use a test . . . Read More ›
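The flipped test is the challenge-response pattern behind CAPTCHA: the machine poses a task meant to be easy for humans but hard for simple bots, and the human must pass it. A minimal sketch of that pattern — a hypothetical words-for-numbers challenge invented here for illustration, far weaker than real image- or behavior-based CAPTCHAs — might look like:

```python
import random

# Reverse-Turing sketch: the server sets a challenge and verifies the answer.
# A naive bot that just echoes digits fails, because the expected answer
# is the English word for the sum, not the number itself.
WORDS = ["zero", "one", "two", "three", "four", "five", "six", "seven",
         "eight", "nine", "ten", "eleven", "twelve", "thirteen",
         "fourteen", "fifteen", "sixteen", "seventeen", "eighteen"]

def make_challenge() -> tuple[str, str]:
    """Return (prompt shown to the user, expected answer)."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    prompt = f"What is {a} plus {b}? Answer in words, e.g. 'seven'."
    return prompt, WORDS[a + b]

def check(answer: str, expected: str) -> bool:
    """Accept the answer regardless of case or surrounding spaces."""
    return answer.strip().lower() == expected

prompt, expected = make_challenge()
# A human reads `prompt` and types an answer; the server calls check().
```

The design point is asymmetry: generating and grading the challenge is trivial for the machine, while solving it is meant to require abilities the attacker’s software lacks.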


Could Better Software Make Chatbot LaMDA a Person?

John Stonestreet looks at the materialist philosophy that underlies the conviction that a well-designed AI chatbot can become a person

On Friday, John Stonestreet, president of the Colson Center for Christian Worldview, offered a Breakpoint commentary on the recent episode at Google in which software engineer Blake Lemoine claimed that the chatbot LaMDA had become a real person. Google, of course, denied that and placed him on administrative leave. The topic is complicated by three different factors: First, at various stages, Lemoine probably was talking to human beings (colleagues standing in for LaMDA during its development, as is the custom). Second, much of the interaction with the chatbot was edited for coherence before a draft was publicly available. Third, and more basically, a chatbot produces responses by sifting through many millions of human interactions in fractions of a second . . . Read More ›


Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”

Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake “LaMDA is a person!” Lemoine. He has forced many people to help us clarify what AI — and in particular, a large language program — is and is not. For that, we should thank him. First, LaMDA is not conscious, not sentient, not a self. And second, it’s not even a new idea, just a much bigger and more sophisticated version of a 1960s idea. Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel’s seminal question, “What is it like to be a bat?” Nagel meant that, if an entity is conscious or sentient, there must be something that it “is like” to be that entity. Read More ›