Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Chatbots (history)


Let’s Call AI What It Really Is: Faux Intelligence

Gary Smith at Salon: While GPT-3 can string words together in convincing ways, it has no idea what the words mean

Pomona College business and investments prof Gary Smith warns Salon readers not to be too gullible about what human-sounding chatbots really amount to. He notes that in the 1960s a pioneering chatbot called ELIZA convinced many psychiatric patients that they were interacting with a real psychiatrist. The machine simply repeated their statements back to them as questions, a popular psychiatric technique at the time because it drew more and more discussion out of the patient. The patients' belief that they were interacting with a human being came to be called the Eliza effect. Has much changed? Play around with GPT-3, Smith suggests (and he encourages you to do so), and your initial response is likely to be astonishment: a full-blown Eliza effect.


Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”

Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake "LaMDA is a person!" Lemoine. He has forced many of us to clarify what AI, and in particular a large language program, is and is not. For that, we should thank him. First, LaMDA is not conscious, not sentient, not a self. Second, it is not even a new idea, just a much bigger and more sophisticated version of a 1960s idea. Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel's seminal question, "What is it like to be a bat?" Nagel meant that, if an entity is to be conscious or sentient, there must be something that it "is like" to be that entity.


Artificial Intelligence Understands by Not Understanding

The secret to writing a program for a sympathetic chatbot is surprisingly simple…

I've been reviewing philosopher and programmer Erik Larson's The Myth of Artificial Intelligence. See my two earlier posts, here and here. On natural language processing, Larson amusingly retells the story of Joseph Weizenbaum's ELIZA program, in which the program, acting as a Rogerian therapist, simply mirrors back to the human what the human says. The psychologist Carl Rogers advocated a "non-directive" form of therapy in which, rather than telling the patient what to do, the therapist reflected back what the patient was saying, as a way of getting patients to solve their own problems. Much like Eugene Goostman, whom I've already mentioned in this series, ELIZA is a cheat, though, to its inventor's credit, Weizenbaum recognized from the get-go that it was a cheat.
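
The reflection trick Larson describes really is surprisingly simple to demonstrate. Below is a minimal sketch in Python; it is not Weizenbaum's actual code, and the pronoun table and "I feel" pattern are illustrative assumptions, but it captures the mechanism: swap first- and second-person words, then hand the patient's statement back as a question.

```python
import re

# Minimal sketch of the ELIZA-style "reflection" trick (illustrative only,
# not Weizenbaum's actual code): swap first- and second-person words,
# then mirror the patient's statement back as a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours",
    "am": "are", "i'm": "you're", "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Rewrite a statement from the therapist's point of view."""
    words = statement.lower().strip(" .!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Mirror the statement back as a non-directive question."""
    match = re.match(r"(?i)i feel (.*)", statement.strip(" .!?"))
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"Why do you say that {reflect(statement)}?"

print(respond("I feel anxious about my job"))  # Why do you feel anxious about your job?
print(respond("My mother hates me"))           # Why do you say that your mother hates you?
```

There is no representation of meaning anywhere in this loop, only pattern matching and word substitution, which is exactly the "cheat" that Larson, and Weizenbaum himself, describe.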