
Tag: ELIZA (chatbot)


Prof: How We Know Google’s Chatbot LaMDA Is Not a “Self”
Carissa Véliz, an Oxford philosophy prof who studies AI, explains where Google engineer Blake Lemoine is getting things mixed up

Say what you want about Blake "LaMDA is a person!" Lemoine: he has forced many of us to clarify what AI, and in particular a large language program, is and is not. For that, we should thank him. First, LaMDA is not conscious, not sentient, not a self. Second, it is not even a new idea, just a much bigger and more sophisticated version of a 1960s one. Oxford philosophy prof Carissa Véliz, author of Privacy Is Power (2021), reminds us of philosopher Thomas Nagel's seminal question: What is it like to be a bat? Nagel meant that, if an entity is conscious or sentient, there must be something that it "is like" to be that entity. Read More ›

Here’s a Terrific Video Featuring Myth of AI Author Erik Larson
Larson, an AI professional, explains why the popular noise we hear about AI "taking over" is hype

I've been reviewing philosopher and programmer Erik Larson's The Myth of Artificial Intelligence. See my earlier posts, here, here, here, here, here, and here. Here's a terrific video interview that Larson did with Academic Influence. It was recorded before his book was released and gives a succinct summary of it. It's short (15 minutes, compared to the hour-long interview with Brookings described in my previous post). For the full video of this interview with Larson, along with a transcript, go to the Academic Influence website here. For a nice period-piece video on Joseph Weizenbaum's ELIZA program, check out this YouTube video:

Artificial Intelligence Understands by Not Understanding
The secret to writing a program for a sympathetic chatbot is surprisingly simple…

I've been reviewing philosopher and programmer Erik Larson's The Myth of Artificial Intelligence. See my two earlier posts, here and here. On natural language processing, Larson amusingly retells the story of Joseph Weizenbaum's ELIZA program, in which the program, acting as a Rogerian therapist, simply mirrors back to the human what the human says. Carl Rogers, the psychologist, advocated a "non-directive" form of therapy: rather than telling the patient what to do, the therapist reflected back what the patient was saying, as a way of getting patients to solve their own problems. Much like Eugene Goostman, whom I've already mentioned in this series, ELIZA is a cheat, though to its inventor Weizenbaum's credit, he recognized from the get-go that it was a cheat. Read More ›
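The mirroring trick described above can be sketched in a few lines of Python. This is an illustrative toy, not Weizenbaum's original ELIZA script: the pronoun table and the question template are stand-ins chosen for the example. The point is how little "understanding" is involved; the program only swaps first- and second-person words and echoes the statement back.

```python
# A minimal sketch of ELIZA-style "reflection". The program does not
# understand the input; it swaps pronouns and mirrors the statement
# back as a non-directive question, Rogerian-therapist style.
# The word table and template below are illustrative, not the real ELIZA.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    """Swap first- and second-person words in the user's statement."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Mirror the statement back as a question, without understanding it."""
    return f"Why do you say that {reflect(statement)}?"

print(respond("I am unhappy with my job."))
# → Why do you say that you are unhappy with your job?
```

A lookup table and an echo are enough to feel "sympathetic" to many users, which is exactly why Weizenbaum called it a cheat.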