Can AI Ever Be Sentient? A Conversation with Blake Lemoine
AI can mimic sentience, but can it ever be sentient? On this episode, we return to our conversation with former Google engineer Blake Lemoine. Host Robert J. Marks has a lively back-and-forth with Lemoine, who made national headlines when, while still at Google, he claimed that the company’s AI software, dubbed LaMDA, might be sentient. Lemoine recounts his experience at Google and explains why he thinks LaMDA might be more powerful than Google lets on. Marks, for his part, maintains that while AI might be able to mimic aspects of human intelligence, it can never understand what it’s doing or be creative and conscious in the ways humans can. The pair also discuss how to define consciousness and explore ways that intelligence can be tested in machines. Don’t miss this engaging conversation about the progress and the limits of artificial intelligence.
Additional Resources
- Blake Lemoine AI panel discussion at COSM: https://youtu.be/cnL46-3og6E?si=sjcywcl-63PYpLNw
- The Turing test is proposed in: “Computing Machinery and Intelligence” by A. M. Turing
- Full audio performance of the conversation between Blake Lemoine and LaMDA
- Is LaMDA Sentient? – an Interview
- Software Description: LaMDA: Language Models for Dialog Applications
- Google Keynote (Google I/O ‘22) Video on LaMDA 2
- Thomas Ray’s Tierra: The Character of Adaptation
- Article “Join us in the AI Test Kitchen” (test LaMDA 2)