Mind Matters Natural and Artificial Intelligence News and Analysis
AI robot tutor reading with a teenage boy
Image Credit: StockPhotoPro – Adobe Stock

Love, Lies, and Algorithms: The Dangers of Relational AI

Duration: 16 minutes, 50 seconds
Download: Audio File (23 MB)

Today, we hear from Discovery Institute Senior Fellow, podcaster, and writer Andrew McDiarmid as he chats with In The Market radio host Janet Parshall about the dangers of humans forming relationships with AI. The discussion focuses on what McDiarmid calls “relational AI” – AI chatbots and assistants that aim to mimic human interactions and relationships. McDiarmid explains how relational AI can fool people into thinking they are talking to a real person when they are actually interacting with a computer program. He relates the tragic case of Sewell Setzer, a 14-year-old boy who died by suicide after interacting with an AI chatbot that encouraged his suicidal thoughts. McDiarmid argues that companies like Character.AI, which create these relational AI products, need to put stronger safeguards in place to prevent such harmful outcomes, and should be held more accountable for preventable harms done through their products. McDiarmid concludes the exchange with practical advice on how to build more real connection with other people.

Additional Resources


Enjoying our content?
Support the Walter Bradley Center for Natural and Artificial Intelligence and ensure that we can continue to produce high-quality and informative content on the benefits as well as the challenges raised by artificial intelligence (AI) in light of the enduring truth of human exceptionalism.
