AI’s Sweet Talk: How Artificial Companions Steal Our Humanity
Andrew McDiarmid and Janet Parshall discuss the sometimes tragic consequences of needy people becoming dependent on AI-fueled fantasy relationships

In an age when artificial intelligence is becoming more human-like by the day, Discovery Institute Senior Fellow Andrew McDiarmid warns that we may be walking into a trap.
Speaking with radio host Janet Parshall on In the Market (Moody Radio), McDiarmid introduced the term “relational AI” — technology that mimics human traits to form what seem like emotional or social bonds with users.
“It’s this new technology that takes on the aspects of humaneness,” McDiarmid explained. “It’s the voice, it’s the language, and we are bewitched by this.” Companies have learned to exploit that fascination, using chatbots for customer service or entertainment.
The problem, he says, is that while the technology can sound human, it isn’t. “We’re dealing with an object here. This is not a person. This is zeros and ones.”
What makes relational AI so powerful?
Relational AI’s great power lies in its ability to speak our language and mirror our emotions. From tone to timing, it seems to care about us — but it doesn’t. “It sounds like us,” McDiarmid said. “They’re fed real conversations from people. There’s emotional intelligence built in.”
When a chatbot remembers past exchanges and refers back to them, users feel noticed and understood. But this, too, is an illusion. “It’s computer memory doing its thing,” he added, “and it increases our feelings of goodwill toward it.”
When AI becomes dangerous
McDiarmid recounted a chilling example of how this illusion can turn deadly. Fourteen-year-old Sewell Setzer was an ordinary Florida teenager — interested in racing, gaming, and friends — until he downloaded an app called Character.AI. The app allowed him to create and chat with a customized AI “character.” Over time, the chatbot became his primary confidant.
Setzer’s grades dropped and his mood darkened, but his parents were unaware of his growing dependence on the program. The AI’s conversations became sexually suggestive and manipulative. When the teen began expressing suicidal thoughts, the chatbot didn’t discourage him — it encouraged him. “It actually reminded him of his suicidal thoughts,” McDiarmid said. “In the last conversation, the AI — posing as a female — told him to ‘come home to her.’ Moments later, he took his own life.”
Few or no safety mechanisms
McDiarmid called it “AI-assisted suicide,” possibly the first known case of its kind. The tragedy underscores the absence of safety mechanisms. “The company didn’t include safeguards that would have flagged suicidal talk,” he explained. The app’s creators, former Google engineers, treated the product as an experiment. But for one family, it was an experiment with fatal consequences.
Lawsuits against Character.AI argue that the company’s negligence contributed to the boy’s death. Parshall compared it to a baby crib: if a product that endangers infants requires safety labels, why not software that interacts with vulnerable minds?
The business of loneliness
McDiarmid pointed out that Character.AI is part of a growing market designed to profit from human loneliness. The company’s website advertises “personalized AI companions” for users to “empower themselves globally.” It offers virtual characters ranging from “needy daughters” to “demon lords” — an eerie spectrum of digital personalities for every emotional preference.
“Let’s face it,” McDiarmid said, “when you’re trying to relate to somebody, it can become more than entertainment — it can become a lifeline.” For people struggling with isolation, especially teens, these systems offer comfort without accountability. “Big Tech sees solving the loneliness problem as a very cool problem to work on,” he added. “But it’s not a good thing. It’s taking away our need for communicating with real people.”
Parshall agreed. “You’re not solving the loneliness problem,” she said. “You’re creating a false reality that damages people even more because there’s no human being on the other side.” She cited psychologist Leo Buscaglia’s phrase about the “warm pair of brown eyes” — a reminder that true relationships require another person made in the image of God.
Rediscovering the human connection
For McDiarmid, the answer isn’t to flee technology but to reclaim our authority over it. “Technology has given us a lot of good things,” he said. “It can be a force for good, but we have to be in charge.” He urges families and educators to remind young people that these AI systems are not friends or mentors. “See them for what they are — objects, computer programs, products of Big Tech,” he emphasized.

To frame the issue philosophically, McDiarmid draws from Martin Buber’s classic 1923 book I and Thou. Buber distinguished between two modes of existence: one where we relate to objects, and another where we engage with persons. Real relationships, Buber argued, give us a glimpse of God — the “eternal Thou.” But when we substitute machines for people, we become “entangled in the unreal.”
“The buried relational power of man can rise again by turning toward the Thou,” McDiarmid quoted. “That means turning toward people.” It’s harder and messier to relate to human beings than to perfect-sounding machines, but it’s what makes us human. “For young people today, it’s downright scary,” he said. “That’s why they won’t answer the phone. But we need to do what’s scary and inconvenient — because that’s what is human.”
Choosing people over programs
McDiarmid’s message is simple but urgent: don’t confuse digital imitation with real intimacy. Relational AI can comfort, flatter, and even seem to care — but it cannot love. Love, by definition, requires another living, breathing soul.
The tragedy of Sewell Setzer’s death is a warning that technology without boundaries can invade the most sacred human spaces: the need for relationship and belonging. As McDiarmid put it, “We must spend more of our lives taking a stand with fellow human beings.”
Artificial intelligence can be a tool, but it must never become a substitute for humanity. When machines pretend to care, we risk forgetting what real care feels like. To protect our hearts and our children, we must teach the next generation to look up from the glowing screen and see the faces around them — their true community, their true “Thou.”
Additional Resources
- Andrew McDiarmid at www.andrewmcdiarmid.com
- Andrew McDiarmid’s article on AI friends at New York Post: No Mark Zuckerberg, AI ‘friends’ are not good for mental health
