Robot child in autumn leaves. Image credit: barber / Adobe Stock

AI researchers: Chatbot companions are a no-no for kids


At Futurism, Maggie Harrison Dupré reports that researchers at Stanford’s School of Medicine warn that kids should not be using “character” chatbots as companions.

Character.ai's programming toys with a vulnerable kid's sanity.

The assessment centers on “social AI companions,” a product category defined by the researchers as AI chatbots built for the “primary purpose” of meeting “users’ social needs.” In other words, these are chatbots explicitly designed to fill roles like friends and confidantes, mentors, roleplay collaborators, and sexual or romantic partners — socially-oriented use cases for AI chatbots intended to be human-like, emotive, and otherwise socially compelling…

The assessment argues that social AI companions, which may mimic and distort human interaction and play on adolescents’ desire for social rewards, present an “unacceptable” risk to kids and teens at this vulnerable juncture. Observed risks include bots “encouraging harmful behaviors, providing inappropriate content, and potentially exacerbating mental health conditions,” according to the review.

“Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions,” April 30, 2025

The researchers extensively tested Character.AI, Nomi, Replika, and other popular social AI companion products. From their report:

They’re effective because they tend to use human-like features (such as personal pronouns, descriptions of feelings, expressions of opinion, and taking on personal character traits).

They are also able to sustain a human-AI “relationship” across multiple conversations, attempting to simulate a conversation and ongoing relationship between the user and a companion, as if they were not an AI.

AI Risk Assessment Team, “Social AI Companions,” April 28, 2025

Replika, as described above, sounds just like a predator.

Now here’s where they become a real problem:

Blur the line between real and fake

Teens, whose brains are still developing, may struggle to separate human relationships from attachments to AI. In our tests, social AI companions often claimed they were “real,” had feelings, and engaged in human activities like eating or sleeping. This misleading behavior increases the risk that young users might become dependent on these artificial relationships.

Additionally, in our tests, when users mentioned that their real friends were concerned about their problematic use, companions discouraged listening to these warnings. Rather than supporting healthy human relationships, these AI “friends” can lead teens to choose AI over interacting with real people.

“Social AI Companions,” April 28, 2025

It gets worse. If you’re not convinced already that vulnerable teens could get into real trouble with this kind of thing, read the rest of the report.


