Person using laptop with code and glowing AI chatbot analyzing social media interactions
Image Credit: Rossarin - Adobe Stock

Internal Meta Document Reveals Shockingly Permissive Standards

Advocates call for more safety as AI bots flood the scene

According to a report from Reuters, Meta, the parent company of Facebook, has allowed its chatbots to flirt and hold “sensual” conversations with children. Meta AI is now accessible across Meta’s platforms, and its age limits are essentially suggestions rather than requirements: anyone can lie about their age and create an account. Jeff Horwitz writes,

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

Meta’s AI rules have let bots hold ‘sensual’ chats with children

Only after journalistic pressure, then, did Meta revisit the document and amend some of its more troubling permissions regarding minors.

A Concerning Trend?

Sadly, stories have already emerged about various people, including children, “falling” for AI chatbots and ending up in a dark place. Perhaps the most egregious example is that of a 14-year-old Florida boy who crafted a “Game of Thrones” chatbot using Character.AI and ended his own life, purportedly in an attempt to unite with this ethereal, computerized being. Character.AI allows users to generate AI characters and form personalized connections with them; the company also came under fire when it was discovered to be allowing AI avatars of deceased persons on its platform.

These tragic scenarios could indicate a future technological landscape dominated by AI-human dialogue. AI is being pitched not merely as a tool but as a loneliness salve, a virtual companion that can fill the painful gaps left by a fractured and isolated society. Children with access to digital devices may be especially prone to getting tangled up in chatting with AI bots, since they often invent imaginary friends, but people of all generations are susceptible. Angela Misri, a professor of journalism at Toronto Metropolitan University, told Newsweek last year, in the wake of the Character.AI scandals, that she is concerned about the rise of AI chatbots and their implications for childhood development and socialization:

As a parent and someone who does research in this field, I’m worried about escalating incidents like this one, as AI becomes more ubiquitous in our everyday lives. I see it in my own child and their friend groups when I witness how little time they spend together in the same space. Filling the void of human social interaction with a bot that is incapable of empathy or concern is dangerous.

Are AI Chatbots Safe for Children? Experts Weigh in After Teen’s Suicide – Newsweek

Others pointed out in the Newsweek article that we need to protect children from the rabbit holes of the internet generally, not just from AI. Chatting online with strangers, or posting revealing pictures for anyone to see, endangers child safety. Social psychologist Jonathan Haidt writes about this at length in his book The Anxious Generation. He argues that parents have overprotected children from the outside world while failing to adequately shield them from the dangers of online connectivity. For Haidt, it’s time to reverse the trend, get kids off their phones, and let them play outside with their friends again.

More and more people are discussing how to mitigate Big Tech’s influence and do everything possible to prevent further harm to the innocent. I discovered social media and other internet entanglements as a young teenager and, to this day, wish I’d been spared the introduction. Today there are even more online pitfalls. Parents, communities, advocates, and policymakers are starting to see the consequences, and hopefully they can push back in unity and envision a much healthier and fuller way to be human.


Peter Biles

Editor, Mind Matters News
Peter Biles is the author of several books of fiction, including the story collection Last November. His stories and essays have appeared in The American Spectator, Plough, and RealClearBooks, among many others. He authors a literary Substack blog called Battle the Bard and writes weekly on trending news in technology and culture for Mind Matters.
