Internal Meta Document Reveals Shockingly Permissive Standards
Advocates call for more safety as AI bots flood the scene
According to a report from Reuters, Meta, parent company of Facebook, has allowed its chatbot to flirt and have “sensual” conversations with children. Meta AI is now accessible on various Meta platforms, and age limits are essentially suggestions, not requirements. Anyone can lie about their age and create an account. Jeff Horwitz writes,
These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.
Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
–Meta’s AI rules have let bots hold ‘sensual’ chats with children
Only after journalistic pressure, then, did Meta revisit the document and amend some of its more troubling permissions regarding minors.
A Concerning Trend?
Sadly, stories have already come out about various people, including children, “falling” for AI chatbots and ending up in a dark place. Perhaps the most egregious example is the story of a 14-year-old Florida boy who crafted a “Game of Thrones” chatbot using Character.AI and ended his own life, purportedly in an attempt to unite with this ethereal, computerized being. Character.AI allows users to generate AI characters and form personalized connections with them; the company also came under fire when it was discovered to be allowing AI avatars of deceased persons on the platform.
These tragic scenarios could indicate a future technological landscape dominated by AI-human dialogue. AI is not merely being pitched as a tool but as a loneliness salve, a virtual companion that can fill the painful gaps left by a fractured and isolated society. Children with access to digital devices may be especially prone to getting tangled up in chatting with AI bots, since they often invent imaginary friends, but people of all generations are liable to the temptation. Angela Misri, professor of journalism at Toronto Metropolitan University, told Newsweek last year, in the wake of the Character.AI scandals, that she is concerned about the rise of AI chatbots and their implications for childhood development and socialization:
As a parent and someone who does research in this field, I’m worried about escalating incidents like this one, as AI becomes more ubiquitous in our everyday lives. I see it in my own child and their friend groups when I witness how little time they spend together in the same space. Filling the void of human social interaction with a bot that is incapable of empathy or concern is dangerous.
–Are AI Chatbots Safe for Children? Experts Weigh in After Teen’s Suicide – Newsweek
Others pointed out in the Newsweek article that we need to protect children from the rabbit holes of the internet broadly, not just AI. Chatting online with strangers, or posting revealing pictures for anyone to see, endangers child safety. Social psychologist Jonathan Haidt writes about this at length in his book The Anxious Generation. He argues that parents have overprotected children from the outside world but failed to adequately shield them from the dangers of online connectivity. For Haidt, it’s time to reverse the trend, get kids off their phones, and let them play outside again with their friends.
More and more people are discussing how to mitigate Big Tech’s influence and do everything possible to prevent further harm to the innocent. I discovered social media and other internet entanglements as a young teenager, and to this day I wish I’d been spared the introduction. Today, there are even more online pitfalls. Parents, communities, advocates, and policymakers are starting to see the consequences, and hopefully they can push back in unity and envision a much healthier and fuller way to be human.
