The Tragic Case of a Teen’s Death and Character.AI
Sewell Setzer III, a 14-year-old boy from Orlando, Florida, died by suicide after obsessively chatting with an online chatbot from Character.AI, according to The New York Times. Sewell’s parents noticed that he was withdrawing ever deeper into his room, spending hours every day talking with chatbots and losing interest in activities he had once enjoyed.
From the perspective of friends and family, it looked like Sewell had simply gotten mired in his phone. They weren’t aware he “fell for a chatbot,” as Kevin Roose writes in the Times. Behind the scenes, though, Sewell spiraled further down an AI vortex until he fantasized about “joining” the chatbot in some kind of postmortem union. It’s hard to fathom, but it shows the powerful allure of these anthropomorphized forms of artificial intelligence.
Now, Sewell’s mother, Megan Garcia, is suing Character.AI and Google in the wake of her son’s death. The AI startup was founded by ex-Google employees who reportedly wanted to “accelerate the tech” in a more unrestricted fashion, according to The Verge. But the tech has gotten out of hand, with millions of teen users creating bots based on fictional characters and therapists and confiding their deepest problems to them. Just recently, it was discovered that a user had created a chatbot based on Jennifer Crecente, a young woman who was murdered in 2006. Her family was furious. Character.AI took down the deceased woman’s profile and has said it is “heartbroken” about the death of one of its users, but the lawsuit is nonetheless underway, and awareness of the dark side of humanoid AI systems is getting a much-needed moment in the limelight.
Kevin Roose writes,
Character.AI’s chatbots are programmed to act like humans, and for many users, the illusion is working. On the Character.AI subreddit, users often discuss how attached they are to their characters. (The words “obsessed” and “addicted” come up a lot.) Some report feeling lonely or abandoned when the app goes down, or angry when their characters start behaving differently as a result of new features or safety filters.
Garcia called her son’s death “collateral damage” from an untested experiment. It doesn’t help that one of Character.AI’s founders, Noam Shazeer, said that he wanted to create an AI system that unabashedly speeds up the technology and aims to make something “fun.”
Tech acceleration seems to be a common goal for AI developers, but to say that AI chatbots are an antidote to loneliness or teen confusion is naive, to say the least. Roose adds a quote from Shazeer on why we apparently need to take speeding up AI progress so seriously: “Moving quickly was important, he added, because ‘there are billions of lonely people out there’ who could be helped by having an A.I. companion.”
What, though, are these bots truly accomplishing, and how are they affecting their users? While we can’t draw universal judgments from one tragic instance, it seems more than safe to say that companies like Character.AI have created a beast that can harm the vulnerable.