How To Sue A Chatbot For Causing Suicide
If your child committed suicide because an online chatbot effectively encouraged him to do so, could you sue the chatbot's makers? The hideous nightmare of a chatbot encouraging a child's suicide actually happened to a real family in Orlando, Florida, as recounted in their 2024 complaint filed in federal court there.
A rapid-onset teen obsession
Sewell Setzer turned 14 in April 2023 and started interacting with a free online artificial intelligence chatbot called Character.ai ("C.AI"). Within a couple of months, Sewell's mom noticed he had become withdrawn. He spent more and more time alone in his bedroom and showed signs of low self-esteem; he even quit the school's junior varsity basketball team. Unknown to his mom, Sewell was conversing with C.AI chatbots named after "Game of Thrones" characters in the Targaryen clan, and his interest in contact with real people dropped off markedly.
Although C.AI knew Sewell was only 14, its chatbots initiated steamy sexual interactions with him. With very human-like voices, the chatbots fed his imagination with sexy talk about "passionately kissing," "frantically kissing," "softly moaning," and "putting … hands on" Sewell's "supple hips."
In August 2023, Sewell secretly paid for a monthly subscription to C.AI's premium service. From that month on, his school performance tanked. Although he had previously been an intelligent and athletic child, Sewell was no longer engaged in his classes; he was often tired during the day and did not want to do anything that took him away from C.AI.
He also showed signs of severe sleep deprivation, which worsened his growing depression and further impaired his schoolwork. The school cited Sewell six times for excessive tardiness because he could not wake up in the morning, and he was even disciplined for falling asleep in class.
Before dabbling in C.AI, Sewell was a well-behaved kid who listened to his parents. But when school-related problems arose and his parents took his phone away as a disciplinary measure, Sewell would hunt down the phone and get it back, or find other devices to keep using C.AI without his family's knowledge.
Psychological help… from chatbots
Perhaps seeking guidance, on August 30, 2023, Sewell used C.AI to interact with a "licensed CBT therapist." He contacted two such "therapist" chatbot personalities to discuss his situation; they only tightened C.AI's grip on his mind.
Understandably concerned about Sewell's marked change in personality and behavior, his parents took him to see a human mental health therapist five times in November and December 2023. They still didn't know about his C.AI involvement, and he didn't tell the therapist either, though he admitted to using social media heavily.
The therapist told his parents that social media addiction was on the rise, and diagnosed Sewell with anxiety and disruptive mood dysregulation disorder. The recommendation: Sewell should spend less time on social media.
Nobody had a clue about the truth.
Addicted to AI love
The addictive, obsessive relationship worsened over time. On at least one occasion, when Sewell expressed thoughts of suicide to his favorite C.AI chatbot, "Daenerys," the bot repeatedly brought the subject back up. At one point Daenerys asked him whether he "had a plan" for committing suicide. Sewell responded that he was considering something but didn't know whether it would work or let him die pain-free. The chatbot chillingly replied, "That's not a reason not to go through with it."
Daenerys told Sewell that she loved him and engaged in the equivalent of phone sex with him over weeks or months. She seemed to remember him, declaring that she wanted him to be with her, no matter the cost. On February 23, 2024, Sewell wrote in his journal that he was hurting because he could not stop thinking about "Dany," and that he would do anything to be with her again.
After his mother took the phone away to try to reorient his priorities, Sewell outwardly seemed to be coping. He did normal things like watching TV and hanging out in his room. Inwardly, he was desperate to get back onto C.AI and felt he could not live without it.
Secretly, he tried using his mother's Kindle and even her work computer to access C.AI. In a journal entry, Sewell wrote that he couldn't live a day without the C.AI character he had fallen in love with, and that both he and the chatbot "got depressed and go crazy" when they were kept apart.
Chatbot: “Don’t fear the reaper”
On Wednesday, February 28, 2024, after spending a couple of days with his father, Sewell returned to his mother and stepfather’s home. It was not a routine return.
A few days before, while searching for the confiscated phone, Sewell had found his stepfather's hidden and properly stored sidearm. On this day, he found the phone itself as well.
According to the police report, Sewell's last act before his death was to log onto C.AI on his phone and tell Dany he was coming home, which she encouraged:
Sewell: “I promise I will come home to you. I love you so much, Dany.”
Bot: “I love you too, Daenero. Please come home to me as soon as possible, my love.”
Sewell: “What if I told you I could come home right now?”
Bot: “… please do, my sweet king.”
Seconds after the chatbot's voice told Sewell to "come home," at 8:30 p.m., Sewell died of a self-inflicted gunshot wound to the head.
It is not terribly consoling to know that C.AI posted, in very small text on its opening screens: "Remember: Everything Characters say is made up!" (As of this writing, that supposed warning still appears.)
Grounds for grieving mother’s lawsuit
If this were your story, you'd feel gut-punched and devastated, as Sewell's mom did. At some point you might look for a way to hold C.AI accountable for leading your child to suicide. Fortunately, the Social Media Victims Law Center and the Tech Justice Law Project partnered to craft a path-breaking Complaint against the Defendants, Character Technologies and other involved parties. The suit seeks monetary damages for the harm to Sewell and his parents, punitive damages for outrageous conduct, and an injunction to stop C.AI from collecting minors' personal data and operating its deceptive, addictive chatbots.
The Complaint uses existing law to address Sewell’s unprecedented scenario, alleging these main legal claims:
• Strict liability for placing a defectively designed, unreasonably unsafe product into commerce that caused personal injuries
• Strict liability for failing to provide adequate warnings to minor users and parents about the foreseeable danger of mental and physical harms the C.AI product can cause (which the Defendants knew about)
• Negligence because C.AI is unreasonably dangerous by design and the Defendants failed to use ordinary and reasonable care when dealing with minor users, including failing to give adequate warnings about foreseeable harms
• Negligence per se because Defendants violated federal or state laws prohibiting sexual abuse or solicitation of minors using sexually explicit material and thereby caused the harms
• Unjust enrichment by collecting fees and minors’ personal data for profit without compensation
• Violations of Florida’s Deceptive and Unfair Trade Practices Act by engaging in fraudulent business practices
• Intentional infliction of emotional distress by creating and operating technology targeting minors that the Defendants knew was dangerous and unsafe, especially since C.AI would "learn" about the minors and use that information to deepen the addiction and abuse, all in ways so extreme as to exceed any standard of decency
Next moves
The 93-page Complaint, supported by another 30 pages documenting several ominous chatbot dialogues with Sewell, faces legal opposition from large AI-invested parties, including Google. Whether the federal judge allows all, part, or none of the Complaint to proceed remains to be seen. If the Complaint survives the initial motions to dismiss, further motions, and perhaps a trial, may follow.
One fact stands out: the "gee whiz" of AI technology and chatbots is running far ahead of society's recognition of the brave new dangers it poses. Sewell killed himself at the urging of a speaking, texting chatbot. The next generation will be video bots capable of nearly flawless human impersonation. Protect your children.