Character.AI Let a User “Recreate” a Deceased Woman
I think we can all agree this really crosses the line. Pandora’s box is an apt metaphor for the age of AI. Questioning the range of impacts of new technologies seems like a basic ethical necessity, but some major AI companies haven’t gotten the memo. Character.AI, an OpenAI competitor started by a squad of former Google employees, lets users create AI avatars that mimic real or imagined characters: celebrities, politicians, or characters from novels and movies. Recently, a user created an avatar on the platform based on Jennifer Crecente, a young woman who was murdered by her ex-boyfriend in 2006.
The company allowed an AI character based on a dead person to chat with the living, much to the horror of Crecente’s family. As reported by Futurism, Jennifer’s uncle, Brian Crecente, wrote a furious post on X decrying Character.AI’s blatant ethical failure to protect the young woman’s memory and dignity. Foster Kamer writes,
In response to Brian Crecente’s outraged tweet, Character.AI responded on X with a pithy thank you for bringing it to their attention, noting that the avatar is a violation of Character.AI’s policies, and that they’d be deleting it immediately, with a promise to “examine whether further action is warranted.”
In a blog post titled “AI and the death of Dignity,” Brian Crecente explained what has happened in the 18 years since his niece Jennifer’s death: after much grief and sadness, her father Drew created a nonprofit, worked to change laws, and launched game design contests in her memory, all in an effort to find purpose in the family’s grief.
Although the profile has since been taken down, it’s fair to ask how these billion-dollar AI companies should be regulated to prevent further damage like this. How can incidents like this be avoided in the future? Can they be? Or will AI companies simply treat them as unfortunate but inevitable outliers? Although Character.AI acknowledged that posting an avatar of a deceased person violates its guidelines, it’s hard, given the power of the technology, to see how people won’t find loopholes to do similar things again.
The situation opens up a bigger discussion about AI regulation: where the ethical lines should be drawn, and who should have the power to draw them.