
Social Media Content Moderator Sues TikTok for PTSD

Social media moderators protect users from graphic content, but who protects the moderators?

A social media content moderator is suing TikTok, the popular video app, over psychological trauma she says she developed during 12-hour shifts spent moderating an endless stream of graphic videos.

Candie Frazier works for Telus International, a Canadian contracting firm that provides moderation services to social media apps like TikTok. Frazier filed a complaint in the U.S. District Court for the Central District of California in December, alleging that TikTok and its parent company, ByteDance, do not provide enough support for the psychological wellbeing of their contracted moderators, whose job is to remove violent, graphic, and otherwise inappropriate content from the platform.

TikTok’s popularity exploded in the aftermath of pandemic lockdowns, especially among millennials and Generation Z. As of September 2021, TikTok reported 1 billion monthly users.

In her complaint, Frazier explains that moderators are required to watch “three to ten videos at the same time,” with only 25 seconds to review each one. The complaint says that the videos include violent content such as “animal cruelty, torture, suicides, child abuse, murder, beheadings, and other graphic content.”

As a result, Frazier has developed symptoms of PTSD, including anxiety, depression, trouble sleeping, and “horrific nightmares.” The complaint reads, “She often lays awake at night trying to go to sleep, replaying videos that she has seen in her mind. She has severe and debilitating panic attacks.”

According to Frazier, moderators are allowed only one 15-minute break in the first four hours of the workday, with an additional 15-minute break every two hours after that. Additionally, Frazier alleges that ByteDance “heavily punishes” any additional time taken away from video moderation, despite the emotional disturbance many workers experience throughout the day.

Hilary McQuaide, a TikTok spokesperson, told The Verge:

Our Safety team partners with third party firms on the critical work of helping to protect the TikTok platform and community, and we continue to expand on a range of wellness services so that moderators feel supported mentally and emotionally.

James Vincent, “TikTok sued by former content moderator for allegedly failing to protect her mental health” at The Verge

The lawsuit requests that TikTok provide more frequent breaks as well as more visual and audio tools (such as blurring and muting options) for moderators to shield themselves from the full brunt of what they’re viewing.

Psychological Trauma Is Not New To Content Moderation

TikTok is not unique among social media platforms for these issues. Moderators at Facebook, Google, and YouTube have reported similar problems. In 2020, content moderators were awarded $52 million in a settlement against Facebook for psychological trauma.

Casey Newton at The Verge has been collecting the stories of social media content moderators over recent years, sharing their experiences and bringing awareness to the dark underbelly of social media operations. In one chilling article, Newton explained that there is a fifty-fifty chance for moderators at Facebook to develop mental health issues as a result of their work.

In a 2019 exposé titled “The Trauma Floor,” Newton documented the panic attacks, anxiety, and depression experienced by these workers. The lack of support and empathy from leadership has created a toxic work environment, in which many employees turn to dark humor, alcohol, marijuana, and even sex during working hours in order to cope with the violence, abuse, and hate they regularly review.

According to these former moderators, Google, YouTube, and Facebook were not forthcoming during the application and job training processes about how much disturbing content new hires would be moderating on a regular basis.

“You always see death, every single day,” one former content moderator for Facebook told Newton in a short YouTube documentary (see below). “You see pain and suffering. And it just makes you angry because they’re not doing anything. The stuff that does get deleted, it winds up back there anyway.”

Is AI The Solution?

If the human psyche is too fragile to handle the volume of graphic content being posted to the internet on a daily basis, what is the solution? A Wild West-type internet in which violent and graphic photos and videos circulate freely? Or, could artificial intelligence effectively replace these workers?

Social media apps have begun using more artificial intelligence algorithms to automatically remove inappropriate content without human supervision. The technology, however, is not perfect, so humans must still step in where AI fails.

Facebook’s use of AI to moderate its platforms has come in for scrutiny in the past, with critics noting that artificial intelligence lacks a human’s capacity to judge the context of a lot of online communication. Especially with topics like misinformation, bullying, and harassment, it can be near impossible for a computer to know what it’s looking at.

Facebook’s Chris Palow, a software engineer in the company’s interaction integrity team, agreed that AI had its limits, but told reporters that the technology could still play a role in removing unwanted content. “The system is about marrying AI and human reviewers to make less total mistakes,” said Palow. “The AI is never going to be perfect.” 

When asked what percentage of posts the company’s machine learning systems classify incorrectly, Palow didn’t give a direct answer, but noted that Facebook only lets automated systems work without human supervision when they are as accurate as human reviewers. “The bar for automated action is very high,” he said. Nevertheless, Facebook is steadily adding more AI to the moderation mix. 

James Vincent, “Facebook is now using AI to sort content for quicker moderation” at The Verge
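
For illustration only, here is a minimal sketch (in Python, with hypothetical names, thresholds, and data structures that are not Facebook’s or TikTok’s actual systems) of the kind of hybrid routing Palow describes: a classifier’s confidence score decides whether a post is removed automatically or queued for a human reviewer.

# Hypothetical sketch of a hybrid AI + human-review moderation pipeline.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98   # act automatically only when the model is near-certain
HUMAN_REVIEW_THRESHOLD = 0.50  # anything ambiguous goes to a human moderator

@dataclass
class Post:
    post_id: str
    content: str

def classify(post: Post) -> float:
    """Stand-in for a trained content classifier.

    Returns a score in [0, 1] estimating how likely the post is to
    violate policy. A real system would call a machine-learning model.
    """
    return 0.0  # placeholder score

def route(post: Post) -> str:
    """Decide what happens to a post based on the classifier's confidence."""
    score = classify(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove_automatically"      # high confidence: no human sees it
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"    # ambiguous: a moderator decides
    return "leave_up"                      # low confidence: post stays

if __name__ == "__main__":
    print(route(Post(post_id="123", content="example post")))

The trade-off such a design embodies is the one Palow points to: the higher the bar for automatic action, the fewer mistakes the machine makes on its own, but everything below that bar still lands in a human moderator’s queue.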

In the meantime, the internet remains as free of graphic material as it can be in an imperfect world, due to the work of people, not machines. “But the risk to human lives is real,” writes Newton, “and it’s not going away.”


Further reading:

AI Is Not Ready to Moderate Content! In the face of COVID-19 quarantines for human moderators, some look to AI to keep the bad stuff off social media. Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. (Brendan Dixon)

Facebook Moderators Are Not Who We Think. Companies offer terrible working conditions partly because they think AI will just take over soon. And if that doesn’t – and perhaps can’t – happen, what’s the backup plan? Lawsuits?

Yes, There ARE Ghosts in the Machine. And one of them is you. You power AI whenever you prove your humanity to the CAPTCHA challenges overrunning the web. AI systems are not some alien brain evolving in our midst. (Brendan Dixon)


Caitlin Cory

Communications Coordinator, Discovery Institute
Caitlin Cory is the Communications Coordinator for Discovery Institute. She has previously written for Discovery on the topics of homelessness and mental illness, as well as on Big Tech and its impact on human freedom. Caitlin grew up in the Pacific Northwest, graduated from Liberty University in 2017 with her Bachelor's in Politics and Policy, and now lives in Maryland with her husband.
