
Will AI Take Over Content Moderation?

While content moderators report psychological trauma, experts weigh in on whether artificial intelligence could remove humans from the equation

How do Facebook, Twitter, Instagram, Google, YouTube, and other platforms keep up with the millions upon millions of posts, comments, videos, and photos uploaded to their sites every day? It takes a partnership between artificial intelligence and human content moderators.

In recent years, however, content moderators have begun to reveal that their work is often traumatizing. Moderators for Facebook and TikTok have gone so far as to sue for the psychological harm they have experienced at work, where they regularly review images and videos depicting rape, murder, child trafficking, and other violent and graphic content that the rest of us never have to see.

Subjecting workers to violent and graphic imagery is an unsustainable way of keeping the internet free of such material. Will artificial intelligence ever reach the point where it can completely replace human content moderators?

According to Sarah T. Roberts, Assistant Professor in the Department of Information Studies at UCLA and the author of Behind the Screen: Content Moderation in the Shadows of Social Media, the answer is No:   

…if you talk to actual industry insiders who will speak candidly and who are actually working directly in this area, they will tell you that there is no time that they can envision taking humans entirely out of this loop. And I believe that to be true. …[W]e need human intelligence to train the machines right.

Zachary Mack, “Why AI can’t fix content moderation” at The Verge

Facebook boasted in 2019 that its AI moderation systems successfully spotted 99.9% of spam, 99.3% of terrorist propaganda, 99.2% of child nudity and sexual exploitation, 98.9% of violent and graphic content, and 96.8% of adult nudity and sexual activity.

However, when it came to content involving drugs, firearms, hate speech, and bullying and harassment, Facebook's AI fared far worse, with detection rates of 83.3%, 69.9%, 65.4%, and 14.1%, respectively.

Even with the content AI succeeds at spotting and removing, there are still questions of nuance. AI may be able to spot and remove nudity, but can it distinguish between lewd content to be removed and historical content to be preserved? (There's a vast difference, for instance, between pornography and a Michelangelo painting.)

The Transatlantic Working Group (TWG), a project of the Annenberg Public Policy Center of the University of Pennsylvania, published a report in 2020 explaining the limits of artificial intelligence when it comes to content moderation. 

“Automation in content moderation,” they wrote, “…is neither reliable nor effective.”

TWG explained that AI fails to grasp the importance of context, that its systems can inherit the biases of the programmers who design them, and that it cannot comprehend differences across languages and cultures.

Artificial intelligence has the potential to mistake an inappropriate post for an appropriate one, and vice versa. Humans are still needed in the moderation process to parse the nuanced and subjective images, videos, and posts that artificial intelligence might miss for lack of true human understanding.

Human content moderators are called upon to employ an array of very high-level cognitive functions and cultural competencies to make decisions about the appropriateness of such content for a site or platform. So in order to do this, they must:

be experts in matters of taste of the site’s presumed audience; 

have cultural knowledge about the location of origin of the platform and of the audience, both of which may be very far removed geographically and culturally from where the screening is actually taking place;

have linguistic competency in the language of the UGC (that may be a learned or second language for the content moderator him or herself);

be steeped in the relevant laws governing the site's location of origin and be experts in the user guidelines and other platform-level specifics concerning what is or what is not allowed;

all while being exposed constantly to the very material that mainstream sites disallow.

re:publica 2016 – Sarah T. Roberts: Behind the Screen, May 3, 2016 on YouTube

And Brendan Dixon further points out:

Machine Learning-based AI is far, far from ready to successfully moderate content in an age of single point failure risks. As we’ve often discussed here at Mind Matters News, for example, image recognition AI is easily fooled. Language recognition systems get confused by statements that children understand. Systems intended to assist health care personnel don’t grasp or convey the significance of information when, in an emergency room, for example, there may be little time to lose.

Brendan Dixon, “AI Is Not Ready to Moderate Content!” at Mind Matters News

Another notable concern is that artificial intelligence is unable to act as a whistleblower. Former Facebook employee Frances Haugen alerted the world this past summer that Facebook has been hiding blatant problems with its platforms. These problems include the proliferation of criminal activity on Facebook, declining teenage mental health due to the use of Instagram, and an algorithm that promotes incendiary content. Unethical practices and policies that need to change within Big Tech organizations like Facebook could easily be carried on without interference if humans were replaced by machines. Human workers offer a certain amount of accountability that cannot be found in robotic alternatives.

Artificial intelligence still works best as a tool alongside humans rather than as their replacement. Fully replacing human moderators with artificial intelligence, especially in this sector, is still a long way off.
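
The partnership described above can be pictured as a simple triage loop: the model acts on its own only when it is highly confident, and it hands everything ambiguous to a human queue. The short Python sketch below is a hypothetical illustration of that idea; the threshold values, the triage function, and the ModerationResult type are assumptions made for the example, not any platform's actual system.

# A minimal sketch of a human-in-the-loop moderation pipeline.
# All names and thresholds here are hypothetical illustrations,
# not any real platform's API or policy.

from dataclasses import dataclass

REMOVE_THRESHOLD = 0.98   # auto-remove only when the model is very confident
APPROVE_THRESHOLD = 0.05  # auto-approve only when a violation is very unlikely

@dataclass
class ModerationResult:
    post_id: str
    action: str   # "remove", "approve", or "human_review"
    score: float  # the model's estimated probability of a violation

def triage(post_id: str, violation_score: float) -> ModerationResult:
    """Route a post based on the classifier's violation probability."""
    if violation_score >= REMOVE_THRESHOLD:
        return ModerationResult(post_id, "remove", violation_score)
    if violation_score <= APPROVE_THRESHOLD:
        return ModerationResult(post_id, "approve", violation_score)
    # Everything in between -- the nuanced, context-dependent cases --
    # goes to a human moderator's review queue.
    return ModerationResult(post_id, "human_review", violation_score)

if __name__ == "__main__":
    for pid, score in [("a1", 0.999), ("b2", 0.02), ("c3", 0.55)]:
        print(triage(pid, score))

The width of the middle band is the trade-off the article describes: widen it and more nuanced cases reach a human reviewer, but those reviewers see more disturbing material; narrow it and the machine decides more cases unaided, with all the errors of context that entails.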


In case you missed it:

Social Media Content Moderator Sues TikTok for PTSD. Social media moderators protect users from graphic content, but who protects the moderators? Social media platforms hope that one day, AI will take the place of humans removing disturbing content from the internet. (Caitlin Bassett)

AI Is Not Ready to Moderate Content! In the face of COVID-19 quarantines for human moderators, some look to AI to keep the bad stuff off social media. Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. (Brendan Dixon)


Caitlin Cory

Communications Coordinator, Discovery Institute
Caitlin Cory is the Communications Coordinator for Discovery Institute. She has previously written for Discovery on the topics of homelessness and mental illness, as well as on Big Tech and its impact on human freedom. Caitlin grew up in the Pacific Northwest, graduated from Liberty University in 2017 with her Bachelor's in Politics and Policy, and now lives in Maryland with her husband.
