COVID-19 worker applying safety steps Adobe Stock licensed

AI Is Not Ready to Moderate Content!

In the face of COVID-19 quarantines for human moderators, some look to AI to keep the bad stuff off social media

Some commentators have suggested that AI should monitor social media to prevent the spread of socially harmful stuff like child porn, terror propaganda, and baseless rumors about COVID-19.

So what (mostly) keeps child pornography and terrorist propaganda out of our faces when we use social media now? It’s not only a what; it’s a who — content moderators.

These contract workers usually do not last long and may retire with PTSD from the things their jobs require them to see and hear; for example: “The job worsened Speagle’s existing anxiety disorder so he later quit and is pursuing a teaching certificate. It especially bothered him that law enforcement authorities never seemed to follow up on video evidence of clearly illegal activities reported to them.”

Big social media companies feel little incentive to change. They believe that super-AI will soon replace these moderators, who are a mere “speed bump on the way to an AI-powered future.”

That’s all bad enough, but what happens when the moderators must shelter in place to limit the spread of COVID-19? Privacy and security concerns mean that working at home is not an option for most tech contract workers. But neither can they remain on the crowded production floors.

So what are the social media giants doing in response? Making what Silicon Valley reporter Casey Newton calls a “risky bet on AI”:

[Facebook] will also begin to lean more heavily on those machine-learning systems in an effort to automate content moderation… But as recently as December, Google was telling me that the day when such a thing would be possible was still quite far away.

Casey Newton, “The coronavirus is forcing tech giants to make a risky bet on AI” at The Verge

Let’s look at some of the risks of trusting content moderation to AI:

● Shuttling so much of our content through the narrow channels of a few firms has always been a risk to the dissemination of news, true as well as false. Facebook—which also owns the popular Instagram and WhatsApp applications—dominates social media. Google is synonymous with search (“googling”), even though there are other good search engines out there, like DuckDuckGo and SwissCows, which do not track the user. And, thanks to Twitter’s dominance, we now “tweet” at each other.

So a massive flaw looms. Since the birth of the internet, we’ve learned much about how to build massively scaled systems. One lesson, however, stands out: Eliminate single points of failure. If all the incoming requests must check with a single system, say, to verify a password, you can be certain your system will crash.
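To make that lesson concrete, here is a minimal sketch; the service names, simulated failure rate, and fallback policy are invented for illustration and are nothing like any platform’s actual pipeline. The point is only that when every post must pass through one moderation service, an outage there stalls everything, while an independent fallback lets the system degrade gracefully.

```python
import random

def primary_moderator(post: str) -> str:
    """Stand-in for the one central service every request depends on."""
    if random.random() < 0.5:  # simulate the central service being down
        raise ConnectionError("moderation service unavailable")
    return "allowed"

def fallback_moderator(post: str) -> str:
    """Stand-in for an independent replica or a degraded-mode check."""
    return "held for human review"

def moderate(post: str, fallbacks=()) -> str:
    """Try the primary service, then any fallbacks, instead of failing outright."""
    for service in (primary_moderator, *fallbacks):
        try:
            return service(post)
        except ConnectionError:
            continue  # this service is unreachable; try the next one
    return "rejected: no moderation service reachable"

if __name__ == "__main__":
    random.seed(1)
    # With no fallbacks, requests fail whenever the single service is down.
    print([moderate("hello") for _ in range(4)])
    # With a fallback, the pipeline degrades instead of collapsing.
    print([moderate("hello", fallbacks=(fallback_moderator,)) for _ in range(4)])
```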

We’ve created a single point (or points) of failure in our massively scaled social media. And the rise of COVID-19 is exposing the frailty. Consider the real problems associated with turning moderation over to an algorithm:

Things become much trickier when the content itself can’t be easily classified even by humans. This can include content that algorithms certainly recognize, but that has many shades of meaning (like nudity — does breast-feeding count?) or that are very context-dependent, like harassment, fake news, misinformation, and so on. None of these categories have simple definitions, and for each of them there are edge-cases with no objective status, examples where someone’s background, personal ethos, or simply their mood on any given day might make the difference between one definition and another.

James Vincent, “AI won’t relieve the misery of Facebook’s human moderators” at The Verge
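To see what “context-dependent” means in practice, consider a toy sketch; the keyword list and example posts below are invented, and real moderation systems are far more sophisticated, but the underlying ambiguity is the same. A context-blind filter flags a breast-feeding announcement and a wedding photographer’s note just as readily as a genuine threat, because it matches words rather than meaning.

```python
# Toy keyword filter: flags any post containing a blocked term, ignoring context.
BLOCKED_KEYWORDS = {"breast", "attack", "shoot"}

def naive_filter(post: str) -> bool:
    """Return True if any blocked keyword appears anywhere in the post."""
    text = post.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

posts = [
    "Breast-feeding support group meets Tuesday at the clinic.",  # benign
    "The photographer will shoot the wedding at noon.",           # benign
    "They plan to attack the convoy tonight.",                    # genuinely alarming
]

# All three posts are flagged, though only one is a real problem.
for post in posts:
    print(naive_filter(post), "->", post)
```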

If the algorithm decides, for bizarre reasons, that news is fake or that content is offensive, then, with so few outlets controlling so much communication today, that content can simply disappear. And “fake news,” for example, is notoriously likely to mean nothing more than news that certain interests would prefer that the public did not know, irrespective of whether it is true or timely.

Machine learning-based AI is far, far from ready to moderate content successfully in an age of single-point-of-failure risk. As we’ve often discussed here at Mind Matters News, image recognition AI is easily fooled. Language recognition systems get confused by statements that children understand. And systems intended to assist health care personnel don’t grasp or convey the significance of information when, in an emergency room, there may be little time to lose.

● Social media are not what they started out to be. As Russ White noted here recently, they were originally designed to support comparatively small, existing groups. Over time, they have morphed into enterprises that mainly support the buildup of large followings because that’s where the social media companies can profitably sell ads. It was perhaps inevitable; data centres are expensive to build and run, and the employees, including content moderators, are the most expensive asset. As White explains, “The money comes from these social networks’ ability to shape the market in ways that benefit the companies that pay them for a chance to tap into them. This power is the easiest to wield with a single person or organization that has lots of influence, that is, followers. So the shape of the social media is driven by their need to build the influence networks on the one side and find those who will pay to tap in on the other. That, in turn, increases the social media giants’ own power.” It’s hard to believe that such companies would long tolerate content moderation algorithms that threatened their own interests. But, unlike human whistleblowers, who can quit and tell us about censorship problems, the algorithms may be incomprehensible to humans, as well as silent.

● Any censorship proposal is part of a larger question of how to understand the huge international social media companies’ role. They are not governments, which — in the Western world — reduces the ways they are accountable. But what are they? Are they the telephone company (a communications platform), the newspaper (a publisher), or a series of largely unregulated private interactions? Or all three? Or something else altogether? They are fully grown and we still aren’t sure.

However we resolve any of these questions, AI is not some new magic that can solve any problem we heave over to it. It is a tool. And, like any other tool, it might help or hurt. COVID-19 has exposed the fractures in our health care systems. It’s now also revealing single points of failure of our hyper-connected world.

Further reading:

Facebook moderators are not who we think. Companies offer terrible working conditions partly because they think AI will just take over soon.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and numerous start-ups. While he spent most of that time on other types of software, he’s remained engaged and interested in Artificial Intelligence.
