AI Still Struggles to Take Out the Trash
How good is AI at content moderation? And why haven’t tech companies gotten better at filtering bad content? A new article at MIT Technology Review goes into some of the details of AI, content moderation, and the struggle tech companies have with “bad actors.” In particular, large language models (LLMs) like ChatGPT still struggle to capture the nuance and context of language, so it seems unlikely that AI will entirely replace human content moderators. Tate Ryan-Mosley writes,
Large language models still struggle with context, which means they probably won’t be able to interpret the nuance of posts and images as well as human moderators. Scalability and specificity across different cultures also raise questions.
- Tate Ryan-Mosley, “Catching bad content in the age of AI,” MIT Technology Review
She also notes the European Union’s pending AI Act, which “requires companies that use generative AI to inform users when content is indeed machine-generated.” Talk of federal regulatory action is also starting up stateside. With AI’s explosive growth, such conversations and legislative proposals will only continue.