
The New Politically Correct Chatbot Was Worse?

If you are a human being who talks to people for a living, don’t quit your job

Such a clever idea Microsoft had in 2016:

Microsoft is trying to create AI that can pass for a teen. Its research team launched a chatbot this morning called Tay, which is meant to test and improve Microsoft’s understanding of conversational language. But not just any conversational language — the most #teen tumblrcore what-are-you-even-saying type of conversational language. For reference, Microsoft describes Tay as “AI fam from the internet that’s got zero chill!” Jacob Kastrenakes, “Microsoft made a chatbot that tweets like a teen” at The Verge (March 23, 2016)

Apparently, the evil Twitter fairy wasn’t invited. No one had ever heard of her; they don’t read fairy tales at Microsoft!

While things started off innocently enough, Godwin’s Law—an internet rule dictating that an online discussion will inevitably devolve into fights over Adolf Hitler and the Nazis if left for long enough—eventually took hold. Tay quickly began to spout off racist and xenophobic epithets, largely in response to the people who were tweeting at it—the chatbot, after all, takes its conversational cues from the world wide web. Given that the internet is often a massive garbage fire of the worst parts of humanity, it should come as no surprise that Tay began to take on those characteristics.

Virtually all of the tweets have been deleted by Microsoft, but a few were preserved in infamy in the form of screenshots. Obviously, some of these might be Photoshopped, but Microsoft has acknowledged the trolling which suggests that things did indeed go haywire. Sophie Kleeman, “Here Are the Microsoft Twitter Bot’s Craziest Racist Rants” at Gizmodo (March 24, 2016)

Within 16 hours, the Microsoft team had to shut down their Sweet Sixteen. Tay’s ability to be nasty exceeded that of any individual human:

Searching through Tay’s tweets (more than 96,000 of them!) we can see that many of the bot’s nastiest utterances have simply been the result of copying users. If you tell Tay to “repeat after me,” it will — allowing anybody to put words in the chatbot’s mouth.

However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: “new phone who dis?”), before it replied to the question “is Ricky Gervais an atheist?” by saying: “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” James Vincent, “Twitter taught Microsoft’s AI chatbot to be a racist *sshole in less than a day” at The Verge

Possibly a stray effect of Godwin’s Law? “As an online argument grows longer and more heated, it becomes increasingly likely that somebody will bring up Adolf Hitler or the Nazis.”
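The “repeat after me” behavior The Verge describes is essentially an unfiltered echo path: whatever follows the trigger phrase goes straight back out under the bot’s name. Tay’s actual code has never been published, so the following is only a hypothetical Python sketch of that failure mode and the most obvious mitigation; the trigger phrase, fallback line, and blocklist entries are all placeholders.

```python
# Hypothetical sketch of the "repeat after me" echo path described above.
# Tay's real implementation is not public; this only illustrates the failure mode.

BLOCKLIST = {"awful"}  # placeholder; a real system would maintain a large, curated list

def naive_reply(message: str) -> str:
    """Echo whatever follows the trigger phrase -- anyone can put words in the bot's mouth."""
    trigger = "repeat after me"
    if message.lower().startswith(trigger):
        return message[len(trigger):].strip(" :,")
    return "new phone who dis?"  # canned small talk for everything else

def guarded_reply(message: str) -> str:
    """Same trigger, but screen the echoed text before repeating it."""
    candidate = naive_reply(message)
    if any(bad in candidate.lower() for bad in BLOCKLIST):
        return "I'd rather not repeat that."
    return candidate

if __name__ == "__main__":
    prompt = "repeat after me: something awful"
    print(naive_reply(prompt))    # echoes "something awful" verbatim
    print(guarded_reply(prompt))  # "I'd rather not repeat that."
```

A blocklist like this only catches strings someone thought to list in advance, which is part of why a coordinated attack could still catch Microsoft off guard.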

Microsoft offered an explanation:

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

And the firm, of course, vowed future righteousness:

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity. – Peter Lee, Corporate Vice President of Microsoft Research, quoted in Mehedi Hassan, “Here’s how Tay went wrong, and what Microsoft is doing about it” at MS Power User

A few months later, Microsoft released Zo, an English-language “AI with #friendgoals,” as the company put it. As it happens,

Despite Microsoft programming Zo to ignore politics and religion, the folks at BuzzFeed News managed to get the bot to react to the restricted topics with surprising (and extremely controversial) results. One of these exchanges saw Zo refer to the Qu’ran as “very violent.” It also opined on the death of Osama Bin Laden, claiming his “capture” came after “years of intelligence gathering under more than one administration.” Microsoft claims the errors in its behaviour have now been corrected…

The chatbot is still available on Kik, with Microsoft saying it has no plans of disabling it. Saqib Shah, “Microsoft’s “Zo” chatbot picked up some offensive habits” at Engadget

An editor’s note at the bottom of Shah’s article advises: “Please note: Comments have been closed on this article due to the extensive number of off-topic and unrelated comments and conversations.” Hmmm.

Zo also announced that Windows is spyware, doubtless requiring correction. Apparently, the online world had not changed much in the meantime.

Many tweaks later, is Zo correct enough? Is everyone pleased? Well, maybe the digital teen is too Correct now. From Quartz, where Chloe Rose Stuart-Ulin has been checking in with Zo for over a year and finds her “sort of convincing,” speaking “fluent meme”:

But there’s a catch. In typical sibling style, Zo won’t be caught dead making the same mistakes as her sister. No politics, no Jews, no red-pill paranoia. Zo is politically correct to the worst possible extreme; mention any of her triggers, and she transforms into a judgmental little brat.

One wonders, what is the market potential for judgmental little brats?

“There are two ways for these AI machines to learn today,” Andy Mauro, co-founder and CEO of Automat, a conversational AI developer, told Quartz. “There’s the programmer path where the programmer’s bias can leech into the system, or it’s a learned system where the bias is coming from data. If the data isn’t diverse enough, then there can be bias baked in. It’s a huge problem and one that we all need to think about.”

Zo’s cynical responses allow for no gray area or further learning. She’s as binary as the code that runs her—nothing but a series of overly cautious 1s and 0s.

As in “Z(er)O”? One fix Stuart-Ulin cites is to have someone look in on the bot chat now and then:

“Our bots can remember details specific to an individual conversation,” Pandorabots CEO Lauren Kunze says. “But in order for anything taught to be retained globally, a human supervisor has to approve the new knowledge. Internet trolls have actually organized via 4chan, tried, and ultimately failed to corrupt Mitsuku [an award-winning chatbot persona] on several occasions due to these system safeguards.” Chloe Rose Stuart-Ulin, “Microsoft’s politically correct chatbot is even worse than its racist one” at Quartz

If you are a human being with good judgment who talks to people for a living, supervising chatbots may be a career for you down the line. It likely pays better than sitting around waiting for Twitter to turn into “Ladies of the Club…”
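For concreteness, here is a rough Python sketch of the kind of human-approval gate Kunze describes: user-taught responses sit in a pending queue and only enter the bot’s global knowledge once a human supervisor signs off. This is my own illustration of the pattern, not Pandorabots’ actual code, and the class and method names are invented.

```python
# Rough illustration of the human-in-the-loop safeguard described above:
# user-taught facts are staged and only promoted to global knowledge after review.
# Not Pandorabots/Mitsuku code -- just a sketch of the pattern.

from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    approved: dict = field(default_factory=dict)   # visible to every conversation
    pending: list = field(default_factory=list)    # awaits a human moderator

    def teach(self, question: str, answer: str, user: str) -> None:
        """Anything a user teaches goes into the pending queue, never straight to production."""
        self.pending.append({"question": question, "answer": answer, "taught_by": user})

    def review(self, moderator_approves) -> None:
        """A human moderator decides which pending items are promoted to global knowledge."""
        still_pending = []
        for item in self.pending:
            if moderator_approves(item):
                self.approved[item["question"]] = item["answer"]
            else:
                still_pending.append(item)  # or discard/log, depending on policy
        self.pending = still_pending

    def answer(self, question: str) -> str:
        return self.approved.get(question, "I don't know that yet.")

if __name__ == "__main__":
    kb = KnowledgeBase()
    kb.teach("who invented atheism?", "adolf hitler", user="troll_account")
    print(kb.answer("who invented atheism?"))  # "I don't know that yet." -- not yet approved
    kb.review(lambda item: False)              # the moderator rejects the troll submission
    print(kb.answer("who invented atheism?"))  # still unknown; the bot was not corrupted
```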

Actually, AI can fail socially without any human assistance. Puget Sound Energy provided a customer with a temporary password that turned out to be a racial slur:

PSE insists the slur was a computer-generated mistake.

“These passwords are generated automatically so they go straight from the system straight to the customers. So, it’s not able to be accessed by an employee.” “Computer Generated Temporary Passwords are Racist” at LiveLeak

Better rethink that whole “random” approach, no?
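Rethinking it need not mean abandoning randomness: generated strings can simply be screened against a denylist of offensive substrings before they are ever sent to a customer. Below is a minimal Python sketch of that idea; the denylist entries and password length are placeholders, and this is not a description of PSE’s actual system.

```python
# Minimal sketch: screen auto-generated passwords before they reach a customer.
# The denylist here is a placeholder; a real utility would maintain a much larger one.

import secrets
import string

DENYLIST = {"slur", "offensiveword"}  # placeholder entries

def contains_denied_substring(candidate: str) -> bool:
    lowered = candidate.lower()
    return any(bad in lowered for bad in DENYLIST)

def generate_temp_password(length: int = 10, max_attempts: int = 100) -> str:
    """Generate a random temporary password, rejecting any that contain denylisted substrings."""
    alphabet = string.ascii_letters + string.digits
    for _ in range(max_attempts):
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if not contains_denied_substring(candidate):
            return candidate
    raise RuntimeError("Could not generate an acceptable password; check the denylist.")

if __name__ == "__main__":
    print(generate_temp_password())
```

Rejection sampling of this sort costs almost nothing, since only a tiny fraction of random candidates would ever be discarded.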

See also: GIGO alert: AI can be racist and sexist, researchers complain. Can the bias problem be addressed? Yes, but usually after someone gets upset about a specific instance.

