
Did AI teach itself to “not like” women?

No, the program did not teach itself anything. But the situation taught the company something important about what we can safely automate.

Much of the recent uproar at Amazon is, understandably, off the record, but the following facts have been extracted:

Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

But the firm was ultimately forced to end the project after it found the system had taught itself to prefer male candidates over females.

What? The system “taught itself” to show bias? So, not only are machines becoming conscious, but they are also becoming sexist pigs?

Hey, wait a minute… What really happened here?

Back in 2014, the project was touted as a “holy grail”: a machine learning program, developed in Scotland, that would sift through online resumes, rate candidates on a one-to-five star scale, and cull the top five out of every 100, saving time and money. Within a year, a problem surfaced: It was “not rating candidates for software developer jobs and other technical posts in a gender-neutral way.”

How did the system even know the applicants’ gender? It was programmed to apply pattern recognition to historical hiring data in which most applicants were male:

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’

And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. Reuters, “Amazon scraps secret AI recruiting tool that ‘didn’t like women’” at Daily Mail

Reuters was not told the names of the schools.
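To see how this can happen without anyone writing a rule about gender, consider a toy sketch (using scikit-learn; the resumes and hiring labels below are invented, and this is not Amazon’s actual system). A text classifier trained on historically male-skewed hiring decisions will assign a negative weight to any token, such as “women’s” (tokenized here simply as “women”), that happens to co-occur mostly with past rejections:

# Toy illustration only: a linear text classifier trained on skewed
# historical hiring data learns to penalize tokens that co-occur with
# past rejections, with no explicit rule about gender anywhere.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: four past resumes and whether each was hired.
# The two "not hired" examples happen to mention women's organizations.
resumes = [
    "software engineer python executed large projects",       # hired
    "developer java captured key accounts executed rollout",  # hired
    "software engineer python women's chess club captain",    # not hired
    "developer java women's coding society volunteer",        # not hired
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for the token "women" comes out negative,
# purely because of the pattern in the historical labels.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])

Nothing in that code mentions gender; the penalty falls out of the historical labels alone, which is exactly the pattern the Reuters sources describe.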

At first, the company tried a new version of the program, edited to be neutral toward those particular terms, but it eventually gave up because “that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.”

But things were spiraling out of control:

Another issue cited in the report was the algorithm’s preference for language that was often used by male applicants. Common words and phrases like a proficiency in a certain programming language would be ignored and verbs like “executed” and “captured” were given more weight.

After 500 iterations that were each trained to understand 50,000 unique terms, the team just couldn’t get the tool to stop reverting to discriminatory practices, Reuters reported. As time went on, the models often spiraled into recommending unqualified applicants at random. Rhett Jones, “Amazon’s Secret AI Hiring Tool Reportedly ‘Penalized’ Resumes With the Word ‘Women’s’” at Gizmodo
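The verb preference is the same mechanism at work: the model rewards whatever terms co-occur with past hires. One way a team might audit for such proxies (a minimal sketch, reusing the toy model and vec from the earlier example; this is not Amazon’s tooling) is simply to rank the learned term weights and inspect them by hand:

# Minimal audit sketch: rank the learned term weights so a human reviewer
# can spot proxies such as "executed"/"captured" or the names of
# single-sex colleges. Assumes the fitted `model` and `vec` from the
# earlier toy example.
def top_weighted_terms(model, vec, k=10):
    terms = vec.get_feature_names_out()
    ranked = sorted(zip(terms, model.coef_[0]), key=lambda tw: tw[1])
    return {
        "most_penalized": ranked[:k],   # strongest negative signals
        "most_rewarded": ranked[-k:],   # strongest positive signals
    }

print(top_weighted_terms(model, vec))

The catch, as the 500-model effort suggests, is that removing one flagged term simply shifts weight onto the next correlated proxy, because the underlying historical labels have not changed.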

Amazon now uses a “much-watered down” version, limited to tasks like removing duplicate resumes from the batch.

The problem is not unique:

All of this is a remarkably clear-cut illustration of why many tech experts are worried that, rather than remove human biases from important decisions, artificial intelligence will simply automate them. An investigation by ProPublica, for instance, found that algorithms judges use in criminal sentencing may dole out harsher penalties to black defendants than white ones. Google Translate famously introduced gender biases into its translations. The issue is that these programs learn to spot patterns and make decisions by analyzing massive data sets, which themselves are often a reflection of social discrimination. Programmers can try to tweak the A.I. to avoid those undesirable results, but they may not think to, or be successful even if they try. Jordan Weissmann, “Amazon Created a Hiring Tool Using A.I. It Immediately Started Discriminating Against Women.” at Slate

In short, the program did not teach itself anything. It sorted the information it was given according to the rules it was given, and those rules reproduced the bias in the historical data. But the uproar surely taught the company something: the risks of trying to automate good judgment.

See also: Why can’t machines learn simple tasks?

and

Do machines or brains really learn?

