
How Bias Can Be Coded Into Unthinking Programs

MIT researcher Joy Buolamwini started the project after a trivial “bathroom mirror” message system failed to detect her face

Coded Bias, a new documentary by 7th Empire Media that premiered at the Sundance Film Festival in January 2020, looks at the ways algorithms and machine learning can perpetuate racism, sexism, and infringements on civil liberties. The film calls for accountability and transparency in artificial intelligence systems, which are algorithms that sift large amounts of data to make predictions, as well as regulations on how these systems can be used and who has access to the data.

The documentary follows MIT researcher Joy Buolamwini, who took a class on science fiction and technology in which one assignment was to create a piece of technology that isn’t necessarily useful but is inspired by science fiction.

Buolamwini made a bathroom mirror equipped with facial recognition software. Her idea was that the device would detect when a person stepped in front of the mirror and say something positive.

Unfortunately, the mirror didn’t work as planned, at least not for Buolamwini. The sensor-equipped mirror couldn’t detect her face when she stepped in front of it. She adjusted the lighting, her glasses, and the angle of the mirror, but it still wouldn’t detect her face. Then Buolamwini tried putting a white mask over her face. The mirror detected a face and played a positive message.
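A device like this typically leans on an off-the-shelf face detector. As a rough sketch of the idea (illustrative only, not Buolamwini’s actual code), OpenCV’s bundled Haar-cascade model can be pointed at a webcam and told to print an affirmation whenever it finds a face:

```python
# Illustrative sketch only -- not the actual "Aspire Mirror" code.
# Points OpenCV's bundled Haar-cascade face detector at a webcam feed
# and prints an affirmation whenever at least one face is detected.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # The "positive message" (printed once per detected frame here)
            print("You look great today!")
finally:
    cap.release()
```

A detector like this can only find the kinds of faces that were well represented when its model was built, which is exactly the failure Buolamwini ran into.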

Buolamwini is a Black American woman, so she started to investigate the data used to train algorithm-based technologies. She found that the algorithms behind AI systems are no more objective than the people who designed them and the data they were given. Because the faces used to train facial recognition software are predominantly lighter-skinned and male, the software has a harder time detecting faces with darker skin. In fact, facial recognition technology will often misidentify people of color as someone else in the database, and it is also less accurate at reading women’s faces than men’s.
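Her subsequent research put numbers on that gap by scoring systems separately on each demographic group. A minimal sketch of that kind of per-group audit, using placeholder counts rather than results from any real system, might look like this:

```python
# Minimal sketch of a per-group audit: compare a face system's error rate
# across demographic subgroups of a labeled benchmark.
# The counts below are placeholders, not results from any real system.

def error_rate(errors, total):
    return errors / total

# Hypothetical audit counts for a system tested on a balanced benchmark.
audit = {
    "lighter-skinned men":  {"errors": 1,  "total": 100},
    "darker-skinned women": {"errors": 35, "total": 100},
}

for group, counts in audit.items():
    rate = error_rate(counts["errors"], counts["total"])
    print(f"{group}: {rate:.0%} error rate")
```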

Following her research, Buolamwini founded the Algorithmic Justice League, which calls for “equitable and accountable AI.”

The Problem with Facial Recognition and Mis-Identification

It is one thing when algorithms are used for trivial tech like positive bathroom mirror messages. It’s quite another when they are used by police to identify a criminal suspect. In one instance portrayed in Coded Bias, a fourteen-year-old boy was stopped by police in London, England, because he was wrongly identified as someone who had committed a crime.

In another example, Amazon’s facial recognition software, Rekognition, piloted in several police precincts in the U.S., misidentified 28 members of Congress as matches to criminals in a mugshot database. From Gizmodo (2018) on the experiment:

In total, Rekognition misidentified 28 members of Congress, listing them as a “match” for someone in the mugshot photos. 11 of the misidentified members of Congress were people of color, a highly alarming disparity… For Congress as a whole, the error rate was only five percent, but for non-white members of Congress, the error rate was 39 percent.

Sidney Fussell, “Amazon’s Face Recognition Misidentifies 28 Members of Congress as Suspected Criminals” at Gizmodo

This incident prompted a federal study of 189 facial recognition algorithms. The National Institute of Standards and Technology (NIST) found that the majority of face recognition algorithms exhibited “demographic differentials,” meaning that most of the algorithms studied performed differently for different demographic groups. Many algorithms from U.S.-based companies produced more false-positive matches for Asian, African American, and Native American faces, while software developed in Asia did better with Asian faces. In “one-to-many” matching, NIST found that some algorithms, particularly those that were generally less accurate, had higher rates of false-positive matches for African-American women.
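The “one-to-many” search NIST refers to is the police use case: a probe image is compared against every entry in a gallery such as a mugshot database, and anything scoring above a similarity threshold comes back as a “match.” The sketch below uses made-up embeddings and an arbitrary threshold (real systems rely on learned face encoders and carefully tuned operating points), but it shows the basic mechanics and why a loose threshold yields spurious matches:

```python
# Sketch of one-to-many face matching: compare a probe embedding against a
# gallery (e.g., a mugshot database) and return every entry whose cosine
# similarity clears a threshold. Embeddings and threshold are made up;
# real systems use learned face encoders and tuned operating points.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_many_search(probe, gallery, threshold=0.6):
    """gallery: dict of name -> embedding. Returns all candidates above threshold."""
    hits = []
    for name, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score >= threshold:
            hits.append((name, score))
    return sorted(hits, key=lambda x: -x[1])

# Toy example: random vectors stand in for face embeddings. With a loose
# threshold, a probe who is not in the gallery at all still gets "matches".
rng = np.random.default_rng(0)
gallery = {f"mugshot_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # someone not in the gallery
print(one_to_many_search(probe, gallery, threshold=0.2)[:3])  # spurious hits
```

Every face added to the gallery is another chance for an innocent probe to clear the threshold, which is one reason accuracy disparities matter more as these databases grow.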

According to Coded Bias, the facial data of over 117 million Americans is stored in networks that police can search. The documentary highlights that these algorithmic searches operate without oversight: there are no rules about what kind of data can be fed to the algorithms, including whether a person has consented to have their face stored in a biometric database, and no standards for algorithmic performance or accuracy. How the data is sifted is also hidden, because the algorithmic code is proprietary.

In places like Hong Kong, CCTV cameras equipped with facial recognition software have come to represent the Chinese government’s encroaching authoritarianism in that “special administrative region.” In 2019, when many Hong Kongers engaged in protests, they hid their faces so they would not be identified and punished by the Chinese government. As Buolamwini pointed out in the documentary, “When you see how facial recognition is being deployed in different parts of the world, it shows you potential futures.”

Even if facial recognition software becomes more accurate in identifying a greater variety of faces, there’s still the question of how this information is being used and whether a person has a right not to be tracked. In one instance portrayed in the documentary, a person was fined by the British police for covering his face when he walked past a surveillance camera on a public street. In another instance, an apartment building in Brooklyn switched from key fobs to facial recognition for residents to enter the building. In this way the landlord can track who enters the building and when.

The motivation behind this kind of tracking is safety and security. But in practice, the surveillance technology ends up disproportionately affecting the poor, while luxury technologies tend to cater to the rich. Depending on where you live, this means that certain minority groups or people living in particular locations within a city will be surveilled, whether they consented to it or not. As one tenant of the Brooklyn apartment building said, “I feel that I as a human being should not be tracked…why treat me as an animal?”

I’ve reported on the Chinese government’s use of algorithms fed with large swathes of data to control their citizens, a kind of “algorithmic obedience training.”

In China, facial recognition systems are used to enter buildings and pay for transactions at stores and vending machines. To the public, this is a convenient service, but the convenience also translates into a tracking system and a way for the CCP to control behavior. If your face is associated with a lower social credit score—perhaps because you said something negative about the government online—then you have fewer options for what you are free to do.

In the U.S., Facebook has been criticized for its use of facial recognition without consent and for assigning its users a “trustworthiness” score. Coded Bias points out that there are no regulations preventing Facebook from supplying businesses with its facial recognition data and its trustworthiness scores, although last year Facebook was sued over its use of facial recognition software.

Overall, Coded Bias provides a counter-balance to the often over-hyped promises of AI. Artificial intelligence is really a math program contrived by the real intelligence of human beings, and because of this, the algorithms cannot avoid some of the pitfalls of human biases.


Further reading:

Can robots be less biased than their creators? We often think of robots as mindless but the minds of their creators are behind them

and

How algorithms can seem racist
Machines don’t think. They work with piles of “data” from many sources. What could go wrong? Good thing someone asked…


Heather Zeiger

Heather Zeiger is a freelance science writer in Dallas, TX. She has advanced degrees in chemistry and bioethics and writes on the intersection of science, technology, and society. She also serves as a research analyst with The Center for Bioethics & Human Dignity. Heather writes for bioethics.com and Salvo Magazine, and her work has appeared in Relevant, MercatorNet, Quartz, and The New Atlantis.
