
How Algorithms Can Seem Racist

Machines don’t think. They work with piles of “data” from many sources. What could go wrong? Good thing someone asked…

Some of the recent conflicts around algorithms and ethnicity are flubs that social media entrepreneurs will regret. Others may endanger lives.

Social media flub: Reporter Morgan Sung tried out the MIT-IBM Watson AI Lab’s AI Portraits Ars last summer. It renders selfies as Renaissance-style portraits. But not always as the Masters would have approved, it seems:

She found that the app “whitened my skin to an unearthly pale tone, turned my flat nose into one with a prominent bridge and pointed end, and replaced my very hooded eyes with heavily lidded ones.” This result is both terribly disappointing and utterly predictable.

“I wasn’t surprised at the whitewashing at all, since I’m used to things like Snapchat filters lightening my skin, making my eyes bigger, narrowing my nose. But I was taken aback by how extreme it was,” Sung told Motherboard. “The painting that AI portrait built was a completely different face.”

Edward Ongweso Jr, “Racial Bias in AI Isn’t Getting Better and Neither Are Researchers’ Excuses” at VICE (July 29, 2019)

How did the algorithm know what people “should” look like? It turns out that the fifteen thousand portraits in the original training dataset came mainly from Western Europe, especially the Renaissance period. Too bad Marketing didn’t recall, before the system went live, that most of the world does not look like Renaissance Europe. That said, a broader dataset should be easy to find these days.
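
To make that concrete, here is a minimal, hypothetical sketch of the kind of dataset audit that might have caught the skew before launch. The file name and metadata columns are assumptions for illustration only; they are not details of the MIT-IBM project:

```python
# Hypothetical pre-launch audit of a portrait training set's composition.
# "portraits_metadata.csv" and its columns ("region", "period") are assumed
# names for illustration; they are not from the MIT-IBM project.
import pandas as pd

meta = pd.read_csv("portraits_metadata.csv")
print(meta["region"].value_counts(normalize=True).round(3))
print(meta["period"].value_counts(normalize=True).round(3))
# If one region or era dominates these counts, the model's output will drift
# toward that look, no matter whose selfie goes in.
```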

The good news is that Big Tech is trying. Ongweso also reports that in 2018, Joy Buolamwini (right), founder of the Algorithmic Justice League and author of an MIT thesis on the problem, persuaded IBM to improve its facial recognition technology.

Apps that botch selfies make the news but they are hardly the worst problem. It gets more serious when government is involved:

The UK government went ahead with a face-detection system for its passport photo checking service, despite knowing the technology failed to work well for people in some ethnic minorities…

Now, documents released by the Home Office this week show it was aware of problems with its website’s passport photo checking service, but decided to use it regardless.

“User research was carried out with a wide range of ethnic groups and did identify that people with very light or very dark skin found it difficult to provide an acceptable passport photograph,” the department wrote in a document released in response to a freedom of information (FOI) request. “However; the overall performance was judged sufficient to deploy.”

Adam Vaughan, “UK launched passport photo checker it knew would fail with dark skin” at New Scientist (October 9, 2019)

Many databases simply do not have enough images of a specific facial type to interpret it reliably, but their users press on regardless. One analyst summed up the current indifference as “if no one else’s image analysis works for black people, then ours doesn’t have to either” (New Scientist). Meanwhile, an activist recently lost a case in the UK (the world’s first) against police use of facial recognition technology, an outcome unlikely to encourage more caution about the technology.

Perhaps the most serious issue is the use of algorithms in health care decision-making. Impact Pro, estimated to affect 100 million Americans, was recently implicated in discriminatory decisions about who needs more care:

Today, researchers announce the latest example in a study published in the journal Science. Their findings show a widely used medical algorithm that predicts who might benefit from follow-up care drastically underestimates the health needs of black patients—even when they’re sicker than their white counterparts.

Katherine J. Wu, “Racially-biased medical algorithm prioritizes white patients over black patients” at PBS (October 24, 2019)

What went wrong?

A big part of the algorithm’s strategy, the researchers found, relies on the assumption that people who spend less on health care are more well.

But many other studies show that this is simply untrue, study author Ziad Obermeyer, a health policy researcher at the University of California, Berkeley, told Michael Price at Science. Black patients, he explains, are less likely than white patients to purchase medical services for the same conditions, due in part to unequal access to care and historical distrust in health providers.

None of this was accounted for in the algorithm. As a result, people with similar scores weren’t on level medical ground. Compared to white patients with similar “risk,” black patients suffered from more illnesses and conditions like cancer, diabetes, and high blood pressure. All told, the algorithm had missed nearly 50,000 chronic conditions in black patients—simply because they spent less on treatment.

Katherine J. Wu, “Racially-biased medical algorithm prioritizes white patients over black patients” at PBS (October 24, 2019)

The open-access paper is here.
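
To see how a cost-based proxy goes wrong, consider a small synthetic sketch. It is not the Impact Pro model, and the numbers (for example, a 30 percent spending gap for the same illness burden) are assumptions chosen only to illustrate the mechanism the researchers describe:

```python
# Synthetic illustration only: not the Impact Pro model or its data.
# Shows how using health-care *cost* as a proxy label for health *need*
# can under-refer a group that spends less for the same level of illness.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
conditions = rng.poisson(2.0, n)       # true need: chronic conditions, same in both groups

# Assumed 30% spending gap for the same conditions (a stand-in for the unequal
# access and distrust described in the article).
spend_per_condition = np.where(group == 1, 700.0, 1000.0)
cost = conditions * spend_per_condition + rng.normal(0, 300, n)

# "Algorithm": rank patients by cost and refer the top 10% to extra care.
cutoff = np.quantile(cost, 0.90)
referred = cost >= cutoff

for g, name in [(0, "group A"), (1, "group B")]:
    mask = referred & (group == g)
    print(f"{name}: {mask.sum():5d} referred, "
          f"mean conditions among referred = {conditions[mask].mean():.2f}")
# Group B members must be sicker to clear the same cost cutoff, so equally
# sick (or sicker) patients in group B are referred less often.
```

On data like these, patients in the lower-spending group have to be considerably sicker before a cost cutoff notices them, which is the pattern the Science study reports in real patients.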

Pondering the fact that the algorithm flagged only 17.7% of black patients as needing extra care, when without its bias the figure would have been 46.5%, a surgeon reflects:

It turns out that the algorithm was trained on dollars spent rather than on the underlying physiology. There are many ways to identify special healthcare needs. As a clinician, I would look at those chronic conditions and weigh certain combinations more heavily or biomarkers, like HbA1c or LDL, the bad cholesterol. The developers of the algorithm, insurance companies in the risk stratification business, chose cost. Their equation was less well, more care, more cost. Except that for blacks, less well does not automatically translate into more care and downstream more cost. The linkage between how ill one is, and the amount of care they receive is subject to all sorts of bias; lack of education, inability to get to a physician’s office, trust – a range of sociodemographic issues. Health care costs are similar to health care needs, but they are not the same.

Chuck Dinerstein, “Healthcare For Blacks: I’m Not Prejudice, The Healthcare Algorithm Made Me Do It” at American Council on Science and Health

What this demonstrates is that more algorithms will not necessarily add up to better health care. A multicultural group of family doctors, for example, might sense patients’ differing comfort with diagnosis and treatment. They can then subtly adjust their clinical approach so as to encourage more confidence in medicine. But the algorithm doesn’t and can’t have those kinds of learning experiences—or any experiences at all.

The study’s findings recall the case of the machine learning system that recommended sending pneumonia patients who had asthma home because they rarely suffered complications. As it happened, they had rarely suffered complications because they were sent to intensive care instead. The machine did not “know” that.
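
A small synthetic sketch shows how that happens. The numbers are invented for illustration and are not from the original pneumonia study; the point is that when a risk factor triggers more aggressive treatment, the recorded outcomes make the risk factor look protective:

```python
# Synthetic illustration only, with assumed numbers (not the pneumonia study).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

asthma = rng.random(n) < 0.15            # assume 15% of pneumonia patients have asthma
icu = asthma | (rng.random(n) < 0.05)    # clinicians route asthma patients to intensive care

# Assumed underlying risk: asthma doubles complication risk; ICU care cuts it sharply.
base_risk = np.where(asthma, 0.30, 0.15)
observed_risk = np.where(icu, base_risk * 0.2, base_risk)
complication = rng.random(n) < observed_risk

# A naive "model": the observed complication rate by asthma status.
for flag, name in [(True, "asthma"), (False, "no asthma")]:
    rate = complication[asthma == flag].mean()
    print(f"{name:10s} observed complication rate: {rate:.1%}")
# The recorded rates make asthma patients look *safer*, so a model trained on
# them would send those patients home -- precisely because the data already
# reflect the intensive care they received.
```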

Machines don’t know anything they are not told. As more people experience the problems of relying on them for some kind of remote “Answer,” we can expect more of these issues to arise.


Further reading on racism and sexism in AI:

Big Tech tries to fight racist and sexist data. The trouble is, no machine can be better than its underlying training data. That’s baked in. (Brendan Dixon)

Has AI been racist? AI is, left to itself, inherently unthinking, which can result in insensitivity and bias. (Denyse O’Leary)

AI: Think about ethics before trouble arises. A machine learning specialist reflects on Micah 6:8 as a guide to developing ethics for the rapidly growing profession. (George Montañez)

and

Can an algorithm be racist? (Denyse O’Leary)

Note: The photo of Joy Buolamwini is by Niccolò Caranti at Wikimania 2018, Cape Town, Creative Commons


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Human Soul: What Neuroscience Shows Us about the Brain, the Mind, and the Difference Between the Two (Worthy, 2025). She received her degree in honors English language and literature.
