
AI Prof Sounds Alarm: AI “Emotion Detectors” Are Faulty Science

An industry projected to be worth $37 billion by 2026 is using emotion recognition technology (ERT) on schoolchildren and potential hires

Kate Crawford, a principal researcher at Microsoft and author of Atlas of AI (2021), warns in Nature that the COVID-19 pandemic “is being used as a pretext to push unproven artificial-intelligence tools into workplaces and schools.” The software is touted as able to read the “six basic emotions” from an analysis of facial expressions:

During the pandemic, technology companies have been pitching their emotion-recognition software for monitoring workers and even children remotely. Take, for example, a system named 4 Little Trees. Developed in Hong Kong, the program claims to assess children’s emotions while they do classwork. It maps facial features to assign each pupil’s emotional state into a category such as happiness, sadness, anger, disgust, surprise and fear. It also gauges ‘motivation’ and forecasts grades. Similar tools have been marketed to provide surveillance for remote workers. By one estimate, the emotion-recognition industry will grow to US$37 billion by 2026.

Kate Crawford, “Time to regulate AI that interprets human emotions” at Nature (April 6, 2021)

The trouble is, there is no strong reason to believe that such software is a reliable guide to human emotions.

It’s based on the research of Paul Ekman, who hypothesized in the 1960s that there are six universal human emotions (happiness, sadness, fear, disgust, anger, surprise). Whether there are six such emotions or not, his thesis was easily programmable. After 9-11, Crawford tells us, the US Transportation Security Administration bought the system, amid much criticism, to try to identify terrorists in crowds of passengers. Today, it is also used on potential hirees as well as students:

Job applicants are being judged unfairly because their facial expressions or vocal tones don’t match those of employees; students are being flagged at school because their faces seem angry. Researchers have also shown that facial-recognition software interprets Black faces as having more negative emotions than white faces do.

Kate Crawford, “Time to regulate AI that interprets human emotions” at Nature (April 6, 2021)

If facial recognition technology generally fails in this way (study), it is hard to see why ERT wouldn’t. The scanning is often done without the subject’s knowledge, let alone consent.

Ekman’s research is coming under fire as well:

These ideas stood largely unchallenged for a generation. But a new cohort of psychologists and cognitive scientists has been revisiting those data and questioning the conclusions. Many researchers now think that the picture is a lot more complicated, and that facial expressions vary widely between contexts and cultures. Jack’s study, for instance, found that although Westerners and East Asians had similar concepts of how faces display pain, they had different ideas about expressions of pleasure.

Douglas Heaven, “Why faces don’t always tell the truth about feelings” at Nature (February 26, 2020)

That should hardly be surprising, given that pain is involuntary physical distress while pleasure can arise from a wide variety of voluntary experiences. Meanwhile, Heaven notes that in 2019, “Hungary, Latvia and Greece piloted a system for prescreening travellers that aims to detect deception by analysing microexpressions in the face.”

Currently, Ekman’s site lists seven emotions (contempt has been added), but many sources are still reporting his original six, for which the software was programmed. According to a 2017 survey paper, there is actually “little agreement” on which emotions are basic.
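What does it mean that the software was “programmed” around Ekman’s list? Purely as an illustration, and not a description of any vendor’s actual product, here is a minimal sketch of the structural issue: a toy classifier whose output categories are a hard-coded list of the six labels. The feature values, parameters, and names below are all made up; the point is that such a system must emit one of its built-in labels no matter what it is shown.

```python
import numpy as np

# Ekman's original six categories, which much commercial ERT software still hard-codes.
EMOTIONS = ["happiness", "sadness", "fear", "disgust", "anger", "surprise"]

def classify(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> str:
    """Map a feature vector (e.g., facial-landmark measurements) to one of the six labels."""
    logits = features @ weights + bias        # linear scoring of each category
    probs = np.exp(logits - logits.max())     # softmax, numerically stable
    probs /= probs.sum()
    return EMOTIONS[int(np.argmax(probs))]    # always returns *some* emotion

# Toy demo: ten made-up "landmark" measurements and untrained random parameters.
rng = np.random.default_rng(0)
features = rng.normal(size=10)
weights = rng.normal(size=(10, len(EMOTIONS)))
bias = np.zeros(len(EMOTIONS))
print(classify(features, weights, bias))      # prints a confident-looking label regardless
```

If the category scheme itself is in doubt, as the newer research suggests, a confident-looking label from such a system is not evidence of a real emotional state.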

Crawford is one of a number of industry voices asking for regulation of the technology. In 2019, she even called for a ban on its use until it is properly regulated. Others are chiming in:

Revelations about algorithmic bias and discriminatory datasets in facial recognition technology have led large technology companies, including Microsoft, Amazon and IBM, to halt sales. And the technology faces legal challenges regarding its use in policing in the UK. In the EU, a coalition of more than 40 civil society organisations have called for a ban on facial recognition technology entirely.

The Conversation, “AI Is Increasingly Being Used to Identify Emotions, Here’s What’s at Stake” at Neuroscience News (April 18, 2021)

But multi-billion-dollar businesses are hard to fight. Maybe only a costly bungle will help many people to distinguish between cool social science theories and stuff that actually works.


You may also wish to read: Can we teach computers to feel things? A dialogue. Okay, there’s the computer’s side… and then there’s the dog’s side. Listen to both. What do people mean when they say they can give computers or robots feelings? Is it possible to feel things without being alive?


