Are Facial Expressions a Clear, Simple Basis for Hiring Decisions?

Marketing AI to employers to analyze facial expressions ignores the fact that correlation is NOT causation
Have you heard of the Law of the Instrument? It just means, to quote one formulation, “He that is good with a hammer tends to think everything is a nail.” All any given problem needs is a good pounding. This is a risk with AI, as with amateur carpentry—but with AI, it can get you into more serious trouble. Take hiring, for instance.
Ever since Google demonstrated the power of Deep Learning AI—first by recognizing images of cats and then, through its subsidiary DeepMind, by conquering the classic game of Go—the technology has been on a tear. Over the last near-decade, venture capitalists and entrepreneurs have stumbled over one another in a race to squeeze dollars out of Machine Learning’s magic hammer.
But there’s a hidden weakness in their strategy: Correlation is not causation. Statistics—or, more correctly, statistical learning—can find correlations, but it has no idea whether those correlations represent useful information. Dr. Luis Perez-Breva—Director of the MIT Innovation Teams Program, who holds degrees in Chemical Engineering, Physics, Business, and Artificial Intelligence—puts it this way:
According to Luis, so much focus is put into obtaining data that we judge how good a machine learning system is by how much data was collected, how large the neural network is, and how much training data was used. When you collect a lot of data you are using that data to build systems that are primarily driven by statistics. Luis says that we latch onto statistics when we feed AI so much data, and that we ascribe intelligence to systems, when in reality, all we have done is created large probabilistic systems that by virtue of large data sets exhibit things we ascribe to intelligence.

Ron Schmelzer, “This AI Researcher Thinks We Have It All Wrong” at Forbes
According to Dr. Perez-Breva, “machine learning has a very narrow application.” But when a lot of money is involved, pressure mounts and problems can begin to look like nails needing a hammer.
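The point that statistical learning cannot tell a meaningful correlation from a meaningless one can be made concrete with a toy sketch. The scenario below is entirely hypothetical (the “interview slot” feature and the sample size are invented for illustration): a feature with no causal connection to a hiring outcome will still, in a finite sample, show some nonzero correlation with it—and a purely statistical system has no way to know that the signal is noise.

```python
import random

random.seed(0)

# Hypothetical toy data: 1 = hired, 0 = not hired, plus one
# causally irrelevant feature (interview slot: 1 = morning,
# 0 = afternoon). Both are generated independently at random,
# so by construction the slot tells us NOTHING about hiring.
n = 40
slot = [random.randint(0, 1) for _ in range(n)]
hired = [random.randint(0, 1) for _ in range(n)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(slot, hired)
# In a small sample r will almost never be exactly zero, and a
# statistical learner will happily exploit whatever it finds --
# it cannot distinguish sampling noise from a real relationship.
print(f"correlation between interview slot and hiring: {r:+.2f}")
```

Nothing in the arithmetic flags the correlation as spurious; only outside knowledge—an understanding of what the data means—can do that, which is precisely what Dr. Perez-Breva says these systems lack.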
A good example of Deep Learning run amok is the rise of companies such as HireVue whose AI interview software “uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated ‘employability’ score.” (Washington Post, November 6, 2019).
AI researchers told the Washington Post that they view HireVue’s efforts as a “profoundly disturbing development.” Loren Larsen, HireVue’s CTO, defends the practice, saying that “most AI researchers have a limited understanding” of the psychology. Actually, it’s probably pretty much the other way around:
When humans navigate feelings, they use an immense amount of information: from facial expressions and body language to cultural references, context, moods and more. But the AI systems trying to do the same thing tend to focus mainly on the face. This is a big flaw, according to Lisa Feldman Barrett, a psychologist at Northeastern University and co-author of a damning study on the claims being made about ‘emotion recognition’.
Of course, people do smile when they are happy and frown when they are sad, Barrett says, but the correlation is weak. Facial expressions don’t always mean the same thing – smiles, for instance, can be wry or ironic. Also, there are a whole manner of expressions which people might make when they are happy or sad beyond smiling. Barrett was one of five scientists brought together by the Association for Psychological Science, who spent two years reviewing more than 1,000 papers on emotion detection. Their conclusion was that it is very hard to use facial expressions alone to tell, accurately, how someone is feeling.

Norman Lewis, “Emotion analytics: a dystopian technology” at Spiked.com
Lewis calls it snake oil, not science. For one thing, when Lisa Feldman Barrett and others studied the link between facial expressions and emotions, they found that culture and many other factors must be taken into account in order to “read” attitudes:
The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category.

Lisa Feldman Barrett et al., “Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements” at Psychological Science in the Public Interest (open access)
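A short Bayes’ rule calculation shows why “above chance” is not the same as “reliable.” The probabilities below are made up purely for illustration—they are not Barrett et al.’s figures—but they capture the shape of the finding: even if happy people smile noticeably more often than unhappy people, a smile by itself shifts the odds only modestly.

```python
# Illustrative (invented) numbers in the spirit of Barrett et al.:
# smiling is correlated with happiness above chance, yet a smile
# alone remains a weak signal.
p_happy = 0.3            # assumed base rate of feeling happy
p_smile_happy = 0.6      # P(smile | happy)      -- above chance
p_smile_not_happy = 0.3  # P(smile | not happy)  -- wry, polite, ironic...

# Bayes' rule: P(happy | smile) = P(smile | happy) * P(happy) / P(smile)
p_smile = p_smile_happy * p_happy + p_smile_not_happy * (1 - p_happy)
p_happy_given_smile = p_smile_happy * p_happy / p_smile

print(f"P(smile) = {p_smile:.2f}")                      # prints 0.39
print(f"P(happy | smile) = {p_happy_given_smile:.2f}")  # prints 0.46
```

With these assumed numbers, observing a smile moves the estimate of happiness from 30% to only about 46%—closer to a coin flip than to a diagnosis, which is hardly a basis for ranking job candidates.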
We’re back to a problem we’ve highlighted before: If we blind ourselves to what humans fully are, including our complex inner worlds and minds, we end up with silly beliefs and dumb decisions.
Deep Learning AI is useful when used correctly with an awareness of its limits. It is not, and cannot be, a replacement for considered human judgement.
If you really want to make a good hire, then meet the prospective employee. Talk to them. Get to know them, at least a little. Don’t fool yourself into believing that you can find out all you need to know from a few minutes of machine-filtered video. We are all much more than that.
Here are a couple of Brendan Dixon’s earlier thoughts on AI, fair hiring practices, and people who use AI to pretend to read minds:
Can we outsource hiring decisions to AI and go for coffee now? I would have immediately fired any of my hiring managers who demonstrated characteristic AI traits. So why do we tolerate the same behavior from a machine? The pressure to improve hiring practices is strong, especially now with historically low unemployment. Employers are tempted to reduce costs and speed up the process using artificial intelligence (AI) systems. But please, read the fine print.
Can the machine tell if you are psychotic or gay? No, and the hype around what machine learning can do is enough to make old-fashioned tabloids sound dull and respectable. Media often co-operate with researchers’ inflated claims about machine learning’s powers of discovery. An ingenious “creative” approach to accuracy enables the misrepresentation, says data analyst Eric Siegel.