Can We Outsource Hiring Decisions to AI and Go for Coffee Now?

I would have immediately fired any hiring manager who demonstrated the characteristic traits of AI. So why do we tolerate those traits coming from a machine?

Hiring is hard. I know. I’ve interviewed and hired (or not) many engineers for both large and small tech companies. Most of those I hired worked out well, and I found a few gems. I also hired a few sources of grief.

The cost of a poor hire is quite high. Even in “at will” states—those that allow employers to remove an employee without cause—the process is long and expensive (largely to forestall lawsuits). And then you must run the hiring gauntlet all over again.

So, the pressure to improve hiring practices is strong, especially now with historically low unemployment. Employers are tempted to reduce costs and speed up the process using artificial intelligence (AI) systems:

… recruiters are increasingly using AI to make the first round of cuts and to determine whether a job posting is even advertised to you. Often trained on data collected about previous or similar applicants, these tools can cut down on the effort recruiters need to expend in order to make a hire. Last year, 67 percent of hiring managers and recruiters surveyed by LinkedIn said AI was saving them time.

Rebecca Heilweil, “Artificial intelligence will help determine if you get your next job” at Vox

A number of companies, including HireVue, Pymetrics, Arya, and Ideal, now offer AI-enhanced hiring packages.

But a bit of caution is in order.

Some think that hiring should be easy. Isn’t matching a candidate to a position similar to identifying a cat in a photo? These pixels fit the cat pattern and those attributes fit the job description—it’s just pattern matching! Squint hard enough and the problems look the same…

And that’s where things can go wrong.

Current pattern-matching AI relies largely on Deep Learning, which, generally speaking, motors through voluminous data to identify attributes that fit a pattern (such as a cat). But these systems have many known and well-documented flaws.
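To make “motoring through data” concrete, here is a minimal training-loop sketch, assuming PyTorch and a hypothetical `loader` that yields batches of labeled cat/not-cat images. Note that nothing in it defines what a cat looks like; the weights simply drift toward whatever statistical regularities happen to separate the labels.

```python
import torch
import torch.nn as nn

# A deliberately tiny convolutional classifier; production networks
# are far deeper, but the training logic is the same.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),  # two outputs: cat / not-cat
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# `loader` is a hypothetical stream of (images, labels) batches.
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong is the model?
    loss.backward()                        # which weights are to blame?
    optimizer.step()                       # nudge them and repeat
```

Every flaw discussed below traces back to this loop: the model is whatever the data made it.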

First, they are fragile; they break easily. The internet is awash in examples of image-recognition AI making all kinds of mistakes. Adversarial images, modified in ways humans cannot detect, also break these systems.

The thing to see here is that Deep Learning AI does not identify the patterns humans identify; instead, it relies on statistical patterns that may or may not have anything to do with the “real” pattern. Alter those hidden attributes and the system can no longer “see” the pattern:

These sorts of adversarial attacks are a weird feature of machine learning–based image recognition algorithms. Researchers have demonstrated that they could show an image recognition algorithm a picture of a lifeboat (which it identifies as a lifeboat with 89.2 percent confidence), then add a tiny patch of specially designed noise way over in one corner of the image. A human looking at the picture could tell that this is obviously a picture of a lifeboat with a small patch of rainbow static over in one corner. The A.I., however, identifies the lifeboat as a Scottish terrier with 99.8 percent confidence.

Janelle Shane, “Is That a Giraffe or a Cockroach?” at Slate
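The “patch of rainbow static” in Shane’s example is one flavor of adversarial attack. A simpler relative, the fast gradient sign method (FGSM), shows the underlying mechanics in a few lines. This is a minimal sketch, assuming PyTorch and a pretrained ImageNet classifier; `lifeboat` and `label` are hypothetical placeholders, and ImageNet normalization is omitted for brevity.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any off-the-shelf pretrained classifier will do for the demonstration.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, label, epsilon=0.007):
    """Shift every pixel a tiny step in the direction that increases
    the classifier's loss. The change is far below what a human would
    notice, yet it often flips the model's prediction."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# `lifeboat` would be a (1, 3, 224, 224) tensor scaled to [0, 1] and
# `label` its correct class index -- both hypothetical here.
# adversarial = fgsm_attack(lifeboat, label)
# model(adversarial).argmax() frequently differs from the original prediction.
```

Note how little it takes: a perturbation of less than one percent per pixel can push the input across the statistical boundary the network actually learned, because that boundary never coincided with the human concept of “lifeboat” in the first place.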

This feature of Deep Learning AI exposes another problem: proxy discrimination. An example would be using a loan applicant’s personal data, ferreted out through machine searches, to predict race. Race, in itself, is an illegal criterion for denying a loan. But what if the algorithm turns up a zip code that strongly predicts race? Maybe no one will even know why the program worked that way…
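Here is a deliberately simplified sketch of the zip-code scenario, using synthetic data (every number below is made up for illustration). The protected attribute never appears among the model’s features, yet the model reconstructs the historical bias through the correlated zip code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute the lender may NOT use (two groups, 0 and 1).
protected = rng.integers(0, 2, n)
# Zip code tracks the protected attribute 90% of the time
# (a stand-in for residential segregation).
zip_code = np.where(rng.random(n) < 0.9, protected, 1 - protected)
# Historical approvals were biased: ~70% for group 0, ~30% for group 1.
approved = np.where(protected == 0,
                    rng.random(n) < 0.7,
                    rng.random(n) < 0.3)

# Train on zip code ALONE -- race is never a feature.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), approved)
preds = model.predict(zip_code.reshape(-1, 1))

# Yet the model's approvals track the protected attribute anyway:
print("approval rate, group 0:", preds[protected == 0].mean())  # ~0.9
print("approval rate, group 1:", preds[protected == 1].mean())  # ~0.1
```

An audit of the trained model would find only a zip code among its inputs, which is exactly what makes proxy discrimination so hard to detect.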

Yet even if the AI is carefully trained to avoid discrimination by proxy (though it’s not clear that it can be), the training data remains biased. Self-driving cars, for example, take nearly all of their training data from Western countries. The rules inferred from that data will likely work poorly in non-Western countries. All data is, in some way, biased. But Deep Learning hardens that bias into rules.

And it does so without ever explaining why. Deep Learning AI is, so far, unable to explain its choices. It gives no chains of reasoning. Patterns either match or they do not.

These are not flaws in this or that AI system; they are traits of how Deep Learning AI works. That is, Deep Learning systems—by design—“lock onto” patterns in the data. But what those patterns are is the marvel and the mystery.

Hiring is hard. But handing evaluation off to AI is not the answer: It’s fragile, easily fooled, biased, and inexplicable.

I would have immediately fired any hiring manager who demonstrated the characteristic traits of AI. So why do we tolerate those traits coming from a machine?

Even when we feel overworked and understaffed, we can do better than this.



Further reading on bias in AI decision-making:

How algorithms can seem racist. Machines don’t think. They work with piles of “data” from many sources. What could go wrong? Good thing someone asked…

Big Tech tries to fight racist and sexist data. The trouble is, no machine can be better than its underlying training data. That’s baked in. (Brendan Dixon)

Has AI been racist? AI is, left to itself, inherently unthinking, which can result in insensitivity and bias. (Denyse O’Leary)

AI: Think about ethics before trouble arises. A machine learning specialist reflects on Micah 6:8 as a guide to developing ethics for the rapidly growing profession. (George Montañez)

and

Can an algorithm be racist? (Denyse O’Leary)


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he has worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he has spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.
