
The “Moral Machine” Is Bad News for AI Ethics

Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence

A few years ago, MIT released an online tool — the Moral Machine — intended to “crowdsource” ethical decisions. The machine uses modified forms of the Trolley Problem, a faux ethical dilemma, in which a rail switch operator must decide who an oncoming train will kill. The Moral Machine changes the problem’s props by replacing the train with an autonomous vehicle (self-driving car) and varying the social identity of the person who will get killed. But it sticks to the basic format: The car cannot stop and someone will die, either the passenger or some bystander.

I, and others, have criticized the Moral Machine and its reliance on the Trolley Problem. The problem was summed up succinctly in 2016:

In a very literal sense, we would be surveying the public on who they would most like to see hit by a car, and then instructing cars that it’s less of a problem to hit those people.

The test is premised on indifference to death. You’re driving the car and slowing down is clearly not an option, so from the outset we know that someone’s going to get it. The question is just how technology can allocate that indifference as efficiently as possible.

That’s a bad deal, and it has nothing to do with the way moral choices actually work. I am not generally concerned about the moral agency of self-driving cars — just avoiding collisions gets you pretty far — but this test creeped me out. If this is our best approximation of moral logic, maybe we’re not ready to automate these decisions at all.

Russell Brandom, “Who will you decide to kill with your self-driving car? Let’s find out!” at The Verge

Such criticisms must have stung. The researchers recently defended both their use of the Trolley Problem and the Moral Machine in an article for Communications of the ACM (the industry publication for computer scientists): “Crowdsourcing Moral Machines.”

The article is riddled with errors. Not mathematical errors. Their errors are deeper, of the kind you get when a color-blind person tries to describe a painting. Their — unstated and possibly unrecognized — assumptions distort the picture beyond recognition.

Let’s review a few before properly framing the problem:

First, they assume that autonomous vehicles (AVs) like self-driving cars are “intelligent machines.” But no one can define intelligence. If we lack a definition, it’s naïve to label any computer intelligent. An AV is a machine that automates actions previously selected — either directly or through training data — by an intelligent agent (that is, a human being). It is automated intelligence, not artificial intelligence.

The researchers then defend their use of the Trolley Problem because real scenarios would be too complex:

Real accidents do not involve only two possible actions… AVs will have many options beyond staying or swerving… Many factors that would be relevant for real accidents are simply absent in an AV-Trolley scenario… Using realistic crash scenarios would make it difficult to tease out the effect of multiple contributing factors and make it difficult to draw general conclusions…

Edmond Awad, Sohan Dsouza, Jean-François Bonnefon, Azim Shariff, Iyad Rahwan, “Crowdsourcing Moral Machines” at Communications of the ACM, March 2020, Vol. 63, No. 3, Pages 48-55, DOI: 10.1145/3339904

This argument exhibits the “E for Effort” fallacy (and likely others). The fact that the real-world problem is hard does not make the fictional problem a useful substitute.

But even if it did capture real-life dilemmas, the approach would still fail. Psychology is in the middle of a replication crisis: researchers very often cannot replicate the results of published studies of human psychology. In short, it is very hard to reliably predict human behavior. How, then, can we trust answers to fictional dilemmas, even sophisticated ones?

The researchers even admit that “crowdsourcing” is a questionable method:

The idea of “crowdsourcing preferences” assumes that stated preferences provide useful evidence about what respondents would actually do when faced with a physical situation with real life-or-death consequences. But previous work has shown that people’s stated preferences and their actual actions diverge in many contexts.

Edmond Awad, Sohan Dsouza, Jean-François Bonnefon, Azim Shariff, Iyad Rahwan, “Crowdsourcing Moral Machines” at Communications of the ACM, March 2020, Vol. 63, No. 3, Pages 48-55, DOI: 10.1145/3339904

They nonetheless justify using the Moral Machine dilemmas by claiming that armchair decisions are superior to those made in the heat of the moment. That claim is false for the same reason the simplistic scenario fails: I trust the quarterback on the field much more than the guy watching the game. Armchair decisions do not reflect how choices are made in the real world, in the heat of the moment.

There are other errors I could call attention to, but the central problem is this: Why even create the (so-called) Moral Machine?

Here’s the dilemma: The Moral Machine feels necessary because the rules — that is, our laws — by which we order our lives are useless with automated vehicles. Laws embody principles that we apply. Machines have no mind with which to apply the rules. Instead, researchers must train them on millions of examples and hope the machine extracts the correct message…
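
To make the contrast concrete, here is a minimal sketch in Python. The situations, labels, and function names are hypothetical, invented for illustration rather than drawn from any actual AV system. A rule a human states can be read and audited; a model trained from labeled examples only reproduces whatever pattern the data happened to contain, and it has nothing to say about situations outside that data.

```python
# A minimal sketch, with hypothetical situations and labels, contrasting a rule
# a human writes down with a mapping a machine must extract from training data.
from collections import Counter

def rule_based_decision(obstacle_ahead: bool, can_stop: bool) -> str:
    """A human-written rule: the principle is explicit and auditable."""
    if obstacle_ahead and can_stop:
        return "brake"
    return "continue"

def train_lookup_model(examples):
    """Toy 'training': memorize the majority label for each situation seen."""
    votes = {}
    for situation, label in examples:
        votes.setdefault(situation, Counter())[label] += 1
    return {situation: counts.most_common(1)[0][0]
            for situation, counts in votes.items()}

# Labels come from human annotators; the machine never sees the principle,
# only its traces in the data.
training_data = [
    (("obstacle", "can_stop"), "brake"),
    (("obstacle", "can_stop"), "brake"),
    (("obstacle", "cannot_stop"), "swerve"),
]
model = train_lookup_model(training_data)

print(rule_based_decision(True, True))                        # -> brake
print(model.get(("obstacle", "can_stop"), "undefined"))       # -> brake
print(model.get(("sun_glare", "sensor_fault"), "undefined"))  # -> undefined
```

The last line is the point: the hand-written rule still answers, however imperfectly, in a situation no one anticipated, while the trained mapping simply has nothing to say about cases outside its data.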

The researchers are correct in one regard: “Today, more than ever, computer scientists and engineers find themselves in a position where their work is having major societal consequences.” Your work or mine with machines is unlikely to put society at risk. But autonomous machines come with real risks: they are easily fooled, and the results can be fatal.

Researchers and computer scientists need to stop believing science fiction; they are not creating some new life form but machines, and machines can harm as well as help. They should submit their machines to whatever tests, licensing, and certification our society considers necessary before unleashing their creations.

This should be an uncontroversial idea. We test adults before allowing them to drive. We certify engineers before they build bridges. We even license beauticians before they cut hair. The act of shoving some fraction of our intelligence into a machine does not obviate the need for certification nor does it shift responsibility from the maker to the machine.

The Moral Machine exposes the shallow thinking behind the many promises made for artificial intelligence. Machines are not humans; we must not pretend that they are. Machines can help us do what we do better but they cannot replace that which we alone possess: minds.


Further reading on the Trolley Problem:

Will self-driving cars change moral decision-making? It’s time to separate science fact from science fiction about self-driving cars. (Jay Richards)

There is no universal moral machine. The “Moral Machine” project aimed at righteous self-driving cars revealed stark differences in global values.

and

Who assumes moral responsibility for self-driving cars? Can we discuss this before something happens and everyone is outsourcing the blame? (Jonathan Bartlett)


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked both as a Principal Engineer and Development Manager for industry leaders, such as Microsoft and Amazon, and numerous start-ups. While he spent most of that time working on other types of software, he’s remained engaged and interested in Artificial Intelligence.
