Mind Matters News and Analysis on Natural and Artificial Intelligence

There is no universal moral machine

The “Moral Machine” project, which aimed to teach self-driving cars to make ethical choices, revealed stark differences in global values
Brendan Dixon

In an effort to program self-driving cars to make decisions in a crisis, MIT’s Moral Machine offered 2.3 million people worldwide a chance to crowdsource who to kill and who to spare in a road mishap.

Software architect Brendan Dixon, who wrote about the project in 2016, points out that although the Moral Machine was launched in 2014, the Trolley Problem on which it is based dates to 1967:

The Moral Machine expands the variables of the Trolley Problem, but keeps the basic binary choice and outcome. It presents to you, as a disengaged observer, 13 scenarios a self-driving car may face. In each someone, or some ones, will get hurt, possibly killed. The victims may be in the car or crossing the road in front of the car. They may be old or young, honest or criminal, have the right-of-way or jaywalking, human or animal. The only guarantee is that someone will get hurt, probably killed. And each presents just two choices.

Brendan Dixon, “Artificial Intelligence Has a Morality Problem” at Evolution News and Science Today, October 13, 2016

Forty million decisions later, the results were illuminating but unsettling. As reported at Nature, people from countries like Finland, where national government is strong, were more likely to choose to hit jaywalkers than people from countries like Nigeria, where governments are weaker. But Finns were less likely than Colombians to choose to hit a lower-status person as opposed to a higher-status one. The researchers attributed the difference to greater economic equality in Finland.

People from China and Japan were more likely to spare the old than the young. But in Western cultures, numbers mattered more:

The results showed that participants from individualistic cultures, like the UK and US, placed a stronger emphasis on sparing more lives given all the other choices—perhaps, in the authors’ views, because of the greater emphasis on the value of each individual.

Karen Hao, “Should a self-driving car kill the baby or the grandma? Depends on where you’re from.” at Technology Review

Whatever the causes of cultural differences, Dixon thinks that the Moral Machine presents mere caricatures of moral problems anyway. “The program reduces everything to a question of who gets hurt. There are no shades of gray or degrees of hurt. It is, as is so often with computers, simply black or white, on or off. None of the details that make true moral decisions hard and interesting remain.”

That raises the question of what use the results would be:

Bryant Walker Smith, a law professor at the University of South Carolina in Columbia, is sceptical that the Moral Machine survey will have any practical use. He says that the study is unrealistic because there are few instances in real life in which a vehicle would face a choice between striking two different types of people. “I might as well worry about how automated cars will deal with asteroid strikes,” Walker Smith says.

Amy Maxmen, “Self-driving car dilemmas reveal that moral choices are not universal” at Nature

And that’s assuming that fully self-driving cars are remotely likely any time soon:

It’s important to note that although these decisions will need to be made at some point in the future, self-driving technology still has a way to go. Autonomy is still in its infancy, and self-driving cars (despite public perception) are still prototypes, not products. Experts also say that while it’s not clear how these decisions will be programmed into vehicles in the future, there is a clear need for public consultation and debate.

James Vincent, “Global preferences for who to save in self-driving car crashes revealed” at The Verge

One wonders what kind of a consultation or debate could be held. Should cars be programmed differently in different regions? Should referenda decide the questions? What if public values change over time in a given region? For that matter, what if the values of those individuals who would take part in the Moral Machine differ from those of people who think the whole thing is ridiculous? Perhaps, for the foreseeable future, the most real aspect of the fully self-driving car will be all the debates around it.

See also, by Jonathan Bartlett: Who assumes moral responsibility for self-driving cars? Can we discuss this before something happens and everyone is outsourcing the blame?

Guess what? You already own a self-driving car Tech hype hits the stratosphere

and

Self-driving vehicles are just around the corner… On the other side of a vast chasm
