Mind Matters Reporting on Natural and Artificial Intelligence

Will Self-Driving Cars Change Moral Decision-Making?

It’s time to separate science fact from science fiction about self-driving cars

Irish writer and playwright John Waters has a recent piece in First Things, “Algorithm Religion,” bemoaning the moral implications of ceding our choices to ever-more-sophisticated algorithms, such as those that will presumably guide autonomous cars.

He starts by imagining a scenario in 2032, when fully autonomous cars are a reality. It’s not new; it’s a replay of the “trolley problem” from your college ethics course: A trolley is hurtling down a track toward five people who are tied to the track ahead. A switchman who spots them can flip a switch that will divert the trolley. Alas, there’s a person on that track who would be killed if he flips the switch. So what should the switchman do? If he does nothing, five people will die. If he pulls the switch, he will save the five, but kill one person who would otherwise have lived.

In Waters’ version, a self-driving car must “choose” between a young child and an old man. The car, presumably using utilitarian logic, opts to take out the old man.

By itself, it’s hard to see how Waters presents a new scenario. Surely the dilemma—however we resolve it—is simply transferred to whoever programmed the car. Just as the trolley problem would present a choice for a driver in real time, so too would it present a choice for the programmer ahead of time. Right?
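To make the transfer concrete: if a car’s collision policy were hand-coded in the old-fashioned way, the trolley-style choice would simply be written down in advance by a programmer. The following is a deliberately crude sketch; the function name and the bare casualty-count inputs are illustrative assumptions, not anyone’s actual design.

```python
# Hypothetical sketch: a hand-coded, utilitarian collision rule.
# The moral choice is made explicitly, by a programmer, ahead of time.
# All names and inputs here are illustrative assumptions.

def choose_swerve(straight_casualties: int, swerve_casualties: int) -> bool:
    """Return True to swerve. Crude utilitarian rule: minimize casualties."""
    return swerve_casualties < straight_casualties

# The classic trolley setup: five people ahead, one on the side track.
print(choose_swerve(straight_casualties=5, swerve_casualties=1))  # True: swerve
print(choose_swerve(straight_casualties=1, swerve_casualties=5))  # False: stay
```

Whatever one thinks of the rule itself, the point stands: the dilemma has not vanished into the machine; it has moved upstream to whoever wrote (or trained) the policy.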

Waters doesn’t think so:

Self-driving vehicles are an example of a new category of machine, in that they have access to public thoroughfares on much the same basis as humans: without constraint of track or rail—hence “autonomous.” Computer-generated movement of machines is a brave initiative for all kind of reasons, and will necessitate radical changes in laws and cultural apprehension.

His argument isn’t explicit. However, if I understand his thinking, he sees something sinister in the algorithms used in such technology because they are (and presumably will be) the product of machine learning rather than old-fashioned programming. As a result, they will be, at least in part, “opaque” and “indescribable.” That is, all their details won’t be written by moral agents such as programmers. Some of their rules will emerge, bottom up, through an iterative statistical process. So, he fears a time in the near future when

We will leave it to the computers to decide, and won’t understand or seek to understand the underlying logics being applied. Self-driving cars, though safer in many respects, will become inscrutable to users, pedestrians, and other adjacent humans.

He warns of a time, not so far away, when we might have to grant moral discretion to the opaque algorithms, just as Christians now defer to the all-knowing but often inscrutable decrees of God.

This is overblown. It’s clear that Waters—who some years ago confessed to being a “Luddite”—is taking his cues from a couple of other writers. One is a catastrophist and the other is a techno-utopian.

Based as it is on such third-hand knowledge, Waters’ piece is a member of that species of conservative commentary that naively accepts hype and then responds to it, rather than questioning the hype itself. In particular, he fails to separate science fact from science fiction with respect to algorithms and autonomous cars.

Waters assumes that fully autonomous cars are right around the corner, indeed, are a “virtual … certainty, perhaps within a decade.” If you read Mind Matters, you know that that claim is justifiably disputed. What’s more likely in the near term is that our cars will take on more and more “automated” features that augment our own roles as drivers, without replacing us full stop.

But let’s assume that soon we will all be passengers in cars that we used to drive. Even so, the mere possibility of error is hardly a serious moral quandary. If autonomous cars are, on balance, less lethal than the arrangement we have now, that would be an improvement. It hardly makes one a utilitarian to say so.

In any case, it just doesn’t follow, as Waters muses, that we will find ourselves in a world “in which there is no recourse to justice, reckoning, or even satisfactory closure after algorithms cause death or serious injury. …”

On the contrary, much the same tort and negligence laws would apply, other things being equal, to “self-driving” cars in 2032 as apply to a 2019 Honda Odyssey or, for that matter, to a Samsung refrigerator. Machine learning or not, human choices and design will be all over the technology and will be just as subject to moral scrutiny as they are now. No jury in a tort case thirteen years from now will be moved by a Waymo attorney who argues that “the algorithm will do what the algorithm will do.”

But what if these “self-driving” cars find themselves with trolley problems from time to time? So what? How will the moral situation be different than if the car had a live human driver? The main difference is that the outcome could be subject to simulation, assessment, and the moral judgment of engineers ahead of time, rather than being subject to the panic of a human driver who scarcely has time to think. In other words, the outcome could be subject to some moral judgment, however imperfect. And again, surely that is an improvement.
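The point about simulation can be sketched in a few lines. Unlike a panicked driver, a driving policy can be run against many simulated emergencies and judged before deployment. Everything below—the scenario model, the two candidate policies, the casualty counts—is an illustrative assumption, not a real testing regime.

```python
import random

# Hypothetical sketch: comparing two collision policies over simulated
# emergencies, the kind of ahead-of-time moral assessment a human driver
# in mid-panic cannot perform. All details here are illustrative.

def brake_only(scenario):
    # Policy A: always brake straight ahead, never swerve.
    return scenario["straight"]

def minimize_casualties(scenario):
    # Policy B: crude utilitarian rule, take whichever path harms fewer.
    return min(scenario["straight"], scenario["swerve"])

def expected_casualties(policy, trials=10_000, seed=0):
    # Average casualties across randomly generated emergencies.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        scenario = {"straight": rng.randint(0, 5), "swerve": rng.randint(0, 5)}
        total += policy(scenario)
    return total / trials

# Engineers (and juries) could inspect such comparisons in advance.
print(expected_casualties(brake_only))
print(expected_casualties(minimize_casualties))
```

However crude, this kind of exercise shows why the algorithmic case is, if anything, more open to moral scrutiny than the split-second human one, not less.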

Waters sees a problem here because he has been led astray by academics who assume that strong AI that can think consciously for itself is bound to emerge. They muse about algorithms and machines that somehow become “artificial moral agents.” That’s a myth. Waters’ rhetorical gifts would be better served by a bit less Luddism and a bit more skepticism.


If you enjoyed this piece, you may want to look at some of Mind Matters News’s other recent pieces on self-driving cars:

Would selling self-driving cars sooner save lives? Not if we look more closely at the statistics. It’s enough to make you want to run out and buy a smart car today. But just a minute. There are other statistics out there. Let’s look at some of them. (Brendan Dixon)

Will Industry Pressure Loosen Self-Driving Car Tests? Right now, the regulatory agency is under pressure to accept the industry’s “softball” testing suggestions. (Brendan Dixon)

Are self-driving cars really safer? A former Uber executive says no. Before we throw away the Driver’s Handbook… Current claims that self-driving cars are safer are hype, not measurement. Meanwhile, Congress is expected to push for legislation next month to pave the way for widespread use of self-driving vehicles without a consensus on safety standards. (Brendan Dixon)

Elon Musk walks back full self-driving claims His Q3 earnings call with investors was a stark contrast to earlier claims about a robotaxi fleet. (Jonathan Bartlett)

and

Self-driving cars: Florida lawmakers speed through caution signs: Legislation seems fuzzy about who accepts responsibility when things go wrong with autonomous vehicles. (Jonathan Bartlett)


Jay W. Richards

Senior Fellow, Assistant Research Professor, Executive Editor
Jay Richards, Ph.D., O.P., is an Assistant Research Professor in the School of Business and Economics at The Catholic University of America, Executive Editor of The Stream, and a Senior Fellow at the Discovery Institute, where he works with the Center on Wealth, Poverty and Morality. In addition to writing many academic articles, books, and popular essays on a wide variety of subjects, he edited the award-winning anthology God & Evolution and co-authored The Privileged Planet. His most recent book is The Human Advantage. Richards has a Ph.D., with honors, in philosophy and theology from Princeton Theological Seminary, an M.Div., a Th.M., and a B.A. with majors in Political Science and Religion. He lives with his family in the Washington DC Metro area.
