Who assumes moral responsibility for self-driving cars?

Can we discuss this before something happens and everyone starts outsourcing the blame?

In Part 1, we discussed the fact that the “levels” of self-driving, as identified by the Society of Automotive Engineers, are not evenly incremental. The chasms widen as the levels of self-driving rise. One outcome of ignoring that fact is that the moral chasm that stretches between Levels 3 and 4 is seldom considered. By “moral” chasm, I mean the chasm of personal moral responsibility.

Moral responsibility may involve legal responsibility (or vice versa) but the two are not identical. For example, a driver is legally responsible for rear-ending the vehicle ahead during a multi-car pile-up on black ice. But no one considers the driver morally responsible for the direction in which his car was pushed when struck from behind. Conversely, if drinking and driving is among the probable causes of a collision, we assign the impaired driver moral responsibility even when he is not legally at fault.

As we saw in Part 1, a Level 1 car controls speed or steering but not both, and a Level 2 car controls both speed and steering in very restricted situations. These levels of “self-driving” would probably be more accurately termed “driver assistance,” and some elements of Level 1 have been available since the 1950s. In a Level 3 car, the car is fully automated for some substantial portion of your driving experience, but it must be supervised by the driver. In a Level 4 car, driver supervision is not necessary within defined domains (geographic areas and/or road conditions). Finally, in a Level 5 car, driver supervision is not necessary for anything. The driver can expect to go to sleep in the back and wake up at the intended destination on the other side of the country.

Note the fundamental moral shifts between these levels. Levels 1 and 2 are driver assistance, that is, levels of tools for the driver. As with any tools, they extend and automate the actions of the driver, who bears the entire moral responsibility for their action. For instance, if I fire an automatic rifle, the fact that the rifle automates repeated loading and firing of rounds does not limit my moral culpability for what it does. As the user, I have complete moral responsibility for what happens with any tool under my control.

The picture changes somewhat at Level 3. The moral responsibility for a tool can shift from the user to the creator if the tool acts unexpectedly in a way that cannot be anticipated by the user. Suppose I am using a table saw safely and correctly, with due regard to maintenance, but the blade flies off the machine and injures a bystander. The manufacturer would be morally culpable; the machine should not have been designed that way. Likewise, if a Level 1 or Level 2 car causes damage that the driver cannot anticipate or prevent, then the liability shifts back to the manufacturer.

A Level 3 car is similar to the extent that the driver is expected to supervise the car’s self-driving capability. However, a new issue arises: How much about the car’s behavior can the driver be expected to know? That is, in Level 1 and Level 2 self-driving, the car’s foreseeable abilities are a simple extension of the driver’s direct intentions. The car isn’t doing much else, so it can’t do much that is unexpected. In Level 3, because the car is taking over many of the driving tasks, it may be hard for a driver to know when the car is doing something unexpected. Perhaps the car is correct; it is avoiding a hazard. But perhaps the car is reacting as if it is avoiding a hazard but is actually causing one.

Imagine that your car suddenly swerves. Should you (a) assume the car knows what it is doing and let it swerve, or (b) take control, as the party responsible for the car? Level 3 introduces some moral ambiguity for certain complex behaviors of the car.

At Level 4, however, there is a giant shift of moral responsibility. The car manufacturer advertises that, to use the car properly, the driver should not intervene in the zone in which the car can self-drive. Additionally, the manufacturer is warranting that if the car is in a position where it can no longer operate in Level 4 mode, it can safely pull over to the side of the road and wait for the driver to become available.

This means that when a Level 4 self-driving car crashes while operating in that mode, the damage is the fault of the car, and thus liability goes back to the manufacturer. As long as the driver does not take the wheel in that zone, he is operating the car exactly as the manufacturer recommends. The only thing the driver needs to do is verify that the car is safe for Level 4 operation.

This is a huge moral shift, and I think that many people are taking the transformation too lightly. It may be of great benefit, but it has major implications across the industry. It will affect a driver’s insurance (most likely by reducing it) but will also cause the auto industry’s liability insurance to skyrocket, as manufacturers assume liability for what happens on the road.

If Level 5 cars ever become available, the need for individual insurance will disappear altogether because the only human interface with the car is to tell the car where to go. It would be the moral equivalent of hailing a taxi. Is there any address that a passenger could give to a driver that would make the passenger liable for a traffic mishap? Apart from acts of intentional malice, such as using the vehicle to stalk someone, there is hardly anything dangerous that a user (no longer a driver, really) could do at that point. Even drunk driving would be a thing of the past, as the user is not even expected to be awake.
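For readers who want the argument so far in one compact view, here is a minimal sketch in Python. It is my own illustration: the level descriptions are paraphrased from Part 1, and the responsibility labels encode this article’s argument, not the SAE standard or any legal rule.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingLevel:
    level: int
    capability: str
    driver_must_supervise: bool
    default_responsibility: str  # who bears default moral responsibility in normal use

# Hypothetical labels reflecting this article's argument, not the SAE standard
# or any legal finding.
SAE_LEVELS = [
    DrivingLevel(1, "speed OR steering control (driver assistance)", True, "driver"),
    DrivingLevel(2, "speed AND steering in restricted situations", True, "driver"),
    DrivingLevel(3, "full automation for parts of the trip, supervised", True,
                 "driver (ambiguous for unexpected car behavior)"),
    DrivingLevel(4, "no supervision needed within defined domains", False,
                 "manufacturer (while in self-driving mode)"),
    DrivingLevel(5, "no supervision needed anywhere", False, "manufacturer"),
]

if __name__ == "__main__":
    for entry in SAE_LEVELS:
        print(f"Level {entry.level}: supervised={entry.driver_must_supervise}, "
              f"responsibility -> {entry.default_responsibility}")
```

The only point of the sketch is that the default responsible party flips between Level 3 and Level 4, which is the moral chasm discussed above.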

I would expect Level 5 cars to be rented rather than owned by the user, more often than not. Rental fees would allow car manufacturers to forecast, and charge customers for, their increased and continued liability. Corporate ownership would also enable manufacturers to speedily remove from the road any cars they no longer wish to be liable for.

To return to the more likely scenario of Level 4 self-driving, another moral transition is the response of other drivers to a self-driving car. For instance, in areas where a great deal of heavy automated equipment is in use, warning signs are usually posted. Because they have been warned, those entering the area assume a moral responsibility to be aware of the risks. Should other drivers and pedestrians be warned when Level 4 and Level 5 self-driving cars are in operation? They may need to know what behavior to expect from such cars. Let’s see what happens when we get there, but I expect that a highly visible indicator will need to be affixed to Level 4 and Level 5 cars while in self-driving mode.

To summarize, Level 4 self-driving vehicles will bring with them a giant shift in the moral equation of driving. Unfortunately, in a culture that seems to think that the future will take care of itself, little thoughtful public discussion is taking place. My hope is to start a discussion of how coming technological changes will affect the future moral landscape.

Jonathan Bartlett

Jonathan Bartlett is the Research and Education Director of the Blyth Institute.

Also by Jonathan Bartlett:

Self-driving vehicles are just around the corner… On the other side of a vast chasm

Guess what? You already own a self-driving car: Tech hype hits the stratosphere

and

When machine learning results in mishap: The machine isn’t responsible, but who is? That gets tricky

