Green walk signal at crosswalk (Photo by Basil Samuel Lade on Unsplash)

AI Ethics and the Value of Human Life

Unanticipated consequences will always be a problem for totally autonomous AI
The test vehicle in the fatal accident was an Uber Volvo XC90. Note right front side damage.

Can we measure the value of human life in dollars?

There is a lot of talk about the ethics associated with AI. But AI has no more ethics in principle than a toaster. A toaster can be used to make toast or thrown into a bathtub to electrocute the occupant.

Technology is neither good nor bad in a universal sense, only in how it is used. AI may be used for good to counter evil. But the ethics are ultimately the responsibility of the programmer whose expertise and goals are translated into the machine language of AI. If the programmer is evil, the AI will perform evil tasks. If the programmer is stupid, the AI may do stupid things.

The problem is that even the best AI computer programmers can’t think of all possible outcomes of their choices. Nazi U-Boats in WWII, for example, used acoustic homing torpedoes that listened for engine noise and zeroed in on the loudest target, typically an Allied ship. The problem was that the U-Boat itself had an acoustic signature. Once launched, an acoustic torpedo could pick up the U-Boat’s own engine noise behind it, turn around, and blow up the U-Boat. To avoid mass suicide, U-Boat crews began to shut down their noisy engines after a torpedo launch. After this and other weaknesses were discovered, the torpedo’s acoustic sensor was changed so that it would not activate until the torpedo was far enough away that the U-Boat’s engines were an undetectable whisper.

Anyone who has written and debugged software has experienced unintended performance. You write some code, then run it to see if it is doing what you want. Something unexpected happens. You look at the code again and say, “Of course! I wrote such-and-such a line and the program did exactly that. But I didn’t mean that. Silly me.” So you go back and change the code so that it does what you actually meant.
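A minimal, hypothetical sketch of that experience (the function and the bug below are invented purely for illustration): the program does exactly what was written, not what was meant.

```python
# Hypothetical illustration only: the code does exactly what was
# written, not what the programmer intended.

def average_speed(readings):
    total = 0
    for r in readings:
        total += r
    # Intended: divide by the number of readings.
    # Written:  integer division, silently discarding the fraction.
    return total // len(readings)

print(average_speed([10, 11, 12]))  # prints 11, which looks right
print(average_speed([10, 11]))      # prints 10, not 10.5. "Silly me."
# The fix: change // to / so the code matches the intent.
```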

Consider the following lofty moral guideline for AI robots: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”1 It sounds great. But have we considered all possibilities? Could there be unintended consequences?

What about this? A mass shooter, armed with an AR-15 fitted with a bump stock and multiple high-capacity magazines, enters a church and begins shooting worshippers. The gunman, dastardly deed completed, exits the church. A police officer confronts him and draws her stun gun, intending to incapacitate, secure, and then arrest the shooter.

A nearby robot, observing the action, remembers the command, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” So the robot deflects the officer’s stun gun and her shot goes wild. While she unsnaps another holster to remove her Glock, the killer gets away and is never captured.

Clearly, the no-harm moral guideline must be amended to cover such cases. But the AI can’t do that; only the programmer can. Such detailed guidelines are the stuff of lawmaking where, ideally, all possibilities and special cases are considered. But certainty is never possible; policymakers can only do their best and leave the rest to the courts. AI will never be capable of making judgments outside of what it is programmed to do.
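A rough, purely hypothetical sketch of the point (the rules, field names, and scenario encoding below are invented for illustration): the no-harm guideline has to be amended by the programmer one special case at a time, and the list of special cases is never finished.

```python
# Hypothetical sketch: a hard-coded "no harm" rule and one amendment.
# Every rule, and every exception, must be anticipated by the programmer.

def allow_action_v1(action):
    """Asimov's First Law, encoded naively."""
    if action["harms_human"]:
        return False  # also blocks the officer's stun gun
    return True

def allow_action_v2(action):
    """Amended after the church-shooting scenario is pointed out."""
    if action["harms_human"] and not action.get("lawful_force_by_officer", False):
        return False
    return True
    # ...but what about a bystander tackling the shooter, or a medic
    # performing a painful emergency procedure? Each is another special
    # case the programmer did not anticipate.

stun_gun_shot = {"harms_human": True, "lawful_force_by_officer": True}
print(allow_action_v1(stun_gun_shot))  # False: the robot deflects the shot
print(allow_action_v2(stun_gun_shot))  # True: only after the rule is amended
```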

Unanticipated consequences will always be a problem for totally autonomous AI. Windblown plastic bags are the urban equivalent of the tumbleweed. A self-driving car mistakes a flying bag for a deer and swerves to miss it.2, 3 After making this mistake, the AI can be adapted so as to not repeat this particular mistake. The problem, of course, is that all such contingencies cannot be anticipated. As a result, totally autonomous self-driving cars will always be put into situations where they will kill people.

Should totally autonomous cars be banned? The answer depends on how many people they kill. Human-driven vehicles have never been outlawed because human drivers kill. So totally autonomous self-driving cars might be adopted for mainstream use when they kill significantly fewer people on average than human-driven vehicles.
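As a back-of-the-envelope illustration of what “significantly fewer” might mean: the human-driver baseline below is roughly the published U.S. rate of about 1.1 deaths per 100 million vehicle miles, and the autonomous-fleet numbers are invented purely for the sake of the arithmetic.

```python
# Hypothetical comparison of fatality rates per mile driven.
# The human baseline is approximately the published U.S. figure;
# the autonomous-fleet figures are invented for illustration only.

HUMAN_DEATHS_PER_100M_MILES = 1.1  # assumption: rough U.S. average

def deaths_per_100m_miles(deaths, miles):
    return deaths / (miles / 100_000_000)

# Invented example: 3 deaths over 500 million autonomous miles.
av_rate = deaths_per_100m_miles(3, 500_000_000)
print(f"autonomous: {av_rate:.2f}  human: {HUMAN_DEATHS_PER_100M_MILES:.2f}")
# Only a rate clearly and persistently below the human baseline would
# support the "fewer deaths on average" case for adoption.
```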

How can autonomous cars be tested and approved? The answer is putting a human on the loop. Self-driving cars can ride around with a human overseer at the wheel who takes control when the unanticipated happens. Even then, they can kill.4 In March 2018, an autonomous self-driving car operated by Uber, with a backup driver behind the wheel, hit and killed a woman in Tempe, Arizona. Evidence suggests that the backup driver was not paying attention; inattention is arguably a greater risk in a car that mostly drives itself, where the overseer has little to do, than in an ordinary one.

Returning to the question of valuing a human life in dollars: courts of law have long placed a dollar value on human life in wrongful death cases, and cars will always figure prominently in such cases. In the development of technology generally, there is always a tradeoff in which human life is given a price.

For example, cheap cars aren’t safe and safe cars aren’t cheap. If you want to protect life, drive around in a fully armored Humvee. A new Humvee costs over $100k (triple the price for full armor).5 The thrifty can buy a new Honda motorcycle for under $5k. Both the Humvee and the Honda can make it from Cleveland to Chicago in a day. Most motorists will purchase a vehicle somewhere between the extremes of a Humvee and a Honda. But if society had insisted on the greatest possible safety at all times, the poor could not afford to drive. Is that fair?

This sort of tradeoff means that an unavoidable implicit price is always placed on human life, which is what makes ethics a complex problem. The programmers of the totally autonomous self-driving car will set this implicit price one way or another.

But how can autonomous AI be tested to assure ethically acceptable performance? We’ll come to this in a later post.

———–
1 This is law #1 of Asimov’s Three Laws of Robotics.

2 Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2015)

3 Aaron Mamiit, “Rain or snow? Rock or plastic bag? Google driverless car can’t tell,” Tech Times, September 2, 2014

4 Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times, March 19, 2018

5 Charlie Keyes, “Steep cost of military vehicles outlined in Army report,” CNN, January 27, 2011

Robert J. Marks is the Director of the Walter Bradley Center for Natural and Artificial Intelligence and holds the position of Distinguished Professor of Electrical and Computer Engineering at Baylor University.

Also by Robert Marks: Why we can’t just ban killer robots. Should we develop them for military use? The answer isn’t pretty. It is yes. Autonomous AI weapons are potentially within the reach of terrorists, madmen, and hostile regimes like Iran and North Korea. As with nuclear warheads, we need autonomous AI to counteract possible enemy deployment while avoiding its use ourselves. (Robert Marks)

Killing People and Breaking Things: Modern history suggests that military superiority driven by technology can be a key factor in deterring aggression and preventing mass fatalities (Robert Marks)

and

Top Ten AI hypes of 2018

