
Artificial Ethics May Make Poor Choices

Whether or not AI can become powerful enough to follow its own rules is still an open question

We’ve all heard about computers that make poor ethical choices. One of the most memorable is HAL 9000 in the 1968 classic, 2001: A Space Odyssey.

In the film, HAL kills four humans and attempts to kill a fifth. The novel, written concurrently with the film, elaborates on HAL’s murderous plans, explaining that they arose from HAL’s inability to resolve an ethical dilemma correctly: lie to the humans or kill them (and thus no longer be forced to lie to them).

Poor HAL 9000! If only people had developed a new field of academic inquiry in time to help him (or should we say, “it”?) make better fictional ethical choices!

Putting aside Hollywood’s imaginary universes, the real need for the new field of artificial ethics (AE) presupposes two things. First, that society will allow artificial intelligence (AI) to operate in situations where AI-induced ethical conundrums may arise. Second, and more important, that AI will eventually become powerful enough to be useful. A good example would be AI-powered self-driving cars. If AI becomes powerful enough to control vehicles autonomously in a way that requires difficult ethical decisions based on machine learning rather than mere rule-following, then the second criterion would be satisfied.

Whether or not AI can become powerful enough to follow its own rules is still an open question. Many people, including the authors of this article, feel that the best way to make self-driving AI useful is by way of “virtual rails.” If this is the development path chosen, then the only “decisions” autonomous vehicles could ever make would be the “if-then” rules programmed into the AI ahead of time. For example, if the dilemma confronting the AI is whether to hit a human pushing a stroller or a human walking a dog (a variant of the famous trolley problem), the AI would follow its programming to maximize the protection of human life and swerve to hit the human-dog pair. Any ethical discussion would relate to the original programming, not to the AI’s machine learning, and debate (and it is reasonable to assume there will be debate) would be resolved long before the AI faces an ethical decision.
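To make the “virtual rails” idea concrete, here is a minimal, purely illustrative Python sketch of such a pre-programmed if-then rule. Every name in it (Obstacle, choose_swerve_target, humans_at_risk) is hypothetical; it does not describe any real autonomous-driving system.

```python
# Illustrative sketch only: a fixed "virtual rails" rule decided long before
# deployment. All names are invented for this example.

from dataclasses import dataclass

@dataclass
class Obstacle:
    description: str
    humans_at_risk: int   # number of human lives endangered if struck

def choose_swerve_target(left: Obstacle, right: Obstacle) -> Obstacle:
    """Return the obstacle to steer toward, following a pre-programmed rule:
    minimize the number of human lives endangered."""
    return left if left.humans_at_risk <= right.humans_at_risk else right

# The dilemma from the text: a stroller (two lives) versus a dog walker (one life).
stroller = Obstacle("adult pushing a stroller", humans_at_risk=2)
dog_walker = Obstacle("adult walking a dog", humans_at_risk=1)
print(choose_swerve_target(stroller, dog_walker).description)
# -> adult walking a dog
```

The point of the sketch is that the ethically significant choice lives in a fixed rule that can be read and debated in advance, not in anything the vehicle learns on the road.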

So, even if AI never becomes powerful enough to move beyond mere rule-following, it is fair to ask, “Who will get to determine which choices (the rules programmed into the AI) are ethical and which are not?” For the purposes of discussion, let us assume there will be a multidisciplinary body with a lawful and public mandate to make such decisions. Our concern here is not that body or its decisions, but the methods and means by which those decisions would be implemented in the creation and programming of AI of ever-increasing complexity.

The first step towards implementing AE into AI would be to create a repository of pseudocode for potentially dangerous AI programs—with instructions that could harm human life appropriately flagged. By requiring AI programmers to submit their instructions in pseudocode, a reasonably well-educated citizen could read the flagged portions of the code and determine how the AI would respond in an ethical conundrum.
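To illustrate what such a flagged submission might look like, here is a short, purely hypothetical sketch, written in Python for concreteness rather than in true pseudocode. The flag marker (HUMAN_SAFETY_FLAG) and every identifier are invented for this example; the proposal above concerns the repository itself, not this particular format.

```python
# Hypothetical example of a flagged entry in the proposed repository.
# The HUMAN_SAFETY_FLAG convention and all names are invented for illustration.

def emergency_maneuver(humans_ahead: int, left_lane_clear: bool) -> str:
    """Decision logic written plainly enough for a lay reviewer to audit."""
    if humans_ahead == 0:
        return "brake and hold lane"          # no human life at stake
    if left_lane_clear:
        return "swerve left"                  # avoids all humans
    # HUMAN_SAFETY_FLAG: this branch can direct the vehicle toward a person.
    return "brake hard and steer toward the path with the fewest humans at risk"

print(emergency_maneuver(humans_ahead=1, left_lane_clear=False))
```

A reviewer would need to read only the flagged lines to see how the program behaves when human life is at stake.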

The second step would be to create legal and criminal consequences for the programmers in charge of vetting new AI code. Just as architects and engineers can be held criminally responsible for deaths caused by poorly designed buildings or vehicles, lead programmers would be held responsible for human lives hurt or lost because of poorly programmed AI.

Beyond these first two steps lies a host of difficult, complicated, political, and bureaucratic safeguards that society would need to implement. But even these two steps will be challenging. The first requires transparency from an industry known for carefully guarding its programming secrets; revealing even pseudocode versions of its software would compromise its edge over competitors. The second would surely slow innovation, as programmers agonize over whether their code is not only bug-free but also morally sound.

Indeed, in the scenario above, would you want to bear the legal responsibility of being the lead programmer of the first truly self-driving car?

We know that the maxim “what can be done will be done” has held true in controversial disciplines such as cloning and genetic enhancement, and it will surely rear its ugly head with AI. But, even so, the rules governing human cloning are not worthless, and neither would be rules governing AE for AI. Martin Luther King Jr.’s apt words, “We will have to repent in this generation not merely for the hateful words and actions of the bad people but for the appalling silence of the good people,” could be applied to this situation, too. The actions of bad AI would be lamentable, but the inaction of good humans would be just as tragic. Now is the time for our society to take the first steps toward implementing AE.


Further reading:

The moral machine is bad news for AI ethics. Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence. (Brendan Dixon)

AI ethics and the value of human life. Unanticipated consequences will always be a problem for totally autonomous AI. (Robert J. Marks)


Jed Macosko

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Prof. Jed Macosko holds a B.S. from MIT and a Ph.D. from UC Berkeley. He was a postdoctoral researcher under Prof. Carlos Bustamante, and also under Prof. David Keller. The Macosko-Holzwarth lab is currently focused on the mechanical properties of cancer. Prof. Macosko first collaborated with Dr. George Holzwarth in 2004 to explore how multiple motor proteins pull a single cargo in living cells. Prof. Macosko also partners with the lab of Profs. Keith Bonin and Martin Guthold to develop a new drug discovery platform and with Dr. A. Daniel Johnson of the Department of Biology to develop new teaching technologies.

Bradley A. Alaniz

Brad Alaniz, Commander, US Navy (ret) holds a B.S. from the United States Naval Academy and an M.S. from the Naval Postgraduate School. From 2017 to 2020 he was a Military Professor at the US Naval War College. Since his retirement from Active Duty he has been a civilian Operations Research Analyst exploring new underwater technologies for the US Navy in New London, Connecticut.
