Photo by Nathan Dumlao on Unsplash

AI: Think About Ethics Before Trouble Arises

A machine learning specialist reflects on Micah 6:8 as a guide to developing ethics for the rapidly growing profession

He has shown you, O mortal, what is good.
    And what does the Lord require of you?
To act justly and to love mercy
    and to walk humbly with your God. (Micah 6:8, NIV)

(This talk was given on 10/26/2018 at the 2018 Dale P. Jones Business Ethics Forum at Baylor University in Waco, Texas. The theme of the forum was “The Ethics of Artificial Intelligence.” – ed.)

“Why did the pastor cross the road?

Because he wanted to get his chicken back.”

Don’t worry if you didn’t get the joke. You’re not missing anything. It is just not that funny.

This uncomfortably bad joke was an attempt at improving an even worse joke generated by an artificial neural network, which came up with the original, less intelligible, version:

“What do you call a pastor cross the road?

He take the chicken.”

Yeah. Deep-learning networks and AI systems have begun automating jobs across several industries, but stand-up comedy, at least, will be safe for a while.

Fear of AI’s Future

Jokes aside, many ethical questions surround the development and use of artificial intelligence and machine learning systems. In the popular media, visions of an impending robot takeover often raise concerns over how we should shape AI systems to safeguard the future of humanity. Some wonder whether we should require systems to be embedded with the equivalent of Asimov’s Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Others worry about the Singularity, when superhuman artificial intelligence will render us redundant, or possibly eliminate us entirely. Such concerns are not entirely irrational, but they are a bit premature. Our current systems are not conscious in any way, nor is there a clear path for how to proceed towards that goal. Simply adding more neurons to a deep-learning network isn’t likely to get us there. It remains to be seen whether we can make approximations to minds good enough that they eventually behave like free and conscious beings.

Existing Dangers of AI Systems

The AI and ML systems we have in place today are not sentient, but they are still dangerous. I am not worried about the future of AI, but I am concerned about the dangers artificial learning systems currently pose. There are obvious threats: weaponization, terrorism, fraud. But there are also less intentional threats, such as increased inequality, privacy violations, and negligence resulting in harm. For example, consider the case of the self-driving Uber vehicle that killed an Arizona pedestrian in March. According to some, that fatal crash proved that self-driving technology just isn’t ready yet, and releasing these cars on roads in their current form is an act of extreme negligence.

Cathy O’Neil, in her book Weapons of Math Destruction, highlights several cases of machine learning systems harming poor and minority groups through the uncritical use of questionable correlations in data. For example, models that attempt to predict the likelihood that someone will commit a crime if released from prison (criminal recidivism models) use early run-ins with police as a feature. Because poor minority neighborhoods are more heavily policed, and minority youth are prosecuted more often than their rich white counterparts for the same crimes, such as recreational drug use and possession, this feature correlates strongly with race and poverty, punishing those with the wrong economic background or skin color. Another feature of these systems assesses whether a person lives near others who themselves have had trouble with the police. While this feature may in fact correlate with a higher risk of committing a crime in the future, it punishes the person for what others have done, over which they have no control. Given that poorer people cannot simply move into wealthier neighborhoods to improve their statistics, the poor are doubly punished, simply for being poor. Other examples are not hard to find. Because of these dangers, ethics has become an area of increasing concern for AI researchers.

Why Ethics?

In light of these injustices, current mainstream interest in AI ethics is not surprising. What is surprising, however, is that many of those who call for stronger ethical safeguards, fighting against inequality and speaking out against injustice, also believe there are no objective ethical standards. They hold that all ethics and morals are merely subjective, personal value statements, no more significant than whether or not I like pineapple ice cream. (I do not.) Under this view, all ethics are merely social constructs, bound to particular times and cultures, with no transcendent aspect that grounds them in anything firmer than temporary human biases. So if ethics are nothing but subjective personal biases, why would I want to include them in the systems I build?

The first answer is that ethics are not merely subjective; there are objective ethical standards that transcend time and culture, allowing entire cultures themselves to be condemned, such as the just condemnation of the Nazi culture of death and, in our own time, of the ISIS culture of slavery and violence. If ethics did not transcend cultures, they could not judge between cultures, but we know that they do, and should. Ethics rightly judges between cultures of oppression and their victims. Justice requires it. The second answer to the question of why we’d want to include ethics in our learning systems is much more pragmatic: all learning systems need biases and assumptions to function well, so we lose nothing by including biases that are ethically just.

The Need for Bias in Learning

By the 1980s, we knew that systems that were unbiased — meaning they made no assumptions beyond what was strictly consistent with the training data — were systems that could not learn: they could memorize, but never generalize. When such a system was presented with a new case, one that it hadn’t seen during training, it had no basis for choosing among the competing possibilities for dealing with the new example. The existing training data was of no use because the system was not allowed to make any assumptions about how the observed training data related to unseen test examples. Such systems performed the equivalent of electronic coin flips on novel examples, so the distribution of their errors on unseen cases was exactly the distribution you would get by trying to predict the outcomes of a set of fair coin flips. Biases are always necessary, but they are not always just or merciful. As humans who strive to be just and merciful, we should make these biases reflect our highest ethical ideals, not our worst prejudices.
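To make the contrast concrete, here is a minimal Python sketch (a hypothetical toy example, not drawn from the talk): a bias-free “memorizer” that can only flip a coin on unseen inputs, alongside a nearest-neighbor learner whose built-in assumption, that nearby inputs share labels, lets it generalize.

```python
# Toy illustration: memorization without bias vs. generalization with bias.
import random

# Toy training data: label is 1 when the input is >= 5, else 0.
train = {0: 0, 1: 0, 2: 0, 7: 1, 8: 1, 9: 1}

def memorizer(x):
    """No inductive bias: answers only what it has seen; otherwise flips a coin."""
    if x in train:
        return train[x]
    return random.randint(0, 1)  # electronic coin flip on novel inputs

def nearest_neighbor(x):
    """Biased learner: assumes unseen inputs behave like the closest training point."""
    closest = min(train, key=lambda t: abs(t - x))
    return train[closest]

unseen = [3, 4, 5, 6]
print([memorizer(x) for x in unseen])         # right about half the time, like coin flips
print([nearest_neighbor(x) for x in unseen])  # generalizes via its (possibly wrong) assumption
```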

Which Ethics?

Although we would all agree on the importance of ethics in AI, one lingering question remains: which ethics? There are as many ethical systems as there are individuals, and it is not immediately clear which ethical framework we should operationalize within our learning systems. Although objective ethics exist, we are not guaranteed to know what they are, and we often resist them once they become known, especially if they condemn our own behavior. As a Christian who follows Jesus, I submit to an ethical framework which values human life, tries to protect the poor and vulnerable, and honors integrity. In contrast, an agnostic may have an ethical orientation that values courage, love, and freedom. Deciding how to choose one particular ethical framework over another is a hard problem which we will not solve here. For the sake of this talk, we’ll simply assume that, through some process, a single ethical framework has been identified which we want to incorporate into our AI systems. How do we do this?

Operationalizing Ethics

How can we operationalize a set of ethics in software? This seems like another hard problem, but it is one we must solve in pursuit of justice. So let’s work through one example of what this would look like for a particular set of ethical principles. While we don’t yet have a formula for compiling ethical principles directly into code, we can identify places where free choices are made and show how ethical principles can guide those choices. For concreteness, I will use my own ethical framework, specifically a set of three principles drawn from the Hebrew Bible, namely Micah 6:8, which asks what is required of man. The answer given is that we are to act justly, love mercy, and walk humbly with God. Acting justly; loving mercy; walking humbly — I hope to persuade you that these are ethical standards that could be adopted by persons of any creed, despite their origin in the Judeo-Christian tradition. They will serve as our test case for embedding ethical principles in systems. As a developer of machine learning systems, formerly at Microsoft and now as an academic researcher, I actively shape my systems through the architecture choices I make and the biases, both conscious and unconscious, that I place within them. Not all biases are equal, but as we saw, biases are needed to allow for generalized learning and, I would add, are necessary for just operation in the real world.

Acting Justly

We would like our learning systems to be just. As people who act justly, we should create AI systems that, as reflections of us, also behave justly. How can a system behave justly? By removing systematic prejudices that punish some through no fault of their own. We’ve already seen an example of this, where law enforcement models punish those too poor to move to better neighborhoods. As a result, they receive worse treatment than those living in prosperous areas, even when convicted of similar crimes. We should be vigilant. To do so, we must question the assumptions built into our systems, and never stop questioning them. Revisit them, explain them to others, and provide justifications for them that go beyond system efficiency or accuracy. If you cannot do so, then reject them. Unjust prejudices do not become just simply because they boost the bottom line. External review of code by independent parties often helps, because we all have blind spots that prevent us from fully seeing our own biases in an objective light. Interpretable machine learning systems, such as decision trees and generalized additive models, should be preferred over inscrutable black boxes like deep learning networks wherever issues of justice are concerned. If people cannot be told why a model condemns them, then they cannot prove their own innocence, nor can we be certain that the condemnation is not the result of an inbuilt prejudice. Many learning systems are problematic in this regard, and we must continually fight against this trend.
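As an illustration of what interpretability buys us, here is a minimal sketch assuming scikit-learn is available; the records, feature names, and labels are entirely invented for illustration. The point is that a small decision tree’s complete decision logic can be printed, questioned, and justified (or rejected), which an inscrutable black box does not allow.

```python
# Hypothetical sketch: an interpretable model whose reasoning can be shown to
# the people it affects. Feature names and data are made up.
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic records: [prior_convictions, age_at_first_offense]
X = [[0, 30], [1, 25], [4, 17], [0, 45], [3, 19], [5, 16], [1, 40], [2, 22]]
y = [0, 0, 1, 0, 1, 1, 0, 1]  # toy labels: 1 = predicted re-offense

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Every rule the model uses is visible and can be challenged.
print(export_text(tree, feature_names=["prior_convictions", "age_at_first_offense"]))
```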

Sloth is its own form of injustice. When lives are affected by our algorithms, slackness is not an option, nor is apathy. We’re told that one who is lazy in his work is the brother to him who destroys. When a bridge collapses or an autonomous vehicle kills a pedestrian because of poorly tested software, we see this truth written in blood. Acting justly prohibits short-cuts in the development process. Developers can be lazy, but laziness is no excuse for negligence. So developers, write your unit tests. Document your code. Do code reviews, even though they are unpleasant. Injustice is even more unpleasant.

Truth and justice are twin pillars, supporting and reinforcing one another. What is just can only be established through what is true. False witnesses in a court of law provide an obvious example of lies perverting justice. In pursuit of justice, our systems must be built on truth. This means paying more than lip service to the fact that correlation does not imply causation. Even if A and B are always found together, it does not follow that A causes B; B might cause A, or some other process C might be the cause of both A and B. By pragmatic necessity, most statistical learning models are mere correlation models. Correlation is easy to discover, whereas uncovering causation takes much more deliberate effort and experimentation. When algorithms have real-world consequences, it is important to keep this distinction in mind. We cannot afford to treat correlation as causation; doing so can lead to misguided interventions.

Machine learning researcher Rich Caruana has told the story of an AI system built for a healthcare client, which sought to model the probability of death for patients suffering from pneumonia. Surprisingly, the system discovered that those who had previously been diagnosed with asthma had a lower probability of death: a correlation. Those treating correlation as causation might enact a policy of inducing asthma to reduce the risk of death from pneumonia. Such a policy would be disastrous, inflicting unjust harm, because the truth was the opposite of this hasty conclusion: those with asthma were more likely to pay attention to changes in their breathing, uncovering early signs of pneumonia and leading to better outcomes. Respecting the difference between correlation and causation led Rich and his collaborators to investigate the counterintuitive link and discover the truth, rather than blindly trusting the output of the model.

A more sobering example comes from Bayes’ theorem, a simple, widely used mathematical formula for calculating conditional probabilities, as applied to proposed counter-terrorism efforts. Assume that 99% of terrorists practice a particular religion. Furthermore, assume that, among the general American population, only 1% practice that religion and fewer than one in every hundred thousand Americans are terrorists. If you encounter someone who practices “the religion of terrorists,” how likely is that person to actually be a terrorist? Bayes’ theorem gives us a counterintuitive answer: not very likely. The probability is less than 0.1%, meaning there is over a 99.9% probability that the person who practices that religion is not a terrorist. Therefore, any policy that uses religion as the sole criterion for punishment will wrongly condemn roughly 999 out of every 1,000 people punished.
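The arithmetic can be checked directly. Below is a short Python sketch of the Bayes’ theorem calculation, using the figures assumed above.

```python
# Bayes' theorem check, using the assumptions stated in the text.
p_terrorist = 1e-5          # fewer than 1 in 100,000 Americans
p_religion_given_t = 0.99   # 99% of terrorists practice the religion
p_religion = 0.01           # 1% of the general population practices it

# P(terrorist | religion) = P(religion | terrorist) * P(terrorist) / P(religion)
p_t_given_religion = p_religion_given_t * p_terrorist / p_religion

print(f"P(terrorist | religion) = {p_t_given_religion:.4%}")      # about 0.099%, under 0.1%
print(f"P(innocent  | religion) = {1 - p_t_given_religion:.4%}")  # over 99.9% of those flagged
```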

AI systems can be fooled in the exact same way. Whenever learning systems, even highly accurate systems, try to detect rare events, Bayes’ theorem cautions us that the outcomes detected can overwhelmingly be false positives. Questioning our models and seeking to independently verify the truth of their outputs can save us from committing injustice.

We’ve seen that a just learning system is one whose internal model is interpretable, whose internal biases and assumptions are questioned often and independently justified, whose outputs are verified, and whose behavior is thoroughly tested. The results from these systems are still not blindly trusted as truth itself but are further investigated, bringing to bear our best mathematical and statistical knowledge. Just models must be continually purified of unjust biases, untrue premises, and unjustified gaps in logic.

Loving Mercy

In addition to justice, let us also have mercy. Mercy is refraining from doing what is within our power to do, because of compassion. It is similar to the Christian concept of grace, which is unmerited favor. While mercy withholds a punishment that is rightly deserved, grace forgoes punishment and instead gives a gift which is not deserved. Mercy is a choice, often a difficult choice, to refrain from seeking revenge or retribution. It is a choice not to exterminate an enemy, but to seek to redeem him. It is embodied in the meekness that refuses to press an advantage it possesses, not because doing so would be unfair, but because it would be unkind. As learning systems grow in power, amassing advantages in speed and reach, it is crucial that they embody the virtue of mercy.

Choosing not to exploit the easily exploitable can be viewed as a form of mercy, because those who are driven purely by economics will seek any advantage that produces a profit, with the weak and uneducated suffering the greatest disadvantage. For example, some insurance companies employ learning models to predict how likely someone is to shop around for insurance, and charge more when they think the person is unlikely to do so. What happens when the bulk of those affected are poor and less educated, including recent immigrants who are less likely to realize they’re being ripped off? They’re easy targets. Should our systems exploit their weaknesses, or should we extend mercy?

‘Are there no prisons?’ said the Spirit, turning on [Ebenezer Scrooge] for the last time with his own words. ‘Are there no workhouses?’ / John Leech, 1843, courtesy Dan Calinescu

Whenever we spot potential advantages, mercy should motivate us to stop and weigh carefully what the consequences of using our advantage will be. Just because we can do something, does not mean we should. As someone once wrote, just because I can punch you in the nose does not imply that I should punch you in the nose. Mercy keeps us from punching others even when they deserve it. And how much more so when they don’t.

To love mercy sometimes means to give up efficiency. It could mean losing a few points of model accuracy by refusing to take into account features that invade privacy or are proxies for race, leading to discriminatory model behavior. But that’s OK. The merciful are willing to give up some of their rights and advantages so they can help others.
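One concrete, if modest, way this can play out is simply refusing to feed certain features to a model at all. The sketch below is hypothetical; the column names are invented, and which features count as privacy-invading or as proxies depends entirely on the application.

```python
# Hypothetical sketch: exclude features we are unwilling to justify on grounds
# beyond accuracy, accepting whatever accuracy cost that entails.
EXCLUDED_FEATURES = {"zip_code", "first_language", "browsing_history"}  # assumed proxies

def select_features(record: dict) -> dict:
    """Keep only features whose use we can defend ethically, not just statistically."""
    return {k: v for k, v in record.items() if k not in EXCLUDED_FEATURES}

applicant = {"income": 42000, "zip_code": "75201", "claims_last_5y": 1,
             "first_language": "es"}
print(select_features(applicant))  # {'income': 42000, 'claims_last_5y': 1}
```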

Mercy extends to producing beauty instead of baseness. As AI developers, we can choose how we apply our gifts: in ways that promote human flourishing, or in ways that produce gain at the expense of others. We can choose to build products that increase depression and encourage addiction, or we can focus on systems that help the poor, the lost, and the vulnerable. The latter may not generate as much wealth, but mercy teaches us the nobility of giving up what we might gain for the sake of serving others. As a Christian, I see the mercy extended to us by a suffering Jesus as the most beautiful and noble act in all of history, voluntarily setting aside the glory that was rightfully his in order to take on the form of a servant. Where mercy flows, beauty abounds.

Furthermore, we are not told to simply be merciful, but to love mercy. We should seek out opportunities to be merciful, and not give mercy begrudgingly. Greater power should inspire ever greater mercy.

Walking Humbly

Lastly, let us consider the goodness of walking humbly. Humility is knowing your “right size” in this world, knowing yourself and your limits. This knowledge relates to our learning systems in three ways. First, because our limitations cause bugs, errors, and unfounded assumptions to slip into our work, humility requires that we constantly verify what we’ve done and seek to evaluate it against objective measures. Humble people will rightly recognize their own tendency to err and seek external accountability and review. Second, we will recognize learning systems as expressions of developer bias, or as one author put it, “opinions embedded in mathematics.” Thus, we will be skeptical of our own models and the models of others. We will not suffer the illusion of unbiased objectivity in our models but will examine the biases within them, making them as explicit and open to investigation as possible. Third, we will recognize the limitations of data science itself and the pitfalls of working with incomplete data in a messy world. Noisy data, missing information, and model overfitting are just some of the problems that plague learning systems. Just as we question ourselves and our models, we should also question the data we use and the capabilities of mathematical predictive systems. Humility reminds us that we, the systems we build, and the data we use may all contain error. It is our responsibility not simply to propagate that error, but to uncover it, study it, and eliminate it where possible; where that is not possible, to set the systems aside until it can be. To walk humbly is to question continually.

As truth undergirds freedom, and mercy gives rise to beauty, we see that humility is the good and proper relation of man to that which is not man. Goodness itself is the right relationship between action and obligation, between object and place, between form and purpose. Humility is simply recognizing our true nature and our rightful place, moving ourselves towards where we belong, in relationship, walking alongside the one who created us for that express purpose.

Ethics as the Foundation

Beginning with simple ethical principles, we have seen how ethics can radically inform goals and guide choices when developing learning systems. For those with different ethical frameworks, I challenge you to consider how your particular ethical stance will inform your own system choices. Ethics should be the foundation, not an afterthought. Make the act of embedding ethics explicit, rather than passively allowing your systems to be shaped by mere expedience. Root your perspective and decisions in principles that are more substantial, that bring forth beauty and justice. Working through this exercise together has given you practice. May your systems reflect truth, beauty, and goodness, if not justice, mercy, and humility.



