How does the Golden Rule apply to developers of artificial intelligence (AI)?
To simplify the application, let’s assume there are only two people involved. One runs a small trucking company but also knows how to develop sophisticated AI. This business owner develops an AI-enabled system capable of driving his truck. The other person is the truck driver, whom the owner no longer needs. If the owner believed in following the Golden Rule, how should he treat his driver?
Let’s assume the driver has worked for the company for forty years but is not yet financially ready to retire. A number of answers are possible. Some companies have bridged long-time employees to retirement. The owner might do that for his driver. Some companies have given generous severance packages with various re-employment services. The owner might choose that path. Most would agree that an owner who believed in the Golden Rule would take some actions that soften the impact of bringing in AI and replacing a worker. About the only wrong answer would be keeping all the profit enabled by the use of AI and letting the long-time employee suffer the full impact of being replaced.
Too often the other Golden Rule is applied: “He who has the gold rules!” When this happens, little or no effort is made to soften the impact. All the profits enabled by AI are kept for the business executives and investors. No law requires an employer to soften the blow for a worker replaced by AI, but ethically that is what the Golden Rule instructs us to do.
In reality, our scenario involves far more than two people. There are the teams that develop AI-enabled products. The technology developers work at the direction of their managers, who are seeking to serve the interests of the company’s investors. The technology developed is likely sold to another company that uses it as a component of a system they are making. That system is then sold to a third company that provides the service enabled by the AI. The negative impacts of the AI-enabled system typically reach individuals through an equally complicated set of relationships. In this complex real-world situation, who is responsible for thinking of their neighbor, and how are they to fulfill their ethical responsibility?
Once multiple parties are involved, the ethical consideration encounters political and economic theory. When the responsibility to treat your neighbor as you would be treated is distributed, should that responsibility be made mandatory? If so, who enforces it? Should the government act as an intermediary, transferring money from the beneficiaries to the victims of technology? Who decides how much compensation is adequate reimbursement for the harm suffered? These are complex questions with multiple possible answers. However, the core teaching of the Golden Rule still applies. We should have concern for those our actions impact. When the impact is negative, we should take steps to mitigate the harm done. The Golden Rule teaches that we should think about others and take actions that minimize any harm to them.
When thinking of the impact of new technology, the transient impacts of introduction, the permanent impacts, and the future impacts must all be considered. You sometimes hear, “Well, in the long run it works out.” Yes, but in the very long run we are all dead! Putting people out of a job may resolve itself in time, but a lot depends on how the disruption is managed.
Perhaps the metaphor of a new self-driving car merging onto an interstate is appropriate. If the highway is crowded and the new car just barrels in with no regard for the traffic, it could easily cause unnecessary wrecks. Alternatively, if the new car observes the traffic and enters a gap, there is little disruption. What is a gap in the traffic for a job-replacing AI system? Perhaps it is a job where there is a labor shortage. Would anyone mind AI doing a job that few people want to do? Another kind of traffic gap would be jobs that are very dangerous or detrimental to workers’ health. Does anyone object when they see a bomb squad use a robot to dismantle a bomb? It is just fine with us if the robot gets blown up instead of a person.
Applying the Golden Rule to AI-driven innovation means being mindful of how AI is introduced and what jobs it replaces, especially initially. If the effect of AI is to take old junkers off the road, that is probably a societal benefit, unless you are the owner of the old junker, now stranded on the shoulder of the road, and it is your only means of getting around.
The problem is that AI is developed under that other Golden Rule, “He who has the gold rules,” and in a competitive environment with other AI developers. The most appealing business cases are the ones that will get funded. These business cases are seldom a secret. Other innovators see the opportunity and develop competing solutions. The first company to bring a solution to market enjoys a very significant advantage. Who has the time or energy to worry about social impacts? If you don’t develop the system, others will, and you will be out of business. The focus is drawn to competitors and winning, with little room left for contemplating the impacts of innovative AI technology.
Another class of problem is the future problems created once AI-enabled systems are fully deployed. Many negative results could arise once AI systems become dominant in some arena. Isn’t it possible that self-driving cars and trucks could pack a highway so tightly, with navigation margins so small, that a human-driven car could no longer get on the highway or safely change lanes? To turn to an employment example, consider people with disabilities who can currently find a job that suits their skillset but, in the future, find that such jobs are taken by AI. There could be categories of people who would be permanently frozen out of the workforce. A disability that today limits employment options might, in the future, cut off employment possibilities entirely.
The core question raised in this article is, “Does the Golden Rule apply to AI?” The conclusion: It does. However, the application of the Golden Rule is complicated because it is distributed. Instead of two people, there are many people involved in developing AI-enabled systems. Further, those people play different roles and have different perspectives. Ethically, they all have some responsibility and a function in seeing that, in the end, AI-enabled systems are introduced in ways that mitigate any harm, transient or permanent, that they produce. Figuring out what responsibility any one individual has is difficult, but is it any more difficult than figuring out how to build an AI-enabled system? The situation is complicated by the competitive environment in which AI is developed. Does the Golden Rule provide an exemption when one is in an environment full of distractions and competing pressures? The application of the Golden Rule to AI development is complex and multifaceted, but it is still the right rule to live by. Each participant has a role to play, not only in developing the technology but also in contributing to its ethical introduction and use.