Mind Matters Natural and Artificial Intelligence News and Analysis
Two pilots behind an array of flight controls and computers (Photo by Franz Harvin Aceituna on Unsplash)

A Critic of the Evangelical Statement on AI Misunderstands the Issues

On the question of moral responsibility, Dr. Swamidass seems to misunderstand the Statement entirely

Recently, the Southern Baptist Convention issued a Statement of Principles on artificial intelligence (hereafter, "the Statement"). In a recent Wall Street Journal article, immunologist Joshua Swamidass offers some criticisms of it (hereafter, "the criticism").

The criticism makes some good points — but seems to miss the main point of the Statement.

It focuses on the signatories’ lack of expertise in artificial intelligence. While expertise would be nice, I think the reason for its absence is very simple — the document says almost nothing about AI technology.

The Statement focuses on the ontological statuses — the basic natures — of humanity and machines. If I may sum it up: the advance of technology does not change the ontological status of man or machine, or the relationship between them. Essentially, the Statement merely restates traditional Christian theology regarding humans and their tools, and reasserts it for one specific type of machine, artificial intelligence.

Swamidass’s criticism seems to miss this important distinction. The clearest example is his response to the question of moral responsibility, where he misunderstands the Statement entirely. He writes,

The document also states that “moral decision-making” is the exclusive responsibility of humans. Yet artificial intelligence can maneuver a Tesla. In an accident, the car may need to make moral decisions. Risking the safety of its passengers, should the vehicle dangerously swerve to avoid a pedestrian? The document seems to oppose artificial intelligence where it might delegate moral decisions like this — but it’s not clear. This risks unnecessary prohibitions of life-saving technology.

Joshua Swamidass, “Evangelicals Take On Artificial Intelligence” at Wall Street Journal

Actually, the document is very clear. The entire point of the moral decision-making section is that moral responsibility always belongs to a human. That is, if an AI kills someone, the AI does not deserve the blame. AIs cannot be blamed, nor can they mitigate the moral responsibility that belongs to a human. The Statement is quite clear on this:

We affirm the use of AI to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, which often mimics or exceeds human ability. While AI excels in data-based computation, technology is incapable of possessing the capacity for moral agency or responsibility.

We deny that humans can or should cede our moral accountability or responsibilities to any form of AI that will ever be created. Only humanity will be judged by God on the basis of our actions and that of the tools we create. While technology can be created with a moral use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision making. (emphasis added)

Artificial Intelligence: An Evangelical Statement of Principles (April 11, 2019)

In the case of self-driving technology, humans are not ceding moral accountability to the machines. We are merely making the decisions ahead of time. If those decisions turn out to be wrong, the fact that they were made ahead of time does not remove responsibility. Even if the programmers remove themselves from the equation by delegating decision-making to massive quantities of data, they have not exempted themselves from moral responsibility.


It is true, as we have reported here, that current trends in AI can often leave the chain of moral responsibility ambiguous, especially for consumers. What the Statement does (and the criticism fails to perceive) is reaffirm that one or more humans stand at the end of every moral decision-making process. As we have noted, this is a much-needed corrective, specifically for companies like Tesla, which seem not to understand the point.

So, rather than lacking insight into AI issues, this band of theologians is especially sagacious in applying theology to them. While I do hope that more Christian AI researchers like Swamidass enter the public conversation, I think it particularly shortsighted to think that theology itself doesn’t already provide tools to understand humanity and its relationship with the world.

The idea that theologians shouldn’t comment on anything without first running to experts to tell them what to think has neutered theology for the last century. I for one am glad to encounter theologians who understand both where their expertise lies (in this case, the relationship between humanity and our tools) and how it can be applied in the real world. They have also avoided the historic mistake of underestimating the potential power of new ideas.

See also: New Evangelical Statement on AI is balanced and well-informed (Jay Richards)

Also by Jonathan Bartlett: Who assumes moral responsibility for self-driving cars?

and

Self-driving cars need virtual rails


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software engineer at McElroy Manufacturing. Jonathan is part of McElroy's Digital Products group, where he writes software for their next generation of tablet-controlled machines as well as develops mobile applications for customers to better manage their equipment and jobs. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
