Photo by Kaleb Kendall on Unsplash

Slaughterbots: How far is too far?

And how will we know if we have crossed a line?

Editor’s note: On Monday, Robert J. Marks addressed the question raised by the film Slaughterbots: Is it ethical to develop a swarm of killer AI drones? Tonight, Eric Holloway adds some thoughts:

I think the technology is feasible. Facial recognition and target tracking work as intended, and autonomous drones can fly fairly well. Alarmism will not stop anything from happening.
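To give a sense of how accessible the underlying pieces already are, here is a minimal sketch in Python using the open-source OpenCV library, detecting faces frame by frame in a video feed. The input file name is a hypothetical placeholder, and a real system would have to add target tracking and flight control on top of this; the point is only that the basic perception step is off-the-shelf.

    import cv2

    # Haar cascade face detector that ships with the opencv-python package
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    video = cv2.VideoCapture("drone_feed.mp4")  # hypothetical input file
    frame_index = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Returns one (x, y, w, h) bounding box per detected face
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        print(f"frame {frame_index}: {len(faces)} face(s) detected")
        frame_index += 1
    video.release()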

A lethal autonomous weapon (BAE Systems Corax) during flight testing/Brigadier Lance Mans

The moral argument Dr. Marks raises is a difficult one. Where do we draw the line? If all scientific research and weapon development are justifiable to win a total war, then even something as horrible as Nazi and Communist human experimentation is licit if it means we develop superhuman shock troopers ahead of the enemy. It is the same reasoning that people use to justify experimentation on aborted babies at the NIH, where I work. The justification is that we’ll eradicate horrible genetic diseases.

At some point, we cross a line that makes us worse than anyone we might fight against or any disease or deformity we might eradicate. For example, the bombing of Hiroshima and Nagasaki is not something we can lightly justify even though it ended WWII. It was an extremely grave decision that remains morally controversial even now, despite the fact that it saved millions of Japanese who would have died had the war continued.

On the other hand, Dr. Marks is correct on the main point: Signing treaties to outlaw some avenue of research is pointless. And developing Star Wars-style technology to shoot down ICBMs is certainly an imperative. So, in the case of slaughterbots, research into defensive technologies is clearly acceptable, as is investigative research into possible slaughterbot implementations. However, the active development of slaughterbots for use by US forces is more morally questionable.

Unborn baby at approximately 12 weeks

From a practical perspective, there is no reason to believe the US will remain committed to its founding principles, which today are being rapidly eradicated. So, any technology the US could use for good today could be used for evil tomorrow. A greater focus should be placed on restoring the foundations of our nation than on building superweapons. And the key foundation is all human beings’ right to life.

The US is unique in the entire history of our world in being based on this right to life for all. This is where the importance of AI and ethics comes into the picture: If AI can really copy human abilities, there is no effective difference between humans and computers. Because computers have no inherent moral worth, humans would not have such worth either. If everything is computation, there is no ultimate reason to consider human beings as more worthy of protection than an animal, a bot, or a rock. AI is a universal ethical solvent, and consequently a meta-weapon of the mind, because it justifies the indiscriminate use of any superweapon.

So, the key benefit of the Walter Bradley Center for Natural and Artificial Intelligence is demonstrating that humans cannot be reduced to computation. Thus a single human being is worth more than all the material goods that have existed throughout history. And our ethical decision-making is the true guard against indiscriminate killing.

See also: Slaughterbots: Is it ethical to develop a swarm of killer AI drones? (Robert J. Marks)

Eric Holloway

Eric Holloway has a Ph.D. in Electrical & Computer Engineering from Baylor University. He is a current Captain in the United States Air Force, where he has served in the US and Afghanistan. He is the co-editor of the book Naturalism and Its Alternatives in Scientific Methodologies. Dr. Holloway is an Associate Fellow of the Walter Bradley Center for Natural and Artificial Intelligence.
