
Why we can’t just ban killer robots

Should we develop them for military use? The answer isn’t pretty. It is yes.
Autonomous underwater monitoring vehicle/Woods Hole

My position counters that of over a thousand AI experts who put their X on a letter demanding a ban on all autonomous weapons in 2015. Celebrity signatories included the late Stephen Hawking, Elon Musk, Apple’s Steve Wozniak, Noam Chomsky, and Skype co-founder Jaan Tallinn.

These luminaries are looking at their feet rather than over the landscape of behavioral and historical reality.

First, let’s agree on some definitions. We need to distinguish between autonomous and semiautonomous weapons. Semiautonomous weapons have humans in the decision-making loop, hence the prefix “semi.” This category includes base-station control of missiles outfitted with onboard cameras and the launch of loitering munitions from submerged submarine platforms. Semiautonomous weapons attract less controversy because human judgment is assumed to remain in control. Humans should be in the loop whenever possible.

Contrast that with fully autonomous weapons (“killerbots”). Once deployed, autonomous weapons make decisions on their own, independent of human counsel. A human in the loop can, however, monitor the autonomous weapon’s performance. If the observer has a finger poised over a self-destruct button during the operation, is the autonomous machinery now only semiautonomous? It depends on your dictionary. The question becomes even more confusing when terms like autonomous and semiautonomous are redefined for political and public relations reasons.1

The important distinction is whether a human has control of the AI. Although ultimate human control is a worthy goal for developers of AI weapons, there are times when it is not possible. The required reaction time might be shorter than any human can manage. Two battling drone swarms could include many agents that individually require reaction times in milliseconds to succeed.
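
To get a feel for the timescale, here is a minimal back-of-the-envelope sketch in Python. The closing speed, the roughly 250-millisecond human visual reaction time, and the 5-millisecond onboard decision loop are illustrative assumptions, not figures from any particular system:

    # Rough illustration of why a human in the loop can be too slow for
    # swarm-on-swarm engagements. All numbers are illustrative assumptions.
    drone_speed_m_per_s = 50.0     # assumed closing speed of an incoming drone
    human_reaction_s = 0.25        # typical human visual reaction time (~250 ms)
    machine_reaction_s = 0.005     # assumed onboard decision loop (~5 ms)

    def distance_closed(reaction_time_s: float) -> float:
        """Distance the incoming drone covers before a decision is made."""
        return drone_speed_m_per_s * reaction_time_s

    print(f"Human in the loop: {distance_closed(human_reaction_s):.2f} m closed")
    print(f"Onboard autonomy:  {distance_closed(machine_reaction_s):.2f} m closed")
    # Human in the loop: 12.50 m closed
    # Onboard autonomy:  0.25 m closed

Under those assumptions, an incoming drone closes about fifty times more distance while a human is still reacting than while an onboard controller is deciding.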

AI requires autonomy in other situations as well. Communication with deployed AI can be interrupted, leaving the AI on its own. Indeed, disruption will be one of the enemy’s objectives. The enemy will try to jam friendly communications or spoof them with misleading signals, rendering remote control impossible. For that reason, the very use of signals can be dangerous. Friendly control signals can also be detected by the enemy and used to pinpoint and destroy both the weapon and the control center. For example, if the enemy homes in on the source of radar signals, it can send missiles to destroy the radar facility.

Other scenarios come to mind:

  • A smart missile that depends on GPS for its location may lose its signal and therefore need onboard image recognition to make autonomous decisions.
  • A robot exploring a cave for enemy combatants may be deprived of communication by its environment. In the same way that walls diminish the WiFi signal in your home, thick, damp rock cave walls can weaken radio waves, making communication impossible.
  • Underwater vehicles are limited to acoustic (sound wave) communication, which is extremely slow; a rough latency comparison follows this list. Radio waves travel in water about as well as a laser pointer’s beam goes through chocolate milk. Thus, underwater vehicles (AUVs)2 can require autonomy in some cases.
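
Here is a minimal sketch of the latency gap behind that last point. The speed of sound in seawater (roughly 1,500 m/s) and the speed of light are standard physical values; the 10 km range to the vehicle is an illustrative assumption:

    # Rough one-way latency comparison: an acoustic link underwater versus a
    # radio link in air. The 10 km range is an illustrative assumption.
    distance_m = 10_000                  # assumed range to the vehicle: 10 km
    sound_in_seawater_m_per_s = 1_500    # approximate speed of sound in seawater
    speed_of_light_m_per_s = 3.0e8       # radio waves travel at roughly light speed

    acoustic_latency_s = distance_m / sound_in_seawater_m_per_s
    radio_latency_s = distance_m / speed_of_light_m_per_s

    print(f"Acoustic link (underwater): {acoustic_latency_s:.1f} s one way")
    print(f"Radio link (in air): {radio_latency_s * 1e6:.0f} microseconds one way")
    # Acoustic link (underwater): 6.7 s one way
    # Radio link (in air): 33 microseconds one way

A one-way delay measured in seconds, on top of the very narrow bandwidth of acoustic links, makes tight remote control of a moving vehicle impractical, which is why AUVs must often decide for themselves.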

AUVs have many nonviolent uses, of course, including oil exploration, underwater pipeline inspection, and environmental monitoring. The AUV is also a great way to smuggle drugs across a waterway if a dealer can afford it. The military uses AUVs for defensive purposes such as surveillance and mine detection.

But their offensive uses can be chilling. Imagine a fleet of almost undetectable nuclear-armed AUVs loitering in deep water. They keep moving to escape detection. When a short acoustic code is heard, the AUVs rise to the surface and launch their lethal payload. Military technology must develop a defense strategy against such horrors.

Many authoritative sources are now weighing in. For example, the issue was discussed yesterday by a panel at a meeting of the American Association for the Advancement of Science (AAAS):

None of the panellists said they were seeking to bar the use of robotics in all military applications. However, a sharp moral line should be drawn when it comes to robots or autonomous systems that can decide on their own when to use lethal force without “supervision or meaningful human control,” said Peter Asaro, an associate professor at the New School in New York and co-founder of an organization of scientists and technologists in support of robot arms control.

Ivan Semeniuk, “Scientists call for ban on lethal, autonomous robots” at Washington Post

Public opinion is developing on the issue as well, accompanied by calls for bans and regulation:

There is widespread public support for a ban on so-called “killer robots”, which campaigners say would “cross a moral line” after which it would be difficult to return. Polling across 26 countries found over 60 per cent of the thousands asked opposed lethal autonomous weapons that can kill with no human input, and only around a fifth backed them.

Josh Gabatiss, “World calls for international treaty to stop killer robots before rogue states acquire them” at The Independent

The problem is, constructing offensive autonomous AI weapons is a lot easier than developing the atomic bomb was. Autonomous AI weapons are potentially within the reach of terrorists, madmen, and hostile regimes like Iran and North Korea. As with nuclear warheads, we need autonomous AI to counteract possible enemy deployment while avoiding its use ourselves.

In future posts, I will talk about the ethics and reliability of autonomous AI weapons, which, as you may expect, raise many difficult problems.


1 Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company, 2018.

2 Autonomous underwater vehicle (AUV), National Ocean Service.


Robert J. Marks is the Director of the Walter Bradley Center for Natural and Artificial Intelligence and holds the position of Distinguished Professor of Electrical and Computer Engineering at Baylor University.

Also by Robert J. Marks: Killing People and Breaking Things: Modern history suggests that military superiority driven by technology can be a key factor in deterring aggression and preventing mass fatalities

and

Top Ten AI hypes of 2018
(Robert Marks)

