
US Military Updates Autonomous Killer Robot Policy

Like it or not, properly vetted AI autonomous weapons are in the future of US adversaries and in ours as well

The Pentagon has updated its policy on the use of artificially intelligent autonomous weapons. "Autonomous" means that the weapon can operate on its own, without human oversight.

As unpacked in my book, The Case for Killer Robots, technology wins wars and gives pause to potentially dangerous adversaries. That includes the autonomous AI systems currently under development in the US, China, and elsewhere. When there is an option, a human should be given oversight of the AI to guard against unexpected contingencies. But human oversight is not always wise: (1) when autonomy gives an overwhelming strategic advantage, (2) when communication with remote AI is unwise or impossible, and (3) when events unfold faster than a human can respond. Imagine, for example, being attacked by a large swarm of armed drones. There is no time for a human to aim and shoot at each drone individually. The task must be handed to a fast autonomous weapon that can rapidly aim and fire at each member of the attacking swarm without being slowed by human response time.
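To make the timing argument concrete, here is a back-of-the-envelope sketch in Python. The reaction times and the engagement window are illustrative assumptions of mine, not measured figures, but the arithmetic shows why per-target human response time cannot scale to a swarm.

```python
# Back-of-the-envelope comparison: how many drones in an incoming swarm
# can be engaged in a fixed window, given a per-target reaction time.
# All numbers are illustrative assumptions, not measured values.

HUMAN_SECONDS_PER_TARGET = 3.0    # assumed: spot, aim, and fire manually
MACHINE_SECONDS_PER_TARGET = 0.1  # assumed: automated detect-track-fire cycle
WINDOW_SECONDS = 60.0             # assumed time before the swarm arrives

def targets_engaged(seconds_per_target: float, window: float) -> int:
    """Number of targets a single shooter can service within the window."""
    return int(window // seconds_per_target)

SWARM_SIZE = 100
for label, rate in [("human gunner", HUMAN_SECONDS_PER_TARGET),
                    ("autonomous weapon", MACHINE_SECONDS_PER_TARGET)]:
    hit = min(targets_engaged(rate, WINDOW_SECONDS), SWARM_SIZE)
    print(f"{label}: engages {hit} of {SWARM_SIZE} drones")
# human gunner: engages 20 of 100 drones
# autonomous weapon: engages 100 of 100 drones
```

Under these assumed numbers, a human servicing one target every few seconds stops only a fifth of the swarm; the autonomous system clears all of it with time to spare.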

One of the goals of the Pentagon's Artificial Intelligence Strategy and Implementation Pathway framework is the imposition of design ethics I have previously promoted. Design ethics requires delivered technology to do what it was designed to do and nothing more. That requires deep domain expertise in both the design and the testing of the technology.

The more complicated a technology, the more difficult it becomes to apply design ethics. Technologies with narrow missions are easier to vet. In a peer-reviewed IEEE journal paper, Sam Haug, William Dembski, and I showed that the number of possible contingencies can grow exponentially as a function of system complexity. This means that poorly applied design ethics will likely produce an exponential increase in AI's unintended consequences. So when lives are at stake, it is best to keep the job of an autonomous AI as narrow and simple as possible.
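As a rough illustration of that exponential growth (a toy model of my own, not the model in the paper): if a system must behave correctly across n independent environmental factors, each of which can take k states, the designers face k^n distinct contingencies to vet.

```python
# Toy model of contingency growth (my illustration, not the paper's model):
# a system facing n independent environmental factors, each with k possible
# states, presents k**n distinct contingencies that must be vetted.

def contingencies(n_factors: int, states_per_factor: int) -> int:
    return states_per_factor ** n_factors

for n in (2, 5, 10, 20):
    print(f"{n:>2} factors, 3 states each -> {contingencies(n, 3):,} contingencies")
#  2 factors, 3 states each -> 9 contingencies
#  5 factors, 3 states each -> 243 contingencies
# 10 factors, 3 states each -> 59,049 contingencies
# 20 factors, 3 states each -> 3,486,784,401 contingencies
```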

The Harpy airborne munition is an example of narrow-mission AI. In service for over 30 years, the Harpy has a single job: it loiters over enemy territory until illuminated by enemy radar. Using that illumination as a homing beacon, the Harpy goes kamikaze and destroys the radar installation. The Harpy can operate in a totally autonomous mode with no human oversight.

At the other extreme, self-driving cars are an example of complex AI technology. Complexity is a function of the number of possible environments, and since a moving car can be exposed to an enormous range of conditions, self-driving cars for general use all currently require human oversight. Totally autonomous vehicles (i.e., Level 5 self-driving cars) are not here yet because of the complexity imposed by varying surroundings.
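A toy tally makes the contrast concrete. The factor lists and state counts below are my own rough assumptions, not an engineering taxonomy, but they show how quickly a car's environment space outgrows the Harpy's single trigger.

```python
# Toy tally of environment spaces. Factor lists and state counts are my
# own rough assumptions, not an engineering taxonomy.
from math import prod

harpy_factors = {"enemy radar illuminating": 2}  # the one trigger: on / off

car_factors = {
    "weather": 5,              # e.g., clear, rain, snow, fog, ice
    "lighting": 3,             # day, dusk, night
    "road type": 4,            # highway, urban, rural, construction zone
    "traffic density": 3,      # light, moderate, heavy
    "pedestrian activity": 3,  # none, some, crowded
    "signage condition": 3,    # clear, obscured, missing
}

print("Harpy environment states:", prod(harpy_factors.values()))  # 2
print("Car environment states:  ", prod(car_factors.values()))    # 1,620
```

Even this crude count understates the car's problem, since the factors interact with one another and each adds more states than listed here.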

So a rule of thumb for development of autonomous AI weapons that conform to design ethics is this: keep it simple.

There is a second issue motivating the release of the updated AI weapons policy: trust. Too much policy inertia can keep military strategy from being thoughtfully updated, and hype spawned by media and science fiction can obscure the true limitations of AI. Separating AI fact from AI fiction was a prime motivation for my book Non-Computable You. AI becoming sentient, as fictionally depicted in the Terminator and Matrix movies, will never happen.

In 2020, DARPA, the US military's "Department of Mad Scientists," pushed back against anti-AI sentiment by staging simulated dogfights between experienced fighter pilots and AI. US fighter pilots have a long and proud tradition of big egos doing things a certain way. (Think Tom Cruise and Val Kilmer in Top Gun.) DARPA wanted fighter pilots to be convinced of the capabilities and utility of AI in combat beyond current practice. The contest earned the AI new respect from some of the pilots.

The new Pentagon policy aims to spread informed trust in AI beyond DARPA's efforts. Like it or not, properly vetted autonomous AI weapons are in the future of US adversaries, and by necessity in our future as well.

