
What Can We Learn from History About Stopping AI Warfare?

International agreements can work, but only under certain circumstances

The reach and pervasiveness of the Internet, including the “Internet of Things,” are growing in tandem with the capability and sophistication of artificial intelligence (AI).

Combined, these parallel developments could produce AI that damages property, both digital and physical, and endangers lives. If the past is any guide, the introduction of technologies that can be weaponized may mean that we experience a catastrophic event caused by AI before we take global steps to properly regulate its use.

A catastrophic event caused by AI could be intentional or unintentional on the part of a human actor, an AI, or a combination of both. Although accidents are a valid concern, here we want to focus on the need for an international prohibition on the use of AI to intentionally and directly cause harm.

The prohibition on malicious AI would mirror the international ban on chemical weapons following their catastrophic use in World War I. However, it is our hope that the prohibition can be established before a catastrophic event occurs. Here, we look at three scenarios of intentionally malicious AI use, discuss the requirements of a prohibition on malicious AI, and examine the barriers that need to be overcome in order to enact such a prohibition.

Imagine a country that produces a large fraction of the AI-enabled electronic devices used by other countries (China comes to mind). Now imagine that this country wishes to deter other countries from interfering with its foreign policy. It could design the AI devices to respond to a “master switch” that would cause a catastrophic event (prior to, say, its invasion of a neighboring nation). For example, AI-enabled vehicles could simultaneously accelerate to maximum speed and lock out any attempts to steer or brake.

Another example would be the subversion of AI-controlled infrastructure such as a nation’s power grid or aircraft control system in order to cause disruption, chaos, or even physical damage. Admittedly, such acts of aggression would turn the entire world against the perpetrating country. But Germany’s invasion of Poland (September 1, 1939) and Japan’s surprise attack on Pearl Harbor (December 7, 1941) had similar foreseeable effects—ultimate disaster for the aggressors—and yet the aggressor countries still favored these strategies.

A less overtly aggressive but still catastrophic use of AI would be to embed instructions in AI devices that lead users into vices. For example, the AI produced by the ambitious nation could suggest activities that would weaken family ties (adultery, pornography, etc.) or promote addiction. With the proliferation of streaming media offering customized content for every user, this scenario is likely already possible today.

As mentioned earlier, there have already been efforts to ban Weapons of Mass Destruction (WMD) such as chemical and nuclear weapons, and, with very few exceptions (like chemical weapon use in Iraq in 1988 and in Syria in 2013), these agreements have held. However, other internationally coordinated weapons bans have been persistently and repeatedly ignored by some of the countries that signed them (agreements about land mines and aerial weapons, for instance).

It appears that the key difference between the agreements that have been honored and the agreements that have not is that the honored ones involved weapons of mass destruction. An effective ban on malicious AI requires the global community to first agree that such a form (or use) of AI would be a WMD.

The first step toward an agreement that certain forms of AI meet the criteria of WMD would be a meeting of international AI experts convened with this goal in mind. The meeting would mirror the 1975 Asilomar Conference on Recombinant DNA, where experts agreed that organisms with modified DNA should not be released into the wild.

The concern in 1975 was that human ingenuity could create new life forms capable of taking over the world. The concern today is that humans can make AI that can take over the world.

Since 1975, it has become clear that creating new life forms is not as easy as once presumed. Certainly, humans have the power to cobble together a “superbug”—a bacterium resistant to all known antibiotics—by breeding bacteria that are each resistant to one antibiotic. But the ingenuity required to create a brand-new life form from scratch that can survive on its own, let alone destroy all of humanity, is well beyond our current abilities. Still, the practices put in place after the Asilomar conference were a good example of the precautionary principle, which should also be applied to the dangers of AI.

We authors are not convinced that human-created AI could ever become self-aware and see humankind as a threat that requires elimination. However, even if AI never becomes self-aware, it could become a WMD, like the human-created superbugs and the agents used in biological warfare. Thus, we think the precautionary principle should be applied and the global community should adopt policies to limit the spread of malicious AI. It is not too soon for global AI experts to convene on this matter and for the public to be alerted to their recommendations on how best to contain malicious AI.

Further reading:

Why AI can’t win wars as if wars were chess games

Is Vladimir Putin right? Will whoever leads in AI rule the world? It’s not so simple. (Bradley A. Alaniz and Jed Macosko)


Bradley A. Alaniz

Brad Alaniz, Commander, US Navy (ret) holds a B.S. from the United States Naval Academy and an M.S. from the Naval Postgraduate School. From 2017 to 2020 he was a Military Professor at the US Naval War College. Since his retirement from Active Duty he has been a civilian Operations Research Analyst exploring new underwater technologies for the US Navy in New London, Connecticut.

Jed Macosko

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Prof. Jed Macosko holds a B.S. from MIT and a Ph.D. from UC Berkeley. He was a postdoctoral researcher under Prof. Carlos Bustamante, and also under Prof. David Keller. The Macosko-Holzwarth lab is currently focused on the mechanical properties of cancer. Prof. Macosko first collaborated with Dr. George Holzwarth in 2004 to explore how multiple motor proteins pull a single cargo in living cells. Prof. Macosko also partners with the lab of Profs. Keith Bonin and Martin Guthold to develop a new drug discovery platform and with Dr. A. Daniel Johnson of the Department of Biology to develop new teaching technologies.
