Mind Matters Reporting on Natural and Artificial Intelligence

Iran Conflict Shows Why the US Needs Autonomous Lethal AI Weapons

The bipartisan National Security Commission on Artificial Intelligence recently released a sobering report about the U.S. lagging in development of killer robots

Is there any doubt that Iran would have used artificial intelligence (AI) weapons in its recent attacks against the United States if it had possessed them? We have not yet seen new AI weapons in combat. But we will in the future.

To remain competitive, the U.S. military must respond and adapt to new warfare technology, including weapons that use AI, sometimes called killer robots. This includes autonomous AI that acts without human intervention. Chillingly, unlike atomic weapons, the tools to construct lethal AI weapons are cheap and readily available to all.

There is loud opposition to development of killer robots. Chicken Little headlines scream “We’re running out of time to stop killer robot weapons,” and, in all caps, “KILLER ROBOTS WILL START SLAUGHTERING PEOPLE IF THEY’RE NOT BANNED SOON.” United Nations Secretary General António Guterres warns “machines that have … discretion to take human lives are politically unacceptable. [They] are morally repugnant and should be banned by international law.”

It’s true that international laws and treaties can help. But there will always be despots like Syrian strongman Bashar Assad using outlawed poison gas and North Korea’s Kim Jong Un developing banned atomic bombs. There are no international laws against killer robots yet. China is investing billions into development of killer robots. Besides, does anyone believe the Islamic Republic of Iran will feel bound by international law?

History shows that technical superiority shortens and wins wars. More importantly, it forestalls conflicts by giving pause to potential adversaries. No one wants to get into a fight they know they will lose.

U.S. adversaries are on board. Russian President Vladimir Putin prophesies, “Whoever becomes the leader in this [AI] sphere will become the ruler of the world.” Chinese President Xi Jinping agrees: “… accelerating AI development is a strategic issue to decide whether we can grasp opportunities.”

The U.S. must accelerate the development of AI to maintain military parity and, ultimately, superiority. The bipartisan National Security Commission on Artificial Intelligence recently released a sobering report on the U.S. lag in the development of killer robots. Commission chairman Eric Schmidt warns:

“How the U.S. adopts AI will have profound ramifications for our economic well-being and position in the world.”

He ends with a stark conclusion about the U.S. military’s development of killer robots.

“We’ve got to get this right. It’s not optional.”

Schmidt is right. Like it or not, killer robots are coming one way or another. We ignore this undeniable development at our own peril.

Robert J. Marks II is the author of the free eBook The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI (Discovery Press, 2020) and Director of the Walter Bradley Center for Natural and Artificial Intelligence.

(This piece was originally published at CNS, January 28, 2020)


For more resources and media discussions, see

Book at a Glance: Robert J. Marks’s Killer Robots asks the question most commentators want to avoid: What if ambitious nations such as China and Iran develop lethal AI military technology but the United States does not? Many sources (30 countries, 110+ NGOs, 4500 AI experts, the UN Secretary General, the EU, and 26 Nobel Laureates) have called for these lethal AI weapons to be banned. Dr. Marks disagrees: Deterrence reduces violence, he argues. 

and

Killer Robots on the Radio: The issues around AI in warfare seem fairly simple until we look at them more closely. Can we afford to let hostile powers develop AI warfare and not do so ourselves? Artificial intelligence expert Robert J. Marks has been discussing the issue in podcasts of varying lengths, if you want to listen in.


Robert J. Marks II

Director, Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Besides serving as Director, Robert J. Marks Ph.D. hosts the Mind Matters podcast for the Bradley Center. He is Distinguished Professor of Electrical and Computer Engineering at Baylor University. Marks is a Fellow of both the Institute of Electrical and Electronics Engineers (IEEE) and the Optical Society of America. He was Charter President of the IEEE Neural Networks Council and served as Editor-in-Chief of the IEEE Transactions on Neural Networks. He is coauthor of the books Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks (MIT Press) and Introduction to Evolutionary Informatics (World Scientific). For more information, see Dr. Marks’s expanded bio.
