
Slaughterbots

Is it ethical to develop a swarm of killer AI drones?

Should the development of killer AI robots be banned? If everyone in the world were friendly and followed the Golden Rule, such a ban might work. But there will always be evil humans around, and they will not honor bans.

Just ask Jay Austin and Lauren Geoghegan, who set off on a bicycle trip of love through ISIS-infested Tajikistan. “There’s magic out there, in this great big beautiful world,” wrote Austin. The idealistic couple was proven wrong. They were murdered.

The fallen nature of humanity is central to Judeo-Christian theology. It is a sad but unchangeable condition. Like fire, new technology is neither good nor bad. The good or bad outcome is ultimately the responsibility of a human being.

Learning from History

Some seem to think all new advances in military technology are bad. While I was at the University of Washington in 1988, I received a grant from the Office of Naval Research related to Ronald Reagan’s Strategic Defense Initiative (SDI), better known as “Star Wars.” SDI’s aim was to shoot down threatening missiles aimed at the United States using space-deployed technology. Many of my idealist ivory-tower colleagues at the University of Washington refused to participate in SDI and weren’t happy I was involved. SDI had to do with war, they reasoned, and war was bad. The SDI program, though, turned out to be instrumental in the collapse of the Soviet Union and the end of the Cold War: the Soviet Union figured it would go bankrupt trying to keep up with the US defense budget. The good intentions of my anti-war-at-all-costs colleagues would have prolonged the Cold War.

Long before the Cold War, the US and Nazi Germany were racing to develop an atomic bomb during WWII. The war in Europe ended before Germany succeeded. Suppose an American peace movement had succeeded in banning development of the terrible bomb and the war hadn’t ended when it did. Had the Nazis developed the bomb first, flags in the US today might be sporting swastikas instead of the Stars and Stripes. That’s the scenario depicted in the Amazon alternate-history series The Man in the High Castle, where the Allies have lost WWII because the Nazis won the atomic bomb race.


The atomic bombs dropped on Imperial Japan to win WWII saved the lives of countless Allied soldiers, including my Uncle Junior McHenry. Uncle Junior was a paratrooper trained, for the planned invasion of mainland Japan (the Pacific equivalent of D-Day), to parachute behind enemy lines with 24 pounds of demolition explosives attached to each leg. Given the militancy of Imperial Japan and its people, Uncle Junior’s assignment was essentially a suicide mission. Fortunately, the atomic bomb ended the war with evil Imperial Japan, so Uncle Junior never had to make the jump. He returned home to West Virginia, where he worked as a Greyhound bus driver, married my Aunt Justine, and raised three children. Thousands of other Allied soldiers were likewise blessed with a long life because of the atomic bomb.

It is true that Japanese deaths from the atomic bombs totaled 146,000 in Hiroshima and 80,000 in Nagasaki. But what would have happened without the bomb, which convinced the Japanese to surrender? Historian Philip Jenkins estimates, “…together with likely Japanese fatalities, you get about ten million dead – and that’s a conservative figure. The vast majority of those additional deaths would have been East and South-East Asians, mainly Japanese and Chinese.”

Truman’s directive to drop the bomb had tragic consequences, but his decision looks to have been the right one.

Scary Slaughterbots

The ethical questions around the development of killer AI robots and drone swarms today somewhat parallel those around the creation of the atomic and thermonuclear bombs. Scary headlines are plentiful. The Sun warns us:

“South Korean university is secretly developing killer AI robot army that could destroy humanity.”

The Daily Star trumpets:

“Killer robots poised for ‘MASS PRODUCTION’”

And there is the Independent’s all-caps grabber:

“‘KILLER ROBOTS’ WILL START SLAUGHTERING PEOPLE IF THEY’RE NOT BANNED SOON.”

A potential threat, yes. But we have somehow learned to live with the hydrogen bomb. Can we live in a world with killer AI robots?

Fictional CEO demonstrates microdrone (fair use)

Some think not. Slaughterbots, a slickly produced, Black Mirror-flavored short video, unveils a killer drone about the size of an Oreo cookie. The drone contains embedded AI: facial recognition and the coordination skills to fly as a member of a drone swarm. It also carries a bullet-shaped explosive charge. Once programmed with the face or another identifying characteristic of the target, the drone autonomously flies into a theater of operation, searches for the face, positions itself close to the subject’s forehead, and fires a projectile into the brain. Slaughterbots released in a swarm, it is argued, could win wars quickly, or kill all the Democrats attending a joint session of Congress.
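How might such swarm coordination work? The video does not say, but decentralized flocking algorithms have been openly published since Craig Reynolds’ 1987 “boids” model, in which each agent steers by three purely local rules: cohesion, alignment, and separation. Below is a minimal Python sketch of those three rules; it is my illustration of the general technique, not anything taken from the video:

    import numpy as np

    def boids_step(pos, vel, dt=0.1, r_neighbor=5.0, r_separate=1.0,
                   w_cohesion=0.01, w_align=0.05, w_separate=0.10):
        # pos, vel: (N, 2) arrays of agent positions and velocities.
        new_vel = vel.copy()
        for i in range(len(pos)):
            offsets = pos - pos[i]                     # vectors from agent i to every agent
            dists = np.linalg.norm(offsets, axis=1)
            near = (dists > 0) & (dists < r_neighbor)  # neighbors, excluding self
            if near.any():
                # Cohesion: steer toward the local center of mass.
                new_vel[i] += w_cohesion * offsets[near].mean(axis=0)
                # Alignment: match the neighbors' average velocity.
                new_vel[i] += w_align * (vel[near].mean(axis=0) - vel[i])
                # Separation: push away from agents that are too close.
                crowded = near & (dists < r_separate)
                if crowded.any():
                    new_vel[i] -= w_separate * offsets[crowded].sum(axis=0)
        return pos + new_vel * dt, new_vel

    # Ten agents with random starting positions and velocities.
    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 10.0, (10, 2))
    vel = rng.uniform(-1.0, 1.0, (10, 2))
    for _ in range(100):
        pos, vel = boids_step(pos, vel)

Each agent needs only its neighbors’ positions and velocities, which is why a swarm of this kind has no single controller to disable.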

Slaughterbots has an agenda, of course. It condescendingly presents the drone developers as stereotypical warmongers. Stuart Russell, a professor of computer science at Berkeley, ends the video with an appeal to join the fight against the development of autonomous AI killing machines. A similar plea was made by Elon Musk, who, together with over 100 research scientists, signed a petition to ban the development of killer robots. George Gilder’s observation that “Elon Musk is a tremendous entrepreneur, yet he’s a quite retarded thinker” applies here.

Killer AI Is Already Here

Sobering indeed is the realization that slaughterbots are not science fiction. Given enough development time, we have the technology to build them. Paul Scharre, the author of Army of None: Autonomous Weapons and the Future of War, who was instrumental in fashioning the US Department of Defense’s policy directive on autonomy in weapons, agrees that the technology exists but says the rest of the premises of Slaughterbots range from “questionable… to completely fanciful.” He writes:

The video assumes the following:

  • Governments will mass produce lethal micro-drones to use them as weapons of mass destruction;
  • There are no effective defenses against lethal micro-drones;
  • Governments are incapable of keeping military-grade weapons out of the hands of terrorists;
  • Terrorists are capable of launching large-scale coordinated attacks.

These assumptions range from questionable, at best, to completely fanciful.

Paul Scharre, “Why You Shouldn’t Fear ‘Slaughterbots’” at IEEE Spectrum

Scharre also notes that terrorists already have access to slaughterbot technology. He writes:

There is nothing we can do to keep [slaughterbot-like] … technology out of the hands of would-be terrorists. Just like how terrorists can and do use cars to ram crowds of civilians, the underlying technology to turn hobbyist drones into crude autonomous weapons is already too ubiquitous to stop.

Today, powerful AI technology is already offered free for anyone in the world to use:

Facebook, Amazon, and Microsoft have all, like Google, released as open source software AI that their own engineers use for machine learning. All, including to some extent famously secretive Apple, encourage their AI researchers to openly publish their latest ideas.

Tom Simonite, “Despite Pledging Openness, Companies Rush to Patent AI Tech” at Wired
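To make that point concrete, matching a face against a reference photo takes only a few lines of freely downloadable code. Here is a minimal sketch using the open-source face_recognition Python library, one of many such tools (my choice of example; the article names no specific library, and the file names are placeholders):

    # Sketch only: face_recognition is a free, open-source library built on dlib.
    import face_recognition

    # Encode one reference photo of a known person.
    known_image = face_recognition.load_image_file("reference.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Encode every face detected in a new image (e.g., a video frame).
    frame = face_recognition.load_image_file("crowd_frame.jpg")
    frame_encodings = face_recognition.face_encodings(frame)

    # compare_faces reports a match when the distance between two encodings
    # falls below a tolerance (0.6 by default).
    for encoding in frame_encodings:
        match = face_recognition.compare_faces([known_encoding], encoding)[0]
        print("match" if match else "no match")

That this much capability is a free download away is precisely Scharre’s point: the underlying technology is already too ubiquitous to stop.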

The Answer to Killer AI

So how do we live in a world where AI is tasked to kill?

For threats like slaughterbots, the answer is the development of newer technology. Like it or not, history is replete with accounts of new military technology replacing old. First, there were military airplanes. Then there was radar. Then there was stealth technology to avoid radar. Evil, seeking influence, demands a response, so the technology to provide one must be developed.

During the Cold War, this acceleration of technology was called the arms race. First, there were missiles. Then there were anti-missiles. Then there were anti-missile-missiles. Israel has taken the next step by deploying its Iron Dome missile-defense shield over the country.

Is there existing technology to counter slaughterbots? The Slaughterbots video screams, “NO! We can’t defend ourselves!”

Instead of acting like untutored Chicken Littles, the producers should have asked seasoned domain experts like Paul Scharre. The notion of helplessness against slaughterbots, Scharre insists, is a “farce.”

Every military technology has a countermeasure, and countermeasures against small drones aren’t even hypothetical. The U.S. government is actively working on ways to shoot down, jam, fry, hack, ensnare, or otherwise defeat small drones. The micro-drones in the video could be defeated by something as simple as chicken wire.

Paul Scharre, “Why You Shouldn’t Fear ‘Slaughterbots’” at IEEE Spectrum

One consequence of the development of killer AI might be the frightening but effective mutually assured destruction (MAD) strategy normally associated with thermonuclear weapons. The use of hydrogen bombs is seen as so horrible that they are no longer even tested. Similarly, the use of chemical and biological weapons on the battlefield is banned by treaty. But no matter how good our intentions, evil men like North Korea’s Kim Jong Un will try to build atomic bombs and threaten humanity with them. And murderers like Syria’s Bashar al-Assad will continue to kill with chemical weapons. Treaties and agreements are only of use among the honorable. Ask Neville Chamberlain about his “peace for our time” agreement with Adolf Hitler.

AI and other technology must continue to be developed with never-ending vigilance in order to counter current and potential military and terrorist threats. Corralling technology’s use is the role of political leaders whose negotiation and retaliation power comes largely from military strength. And military power is born of technical superiority—including AI.

Robert J. Marks II, Ph.D., is Distinguished Professor of Engineering in the Department of Electrical & Computer Engineering at Baylor University.  Marks is the founding Director of the Walter Bradley Center for Natural & Artificial Intelligence and hosts the podcast Mind Matters. He is the Editor-in-Chief of BIO-Complexity and the former Editor-in-Chief of the IEEE Transactions on Neural Networks. He served as the first President of the IEEE Neural Networks Council, now the IEEE Computational Intelligence Society. He is a Fellow of the IEEE and a Fellow of the Optical Society of America. His latest book is Introduction to Evolutionary Informatics coauthored with William Dembski and Winston Ewert. A Christian, Marks served for 17 years as the faculty advisor for CRU at the University of Washington and currently is a faculty advisor at Baylor University for the student groups the American Scientific Affiliation and Oso Logos, a Christian apologetics group.

