Image: Swarm of drones surveying, flying over city (Adobe Stock)

Is the U.S. Military Falling Behind in Artificial Intelligence?

What is the likely outcome of allowing those with very different value systems to have control of global AI warfare technology?

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman of the Information Technology and Innovation Foundation, a prominent technology policy think tank, about his recent book, The Case for Killer Robots, a plea for American military brass to see AI as an inevitable part of modern defense strategy, to be managed rather than avoided. (The book may be downloaded free here.)

As they jointly see the problem,

There’s a lot of doomsday hype around artificial intelligence in general, and the idea of so-called “killer robots” has been especially controversial. But when it comes to the ethics of these technologies, one can argue that robots actually could be more ethical than human operators. Humans can commit war crimes. They can deliberately kill innocent people or enemies that have surrendered. Humans get stressed and tired and bring any number of biases to the table. But robots just follow their code. Moreover, U.S. adversaries are deploying these technologies quickly, and stakes are high if we don’t keep up. (August 10, 2020)

In essence: what is the likely outcome if only those with very different value systems control global AI warfare technology? In this first of four parts, we look at where the United States stands with autonomous weapons (starts at 3:38).

A portion of the transcript follows, with notes. The whole transcript is here. More notes and links follow below.

Jackie Whisman: When people hear “killer robots,” they think Terminator; they think science fiction. They think of something that’s far away. While AI will, in all likelihood, never be self-aware, in reality autonomous weapons systems are proceeding and are probably much easier to implement than even self-driving cars… Welcome, professor. Can you explain to us and our listeners: what is AI in the context of military applications?

Robert J. Marks: Well, I think part of this is not engineering; it’s rather history. To remain competitive, the US military needs to respond and adapt to new warfare technology. Military superiority wins and shortens wars. More importantly, it gives pause to potential adversaries. We see this throughout the history of warfare. We don’t have to go back further than World War II to see how technology helped win the war: the cracking of the Nazi Enigma code, the Norden bombsight, which increased the accuracy of dropping bombs, and radar, which at the time was a highly protected invention. And of course, the thing that ended the war was the atomic bomb. So military superiority helps win and shorten wars and gives pause to would-be adversaries. I believe artificial intelligence is now the latest technology to be incorporated into warfare.

Rob Atkinson: Can you say a little bit more on that? Why do we have to do this? Why can’t we decide we’re going to take the lead in AI weapons?

Robert J. Marks: Well, unfortunately I think we’re a little bit behind in AI technology development, at least as far as I can see from what’s available in the media. The New York Times says the autonomous weapons arms race is already taking place. This idea of an arms race, which you mentioned, is ongoing, and it’s typically technological. Andrew Yang, who ran for the Democratic nomination for president, said that the US was losing the AI arms race. I think we’re making steps in the right direction right now, but we do have challenges in terms of the AI arms race. We need to catch up.

Dr. Marks went on to speak about swarm intelligence, one of his areas of expertise:

Robert J. Marks: The one that gives me pause the most is drone swarms. I don’t know if that has been addressed as much as it should be, but this is really scary stuff because swarm intelligence actually has a history in warfare. The RAND Corporation put out a whole book on swarm intelligence for military use.

Note: Dr. Marks has pointed out elsewhere that one of the real advantages of AI drone swarms is that only a few of them need to survive to accomplish their task: “The US Army is developing a ‘swarm’ of autonomous AI drones to protect combat helicopters. The swarm is modeled after social swarming insects like bees and ants who protect their queen. A drone nest protects the queen helicopter at all costs.” (A toy simulation after the transcript illustrates this robustness.)

Robert J. Marks: But the use of swarming drones is really chilling to me because you can take out a few of the elements in the drone swarm and it will still be effective. It’s like kicking over an anthill and stomping on the ants. You can do that a lot, but if you come back in a week, the anthill is reconstructed, and it’s the same with the drones. So they’re very robust. Israel has taken, as I understand it, some steps toward anti-drone defense, but this is something we need to pay more attention to. So there are some specific areas where we can apply artificial intelligence; I believe the drone swarm is one of them, but I think artificial intelligence is applicable across the whole spectrum of applications.

Rob Atkinson: AI is going to be used for much more than just weapons systems. It’ll be used for intelligence gathering. It’ll be used to make sure soldiers have the right sensors on them in the battlefield, a whole set of things. But on the weapons side, your point is an important one: it’s not just offensive weapons; it’ll be used for defensive weapons as well.
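Note: Dr. Marks’s anthill analogy points to what makes decentralized swarms hard to defeat: no single member is essential. The minimal Python sketch below is a toy illustration only, not any actual military or RAND design; the cohesion weights, agent count, and target are arbitrary assumptions. It moves a flock of agents toward a shared target using only local rules, then removes half of them midway. The survivors still arrive.

```python
# Toy swarm simulation (illustrative assumptions only, not a real system).
import math
import random

TARGET = (100.0, 100.0)  # shared objective; arbitrary for this sketch

class Agent:
    """One swarm member; it knows the target and its neighbors, nothing more."""
    def __init__(self):
        self.x = random.uniform(0.0, 10.0)
        self.y = random.uniform(0.0, 10.0)

    def step(self, neighbors):
        # Pull toward the shared objective...
        dx = TARGET[0] - self.x
        dy = TARGET[1] - self.y
        # ...blended with a cohesion pull toward the neighbors' centroid,
        # the kind of local rule swarming insects use (no central controller).
        if neighbors:
            cx = sum(a.x for a in neighbors) / len(neighbors)
            cy = sum(a.y for a in neighbors) / len(neighbors)
            dx = 0.8 * dx + 0.2 * (cx - self.x)  # 0.8/0.2 weights are assumed
            dy = 0.8 * dy + 0.2 * (cy - self.y)
        dist = math.hypot(dx, dy)
        if dist > 0:
            scale = min(1.0, dist) / dist  # move at most one unit per tick
            self.x += dx * scale
            self.y += dy * scale

swarm = [Agent() for _ in range(50)]
for t in range(200):
    if t == 100:
        swarm = swarm[::2]  # "kick over the anthill": destroy half the swarm
    for agent in swarm:
        agent.step([a for a in swarm if a is not agent])

mean_dist = sum(math.hypot(TARGET[0] - a.x, TARGET[1] - a.y)
                for a in swarm) / len(swarm)
print(f"{len(swarm)} agents survive; mean distance to target: {mean_dist:.1f}")
```

The structural point: because every agent runs the same local rule, destroying members shrinks the swarm but cannot decapitate it, which is why anti-swarm defense is harder than disabling a single centrally controlled weapon.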

Next: Part 2: How will we keep legitimate, defensive warfare from morphing into the Terminator?


You may also wish to look at:

Part 2: AI is not nearly smart enough to morph into the Terminator. Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview. AI cannot, for example, handle ambiguities like flubbed headlines that can be read two different ways, Dr. Marks said.

Part 3: Is the research money for AI spent at universities just wasted? A computer engineering prof tells an AI think tank about the time a four-star general visited his university. Robert J. Marks, author of the forthcoming book Supply Side Academics, says that the strong focus on publishing papers in journals doesn’t lead to advances in the discipline.

Part 4: Computer prof: Feds should avoid universities, seek entrepreneurs. Too much time at the university is wasted on getting papers into theoretical journals and not enough on innovation, he contends. Robert J. Marks, author of The Case for Killer Robots and the forthcoming Supply Side Academics, wants a bigger focus on developing practical technologies.

Russia is systematically copying U.S. military AI robotics. In Russia’s top-down system, the military and corporations are essentially part of the same enterprise.

Show Notes

  • 01:19 | Introduction to the podcast topic
  • 02:13 | Introducing Dr. Robert J. Marks
  • 03:38 | AI in military applications
  • 05:07 | Staying ahead in development
  • 06:31 | Major areas of AI in the military
  • 07:10 | Drone swarms
  • 09:26 | Will AI be sentient?
  • 11:30 | Autonomous weapons
  • 16:07 | Ethics
  • 17:48 | The state of AI research
  • 20:31 | Top priority in tech policy

Additional Resources

Podcast Transcript

