Two battling drone swarms can each contain numerous agents that, to be effective in combat, individually require reaction times measured in milliseconds. Humans cannot react quickly enough for one such agent, let alone hundreds of interacting swarm agents. Autonomous operation can therefore be appropriate.
Marks asks us to picture two gunslingers in the Old West, facing each other on Main Street. The faster draw wins. The second-fastest draw is usually dead.
Military strategists describe the decision cycle in responding to a threat as the OODA loop: observe–orient–decide–act.
Swarm conflict, in Marks’s view, is like two teams of gunfighters facing each other. Each side is trying to shoot all the members of the other team. The team with the best OODA loop wins.
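The advantage of a faster OODA loop can be made concrete with a toy model. The sketch below is purely illustrative and not from the article: each agent on each team completes an OODA cycle in a fixed number of milliseconds and removes one opponent at the end of every cycle. All numbers and names are hypothetical, chosen only to show why the team with the shorter loop tends to win the exchange.

```python
# Toy discrete-event model of two teams of agents, each running an OODA
# loop with a fixed cycle time. Hypothetical illustration only.
import heapq

def skirmish(loop_a_ms, loop_b_ms, size_a=5, size_b=5):
    """Return the surviving team ('A' or 'B') when each living agent
    'fires' at the end of every OODA cycle, removing one enemy agent."""
    alive = {"A": size_a, "B": size_b}
    loop = {"A": loop_a_ms, "B": loop_b_ms}
    pending = {"A": size_a, "B": size_b}  # queued fire events per team
    # Every agent starts its first OODA cycle at t = 0.
    events = [(loop[t], t) for t in ("A", "B") for _ in range(alive[t])]
    heapq.heapify(events)
    while alive["A"] and alive["B"]:
        t, team = heapq.heappop(events)
        if pending[team] > alive[team]:
            pending[team] -= 1  # stale event from an eliminated agent
            continue
        enemy = "B" if team == "A" else "A"
        alive[enemy] -= 1  # act: one enemy agent is removed
        # Restart this agent's loop (pop + push leaves pending unchanged).
        heapq.heappush(events, (t + loop[team], team))
    return "A" if alive["A"] else "B"

print(skirmish(loop_a_ms=5, loop_b_ms=8))    # prints "A": faster loop wins
print(skirmish(loop_a_ms=250, loop_b_ms=8))  # prints "B": human-scale vs. millisecond loop
```

In the second call, a team with a roughly human reaction time (hundreds of milliseconds) is wiped out before it completes a single cycle, which is the point Marks is making about swarm-on-swarm combat.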
The U.S. military seems to agree, according to a recent article in Wired:
Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.
So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.

Will Knight, “The Pentagon Inches Toward Letting AI Control Weapons” at Wired (May 10, 2021)
Four-star General John Murray, who leads U.S. Army Futures Command, encourages the development of autonomous AI weapons. He asked an audience at the U.S. Military Academy a pointed question last month:
Murray asked: “Is it within a human’s ability to pick out which ones have to be engaged” and then make 100 individual decisions? “Is it even necessary to have a human in the loop?” he added.

Will Knight, “The Pentagon Inches Toward Letting AI Control Weapons” at Wired (May 10, 2021)
Marks told Mind Matters News, “I was impressed when General Murray visited our lab at Baylor. He was not interested in the number of papers a professor published, but what they did in terms of practical importance and reduction to practice.”
Murray has good reason for thinking ahead. In a test last August, an AI handily defeated a human fighter pilot in a simulated dogfight. If that is where things are headed, he would not want a situation in which only enemy forces had up-to-date AI.
Marks acknowledges the danger in autonomous AI weapons. But, he says, “There is danger in every weapon. The danger is not in the autonomy, but rather whether the weapon will do what it was designed to do and no more. This requires careful design, extensive testing and end user expertise. These are the hallmarks of any design ethics.”
When U.S. drone swarms are involved, he says, they had better be the faster draw.
You may also wish to read: After Thursday’s dogfight, it’s clear: DARPA gets AI right. In the dogfight Thursday between AI and a pilot, AI won. But what does that mean? By posing relevant questions, DARPA’s overall AI strategy accurately embraces both the capabilities and limitations of AI. (Robert J. Marks)