The US Air Force began as Billy Mitchell’s prophecy that air power could decide the next war. That happened, of course, when the B-29 Superfortress dropped the atomic bomb on Hiroshima. Ever since, the USAF has sought to adapt to the latest and most decisive forms of military technology. But the challenges have drastically changed.
Most recently, the Defense Department stood up U.S. Cyber Command because a lone hacker can paralyze an entire nation’s infrastructure, and the new Space Force was established under the Department of the Air Force because a well-placed orbital burst could take down the world’s satellite grid.
Now the USAF is delving into the world of autonomous drones, the next logical step beyond the remotely piloted drones (operated by a controller on the ground, not in the aircraft) that have taken over the skies.
Until now, drones have not been considered a complete replacement for in-cockpit pilots because of slow reaction time. A fighter pilot must react in a split second to put the opponent on the defensive, thus “getting inside the enemy’s OODA (observe, orient, decide, act) loop.” Remotely piloted drones cannot do that because the remote pilot’s reaction, delayed by the communications link, is too sluggish.
Computers, on the other hand, react orders of magnitude faster than humans: their decision cycles are measured in microseconds rather than the hundreds of milliseconds a human needs. So, if computers can make intelligent enough decisions in flight, an autonomous drone could plausibly defeat human fighter pilots.
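A back-of-envelope comparison makes the gap concrete. The numbers below are illustrative assumptions (a typical human visual reaction time and a hypothetical flight-control loop rate), not published figures from any Air Force program:

```python
# Illustrative comparison of human vs. machine reaction time.
# Both numbers are assumptions for the sake of the arithmetic.

human_reaction_s = 0.25            # ~250 ms: a typical human visual reaction time
control_loop_hz = 10_000           # assumed autonomous-drone control loop rate (10 kHz)

machine_reaction_s = 1 / control_loop_hz   # one decision per loop tick: 0.1 ms

speedup = human_reaction_s / machine_reaction_s
print(f"machine decides ~{speedup:,.0f}x faster than a human")
# prints: machine decides ~2,500x faster than a human
```

Even with far more conservative assumptions, the machine’s decision cycle stays several orders of magnitude inside the human one, which is the whole point of the OODA-loop argument.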
This thesis will be put to the test in the near future. Steven Rogers of the Air Force Research Lab has a team working on an autonomous fighter drone. The goal is to have a machine vs. man dogfight in July 2021.
He has led a fundamental research project known as QuEST (Qualia Exploitation of Sensor Technology), which applies the philosophical notion of “qualia” to artificial intelligence. Originally an AI-only project aimed at giving computers some kind of consciousness, it has since morphed into developing decision-making criteria for knowing when to keep a human in the loop with an AI system. Dr. Rogers applied the same trade-off principle in an AI-based startup, which he successfully built and sold, that uses machine learning to detect breast cancer.
If Dr. Rogers’s autonomous (or perhaps semi-autonomous) drones succeed, they could usher in a new stage in warfare, where robotic weapons controlled by augmented human intelligence can outperform standard human-controlled weapons.
We saw such a revolution occur after Deep Blue defeated Garry Kasparov in 1997. Kasparov later held a competition pitting humans, computers, and human-computer hybrid teams against one another. The surprising outcome was that neither chess grandmasters nor supercomputers won. Instead, a team of amateurs who augmented their play with a chess engine defeated all the other entrants.
In the same way, the outcome of future warfare will be decided, not by AI alone, but by finding and optimizing the trade-off between human and artificial intelligence.
The brain exceeds the most powerful computers in efficiency. Human thinking takes vastly less computational effort to arrive at the same conclusions. (Eric Holloway)
Why AI can’t win wars as if wars were chess games: Is Vladimir Putin right? Will whoever leads in AI rule the world? It’s not so simple. (Bradley A. Alaniz and Jed Macosko)
Why I doubt that AI can match the human mind: Computers are exclusively theorem generators, while humans appear to be axiom generators (Jonathan Bartlett)
Book at a Glance: Robert J. Marks’s Killer Robots