In 2017, Russian President Vladimir Putin discussed artificial intelligence (AI) with a student audience, saying, “Whoever becomes the leader in this sphere will become the ruler of the world.” Was this hawkish Russian leader thinking about military uses of AI? And what, exactly, are the implications of AI for global warfare?
As previously discussed, AI can now defeat humans in chess even when the humans have intelligence-augmenting (IA) computers helping them. But the two-decade lag between AI’s defeat of an unaided human (Garry Kasparov, in 1997) and AI’s defeat of IA-human teams demonstrates that the expression IA>AI indeed holds true, at least until technology advances to the next level. At that point, humans are no longer needed for certain tasks, be it playing chess or working on an assembly line. But as AI technology advances, new tasks come into play where the IA>AI expression once again holds true.
So, will the technology we use to fight wars soon advance to the point where humans are no longer needed? Is warfare the task that Putin is referring to when he says that the country that leads in the sphere of AI will rule the world?
We don’t think so.
AI has succeeded at chess, shogi, go, and other board games because these games have clearly defined states. Winning involves determining which of a large number of states is most advantageous; the number is large but not infinite, so a cleverly designed search algorithm can find it. However, at the strategic and operational levels of warfare (the realm of human decision-making that requires creativity and original thought in order to compel or dissuade other humans) there are very few defined states. Furthermore, the number of non-defined, ambiguous states is essentially infinite.
Rule-driven games played in explicitly defined and limited spaces (e.g., chess and go), no matter how complicated, have both finite and clearly defined states. While some very narrowly defined tactical problems in warfare (e.g., the optimal search pattern to locate an enemy submarine) are amenable to function-based optimization enabled by either IA or AI, there is a fundamental gap between these simple problems and the complicated ones that require solutions unreachable by objective-function optimization. Indeed, it is often the case in human conflict that neither side truly understands what it was trying to achieve, even after the fact.
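The contrast can be made concrete with a toy example. In a game with finite, fully defined states, exhaustive search settles every position. The sketch below uses a simple take-away game (a Nim variant chosen purely for illustration, not drawn from the article): players alternate removing one to three stones, and whoever takes the last stone wins.

```python
# Minimal game-tree search sketch: exhaustive search works only because
# this take-away game has a finite, fully defined set of states.
# Rules (illustrative): remove 1-3 stones per turn; taking the last stone wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones: int) -> bool:
    """Return True if the player to move can force a win from this state."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Try every legal move; this state is winning if any move
    # leaves the opponent in a losing state.
    return any(not best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(best_outcome(5))   # the player to move can win (take 1, leaving 4)
print(best_outcome(4))   # every move hands the opponent a winning position
```

No comparable enumeration exists for the ambiguous, effectively infinite states of strategic conflict, which is the gap the argument above turns on.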
Though AI may never figure out the game of war the way it figured out chess, it could still offer a military advantage. Many imagine that autonomous drone swarms will dominate future warfare. But there is another possibility: AI will do to warfare what Big Data did to baseball.
In baseball, the advantage of signing undervalued players (hitters with a high on-base percentage, for example) was discovered through statistical analysis of large amounts of data. With this advantage, the low-budget 2002 Oakland A’s won 20 consecutive games, an American League record at the time. Their dramatic turnaround is chronicled in the book and the movie Moneyball, and their success revolutionized professional baseball. Can AI working on Big Data similarly revolutionize how we fight wars?
Again, we don’t think so.
Big Data is about finding patterns and trends in data. It’s fundamentally about analytics, not about decision-making per se. It absolutely does have a role in informing a decision-maker, whether human, AI, or IA. Big Data can provide very useful tools to a decision-maker, but actually replacing the decision-maker is not one of them.
While we don’t think AI, in conjunction with Big Data, will revolutionize warfare, we do think it is prudent to keep careful watch on developments in this area. The military is already using AI and IA to inform its decisions. Thus, it is crucial to monitor the way in which humans come to depend on increasingly powerful AI and IA tools.
Putin was correct, in a way. Whichever country becomes a leader in the sphere of AI and IA will do well. But any country that ends up mindlessly following the advice of these tools will do so at its own great peril.
(The views expressed in this piece are the authors’ own and do not represent the official views of the Navy or the Department of Defense.)
Also by Jed Macosko: Can Big Data Beat the Humans Who Compile It?
Also by Bradley A. Alaniz and Jed Macosko: Robotic swarms specially adapted to operate in austere environments could be useful responses to natural and manmade disasters
See also: Big Data Can Lie: Simpson’s Paradox (Robert J. Marks). The paradox illustrates the need for seasoned human experts in the loop to examine and query the results from Big Data.