[Image: self-aware android robots patrolling a destroyed city, 3D rendering (Adobe Stock)]

Study: AI Will Make Human Factors More, Not Less, Critical in War

Counterintuitive? Not when we factor in the “fog of war” that makes military situations more confusing than, say, conventional business ones

We sometimes hear that artificial intelligence in the military means that AI takes the risks and does the fighting while humans direct from a safe distance. It sounds reassuring but it’s not likely, say Georgia Institute of Technology cybersecurity professor Jon Lindsay and University of Toronto AI professor Avi Goldfarb:

Many policy makers assume human soldiers could be replaced with automated systems, ideally making militaries less dependent on human labor and more effective on the battlefield. This is called the substitution theory of AI, but Lindsay and Goldfarb state that AI should not be seen as a substitute, but rather a complement to existing human strategy.

“Machines are good at prediction, but they depend on data and judgment, and the most difficult problems in war are information and strategy,” he [Jon Lindsay] said. “The conditions that make AI work in commerce are the conditions that are hardest to meet in a military environment because of its unpredictability.”

Georgia Institute of Technology, “Military cannot rely on AI for strategy or judgment, study suggests” at ScienceDaily (June 14, 2022). The paper is open access.

That unpredictability is called the “fog of war.” As Lindsay puts it, “You need human sense-making and to make moral, ethical, and intellectual decisions in an incredibly confusing, fraught, scary situation.” It may be relevant, Lindsay and Goldfarb point out, that in warfare, intense enemy efforts will be directed at manipulating or disrupting data, making human intervention and judgment even more necessary. Indeed. Science fiction could become science fact if enemy forces hack AI so that it starts providing fatally deceptive information.

They’re not against using AI in warfare. They think it should be used in stable, bureaucratic environments “on a task-by-task basis.”

“All the excitement and the fear are about killer robots and lethal vehicles, but the worst case for military AI in practice is going to be the classically militaristic problems where you’re really dependent on creativity and interpretation,” Lindsay said. “But what we should be looking at is personnel systems, administration, logistics, and repairs.”

Georgia Institute of Technology, “Military cannot rely on AI for strategy or judgment, study suggests” at ScienceDaily (June 14, 2022). The paper is open access.

[Image: soldiers using a drone for scouting]

A critical question the two analysts raise is, “are there components of judgment that cannot be automated?” As military AI continues to be developed, we will probably find out soon enough, perhaps the hard way.

Here’s their abstract, which is admirably easy to read:

Recent scholarship on artificial intelligence (AI) and international security focuses on the political and ethical consequences of replacing human warriors with machines. Yet AI is not a simple substitute for human decision-making. The advances in commercial machine learning that are reducing the costs of statistical prediction are simultaneously increasing the value of data (which enable prediction) and judgment (which determines why prediction matters). But these key complements—quality data and clear judgment—may not be present, or present to the same degree, in the uncertain and conflictual business of war. This has two important strategic implications. First, military organizations that adopt AI will tend to become more complex to accommodate the challenges of data and judgment across a variety of decision-making tasks. Second, data and judgment will tend to become attractive targets in strategic competition. As a result, conflicts involving AI complements are likely to unfold very differently than visions of AI substitution would suggest. Rather than rapid robotic wars and decisive shifts in military power, AI-enabled conflict will likely involve significant uncertainty, organizational friction, and chronic controversy. Greater military reliance on AI will therefore make the human element in war even more important, not less.

Avi Goldfarb, Jon R. Lindsay; Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War. International Security 2022; 46 (3): 7–50. doi: https://doi.org/10.1162/isec_a_00425

You may also wish to read these combat-related posts by Robert J. Marks: DARPA has scheduled AI vs. AI aerial dogfights for next week. A round robin tournament will select the AI that faces off against a human pilot Thursday. A successful AI dogfight tournament is exciting, but it is only a first step toward enabling such fighters to be used in combat. (August 2020)

and

After Thursday’s dogfight, it’s clear: DARPA gets AI right. In the dogfight Thursday between AI and a pilot, AI won. But what does that mean? By posing relevant questions, DARPA’s overall AI strategy accurately embraces both the capabilities and limitations of AI. (August 2020)

Further reading: Why we can’t just ban killer robots. Should we develop them for military use? The answer isn’t pretty. It is yes. (Robert J. Marks) (February 2019)


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
