Image: a military unmanned drone and network distribution in blue hues over a twilight desert (AI-generated). Image credit: Supaporn Chanthra / Adobe Stock

Autonomous AI War Technology Delivers Killer Robots

AI systems are designed to always carry out the mission. A software logic problem that surfaces only in special situations is a worst-case scenario that we must expect.

I’m a lot more worried about evil people with mediocre advances in technology than AI deciding that it’s going to wipe us all out.

Thus spake Palmer Luckey, billionaire founder of defense tech company Anduril Industries, pushing back on fears about AI-powered weapons. His point starkly frames a worldview-level controversy: Which is more dangerous with large-scale weapons, humans or autonomous AI systems?

Luckey’s opinion matters. He spearheads leading-edge innovations in war-making. One Anduril product, “Roadrunner,” is a drone designed to intercept and destroy enemy drones. Powered by twin turbojet engines, Roadrunner can take off vertically, identify and strike a drone, or, if no target is found, return to land and try again.

Roadrunner-M, a high-explosive model, targets any aerial threat.

And then there’s Fury and Dive XL…

Another Anduril product is “Fury,” a multi-mission autonomous air vehicle (AAV) that operates like a fighter aircraft. It can fly over 650 mph at altitudes reaching 50,000 feet. Cooperating with piloted military aircraft, it flies ahead to look for threats and destroy enemy craft before the human pilots arrive.

For undersea operations, Anduril developed “Dive XL,” a military unmanned submarine. In his 60 Minutes interview on May 18, 2025, Luckey also described Dive XL:

It’s not remote-controlled by [a land-based] computer … It’s doing it on the brain on the submarine itself. So, if I told it to go off and perform some mission that’s months-long, like, “Go to this target, listen for this particular signature, and if you see this signature, run; if you see this one, hide; if you see this one, follow it,” it could do that all on its own without being detected, without [my further] communicating with it.

Keyword: Autonomous

All three Anduril products share two key features. One is “Lattice,” AI-powered mission autonomy software, designed to integrate and control autonomous systems across land, sea, air, and cyber domains. Lattice collects and processes data from distributed sensors, satellites, drones, and surveillance radar and camera systems, creating a real-time operational picture. Lattice also uses AI and machine learning to filter high-value information and task assets like drones or ground vehicles without human intervention. All of this technology supports synchronizing several weapons systems to move assets and execute missions faster than humans can.
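Lattice’s internals are not public, but the paragraph above describes a recognizable engineering pattern: fuse many sensor feeds into a single operational picture, rank contacts by value, and assign assets automatically. A minimal sketch of that generic pattern, in Python with every name and number invented (this is not Anduril’s code), might look like this:

```python
# Hypothetical sketch of the generic sensor-fusion / auto-tasking pattern
# described above. It is NOT Lattice's actual design, which is not public.
from dataclasses import dataclass

@dataclass
class Track:
    source: str        # "satellite", "drone", "radar", "camera"
    position: tuple    # (lat, lon)
    confidence: float  # 0.0 - 1.0
    value: float       # estimated military value of the contact

def fuse(reports: list[Track]) -> list[Track]:
    """Build one real-time picture: keep the highest-confidence report
    per position (a crude stand-in for real multi-sensor fusion)."""
    best: dict[tuple, Track] = {}
    for r in reports:
        if r.position not in best or r.confidence > best[r.position].confidence:
            best[r.position] = r
    return list(best.values())

def task_assets(picture: list[Track], assets: list[str]) -> dict[str, Track]:
    """Assign each available asset to the highest-value contact.
    Note that no human-approval step appears anywhere in this loop."""
    ranked = sorted(picture, key=lambda t: t.value, reverse=True)
    return dict(zip(assets, ranked))

reports = [
    Track("radar",     (34.1, -116.2), confidence=0.7, value=0.9),
    Track("satellite", (34.1, -116.2), confidence=0.9, value=0.9),
    Track("drone",     (34.4, -116.0), confidence=0.8, value=0.4),
]
print(task_assets(fuse(reports), assets=["drone-1", "ugv-2"]))
```

The point of the sketch is the absence of any human decision between the fused picture and the tasking order.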

The other key feature is autonomous operations. Roadrunner, Fury, and Dive XL carry onboard AI systems so they can deploy into combat without human direction. In his interview, Luckey confirmed, “It’s the AI onboard all these weapons that makes it possible to make it so easy.”

Luckey assures listeners that Anduril’s weapons all have a “kill switch” for humans to take control just in case. (It is worth wondering what weak links undermine that promise…)

System-Level Risk Factors

In Soulless Intelligence (2024), engineers Bryan Trilli and Greg Trilli warn that AI systems are designed to always carry out the mission. If the programmers specify the mission imperfectly or incompletely in the software, we can expect the AI nevertheless to strive toward that flawed mission. A software logic problem that surfaces only in special situations is the worst-case scenario that we must expect to actually occur.

Essential to a successful mission is knowing when the job is done, or what counts as “enough,” so that the system stops pursuing its goals. But when an AI system constantly receives new and changing data inputs, the goalposts may keep moving.

To see this problem, watch the “Sorcerer’s Apprentice” segment of the 1940 Disney movie Fantasia. Apprentice Mickey Mouse commands the brooms to fetch water, which seems specific enough. But his instructions never define when the brooms should stop. Only the Master Sorcerer could give the correct order to halt them!
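In software terms, the brooms’ flaw is a missing termination condition. A minimal Python sketch, with all names and quantities invented, contrasts the apprentice’s mission as written against the sorcerer’s corrected one:

```python
# Toy illustration of a mis-specified mission (all names hypothetical):
# the goal says what to do but never says when the job is "done."

CAPACITY_LITERS = 100.0   # the stopping condition the sorcerer knows
BUCKET_LITERS = 10.0      # one bucket per trip

def apprentice_mission(trips: int) -> float:
    """Mission as written: 'fetch water.' There is no stop test, so the
    brooms keep hauling for as long as they are allowed to run."""
    cauldron = 0.0
    for _ in range(trips):   # stands in for an unbounded 'while True'
        cauldron += BUCKET_LITERS
    return cauldron

def sorcerer_mission() -> float:
    """Corrected mission: 'fetch water UNTIL the cauldron is full.'"""
    cauldron = 0.0
    while cauldron < CAPACITY_LITERS:   # the missing "enough" test
        cauldron += BUCKET_LITERS
    return cauldron

print(apprentice_mission(trips=50))  # 500.0 -- the workshop floods
print(sorcerer_mission())            # 100.0 -- the system knows to stop
```

The two missions differ by a single comparison, which is exactly the kind of logic an imperfect specification leaves out.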

By definition, autonomous AI systems carry out their tasks without direct human control. AI systems programmed to detect new or returning threats can misidentify a non-threat and respond to it. Or their real-time combat decisions can end up escalating a situation, destroying unintended targets, and killing civilians.
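A deliberately oversimplified sketch, again in Python with invented names and numbers and resembling no real weapons system, shows how a fixed confidence threshold can convert sensor ambiguity into an autonomous engagement:

```python
# Deliberately simplified, hypothetical threat-response logic.
# Nothing here reflects any actual weapons system.

ENGAGE_THRESHOLD = 0.85   # confidence above which the system acts alone

def threat_score(radar_speed_mps: float, transponder_ok: bool) -> float:
    """Crude stand-in for an ML classifier: fast + no transponder = 'threat'."""
    score = min(radar_speed_mps / 300.0, 1.0)
    if not transponder_ok:
        score = min(score + 0.4, 1.0)
    return score

def decide(radar_speed_mps: float, transponder_ok: bool) -> str:
    score = threat_score(radar_speed_mps, transponder_ok)
    # The whole risk lives in this line: no human reviews the decision.
    return "ENGAGE" if score >= ENGAGE_THRESHOLD else "MONITOR"

# A hostile drone: fast, no transponder.
print(decide(250.0, transponder_ok=False))   # ENGAGE

# A civilian aircraft with a failed transponder.
print(decide(160.0, transponder_ok=False))   # ENGAGE -- a non-threat is struck
```

The hostile drone and the civilian aircraft present nearly the same sensor picture, and nothing in the loop asks a human to tell them apart.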

Macro and Long-Term Dangers

Palmer Luckey recommended two policy approaches drawn from his vision for Anduril and national defense weapons strategy. First:

the United States needs to arm our allies and partners around the world so that they can be prickly porcupines that nobody wants to step on, nobody wants to bite them.

Luckey’s view has merit, reminiscent of the “peace through strength” motto and President Roosevelt’s famous “speak softly and carry a big stick.” Armed defenses demonstrably deter aggressors, thus tending to keep the peace. Indeed, “eternal vigilance is the price of liberty.”

Second, Luckey suggested revising U.S. foreign policy: “We need to transition from being the world police to being the world gun store.” But his idea lacks crucial nuance.

A “gun,” meaning a hand-held firearm, is a tool under continuous direct human control. Such weapons empower individual people to defend against aggressors and are widely available.

Luckey’s actual situation is quite different: his company builds autonomous military machines that seek out targets they themselves have chosen, such as buildings, ships, and aircraft, and destroy them without human intervention. Those machines aren’t “guns.”

Becoming the world’s AI weapons superstore can only mean widespread possession and use of such weapons by any well-funded entity anywhere, whether for defense or offense.

Recall that Luckey said he feared “evil people with mediocre advances in technology” more than AI deciding to wipe us all out. The problem is that worldwide deployment of autonomous AI weapons systems from the Anduril superstore would hand those same evil people killer robots with minds of their own.

Stopping the onslaught of AI weapons appears impossible. But, as Soulless Intelligence urges, humanity must face the “alignment problem,” i.e., designing all AI systems to protect human life, liberty, and property except when defending against aggression. AI systems must be programmed to treat all non-aggressor humans as infinitely valuable.

Come to think of it, the humans need that programming, too.


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a retired lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. Holding degrees in computer science (UCSD) and law (USD), Richard practiced civil and administrative law litigation in California and Washington, D.C., taught legal research and writing at George Washington University and George Mason University law schools, and specialized in writing dispositive motion and appellate briefs. Author or co-author of four books, he has written numerous articles and spoken on subjects including intelligent design, artificial and human intelligence, economics, the Bill of Rights and Christian apologetics. Available now at Amazon is his fifth book, Investigation Defense: What to Do When They Question You (2024).
