In November 2022, San Francisco voted to allow police to deploy killer robots. Less than a month later, the city reversed its decision. Initially, in an 8-3 vote, San Francisco’s Board of Supervisors allowed law enforcement to use robots “as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.” That sounds like reasonable policy, but protestors held up “NO KILLER ROBOTS!” signs at City Hall and the Board of Supervisors caved. This may be a case of hoplophobia, an irrational fear of firearms. So-called “killer robots” can deploy explosives to allow passage through blockaded doors or, in extreme situations, kill those who put innocent Read More ›
What we really need is not a Turing test or a Lovelace test, but a Terminator test. Just imagine: if we create an all-powerful artificial intelligence, we cannot assume it will be friendly. We cannot guarantee anything about the AI’s behavior because of a result known as Rice’s theorem. Rice’s theorem states that all non-trivial semantic properties of programs are undecidable. Benevolence is certainly a non-trivial semantic property of programs, which means we cannot guarantee benevolent AIs. Therefore, what we really need is a way to distinguish the all-powerful artificial intelligence from human intelligence, so we can protect ourselves from humanized mass-murdering robots. Let us think about this in terms of test errors. When we perform a test on some Read More ›
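The Rice’s theorem argument above can be sketched as the standard reduction from the halting problem. The sketch below is illustrative only: the names `build_wrapper` and `benevolent_stub` are hypothetical, and the point is the construction, not a real decider (no such decider can exist).

```python
# Sketch: why a hypothetical "benevolence decider" would solve the
# halting problem, which is impossible.
#
# Suppose is_benevolent(prog) could decide whether prog has the
# (non-trivial) benevolence property. For any machine and input we
# could then build a wrapper that first runs the machine:

def build_wrapper(machine, inp, benevolent_stub):
    """Return a program that behaves like benevolent_stub
    if and only if machine halts on inp."""
    def wrapper(x):
        machine(inp)               # diverges if machine never halts on inp
        return benevolent_stub(x)  # otherwise acts exactly like the stub
    return wrapper

# wrapper is benevolent exactly when machine halts on inp, so asking
# is_benevolent(wrapper) would answer the halting question -- a
# contradiction. Hence no general benevolence test can exist.
```

For a machine that does halt, the wrapper simply passes through to the stub, which is what makes the equivalence work:

```python
halting_machine = lambda i: None          # halts immediately on any input
stub = lambda x: "benevolent output"
w = build_wrapper(halting_machine, 0, stub)
w(1)   # behaves exactly like the stub
```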
The article “How an Artificial Superintelligence Might Actually Destroy Humanity” is one of the most irresponsible pieces about AI I have read in the last five years. The author, transhumanist George Dvorsky, builds his argument on a foundation of easily popped balloons. AI is and will remain a tool. Computers can crunch numbers faster than you or me. Alexa saves a lot of time looking up results on the web or playing a selected tune from Spotify. A car – even a bicycle – can go a lot faster than I can run. AI is a tool like fire or electricity used to enhance human performance and improve lifestyles. Like fire and electricity, AI can be used for evil or Read More ›
In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, the Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots—a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. (The book may be downloaded free here.) As they jointly see the problem: “There’s a lot of doomsday hype around artificial intelligence in general, and the idea of so-called ‘killer robots’ has been especially controversial. But when it comes to the ethics of these technologies, one can argue that robots actually could be more ethical than human operators.” Read More ›
Robert J. Marks discusses AI and the military, autonomous weapons, and his book The Case for Killer Robots with hosts Robert D. Atkinson and Jackie Whisman from the Information Technology & Innovation Foundation (ITIF). Dr. Marks’ book The Case for Killer Robots is available at Amazon.com in print, audio and Kindle formats. For a limited time, the Bradley Center is Read More ›
To remain competitive, the U.S. military must respond and adapt to new warfare technology, including weapons using AI, sometimes called killer robots. This includes autonomous AI that acts on its own. Chillingly, unlike atomic weapons, the tools to construct lethal AI weapons are cheap and readily available to all.
In an op-ed at CNS this morning, Walter Bradley Center director Robert J. Marks summarizes his case, as an artificial intelligence expert, that the United States must remain competitive in military AI or, as it is called, “killer robots,” because hostile nations are forging ahead.
Autonomous AI weapons are potentially within the reach of terrorists, madmen, and hostile regimes like Iran and North Korea. As with nuclear warheads, we need autonomous AI to counteract possible enemy deployment while avoiding its use ourselves.