Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Killer robots

Illustration of a technological eye, close up, future concept, generative AI

Megan Review, Part 1

An AI doll that does more than just play.

Since it’s nearing Halloween, I figured now would be a good time to review some sci-fi movies that dabble in the horror genre. Megan came out in 2022 and has been referred to as Chucky for Zoomers. The premise is the same as that of the 1988 horror movie Child’s Play: a child gets a doll; the doll turns psychotic and kills people. It’s pretty straightforward. However, Megan differs by adding a technological twist, calling back to the creepy Furbies, which came out in 1998. Really, those awful toys should’ve had a horror movie of their own. There is many a tale of the mechanical monsters waking up under the bed in the dead of night six months after the poor child Read More ›

Military bomb-defusing robot with a shepherd dog in the background.

How San Francisco’s Gun Fears Prevented Lifesaving Innovation

Killer robots in law enforcement would reduce the death toll, but they are a bridge too far for many politicians

In November 2022, San Francisco voted to allow police to deploy killer robots. Less than a month later, the city reversed its decision. Initially, in an 8-3 vote, San Francisco’s Board of Supervisors allowed law enforcement to use robots “as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.” Sounds like reasonable policy, but protesters held up “NO KILLER ROBOTS!” signs at City Hall and the Board of Supervisors caved. This may be a case of hoplophobia, an irrational fear of firearms. So-called “killer robots” can deploy explosives to allow passage through blockaded doors or, in extreme situations, kill those who put innocent Read More ›

3D illustration of a robot eye

Move Over Turing and Lovelace – We Need a Terminator Test

More research should be devoted to a Terminator test to mitigate the threat of an unfriendly, all-powerful artificial intelligence

What we really need is not a Turing test or a Lovelace test, but a Terminator test. Just imagine. If we create an all-powerful artificial intelligence, we cannot assume it will be friendly. We cannot guarantee anything about the AI’s behavior due to something known as Rice’s theorem. Rice’s theorem states that all non-trivial semantic properties of programs are undecidable. Benevolence is certainly a non-trivial semantic property of programs, which means we cannot guarantee benevolent AIs. Therefore, what we really need is a way to distinguish the all-powerful artificial intelligence from human intelligence, so we can protect ourselves from humanized, mass-murdering robots. Let us think about this in terms of test errors. When we perform a test on some Read More ›
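For readers unfamiliar with Rice’s theorem, the standard argument behind it is a reduction from the halting problem. The Python sketch below is only an illustration of that argument, not anything from the article itself; is_benevolent, run_program, and act_benevolently are hypothetical placeholders, assuming for the sake of contradiction that a perfect benevolence checker existed.

```python
# Minimal sketch (hypothetical names throughout) of the standard reduction
# behind Rice's theorem: if a perfect "benevolence checker" existed, it
# could be used to decide the halting problem, which is impossible.

def is_benevolent(program_source: str) -> bool:
    """Hypothetical oracle deciding the semantic property 'benevolent'.
    Rice's theorem implies no always-correct, always-terminating version
    of this function can exist for any non-trivial semantic property."""
    raise NotImplementedError("no algorithm can implement this correctly")


def halts(program_source: str, input_data: str) -> bool:
    """Reduction sketch: decide whether program_source halts on input_data
    by asking is_benevolent about a wrapper program that first runs the
    candidate program and only then behaves benevolently."""
    wrapper = f"""
def wrapped():
    run_program({program_source!r}, {input_data!r})  # hypothetical interpreter
    act_benevolently()                               # assumed-benevolent behavior
"""
    # The wrapper is benevolent exactly when the embedded program halts,
    # so a working is_benevolent would also decide halting -- a contradiction.
    return is_benevolent(wrapper)
```

Because no such checker can exist, any practical test for a hostile AI has to be empirical rather than a proof, which is presumably where the article’s discussion of test errors picks up.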

Robot playing chess

Chicken Little AI Dystopians: Is the Sky Really Falling?

Futurist claims about human-destroying superintelligence are uninformed and irresponsible

The article “How an Artificial Superintelligence Might Actually Destroy Humanity” is one of the most irresponsible pieces about AI I have read in the last five years. The author, transhumanist George Dvorsky, builds his argument on a foundation of easily popped balloons. AI is and will remain a tool. Computers can crunch numbers faster than you or I can. Alexa saves a lot of time looking up results on the web or playing a selected tune from Spotify. A car – even a bicycle – can go a lot faster than I can run. AI is a tool like fire or electricity, used to enhance human performance and improve lifestyles. Like fire and electricity, AI can be used for evil or Read More ›

Swarm of drones surveying, flying over a city

Is the U.S. Military Falling Behind in Artificial Intelligence?

What is the likely outcome of allowing those with very different value systems to have control of global AI warfare technology?

In a recent podcast, Walter Bradley Center director Robert J. Marks spoke with Robert D. Atkinson and Jackie Whisman at the prominent AI think tank, the Information Technology and Innovation Foundation, about his recent book, The Case for Killer Robots—a plea for American military brass to see that AI is an inevitable part of modern defense strategies, to be managed rather than avoided. (The book may be downloaded free here.) As they jointly see the problem: “There’s a lot of doomsday hype around artificial intelligence in general, and the idea of so-called ‘killer robots’ has been especially controversial. But when it comes to the ethics of these technologies, one can argue that robots actually could be more ethical than human operators.” Read More ›

Dozens of drones swarm in the cloudy sky.

Robert J. Marks on Killer Robots

Robert J. Marks discusses AI and the military, autonomous weapons, and his book The Case for Killer Robots with hosts Robert D. Atkinson and Jackie Whisman from the Information Technology & Innovation Foundation (ITIF). Dr. Marks’ book The Case for Killer Robots is available at Amazon.com in print, audio and Kindle formats. For a limited time, the Bradley Center is Read More ›

Drone monitoring barbed wire fence on state border or restricted area. Modern technology for security.

Iran Conflict Shows Why the US Needs Autonomous Lethal AI Weapons

The bipartisan National Security Commission on Artificial Intelligence recently released a sobering report about the U.S. lagging in development of killer robots

To remain competitive, the U.S. military must respond and adapt to new warfare technology, including weapons that use AI, sometimes called killer robots. This includes autonomous AI that acts on its own. Chillingly, unlike atomic weapons, the tools to construct lethal AI weapons are cheap and readily available to all.

Read More ›
Military drone in the blue sky, 3D rendering

Robert J. Marks: Peace May Depend on Killer Robots

Calls for a ban on killer robots impact the United States but not the non-democratic nations who are developing them now

In an op-ed at CNS this morning, Walter Bradley Center director Robert J. Marks summarizes his case, as an artificial intelligence expert, that the United States must remain competitive in military AI, or “killer robots” as they are called, because hostile nations are forging ahead.

Read More ›
Missile explosion

Why we can’t just ban killer robots

Should we develop them for military use? The answer isn’t pretty. It is yes.

Autonomous AI weapons are potentially within the reach of terrorists, madmen, and hostile regimes like Iran and North Korea. As with nuclear warheads, we need autonomous AI to counteract possible enemy deployment while avoiding its use ourselves.

Read More ›