
Move Over Turing and Lovelace – We Need a Terminator Test

More research should be devoted to a Terminator test to mitigate the threat of an unfriendly, all-powerful artificial intelligence

What we really need is not a Turing test or a Lovelace test, but a Terminator test. Just imagine: if we create an all-powerful artificial intelligence, we cannot assume it will be friendly. In fact, we cannot guarantee anything about the AI’s behavior, thanks to a result known as Rice’s theorem.

Rice’s theorem states that all non-trivial semantic properties of programs are undecidable. A semantic property is a property of what a program does rather than how it is written, and a non-trivial one holds for some programs but not others. Benevolence is certainly a non-trivial semantic property of programs, which means no algorithm can verify it, and so we cannot guarantee benevolent AIs. Therefore, what we really need is a way to distinguish an all-powerful artificial intelligence from human intelligence, so we can protect ourselves from humanized, mass-murdering robots.
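To make this concrete, here is a minimal sketch, in Python, of the standard reduction behind Rice’s theorem. The decider is_benevolent is hypothetical (no such function can exist); the point is that if it did exist, it could be repurposed to solve the halting problem, which Turing proved impossible.

    def is_benevolent(program) -> bool:
        """Hypothetical decider: True iff program behaves benevolently."""
        raise NotImplementedError("ruled out by Rice's theorem")

    def make_candidate(program, data):
        """Build a program that is benevolent iff program halts on data."""
        def candidate():
            program(data)            # first, run program on data...
            print("be benevolent")   # ...then behave benevolently (a stand-in)
        return candidate

    def halts(program, data) -> bool:
        # candidate is benevolent exactly when program(data) halts, so a
        # benevolence decider would double as a halting decider, which
        # Turing proved impossible. Hence no benevolence decider exists.
        return is_benevolent(make_candidate(program, data))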

Let us think about this in terms of test errors. When we perform a test on some aspect of reality, the outcome of the test is either positive or negative. If the outcome matches reality, then that is great, and the test tells us useful information. If it does not, there are two ways the test can be wrong.

The first way the test can be wrong is by giving us a false positive. This means the test outcome is positive, but reality is actually negative. In the context of our Terminator test, this means the test says the being is a human when in reality it is a terminator. We definitely want to avoid this outcome; otherwise the robots will destroy us all.

The second way the test can be wrong is by giving us a false negative, the converse of the previous scenario. The test outcome is negative, whereas reality is positive. In context, the Terminator test says a being is not a human when in reality it is. Since our primary goal is to identify terminators, misidentifying humans as terminators is not such a big deal.
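To summarize the four possible outcomes, here is a minimal sketch, using the convention that a positive result means the test declares the subject human:

    def classify(test_says_human: bool, is_human: bool) -> str:
        """Name the outcome of one run of a (hypothetical) Terminator test."""
        if test_says_human and is_human:
            return "true positive"    # a human correctly passes
        if test_says_human and not is_human:
            return "false positive"   # a terminator passes: catastrophic
        if not test_says_human and is_human:
            return "false negative"   # a human is flagged: merely inconvenient
        return "true negative"        # a terminator is correctly flagged

    # The one outcome a Terminator test must never produce:
    assert classify(True, False) == "false positive"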

The Terminator robot from the movie “The Terminator” (1984)

In the movie, the humans use dogs to detect the terminators, but eventually the robots figure out how to use organic skin to fool the dogs. We can imagine the same thing happening with any external test of a terminator’s appearance. So, to build a test that does not give us false positives, we need to look inward, to the fundamental limits of computers, and there are a lot of them.

An idealized computer is a Turing machine. Turing machines have both logical limits and performance limits. The most well-known are the halting problem and NP-completeness. The halting problem is completely unsolvable by computers, and NP-completeness means many important problems become intractable long before they reach useful sizes. Furthermore, there are many problems that humans solve routinely that fall into these categories. So, if we want a good place to look for Terminator tests, this is it.
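For the curious reader, here is a minimal sketch of Turing’s classic diagonal argument, showing why the halting problem is unsolvable. As before, halts is hypothetical; the paradox function is exactly what rules it out.

    def halts(program, data) -> bool:
        """Hypothetical decider: True iff program(data) eventually halts."""
        raise NotImplementedError("ruled out by the halting problem")

    def paradox(program):
        if halts(program, program):   # if program halts when run on itself...
            while True:               # ...then loop forever
                pass
        # ...otherwise halt immediately

    # paradox(paradox) halts if and only if it does not halt: a contradiction,
    # so no correct implementation of halts can exist.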

Yet, despite the threat of AI wiping out humanity, and the fecundity of possible applications, there is zero research into Terminator tests. So, move over Turing and Lovelace tests. These will do nothing to save us. I challenge you, technically astute reader, to prevent the extinction of the human race and develop a Terminator test.


You may also like to read:

We Need a Better Test for True AI Intelligence. The difficulty is that intelligence, like randomness, is mathematically undefinable. The operation of human intelligence must be non-physical because it transcends Turing machines, which in turn transcend every physical mechanism. (Eric Holloway)

“Friendly” Artificial Intelligence Would Kill Us. Is that a shocking idea? Let’s follow the logic. We don’t want to invent a stupid god who accidentally turns the universe into grey glue or paperclips, but any god we create in our image will be just as incompetent and evil as we are. (Eric Holloway)

AI Is Not Nearly Smart Enough to Morph Into the Terminator. Computer engineering prof Robert J. Marks offers some illustrations in an ITIF think tank interview. AI cannot, for example, handle ambiguities like flubbed headlines that can be read two different ways, Dr. Marks said.

A Scientific Test for True Intelligence. A scientific test should identify precisely what humans can do that computers cannot, avoiding subjective opinion. The “broken checkerboard” is not the ultimate scientific test for intelligence that we need, but it is a truly scientific test. (Eric Holloway)


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
