Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Terminator test


CAPTCHA: How Fooling Machines Is Different From Fooling Humans

Automated censorship is intended to protect against a tidal wave of spam, but it could certainly have other uses…

Readers of Mind Matters News have likely heard of the iconic Turing test. Computer pioneer Alan Turing famously invented a test to determine whether a program could pass as a human. The gist is, if a program can fool human testers into believing it is a human, then the program is intelligent. Not everyone is convinced. The thing is, it doesn’t take much to fool us humans! Take Eliza, a program of only a few hundred lines, written in the 1960s, which fooled many people into believing it was a real human therapist. But what if we flip the Turing test on its head? Instead of a test where a program tries to pass as human, we use a test…
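To see how little machinery it takes, here is a minimal Eliza-style responder: a few pattern-to-reflection rules plus a stock fallback. The rules below are illustrative stand-ins, not Weizenbaum's original script.

```python
import re

# A handful of pattern -> reflection rules (illustrative, not the real Eliza script).
RULES = [
    (re.compile(r"\bI need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a canned 'therapist' reply by reflecting the user's own words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default prompt keeps the conversation moving

print(respond("I need a break"))       # Why do you need a break?
print(respond("The weather is nice"))  # Please go on.
```

A few hundred rules of this shape, with pronoun reflection added, is essentially the whole trick that convinced people they were talking to a therapist.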


Move Over Turing and Lovelace – We Need a Terminator Test

More research should be spent on a Terminator test to mitigate the threat of an unfriendly, all-powerful artificial intelligence

What we really need is not a Turing test or a Lovelace test, but a Terminator test. Just imagine. If we create an all-powerful artificial intelligence, we cannot assume it will be friendly. We cannot guarantee anything about the AI’s behavior due to something known as Rice’s theorem. Rice’s theorem states that all non-trivial semantic properties of programs are undecidable. Benevolence is certainly a non-trivial semantic property of programs, which means we cannot guarantee benevolent AIs. Therefore, what we really need is a way to distinguish the all-powerful artificial intelligence from human intelligence, so we can protect ourselves from humanized mass-murdering robots. Let us think about this in terms of test errors. When we perform a test on some…
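The reasoning behind Rice's theorem can be sketched as a reduction from the halting problem. The gadget below exhibits a "benevolent" behavior if and only if an arbitrary program halts on a given input, so any total decider for benevolence would also decide the halting problem, which is impossible. The function names here (`make_gadget`, `benevolent_action`) are illustrative assumptions, not an actual benevolence test.

```python
def make_gadget(program, input_data, benevolent_action):
    """Rice's-theorem-style reduction gadget (illustrative sketch).

    The returned function performs `benevolent_action` if and only if
    `program` halts on `input_data`.  A total decider for 'is benevolent'
    applied to this gadget would therefore decide the halting problem,
    which is undecidable -- so no such decider can exist.
    """
    def gadget(x):
        program(input_data)          # diverges iff `program` never halts
        return benevolent_action(x)  # reached only after `program` halts
    return gadget

# Demo with a program that plainly halts: the gadget then acts benevolently.
halting_program = lambda n: n + 1
gadget = make_gadget(halting_program, 0, lambda x: "help " + x)
print(gadget("humans"))  # help humans
```

The point is not that this code tests benevolence; it is that the mere possibility of constructing such gadgets rules out any general, always-terminating benevolence checker.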