CAPTCHA: How Fooling Machines Is Different From Fooling Humans
Automated censorship is intended to protect against a tidal wave of spam, but it could certainly have other uses…

Readers of Mind Matters News have likely heard of the iconic Turing test. Computer pioneer Alan Turing famously proposed a test to determine whether a program could pass as a human. The gist is, if a program can fool human testers into believing it is a human, then the program is intelligent. Not everyone is convinced.
Thing is, it doesn’t take much to fool us humans! Take ELIZA, a program of only a few hundred lines, written in the 1960s, which fooled many people into believing it was a real human therapist.
But what if we flip the Turing test on its head? Instead of a test where a program tries to pass as human, we use a test that a program cannot pass but a human can. For example, consider the CAPTCHA test we encounter on many websites. The term “CAPTCHA” stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.”
I call this sort of test a “Terminator” test, because it helps us exclude the programs from our midst. This will be especially useful when the programs want to eliminate the human race, as in the movie Terminator!
Such a test is not just an annoying popup. It can help us beat censorship.
How so, you ask? Due to the scale of online communication, it is next to impossible for humans to scan all the messages that people send. Thus, automated censorship is a necessity.
What is automated censorship? It is programs trying to understand human text. If we had a way of communicating that humans can easily understand but programs cannot, we’d have a way to defeat automated censorship. We’d also have a Terminator test.
It turns out programs are quite easy to fool too, though in different ways and for different reasons than humans. The fanciest AIs can be tripped up by tiny, carefully chosen tweaks to an image (researchers call these adversarial examples), making them think a picture of a kitten is really an avocado. These tweaks can be indiscernible to the human eye, which means people can easily receive messages that completely confuse our robot overseers.
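To give a feel for how small these tweaks are, here is a minimal sketch of one well-known trick, the fast gradient sign method, in PyTorch. The tiny untrained classifier, the random "image," and the perturbation size are all stand-ins chosen for illustration, not anything from this article; with a real pretrained image classifier, a nudge this small can genuinely flip the predicted label.

```python
# Sketch of an adversarial perturbation (fast gradient sign method) in PyTorch.
# The model and "image" below are toy placeholders, not a real kitten classifier.
import torch
import torch.nn as nn

# Toy classifier: a 3x32x32 image in, 10 class scores out (untrained, illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)   # stand-in for the picture of a kitten
true_label = torch.tensor([0])     # pretend class 0 is "kitten"
epsilon = 0.01                     # perturbation size, far too small for a human to notice

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the classifier's error.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
print("largest pixel change:  ", (adversarial - image).abs().max().item())
```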
So, what’s an easy, Terminator-proof way to communicate? Screenshots! Reading text from pictures is very difficult for computers. I gave an optical character recognition (OCR) program a screenshot of this article, and here’s what it produced:
What is automated ZUSOnIsliP7 thri’s programs tr inq 7o 111der.
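For the curious, the experiment is easy to repeat. The article doesn’t say which OCR program was used, but here is a sketch using the open-source Tesseract engine through the pytesseract wrapper; the filename is just a placeholder.

```python
# Sketch: run OCR on a screenshot with Tesseract via pytesseract.
# The Tesseract engine must be installed separately; the filename is hypothetical.
from PIL import Image
import pytesseract

screenshot = Image.open("article_screenshot.png")  # placeholder path
print(pytesseract.image_to_string(screenshot))
```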
Change fonts, add some colors, maybe a bit of random noise, and the text will become completely impossible for a program to understand, but extremely easy for humans.
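To see how little effort that takes, here is a rough sketch with the Pillow imaging library: render a short message, then speckle the image with random pixels. The message, canvas size, colors, and amount of noise are all arbitrary choices for illustration.

```python
# Sketch: render text to an image and add random pixel noise with Pillow.
# Message, image size, colors, and noise level are illustrative assumptions.
import random
from PIL import Image, ImageDraw

message = "Meet at noon on Friday."
img = Image.new("RGB", (600, 80), "white")
draw = ImageDraw.Draw(img)
draw.text((10, 30), message, fill=(30, 30, 120))  # default font; any TTF via ImageFont.truetype works too

# Sprinkle random colored pixels over the image to confuse OCR while staying readable.
pixels = img.load()
for _ in range(3000):
    x, y = random.randrange(img.width), random.randrange(img.height)
    pixels[x, y] = (random.randrange(256), random.randrange(256), random.randrange(256))

img.save("noisy_message.png")
```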
And that’s all it’ll take to defeat widespread, automated censorship. If you are bothered by censorship on some social media platform, just send a screenshot instead.
You may also wish to read: Fact checking as the new censorship — surer than the old type. Apparently, Facebook has created a special portal for government to report “disinformation.” Just what constitutes disinformation/misinformation is often very subjective and quite often political as well. And governments are often not straightforward.