If you give an infinite number of monkeys typewriters and allow them to type freely, will they eventually produce the works of Shakespeare? Call this the infinite monkey theorem (IMT), widely attributed to Thomas Henry Huxley (1825–1895), best remembered today as “Darwin’s Bulldog” for his defense of Darwin’s theory of evolution. In 2000, a tongue-in-cheek “protocol” for such an experiment was developed. Independently, in 2003, enterprising researchers gave a group of monkeys keyboards in an experiment they were willing to discuss as a test of the theorem:
Researchers at Plymouth University in England reported this week that primates left alone with a computer attacked the machine and failed to produce a single word.
“They pressed a lot of S’s,” researcher Mike Phillips said Friday. “Obviously, English isn’t their first language.”
A group of faculty and students in the university’s media program left a computer in the monkey enclosure at Paignton Zoo in southwest England, home to six Sulawesi crested macaques. Then, they waited. — Associated Press, “Monkeys Don’t Write Shakespeare” at Wired (May 9, 2003)
Unfortunately, the macaques also relieved themselves on the keyboards.
Despite the entertainment value provided by the use of live monkeys, Huxley’s contention was a serious one: Mathematically, given infinite resources, would not the works of Shakespeare be typed out? If so, meaning does not require intelligence; it can arise from inanimate and meaningless sources.
Some say that the internet itself offers such a test. Billions of people use it but the works of Shakespeare have not randomly resulted. Is the problem that we have not yet spent enough time on the internet?
Let us approach the problem mathematically. Why, precisely, do the theoretical infinite monkeys typing infinitely not produce the works of Shakespeare? According to information theory, given infinite resources, they should be able to. And yet, we know — intuitively — that they will not. The answer to this question lies in the nature of language — and that answer has implications for the future of artificial intelligence.
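Before going further, it helps to see just how steep the odds are. The following back-of-the-envelope sketch is illustrative only: the 27-symbol keyboard and the sample phrase are assumptions for the sake of the arithmetic, not figures from the column.

```python
# Illustrative calculation: how unlikely is it that a random typist
# produces even one short phrase? (Keyboard size and phrase are assumed.)

KEYBOARD_SYMBOLS = 27          # assume 26 letters plus a space bar
phrase = "to be or not to be"  # 18 characters from Hamlet

# Each keystroke is an independent, uniform choice among the symbols,
# so the chance of typing the phrase in one attempt is (1/27)^18.
p = (1 / KEYBOARD_SYMBOLS) ** len(phrase)

# The expected number of 18-keystroke attempts before a success is 1/p.
expected_attempts = 1 / p
print(f"probability per attempt: {p:.3e}")
print(f"expected attempts:       {expected_attempts:.3e}")
```

Eighteen characters already demand on the order of 10^25 attempts; a full play is unimaginably further out of reach, which is why the answer must lie somewhere other than brute chance.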
To effectively transmit data across any medium, we need four things:
● a way to marshal the information
● a way to control for errors in the transmission
● a way to control the speed at which the information flows
● a way to pick out a receiver.
In this column, I will focus on the marshaling of information, that is, finding a way for the sender to format the information so that the receiver can understand it.
A good place to begin is with typewriters. The typewriter imprints a mark onto a piece of paper when a key is pressed. The typewriter can be said to constrain the kinds of marks made on the paper because it can only produce marks that fit within a certain language system. The “monkeys” are, in effect, constrained within a specific dictionary or set of symbols that everyone using the language system has agreed to have certain meanings. This restriction to a specific dictionary is a “helper” for the monkeys; without the dictionary, the monkeys would be left to make random marks on the papers placed in front of them.
Restricting the monkeys to the possibilities within the dictionary represented by the typewriters, however, does not mean they produce the works of Shakespeare by infinite typing. In order to get closer to the works of Shakespeare, the monkeys must produce collections of letters in the form of words. Each word, in turn, has a range of meaning defined by yet another dictionary, which is also agreed on by those wishing to communicate, before communication begins.
Words, however, are not enough. The words must be arranged using a grammar so that they form a thought—a third layer of arrangement which must be agreed on by all the participants before communication can take place. Grammars are typically “fuzzy”; there are many ways to say something, and the meaning of words interacts with the grammar in somewhat unpredictable ways.
For instance — suppose it is Tuesday at 1:59 in the afternoon. You are walking through the park and see someone throw a ball which appears to curve so that it will directly strike someone. Do you shout: “Hey, you need to move before 2 PM!”? Or do you shout: “Watch out!”? The two sentences express the same thought — but the meaning of the two sentences is completely different.
Marshaling information requires these two things: at least one dictionary and at least one grammar.
As shown in the IMT example, there can be many layers of dictionaries, and the grammar can be quite fuzzy. The grammar and dictionary often interact with one another, as well.
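The two layers described above can be sketched in a few lines of code. This is a toy model, not a real language pipeline: the word list, the part-of-speech table, and the single sentence pattern are all invented here for illustration.

```python
# Toy sketch of two-layer marshaling: a shared dictionary, then a shared
# grammar. All names and rules below are invented for illustration.

dictionary = {"the", "a", "monkey", "letter", "types"}  # agreed word list

# Agreed part-of-speech table, used by the toy grammar below.
parts_of_speech = {
    "the": "det", "a": "det",
    "monkey": "noun", "letter": "noun",
    "types": "verb",
}

def in_dictionary(words):
    """Layer one: every cluster of symbols must be an agreed word."""
    return all(w in dictionary for w in words)

def matches_grammar(words):
    """Layer two: the words must fit an agreed arrangement
    (here, a single pattern: det noun verb det noun)."""
    pattern = ["det", "noun", "verb", "det", "noun"]
    return [parts_of_speech.get(w) for w in words] == pattern

sentence = "the monkey types a letter".split()
print(in_dictionary(sentence), matches_grammar(sentence))    # True True

gibberish = "letter the the types monkey".split()
print(in_dictionary(gibberish), matches_grammar(gibberish))  # True False
```

Note that the second string passes the dictionary layer but fails the grammar layer: valid words alone do not make a valid thought, which is exactly the point of the layered model.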
These issues speak to intent. It is not enough to form symbols; it is not enough to form words; it is not enough to form sentences. Before communication can begin, there must be an intention to communicate which results in the creation of dictionaries and grammars which interact with one another and are often layered in complex ways. Intent, then, is a critical component of communication.
The implication for artificial intelligence is this: it is not enough, as Turing proposed, to trick a person into thinking a computer is a person. Somewhere there must be a person who intends this result. If the artificial intelligence cannot provide that intent, then the person who designs the system must.
Practitioners in the field of artificial intelligence often follow Turing’s lead in one of two ways. Either they assume that intent does not matter in defining intelligence, as he did in the imitation game and in arguing that it is possible to replace human calculators with machines. Or they presuppose that intent does not exist, that it is merely a useful illusion.
Neither of these approaches, however, will ultimately work. Real communication requires intent, not only in the communication itself but even in the creation of the shared framework (dictionaries and grammars) in which communication takes place. Ultimately, then, thinking through the IMT shows us that artificial intelligence cannot produce the works of Shakespeare. There can be an illusion of intent, but the original intent required to communicate simply is not there.
Further reading by Russ White, on the real world of high tech:
Pop-ups? Just say no, and close those tabs! Making the internet work for YOU means, among other things, getting control of who can follow you around. If allowing these notifications sounds like a perfect avenue for an attacker, that’s because it is. This attack surface is a very large hole in the security of your computer.
Should You Pay For a Virtual Private Network (VPN)? Here’s what a VPN can and can’t do for you. In some cases, specifically when you are using public wireless services, using a VPN can add measurably to your privacy and security. But VPNs are not a “silver bullet” in solving the many security and privacy issues users face today.
The internet’s structure builds in privacy flaws. The Domain Name System (DNS) resolver knows every service you visit, and every service those services rely on, as you move around the internet.
You think you have nothing to hide? Then why are Big Tech moguls making billions from what you and others tell them?
Why you can’t just ask social media to forget you. While we now have a clear picture of the challenges current social media pose to peoples and cultures, what to do is unclear.