Marks: Artificial Intelligence Is No More Creative Than a Pencil
You can use a pencil — but the creativity comes from you. With AI, clever programmers can conceal that fact for a while. (Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022) by Robert J. Marks is available here.)
Some have claimed AI is creative. But “creativity” is a fuzzy term. To talk fruitfully about creativity, the term must be defined so that everyone is talking about the same thing and no one is bending the meaning to fit their purpose. In this and subsequent chapters we will explore what creativity is, and in the end it will become clear that, properly defined, AI is no more creative than a pencil.
Creativity: Originating Something New
Lady Ada Lovelace (1815–1852), daughter of the poet George Gordon, Lord Byron, was the first computer programmer, writing algorithms for a machine that was planned but never built.1 She also was quite possibly the first to note that computers will not be creative—that is, they cannot create something new. She wrote in 1842 that the computer “has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform.”2
Alan Turing disagreed. Turing is often called the father of computer science, having established the idea for modern computers in the 1930s.3 Turing argued that we can’t even be sure that humans create, because humans do “nothing new under the sun”—but they do surprise us. Likewise, he said, “Machines take me by surprise with great frequency.” So perhaps, he argued, it is the element of surprise that’s relevant, not the ability to originate something new.4
Machines can surprise us if they’re programmed by humans to surprise us, or if the programmer has made a mistake and thus experienced an unexpected outcome.5 Often, though, surprise occurs as a result of a successful computer search that explores myriad candidate solutions to a problem. The solution the computer settles on can be unexpected. The computer code that searches among different solutions, though, is not creative. The creativity credit belongs to the computer programmer who chose the set of solutions to be explored. Shortly, we’ll give examples from computer searches for the best move in the game of Go and for simulated swarms. Both results are surprising and unexpected, but the computer code contributes no creativity.
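To see why the credit lands on the programmer, consider a minimal Python sketch (my own illustration, not code from the book); the candidate moves and the scoring values below are invented for the example:

    # Illustration: a brute-force search can return an "unexpected" answer,
    # but only from within the candidate set the programmer supplied.
    def search(candidates, score):
        """Return the candidate with the highest score."""
        best, best_score = None, float("-inf")
        for c in candidates:
            s = score(c)
            if s > best_score:
                best, best_score = c, s
        return best

    # Both the candidate moves and the scoring rule are human choices;
    # the numbers are made up purely for illustration.
    moves = {"corner": 0.41, "edge": 0.38, "center": 0.52, "tengen": 0.49}
    print(search(moves, score=moves.get))  # -> center

The surprise, if any, comes from the size of the search, not from any originality in the loop.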
The Flawed Turing Test
Alan Turing, an atheist, wanted to show we are machines and that computers could be creative. Turing equated intelligence with problem solving, did not consider questions of consciousness and emotion,6 and referred to people as “human computers.”7 Turing proposed his “imitation game” to show that computers could duplicate conversational humans. This is why the biographical movie starring Benedict Cumberbatch as Turing was titled The Imitation Game.
How can computers imitate humans, according to Turing?
The imitation game (which came to be called the Turing test) simply asks whether, in a conversational exchange using text (that is, an exchange in which the participants are hidden from each other), a sufficiently sophisticated computer can be distinguished from a human. If a questioner gets lucid, human-sounding answers from the computer, and believes the computer is in fact a human typing in answers from another room, then the test has been passed.
(Incidentally, the converse of the Turing test is easy. Simply ask your hidden conversation partner to calculate the cube root of twelve to ten significant figures. If the answer is almost immediate, you are talking to a computer.)
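The arithmetic itself is trivial for a machine; here is a one-line Python illustration (mine, not the author’s):

    # A computer returns the cube root of 12 to ten significant figures instantly.
    print(f"{12 ** (1/3):.10g}")   # prints 2.289428485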
There are those who claim that today’s chatbots8 have passed the Turing test. Most famously, a computer program known as “Eugene Goostman” purportedly passed. One posted subtitle claiming this reads, “‘Eugene Goostman’ fools 33% of interrogators into thinking it is human, in what is seen as a milestone in artificial intelligence.”9
But those taking a test sometimes cheat to tweak the outcome. That’s what Goostman’s programmers did. Goostman supposedly was a Ukrainian, which meant English was not his first language, so his poor grammar could be excused. He was supposedly a teenager, so any lack of depth of understanding could be chalked up to his naïve intellect. Likewise, his tendency to be silly and deflect questions could be considered typical teenage behavior. In short, Goostman’s software character was gamed to pass the Turing test.
Here are a couple of exchanges between Goostman and his questioner.10 Goostman is first asked a question posed in the first line of the lyrics of the song “Close to You,” popularized by the Carpenters:
Questioner: Why do birds suddenly appear?
Goostman: Just because 2 plus 2 is 5! By the way, what’s your occupation? I mean — could you tell me about your work?
We see here an often-used cheat in chatbots. If an answer is not known, the chatbot responds with an unrelated question to derail the conversation’s direction.
Here’s another exchange with Eugene Goostman:
Questioner: It is ok, I get sick of sick people. How is your stomach feeling today? Is it upset maybe?
Goostman: I think you can’t smile at all. I bet you work in a funeral agency.
As you can see, Goostman’s answers here are elusively non-responsive.
Selmer Bringsjord correctly notes that the Turing test is gamed by programmers. “Gamed” here is a nice word for evasive cheating. As Bringsjord writes, “Though progress toward Turing’s dream is being made, it’s coming only on the strength of clever but shallow trickery.”11
When gaming the system, chatbots can deflect detection by answering questions with other questions, giving evasive answers, or admitting ignorance. They display general intellectual shallowness as regards creativity and depth of understanding.
Goostman answered questions with questions like, “By the way, what’s your occupation?” He also tried to change topics with conversational whiplash responses like “I bet you work in a funeral agency.” These are examples of the “clever but shallow trickery” Bringsjord criticized.
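The trickery is not hard to implement. Here is a minimal Python sketch of the deflection tactic, assuming nothing more than a handful of canned answers and canned topic-changing questions (the lines of dialogue are invented for illustration):

    import random

    # Sketch of the deflection trick: if no scripted answer matches,
    # change the subject with a canned question.
    SCRIPTED = {
        "how are you": "I'm fine, thanks for asking!",
        "what is your name": "My name is Eugene.",
    }
    DEFLECTIONS = [
        "By the way, what's your occupation?",
        "Could you tell me where you live?",
        "Why do you ask such boring questions?",
    ]

    def reply(question):
        key = question.lower().strip(" ?!.")
        return SCRIPTED.get(key, random.choice(DEFLECTIONS))

    print(reply("How are you?"))                    # scripted answer
    print(reply("Why do birds suddenly appear?"))   # deflection, not understanding

Nothing in such a script understands the question; it merely steers the conversation away from it.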
What, then, do Turing tests prove? Only that clever programmers can trick gullible or uninitiated people into believing they’re interacting with a human. Mistaking something for human does not make it human. Programming to shallowly mimic thought is not the same thing as thinking. Rambling randomness (such as the change-of-topic questions Goostman spit out) does not display creativity.
“I propose to consider the question, ‘Can machines think?’” Turing said. Ironically, Turing not only failed in his attempt to show that machines can be conversationally creative, but also developed computer science that shows humans are non-computable.
Here are all of the excerpts in order:
Why you are not — and cannot be — computable. A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence. In this excerpt from his forthcoming book, Non-Computable You, Robert J. Marks shows why most human experience is not even computable.
The Software of the Gaps: An excerpt from Non-Computable You. In his just-published book, Robert J. Marks takes on claims that consciousness is emerging from AI and that we can upload our brains. He reminds us of the tale of the boy who dug through a pile of manure because he was sure that … underneath all that poop, there MUST surely be a pony!
Marks: Artificial intelligence is no more creative than a pencil. You can use a pencil — but the creativity comes from you. With AI, clever programmers can conceal that fact for a while. In this short excerpt from his new book, Non-Computable You, Robert J. Marks discusses the tricks that make you think chatbots are people.
Machines with minds? The Lovelace test vs. the Turing test. The answers computer programs give sometimes surprise me too — but they always result from their programming. When it comes to assessing creativity (and therefore consciousness and humanness), the Lovelace test is much better than the Turing test.
and
AI: The shadow of Frankenstein lurks in the Uncanny Valley. The fifth and final excerpt from Non-Computable You (2022), from Chapter 6, focuses on the scarier AI hype. Mary Shelley’s “Frankenstein” monster (1818) wasn’t strictly a robot. But she popularized the idea — now AI hype — of creating a human-like being in a lab.
Notes
1 Lovelace is often credited with writing an algorithm for Charles Babbage’s “Analytical Engine,” a machine that was planned but never built. There is some controversy as to whether Lovelace or Babbage wrote this first program. In any case, Lovelace undoubtedly was involved to an extensive degree in the very earliest computer programs, and she was also the first to say that a computer could be programmed to do more than merely compute. For an overview of her contributions, see Christopher Hollings, Ursula Martin, and Adrian Rice, “Ada Lovelace and the Analytical Engine,” Bodleian Libraries (July 26, 2018).
2 Lady Lovelace, Appendix I to Faster Than Thought: A Symposium on Digital Computing Machines, ed. B.V. Bowden (London: Pitman, 1953), 398.
3 For an overview of the similarities and differences between Babbage’s and Turing’s machines, see Nathan Zeldes, “Babbage and Turing: Two Paths to Inventing the Computer,” Nathan Zeldes (website), April 29, 2021.
4 Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (October 1950): 433–460.
5 Turing concedes that when machines surprise him, it tends to be because of traceable human error in his calculations. He also anticipates the objection that machine “surprises” are “due to some creative mental act on my part, and reflect no credit on the machine,” but does not answer this objection except to say it leads back to the question of consciousness, which “we must consider closed.” Turing, “Computing Machinery and Intelligence,” section titled “Lady Lovelace’s Objection.”
6 Turing, “Computing Machinery and Intelligence,” section titled “The Argument from Consciousness.”
7 In Turing’s “Computing Machinery” article he refers to “human computers” no fewer than ten times.
8 Chatbots are computer programs that respond in a human-like way in text-based exchanges. You might have interacted with a chatbot that popped up on your screen to offer help or answer questions. Chatbots are useful to businesses because they can function as digital help desks. But they are limited in the questions they can answer, and they can only respond as they are programmed to respond.
9 Press Association, “Computer Simulating 13-Year-Old Boy Becomes First to Pass Turing Test,” Guardian, June 9, 2014.
10 George D. Montanez, “Detecting Intelligence: The Turing Test and Other Design Detection Methodologies,” Proceedings of the 8th International Conference on Agents and Artificial Intelligence, vol. 2 (Setubal, Portugal: Science and Technology Publications, 2016), 517–523.
11 Selmer Bringsjord, Paul Bello, and David Ferrucci, “Creativity, the Turing Test, and the (Better) Lovelace Test,” in The Turing Test: The Elusive Standard of Artificial Intelligence, ed. James H. Moor (Boston: Kluwer Academic Publishers, 2003), 215–239.