Mind Matters Natural and Artificial Intelligence News and Analysis
3D render of a computer-generated, photorealistic woman, illustrating the uncanny valley effect
Photo licensed via Adobe Stock

AI: The Shadow of Frankenstein Lurks in the Uncanny Valley

The fifth and final excerpt from Non-Computable You (2022), from Chapter 6, focuses on the scarier AI hype

Wrapping AI in an impressive physical package can magnify the perceived impact of a new technology. Call it seductive optics.

The conflation of AI packaging with AI content was evident in media excitement about a Buddhist robot that delivers messages to the faithful. “The world’s first sutra-chanting android deity, modeled after Kannon the Buddhist Goddess of Mercy, was introduced to the public last week,” the report reads. The robot can “move its eyes, hands, and torso, make human-like gestures during its speech, and brings its hands together in prayer. A camera implanted in the left eye to focus on a subject gives the impression of eye contact.”1

Technologically speaking, nothing special is happening here. The messages from the Buddhist robot are pre-recorded and not the product of AI. The mouth movements are synced to the recording. This technology dates back at least to Disney’s Hall of Presidents, launched at Walt Disney World in 1971. All the US presidents there give presentations akin to the Buddhist robot’s. Their mouths move and they gesture. The technology, dubbed Audio-Animatronics, was trademarked by Disney in 1964.2

But the packaging and context made this robot seem special. Monks gathered at the robot’s opening ceremony, which featured “chanting, bowing, drumming, and the ringing of bells.” The robot, named “Mindar,” was designed to look like an androgynous human, with “special features designed to evoke both feminine and masculine qualities…. the plain facial features give room for visitors to use their own imagination in how they’d like the deity to appear.”3

Sound familiar? Like seductive semantics, here we have seductive optics. The AI looks generally human, but also leaves space for people to impose their own preferences.

The media obsession with the Buddhist robot story is due to seductive optics.

Some of the panicky AI-will-take-over-the-world talk grows out of seductive optics — that is, the AI packaging. Author and poet Diane Ackerman confesses, “Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.”4

A factor contributing to fear of AI is the so-called Frankenstein Complex.5 The term, coined by science fiction writer Isaac Asimov6, originally described the fear of the mechanical man in science fiction of old. Frankenstein refers to Mary Shelley’s 1818 novel Frankenstein, or The Modern Prometheus. A young scientist, Dr. Victor Frankenstein, sews together dead body parts to create a monster. (In the book Frankenstein is the doctor’s last name, but today Frankenstein’s monster is often referred to as simply Frankenstein.)

Thomas Edison’s film studio first put the story to film in a silent 1910 movie.

Some of us are familiar with Boris Karloff’s depiction of the monster in the 1931 motion picture classic Frankenstein.7

Today’s film monsters are typically a lot scarier than those depicted in 1930s movies with their clunky special effects. But even today, Karloff’s Frankenstein monster makes one’s skin crawl. The question is, why? After all, he moves clumsily in slow motion; even someone on crutches could avoid him. He’s tall, sure, but the smaller, fast-moving, hard-punching Mike Tyson could no doubt take him in the ring. The monster is less dangerous than a bobcat or alligator, yet we get chills just looking at Karloff’s Frankenstein monster, and we don’t when thinking about alligators or bobcats. What’s going on here?

The Frankenstein complex is explained by a related idea dubbed the uncanny valley.8 The hypothesis is named after a dip in a regression curve. For the most part, and all other things being equal, as an object comes to resemble a human more and more, our reaction to the object becomes increasingly positive. But if the likeness is a near miss, we experience the uncanny valley. Anything not human that appears very nearly human is scary.

The Frankenstein complex/uncanny valley contributes to fears of (and fascination with) AI. Consider the chatbot Sophia the Robot.9 Sophia has its own Facebook page10 and has been awarded citizenship in Saudi Arabia.11 Its speech is augmented by facial expressions using small feature changes akin to those used by cartoonists (which we will discuss in just a moment). Sophia’s human-like container, its seductive optics, has little to do with its chatbot AI. (If you want to brave the revenue-generating ads, there are many interesting videos of Sophia on YouTube.)

Sophia the Robot.

Sophia is bald, and the back of its head is clear plastic that reveals the electronics inside. The Frankenstein complex/uncanny valley reaction might diminish if Sophia wore a wig, or a wig might plunge the robot deeper into the uncanny valley, since it still wouldn’t look fully human. I suspect robot optics will improve to the point of being visually indistinguishable from humans when not closely examined. Currently, though, seamless representation of the human form in robots is not well developed. It is close enough, however, that marketers of Sophia the Robot and other AI can grab our attention via the uncanny valley. Today more than ever, the goal in promotion is to get the attention of readers and the media. Making things look almost human, and therefore a little creepy, does exactly that.


Here are all of the excerpts in order:

Why you are not — and cannot be — computable. A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence. In this excerpt from his forthcoming book, Non-Computable You, Robert J. Marks shows why most human experience is not even computable.

(Non-Computable You (Discovery Institute Press, 2022) by Robert J. Marks is available here.)

The Software of the Gaps: An excerpt from Non-Computable You. In his just-published book, Robert J. Marks takes on claims that consciousness is emerging from AI and that we can upload our brains. He reminds us of the tale of the boy who dug through a pile of manure because he was sure that … underneath all that poop, there MUST surely be a pony!

Marks: Artificial intelligence is no more creative than a pencil.
You can use a pencil — but the creativity comes from you. With AI, clever programmers can conceal that fact for a while. In this short excerpt from his new book, Non-Computable You, Robert J. Marks discusses the tricks that make you think chatbots are people.

Machines with minds? The Lovelace test vs. the Turing test. The answers computer programs give sometimes surprise me too — but they always result from their programming. When it comes to assessing creativity (and therefore consciousness and humanness), the Lovelace test is much better than the Turing test.

and

AI: The shadow of Frankenstein lurks in the Uncanny Valley. The fifth and final excerpt from Non-Computable You (2022), from Chapter 6, focuses on the scarier AI hype. Mary Shelley’s “Frankenstein” monster (1818) wasn’t strictly a robot. But she popularized the idea — now AI hype — of creating a human-like being in a lab.

Notes

1 Thisanka Siripala, “An Ancient Japanese Shrine Debuts a Buddhist Robot,” Diplomat, March 5, 2019.

2 “Audio-Animatronics Trademark Details,” Justia Trademarks.

3 Siripala, “An Ancient Japanese Shrine.”

4 Marr, “Twenty-Eight Best Quotes.”

5 Janice Hocker Rushing and Thomas S. Frentz, “The Frankenstein Myth in Contemporary Cinema,” Critical Studies in Media Communication 6, no. 1 (1989): 61–80; Sam N. Lehman-Wilzig, “Frankenstein Unbound: Towards a Legal Definition of Artificial Intelligence,” Futures 13, no. 6 (1981): 442–457.

6 Lee McCauley, “The Frankenstein Complex and Asimov’s Three Laws,” Association for the Advancement of Artificial Intelligence, 2007.

7 To see a 1935 Universal Pictures promotional photo, visit “Frankenstein’s Monster,” Wikimedia Foundation.

8 Maya B. Mathur and David B. Reichling, “Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley,” Cognition 146 (2016): 22–32.

9 “Sophia (Robot),” Wikimedia Foundation, last modified October 12, 2021.

10 “Sophia the Robot,” Facebook.

11 Chris Weller, “Meet the First-Ever Robot Citizen—A Humanoid Named Sophia That Once Said It Would ‘Destroy Humans,’” Business Insider, October 28, 2017.


Robert J. Marks II

Director, Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Besides serving as Director, Robert J. Marks Ph.D. hosts the Mind Matters podcast for the Bradley Center. He is Distinguished Professor of Electrical and Computer Engineering at Baylor University. Marks is a Fellow of both the Institute of Electrical and Electronics Engineers (IEEE) and the Optical Society of America. He was Charter President of the IEEE Neural Networks Council and served as Editor-in-Chief of the IEEE Transactions on Neural Networks. He is coauthor of the books Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks (MIT Press) and Introduction to Evolutionary Informatics (World Scientific). For more information, see Dr. Marks’s expanded bio.
