Mind Matters Natural and Artificial Intelligence News and Analysis
Building A Human Head In A Directionesque World
Image licensed via Adobe Stock

Westworld: Episode 3 Review

Here we get to the theory of consciousness

Not too much happens in episode three; it builds on the events set up in episode two. But one scene contains a couple of interesting ideas we're going to explore. First, a quick recap of the episode.

Dolores hides the gun she found in a dresser drawer and eventually asks Teddy to teach her to shoot. He does, but unfortunately not for very long, because Dr. Ford has now given Teddy something he's never had before: a backstory. Ford gives Teddy a nemesis named Wyatt. Teddy was once part of Wyatt's gang, and while riding with the outlaw, he committed horrible crimes. Therefore, Teddy is on a quest for redemption and is resolved to finish his business with Wyatt before he can consider himself worthy of Dolores. So, when a group of men rides up to Teddy while he's teaching Dolores to shoot, he goes with them instead of completing his usual loop of dying while trying to save Dolores. Dolores is left alone when her family's home is attacked, but this time she has a gun. So, the question is: did Dr. Ford plan this?

Dolores does her normal circuit, and this time, when the outlaws come and murder her parents, she uses the gun to escape. While all these events are taking place, William has gone off with a posse to find an outlaw, and during the hunt, he meets Dolores as she’s fleeing from her home. As for poor Teddy, he gets ambushed by Wyatt and his gang. The poor guy just can’t get a win.

Are These My Thoughts or the Voice of God?

Meanwhile, Elsie, the technician who saved Maeve from being decommissioned, is still investigating the glitches in the park. Bernard tells her she's reading too much into the problem; however, it's evident that Bernard isn't entirely convinced of this himself, because he goes to Dr. Ford and asks him about the issue. He tells Dr. Ford that Elsie has discovered some of the broken robots repeating the name Arnold, which has led him to believe that Dr. Ford has not told him everything he needs to know about the history of the park. Dr. Ford then tells Bernard that Arnold was his business partner and that Arnold wanted to give the robots consciousness. Dr. Ford explains that while the robots had passed the Turing test, Arnold wasn't satisfied and had created a theory of consciousness built around a pyramid with four layers. The first layer was memory; the second, improvisation; the third, self-interest. As for the fourth, Arnold never got that far, but he was convinced his theory was sound, and to help the robots along, he adopted a theory called the Bicameral Mind, which holds that primitive humans mistook their own thoughts for the voice of god. Arnold decided to play god by becoming the voice inside the robots' minds. So the voice Dolores and the others have been hearing this whole time is Arnold's, or at least some of the voices are. In the end, after Arnold committed suicide in the park, Dr. Ford shut down this aspect of his work, leaving only the voice commands so the robots could be more easily controlled.

I found this scene fascinating. For one thing, the Turing test, proposed by Alan M. Turing, has long been subject to debate and, in my opinion, has been comfortably refuted. The test, which Turing called The Imitation Game, says, roughly, that if a human interrogator can't tell the difference between a machine and a human based on the responses each gives within a certain period of time, then the machine is, for all intents and purposes, thinking like a human. The obvious rebuttal is to ask whether the machine is simply parroting human responses, as John Searle demonstrated with his Chinese Room argument. Searle imagined a scenario in which a human who knows no Chinese is locked in a room with a manual showing how to match Chinese symbols to the appropriate responses. If someone slips symbols under the door and the human, using the manual, returns the appropriate responses, then for all intents and purposes he has passed the Turing test without having any understanding of what the symbols mean.

Consciousness and the “Bicameral Mind”

The Turing test is a flimsy proposal at best, so it's no wonder Arnold wasn't satisfied by it, but what about the theory of the Bicameral Mind? The theory was proposed by psychologist Julian Jaynes, and it is exactly what the show presents: at one point in time, man believed his own thoughts to be the voice of god. To me, this is a rather unsatisfactory explanation for consciousness. For one thing, there has to be a sense of self before a man can identify a thought as the voice of anything. In fact, I think there would need to be a sense of self just to identify a thought, period, let alone to consider a thought as something coming from outside oneself. So if Arnold planted his own voice inside these robots' heads, how does a robot go from simply responding to the command to identifying the command as its own independent thought? How does the robot separate the command from itself? And just for giggles, allow me to pose another question: how do we know our ancient ancestors were wrong? What if some intuitions do come from an outside source? If we consider each thought to be somehow separate from our own self-awareness, how would we know if something foreign entered our minds?

So, I don't think the Bicameral Mind tells us much about the origins of consciousness because it essentially puts the cart before the horse. As for Arnold's pyramid scheme of consciousness (memory, improvisation, self-interest, and . . . who knows), the show doesn't have much to say on the matter because, at the start of the series, the robots are already displaying the lower three qualities. I would've been interested to see how the robots went from improvisation to self-interest, but the writers decided not to go there.

Westworld ultimately fails to deliver a satisfying theory of how the robots are coming to life, but at least the writers are trying. They've certainly put more thought into the subject than many of the sci-fi shows we've reviewed, and I'll give them credit for that much. We'll cover episode four in the next review.


Gary Varner

Gary Varner is the Assistant to the Managing and Associate Directors at the Center for Science & Culture in Seattle, Washington. He is a science fiction and fantasy enthusiast with a bachelor's degree in Theater Arts, and he spends his time working with his fellows at Discovery Institute and raising his daughter, who he suspects will one day be president of the United States. For more reviews as well as serial novels, visit www.garypaulvarner.com.
