
When a Robot Commits Suicide — an Elegy for What?

What’s frustrating about Episode 1 of The Orville, Season Three, is that robot Isaac’s claim to personhood is not so much ambiguous as confused and contradictory.

Last time, we began our review of The Orville, Season Three, by discussing the unorthodox relationship between Isaac, the ship’s token robot, and Dr. Claire Finn. The bottom line is that they were romantically involved until Isaac turned out to be a sleeper agent for his race of robots, the Kaylon, who controlled an entire planet. In a climactic moment, Isaac — because of his attachment to Dr. Finn’s son Ty — defeats his programming and saves the Orville. Unfortunately, however, in the battle between the Kaylon and the Union fleet, many Union soldiers were killed defeating the Kaylon — and most of the Orville’s crew have not forgotten it.

Dr. Finn and Isaac’s relationship remains somewhat ambiguous. What feels so strange about this episode is that it seems we’re supposed to take Dr. Finn’s debate over her romantic feelings for Isaac seriously without any discussion at all as to whether Isaac is really sentient and therefore accountable for his actions. Her conflict is framed in such a way as to suggest that Isaac is, absolutely, morally responsible for what happened. The fact that he overrode his programming prompts no questions. The question for Dr. Finn is, can she live with herself knowing she’s in love with a monster? She does not wonder about the origin and nature of Isaac’s ability to choose, and his corresponding level of responsibility for his decisions.

To be candid, this entire shoehorned conundrum is ridiculous. Really, Dr. Finn should have figured out that dating Isaac was a bad idea, based on what she learned in previous episodes. Isaac has made it clear that his actions are directed by the highest probability of success for his desired outcome, which is determined by his programming. He’s not actually choosing anything on impulse but rather operating within the framework of a mission. For all Dr. Finn knows, he simply sees her as a grid of numbers rather than a free-thinking being with a distinct personhood. In the words of Futurama, “Don’t Date Robots.”

I’m not reading my personal interpretation of Isaac’s perspective into the plot here. Isaac makes it perfectly clear that he has no emotional preferences one way or another. He literally can’t love. He says he can’t love. All the emoting done by the rest of the cast falls on deaf ears when it comes to him because he does not have the capacity to understand any of it.

I’m reminded of a robot butler from an animated TV show called Clone High (2002–2003). The students would come to the butler and tell him their problems. The butler would spew some generic psychobabble. Then the students would thank the butler with tears in their eyes. And the robot would say, “You’re welcome, Wesley,” because, to the robot, every child in that school was named Wesley.

I also think of the time Gene Belcher from Bob’s Burgers told an electronic toilet, “I love you.” The toilet asked, “Do you wish to vomit? Seat up for vomit.”

By the show’s own rules, Isaac cannot feel. He is not equivalent to the android Data in the Star Trek universe, who was something of an anomaly. Nobody understood how Data was made, so what he could do and how human he could become remained an open question.

Isaac, on the other hand, is a robot in the traditional sense of the word, and the show stuck to this understanding of Isaac’s character until he saved Ty. He’d never done anything like that before, so the question on everyone’s mind should be, “Is there more to Isaac than metal?” But the episode never addresses that. That’s infuriating.

Viewers are asked to accept that, although Isaac cannot feel emotions and therefore lacks the fundamental components that would make one sentient, it shouldn’t really matter because life is essentially all the same thing. Thus, the writers feel justified in dumping a high degree of accountability onto Isaac for his role in the Kaylon betrayal. That approach refuses to acknowledge the distinction between organic and synthetic life. We are asked to pass judgement on objects even if they have no sentient capabilities, which would be akin to judging a toaster for burning a slice of bread.

Then again, the message might be that the entire question of life is irrelevant anyway — it’s all a matter of individual perception. Now, as poetic as that might sound to some people, it is a lousy way to tell a story because — once again, I must resort to what is coming to be my favorite word in these reviews — stakes. What is at stake emotionally?

Our emotional investment in Dr. Finn and Isaac’s relationship completely depends on whether Isaac, even if he isn’t sentient, has the potential to become so. If Isaac can become more human, then his and Dr. Finn’s story becomes one of true love conquering all obstacles. If Isaac cannot become more human, then Dr. Finn is dating a toaster, and we should pity her.

Unlike Star Trek, which kept Data’s potential for becoming more human a nagging question throughout the series, Orville seems to go out of its way to ignore the problem altogether. In fact, it’s absolutely preachy! The scenes all but scream, “You WILL accept this relationship, and if you don’t, you’re a bigot!” Nobody raises the question of what we are asked to accept. Nobody tells Dr. Finn that dating a toaster is sad, and she could do better.

As an aside, the justification for Dr. Finn’s feelings in this episode is a blatant misrepresentation of events in an earlier episode. When she first explains why she fell in love with Isaac, she says it started the day she, Isaac, and her children crashed their shuttle on a remote planet. This episode occurred in Season One. But that is not what happened, exactly. Isaac had to babysit the kids because Dr. Finn was kidnapped by some mutants. At the end of the episode, she smiles and tells him, “Welcome to the family,” because the kids wound up really liking him, especially Ty. In fact, at the beginning of this episode in Season One, the implication was that Dr. Finn was somewhat prejudiced against the robot. Thus, her scene with him at the end of the episode was her way of accepting him into the crew, of overcoming her own prejudices. There was no hint or indication of a crush on electronics. This plot development was shoehorned into the middle of the second season — and it shows.

You might ask why I’m ranting about Isaac’s nature. Because this entire episode revolves around one central event: the moment Isaac kills himself.

Every scene preceding the robot’s suicide — Dr. Finn describing her conflicting emotions, the crew shunning Isaac — creates the impression that Isaac is, on some level, feeling the impact of these events. But the writers of this episode, for whatever reason, keep insisting that Isaac doesn’t feel anything at all. This dichotomy makes the episode confusing, and it’s impossible for me as a viewer to decide how much I should care about what’s going on.

Following the show’s logic, I can’t even decide whether Isaac is a good guy or a bad guy, let alone judge whether I’m supposed to care if he offs himself. Moral ambiguity around characters is fine up to a point. But if we are expected to care about what happens to a character, we need to at least know whether that character is really sentient or can become so. But the question of life is ignored. The crew pretends that Isaac is sentient, and not only sentient but morally culpable for what happened during the Kaylon betrayal. In so doing, they ignore the fact that Isaac overrode his programming, which is the only indication we have that might prove he really is sentient.

And don’t think for a second that this refusal to acknowledge the question of his programming is because the crew actually believes Isaac is alive. Captain Ed Mercer’s eulogy for the robot makes it very clear that Isaac was only as human as the crew allowed him to be in their minds — a poetic word salad that basically means, “You’re attending the funeral for a toaster, but we’re all going to pretend like this matters.”

The bottom line is, either the show can’t make up its mind about what Isaac actually is, or it’s saying that it doesn’t matter what reality is because everything is based on one’s own subjective experience. But that philosophy simply won’t work here. If a robot kills itself, either it was overwhelmed with grief, or it was stupid. I’ll elaborate on this point later, but for now, let’s continue with the story.

Isaac is harassed for a while, and eventually told by Marcus, Dr. Finn’s elder son, that he should be dead. He goes to his workstation, gives some final advice for the crew, and basically fries his brain with some battery-type device.

Of course, the writers feel compelled to insist that Isaac is really, truly, super duper dead. And they pull the whole “No! We mean it! He’s dead! We swear!” routine because they know there is no way the viewer is actually going to buy it. Isaac is a recurring character. The chances of his irreversible death are very small.

And what follows is one of the most bizarre sequences I’ve ever seen in a series. The crew spends what feels like twenty minutes mourning Isaac. And it drags! As I watched this monotony, I remember thinking, “Good night! Spock didn’t get this much airtime when he died!”

The worst part is, you believe none of it. So you’re simply waiting for the episode to end. Or more to the point, waiting for the Big Twist — which is not really a twist. Of course, the twist eventually arrives, and we’ll talk about that next time.

Here’s my review of Part I: Should we love or hate an intelligent robot? Or care at all? In Season 3 of The Orville that becomes a serious question. Is there more to Isaac the robot than metal? But how is that even possible? And why isn’t the question addressed?


Gary Varner

Gary Varner is the Assistant to the Managing and Associate Directors at the Center for Science & Culture in Seattle, Washington. He is a Science Fiction and Fantasy enthusiast with a bachelor’s degree in Theater Arts, and he spends his time working with his fellows at Discovery Institute and raising his daughter, who he suspects will one day be president of the United States. For more reviews as well as serial novels, go to www.garypaulvarner.com.
