Smart robotic farmers concept, robot farmers, Agriculture technology, Farm automation.
Image Credit: Es sarawuth - Adobe Stock

Computer Prof: Handing Off Risky Operations to AI Would Be Stupid

When everything went wrong in 2001: A Space Odyssey, the cause was faulty programming, not the computer deciding to take over, he says.

When computer engineer Robert J. Marks discussed with Dallas radio host Mark Davis (660 AM) whether computers can be creative, the talk turned to intelligent machines like HAL 9000 in 2001: A Space Odyssey or perhaps David, the Weyland Corporation robot in the Alien series.

Marks talked about things that machines don’t do:

Robert Marks: We love, we have compassion. I think even deeper, we have creativity. We are able to understand. AI will never understand. It can add the numbers; computers can add the numbers six and four, but it doesn’t understand what the number six is. It doesn’t understand what the number four is.

So these are attributes which we have that AI will never have. And interestingly, this is more than speculation. Alan Turing (1912–1954) back in the 1930s showed that there were certain problems you could not solve by step-by-step procedures. And that’s manifest in us as human beings.
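
Marks is alluding to Turing's 1936 undecidability results; the classic example is the halting problem. As a minimal illustrative sketch (in Python, with hypothetical function names not drawn from the article), the diagonal argument runs roughly like this:

```python
# A minimal sketch of the halting-problem argument behind Turing's 1936 result.
# The function names (halts, paradox) are hypothetical, for illustration only.

def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts."""
    # Turing proved that no such general step-by-step procedure can exist.
    raise NotImplementedError

def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:          # loop forever if the oracle says "it halts"
            pass
    else:
        return               # halt at once if the oracle says "it loops forever"

# Whatever answer halts(paradox, paradox) gave, paradox(paradox) would do the
# opposite -- so the assumed oracle contradicts itself and cannot exist.
```

No matter how `halts` were written, `paradox(paradox)` would defeat it, which is why no general step-by-step procedure of the kind Marks mentions can exist.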

And Davis suddenly remembered science fiction great Isaac Asimov (1920–1992) and I, Robot.

Mark Davis: So that is in its way comforting but the heartlessness of it might be a double-edged sword … The robots will never do anything to harm us. Well, the robots took that very seriously and figured out that we were going to harm ourselves.

So humanity kind of had to be destroyed. In its heartlessness, in its cold calculations, is there any reason for it to be concerned about handing over to AI certain decision-making where even with its heartlessness and its denotative data-driven derivative method, it’ll do something that would just be a nightmare?

Robert Marks: Oh, exactly. And I think that anybody that hands over anything, any operation that’s dangerous potentially to AI is really stupid. And this is… A great movie that depicts this is 2001: A Space Odyssey:

Robert Marks: That was not AI going conscious. HAL was not programmed to take over. He was programmed, rather, so that the mission was more important than the lives of the astronauts. So this was an example of faulty AI programming, and it manifested itself terribly. So let's not put AI in charge of everything without human oversight.

So HAL was not being conscious or creative in order to rescue the mission; he was simply following his programming when he barred the door.

Could David, the evil robot in Prometheus (2012), be creative, as he was portrayed?

Marks would say no. Machine creativity is one of the topics of his recent book, Non-Computable You (Discovery Institute Press, 2022).

Would laws of robotics help robots behave better?

Asimov’s robot stories, collected in I, Robot (1950), offered the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Some have argued, at least in science fiction, that robots could be programmed to act for the overall good of humanity. To that end, a “Zeroth” Law of Robotics was proposed:

  0. A robot must act in the long-range interest of humanity as a whole, and may overrule all other laws whenever it seems necessary for that ultimate good.

It was called the “Zeroth” Law on the theory that it should precede the others.

But that’s all fiction of course. Maybe best to keep it that way.

You may also wish to read: Why are robots built to look like humans? It’s not because that’s the most efficient way to design them for high tech work. The main effect of humanoid robots is psychological; people tend to believe they are actually thinking when they are not.


