Evil robot, glowing lights, shiny metallic parts
Photo licensed via Adobe Stock

So, Can a Computer Really Be Irrational?

Computer prof Robert J. Marks tells Wesley J. Smith: No, and here’s why … from his experience

In a recent Mind Matters News podcast episode, “Can a computer be a person?” (November 10, 2022), Robert J. Marks and Wesley J. Smith discussed that question in connection with Marks’s new book, Non-Computable You:

Some excerpts:

Wesley J. Smith: Let me ask the question in a different way. Can an AI ever be irrational?

Robert J. Marks: Yes. Irrational in the sense of being irrational from the point of view of an observer. A classic example, and this happened a number of years ago, was that the Soviets during the Cold War developed a high technology to decide whether the Soviet Union was being attacked by the United States. And so they had these missile detectors. And there was a false alarm one day when the Oko system (O-K-O) was triggered, and sirens went off saying they were being attacked by US missiles. And the protocol was to launch a counterstrike.

But fortunately, the person in charge, if I remember, was Lieutenant Stanislav [Petrov], I believe. He said, “This doesn’t make sense. Because if the US was doing a military strike, they would not send just one or two missiles over. They would do a preemptive strike, launching a number of different missiles toward the Soviet Union.”

So he called his superiors and they called it off. So this was, I believe, an example of AI, or at least high technology, being irrational and coming to an incorrect conclusion. Later, they found out that the Oko system had mistaken the reflection of sunlight off the clouds for a US missile. And so it was something that was totally incorrect. But that one guy, Lieutenant Stanislav [Petrov], possibly saved us from a nuclear exchange, really.

Wesley J. Smith: Because he had something you talk about in your book, called common sense…

Robert J. Marks: Yes, he has common sense.

Wesley J. Smith: … that AI doesn’t have.

Robert J. Marks: Yes, it doesn’t. And I used to believe that common sense might be programmed into a computer. But with more and more results, I’m starting to believe that this might be something that is also non-computable.

One of my favorite examples of ambiguity, and of the inability of artificial intelligence to understand and resolve it, is so-called “flubbed headlines.” And I have a great collection of them, by the way, over a hundred flubbed headlines … “Hospitals sued by seven foot doctors.”

Is it seven podiatrists or is it these really tall doctors?

“Farmer bill dies in house.”

Was it a farmer or was it a piece of legislation? So there are a number of those. And one of the challenges with artificial intelligence is that there have been claims about a certain narrow type of ambiguity called Winograd schemas.

But a lot of these Winograd schemas are actually answered online. And so if it’s on Wikipedia, if it’s on the web, it [an AI] can resolve it because it’s seen it before.

But just on its own … ?

You may also wish to read: Can we install common sense in AI? I propose a new challenge: Teach computers to correctly understand the headline “Students Cook and Serve Grandparents.” The late Paul Allen thought teaching computers common sense was a key AI goal. To help, I offer the Flubbed Headline Challenge. (Robert J. Marks)
