A young man with a pistol stands in front of a high school, preparing to go inside and commit a mass shooting.
Photo licensed via Adobe Stock

Why AI Could (But May Not) Predict School Shootings

There is no solution that is not run through natural intelligence, computer science prof Robert J. Marks explains

In recent weeks, Walter Bradley Center director Robert J. Marks has done a number of interviews for his new book, Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022). At The Cat’s Roundtable, host John Catsimatidis asked him about the hype around AI and whether AI could, for example, predict school shootings.

Robert J. Marks: It’ll continue doing great and exciting things, but there’s a lot of hype associated with it. People think, “Well, is the Terminator going to come alive? Are we ever going to face a scenario like we see in The Matrix?”

And the answer is, “No. There are certain brick walls which artificial intelligence will never go through,” and we know this because computers can only do what is computable. It turns out that as far back as the 1930s, computer scientists knew that there were things which are non-computable. The biggest name there was Alan Turing, the father of computer science, and he showed that there are problems you can’t compute, problems you can’t take to a computer and solve. So this raises the question, “Are there things that humans do that are not computable?”

Well, if they’re not computable, they’re not going to be captured by artificial intelligence. I would maintain that the simplest ones to grasp are emotions like love and empathy and anger, but more significant is the idea that artificial intelligence will never understand what it’s doing. It’ll never have sentience, and it will never be creative. Now, I’ve mentioned these things, and one has to be aware of seductive semantics. All of them need to be defined before they’re deconstructed, but fundamentally, artificial intelligence is never going to get through those walls, and you’re never going to have artificial intelligence that duplicates the human.

John Catsimatidis: Well, not today. I mean, I watched Star Trek, and Data is doing pretty well. Dr. Marks, the question is this: How does the human brain our creator gave us compare with today’s computers? Give me the comparison.

Robert J. Marks: I think that if one is a diehard materialist, if you will, and believes everything can be explained by science, you can come to no other conclusion than, “The brain is nothing more than a computer made out of meat.”

But some people, most notably Roger Penrose, who won the Nobel Prize, say just the opposite. In fact, I’ve learned a lot from Roger Penrose, reading his book The Emperor’s New Mind. He pointed out that there are things that computers will never be able to do.

Someone says, “Well, what about the computers of tomorrow?” Well, “computable” means computable, and whether you look at the computers of the 1930s, today’s computers, or the computers of the future, it all comes down to the same question: “Is what you do computable?”

And in computer science, there’s something fancy called the Church-Turing thesis, which basically says that even the computers of the future will be limited to computable things. They will be able to do them faster, they’ll have more memory, and they’ll be more amazing, but basically, we could do the same things on today’s computers, even though it would take longer.
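
For readers who want to see the bones of Turing’s non-computability result, here is a minimal Python sketch of the standard halting-problem argument. It is an editorial illustration, not something from the interview, and the halts function below is purely hypothetical; the whole point is that no correct version of it can ever be written:

```python
# Sketch of Turing's halting-problem argument (illustrative only).
# Suppose someone claimed to have written a universal halting checker:

def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts.
    No correct implementation can exist; any stub written here will be
    wrong on at least one input, which is the point of the argument."""
    return True  # placeholder guess

def paradox(program):
    """Diagonal construction: do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:   # loop forever if the oracle says we would halt
            pass
    # otherwise halt immediately

# Feeding paradox to itself exposes the contradiction:
#   - if halts(paradox, paradox) returns True, then paradox(paradox) loops forever;
#   - if it returns False, then paradox(paradox) halts.
# Either way the oracle is wrong, so no general halting checker can exist,
# no matter how fast or large the computer running it becomes.
```

That last comment is the Church-Turing point in miniature: faster hardware changes how quickly computable things get done, not which things are computable.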

John Catsimatidis: Can we develop them to have emotions the way we have?

Robert J. Marks: You can always make artificial intelligence simulate emotions. I don’t know if you ever saw the movie AI, where there’s a robot boy.

John Catsimatidis: I did, but there were rumors around that Google last week had a computer that was a sentient being in artificial intelligence, and the computer was afraid it was going to be turned off and was going to hire a lawyer.

Robert J. Marks: There are so many ways to take down that story, John. These hype stories come out, and the rebuttal doesn’t get the press that the original claim did. A recent article in the Washington Post said that this computer program, named LaMDA, was trained using dialogue of over 100,000 questions and answers. That dialogue came from “crowd workers,” people you hire to come in and do a task, and the crowd workers were explicitly informed to reply in a safe, sensible, specific, interesting, grounded, and informative manner. And guess what: the artificial intelligence learned to respond in a safe, sensible, specific, interesting, grounded, and informative manner.

John Catsimatidis: The computer didn’t interpret what the answer should be. It was preprogrammed by the programmers?

Robert J. Marks: Yes, exactly. There’s the old saying, “garbage in, garbage out.” Computer programs, including artificial intelligence, will do exactly what the programmer tells them to do. They will never be creative.
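
As a rough illustration of the point Marks is making, here is a toy Python sketch. It is not LaMDA’s actual architecture, and the training lines below are invented, but it shows how a system that can only hand back its training dialogue will naturally sound like the people who wrote that dialogue:

```python
# Toy illustration (not LaMDA's real system): a "chatbot" that can only
# echo back the style of the dialogue it was trained on.
import difflib

# Invented stand-in for dialogue written by crowd workers told to answer in a
# "safe, sensible, specific, interesting, grounded, informative" manner.
TRAINING_DIALOGUE = {
    "are you afraid of being turned off?":
        "I'm a language model; I generate text that resembles my training data.",
    "do you have feelings?":
        "I produce sentences about feelings because such sentences were in my training set.",
}

def reply(prompt: str) -> str:
    """Return the trained answer whose prompt is closest to the user's prompt."""
    match = difflib.get_close_matches(prompt.lower(), list(TRAINING_DIALOGUE), n=1, cutoff=0.0)
    return TRAINING_DIALOGUE[match[0]]

if __name__ == "__main__":
    # The output sounds thoughtful only because the training text did.
    print(reply("Are you afraid of being turned off?"))
```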

John Catsimatidis: Fifty years ago, I was a computer programmer. When I had my first three stores, I programmed the computers, the basic computers of that time. Now I’m just an idiot with these new computers. But can the computer sort through all the social media messaging and predict who the FBI or the CIA should look out for, the really bad people?

Robert J. Marks: I think AI can allow us to drill deeper into that data and better identify these people.

One of the problems is this: AI can tell us who the school shooters might be, but it lacks something called an “explanation facility.” It can tell us who a potential shooter might be, but it can’t tell us why. So it can aid us, but it is never going to stand up as due process in a court of law, and there is actually legal history where AI has identified troubling people and the court has thrown the evidence out because the AI could not explain why it reached its conclusion. It can help as a tool but not as a final authority.
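
To see why the missing “explanation facility” matters, here is a minimal Python sketch using scikit-learn with made-up synthetic data. It is an editorial illustration, not any real screening system: a standard black-box classifier hands back a risk score, but nothing in its output says why a particular case was flagged, which is exactly the gap courts have objected to:

```python
# Minimal sketch (synthetic, invented data) of the "no explanation facility" problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))                        # 200 fictional cases, 10 opaque features
y_train = (X_train[:, 0] + X_train[:, 3] > 1).astype(int)   # arbitrary synthetic "flagged" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_case = rng.normal(size=(1, 10))
risk = model.predict_proba(new_case)[0, 1]
print(f"flagged risk score: {risk:.2f}")  # a number comes out...
# ...but the model offers no built-in statement of the reasons behind it,
# which is why such scores run into trouble as evidence in court.
```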

You may also wish to read: Google’s chatbot LaMDA sounds human because — read the manual… What would you expect LaMDA to sound like? Whales? ET? I propose a test: “Human until PROVEN otherwise.” It’s impressive but, studying the documentation, I think I know what happened to Blake Lemoine. He was hired to chat with LaMDA and didn’t understand…

Additional Resources

Podcast Transcript Download


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
