Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Abduction

Confused robot

If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?

With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make the decisions they do, and neither do they

In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal of getting them to think like people elusive. This brings us to the second problem, which ended up spawning an entire field, known as “Explainable AI.” Not only do neural networks not know what they’re doing when they do it; they also can’t, in general, explain to their designers or users why they made such-and-such a decision. They’re a black box; in other words, they are obstinately opaque to any attempt at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that with image recognition tasks like facial recognition, the network can’t explain Read More ›
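To make the opacity concrete, here is a minimal Python sketch (my own illustration, not from the article): a toy network with stand-in “trained” weights happily issues a classification, but the only “explanation” it can surrender is a matrix of floats with no conceptual content.

    # A minimal sketch: random weights stand in for trained ones.
    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # stand-in "trained" weights, layer 1
    W2 = rng.normal(size=(8, 2))   # stand-in "trained" weights, layer 2

    def predict(x):
        hidden = np.maximum(0, x @ W1)   # ReLU layer
        return (hidden @ W2).argmax()    # the network's decision

    x = rng.normal(size=4)
    print("decision:", predict(x))
    print("the 'explanation' on offer:", W1.round(2))  # numbers, not reasons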

Giraffe looking through a plane's window

Atheist Claims About Logical Fallacies Often Just Mean: Shut Up!

In the recent debate, Matt Dillahunty accuses theists of “the fallacy of the argument from personal incredulity” because we examine his claims and find them incredible

What does atheist Matt Dillahunty mean when he accuses theists of “the fallacy of the argument from personal incredulity”? Atheist rhetoric is a mishmash of ignorance, denial, and pretense, often mingled with explicit or implicit efforts at censorship. Atheists travel in herds; contrary to their own inflated sense of their ‘freethought’ and ‘skepticism’, they are the most gullible of ideologues. In debates with atheists, specific themes show up again and again, and the accusation of ‘the fallacy of the argument from personal incredulity’ is among the most common, usually aimed at Christians who challenge atheist arguments. Matt Dillahunty invoked it in our recent debate. It’s worthwhile examining what this ‘fallacy’ is and why atheists Read More ›

Detective’s kit from Sherlock Holmes’s time

Abduction: A Thinking Skill You Can Do But Computers Can’t

A Norwegian police detective fills us in on how to use abduction for better thinking

Ivar Fahsing, detective chief superintendent at the Norwegian Police University College, has “worked on some of the worst crimes in Norway for 30 years.” He had to hone his thinking skills but, he says, many of us have never learned to “make safe judgments under pressure.” He is also convinced that any of us can improve our skills, and he offers some help from his experience investigating crimes. One skill he focuses on is abduction, which was Sherlock Holmes’s favorite method. Yes, Holmes always tells his sidekick and foil, Dr. Watson, that he uses deduction, but he doesn’t: In the Sherlock Holmes novels, our titular hero continuously assails Dr Watson, a man of science, about the merits of deductive Read More ›
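For readers who want the contrast pinned down, here is a toy Python sketch (my own framing of Peirce’s classic bean example; the function names are mine, not Fahsing’s): deduction runs from rule and case to a guaranteed result, while abduction runs backwards from an observed result to a merely plausible case.

    # Peirce's bean example. The rule: beans from this bag are white.
    RULE = {"from_this_bag": "white"}

    def deduce(case):
        """Deduction: rule + case -> result. If the premises hold,
        the conclusion is guaranteed."""
        return RULE[case]

    def abduce(result):
        """Abduction: rule + result -> candidate case. Any case whose
        predicted result matches the observation qualifies, and
        explanations outside the rule book (someone bleached the
        beans) always remain."""
        return [case for case, predicted in RULE.items() if predicted == result]

    print(deduce("from_this_bag"))  # 'white' -- certain
    print(abduce("white"))          # ['from_this_bag'] -- an educated guess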

Detective board with photos of suspected criminals, crime scenes and evidence with red threads

Why Computers Will Likely Never Perform Abductive Inferences

As Erik Larson points out in The Myth of Artificial Intelligence, what computers “know” must be painstakingly programmed

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence. See my earlier posts, here, here, here, here, and here. Larson did an interesting podcast with the Brookings Institution through its Lawfare Blog shortly after the release of his book. It’s well worth a listen, and in it Larson elucidates many of the key points in his book. The one place in the interview where I wish he had elaborated further was the question of abductive inference (aka retroductive inference, or inference to the best explanation). For me, the key to understanding why computers cannot, and most likely never will, perform abductive inferences is the problem of the underdetermination of explanation by data. This may seem like a mouthful, but the idea is straightforward. Read More ›
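A small numerical sketch (my own construction, not Larson’s or Dembski’s) shows what underdetermination means in practice: two different polynomial “explanations” fit the same five observations exactly, so the data alone cannot choose between them.

    # Underdetermination of explanation by data, in miniature.
    import numpy as np

    x = np.array([0., 1., 2., 3., 4.])
    y = np.array([1., 3., 2., 5., 4.])   # five observations

    # Hypothesis A: the unique degree-4 polynomial through all five points.
    a = np.polyfit(x, y, 4)

    # Hypothesis B: A plus 7 * t(t-1)(t-2)(t-3)(t-4), a polynomial that
    # vanishes at every observed point. B fits the data just as exactly,
    # yet predicts very different values everywhere in between.
    b = np.polyadd(a, 7.0 * np.poly(x))

    for name, c in (("A", a), ("B", b)):
        print(name, "max fit error:", np.abs(np.polyval(c, x) - y).max())
    print("A(2.5) =", np.polyval(a, 2.5), " B(2.5) =", np.polyval(b, 2.5))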

White robot using floating digital network connections with dots and lines 3D rendering

No AI Overlords? What Is Larson Arguing and Why Does It Matter?

As information theorist William Dembski explains, computers can’t do some things by their very nature

Yesterday, we were looking at the significance of AI researcher Erik J. Larson’s new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, contrasting it with claims that AI will merge with or replace us. Some such claims are made by industry insiders like Ray Kurzweil. But more often we hear them from science celebs like the late Stephen Hawking and Richard Dawkins, who, on these topics, are more known than knowledgeable. So why does Larson think they are wrong? He offers two arguments. The first, as information theorist William Dembski explains, is that there are some kinds of thinking that, by their nature, computers don’t do: With regard to inference, he shows that a form Read More ›

Head of a robot girl

A Type of Reasoning AI Can’t Replace

Abductive reasoning requires creativity, in addition to computation

AI, says William Littlefield, would get stuck in an endless loop with abductive reasoning, which is an inference to the best explanation, or an educated guess. Yet abduction plays an important role in creating hypotheses in the sciences.

Read More ›
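To see why the loop never ends, consider this Python sketch (my own toy, not Littlefield’s): brute-force “inference to the best explanation” over an unbounded hypothesis space can only halt if someone imposes a cutoff from outside, which is exactly the creative judgment the enumeration itself cannot supply.

    # Naive abduction by enumeration over an unbounded hypothesis space.
    from itertools import count

    def hypotheses():
        """Endless stream of candidate explanations, each a little
        more elaborate than the last."""
        for n in count(1):
            yield f"hypothesis-{n}"

    def naive_abduction(explains, budget=None):
        """Pick the 'best' explanation by enumeration. Without an
        externally imposed budget, this search never terminates: the
        generator never runs dry, and 'best so far' can always be
        contested by one more hypothesis."""
        best, best_score = None, float("-inf")
        for i, h in enumerate(hypotheses()):
            if budget is not None and i >= budget:
                return best              # halting requires the cutoff
            score = explains(h)
            if score > best_score:
                best, best_score = h, score

    # Scoring by string length is a placeholder for "explanatory fit".
    print(naive_abduction(explains=len, budget=1000))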