Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Abduction

A set of detective items from Sherlock Holmes's time

Abduction: A Thinking Skill You Can Do But Computers Can’t

A Norwegian police detective fills us in on how to use abduction for better thinking

Ivar Fahsing, detective chief superintendent at the Norwegian Police University College, has “worked on some of the worst crimes in Norway for 30 years.” He had to hone his thinking skills but, he says, many of us have never learned to “make safe judgments under pressure.” He is also convinced that any of us can improve our skills and he offers some help from his experience investigating crimes. One skill he focuses on is abduction, which was Sherlock Holmes’s favorite method. Yes, Holmes always tells his sidekick and foil, Dr. Watson, that he uses deduction — but he doesn’t: In the Sherlock Holmes novels, our titular hero continuously assails Dr Watson, a man of science, about the merits of deductive…

Detective board with photos of suspected criminals, crime scenes, and evidence connected with red threads

Why Computers Will Likely Never Perform Abductive Inferences

As Erik Larson points out in The Myth of Artificial Intelligence, what computers “know” must be painstakingly programmed

I’ve been reviewing philosopher and programmer Erik Larson’s The Myth of Artificial Intelligence; see my earlier posts in this series. Larson did an interesting podcast with the Brookings Institution through its Lawfare Blog shortly after the release of his book. It’s well worth a listen, and in that interview Larson elucidates many of the key points in his book. The one place in the interview where I wish he had elaborated further was on the question of abductive inference (aka retroductive inference or inference to the best explanation). For me, the key to understanding why computers cannot, and most likely never will be able to, perform abductive inferences is the problem of the underdetermination of explanation by data. This may seem like a mouthful, but the idea is straightforward.…
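Underdetermination is easy to see in a toy setting. The sketch below is one way to picture the point, not Larson’s own example: the data points and rival hypothesis functions are invented for illustration. Each hypothesis reproduces the same finite data set perfectly, yet they disagree about everything beyond it, so the data alone cannot tell a program which one is the best explanation.

```python
# Toy illustration of underdetermination of explanation by data:
# several incompatible hypotheses each fit the same observations exactly.

data = [(1, 1), (2, 2), (3, 3)]  # finite observations (x, y)

# Three rival "explanations" of the data (hypothetical examples):
hypotheses = {
    "h1: y = x":                    lambda x: x,
    "h2: y = x + (x-1)(x-2)(x-3)":  lambda x: x + (x - 1) * (x - 2) * (x - 3),
    "h3: y = x + 7(x-1)(x-2)(x-3)": lambda x: x + 7 * (x - 1) * (x - 2) * (x - 3),
}

for name, h in hypotheses.items():
    fits = all(h(x) == y for x, y in data)
    print(f"{name}  fits all data: {fits},  prediction at x=4: {h(4)}")

# All three fit the observations perfectly, yet they disagree wildly at x = 4.
# Nothing in the data itself selects the "best" explanation; that choice
# requires background knowledge the program was never given.
```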

White robot using floating digital network connections with dots and lines, 3D rendering

No AI Overlords?: What Is Larson Arguing and Why Does It Matter?

As information theorist William Dembski explains, computers can’t do some things by their very nature

Yesterday, we were looking at the significance of AI researcher Erik J. Larson’s new book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, contrasting it with claims that AI will merge with or replace us. Some such claims are made by industry insiders like Ray Kurzweil. But more often we hear them from science celebs like the late Stephen Hawking and Richard Dawkins, who, on these topics, are more known than knowledgeable. So why does Larson think they are wrong? He offers two arguments. The first, as information theorist William Dembski explains, is that there are some kinds of thinking that, by their nature, computers don’t do: With regard to inference, he shows that a form…

Head of a robot girl

A Type of Reasoning AI Can’t Replace

Abductive reasoning requires creativity, in addition to computation

AI, says William Littlefield, would get stuck in an endless loop trying to perform abductive reasoning, which is inference to the best explanation, or an educated guess. Yet abduction plays an important role in creating hypotheses in the sciences.
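For concreteness, here is a minimal sketch of the part of abduction a program can do once the creative work is already done: ranking candidate explanations it has been handed. The observation, hypotheses, and plausibility scores below are invented for illustration; the point is that generating the candidates in the first place is the open-ended step the excerpt has in view.

```python
# Minimal sketch: once candidate explanations and their plausibility scores
# are supplied, "inference to the best explanation" reduces to a ranking.
# (The observation, hypotheses, and scores are invented for illustration.)

observation = "the lawn is wet"

candidate_explanations = {
    "it rained overnight": 0.6,
    "the sprinkler ran":   0.3,
    "a water main burst":  0.05,
}

best = max(candidate_explanations, key=candidate_explanations.get)
print(f"Best available explanation for '{observation}': {best}")

# The hard, open-ended step is producing candidate_explanations at all.
# That set is not bounded in advance, which is why the excerpt argues a
# purely computational search would loop without a principled stopping point.
```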
