
A Type of Reasoning AI Can’t Replace

Abductive reasoning requires creativity, in addition to computation

Software engineer and philosopher William J. Littlefield II pointed out in a recent essay that there are three types of reasoning. Two of them we probably all learned in school: deductive and inductive reasoning. Computers can do both quite well.

Deductive reasoning:

Dogs are canines.
Tuffy is a dog.
Therefore Tuffy is a canine.

Early computers, says Littlefield, generally used deductive reasoning (which he thinks of as “top-down” reasoning). It enables powerful computers to beat humans at games like chess and Go by evaluating many more logical moves at once than a human can.
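Because deduction is mechanical, it is the easiest of the three to automate. Here is a minimal Python sketch of the Tuffy syllogism as rule application; the fact-and-rule encoding is invented purely for illustration:

```python
# A minimal sketch of deductive reasoning: mechanically apply the rule
# "dogs are canines" to the case "Tuffy is a dog".

facts = {("dog", "Tuffy")}
rules = [("dog", "canine")]  # if X is a dog, then X is a canine

def deduce(facts, rules):
    """Forward-chain: whenever a fact matches a rule's premise, add its conclusion."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for category, individual in list(derived):
                if category == premise and (conclusion, individual) not in derived:
                    derived.add((conclusion, individual))
                    changed = True
    return derived

print(deduce(facts, rules))  # includes ('canine', 'Tuffy'): Tuffy is a canine
```

Given the rule and the case, the conclusion follows necessarily; nothing is guessed.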

Inductive reasoning is, by contrast, “bottom-up” reasoning, moving from a series of relevant facts to a conclusion. For example:

The Club has held 60 swim meets, 20 at each of the venues below:

When the Club holds swim meets at Sandy Point, we get 80% approval on average.

When the Club holds swim meets at Stony Point, we get 60% approval on average.

When the Club holds swim meets at Rocky Point, we get 40% approval on average.

Conclusion: Club members prefer sandy beaches to other kinds.

Again, he says, the advent of new methods such as neural networks allowed powerful computers to assemble vast amounts of information (Big Data) and reason inductively from it.
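The swim-meet inference is just as easy to mechanize once the relevant records exist. A minimal sketch, with per-meet records invented to match the averages above:

```python
# A minimal sketch of inductive reasoning: generalize from recorded cases
# to a conclusion. 20 meets per venue, matching the example's averages.

meets = (
    [("Sandy Point", 0.80)] * 20
    + [("Stony Point", 0.60)] * 20
    + [("Rocky Point", 0.40)] * 20
)

def average_approval(meets):
    """Group approval ratings by venue and average them."""
    by_venue = {}
    for venue, approval in meets:
        by_venue.setdefault(venue, []).append(approval)
    return {venue: sum(ratings) / len(ratings) for venue, ratings in by_venue.items()}

averages = average_approval(meets)
preferred = max(averages, key=averages.get)
print(averages)  # Sandy Point: 0.8, Stony Point: 0.6, Rocky Point: 0.4
print(f"Inductive conclusion: members prefer {preferred}")  # Sandy Point
```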

However, Watson’s flop in medicine suggests that in situations where—unlike chess—there aren’t really “rules,” machines face considerable difficulty in deciding what data is really information. Perhaps Even Bigger Data will solve that problem. We shall see.

But, according to Littlefield, the third type of reasoning, abductive reasoning, works a bit differently:

Unlike induction or deduction, where we start with cases to make conclusions about a rule, or vice versa, with abduction, we generate a hypothesis to explain the relationship between a case and a rule. More concisely, in abductive reasoning, we make an educated guess.

William J. Littlefield II, “The Human Skills AI Can’t Replace” at Quillette

Abductive reasoning, originally described by the American philosopher Charles Sanders Peirce (1839–1914), is sometimes called an “inference to the best explanation,” as in the following example:

One morning you enter the kitchen to find a plate and cup on the table, with breadcrumbs and a pat of butter on it, and surrounded by a jar of jam, a pack of sugar, and an empty carton of milk. You conclude that one of your house-mates got up at night to make him- or herself a midnight snack and was too tired to clear the table. This, you think, best explains the scene you are facing. To be sure, it might be that someone burgled the house and took the time to have a bite while on the job, or a house-mate might have arranged the things on the table without having a midnight snack but just to make you believe that someone had a midnight snack. But these hypotheses strike you as providing much more contrived explanations of the data than the one you infer to.

Igor Douven, “Abduction” at Stanford Encyclopedia of Philosophy

Notice that the conclusion is not, strictly, a deduction and there is not enough evidence for an induction either. We simply choose the simplest explanation that accounts for all the facts, keeping in mind the possibility that new evidence may force us to reconsider our view.
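The selection step itself can be caricatured in code. In the toy sketch below, the candidate hypotheses, what each would explain, and the “contrivance” scores are all supplied by hand; the program only mechanizes the final comparison. Supplying those candidates and scores is precisely the part that resists automation:

```python
# A toy sketch of "inference to the best explanation": keep the hypotheses
# that account for every observation, then prefer the least contrived one.
# All hypotheses and scores here are invented for illustration.

observations = {"used plate and cup", "breadcrumbs and butter", "empty milk carton"}

# (hypothesis, observations it would explain, how contrived it is)
hypotheses = [
    ("a housemate had a midnight snack", observations, 1),
    ("a burglar paused for a bite",      observations, 5),
    ("a housemate staged the scene",     observations, 7),
    ("the cat knocked things around",    {"used plate and cup"}, 2),
]

def best_explanation(observations, hypotheses):
    """Keep hypotheses that account for every fact; prefer the least contrived."""
    viable = [(name, cost) for name, explains, cost in hypotheses
              if observations <= explains]
    return min(viable, key=lambda pair: pair[1])[0]

print(best_explanation(observations, hypotheses))  # 'a housemate had a midnight snack'
```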

Now, why can’t computers do that? Littlefield says that they would get stuck in an endless loop:

Part of what makes abduction challenging is that we have to infer some likely hypotheses from a truly infinite set of explanations…

The reason that this is significant is because when we are faced with complex problems, part of the way that we solve them is by tinkering. We play, trying several approaches, keeping our own value system fluid as we search for potential solutions. Specifically, we generate hypotheses. Where a computer might be stuck in an endless loop, iterating over infinite explanations, we use our value systems to quickly infer which explanations are both valid and likely. Peirce knew that abductive reasoning was central to how we tackle novel problems; in particular, he thought it was how scientists discover things. They observe unexpected phenomena and generate hypotheses that would explain why they would occur.

William J. Littlefield II, “The Human Skills AI Can’t Replace” at Quillette
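The endless-loop worry can be illustrated with another toy example (everything below is invented): exhaustively enumerating an unbounded stream of explanations never halts, while a plausibility cutoff, a crude stand-in for the “value system” Littlefield describes, returns a short list at once:

```python
# The space of possible explanations is unbounded, so naive enumeration
# never terminates. A plausibility score lets the search stop early.

from itertools import count

def all_explanations():
    """An endless stream of ever more elaborate explanations."""
    for n in count(1):
        plausibility = 1.0 / n  # more coincidences, less plausible
        yield f"explanation requiring {n} coincidence(s)", plausibility

# for explanation, _ in all_explanations():   # naive enumeration: never halts
#     ...

def likely_explanations(threshold=0.25):
    """Bounded search: stop once explanations drop below the plausibility cutoff."""
    kept = []
    for explanation, plausibility in all_explanations():
        if plausibility < threshold:  # everything later is even less plausible
            break
        kept.append(explanation)
    return kept

print(likely_explanations())  # a short, finite list instead of an endless loop
```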

Abductive reasoning, in other words, is not strictly a form of calculation so much as an educated guess: an assessment of probabilities based on experience. It plays an important role in creating hypotheses in the sciences:

For example, a pupil may have noticed that bread appears to grow mold more quickly in the bread bin than the fridge. Abductive reasoning leads the young researcher to assume that temperature determines the rate of mold growth, as the hypothesis that would best fit the evidence, if true.

This process of abductive reasoning holds true whether it is a school experiment or a postgraduate thesis about advanced astrophysics. Abductive thought allows researchers to maximize their time and resources by focusing on a realistic line of experimentation.

Abduction is seen very much as the starting point of the research process, giving a rational explanation, allowing deductive reasoning to dictate the exact experimental design.

Martyn Shuttleworth, “Abductive Reasoning” at Explorable.com

As you can see, abductive reasoning involves a certain amount of creativity because the suggested hypothesis must be developed as an idea, not just added up from existing pieces of information. And creativity isn’t something computers really do.

That’s one reason that philosopher Jay Richards argues, in The Human Advantage: The Future of American Work in an Age of Smart Machines, that AI will not put most humans out of work. Rather, it will change the nature of jobs, typically rewarding creativity, flexibility, and a variety of other traits that cannot be calculated or automated.


Further reading on computers and thought processes from Eric Holloway:

The flawed logic behind thinking computers:

Part I: A program that is intelligent must do more than reproduce human behavior

Part II: There is another way to prove a negative besides exhaustively enumerating the possibilities

and

Part III: No program can discover new mathematical truths outside the limits of its code

Will artificial intelligence design artificial superintelligence?

Artificial intelligence is impossible

and

Human intelligence as a Halting Oracle

