Mind Matters Natural and Artificial Intelligence News and Analysis

Researchers: Learning by Inference Beats Learning by Association

They found that seeing the patterns underlying events (inference) allowed test volunteers to make predictions about future events

When we learn by association, we notice that some things occur together. For example, suppose three items are frequently seen together on a kitchen table — salt, ketchup, and vinegar. So we might learn to associate salt and vinegar with ketchup. But what, if any, is the relationship?

When we infer information about the world around us, we don’t just associate items with each other. We see the pattern underlying them. By seeing the pattern in the group of condiments, we learn more: In this case, we infer that dinner will likely be fish and chips.

If the group had been plum sauce, soya sauce, and Sriracha sauce, we would infer that fish and chips won’t be served this time; the pattern suggests that East Asian food is much more likely.
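The contrast can be sketched in code. This is a toy illustration, not anything from the study: the item lists, "cuisine" labels, and dish predictions are all invented. Association alone only counts co-occurrences; inference posits a hidden variable (the cuisine) that explains the items and predicts something unseen (the dish).

```python
# Toy sketch: association vs. inference. All item names, cuisine labels,
# and dishes are invented for illustration.

from collections import Counter

# Association: count which items co-occur with ketchup across observed tables.
tables = [
    {"salt", "ketchup", "vinegar"},
    {"salt", "ketchup", "vinegar"},
    {"plum sauce", "soya sauce", "sriracha"},
]
cooccur = Counter()
for table in tables:
    if "ketchup" in table:
        cooccur.update(table - {"ketchup"})
# Association tells us salt and vinegar go with ketchup, but nothing more.

# Inference: posit a hidden variable (the cuisine) behind the items,
# then predict an unseen outcome (the dish) from it.
cuisine_of = {
    "salt": "british", "ketchup": "british", "vinegar": "british",
    "plum sauce": "east asian", "soya sauce": "east asian",
    "sriracha": "east asian",
}
likely_dish = {"british": "fish and chips", "east asian": "stir-fry or dumplings"}

def infer_dish(items):
    # Vote for the cuisine that best explains the observed items.
    votes = Counter(cuisine_of[i] for i in items if i in cuisine_of)
    cuisine, _ = votes.most_common(1)[0]
    return likely_dish[cuisine]

print(infer_dish({"salt", "ketchup", "vinegar"}))  # fish and chips
```

The inference step generalizes: given any mix of condiments, the hidden "cuisine" variable licenses a prediction about a dish that was never directly observed alongside them.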

A group of education researchers at the University of California – Davis argues that teaching students to make inferences is a better education strategy than teaching them to make associations alone:

Most studies of learning have focused on learning by association — how animals learn to associate one thing with another, through trial and error. The difference between what was expected and what actually happened drives learning in such cases.

When there is a hidden structure behind those associations, you can use direct observations to infer indirect, unseen outcomes, leaping ahead of the chain of direct association…

Another example is an investor inferring that the drop in Facebook shares can be attributed to a tech bubble, suggesting that Microsoft shares will likely drop soon, too.

“Knowing this hidden relationship means you can learn a lot faster,” [Eric] Boorman said.

University of California – Davis, “Understanding learning by inference” at ScienceDaily (June 16, 2022) The paper is open access.

There is no direct relationship between Facebook's shares and Microsoft's shares; rather, the investor infers a pattern (a tech bubble) that may govern the fate of both. Knowing that pattern enables the investor to suspect that Hewlett-Packard's shares might decline in value too.
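The trial-and-error learning the excerpt describes, in which "the difference between what was expected and what actually happened drives learning," is commonly modeled with a prediction-error ("delta") update rule. Here is a minimal sketch; the learning rate and reward values are illustrative choices, not figures from the study.

```python
# Delta-rule sketch of learning by association: the prediction error
# (actual outcome minus expected outcome) drives each update.

def delta_rule(rewards, alpha=0.1):
    """Update a single expectation v after each observed reward."""
    v = 0.0
    for r in rewards:
        error = r - v          # what happened minus what was expected
        v += alpha * error     # learning is proportional to the surprise
    return v

# Repeated pairings gradually strengthen the association; as the outcome
# becomes predictable, the surprise shrinks and learning slows.
print(delta_rule([1.0] * 50))  # approaches 1.0
```

Note what this learner cannot do: it only tracks the pairings it directly experiences. Inferring a hidden structure behind the pairings, as in the tech-bubble example, requires going beyond this kind of update.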

The researchers tested learning by inference against learning by association, using volunteers who could win gift cards if they made inferences successfully. In a second trial, some of the volunteers had their brains scanned while performing the task. Their brains appeared to be creating a “cognitive map” of outcomes.

The fMRI results show the brain representing different outcomes in relation to each other, Boorman said. This representation allows for those “aha” moments.

University of California – Davis, “Understanding learning by inference” at ScienceDaily (June 16, 2022) The paper is open access.

The team hopes to use these findings to introduce an inference learning approach in addition to the more common approach of learning by association.

Interestingly, inference — sometimes called abductive inference — is a type of reasoning computers don’t do. William Dembski points out, in a review of Erik J. Larson’s The Myth of Artificial Intelligence (2021), that what computers “know” must be painstakingly programmed; they require complete data to come to a conclusion, whereas humans can often work well with incomplete data:

if you are going to get a computer to achieve anything like understanding in some subject area, it needs a lot of knowledge. That knowledge, in all the cases we know, needs to be painstakingly programmed. This is true even of machine learning situations where the underlying knowledge framework needs to be explicitly programmed (for instance, even Go programs that achieve world class playing status need many rules and heuristics explicitly programmed).

Humans, on the other hand, need none of this. On the basis of very limited or incomplete data, we nonetheless come to the right conclusion about many things (yes, we are fallible, but the miracle is that we are right so often).

William Dembski, “Why computers will likely never perform abductive inferences” at Mind Matters News (April 24, 2021)
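Abductive inference is often glossed as "reasoning to the best explanation": from incomplete evidence, choose the hypothesis that accounts for the most. A toy sketch can show the shape of the idea, though not its open-ended difficulty; all the hypotheses and observations below are invented for illustration.

```python
# Toy abduction sketch (hypotheses and evidence are invented): pick the
# hypothesis that best explains the observations, even though the
# evidence is incomplete.

hypotheses = {
    "it rained overnight":     {"wet streets", "wet lawn", "clouds"},
    "a sprinkler ran":         {"wet lawn"},
    "a street cleaner passed": {"wet streets"},
}

def best_explanation(observations):
    # Prefer the hypothesis covering the most observations, breaking
    # ties toward the more specific (smaller) hypothesis.
    def score(h):
        explains = hypotheses[h]
        return (len(observations & explains), -len(explains))
    return max(hypotheses, key=score)

print(best_explanation({"wet streets", "wet lawn"}))  # it rained overnight
```

The hard part, which this sketch sidesteps and which Dembski's point turns on, is that the candidate hypotheses here had to be supplied in advance. Humans generate plausible explanations on the fly from incomplete data; a program can only rank the ones it was given.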

Incidentally, the iconic fictional detective Sherlock Holmes is often thought to have used deduction to solve his cases, but in reality he mainly used abductive inference. Perhaps students will also enjoy their schoolwork more if they master the technique of inferring the patterns underlying events.


You may also wish to read:

Abduction: A thinking skill you have but computers can’t. A Norwegian police detective fills us in on how to use abduction for better thinking. Abductive reasoning, reasoning from effects to causes, is a powerful mechanism for reasoning in the absence of complete knowledge.

