Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Occam's razor

Human brain digital illustration. Electrical activity, flashes and lightning on a blue background.

An Alternative to the Tractable Cognition Thesis

The Tractable Cognition Thesis leaves a gap in the logic when it comes to NP-Complete problems. How can we close it?

The Tractable Cognition Thesis is the proposal that every process in the brain can be modeled by a polynomial time algorithm. This includes situations where the brain solves problems drawn from NP-Complete domains; in those cases, it is assumed that the brain is solving only the subset of the domain whose instances can be solved with a polynomial time algorithm. With these assumptions in place, the overall implication is that some specific polynomial time algorithm can emulate every process in the brain. However, there is a gap in the logic when it comes to NP-Complete problems. It is well known that humans solve many problems that are, in the general case, NP-Complete. Route planning, Read More ›
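The distinction matters in practice. As a rough sketch (my own illustration, not from the article), route planning is NP-Complete in general, so an exact answer takes exponential time, while a simple polynomial-time heuristic handles many everyday instances; the thesis assumes the brain confines itself to that kind of tractable subset:

```python
# A minimal sketch (not from the article): exact route planning is
# NP-Complete in general, while a polynomial-time heuristic covers the
# kind of restricted subset the Tractable Cognition Thesis assumes.
from itertools import permutations
import math

cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(order):
    # Total length of a closed tour visiting the cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact answer: check all n! orderings, which is intractable as n grows.
best = min(permutations(cities), key=tour_length)

# Nearest-neighbour heuristic: O(n^2), polynomial but only approximate.
def nearest_neighbour_route(start="A"):
    route, remaining = [start], set(cities) - {start}
    while remaining:
        nxt = min(remaining, key=lambda c: dist(cities[route[-1]], cities[c]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

print(tour_length(best), tour_length(nearest_neighbour_route()))
```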

Playing cards for poker and gambling, isolated on white background with clipping path

Machine Learning: Using Occam’s Razor to Generalize

A simple thought experiment shows us how

This approach is contrary to Fisher's method, in which we formulate all theories before encountering the data. Here, we came up with the model after we saw the cards we drew.

Read More ›
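As a rough sketch of the contrast (my own illustration; the card draw, seed, and variable names are assumptions, not the article's), a hypothesis stated before the cards are drawn has a meaningful chance probability, whereas a "model" written down after looking at the hand always fits and so tells us nothing:

```python
# A minimal sketch (my own illustration, not the article's experiment) of
# the difference between a pre-registered theory and a data-snooped one.
import random

ranks = "23456789TJQKA"
suits = "shdc"
deck = [r + s for r in ranks for s in suits]

random.seed(0)
hand = random.sample(deck, 5)

# Fisher-style: the theory ("all five cards are hearts") is fixed before
# the draw, so whether it fits carries real evidential weight.
pre_registered_fits = all(card[1] == "h" for card in hand)

# Data-snooped: the "theory" is just a description of whatever we drew.
# It fits by construction, so the fit tells us nothing about chance.
post_hoc_model = set(hand)
snooped_fits = set(hand) == post_hoc_model  # trivially True

print(hand, pre_registered_fits, snooped_fits)
```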
Brush and razor for shaving beard. Concept background of hair salon men, barber shop

Occam’s Razor Can Shave Away Data Snooping

The greater an object's information content, the lower its probability.

One technique to avoid data snooping is based on the intersection of information theory and probability: an object's probability is related to its information content. The greater an object's information content, the lower its probability. We measure a model's information content as the difference, in bits, between the data's negative log probability of occurring by chance and the number of bits required to store the model. Raising two to the negative of that difference gives the model's probability of arising by chance. If the data cannot be compressed, these two values are equal; then the model has zero information and we cannot know whether the data was generated by chance or not. For a dataset that is incompressible and uninformative, swirl some tea Read More ›
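In code, that bookkeeping might look like the following sketch (the 64-flip dataset and the 8-bit model size are assumptions of mine, not figures from the article): the chance surprisal of the data minus the bits needed to store the model gives the information, and two to the negative of that difference gives the chance probability.

```python
# A rough sketch, under assumptions of mine (a 64-flip dataset and an
# 8-bit model), of the bookkeeping described above.
data = "0" * 64                 # 64 coin flips, all the same face
chance_surprisal_bits = 64      # -log2(0.5 ** 64): bits to get this by chance

model_bits = 8                  # assumed size of a model like "repeat '0' 64 times"

information = chance_surprisal_bits - model_bits    # 56 bits of information
p_chance = 2 ** (-information)                       # about 1.4e-17

# For an incompressible string the shortest model is the data itself,
# so the two values are equal, information is zero, and chance cannot
# be ruled out.
incompressible_model_bits = 64
print(information, p_chance, chance_surprisal_bits - incompressible_model_bits)
```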


Successful Generalization Is a Key to Learning

In machine learning, Solomonoff induction helps us decide how successful a generalization is

In the model of generalization set out in the paper, imperfect models can get better scores, but they are discounted according to the amount of error they make.

Read More ›
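One way to picture that kind of scoring (a minimal sketch under my own assumptions; the two-bit error penalty and the function name are illustrative, not the paper's rule): shorter models carry a larger prior weight, and every prediction error discounts that weight further, so an imperfect but concise model can still outscore a perfect but verbose one.

```python
# A minimal sketch (assumptions mine, not the paper's scoring rule):
# shorter models get a larger prior weight, and each prediction error
# discounts the score by a further penalty in bits.
def score(model_bits, n_errors, error_penalty_bits=2.0):
    total_bits = model_bits + error_penalty_bits * n_errors
    return 2.0 ** (-total_bits)

# A concise but slightly wrong model can still outscore a verbose perfect one.
print(score(model_bits=10, n_errors=3))   # 2**-16, about 1.5e-5
print(score(model_bits=40, n_errors=0))   # 2**-40, about 9.1e-13
```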