Information theory is a deep field that is responsible for our modern internet and satellite TV. The field was pioneered by Claude Shannon to measure our ability to communicate meaning. But beyond powering the information revolution, information theory is widely applicable elsewhere. Once you understand the basic intuition, you see applications popping up all over the place. To prove the point, I’ll show how we can apply information theory to gain insight into the very low-tech world of running. I’ve been running off and on for many years, and I’ve noticed that information theory describes a good run. First of all, what is a good run? A good run is when your body feels as if Read More ›
While many are concerned about all the jobs that AI will eliminate, no one is talking about the fact that AI needs humans. Information is the fuel that powers AI, and only humans can create this information. So the real revolution that AI will bring is not data exploitation but the empowerment of people all around the world to fuel our economy through the creation of information. What’s bad news for authoritarian groups like the Chinese Communist Party is good news for everyone else.
First of all, COVID-19 clearly does not attack the globe uniformly by latitude. The second standout feature is that it targets the northern hemisphere. How can a disease’s spread be affected by hemisphere, let alone latitude? Let’s look a little deeper for some clues.
As a jokester recently demonstrated, even “shirts without stripes” is a fundamental, unsolvable problem for computers
April 21, 2020
At first, “shirts without stripes” might not seem like much of an issue but it turns out that many important and interesting problems for computers fundamentally reduce to this “halting problem.” And understanding human language is one of these problems.
The “broken checkerboard” is not the ultimate scientific test for intelligence that we need. But it is a truly scientific test in the sense that it is capable of falsifying the theory that the mind is reducible to computation.
We often hear that what’s hard for humans is easy for computers. But it turns out that many kinds of problems are exceedingly hard for computers to solve. This class of problems, known as NP-Complete (NPC), was independently discovered by Stephen Cook and Leonid Levin.
Because AI research is based on a fundamental assumption that has not been scientifically tested—that the human mind can be reduced to a computer—the research itself cannot be said to be scientific.
Gödel’s discovery brought back a sense of wonder to mathematics and to the rest of human knowledge. His incompleteness theorem underlies the fact that human investigation can never exhaust all that can be known. Every discovery builds a path to a new discovery.
According to analytical philosopher Richard Johns, we cannot represent ourselves completely mathematically, so we cannot generate fundamentally contradictory thoughts about ourselves. Some part of us lies beyond mathematics. An android would not be so lucky, as Captain Kirk realized in an early Star Trek episode.
Recent experiments in entanglement of particles in time as well as space show that our entire universe is imbued with final causality within its very fabric. This final causality must come from some source beyond the universe.
AI can certainly help scientists. But to understand why AI can’t do science on its own, we should take a look at the NP-Hard Problem in computer science. The “Hard” is in the name of the problem for a reason…
The intensity of my mental processing brought about an observable brain state. The causality did not go in the other direction; the magenta brain state did not increase my conscious process. This type of observation causes a problem for those hoping to duplicate human intelligence in a computer program.
We want our calculation to demonstrate the notion that if we have high accuracy and a small model, then we have high confidence of generalizing. Intuitively, then, we add the model size to the accuracy and subtract this quantity from the entropy of having absolutely no information about the problem.
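The calculation above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the article’s actual formula: it assumes all three quantities are measured in bits, and it treats “accuracy” as the number of bits needed to encode the model’s mistakes (fewer bits means a more accurate model), so that a small, accurate model earns the highest confidence score. The function and parameter names are illustrative.

```python
import math

def generalization_confidence(num_outcomes, model_bits, accuracy_bits):
    """Back-of-the-envelope confidence of generalizing.

    Start from the entropy of knowing absolutely nothing about the
    problem (log2 of the number of possible outcomes), then subtract
    the model's size plus its accuracy cost, both in bits.  A high
    score means the model compresses the problem well, which is the
    intuition for expecting it to generalize.
    """
    baseline_entropy = math.log2(num_outcomes)  # entropy with no information
    return baseline_entropy - (model_bits + accuracy_bits)

# A small, accurate model on a large outcome space scores high...
small_model = generalization_confidence(2**20, model_bits=3, accuracy_bits=2)
# ...while a bloated model with the same accuracy scores much lower.
big_model = generalization_confidence(2**20, model_bits=18, accuracy_bits=2)
```

On these illustrative numbers the small model keeps most of the baseline 20 bits of entropy as “explained” compression, while the large model uses nearly all of it up on its own description.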
Google’s quantum supremacy claim is certainly fascinating and controversial, but even if true, it ultimately only amounts to an incremental and even inconsequential improvement in the state of AI and ML, due to the still-unmet need for a halting oracle.
One technique to avoid data snooping is based on the intersection of information theory and probability: An object’s probability is related to its information content. The greater an object’s information content, the lower its probability. We measure a model’s information content as the logarithmic difference between the probability that the data occurred by chance and the number of bits required to store the model. The negative exponential of the difference is the model’s probability of occurring by chance. If the data cannot be compressed, then these two values are equal. Then the model has zero information and we cannot know if the data was generated by chance or not. For a dataset that is incompressible and uninformative, swirl some tea Read More ›
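One way to read that recipe as code (a hedged sketch: the function names and the bit counts in the example are illustrative, not taken from the article) is below. Information content is the gap, in bits, between the chance encoding of the data and the stored model; two raised to the negative of that gap is the model’s probability of occurring by chance.

```python
def model_information(data_chance_log2_prob, model_bits):
    """Information content of a model, in bits.

    data_chance_log2_prob: log2 of the probability that the data
    occurred by chance (a negative number), so its negation is the
    number of bits needed to encode the data with no model at all.
    The model's information is how many of those bits it saves.
    """
    return -data_chance_log2_prob - model_bits

def chance_probability(info_bits):
    """The model's probability of occurring by chance: 2^(-info)."""
    return 2.0 ** (-info_bits)

# Illustrative numbers: data that needs 32 bits under the chance
# encoding, captured by a model stored in only 10 bits.
info = model_information(data_chance_log2_prob=-32, model_bits=10)
p = chance_probability(info)   # very small: unlikely to be chance

# Incompressible data: the model is as large as the chance encoding,
# so it carries zero information and is indistinguishable from chance.
info_zero = model_information(-32, 32)
p_zero = chance_probability(info_zero)
```

In the second case the probability comes out to exactly 1, matching the excerpt’s point: when the data cannot be compressed, the model tells us nothing about whether the data was generated by chance.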