
Machine learning: Harnessing the Power of Empirical Generalized Information (EGI)
We can calculate a probability bound from entropy. Entropy also happens to be an upper bound on the binomial distribution. We want our calculation to capture the intuition that if we have high accuracy and a small model, then we have high confidence of generalizing. Accordingly, we add the model size to the accuracy term and subtract that sum from the entropy of having absolutely no information about the problem.
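One way the calculation above might be sketched, under my own assumptions: the "entropy of no information" about n binary labels is n bits, the entropy bound on the binomial is log2 C(n, k) ≤ n·H(k/n), and the function name `egi_bits` and the exact bookkeeping are illustrative, not the article's definitions:

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy, in bits, of a Bernoulli(p) outcome."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def egi_bits(n_samples: int, n_errors: int, model_bits: int) -> float:
    """Illustrative EGI-style quantity, in bits.

    Knowing nothing about n binary labels costs n bits. The entropy
    bound on the binomial, log2 C(n, k) <= n * H(k/n), prices the
    error pattern; model_bits is the model's description length.
    Whatever remains is evidence of generalization: large positive
    values mean a small model achieved high accuracy.
    """
    error_entropy = n_samples * binary_entropy(n_errors / n_samples)
    return n_samples - model_bits - error_entropy
```

A small, perfectly accurate model scores high (`egi_bits(100, 0, 10)` gives 90.0 bits), while a model no better than chance scores at or below zero, matching the intuition in the excerpt.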
Read More ›