How Can We Measure Meaningful Information?

Neither randomness nor order alone creates meaning. So how can we identify communications?
Are randomness and order different? Intuitively, we think they are.
Dropping a handful of toothpicks on the table seems to produce a different sort of pattern than spelling out a word with toothpicks. We call the dropped toothpicks “random” but we call the toothpicks spelling out a word “orderly.” (PICK, for example, can be spelled, block style, with thirteen unbroken toothpicks.)
Surprisingly, this intuitive distinction is harder to make in math and the sciences. To understand why, consider Claude Shannon’s theory of information, which was developed to optimize communication systems, and Andrey Kolmogorov’s theory of complexity, and see what they don’t tell us.
Shannon defines information in terms of probability. A highly probable event carries little information and a low-probability event carries a lot of information. If two different events have the same probability of occurrence, then they have the same amount of information. Thus, according to Shannon’s theory, thirteen dropped toothpicks have the same amount of information as thirteen toothpicks spelling out a word.
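Shannon’s equal-probability conclusion can be sketched in a few lines of Python. (The uniform model over toothpick arrangements below is an illustrative assumption, not part of Shannon’s theory.)

```python
import math

def shannon_info(p: float) -> float:
    """Shannon self-information in bits: I(x) = -log2 P(x)."""
    return -math.log2(p)

# Illustrative model: suppose each of 13 toothpicks lands in one of
# 1,000 distinguishable configurations, all equally likely.
p_any_arrangement = (1 / 1000) ** 13

# A random scatter and the arrangement spelling PICK are both single
# outcomes with the same probability, so Shannon assigns them the
# same number of bits.
bits_random = shannon_info(p_any_arrangement)
bits_word = shannon_info(p_any_arrangement)
assert bits_random == bits_word
```

Because the measure depends only on probability, nothing in it distinguishes the scatter from the word.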
The Shannon approach probably strikes you as counterintuitive, and with good reason. The problem is that probability is defined over a collection of events (thirteen toothpicks are dropped), not for individual events. So what we need is a way to describe the orderliness of an individual event (the toothpicks spell out a word).
Will Kolmogorov complexity help? Kolmogorov complexity states that the information in an event is a function of how concisely the event can be described. Random events do not have a concise description but orderly events do. Thus, according to Kolmogorov complexity theory, random events contain more information than orderly events. For example, it is harder to describe where and how the thirteen randomly dropped toothpicks land than to say “They spell out the word PICK.” So, while Kolmogorov complexity allows us to distinguish between random and orderly events, it still counters our intuition that orderly events contain more information than disorderly events.
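Kolmogorov complexity is uncomputable in general, but a compressor gives a workable upper bound on description length. A small Python sketch (with zlib standing in for the ideal shortest description) shows the random string needing more bytes than the orderly one:

```python
import random
import zlib

def description_length(s: bytes) -> int:
    """Upper bound on Kolmogorov complexity: compressed size in bytes."""
    return len(zlib.compress(s, 9))

random.seed(0)
scattered = bytes(random.randrange(256) for _ in range(1000))  # "dropped toothpicks"
orderly = b"PICK" * 250                                        # "spelled-out word"

# Random bytes resist compression; the repeated word does not.
assert description_length(scattered) > description_length(orderly)
```

Under this measure, the random string has the longer description, so Kolmogorov complexity counts it as containing more information.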
This leads us to a third concept, algorithmic specified complexity (ASC). ASC solves the problem by combining the two measures. ASC states that an event has a high amount of information if it has both low probability and a concise description. This matches our intuition much better.
For example, if we had a keyboard that consisted only of the letter A, its output would be very orderly (a long line of As), but it would not communicate anything. On the other hand, if we had a keyboard with all the letters of the alphabet but we communicated by having a monkey bang on it, there would be great variety, but the output would be meaningless. The key to communication is a wide variety of message possibilities (low probability) along with the ability to select just the messages that are orderly (concise description).
To return to our toothpick example, there is a great variety of ways the toothpicks could land; nothing constrains them to fall in such a way as to form letters. On the other hand, the formation of toothpicks that spells a word can be described much more concisely than the formation of the dropped toothpicks. Thus, ASC allows us to measure mathematically our intuition that randomness and order are intrinsically different, and that order conveys information while randomness does not.
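Putting the two measures together, one rough way to sketch the ASC idea in Python is surprisal minus description length, with zlib again standing in for the shortest description and a uniform probability model assumed purely for illustration:

```python
import random
import zlib

def asc_bits(x: bytes, surprisal_bits: float) -> float:
    """Rough ASC estimate: improbability (bits) minus description length (bits).
    Compression only upper-bounds the true shortest description, so this
    is a conservative estimate."""
    return surprisal_bits - 8 * len(zlib.compress(x, 9))

# Under a uniform model over 1,000-byte strings, every string has
# probability 256**-1000, i.e. a surprisal of 8,000 bits -- the random
# scatter and the spelled-out word are equally improbable.
surprisal = 8 * 1000

random.seed(0)
scattered = bytes(random.randrange(256) for _ in range(1000))
orderly = b"PICK" * 250

# Only the orderly string also has a concise description,
# so only it scores high ASC.
assert asc_bits(orderly, surprisal) > asc_bits(scattered, surprisal)
```

Both inputs carry the same Shannon surprisal, so the difference in score comes entirely from the description length, which is exactly the combination the paragraph above describes.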
Eric Holloway has a Ph.D. in Electrical & Computer Engineering from Baylor University. He is currently a Captain in the United States Air Force, where he has served in the US and Afghanistan. He is the co-editor of the book Naturalism and Its Alternatives in Scientific Methodologies. Dr. Holloway is an Associate Fellow of the Walter Bradley Center for Natural and Artificial Intelligence.
Also by Eric Holloway: Human intelligence as a halting oracle
Does information theory support design in nature?