
How the Explanatory Filter Can Help Quash Conspiracy Theories

I found Dembski’s explanatory filter quite helpful in investigating voter fraud claims

William Dembski’s explanatory filter is a decision strategy for identifying events that are unlikely to have happened purely by chance. The filter proceeds in three main steps (sketched in code after the list), which can be illustrated via the plot device in Contact, a novel (1985) by Carl Sagan, followed by a film (1997):

  1. Eliminate events of large probability (necessity): A radio telescope receives a pattern of beeps and pauses. Perhaps the pattern seems strange to us but we could just be overinterpreting inevitable space noise.
  2. Eliminate events of medium probability (chance): The pattern turns out to be a sequence of prime numbers. However, large randomly generated numbers sometimes feature apparent patterns (five 5s in a row, for example) that don’t signify anything.
  3. Specify the event of small probability with an independent knowledge source. The pattern of prime numbers ran to over 1000 symbols (far too unlikely to be chance), which pointed to a message: “Sure enough the message alerted the SETI researchers to a subsequent message containing the blueprints for building an extraordinary machine that ultimately transported Ms Foster into another world.” – “Explanatory filter,” Conservapedia
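
To make the decision logic concrete, here is a minimal sketch in Python. The 10^-150 threshold is Dembski’s universal probability bound; the inputs (whether a law explains the event, the probability under a chance hypothesis, and whether an independent specification exists) are assumed to be supplied by the investigator, since the filter itself does not compute them.

```python
# A minimal sketch of the explanatory filter's decision logic.
# The threshold is Dembski's universal probability bound (10^-150);
# the three inputs are assumptions the investigator must supply.
UNIVERSAL_PROBABILITY_BOUND = 1e-150

def explanatory_filter(explained_by_law, chance_probability, has_independent_specification):
    """Classify an event as 'necessity', 'chance', or 'design'."""
    # Step 1: eliminate high-probability events (necessity).
    if explained_by_law:
        return "necessity"
    # Step 2: eliminate medium-probability events (chance).
    if chance_probability > UNIVERSAL_PROBABILITY_BOUND:
        return "chance"
    # Step 3: a small-probability event counts as design only if it
    # matches a pattern specified independently of the data itself.
    if has_independent_specification:
        return "design"
    return "chance"  # unspecified low-probability events default to chance

# The Contact example: no law generates ~1000 symbols of primes, the
# chance probability is roughly 2^-1000, and "prime numbers" is a
# pattern we can state without looking at the signal.
print(explanatory_filter(False, 2.0 ** -1000, True))  # -> "design"
```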

The filter is often used in debates between atheists and theists over the existence of God. But one of its everyday practical uses is in evaluating conspiracy theories.

One skeptical approach would be to just dismiss all conspiracy theories, as an atheist might dismiss all theories of God’s actions, however well grounded. The trouble is, there really are conspiracies. Political and ideological subterfuge occurs quite frequently. Whittaker Chambers (1901–1961), about whom I wrote yesterday, was a well-placed journalist who had also been a Communist spy. Early in the Cold War, he came clean about his activity in a memoir that also outed an extensive and influential network of highly placed spies in the US government, one of whom was Alger Hiss. Hiss was no minor clerk himself; as secretary-general of the founding United Nations Conference, he was instrumental in drafting the UN Charter.

Similar examples can be found today. So adopting a completely skeptical position would give a blank check to people like Chambers and Hiss to hijack a country’s policies for their own agenda. But how can we avoid being too credulous at the same time?

We can understand the tradeoff between being too incredulous and too credulous using the concepts of Type I and Type II errors. A Type I error is a false positive (treating a harmless snake as venomous) and a Type II error is a false negative (treating a venomous snake as harmless).

A false positive occurs when we accept a false proposition, and a false negative occurs when we reject a true one. A complete skeptic is immune from Type I errors, false positives, while a “true believer” is immune from Type II errors, false negatives.

The optimal position is resistance to both kinds of error, concluding that some snakes, as reported, are venomous while others are not. To accept or reject all theories, whether of divine action, conspiracy, or snakes, at the outset is to commit a Type I or Type II error of thinking.

The explanatory filter is a cornerstone of intelligent design theory. It allows a better tradeoff between the two types of error than either extreme. The filter, like the skeptical position, emphasizes avoiding false positives. However, unlike the skeptical position, which takes the lazy route of avoiding false positives by rejecting all positives, the explanatory filter accepts the possibility of true positives. It is a probabilistic method and cannot guarantee true positives, except perhaps in the limit. But it does allow us to quantify the probability of a true positive. In a scenario with an equal balance of true and false positives, it will let through more true ones than false ones.


Now let’s circle back to conspiracy theories. Conspiracy theorists attempt to explain as conscious design a perceived pattern that could be a natural event. For example, they might come to believe that a curious shift in wind current patterns or a multi-year drought was caused by a hostile foreign power interfering with the climate or the weather.

True believers tend to jump straight to conscious design. I’ve observed this tendency in myself: I look at a randomly patterned carpet long enough and start seeing faces. The silhouette of a face triggers an instinctive response in our minds but the geometry of the silhouette can be roughly met by randomly assembled shapes. Just because I see a lot of faces in the carpet does not mean the manufacturer had any intent to make the carpet design look like faces. And indeed, identifying a specific pattern is part of the explanatory filter’s inference to design.

Yet, this is where the conspiracy theorist errs. Identifying a specific pattern is not the whole of the explanatory filter. That comes, if it comes at all, only at the end. It is the prior two steps of the filter that the conspiracy theorist misses.

The two prior steps of the filter require that the theorist first eliminate necessity and chance as explanations. Necessary causes, such as laws of nature, come first: laws of nature can create very specific patterns, such as highly regular crystal lattices, or the ferromagnetism that arises when all the molecular magnetic dipoles align. If we immediately jump to design whenever we are confronted with a specific pattern, natural laws alone will hand us a huge number of false positives.

One example of this error that I ran across while investigating voter fraud claims is the observation that votes for Joe Biden arrived in blocks of almost exactly 4,800. Initially, I thought this was highly suspicious, evidence of some sort of tampering with the vote. But then I found out that the dataset was derived from percentages carried to only three decimal places of precision, which, with mathematical necessity, produced the 4,800-vote blocks. Since I could account for the regular pattern by an appeal to mathematical necessity, I realized the 4,800-vote blocks could not count as evidence of tampering with the election.
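
To see the necessity step at work, here is a small sketch of that quantization effect. The 4.8-million-vote total is an illustrative assumption, chosen so that one step in the third decimal place (0.001 of the total) is exactly 4,800 votes; the actual dataset’s total will differ, but the mechanism is the same.

```python
# A sketch of how three-decimal-place fractions force block-sized jumps.
# TOTAL_VOTES is an illustrative assumption: 0.001 of it is 4,800.
TOTAL_VOTES = 4_800_000

def count_from_rounded_fraction(true_votes):
    """Reconstruct a vote count from a fraction rounded to 3 decimals."""
    fraction = round(true_votes / TOTAL_VOTES, 3)  # e.g. 0.521
    return round(fraction * TOTAL_VOTES)

# Feed in a smoothly growing "true" tally and watch the reconstructed
# counts move in blocks of 4,800 -- mathematical necessity, not tampering.
previous = None
for true_votes in range(2_500_000, 2_526_000, 5_000):
    reconstructed = count_from_rounded_fraction(true_votes)
    if previous is not None and reconstructed != previous:
        print("count jumped by", reconstructed - previous)  # multiple of 4,800
    previous = reconstructed
```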

Eliminating chance is also very important. Regular patterns can appear in random data too. If you flip a coin enough times, you are bound eventually to write out the complete works of Shakespeare (although it would take a very long time). The way to address this problem is to ask how likely the chance explanation is. If the probability of generating the works of Shakespeare by coin flips is far too small for the lifetime of our universe, we can rule out chance too.
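
As a rough illustration of the arithmetic, the sketch below compares the coin-flip hypothesis against Dembski’s universal probability bound of 10^-150. The figures of five million characters and five bits per character are ballpark assumptions, not exact counts.

```python
import math

# Back-of-the-envelope chance elimination. Assumptions: roughly
# 5 million characters in Shakespeare's complete works, and about
# 5 coin flips (bits) to encode one character.
CHARS = 5_000_000
BITS_PER_CHAR = 5
flips = CHARS * BITS_PER_CHAR

# Work in log10 space; the raw probability underflows any float.
log10_probability = -flips * math.log10(2)  # log10 of 2^-flips
log10_universal_bound = -150                # Dembski's 10^-150

print(f"log10 P(chance) = {log10_probability:,.0f}")
print("chance ruled out:", log10_probability < log10_universal_bound)
```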

However, therein lies the rub, as Shakespeare himself would say. Care must be taken to identify which chance hypothesis one is eliminating, because eliminating one chance hypothesis does not eliminate all of them. I discovered this issue when investigating another supposed piece of evidence of voter fraud, one that turned on Benford’s Law. Benford’s Law states that, when we look at the first digit in a large body of numerical data, the number 1 occurs proportionately more often than the number 2, the number 2 proportionately more often than the number 3, and so on. This logarithmic distribution is used by accountants to detect financial fraud.

Researchers discovered that Biden’s votes did not follow Benford’s Law in certain voting districts, while Trump’s and Jorgensen’s votes did follow it. Aha, evidence of electoral fraud via a well-established fraud detection technique! However, it turns out that Benford’s Law only works when the underlying data have the right base distribution, typically one spanning several orders of magnitude. And since Biden won in those districts, his precinct vote counts clustered in a narrow range and were bound to fail the criteria for Benford’s Law.

The same violation of Benford’s Law occurred with Trump’s votes when he won in 2016. So, because the relevant chance hypothesis was not correctly eliminated in this case, the violation of Benford’s Law did not license an inference to design.
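
The dependence on the base distribution is easy to demonstrate. In the sketch below, synthetic data spanning several orders of magnitude tracks Benford’s Law closely, while data clustered in a narrow band, as a winning candidate’s precinct totals tend to be, deviates badly. The distributions and parameters are illustrative assumptions, not election data.

```python
import math
import random

random.seed(0)

def first_digit_freqs(values):
    """Fraction of values whose leading digit is 1..9."""
    counts = [0] * 9
    for v in values:
        counts[int(str(int(v))[0]) - 1] += 1
    return [c / len(values) for c in counts]

# Benford's expected first-digit frequencies: log10(1 + 1/d).
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]

# Data spanning orders of magnitude vs. data clustered in a narrow band.
spread_out = [v for v in (random.lognormvariate(5, 2) for _ in range(100_000)) if v >= 1]
clustered = [v for v in (random.gauss(600, 80) for _ in range(100_000)) if v >= 1]

for name, data in (("spread out", spread_out), ("clustered", clustered)):
    deviation = sum(abs(f - b) for f, b in zip(first_digit_freqs(data), benford))
    print(f"{name:>10}: total deviation from Benford = {deviation:.3f}")
```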

So we have done our due diligence and eliminated necessity and chance as explanations for what may be evidence of a major conspiracy. But we still are not done! There are still traps we can fall into if we don’t follow the explanatory filter carefully. I have discovered yet one more such trap in my investigations of voter fraud claims.

This next piece of evidence was much more sophisticated than the previous two. A researcher found that, using census and registration data, he could precisely predict voter turnout across an entire state. He considered this conclusive proof that an algorithm was controlling the election across the entire state, and quite possibly the entire nation. In this case, we seem to have satisfied the first two steps of the explanatory filter. Voters don’t collaborate across an entire state, so we cannot appeal to necessity.

Voters are not very homogenous across an entire state, so we cannot appeal to chance to explain a very predictable pattern. This must mean we’ve finally reached a valid design inference with our specified pattern! Eureka!

Unfortunately, this analysis misses one final key component of the third step in the explanatory filter. The third step is indeed to identify a specific pattern. High predictability is a specific pattern: high predictability implies high regularity, and high regularity implies a very specific description of the data. Yet high specificity is not enough. The pattern must be formulated independently of the data, a concept called “detachability.” In fact, it is trivially easy to generate a bad inference to design if we ignore this final criterion. The way to cheat is to just say, “the data is the pattern.”

This is not quite what I discovered while investigating the voter turnout prediction, but it is related. I found that, using the researcher’s techniques, I could also predict the 2012 and 2016 election turnouts with very high accuracy. So the property of being highly predictable is not “detachable” from the phenomenon under analysis: it is a generic feature of turnout, exhibited by the two prior, uncontested elections as well, not an independently specified signature of fraud in 2020.
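
Here is a toy version of that detachability check. The county figures are synthetic and purely illustrative: turnout is generated as a stable fraction of registered voters plus noise, which is all the predictability claim actually requires. The same simple predictor then scores a high R² on every election, not just the suspect one.

```python
import random

random.seed(1)
# Synthetic registered-voter counts for 200 hypothetical counties.
counties = [random.randint(5_000, 500_000) for _ in range(200)]

def simulate_turnout(registered, base_rate):
    """Turnout as a stable fraction of registration, plus noise."""
    return [int(r * (base_rate + random.gauss(0, 0.02))) for r in registered]

elections = {
    "2012": simulate_turnout(counties, 0.58),
    "2016": simulate_turnout(counties, 0.60),
    "2020": simulate_turnout(counties, 0.66),
}

def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# The same technique, applied to each election: predict county turnout
# as statewide rate times registration.
for year, turnout in elections.items():
    rate = sum(turnout) / sum(counties)
    predicted = [rate * r for r in counties]
    print(year, "R^2 =", round(r_squared(turnout, predicted), 3))
# High R^2 for 2012 and 2016 too: the pattern is not unique to 2020.
```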

Consequently, the predictable voter turnouts did not qualify as evidence of design according to the explanatory filter.

So, let’s circle back to the beginning. We began with the concern that using intelligent design techniques like the explanatory filter would leave us too credulous, accepting too many false positives. Instead, we have discovered that careful application of all the steps of the explanatory filter eliminated significant false positives, claims that had incorrectly convinced a great many people that they held conclusive evidence of design in the form of election interference.

Thus, rather than being an enabler of false inferences to design, intelligent design turns out to be a very useful tool for debunking false inferences to design, and is of great benefit to skeptics everywhere.


You may also wish to read: Is GMO detection an application of Dembski’s explanatory filter? If so, it would be an instance of the use of the filter in biology


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
