
Fine-tuning? How Bayesian Statistics Could Help Break a Deadlock

Bayesian statistics are used, for example, in spam filter technology, identifying probable spam by examining vast masses of previous messages

In the earlier part of podcast episode 150, “Ours is a finely tuned — and No Free Lunch — universe,” Swedish mathematician Ola Hössjer and University of Miami biostatistician Daniel Andrés Díaz-Pachón discussed with Walter Bradley Center director Robert J. Marks the many ways in which the universe is finely tuned for life. Many theorists are not happy with the idea of fine-tuning because they are uncomfortable with its theistic implications. In this second portion of the episode, they discuss how a method of estimating probability, Bayesian statistics, based on Bayes’ theorem, could help break a deadlock around fine-tuning:

This portion begins at 13:00 min. A partial transcript, Show Notes, and Additional Resources follow.

Robert J. Marks: Bayes’ theorem is criticized for its a posteriori or “after-the-fact” probability. Let me illustrate: Suppose you were to apply Bayesian analysis (after-the-fact analysis) to the probability that the three of us would be here — one in Sweden, one in Colombia, and one in Texas — and that we’ve been doing a podcast. I don’t know if it makes a lot of sense to talk about the probability that we would be here doing this. So what am I missing here?

Note: Thomas Bayes (1702–1761) was a British mathematician and clergyman who developed a new theorem for calculating probabilities.

P(A|B) = P(B|A) × P(A) / P(B)

“Bayes’ Rule represents the probability of an event based on the prior knowledge of the conditions that might be related to that event… If we already know the conditional probability, we use Bayes’ Theorem to find the reverse probabilities. All this means is that we are going to use a Tree Diagram in reverse.” – CalcWorkshop. At CalcWorkshop, an illustration from a mystery story in the Harry Potter series is offered along with the equation.

Bayes’ Rule or theorem is often contrasted with the frequentist approach: “The frequentist view defines the probability of an event as the proportion of times that the event occurs in a sequence of possibly hypothetical trials.” – ScienceDirect

At Stats Stack Exchange, one contributor offers an illustration of the difference between the two approaches in terms of Grandma trying to find her misplaced phone.
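For readers who want to see the “reverse probability” idea in numbers, here is a minimal sketch in Python. The probabilities are invented purely for illustration; they do not come from the podcast or from CalcWorkshop’s example.

```python
# A minimal numeric illustration of Bayes' rule, P(A|B) = P(B|A) * P(A) / P(B).
# All probabilities below are made-up placeholders.

p_a = 0.01              # prior probability of event A (e.g., "this email is spam")
p_b_given_a = 0.90      # probability of observing evidence B if A is true
p_b_given_not_a = 0.05  # probability of observing B if A is false

# Total probability of the evidence B (law of total probability)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# "Reverse" (posterior) probability of A given that B was observed
p_a_given_b = p_b_given_a * p_a / p_b
print(f"P(A|B) = {p_a_given_b:.3f}")  # about 0.154 with these numbers
```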

Daniel Díaz: There has been a long, long debate between two different approaches to statistics [Bayesian and frequentist]. I can see many cases in which one of the approaches is better than the other. So it depends on the problem that you are working on.

Robert J. Marks: As an engineer, I’m always interested in reduction to practice. And one of the things that Bayesian statistics is used for is spam filtering. You gather a lot of emails and figure out the probability that if a “Nigerian prince” is mentioned, it is spam. A lot of people fell for it. … But you can look at past data. You can look at all of the labeled emails and figure out how many of those that mentioned a “Nigerian prince” were actually spam.

The spam filter is much more complicated than this but that’s a simple illustration. And this is the reason we use Bayesian statistics … Your Bayesian statistics can say, what is the probability that this email is spam? As an engineer, I always say that reduction to practice is the proof of the validity of a theory. So I think that the criticism of Bayesian statistics that I mentioned was totally inaccurate. It’s not a good argument.
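As a rough sketch of the calculation Marks is describing, the following Python snippet uses invented counts of labeled emails. A real spam filter combines many such features, but the single-phrase case shows the Bayesian step:

```python
# Toy version of the spam-filter idea: estimate from labeled emails how often a
# phrase such as "Nigerian prince" appears in spam vs. non-spam, then use Bayes'
# theorem to score a new message. The counts below are invented.

n_spam, n_ham = 400, 600                 # labeled training emails
phrase_in_spam, phrase_in_ham = 120, 3   # emails containing the phrase

p_spam = n_spam / (n_spam + n_ham)               # prior P(spam)
p_phrase_given_spam = phrase_in_spam / n_spam    # P(phrase | spam)
p_phrase_given_ham = phrase_in_ham / n_ham       # P(phrase | not spam)

def p_spam_given_phrase():
    """Posterior P(spam | phrase appears), by Bayes' theorem."""
    numerator = p_phrase_given_spam * p_spam
    evidence = numerator + p_phrase_given_ham * (1 - p_spam)
    return numerator / evidence

print(f"P(spam | 'Nigerian prince') = {p_spam_given_phrase():.3f}")  # about 0.976
```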


Daniel Díaz: It is also useful in some areas of medicine. I work in biostatistics, so I know that in many, many analyses of treatments, Bayesian results are very useful, more useful in some cases than the frequentist approach. But as I say, there are other problems in which the frequentist approach works better. The important thing for our conversation is that the right way to approach our problem is to use Bayesian theory and Bayes’ theorem in order to determine the distribution of maximum entropy to use. So for fine-tuning, in order to avoid certain problems in the past, the right approach is to consider that the prior distribution is given in terms of maximum entropy. And that is also done with the help of Bayes’ theorem.

Ola Hössjer: Yes. I totally agree with that, because we use Bayesian statistics and put a prior distribution on the possible values of a certain constant of nature. Because of the Bayesian approach, not the frequentist approach, we are, in fact, able to talk about the probability of this interval. With a frequentist approach that would not have been possible. We could only talk about how consistent each possible constant of nature is with data, and so on.

Robert J. Marks: Excellent. So the frequentist approach talks about probabilities of events that haven’t happened yet, whereas the Bayesian approach talks about the probability of events that have already occurred. Is that fair?

Ola Hössjer: Yes. Because in Bayesian statistics, you have probabilities for the past, that is, your prior: what you believe about a certain parameter, for instance a constant of nature. And then what is going to happen is data that you might not have collected yet. In Bayesian statistics you also have a distribution on that, whereas in frequentist statistics you only impose a probability distribution on data for the future, not on the past.


For the past, you regard each possible value of this parameter, that is, each possible value of this constant of nature, as a fixed constant. You don’t put a distribution on it.
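As a rough illustration of the move Hössjer describes, the sketch below puts a maximum-entropy (uniform) prior on an assumed bounded range of possible values for a constant of nature and computes the probability that the constant falls in an assumed life-permitting interval. All numbers are placeholders, not values discussed in the podcast.

```python
# Place a prior distribution on a constant of nature and ask for the probability
# that it lands in the life-permitting interval. Assumes a maximum-entropy
# (uniform) prior on a bounded range of possible values; numbers are illustrative.

possible_range = (0.0, 1.0)        # assumed range of physically possible values
life_permitting = (0.495, 0.505)   # assumed life-permitting interval

def prob_of_interval(interval, prior_range):
    """P(constant falls in the interval) under a uniform (max-entropy) prior."""
    lo, hi = interval
    a, b = prior_range
    overlap = max(0.0, min(hi, b) - max(lo, a))
    return overlap / (b - a)

p = prob_of_interval(life_permitting, possible_range)
print(f"P(life-permitting) = {p:.3f}")  # 0.010 with these made-up numbers
```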

Probabilities to work on: Specified complexity and irreducible complexity

Robert J. Marks: Daniel, what is specified complexity and how does it measure fine tuning?

Daniel Díaz: We can think of the life-permitting interval as an interval that is satisfying or fulfilling a very special function: outside it, life as we know it could not have existed in our universe. We could say that the life-permitting interval is specified. Now, in specified complexity, there are basically two components: the specification and the complexity.

The specification is given by the function that the interval is fulfilling. As I said, if the constant is inside the interval, it is going to allow for a universe to have life. But if the constant is outside the interval, then no life could exist in the universe at least as we know it. We are thinking of carbon-based life here.

Complexity is simpler to understand. You can think of complexity as something that is improbable. So complexity is inversely proportional to probability: the more probable an event is, the less complex it is; and the less probable an event is, the more complex it is. So when we are thinking in terms of fine-tuning, the life-permitting interval is specified. What we need to measure now is its complexity; we need to know its probability. That is how specified complexity makes its way into the context of fine-tuning.
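One common way to express this inverse relation, used here only as an illustration, is to measure complexity in bits as the negative base-2 logarithm of the probability, so that less probable events score as more complex:

```python
# Complexity as the inverse of probability, expressed in bits as -log2(p).
# The probabilities below are placeholders, not values from the podcast.

import math

def complexity_bits(p):
    """Number of bits of complexity for an event of probability p."""
    return -math.log2(p)

for p in (0.5, 0.01, 1e-6):
    print(f"p = {p:g}  ->  complexity = {complexity_bits(p):.1f} bits")
```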

Robert J. Marks: The last topic that we want to talk about is irreducible complexity. How does that measure fine tuning? Ola, could you talk about that?

Ola Hössjer: Irreducible complexity is a special case, I would say, of specified complexity. As Daniel said, if something is complex, it has a small probability of occurring by chance. And there is an independent specification. For instance, the universe admits life or, in biology, a molecular machine that works and so on. Irreducible complexity is a special case when this complexity is like a machine that has many parts. In order for this machine to be specified, in order for it to work, all the parts have to work.

So if you remove one single part, the machine ceases to function. That typically makes the machine more complex because, as Daniel was saying, complexity has an inverse relation to probability. The more parts that are all needed, the less likely it is that this machine evolved by chance. So the summary is that irreducible complexity is a special case of specified complexity where the structure consists of many parts that are all needed.
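As a toy illustration of that point, assume (purely for simplicity) that each part arises independently by chance with the same probability p. Then the probability of the whole machine falls off as p^n with the number of required parts n:

```python
# Illustration of the claim that requiring more parts lowers the chance
# probability of the whole machine. Assumes, for illustration only, that each
# part arises independently with the same probability p.

def machine_probability(p_part, n_parts):
    """Probability that all n independent parts are present: p^n."""
    return p_part ** n_parts

for n in (1, 5, 20):
    print(f"{n:2d} parts -> probability {machine_probability(0.1, n):.1e}")
```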

Robert J. Marks: Excellent. Well, we’ve been going over the methods by which fine-tuning can be measured. It’s heuristically obvious, but it’s a lot better if we can put numbers to it. And let me just summarize: we’ve gone through active information, small intervals, and the probability measure of fine-tuning. We’ve gone through specified complexity. And then Ola has just talked about irreducible complexity … Next time, we’ll talk more specifically about monkeying, as Hoyle said, monkeying with the universe and in biology, with some fine-tuning that allows life.

The next episode: Life is so wonderfully finely tuned that it’s frightening!


Here are all of the instalments, in order, of the discussion between Robert J. Marks, Ola Hössjer, and Daniel Díaz on the fine tuning of the universe for life:

The first episode:

Ours is a finely tuned — and No Free Lunch — universe. Mathematician Ola Hössjer and biostatistician Daniel Díaz explain to Walter Bradley Center director Robert J. Marks why nature works so seamlessly. A “life-permitting interval” makes it all possible — but is that really an accident?

and

Fine-tuning? How Bayesian statistics could help break a deadlock Bayesian statistics are used, for example, in spam filter technology, identifying probable spam by examining vast masses of previous messages. The frequentist approach assesses the probability of future events but the Bayesian approach assesses the probability of events that have already occurred.

The second episode:

Life is so wonderfully finely tuned that it’s frightening! A mathematician who uses statistical methods to model the fine tuning of molecular machines and systems in cells reflects… Every single cell is like a city that cannot function without a complex network of services that must all work together to maintain life.

Can there be a general theory for fine-tuning? If you make a bowl of alphabet soup and the letters arrange themselves and say, good morning, that is specified. What are the probabilities? Ola Hössjer sees the beauty of mathematics in the fact that seemingly unrelated features in cosmology and biology can be modeled using similar concepts.

The third episode:

Was the universe created for life forms to live in? How would we know? We can begin by looking at the fundamental constants that underlie the universe. The constants of the universe — gravitational constant, entropy, and cosmological constant — must be finely tuned for life to exist.

Why did Stephen Hawking give up on a Theory of Everything? Daniel Díaz and Ola Hössjer continue their discussion of the fine tuning of the universal constants of nature with Robert J. Marks. The probability, they calculate, that the fine tuning of our universe is simply random is down to 10 to the minus sixty — a very small number.

The fourth and final episode:

Is life from outer space a viable science hypothesis? Currently, panspermia has been rated as “plausible but not convincing.” Marks, Hössjer, and Díaz discuss the issues. Famous atheist scientists have favored panspermia because there is no plausible purely natural explanation for life on Earth that would make it unnecessary.

Could advanced aliens have fine-tuned Earth for life? That’s a surprisingly popular thesis, considering how hard it is to account for life without assuming a creator. As Robert Marks, Ola Hössjer, and Daniel Díaz discuss, some prominent atheists/agnostics have chosen to substitute advanced extraterrestrials for God.

Our universe survived a firing squad and it’s just an accident? According to the Weak Anthropic Principle, if things weren’t the way they are, we wouldn’t be here and that’s all there is to it. Given the odds, a philosopher likens the Weak Anthropic Principle to surviving a firing squad and concluding, incuriously, well… that’s just the way things are.

In an infinity of universes, countless ones are run by cats… Daniel Díaz notes that most of the talk about the multiverse started to appear once it was realized that there was fine-tuning in nature.
Robert J. Marks points out that even 10 to the 1000th power of universes would only permit 3,322 different paths. Infinity is required but unprovable.

and

If extraterrestrials didn’t fine tune Earth, maybe there is a God. In the face of a grab bag of ideas like creation by ETs or countless universes (some run by cats), why does the idea of a Creator seem far out? Traditional philosophers, not committed to a religion, have thought that deism (and theism) are rational, science-based conclusions, based on fine tuning.

You may also wish to read: No Free Lunches: Robert J. Marks: What the Big Bang teaches us about nothing. Bernoulli is right and Keynes is wrong. Critics of Bernoulli don’t appreciate the definition of “knowing nothing.” The concept of “knowing nothing” can be tricky.

Show Notes

  • 00:11 | A Little Fine Tuning
  • 01:36 | Introducing Ola Hössjer and Daniel Díaz
  • 03:19 | No Free Lunch Theorems
  • 05:44 | Formula 409
  • 08:39 | Active Information
  • 09:41 | Intervals
  • 13:53 | Maximum Entropy
  • 21:03 | Intervals of Infinite Length
  • 24:26 | Reduction to Practice
  • 29:31 | Specified Complexity
  • 32:01 | Irreducible Complexity

Additional Resources

Podcast Transcript Download

