Tuesday, August 6, 2013

"Proof is for mathematical theorems and alcoholic beverages. It's not for science."

So says Michael Mann, climate scientist. Phil Plait points to an editorial from Rich Trzupek, who argues that Mann is trying to redefine science:
That comes as something of a shock to me. When I was going to school to earn my degree in chemistry, we were taught that science was indeed all about absolute truths and proofs at the end of the day. “Credible theories” is how you got to those truths, not an alternative to them.
Lots of people are chiding Trzupek for his ignorance of science. I'll pile on by proving, mathematically, that you can't have proof in science.

First, a note: like the word "theory," "proof" has more than one definition. Some people simply mean "that which can be shown to be true." In that sense, science certainly does have proof. In the sense Mann, Plait, and others are discussing, "proof" means instead "that which can be shown to be true with 100% certainty." In math and classical propositional logic, a proof begins from statements taken to be 100% certain, and its conclusions are shown to be logically implied by those statements, and are therefore themselves 100% certain.

Mann, Plait, and the rest are saying that we don't have hypotheses of 100% certainty in the sciences. The type of evidence we deal with when generalizing from observation is always statistical. You can't witness every single instance of an apple falling from a tree in the whole history of apples falling from trees; you can only witness a sample of apples falling from trees and generalize from that.

Let's explore this mathematically. The scientifically correct way to update your beliefs on evidence is with Bayes' theorem:

P(H|E) = (P(E|H)*P(H))/P(E)

Where H is the hypothesis in question and E is some observed evidence.
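To make the update concrete, here's a toy calculation in Python. The specific numbers (prior, likelihood, probability of the evidence) are invented purely for illustration:

# Toy Bayes' theorem update; every number here is made up for illustration.
# H: "the next loose apple will fall from the tree"
# E: "we just watched an apple fall"
p_h = 0.5          # prior, P(H)
p_e_given_h = 0.9  # likelihood, P(E|H)
p_e = 0.6          # probability of the evidence, P(E)

p_h_given_e = p_e_given_h * p_h / p_e  # posterior, P(H|E)
print(round(p_h_given_e, 2))  # 0.75

One observed fall takes us from 50% to 75% certain: more confident, but not proof.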

Probabilities are, of course, numbers between 0 and 1. But they can be represented in other ways. Any transformation that preserves the ordering of the numbers is isomorphic to probabilities written between 0 and 1. So, for instance, transforming probabilities into odds ratios via

P/(1 - P)

will give you the same information and will work in all the same equations. Likewise transforming into log odds via

log(P/(1-P))

will give you the same information in yet another form. Odds ratios are usually presented as M:N, for example 1:1 (meaning 50/50, meaning .5/(1 - .5)). Log odds are usually presented as decibels.

It's sometimes easier to do Bayes' theorem calculations using these. In our case, we want to solve for how much more evidence we would need to take a hypothesis from 99.99% certain to 100% certain. To be very precise, we can measure this in decibels.

How should you think about a "decibel" of evidence? A comparison of odds ratios works well. Say we want to go from 50% certainty to 60% certainty. In odds ratios, this is going from 1:1 to 1.5:1. 80% certainty is 4:1. This should all be intuitive: we're going from equally likely, 1/2, to slightly more likely, 1.5/2.5, to very likely, 4/5.

Let's now translate each of those into decibels. You can use any log base greater than 1, but the standard for explaining Bayesian reasoning is to mimic acoustics, using log base 10 and multiplying the answer by 10. 50% is (log(.5) - log(1 - .5))*10 = 0. 60% is (log(.6) - log(1 - .6))*10 = 1.76. 80% is 6.02. As you can see, closing the gap as we go up requires more decibels of evidence.
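Continuing the sketch above and reusing log_odds_db, those numbers check out:

for p in (0.5, 0.6, 0.8):
    print(p, round(log_odds_db(p), 2))
# 0.5 0.0
# 0.6 1.76
# 0.8 6.02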

99.99% in odds ratios is 9999:1. Adding another decimal place of certainty, 99.999%, is 99999:1. That tiniest bit of extra certainty makes your odds more than 10 times higher. Transforming to decibels, 99.99% certainty corresponds to about 40 decibels of evidence and 99.999% to about 50 decibels, so becoming 10 times more certain takes 10 more decibels of evidence. As you can probably guess, since you can't transform 1 into an odds ratio or log odds without a divide-by-zero error, getting to 100% certainty requires you to be infinitely more certain than any smaller number, and that takes infinitely many decibels of evidence.
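Pushing the same sketch toward certainty shows the blow-up:

for p in (0.9999, 0.99999):
    print(p, round(log_odds_db(p), 2))
# 0.9999 40.0
# 0.99999 50.0

log_odds_db(1.0)  # raises ZeroDivisionError: 100% has no finite odds ratio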

So when Michael Mann says that science doesn't deal in proof, by which he means that science can't get you 100% certainty, he means it--it's impossible. We can never observe infinitely many decibels of evidence for any proposition, because our lives, and the observable universe, are finite, alas. You can find this proposition, if not the mathematical demonstration, in any modern philosophy of science textbook; it's not the least bit controversial. It's a shame Trzupek made it through a university program that taught him otherwise, because what he claims to have learned is ridiculously false.
