Confidence levels inside and outside an argument

by Judith Curry

[G]iving a very high level of confidence requires a check that you’re not confusing the probability inside one argument with the probability of the question as a whole. – LessWrong

I spotted this on my Twitter feed: a post entitled Confidence levels inside and outside an argument, on a blog called LessWrong (I love it).  Excerpts:

Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?

Mine would be significantly less than 999,999,999 in a billion.

When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in “But that still leaves a one in a billion chance, right?”. The majority of the probability is in “That argument is flawed”. Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.

More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, they will accidentally switch which candidate goes with which chance of winning.

So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model’s internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
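
To make the internal/external distinction concrete, here is a minimal sketch (my own illustration, not from the LessWrong post). The 1-in-1000 flaw rate and the 50% fallback probability are assumed values, chosen only to show how quickly external confidence is capped by trust in the argument itself:

```python
# Illustrative sketch: external confidence combines the model's internal
# probability with the chance that the argument itself is flawed.

def external_confidence(p_internal, p_flawed, p_if_flawed=0.5):
    """Probability of the event as seen from outside the argument.

    p_internal  -- probability the model assigns to the event
    p_flawed    -- your prior that the model/argument is flawed (assumed value)
    p_if_flawed -- probability of the event if the argument is flawed
                   (an ignorance prior of 0.5, also an assumption)
    """
    return (1 - p_flawed) * p_internal + p_flawed * p_if_flawed

# The FiveThirtyEight-style model's internal confidence:
p_internal = 999_999_999 / 1_000_000_000

# With an assumed 1-in-1000 chance that the model, the code, or the published
# numbers are wrong in some invalidating way, the external confidence
# collapses to roughly 0.9995:
print(external_confidence(p_internal, p_flawed=1e-3))  # ~0.99950

# Only if the argument itself is more reliable than about one-in-a-billion
# does the external confidence approach the internal figure:
print(external_confidence(p_internal, p_flawed=1e-9))  # ~0.9999999985
```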

Absolute authority

Another relevant post at LessWrong, entitled Absolute Authority.  Excerpts:

The one comes to you and loftily says:  “Science doesn’t really know anything.  All you have are theories—you can’t know for certain that you’re right.  You scientists changed your minds about how gravity works—who’s to say that tomorrow you won’t change your minds about evolution?”

Behold the abyssal cultural gap.  If you think you can cross it in a few sentences, you are bound to be sorely disappointed.

In the world of the unenlightened ones, there is authority and un-authority.  What can be trusted, can be trusted; what cannot be trusted, you may as well throw away.  There are good sources of information and bad sources of information.  If scientists have changed their stories ever in their history, then science cannot be a true Authority, and can never again be trusted—like a witness caught in a contradiction, or like an employee found stealing from the till.

Plus, the one takes for granted that a proponent of an idea is expected to defend it against every possible counterargument and confess nothing.  All claims are discounted accordingly.  If even the proponent of science admits that science is less than perfect, why, it must be pretty much worthless.

When someone has lived their life accustomed to certainty, you can’t just say to them, “Science is probabilistic, just like all other knowledge.”  They will accept the first half of the statement as a confession of guilt; and dismiss the second half as a flailing attempt to accuse everyone else to avoid judgment.

You have admitted you are not trustworthy—so begone, Science, and trouble us no more!

This experience, I fear, maps the domain of belief onto the social domains of authority, of command, of law.  In the social domain, there is a qualitative difference between absolute laws and nonabsolute laws, between commands and suggestions, between authorities and unauthorities.  There seems to be strict knowledge and unstrict knowledge, like a strict regulation and an unstrict regulation.  Strict authorities must be yielded to, while unstrict suggestions can be obeyed or discarded as a matter of personal preference.  And Science, since it confesses itself to have a possibility of error, must belong in the second class.

The abyssal cultural gap between the Authoritative Way and the Quantitative Way is rather annoying to those of us staring across it from the rationalist side.  Here is someone who believes they have knowledge more reliable than science’s mere probabilistic guesses—such as the guess that the moon will rise in its appointed place and phase tomorrow, just like it has every observed night since the invention of astronomical record-keeping, and just as predicted by physical theories whose previous predictions have been successfully confirmed to fourteen decimal places.  And what is this knowledge that the unenlightened ones set above ours, and why?  It’s probably some musty old scroll that has been contradicted eleventeen ways from Sunday, and from Monday, and from every day of the week.  Yet this is more reliable than Science (they say) because it never admits to error, never changes its mind, no matter how often it is contradicted.  They toss around the word “certainty” like a tennis ball, using it as lightly as a feather—while scientists are weighed down by dutiful doubt, struggling to achieve even a modicum of probability.  “I’m perfect,” they say without a care in the world, “I must be so far above you, who must still struggle to improve yourselves.”

There is nothing simple you can say to them—no fast crushing rebuttal.  By thinking carefully, you may be able to win over the audience, if this is a public debate.  Unfortunately you cannot just blurt out, “Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name ‘certain’ are less assured than the least of our mighty hypotheses.”  It’s a difference of life-gestalt that isn’t easy to describe in words at all, let alone quickly.

What might you try, rhetorically, in front of an audience?  Hard to say…

But, in a way, the more interesting question is what you say to someone not in front of an audience.  How do you begin the long process of teaching someone to live in a universe without certainty?

I think the first, beginning step should be understanding that you can live without certainty—that if, hypothetically speaking, you couldn’t be certain of anything, it would not deprive you of the ability to make moral or factual distinctions.

It would concede far too much (indeed, concede the whole argument) to agree with the premise that you need absolute knowledge of absolutely good options and absolutely evil options in order to be moral.  You can have uncertain knowledge of relatively better and relatively worse options, and still choose.  It should be routine, in fact, not something to get all dramatic about.

I mean, yes, if you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B.  It is not a necessary condition.

Oh, and:  Logical fallacy:  Appeal to consequences of belief.

Let’s see, what else do they need to know?  Well, there’s the entire rationalist culture which says that doubt, questioning, and confession of error are not terrible shameful things.

There’s the whole notion of gaining information by looking at things, rather than being proselytized.  When you look at things harder, sometimes you find out that they’re different from what you thought they were at first glance; but it doesn’t mean that Nature lied to you, or that you should give up on seeing.

Then there’s the concept of a calibrated confidence—that “probability” isn’t the same concept as the little progress bar in your head that measures your emotional commitment to an idea.  It’s more like a measure of how often, pragmatically, in real life, people in a certain state of belief say things that are actually true.  If you take one hundred people and ask them to list one hundred statements of which they are “absolutely certain”, how many will be correct?  Not one hundred.

If anything, the statements that people are really fanatic about are far less likely to be correct than statements like “the Sun is larger than the Moon” that seem too obvious to get excited about.  For every statement you can find of which someone is “absolutely certain”, you can probably find someone “absolutely certain” of its opposite, because such fanatic professions of belief do not arise in the absence of opposition.  So the little progress bar in people’s heads that measures their emotional commitment to a belief does not translate well into a calibrated confidence—it doesn’t even behave monotonically.
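
As a rough illustration of the calibration idea (my own sketch, with invented data, not from the post), you can group statements by stated confidence and check what fraction turn out true:

```python
# Illustrative sketch with invented data: calibration is the fraction of
# statements made at a given stated confidence that turn out true.
from collections import defaultdict

# Each record: (stated confidence, whether the statement was actually true).
statements = [
    (1.00, True), (1.00, True), (1.00, False),   # "absolutely certain"
    (0.90, True), (0.90, True), (0.90, False),
    (0.60, True), (0.60, False),
]

counts = defaultdict(lambda: [0, 0])   # stated confidence -> [n_true, n_total]
for stated, was_true in statements:
    counts[stated][1] += 1
    counts[stated][0] += int(was_true)

for stated in sorted(counts, reverse=True):
    n_true, n_total = counts[stated]
    print(f"stated {stated:.0%}: {n_true}/{n_total} true "
          f"({n_true / n_total:.0%} observed frequency)")
```

A well-calibrated speaker’s observed frequencies would track the stated confidences; the essay’s point is that the “absolutely certain” rows, in real life, fall well short of 100%.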

JC comments:  I found both of these essays to provide substantial insights into reasoning about climate uncertainty, confidence levels, communicating uncertainty to the public, and playing politics with uncertainty and confidence levels.

The IPCC has a very bad case of confusing the probability inside their argument with the probability of the question as a whole (e.g. 20th century attribution, 21st century projections, climate sensitivity).  Dangerous anthropogenic global warming is one possible scenario of the future; there are many other possible scenarios that the IPCC completely ignores (heck, we can’t predict solar variations, volcanic eruptions, and natural internal variability, so we might as well ignore them).

The appeal to the consequences of belief pretty much sums up the public debate on climate change.
