by Judith Curry
If we omit discussion of tail risk, are we really telling the whole truth?
Kerry Emanuel
This post is motivated by an essay by Kerry Emanuel published at the Climate Change National Forum, entitled Tail Risk vs. Alarmism, which is in part motivated by my previous post AAAS: What we know. Excerpts:
In assessing the event risk component of climate change, we have, I would argue, a strong professional obligation to estimate and portray the entire probability distribution to the best of our ability. This means talking not just about the most probable middle of the distribution, but also the lower probability high-end risk tail, because the outcome function is very high there.
Do we not have a professional obligation to talk about the whole probability distribution, given the tough consequences at the tail of the distribution? I think we do, in spite of the fact that we open ourselves to the accusation of alarmism and thereby risk reducing our credibility. A case could be made that we should keep quiet about tail risk and preserve our credibility as a hedge against the possibility that someday the ability to speak with credibility will be absolutely critical to avoid disaster.
Uncertainty monster simplification
In my paper Climate Science and the Uncertainty Monster, I described 5 ways of coping with the monster. Monster Simplification is particularly relevant here: Monster simplifiers attempt to transform the monster by subjectively quantifying or simplifying the assessment of uncertainty.
The uncertainty monster paper distinguished between statistical uncertainty and scenario uncertainty:
Statistical uncertainty is the aspect of uncertainty that is described in statistical terms. An example of statistical uncertainty is measurement uncertainty, which can be due to sampling error or inaccuracy or imprecision in measurements.
Scenario uncertainty implies that it is not possible to formulate the probability of occurrence of one particular outcome. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop over time. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood.
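To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not from the uncertainty monster paper); the measurement numbers and scenario labels are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Statistical uncertainty: describable by a distribution, e.g. repeated
# measurements of the same quantity with sampling error and imprecision.
true_value = 15.0  # hypothetical "true" temperature (deg C)
measurements = true_value + rng.normal(0.0, 0.2, size=1000)
print(f"mean = {measurements.mean():.2f}, std = {measurements.std():.2f}")

# Scenario uncertainty: a set of discrete, plausible descriptions of how the
# system may develop, with NO a priori probabilities attached to them.
emission_scenarios = ["RCP2.6", "RCP4.5", "RCP6.0", "RCP8.5"]
for s in emission_scenarios:
    print(f"{s}: plausible but unverifiable; no likelihood assigned")
```

The point of the contrast is that only the first kind of uncertainty invites a pdf; the second is simply a list of possibilities.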
Given our uncertainty and ignorance surrounding climate sensitivity, I have discussed the folly of attempting probabilistic estimates of climate sensitivity, i.e. of trying to create a pdf (see this previous post Probabilistic estimates of climate sensitivity). In my opinion, the most significant point in the IPCC AR5 WG1 report is their acknowledgment that they cannot create a meaningful pdf of climate sensitivity with a central tendency, and hence they only provide ranges with confidence levels (and they avoid identifying a best estimate of 3C as they did in the AR4). The strategy used in the AR5 is appropriate in the context of scenario uncertainty, where they identify some bounds for sensitivity and present some assessment of likelihood (values less than 1C are extremely unlikely, and values greater than 6C are very unlikely).
So I strongly disagree with this statement by Emanuel:
we have a strong professional obligation to estimate and portray the entire probability distribution to the best of our ability.
In my opinion, we have a strong professional obligation NOT to simplify the uncertainty by portraying it as a pdf when the situation is characterized by substantial uncertainty that is not statistical in nature. This issue is discussed in a practical way with regard to climate science in a paper by Risbey and Kandlikar (2007), see especially Table 5:
Climate sensitivity is definitely not characterized by #1; rather, it is characterized by #2 or #4. The lower bound is arguably well defined; the upper bound is not. The problem at the upper bound is what concerns Emanuel; I am arguing that the way to address this is NOT by invoking the fat tail, extending out to infinity, of a mythical probability distribution.
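As a hedged illustration of why the choice of representation matters (my own sketch, not anything from Emanuel or the IPCC; the lognormal parameters are arbitrary assumptions), compare a fat-tailed pdf for sensitivity with a bounded range-plus-likelihood assessment:

```python
from scipy import stats

# Hypothetical fat-tailed pdf for climate sensitivity S: a lognormal with
# median ~3 C. Any such pdf assigns nonzero probability to arbitrarily
# large values of S -- the tail never actually ends.
sens_pdf = stats.lognorm(s=0.4, scale=3.0)
print("P(S > 6 C)  =", round(sens_pdf.sf(6.0), 4))
print("P(S > 10 C) =", round(sens_pdf.sf(10.0), 6))  # small, but never zero

# Bounded alternative: ranges with qualitative likelihood statements, in the
# spirit of the AR5-style statements discussed above. No tail to infinity.
bounded_assessment = {
    "S < 1 C": "extremely unlikely",
    "S > 6 C": "very unlikely",
}
for claim, likelihood in bounded_assessment.items():
    print(f"{claim}: {likelihood}")
```

The fat-tail representation forces a quantitative answer about the far tail whether or not the underlying knowledge supports one; the bounded representation does not pretend to.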
Nassim Nicholas Taleb’s black swan arguments emphasize the non-computability of consequential rare events using scientific methods (owing to the very nature of small probabilities).
What’s the worst case?
I have spent considerable effort in identifying possible/plausible worst case scenarios, black swans and dragon kings:
- My 2010 AGU talk Climate surprises, catastrophes and fat tails (end of the post)
- Anticipating the climate black swan
- My presentation at the NOAA Water Cycle Challenges Workshop
- My recent presentation Generating possibility distributions of scenarios for regional climate change
Identifying possible/plausible worst case scenarios is, in my opinion, much more useful than the fat-tail approach for identifying possible black swans and dragon kings in the climate system.
The philosophical foundation for thinking about ‘worst case scenarios’ is laid out in the work of Gregor Betz, see especially his paper What’s the Worst Case? The Methodology of Possibilistic Prediction. This paper deserves a thread of its own (I regard it as hugely important), but I want to at least introduce the relevant concepts here. Excerpts:
Where even probabilistic prediction fails, foreknowledge is (at most) possibilistic in kind; i.e. we know some future events to be possible, and some other events to be impossible.
Gardiner, in defence of the precautionary principle, rightly notes that (i) the application of the precautionary principle demands that a range of realistic possibilities be established, and that (ii) this is required by any principle for decision making under uncertainty whatsoever.
Accepting the limits of probabilistic methods and refusing to make probabilistic forecasts where those limits are exceeded, originates, ultimately, from the virtue of truthfulness, and from the requirements of scientific policy advice in a democratic society.
‘Possibility’, here, means neither logical nor metaphysical possibility, but simply (logical and statistical) consistency with our relevant background knowledge.
A surprise of the first type occurs if a possibility that had not even been articulated becomes true. Hypothesis articulation is, essentially, the business of avoiding surprises (of this 1st type). There is, however, a second type of surprise that does not simply extend the picture we’ve drawn so far, but rather shakes it. Our scientific knowledge is constantly changing, whereas that change is not cumulative: scientific progress also comprises refuting, correcting, and abandoning previous scientific results. Now a readjustment of the background knowledge questions the entire former assessment of possibilistic hypotheses.
I take this brief discussion to indicate that the decision deliberation starts to become messy and complicated. It is not clear to me whether there are general principles which can guide rational decisions in such situations at all. This, however, must not serve as an excuse for simplifying the epistemic situation we face! If a policy decision requires a complex normative judgement, then democratically legitimised policy makers have arguably a hard job; it is, nevertheless, their job to balance and weigh the diverse risks of the alternative options. That is not the job of scientific policy advisers who might be tempted to simplify the situation, thereby pre-determining the complex value judgements.
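Betz’s possibilistic framing can be caricatured in a few lines of code (a sketch of my own, assuming nothing beyond the excerpts above; the hypothesis labels are hypothetical): a hypothesis counts as possible if it is consistent with current background knowledge, and a surprise of the second type, i.e. a revision of that background knowledge, forces the entire classification to be redone rather than merely extended.

```python
def classify(hypotheses, consistent_with_background):
    """Partition hypotheses into possible/impossible relative to background knowledge."""
    possible = [h for h in hypotheses if consistent_with_background(h)]
    impossible = [h for h in hypotheses if not consistent_with_background(h)]
    return possible, impossible

# Purely illustrative scenario labels
hypotheses = ["sustained cooling", "modest warming", "large warming", "abrupt shift"]

# Current background knowledge (illustrative): rules out two of the scenarios
current_bk = lambda h: h not in ("sustained cooling", "abrupt shift")
print(classify(hypotheses, current_bk))

# Type-2 surprise: the background knowledge itself is revised, so the former
# possibilistic assessment must be redone; a formerly excluded hypothesis
# ("abrupt shift") is reclassified as possible.
revised_bk = lambda h: h != "sustained cooling"
print(classify(hypotheses, revised_bk))
```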
Alarmism
I have written two previous posts that address the idea that uncertainty increases the argument for action.
As Betz points out, there is no simple decision rule for dealing with this kind of deep uncertainty.
Alarmism occurs when possible, unverified worst case scenarios are touted as almost certain to occur. U.S. Secretary of State John Kerry frequently does this, as does Joe Romm (and Rajendra Pachauri). A recent example comes from Dana Nuccitelli, John Cook and Stephan Lewandowsky.
The problems with this kind of thinking are summarized in my two previous posts (cited a few paragraphs above); in summary, this is a stark and potentially dangerous oversimplification of how to approach decision making about this complex problem.
Summary
Back to the AAAS statement What We Know. Unverified hypotheses about fat tail events are NOT what we KNOW. Presenting this as knowledge rather than speculation, and unduly focusing on it for policy decisions, is alarmist.
My biggest concern is that by unduly (and almost exclusively) focusing on AGW, we are courting a surprise of the first type in Betz’s sense: a possibility that has not been articulated might come true. These possibilities (e.g. abrupt climate change) are associated with natural climate variability, and possibly its interaction with AGW.
Pretending that all this can be characterized by a fat tail derived from estimates of climate sensitivity is highly misleading, in my opinion.
So I agree with Emanuel that we should think about worst cases (e.g. black swans and dragon kings); I disagree with him regarding how this should be approached scientifically and mathematically. However, undue focus on unverified worst case scenarios as a strategy for building political will for a particular policy option constitutes undesirable alarmism.
