
Probabilistic(?) estimates of climate sensitivity

by Judith Curry

James Annan (with Hargreaves) has a new paper out, entitled “On the generation and interpretation of probabilistic estimates of climate sensitivity.”  Here is the abstract:

The equilibrium climate response to anthropogenic forcing has long been one of the dominant, and therefore most intensively studied, uncertainties in predicting future climate change. As a result, many probabilistic estimates of the climate sensitivity (S) have been presented. In recent years, most of them have assigned significant probability to extremely high sensitivity, such as P(S > 6°C) > 5%. In this paper, we investigate some of the assumptions underlying these estimates. We show that the popular choice of a uniform prior has unacceptable properties and cannot be reasonably considered to generate meaningful and usable results. When instead reasonable assumptions are made, much greater confidence in a moderate value for S is easily justified, with an upper 95% probability limit for S easily shown to lie close to 4°C, and certainly well below 6°C. These results also impact strongly on projected economic losses due to climate change.

The punchline is this. The IPCC AR4 states that equilibrium climate sensitivity is likely (> 66%) to lie in the range 2–4.5C and very unlikely (< 10%) to lie below 1.5C. Annan and Hargreaves demonstrate that the widely-used approach of a uniform prior fails to adequately represent “ignorance” and generates rather pathological results which depend strongly on the selected upper bound. They then turn to the approach of representing reasonable opinion through an expert prior. They examine Beta distributions and Cauchy distributions, including a long-tailed Cauchy prior. They conclude that:

Thus it might be reasonable for the IPCC to upgrade their confidence in S lying below 4.5°C to the “extremely likely” level, indicating 95% probability of a lower value.
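
To make the prior-dependence concrete, here is a minimal numerical sketch in Python. It is NOT Annan and Hargreaves’ actual likelihood or priors: the observational constraint (Gaussian in the feedback parameter, 1.25 ± 0.5 W/m2/K), the doubling forcing, and the prior parameters are all hypothetical, chosen only to illustrate the mechanism. Because a constraint that is roughly Gaussian in feedback translates into a long-tailed likelihood in sensitivity, the posterior tail probabilities under a uniform prior depend on the arbitrary upper bound, while a long-tailed expert prior such as a Cauchy involves no such bound.

    import numpy as np

    # A minimal sketch, not the paper's actual likelihood or priors: a
    # hypothetical observational constraint that is Gaussian in the feedback
    # parameter lambda = F_2x / S (numbers invented for illustration), which
    # translates into a long-tailed likelihood in S -- exactly the situation
    # in which a uniform prior's upper bound matters.
    F2X = 3.7                                   # assumed CO2 doubling forcing (W/m2)
    dS = 0.005
    S = np.arange(dS, 25.0, dS)                 # sensitivity grid (deg C)
    lam = F2X / S                               # implied feedback (W/m2/K)
    likelihood = np.exp(-0.5 * ((lam - 1.25) / 0.5) ** 2)   # hypothetical constraint

    def normalise(p):
        return p / (p.sum() * dS)               # normalise a density on the grid

    def prob_above(post, threshold):
        return post[S > threshold].sum() * dS   # posterior tail probability

    priors = {
        "uniform on [0, 10]": (S <= 10.0).astype(float),
        "uniform on [0, 20]": (S <= 20.0).astype(float),
        "Cauchy(loc=3, scale=2)": 1.0 / (1.0 + ((S - 3.0) / 2.0) ** 2),
    }

    for name, prior in priors.items():
        post = normalise(prior * likelihood)
        print(f"{name:24s}  P(S > 4.5C) = {prob_above(post, 4.5):.2f}"
              f"   P(S > 6C) = {prob_above(post, 6.0):.2f}")

With these made-up numbers, the uniform-prior tail probabilities shift substantially as the bound moves from 10 to 20 C, whereas the Cauchy-prior results are unaffected by any such choice of bound.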

This is a nice analysis with a useful link to economic decision making, but possibly with a fatal flaw.

Background

Some background on climate sensitivity:

There are many scientific papers and blog posts on this topic, but these provide a basic introduction if you aren’t familiar with it.

Ignorance, pdfs, and Bayesian reasoning

Take a look at this figure from the IPCC AR4, which represents the pdfs or relative likelihoods for equilibrium climate sensitivity from a range of studies (both models and observations). The distributions cover a wide range. Recall that the IPCC AR4 states that equilibrium climate sensitivity is likely (> 66%) to lie in the range 2–4.5C and very unlikely (< 10%) to lie below 1.5C. Annan and Hargreaves find that it is very unlikely to be above 4.5C in the context of a Bayesian analysis, as a result of prior selection and expert judgment.

The issue that I have with this is that the level of ignorance is sufficiently large that probabilities determined from Bayesian analysis are not justified. Bayesian reasoning does not deal well with ignorance. The fact that others have created pdfs from sensitivity estimates and that economists use these pdfs is not a justification; rather, climate researchers and statisticians need to take a close look at this to see whether this line of reasoning is flawed.

My comments here relate specifically to determination of equilibrium climate sensitivity from climate models. Stainforth et al. (2007) argue that model inadequacy and an inadequate number of simulations in the ensemble preclude producing meaningful PDFs from the frequency of model outcomes of future climate. Smith (2006) goes so far as to say “Indeed, I now believe that model inadequacy prevents accountable probability forecasts in a manner not dissimilar to that in which uncertainty in the initial condition precludes accurate best first guess forecasting in the root-mean-square sense.” Stainforth et al. state:

The frequency distributions across the ensemble of models may be valuable information for model development, but there is no reason to expect these distributions to relate to the probability of real-world behaviour. One might (or might not) argue for such a relation if the models were empirically adequate, but given nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . There may well exist thresholds, or tipping points (Kemp 2005), which lie within this range of uncertainty. If so, the provision of a mean value is of little decision-support relevance.

Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify; we know our current models are inadequate and we know many of the reasons why they are so. These methods aim to increase our ability to communicate the appropriate degree of confidence in said results. Each model run is of value as it presents a ‘what if’ scenario from which we may learn about the model or the Earth system. Such insights can hold non-trivial value for decision making.

I agree with Stainforth et al. and Smith on this. Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a pdf from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals.
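
The point about ensemble frequencies can be made with a toy example, sketched in Python under purely hypothetical assumptions (a one-parameter “model” in which sensitivity is simply S = F_2x/lambda, with the forcing value and parameter ranges invented for illustration): two equally defensible ways of sampling the model’s parameter space produce different frequency distributions for S, so the ensemble histogram reflects the sampling design at least as much as anything about the real world.

    import numpy as np

    # A toy illustration, not any GCM: sensitivity in this "model" is just
    # S = F_2x / lambda, and the two sampling designs and parameter ranges
    # are invented.  The frequency of ensemble outcomes depends on the
    # sampling design, not only on the model physics.
    rng = np.random.default_rng(0)
    F2X = 3.7
    n = 100_000

    # Design A: sample the feedback parameter uniformly on [0.6, 2.0] W/m2/K
    lam_a = rng.uniform(0.6, 2.0, n)
    S_a = F2X / lam_a

    # Design B: sample sensitivity itself uniformly over the implied range
    S_b = rng.uniform(F2X / 2.0, F2X / 0.6, n)

    for name, S in [("uniform in feedback", S_a), ("uniform in sensitivity", S_b)]:
        print(f"{name:23s}  ensemble mean S = {S.mean():.2f} C,"
              f"  fraction of members with S > 4.5C = {(S > 4.5).mean():.2f}")

Both designs span exactly the same range of sensitivities, yet the ensemble means and the fraction of members above 4.5C differ noticeably.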

Annan and Hargreaves recognize the potential inadequacy of the Bayesian paradigm:

However, it must be recognised that in fact there can be no prior that genuinely represents a state of complete ignorance (Bernardo and Smith, 1994, Section 5.4), and indeed the impossibility of representing true ignorance within the Bayesian paradigm is perhaps one of the most severe criticisms that is commonly levelled at it.

Imprecise probability measures

Annan and Hargreaves state:

While the Bayesian approach is not the only possible paradigm for the treatment of epistemic uncertainty in climate science (eg Kriegler, 2005), it appears to be the dominant one in the literature. We do not wish to revisit the wider debate concerning the presentation of uncertainty in climate science (eg Moss and Schneider, 2000; Betz, 2007; Risbey, 2007; Risbey and Kandlikar, 2007) but merely note that despite this debate, numerous authors have in fact presented precise pdfs for climate sensitivity, and furthermore their results are frequently used as inputs for further economic and policy analyses (eg Yohe et al., 2004; Meinshausen, 2006; Stern, 2007; Harvey, 2007)

While uncertainty has traditionally been represented by probability, probability is not good at representing ignorance and there are often substantial difficulties in assigning probabilities to events. Other representations of uncertainty have been considered in the literature that address these difficulties (see Halpern 2003 for a review), including Dempster-Shafer evidence theory, possibility measures, and plausibility measures. These alternative representations of uncertainty allow for ignorance and also surprises. While it is beyond the scope of this thread to describe these methods in any detail, a brief description is given of Dempster-Shafer evidence theory and possibility theory, including their relative advantages and disadvantages and how various concepts of likelihood play out in each of these methods. The description below follows Halpern (2003) and references therein.

Dempster-Shafer theory of evidence (also referred to as evidence theory) allows nonmeasurable events, hence allowing one to specify a degree of ignorance. In evidence theory, likelihood is assigned to intervals (referred to as sets), as opposed to probability theory, where likelihood is assigned to a point-valued probability and a probability density function. Evidence theory allows for a combination of evidence from different sources, arriving at a degree of belief (represented by a belief function) that accounts for all the available evidence. Beliefs corresponding to independent pieces of information are combined using Dempster’s rule of combination. The amount of belief for a given hypothesis forms a lower bound, and plausibility is the upper bound on the possibility that the hypothesis could be true, whereby belief ≤ plausibility. Probability values are assigned to sets of possibilities rather than single events: their appeal rests on the fact that they naturally encode evidence in favor of propositions. Evidence theory is being used increasingly for the design of complex engineering systems. Evidence theory and Bayesian theory provide complementary information when evidence about uncertainty is imprecise. Dempster–Shafer theory allows one to specify a degree of ignorance in this situation instead of being forced to supply prior probabilities that sum to unity. (see also the Wikipedia).
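
As a concrete illustration of belief and plausibility, here is a minimal Dempster-Shafer sketch in Python. The two mass functions, the intervals, and the numbers are entirely hypothetical (they are not drawn from any published sensitivity study); the point is simply how Dempster’s rule combines two independent pieces of evidence and how belief and plausibility bracket a hypothesis such as “S below 4.5C”.

    from itertools import product

    # A minimal Dempster-Shafer sketch with hypothetical numbers.  Masses are
    # assigned to intervals of S (deg C); mass on the whole frame (0, 10)
    # expresses explicit ignorance.
    FRAME = (0.0, 10.0)

    # Two hypothetical, independent lines of evidence
    m1 = {(1.5, 4.5): 0.6, (3.0, 6.0): 0.2, FRAME: 0.2}     # e.g. "observational"
    m2 = {(2.0, 4.0): 0.5, (2.0, 8.0): 0.3, FRAME: 0.2}     # e.g. "model-based"

    def intersect(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo < hi else None

    def dempster_combine(ma, mb):
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(ma.items(), mb.items()):
            c = intersect(a, b)
            if c is None:
                conflict += wa * wb                 # mass falling on the empty set
            else:
                combined[c] = combined.get(c, 0.0) + wa * wb
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    def belief(m, hyp):        # total mass of sets entirely inside the hypothesis
        return sum(w for s, w in m.items() if s[0] >= hyp[0] and s[1] <= hyp[1])

    def plausibility(m, hyp):  # total mass of sets consistent with the hypothesis
        return sum(w for s, w in m.items() if intersect(s, hyp) is not None)

    m12 = dempster_combine(m1, m2)
    H = (0.0, 4.5)             # hypothesis: S below 4.5 C
    print(f"belief(S < 4.5C)       = {belief(m12, H):.2f}")
    print(f"plausibility(S < 4.5C) = {plausibility(m12, H):.2f}")

With these made-up masses, the belief in the hypothesis is lower than its plausibility, and the gap between the two is an explicit representation of the remaining ignorance, something a single pdf cannot express.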

Possibility theory is an imprecise probability theory that considers incomplete information. Possibility theory is driven by the principle of minimal specificity, which states that any hypothesis not known to be impossible cannot be ruled out. In contrast to probability, possibility theory describes how likely an event is to occur using the dual concepts of the possibility and necessity of the event, which makes it easier to capture partial ignorance. A possibility distribution distinguishes between what is plausible and what is the normal course of things, and between what is surprising and what is impossible. Possibility theory can be interpreted as a non-numerical version of probability theory or as a simple approach to reasoning with imprecise probabilities. (see also the Wikipedia).
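
Similarly, a small possibility-theory sketch in Python, using a purely illustrative trapezoidal possibility distribution for S (not an elicited expert distribution): the possibility of an event is the supremum of the distribution over that event, and its necessity is one minus the possibility of the complement, so an event can be fully possible while being far from necessary.

    import numpy as np

    # A small possibility-theory sketch: a hypothetical trapezoidal possibility
    # distribution for S, fully possible on [2, 4] C and impossible below 1 C
    # and above 8 C (illustrative shape only).
    dS = 0.01
    S = np.arange(0.0, 10.0 + dS, dS)
    pi = np.interp(S, [1.0, 2.0, 4.0, 8.0], [0.0, 1.0, 1.0, 0.0])

    def possibility(event):            # Pos(A) = sup of pi over the event A
        return pi[event].max() if event.any() else 0.0

    def necessity(event):              # Nec(A) = 1 - Pos(complement of A)
        return 1.0 - possibility(~event)

    for threshold in (4.5, 6.0):
        A = S < threshold
        print(f"S < {threshold}C: possibility = {possibility(A):.2f},"
              f" necessity = {necessity(A):.2f}")

With this shape, “S below 6C” comes out fully possible but only moderately necessary, which is a more honest statement of partial ignorance than a single probability would be.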

More generalized approaches such as plausibility theory and the Tesla evidence theory combine probability measures, Dempster-Shafer belief functions, ranking functions, and possibility and necessity measures.

I don’t know why imprecise probability paradigms aren’t more widely used in climate science.  Probabilities seem misleading, given the large uncertainty.

Black swans and dragon kings

On the one hand, it is “comforting” to have a very likely confidence level of sensitivity not exceeding 4.5C, since this provides a concrete range for economists and others to work with. It may be a false comfort. I discussed this issue in my AGU presentation “Climate surprises, catastrophes, and fat tails”.

Stainforth et al. state:

How should we interpret such ensembles in terms of information about future behaviour of the actual Earth system? The most straightforward interpretation is simply to present the range of behaviour in the variables of interest across different models (Stainforth et al. 2005, 2007). Each model gives a projected distribution; an evaluation of its climate. A grand ensemble provides a range of distributions; a range of ICE means (figure 2), a range of 95th centiles, etc. These are subject to a number of simple assumptions: (i) the forcing scenario explored, (ii) the degree of exploration of model and ICU, and (iii) the processes included and resolved. All analysis procedures will be subject to these assumptions, at least, unless a reliable physical constraint can be identified. Even were we to achieve the impossible and have access to a comprehensive exploration of uncertainty in parameter space, the shape of various distributions extracted would reflect model constructs with no obvious relationship to the probability of real-world behaviour.

Thus, how should we interpret and utilize these simulations? A pragmatic response is to acknowledge and highlight such unquantifiable uncertainties but to present results on the basis that they have zero effect on our analysis. The model simulations are therefore taken as possibilities for future real-world climate and as such of potential value to society, at least on variables and scales where the models agree in terms of their climate distributions (Smith 2002). But even best available information may be rationally judged quantitatively irrelevant for decision-support applications. Quantifying this relevance is a missing link in connecting modelling and user communities. Today’s ensembles give us a lower bound on the maximum range of uncertainty. This is an honest description of our current abilities and may still be useful as a guide to decision and policy makers.

Applying pdfs of climate sensitivity to economic modelling seems fraught with the potential to mislead and to preclude consideration of black swans and dragon kings. Didier Sornette’s presentation at the AGU entitled “Dragon-Kings, black swans, and prediction” raises these issues, which need to be considered in the context of risk assessment and economic modeling.
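
The economic leverage of the tail can be seen with a rough Python sketch. The damage function and the two sensitivity pdfs below are hypothetical (they are not taken from any published study or integrated assessment model): two distributions that agree near the mode but differ in their upper tails give very different expected damages, because a convex loss function weights exactly the part of the distribution we know least about.

    import numpy as np

    # A rough sketch with a hypothetical convex damage function and two
    # illustrative sensitivity pdfs (neither is from any published study).
    dS = 0.005
    S = np.arange(dS, 20.0, dS)

    def normalise(p):
        return p / (p.sum() * dS)

    # Two pdfs centred near 3 C that differ mainly in the upper tail:
    thin = normalise(np.exp(-0.5 * ((S - 3.0) / 0.7) ** 2))   # thin-tailed
    fat = normalise(1.0 / (1.0 + ((S - 3.0) / 0.7) ** 2))     # fat-tailed

    damage = 0.01 * S ** 2          # hypothetical convex damage (fraction of GDP)

    for name, pdf in [("thin-tailed pdf", thin), ("fat-tailed pdf", fat)]:
        expected_loss = (pdf * damage).sum() * dS
        print(f"{name}:  P(S > 6C) = {pdf[S > 6.0].sum() * dS:.3f},"
              f"  expected damage = {expected_loss:.3f} of GDP")

This is presumably why Annan and Hargreaves note that their results “impact strongly on projected economic losses”: the choice between a fat-tailed posterior (e.g. from a uniform prior) and a thin-tailed expert posterior propagates directly into the economic analysis.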

Questions

In summary, given the large uncertainties, I am unconvinced by Annan and Hargreaves’ analysis in terms of providing limits to the range of expected climate sensitivity values. Expert judgment and Bayesian analysis are not up to this particular task, IMO.

I would appreciate some input and assessment on these issues from the denizens with statistical expertise. Imprecise probability theories (e.g. evidence theory, possibility theory, plausibility theory) seem to be far better fits to the climate sensitivity problem. As far as I can tell, these haven’t been used in climate science, other than Kriegler’s 2005 Ph.D. thesis and my own fledgling attempts (e.g. the Italian flag analysis; note Part II is under construction).

This is a technical thread; comments will be moderated for relevance (general comments about the greenhouse effect should be made on the Pierrehumbert thread). Relevant topics are the statistical methods used by Annan and Hargreaves, economic modelling using climate sensitivity information, and evidence for/against a large sensitivity (>4C) and for a low sensitivity (<1.5C).
