
Reasoning About Climate Uncertainty – Draft

by Judith Curry

Here is a complete (albeit rough) draft of my paper for the special issue of the journal Climatic Change (founding editor Steve Schneider) entitled Framing and Communicating Uncertainty and Confidence Judgments by the IPCC.


The target length is 3000 words; my paper is substantially longer than this already.  I have selected only a few topics to cover.  I have a much longer paper on uncertainty (for which I just got the reviews) that covers additional topics; more on that paper soon (you’ve already seen some excerpts in the uncertainty series).

For background on uncertainty and the IPCC, see these previous threads (Part I and Part II). For additional relevant threads, see the uncertainty monster and reasoning threads.

Here is the main text of the paper (no abstract or reference list yet).

1. Introduction

The challenge of framing and communicating uncertainty about climate change is a symptom of the challenges of understanding and reasoning about such a complex system.  Our understanding of the complex climate system is hampered by a myriad of uncertainties, indeterminacy, ignorance, and cognitive biases. Complexity of the climate system arises from the very large number of degrees of freedom, the number of subsystems and complexity in linking them, and the nonlinear and chaotic nature of the atmosphere and ocean. A complex system exhibits behavior not obvious from the properties of its individual components, whereby larger scales of organization influence smaller ones and structure at all scales is influenced by feedback loops among the structures. The epistemology of computer simulations of complex systems is a new and active area of research among scientists, philosophers, and the artificial intelligence community. How to reason about the complex climate system and its computer simulations is not simple or obvious.

How has the IPCC dealt with the challenge of uncertainty in the complex climate system? Until the time of the IPCC TAR and the Moss-Schneider (2000) Guidance paper, uncertainty was dealt with in an ad hoc manner.  The Moss-Schneider guidelines raised a number of important issues regarding the identification and communication of uncertainties. However, the actual implementation of this guidance in the TAR and AR4 adopted a subjective perspective or “judgmental estimates of confidence.” Defenders of the IPCC uncertainty characterization argue that subjective consensus expressed using simple terms is understood more easily by policy makers.

The consensus approach used by the IPCC to characterize uncertainty has received a number of criticisms. Van der Sluijs et al. (2010b) find that the IPCC consensus strategy underexposes scientific uncertainties and dissent, making the chosen policy vulnerable to scientific error and limiting the political playing field. Van der Sluijs (2010a) argues that matters on which no consensus can be reached continue to receive too little attention from the IPCC, even though this dissension can be highly policy-relevant. Oppenheimer et al. (2007) point out the need to guard against overconfidence and argue that the IPCC consensus emphasizes expected outcomes, whereas it is equally important that policy makers understand the more extreme possibilities that consensus may exclude or downplay. Gruebler and Nakicenovic (2001) opine that “there is a danger that the IPCC consensus position might lead to a dismissal of uncertainty in favor of spuriously constructed expert opinion.”

While the policy makers’ desire for a clear message from the scientists is understandable, the consensus approach being used by the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. While the public may not understand the complexity of the science or be culturally predisposed to accept the consensus, they can certainly understand the vociferous arguments over the science portrayed by the media.  Better characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing the “noise” and animosity portrayed in the media that fuels the public distrust of climate science and acts to stymie the policy process.  Not to mention that an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how to best target resources to improve understanding.  Further, improved understanding and characterization of uncertainty is critical information for the development of robust policy options.

2. Indeterminacy and framing of the climate change problem

An underappreciated aspect of characterizing uncertainty is associated with the questions that do not even get asked. Wynne (1992) argues that scientific knowledge typically investigates “a restricted agenda of defined uncertainties—ones that are tractable— leaving invisible a range of other uncertainties, especially about the boundary conditions of applicability of the existing framework of knowledge to new situations.” Wynne refers to this as indeterminacy, which arises from the “unbounded complexity of causal chains and open networks.” Indeterminacies can arise from not knowing whether the type of scientific knowledge and the questions posed are appropriate and sufficient for the circumstances and the social context in which the knowledge is applied.

In the climate change problem, indeterminacy is associated with the way the problem has been framed.  Frames are organizing principles that enable a particular interpretation of an issue. De Boer et al. (2010) state: “Frames act as organizing principles that shape in a “hidden” and taken-for-granted way how people conceptualize an issue.”  Risbey et al. (2005)??? argue that decisions on problem framing influence the choice of models and what knowledge is considered relevant to include in the analysis. De Boer et al. further state that frames can express how a problem is stated, who is expected to make a statement about it, what questions are relevant, and what range of answers might be appropriate.

The decision-making framework provided by the UNFCCC Treaty supplies the rationale for framing the IPCC assessment of climate change and its uncertainties, in terms of identifying dangerous climate change and providing input for decision making regarding CO2 stabilization targets. In the context of this framing, certain key scientific questions receive little attention.  In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument.  Further, the impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of their effect on the confidence level expressed in the attribution statement.  In the WG II Report, the focus is on attributing possible dangerous impacts to AGW, with little focus in the summary statements on how warming might actually be beneficial to certain regions or sectors.

Further, the decision-analytic framework associated with setting a CO2 stabilization target focuses research and analysis on using expert judgment to identify a most likely value of sensitivity/warming and on narrowing the range of expected values, rather than fully exploring the uncertainty and the possibility for black swans (Taleb 2007) and dragon kings (Sornette 2009). The concept of imaginable surprise was discussed in the Moss-Schneider uncertainty guidance documentation, but consideration of such possibilities seems largely to have been ignored by the AR4 report. The AR4 focused on what was “known” to a significant confidence level. The most visible failing of this strategy was the neglect, in the Summary for Policy Makers, of the possible impact of rapid ice sheet melting on sea level rise (e.g. Oppenheimer et al. 2007; Betz 2009). An important issue is to identify the potential black swans associated with natural climate variation under no human influence, on time scales of one to two centuries. Without even asking this question, judgments regarding the risk of anthropogenic climate change can be misleading to decision makers.

The presence of sharp conflicts with regards to both the science and policy reflects an overly narrow framing of the climate change problem.  Until the problem is reframed or multiple frames are considered by the IPCC, the scientific and policy debate will continue to ignore crucial elements of the problem, with confidence levels that are too high.

3. Uncertainty, ignorance and confidence

The Uncertainty Guidance Paper by Moss and Schneider (2000) recommended a common vocabulary to express quantitative levels of confidence based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts. This assessment strategy does not include any systematic analysis of the types and levels of uncertainty and the quality of the evidence, and more importantly it dismisses indeterminacy and ignorance as important factors in assessing these confidence levels. In the context of the narrow framing of the problem, this uncertainty assessment strategy promotes the consensus into becoming a self-fulfilling prophecy.

The uncertainty guidance provided for the IPCC AR4 distinguished between levels of confidence in scientific understanding and the likelihoods of specific results. In practice, primary conclusions in the AR4 included a mixture of likelihood and confidence statements that are ambiguous. Curry and Webster (2010) have raised specific issues with regards to the statement “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” Risbey and Kandlikar (2007) describe ambiguities in actually applying likelihood and confidence, including situations where likelihood and confidence cannot be fully separated, where likelihood levels contain implicit confidence levels, and where interpreting uncertainty with two levels of imprecision is in some cases rather difficult.

Numerous methods of categorizing risk and uncertainty have been described in the context of different disciplines and various applications; for a recent review, see Spiegelhalter and Riesch (2011).  Of particular relevance for climate change are schemes for analyzing uncertainty when conducting risk analyses. My primary concerns about the IPCC’s characterization of uncertainty are twofold: the treatment of what is more properly regarded as scenario uncertainty as if it were statistical uncertainty, and the neglect of indeterminacy and ignorance.

Following Walker et al. (2003), statistical uncertainty is distinguished from scenario uncertainty, whereby scenario uncertainty implies that it is not possible to formulate the probability of occurrence of particular outcomes. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop in the future. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood.  Wynne (1992) defines risk as knowing the odds (analogous to Walker et al.’s statistical uncertainty), and uncertainty as not knowing the odds but knowing the main parameters (analogous to Walker et al.’s scenario uncertainty).

Stainforth et al. (2007) argue that model inadequacy and an insufficient number of simulations in the ensemble preclude producing meaningful probability distributions from the frequency of model outcomes of future climate. Stainforth et al. state: “[G]iven nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . . Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify.” Given climate model inadequacies and uncertainties, Betz (2009) argues for the logical necessity of considering climate model simulations as modal statements of possibilities, which is consistent with scenario uncertainty. Stainforth et al. make an equivalent statement: “Each model run is of value as it presents a ‘what if’ scenario from which we may learn about the model or the Earth system.” Insufficiently large initial condition ensembles, combined with model parameter and structural uncertainty, preclude forming a PDF from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals. In the presence of scenario uncertainty, which characterizes climate model simulations, attempts to produce a PDF for climate sensitivity (e.g. Annan and Hargreaves 2010) are arguably misguided and misleading.
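
As a minimal numerical sketch of this point (the ensemble values and the bias are purely hypothetical numbers chosen for illustration, not output from any climate model), consider a PDF naively fitted to a handful of ensemble members:

```python
# Hypothetical sketch: a PDF fitted to a small ensemble reflects only the
# sampled spread, so a shared structural error never shows up in the
# resulting interval and the implied confidence is spurious.
import statistics

ensemble = [2.9, 3.1, 3.0, 3.2, 2.8]   # five hypothetical runs of one model
mean = statistics.mean(ensemble)        # 3.0
sd = statistics.stdev(ensemble)         # ~0.16

# Naive ~90% interval from the ensemble spread alone (mean +/- 1.64 sd)
lo, hi = mean - 1.64 * sd, mean + 1.64 * sd
print(f"fitted interval: [{lo:.2f}, {hi:.2f}]")   # ~[2.74, 3.26]

# A shared structural bias (say 0.8, affecting every member equally) never
# appears in the spread, so the narrow interval above conveys unjustified
# confidence about the real-world quantity.
```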

Ignorance is that which is not known; Wynne (1992) finds ignorance to be endemic because scientific knowledge must set the bounds of uncertainty in order to function.  Walker et al. (2003) categorize the following different levels of ignorance. Total ignorance implies a deep level of uncertainty, to the extent that we do not even know that we do not know. Recognized ignorance refers to fundamental uncertainty in the mechanisms being studied and a weak scientific basis for developing scenarios. Reducible ignorance may be resolved by conducting further research, whereas irreducible ignorance implies that research cannot improve knowledge (e.g. what happened prior to the big bang). Bammer and Smithson (2008) further distinguish between conscious ignorance, where we know we don’t know what we don’t know, and unacknowledged or meta-ignorance, where we don’t even consider the possibility of error.

While the Kandlikar et al. (2005) uncertainty schema explicitly includes effective ignorance in its uncertainty categorization, the AR4 uncertainty guidance (which is based upon Kandlikar et al.) neglects to include ignorance in the characterization of uncertainty.  Hence, IPCC confidence levels determined from the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts do not explicitly account for indeterminacy and ignorance, although recognized areas of ignorance are mentioned in some parts of the report (e.g. the possibility of indirect solar effects in sect xxxx of the AR4 WG1 Report).  Overconfidence is an inevitable result of neglecting indeterminacy and ignorance.

A comprehensive approach to uncertainty management and elucidation of the elements of uncertainty is described by the NUSAP scheme, which includes methods to determine the pedigree and quality of the relevant data and methods used (e.g. van der Sluijs et al. 2005a,b).  The complexity of the NUSAP scheme arguably precludes its widespread adoption by the IPCC.  The challenge is to characterize uncertainty in a complete way while retaining sufficient simplicity and flexibility for its widespread adoption. In the context of risk analysis, Spiegelhalter and Riesch (2011) describe a scheme for characterizing uncertainty that covers the range from complete numerical formalization of probabilities to indeterminacy and ignorance, and includes the possibility of unspecified but surprising events.  Quality of evidence is an important element of the NUSAP scheme and the scheme described by Spiegelhalter and Riesch (2011).  The GRADE scale of Guyatt et al. (2008) provides a simple yet useful method for judging quality of evidence, with a more complex scheme for judging quality utilized by NUSAP.

Judgmental estimates of confidence need to consider not only the amount of evidence for and against and the degree of consensus, but also the adequacy of the knowledge base (which includes the degree of uncertainty and ignorance) and the quality of the information that is available.  A practical way of incorporating these elements into an assessment of confidence is provided by Egan (2005).  The crucial difference between this approach and the consensus-based approach is that the dimension associated with the degree of consensus among experts is replaced by specific judgments about the adequacy of the knowledge base and the quality of the information available.
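
As a purely hypothetical illustration of how such judgments might be combined (this is a sketch of the general idea, not Egan’s actual scheme; the categories and the capping rule are assumptions made for illustration):

```python
# Hypothetical sketch: combine ordinal judgments of knowledge-base adequacy and
# information quality into a qualitative confidence level, letting the weaker
# dimension cap the overall confidence. The categories and rule are assumptions.

RANK = {"low": 0, "medium": 1, "high": 2}
LEVELS = ["low confidence", "medium confidence", "high confidence"]

def confidence(adequacy: str, quality: str) -> str:
    """adequacy: adequacy of the knowledge base; quality: quality of the
    available information. Confidence is capped by the weaker of the two."""
    return LEVELS[min(RANK[adequacy], RANK[quality])]

print(confidence("high", "medium"))   # -> "medium confidence"
print(confidence("low", "high"))      # -> "low confidence"
```

The point of a rule of this kind is that strong agreement alone cannot raise confidence when the knowledge base is inadequate or the available information is of poor quality.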

4. Consensus and disagreement

The uncertainty associated with climate science and the range of decision making frameworks and policy options provides much fodder for disagreement.  Here I argue that the IPCC’s consensus approach enforces overconfidence, marginalization of skeptical arguments, and belief polarization. The role of cognitive biases (e.g. Tversky and Kahneman 1974) has received some attention in the context of the climate change debate, as summarized by Morgan et al. (2009, chapter xxx). However, the broader issues of the epistemology and psychology of consensus and disagreement have received little attention in the context of the climate change problem.

Kelly (2005, 2008) provides some general insights into the sources of belief polarization that are relevant to the climate change problem.  Kelly (2008) argues that “a belief held at earlier times can skew the total evidence that is available at later times, via characteristic biasing mechanisms, in a direction that is favorable to itself.” Kelly (2008) also finds that “All else being equal, individuals tend to be significantly better at detecting fallacies when the fallacy occurs in an argument for a conclusion which they disbelieve, than when the same fallacy occurs in an argument for a conclusion which they believe.”  Kelly (2005) provides insights into the consensus building process: “As more and more peers weigh in on a given issue, the proportion of the total evidence which consists of higher order psychological evidence [of what other people believe] increases, and the proportion of the total evidence which consists of first order evidence decreases . . .  At some point, when the number of peers grows large enough, the higher order psychological evidence will swamp the first order evidence into virtual insignificance.”  Kelly (2005) concludes: “Over time, this invisible hand process tends to bestow a certain competitive advantage to our prior beliefs with respect to confirmation and disconfirmation. . . In deciding what level of confidence is appropriate, we should take into account the tendency of beliefs to serve as agents in their own confirmation.”

So what are the implications of Kelly’s arguments for consensus and disagreement associated with climate change and the IPCC? Cognitive biases in the context of an institutionalized consensus building process have arguably resulted in the consensus becoming increasingly confirmed in a self-reinforcing way. The consensus process of the IPCC has marginalized dissenting skeptical voices, who are commonly dismissed as “deniers” (e.g. Hasselmann 2010). This “invisible hand” that marginalizes skeptics operates to the substantial detriment of climate science, not to mention the policies that are informed by climate science. The importance of skepticism is aptly summarized by Kelly (2008): “all else being equal, the more cognitive resources one devotes to the task of searching for alternative explanations, the more likely one is to hit upon such an explanation, if in fact there is an alternative to be found.”

The intense disagreement between scientists who support the IPCC consensus and skeptics becomes increasingly polarized as a result of the “invisible hand” described by Kelly (2005, 2008).  Disagreement itself can be evidence about the quality and sufficiency of the evidence. Disagreement can arise from disputed interpretations, as proponents are biased by excessive reliance on a particular piece of evidence. Disagreement can result from “conflicting certainties,” whereby competing hypotheses are each buttressed by different lines of evidence, each of which is regarded as “certain” by its proponents. Conflicting certainties arise from differences in chosen assumptions, neglect of key uncertainties, and the natural tendency to be overconfident about how well we know things (e.g. Morgan 1990).

What is desired is a justified interpretation of the available evidence, which is completely traceable throughout the process in terms of the quality of the data, modeling, and reasoning process. A thorough assessment of uncertainty by proponents of different sides in a scientific debate will reduce the level of disagreement.

5. Reasoning about uncertainty

The objective of the IPCC reports is to assess existing knowledge about climate change.  The IPCC assessment process combines a compilation of evidence with subjective Bayesian reasoning.  This process is described by Oreskes (2007)  as presenting a “consilience of evidence” argument, which consists of independent lines of evidence that are explained by the same theoretical account. Oreskes draws an analogy for this approach with what happens in a legal case.

The consilience of evidence argument is not convincing unless it includes parallel evidence-based analyses for competing hypotheses, for the simple reason that any system that is more inclined to admit one type of evidence or argument rather than another tends to accumulate variations in the direction towards which the system is biased. In a Bayesian analysis with multiple lines of evidence, you could conceivably come up with enough lines of evidence to produce a high confidence level for each of two opposing arguments, which is referred to as the ambiguity of competing certainties. If you acknowledge the substantial level of ignorance surrounding this issue, the competing certainties disappear (this is a failing of Bayesian analysis: it does not deal well with ignorance) and you are left with a lower confidence level.
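
A minimal sketch of the competing-certainties point, using hypothetical likelihood ratios and a deliberately one-sided selection of evidence (this is an illustration of the reasoning, not an analysis of the actual climate evidence):

```python
# Hypothetical sketch: two analysts each update a prior on their own hypothesis
# using only the lines of evidence that favor it. The likelihood ratios are
# invented numbers for illustration.

def update(prior, likelihood_ratios):
    """Sequential Bayesian updating of P(H) against its complement.
    Each ratio is P(evidence | H) / P(evidence | not H)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# A proponent of H1 admits only evidence favoring H1 ...
p_h1 = update(0.5, [3.0, 2.5, 4.0])    # ~0.97
# ... while a proponent of the competing H2 does the same with other evidence.
p_h2 = update(0.5, [2.0, 3.5, 3.0])    # ~0.95
print(p_h1, p_h2)                      # two "certainties" that cannot both hold

# Acknowledging ignorance: reserving probability mass for "none of the
# hypotheses considered" caps the confidence either side can claim.
ignorance = 0.3
print((1 - ignorance) * p_h1, (1 - ignorance) * p_h2)   # both drop markedly
```

The numbers are arbitrary; the point is that one-sided accumulation of evidence can manufacture high confidence for either side, while an explicit allowance for ignorance deflates both.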

To be convincing, the arguments for climate change need to change from the burden of proof model to a model whereby a thesis is supported explicitly by addressing the issue of what the case would have to look like for the thesis to be false, in the mode of argument justification (e.g. Betz 2010). Argument justification invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher its degree of justification. Argument justification provides an explicit and important role for skeptics, and a framework whereby scientists with a plurality of viewpoints participate in an assessment.  This strategy has the advantage of moving science forward in areas where there are competing schools of thought: disagreement then becomes the basis for focusing research effort in a particular area.

The rationale for parallel evidence-based analyses of competing hypotheses and argument justification is eloquently described by Kelly’s (2008) key epistemological fact:  “For a given body of evidence and a given hypothesis that purports to explain that evidence, how confident one should be that the hypothesis is true on the basis of the evidence depends on the space of alternative hypotheses of which one is aware. In general, how strongly a given body of evidence confirms a hypothesis is not solely a matter of the intrinsic character of the evidence and the hypothesis. Rather, it also depends on the presence or absence of plausible competitors in the field. It is because of this that the mere articulation of a plausible alternative hypothesis can dramatically reduce how likely the original hypothesis is on one’s present evidence.”

Reasoning about uncertainty in the context of evidence-based analyses and the formulation of confidence levels is not at all straightforward for the climate problem.  Because of the complexity of the climate problem, van der Sluijs et al. (2005) argue that uncertainty methods such as subjective probability or Bayesian updating alone are not suitable for this class of problems, because the unquantifiable uncertainties and ignorance dominate the quantifiable uncertainties. Any quantifiable Bayesian uncertainty analysis “can thus provide only a partial insight into what is a complex mass of uncertainties” (van der Sluijs et al. 2005).

Given the dominance of unquantifiable uncertainties in the climate problem, expert judgments about confidence levels are made in the absence of a comprehensive quantitative uncertainty analysis.  Because of this complexity, and in the absence of a formal logical hypothesis hierarchy in the IPCC assessment, individual experts use different mental models and heuristics for evaluating the interconnected evidence. Biases can abound when reasoning and making judgments about such a complex problem.  Bias can arise from excessive reliance on a particular piece of evidence, from cognitive biases in heuristics, and from logical fallacies and errors including circular reasoning.  Further, the consensus building process itself can be a source of bias (Kelly 2005).

Identifying the most important uncertainties and introducing a more objective assessment of confidence levels requires introducing a more disciplined logic into the climate change assessment process. A useful approach would be the development of hierarchical logical hypothesis models that provide a structure for assembling the evidence and arguments in support of the main hypotheses or propositions. A logical hypothesis hierarchy (or tree) links the root hypothesis to lower level evidence and hypotheses. While developing a logical hypothesis tree is somewhat subjective and involves expert judgments, the evidential judgments are made at a lower level in the logical hierarchy.  Essential judgments and opinions relating to the evidence and the arguments linking the evidence are thus made explicit, lending structure and transparency to the assessment. To the extent that the logical hypothesis hierarchy decomposes arguments and evidence into the most elementary propositions, the sources of disputes are easily illuminated and potentially minimized.
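
A minimal sketch of what such a hierarchy might look like as a data structure (the node labels are hypothetical placeholders, not a proposed decomposition of any particular climate hypothesis):

```python
# Hypothetical sketch of a logical hypothesis hierarchy: the root proposition is
# decomposed into sub-hypotheses, each linked to elementary items of evidence,
# so evidential judgments are made at the leaves and remain traceable.
from dataclasses import dataclass, field

@dataclass
class Node:
    statement: str
    children: list = field(default_factory=list)

root = Node("Root hypothesis under assessment", [
    Node("Sub-hypothesis A", [
        Node("Evidence A1: observational record (judged at this leaf)"),
        Node("Evidence A2: model simulations (judged at this leaf)"),
    ]),
    Node("Sub-hypothesis B", [
        Node("Evidence B1: paleoclimate proxies (judged at this leaf)"),
    ]),
])

def show(node, depth=0):
    """Print the tree so every evidential judgment and link is explicit."""
    print("  " * depth + node.statement)
    for child in node.children:
        show(child, depth + 1)

show(root)
```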

Bayesian network analysis using weighted binary tree logic is one possible choice for such an analysis.  However, a weakness of Bayesian networks is their two-valued logic and inability to deal with ignorance, whereby evidence is either for or against the hypotheses.  An influence diagram is a generalization of a Bayesian network that represents the relationships and interactions between a series of propositions or evidence (Spiegelhalter, 1986).  Cui and Blockley (1990) introduce interval probability three-valued logic into an influence diagram, with an explicit role for uncertainties (the so-called “Italian flag”) that recognizes that evidence may be incomplete or inconsistent, of uncertain quality or meaning. Combination of evidence proceeds generally as a Bayesian combination, but combinations of evidence are modified by the factors of sufficiency, dependence and necessity.  Practical applications of the propagation of evidence using interval probability theory are described by Bowden (2004) and Egan (2005).
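
The following sketch illustrates the “Italian flag” representation; the combination rule shown is a simple sufficiency-weighted average, a simplification for illustration rather than the Cui and Blockley (1990) formulation, and the intervals and weights are hypothetical:

```python
# Hypothetical sketch of the "Italian flag": belief in a proposition is an
# interval [sn, sp], where sn is evidence for (green), 1 - sp is evidence
# against (red), and sp - sn is what remains uncertain (white).

def combine(children):
    """children: list of (sn, sp, sufficiency_weight) for items of evidence.
    Returns the parent proposition's interval [sn, sp] as a weighted average
    (a simplification of interval probability combination rules)."""
    total = sum(w for _, _, w in children)
    sn = sum(w * lo for lo, _, w in children) / total
    sp = sum(w * hi for _, hi, w in children) / total
    return sn, sp

evidence = [
    (0.7, 0.9, 2.0),   # strong supporting evidence, high sufficiency
    (0.2, 0.8, 1.0),   # evidence of uncertain quality: wide white band
    (0.1, 0.3, 1.0),   # a line of evidence counting against the proposition
]

sn, sp = combine(evidence)
print(f"for = {sn:.2f}, against = {1 - sp:.2f}, uncertain = {sp - sn:.2f}")
```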

An application of influence diagrams and interval probability theory to a climate-relevant problem is described by Hall et al. (2006), regarding a study commissioned by the UK Government that sought to establish the extent to which very severe floods in the UK in October–November 2000 were attributable to climate change. Hall et al. used influence diagrams to represent the evidential reasoning and uncertainties in responding to this question. Three alternative approaches to the mathematization of uncertainty in influence diagrams were compared, including Bayesian belief networks and two interval probability methods (Interval Probability Theory and Support Logic Programming).  Hall et al. argue that “interval probabilities represent ambiguity and ignorance in a more satisfactory manner than the conventional Bayesian alternative . . . and are attractive in being able to represent in a straightforward way legitimate imprecision in our ability to estimate probabilities.”

Hall et al. concluded that influence diagrams can help to synthesize complex and contentious arguments of relevance to climate change.   Breaking down and formalizing expert reasoning can facilitate dialogue between experts, policy makers, and other decision stakeholders. The procedure used by Hall et al. supports transparency and clarifies uncertainties in disputes, in a way that expert judgment about high level root hypotheses does not.

6. Conclusions

In this paper I have argued that the problem of communicating climate uncertainty is fundamentally a problem of how we have framed the climate change problem, how we have characterized uncertainty, how we reason about uncertainty, and the consensus building process itself.  As a result of these problems, the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. Improved characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing the “noise” and animosity portrayed in the media that fuels the public distrust of climate science and acts to stymie the policy process.  Not to mention that an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how to best target resources to improve understanding.

Improved understanding and characterization of uncertainty is critical information for the development of robust policy options. When working with policy makers and communicators, it is essential not to fall into the trap of acceding to inappropriate demands for certainty.  Wynne (1992) makes an erudite statement: “the built-in ignorance of science towards its own limiting commitments and assumptions is a problem only when external commitments are built on it as if such intrinsic limitations did not exist.”

Acknowledgements. I would like to acknowledge the contributions of the Denizens of my blog Climate Etc.  (judithcurry.com) for their insightful comments and discussions on the numerous uncertainty threads.

Moderation note: this is a technical thread, and comments will be moderated for relevance.  I really appreciate your comments and review of this draft paper.
