Reasoning About Climate Uncertainty – Draft

by Judith Curry

Here is a complete (albeit rough) draft of my paper for the special issue of the journal Climatic Change (founding editor Steve Schneider) entitled “Framing and Communicating Uncertainty and Confidence Judgments by the IPCC.”


The target length is 3000 words; my paper is already substantially longer than this.  I have selected only a few topics to cover.  I have a much longer paper on uncertainty, for which I just received the reviews, that covers additional topics; more on that paper soon (you’ve already seen some excerpts in the uncertainty series).

For background on uncertainty and the IPCC, see these previous threads (Part I and Part II). For additional relevant threads, see the uncertainty monster and reasoning threads.

Here is the main text of the paper (no abstract or reference list yet).

1. Introduction

The challenge of framing and communicating uncertainty about climate change is a symptom of the challenges of understanding and reasoning about such a complex system.  Our understanding of the complex climate system is hampered by a myriad of uncertainties, indeterminacy, ignorance, and cognitive biases. Complexity of the climate system arises from the very large number of degrees of freedom, the number of subsystems and the complexity of linking them, and the nonlinear and chaotic nature of the atmosphere and ocean. A complex system exhibits behavior not obvious from the properties of its individual components, whereby larger scales of organization influence smaller ones and structure at all scales is influenced by feedback loops among the structures. The epistemology of computer simulations of complex systems is a new and active research area among scientists, philosophers, and the artificial intelligence community. How to reason about the complex climate system and its computer simulations is not simple or obvious.

How has the IPCC dealt with the challenge of uncertainty in the complex climate system? Until the time of the IPCC TAR and the Moss-Schneider (2000) Guidance paper, uncertainty was dealt with in an ad hoc manner.  The Moss-Schneider guidelines raised a number of important issues regarding the identification and communication of uncertainties. However, the actual implementation of this guidance in the TAR and AR4 adopted a subjective perspective or “judgmental estimates of confidence.” Defenders of the IPCC uncertainty characterization argue that subjective consensus expressed using simple terms is understood more easily by policy makers.

The consensus approach used by the IPCC to characterize uncertainty has received a number of criticisms. Van der Sluijs et al. (2010b) finds that the IPCC consensus strategy underexposes scientific uncertainties and dissent, making the chosen policy vulnerable to scientific error and limiting the political playing field. Van der Sluijs (2010a) argues that matters on which no consensus can be reached continue to receive too little attention by the IPCC, even though this dissension can be highly policy-relevant. Oppenheimer et al. (2007) point out the need to guard against overconfidence and argue that the IPCC consensus emphasizes expected outcomes, whereas it is equally important that policy makers understand the more extreme possibilities that consensus may exclude or downplay. Gruebler and Nakicenovic (2001) opine that “there is a danger that the IPCC consensus position might lead to a dismissal of uncertainty in favor of spuriously constructed expert opinion.”

While the policy makers’ desire for a clear message from the scientists is understandable, the consensus approach being used by the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. While the public may not understand the complexity of the science or be culturally predisposed to accept the consensus, they can certainly understand the vociferous arguments over the science portrayed by the media.  Better characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing the “noise” and animosity portrayed in the media that fuel the public distrust of climate science and act to stymie the policy process.  Moreover, an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how best to target resources to improve understanding.  Further, improved understanding and characterization of uncertainty is critical information for the development of robust policy options.

2. Indeterminacy and framing of the climate change problem

An underappreciated aspect of characterizing uncertainty is associated with the questions that do not even get asked. Wynne (1992) argues that scientific knowledge typically investigates “a restricted agenda of defined uncertainties—ones that are tractable— leaving invisible a range of other uncertainties, especially about the boundary conditions of applicability of the existing framework of knowledge to new situations.” Wynne refers to this as indeterminacy, which arises from the “unbounded complexity of causal chains and open networks.” Indeterminacies can arise from not knowing whether the type of scientific knowledge and the questions posed are appropriate and sufficient for the circumstances and the social context in which the knowledge is applied.

In the climate change problem, indeterminacy is associated with the way the climate change problem has been framed.  Frames are organizing principles that enable a particular interpretation of an issue. De Boer et al. (2010) state: “Frames act as organizing principles that shape in a “hidden” and taken-for-granted way how people conceptualize an issue.”  Risbey et al. (2005)??? argue that decisions on problem framing influence the choice of models and what knowledge is considered relevant to include in the analysis. De Boer et al. further state that frames can express how a problem is stated, who is expected to make a statement about it, what questions are relevant, and what range of answers might be appropriate.

The decision making framework provided by the UNFCCC Treaty supplies the rationale for framing the IPCC assessment of climate change and its uncertainties, in terms of identifying dangerous climate change and providing input for decision making regarding CO2 stabilization targets. In the context of this framing, certain key scientific questions receive little attention.  In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument.  Further, the impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of their impact on the confidence level expressed in the attribution statement.  In the WG II Report, the focus is on attributing possible dangerous impacts to AGW, with little focus in the summary statements on how warming might actually be beneficial to certain regions or in certain sectors.

Further, the decision analytic framework associated with setting a CO2 stabilization target focuses research and analysis on using expert judgment to identify a most likely value of sensitivity/warming and on narrowing the range of expected values, rather than on fully exploring the uncertainty and the possibility for black swans (Taleb 2007) and dragon kings (Sornette 2009). The concept of imaginable surprise was discussed in the Moss-Schneider uncertainty guidance documentation, but consideration of such possibilities seems largely to have been ignored by the AR4 report. The AR4 focused on what was “known” to a significant confidence level. The most visible failing of this strategy was neglect of the possible impact of rapid ice sheet melting on sea level rise in the Summary for Policy Makers (e.g. Oppenheimer et al. 2007; Betz 2009). An important issue is to identify the potential black swans associated with natural climate variation under no human influence, on time scales of one to two centuries. Without even asking this question, judgments regarding the risk of anthropogenic climate change can be misleading to decision makers.

The presence of sharp conflicts with regards to both the science and policy reflects an overly narrow framing of the climate change problem.  Until the problem is reframed or multiple frames are considered by the IPCC, the scientific and policy debate will continue to ignore crucial elements of the problem, with confidence levels that are too high.

3. Uncertainty, ignorance and confidence

The Uncertainty Guidance Paper by Moss and Schneider (2000) recommended a common vocabulary to express quantitative levels of confidence based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts. This assessment strategy does not include any systematic analysis of the types and levels of uncertainty and the quality of the evidence, and more importantly dismisses indeterminacy and ignorance as important factors in assessing these confidence levels. In the context of the narrow framing of the problem, this uncertainty assessment strategy promotes the consensus into becoming a self-fulfilling prophecy.

The uncertainty guidance provided for the IPCC AR4 distinguished between levels of confidence in scientific understanding and the likelihoods of specific results. In practice, primary conclusions in the AR4 included a mixture of likelihood and confidence statements that are ambiguous. Curry and Webster (2010) have raised specific issues with regards to the statement “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” Risbey and Kandlikar (2007) describe ambiguities in actually applying likelihood and confidence, including situations where likelihood and confidence cannot be fully separated, where likelihood levels contain implicit confidence levels, and where interpreting uncertainty in the presence of two levels of imprecision is rather difficult.

Numerous methods of categorizing risk and uncertainty have been described in the context of different disciplines and various applications; for a recent review, see Spiegelhalter and Riesch (2011).  Of particular relevance for climate change are schemes for analyzing uncertainty when conducting risk analyses. My primary concerns about the IPCC’s characterization of uncertainty are twofold:

  • lack of discrimination between statistical uncertainty and scenario uncertainty
  • failure to meaningfully address the issue of ignorance

Following Walker et al. (2003), statistical uncertainty is distinguished from scenario uncertainty, whereby scenario uncertainty implies that it is not possible to formulate the probability of occurrence of particular outcomes. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop in the future. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood.  Wynne (1992) defines risk as knowing the odds (analogous to Walker et al.’s statistical uncertainty), and uncertainty as not knowing the odds but knowing the main parameters (analogous to Walker et al.’s scenario uncertainty).
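
To make the distinction concrete, the two types of uncertainty can be represented as different objects in an analysis. The following sketch (in Python, with invented numbers purely for illustration) treats statistical uncertainty as a known distribution from which probabilities can legitimately be computed, and scenario uncertainty as a discrete set of possibilities to which no likelihoods are attached.

    from statistics import NormalDist

    # Statistical uncertainty: the odds are known, so probabilities of
    # particular outcomes are well defined (hypothetical error model).
    measurement_error = NormalDist(mu=0.0, sigma=0.1)   # deg C, invented
    p_exceed = 1.0 - measurement_error.cdf(0.2)
    print(f"P(error > 0.2 C) = {p_exceed:.3f}")         # a legitimate number

    # Scenario uncertainty: plausible but unverifiable possibilities,
    # deliberately carried with no a priori allocation of likelihood.
    emission_scenarios = ["rapid growth", "stabilization", "rapid decline"]

    def probability_of(scenario: str) -> float:
        # Under scenario uncertainty this question has no defensible answer.
        raise ValueError(f"no a priori likelihood for scenario: {scenario!r}")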

Stainforth et al. (2007) argue that model inadequacy and an insufficient number of simulations in the ensemble preclude producing meaningful probability distributions from the frequency of model outcomes of future climate. Stainforth et al. state: “[G]iven nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . . Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify.” Given climate model inadequacies and uncertainties, Betz (2009) argues for the logical necessity of considering climate model simulations as modal statements of possibilities, which is consistent with scenario uncertainty. Stainforth et al. make an equivalent statement: “Each model run is of value as it presents a ‘what if’ scenario from which we may learn about the model or the Earth system.”  Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a PDF from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals. In the presence of scenario uncertainty, which characterizes climate model simulations, attempts to produce a PDF for climate sensitivity (e.g. Annan and Hargreaves 2010) are arguably misguided and misleading.
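
Stainforth et al.’s point can be illustrated with a toy Monte Carlo experiment (wholly synthetic numbers, not output from any climate model): if every ensemble member shares a common structural error, the PDF formed from the frequency of model outcomes tightens as runs are added while remaining centred on the wrong value, implying far more confidence than the underlying assumptions justify.

    import random

    random.seed(0)
    TRUE_VALUE = 3.0     # the hypothetical quantity being estimated (invented)
    SHARED_BIAS = -0.8   # structural error common to every member (invented)

    for n in (10, 100, 10_000):
        # Each run samples only the uncertainties the ensemble explores;
        # the shared structural error never enters the spread.
        runs = [TRUE_VALUE + SHARED_BIAS + random.gauss(0.0, 0.5)
                for _ in range(n)]
        mean = sum(runs) / n
        sd = (sum((r - mean) ** 2 for r in runs) / n) ** 0.5
        half_width = 1.96 * sd / n ** 0.5   # naive 95% interval on the mean
        print(f"n={n:6d}: estimate {mean:.2f} +/- {half_width:.2f}")

    # The interval collapses around 2.2, confidently excluding the true
    # value 3.0: more runs sharpen the PDF without making it more correct.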

Ignorance is that which is not known; Wynne (1992) finds ignorance to be endemic because scientific knowledge must set the bounds of uncertainty in order to function.  Walker et al. (2003) categorize the following different levels of ignorance. Total ignorance implies a deep level of uncertainty, to the extent that we do not even know that we do not know. Recognized ignorance refers to fundamental uncertainty in the mechanisms being studied and a weak scientific basis for developing scenarios. Reducible ignorance may be resolved by conducting further research, whereas irreducible ignorance implies that research cannot improve knowledge (e.g. what happened prior to the big bang). Bammer and Smithson (2008) further distinguish conscious ignorance, where we know we don’t know what we don’t know, from unacknowledged or meta-ignorance, where we don’t even consider the possibility of error.

While the Kandlikar et al. (2005) uncertainty schema explicitly includes effective ignorance in its uncertainty categorization, the AR4 uncertainty guidance (which is based upon Kandlikar et al.) neglects to include ignorance in the characterization of uncertainty.  Hence IPCC confidence levels determined based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts do not explicitly account for indeterminacy and ignorance, although recognized areas of ignorance are mentioned in some parts of the report (e.g. the possibility of indirect solar effects in sect xxxx of the AR4 WG1 Report).  Overconfidence is an inevitable result of neglecting indeterminacy and ignorance.

A comprehensive approach to uncertainty management and elucidation of the elements of uncertainty is described by the NUSAP scheme, which includes methods to determine the pedigree and quality of the relevant data and methods used (e.g. van der Sluijs et al. 2005a,b).  The complexity of the NUSAP scheme arguably precludes its widespread adoption by the IPCC.  The challenge is to characterize uncertainty in a complete way while retaining sufficient simplicity and flexibility for its widespread adoption. In the context of risk analysis, Spiegelhalter and Riesch (2011) describe a scheme for characterizing uncertainty that covers the range from complete numerical formalization of probabilities to indeterminacy and ignorance, and includes the possibility of unspecified but surprising events.  Quality of evidence is an important element of the NUSAP scheme and of the scheme described by Spiegelhalter and Riesch (2011).  The GRADE scale of Guyatt et al. (2008) provides a simple yet useful method for judging quality of evidence, with a more complex scheme for judging quality utilized by NUSAP.

Judgmental estimates of confidence need to consider not only the amount of evidence for and against and the degree of consensus, but also the adequacy of the knowledge base (which includes the degree of uncertainty and ignorance) and the quality of the information that is available.  A practical way of incorporating these elements into an assessment of confidence is provided by Egan (2005).  The crucial difference between this approach and the consensus-based approach is that the dimension associated with the degree of consensus among experts is replaced by specific judgments about the adequacy of the knowledge base and the quality of the information available.
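
To illustrate the shape of such a scheme (a hypothetical tabulation for illustration only, not Egan’s actual method), confidence can be read off from explicit judgments of knowledge base adequacy and information quality rather than from the degree of expert agreement:

    # Hypothetical lookup: (knowledge base adequacy, information quality)
    # -> confidence level.  All entries are invented for illustration.
    CONFIDENCE = {
        ("high",   "high"):   "high",      ("high",   "medium"): "medium",
        ("high",   "low"):    "low",       ("medium", "high"):   "medium",
        ("medium", "medium"): "low",       ("medium", "low"):    "very low",
        ("low",    "high"):   "low",       ("low",    "medium"): "very low",
        ("low",    "low"):    "assessment premature",
    }

    def judge_confidence(adequacy: str, quality: str) -> str:
        # Both judgments are made explicitly, so the basis for the
        # resulting confidence level is traceable.
        return CONFIDENCE[(adequacy, quality)]

    print(judge_confidence("medium", "high"))   # -> "medium"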

4. Consensus and disagreement

The uncertainty associated with climate science and the range of decision making frameworks and policy options provides much fodder for disagreement.  Here I argue that the IPCC’s consensus approach enforces overconfidence, marginalization of skeptical arguments, and belief polarization. The role of cognitive biases (e.g. Tversky and Kahneman 1974) has received some attention in the context of the climate change debate, as summarized by Morgan et al. (2009, chapter xxx). However, the broader issues of the epistemology and psychology of consensus and disagreement have received little attention in the context of the climate change problem.

Kelly (2005, 2008) provides some general insights into the sources of belief polarization that are relevant to the climate change problem.  Kelly (2008) argues that “a belief held at earlier times can skew the total evidence that is available at later times, via characteristic biasing mechanisms, in a direction that is favorable to itself.” Kelly (2008) also finds that “All else being equal, individuals tend to be significantly better at detecting fallacies when the fallacy occurs in an argument for a conclusion which they disbelieve, than when the same fallacy occurs in an argument for a conclusion which they believe.”  Kelly (2005) provides insights into the consensus building process: “As more and more peers weigh in on a given issue, the proportion of the total evidence which consists of higher order psychological evidence [of what other people believe] increases, and the proportion of the total evidence which consists of first order evidence decreases . . .  At some point, when the number of peers grows large enough, the higher order psychological evidence will swamp the first order evidence into virtual insignificance.”  Kelly (2005) concludes: “Over time, this invisible hand process tends to bestow a certain competitive advantage to our prior beliefs with respect to confirmation and disconfirmation. . . In deciding what level of confidence is appropriate, we should take into account the tendency of beliefs to serve as agents in their own confirmation.”

So what are the implications of Kelly’s arguments for consensus and disagreement associated with climate change and the IPCC? Cognitive biases in the context of an institutionalized consensus building process have arguably resulted in the consensus becoming increasingly confirmed in a self-reinforcing way. The consensus process of the IPCC has marginalized dissenting skeptical voices, who are commonly dismissed as “deniers” (e.g. Hasselmann 2010). This “invisible hand” that marginalizes skeptics is operating to the substantial detriment of climate science, not to mention the policies that are informed by climate science. The importance of skepticism is aptly summarized by Kelly (2008): “all else being equal, the more cognitive resources one devotes to the task of searching for alternative explanations, the more likely one is to hit upon such an explanation, if in fact there is an alternative to be found.”

The intense disagreement between scientists that support the IPCC consensus and skeptics becomes increasingly polarized as a result of the “invisible hand” described by Kelly (2005, 2008).  Disagreement itself can be evidence about the quality and sufficiency of the evidence. Disagreement can arise from disputed interpretations as proponents are biased by excessive reliance on a particular piece of evidence. Disagreement can result from “conflicting certainties,” whereby competing hypotheses are each buttressed by different lines of evidence, each of which is regarded as “certain” by its proponents. Conflicting certainties arise from differences in chosen assumptions, neglect of key uncertainties, and the natural tendency to be overconfident about how well we know things (e.g. Morgan 1990).

What is desired is a justified interpretation of the available evidence, which is completely traceable throughout the process in terms of the quality of the data, modeling, and reasoning process. A thorough assessment of uncertainty by proponents of different sides in a scientific debate will reduce the level of disagreement.

5. Reasoning about uncertainty

The objective of the IPCC reports is to assess existing knowledge about climate change.  The IPCC assessment process combines a compilation of evidence with subjective Bayesian reasoning.  This process is described by Oreskes (2007)  as presenting a “consilience of evidence” argument, which consists of independent lines of evidence that are explained by the same theoretical account. Oreskes draws an analogy for this approach with what happens in a legal case.

The consilience of evidence argument is not convincing unless it includes parallel evidence-based analyses for competing hypotheses, for the simple reason that any system that is more inclined to admit one type of evidence or argument rather than another tends to accumulate variations in the direction towards which the system is biased. In a Bayesian analysis with multiple lines of evidence, you could conceivably assemble enough lines of evidence to produce a high confidence level for each of two opposing arguments, which is referred to as the ambiguity of competing certainties. If you acknowledge the substantial level of ignorance surrounding this issue, the competing certainties disappear (this is a failing of Bayesian analysis: it does not deal well with ignorance) and you are left with a lower confidence level.
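
The mechanics of competing certainties are easy to demonstrate with a toy Bayesian calculation (invented likelihood ratios, purely for illustration): if each camp admits only lines of evidence that favour its own hypothesis, sequential updating drives both posteriors toward certainty, and the conflict stays hidden precisely because ignorance is never represented.

    def bayes_update(prior: float, likelihood_ratio: float) -> float:
        # Posterior odds = prior odds * likelihood ratio (Bayes' rule).
        odds = prior / (1.0 - prior) * likelihood_ratio
        return odds / (1.0 + odds)

    p_h, p_not_h = 0.5, 0.5          # both camps start agnostic
    for _ in range(6):               # six admitted lines of evidence each
        p_h = bayes_update(p_h, 3.0)          # each line favours H at 3:1
        p_not_h = bayes_update(p_not_h, 3.0)  # each favours not-H at 3:1

    print(f"one camp:   P(H)     = {p_h:.3f}")      # ~0.999
    print(f"other camp: P(not H) = {p_not_h:.3f}")  # ~0.999
    # Coherent pooling of all the evidence would force P(H) + P(not H) = 1;
    # acknowledging the ignorance dissolves both "certainties".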

To be convincing, the arguments for climate change need to change from the burden of proof model to a model whereby a thesis is supported explicitly by addressing the issue of what the case would have to look like for the thesis to be false, in the mode of argument justification (e.g. Betz 2010). Argument justification invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher its degree of justification. Argument justification provides an explicit and important role for skeptics, and a framework whereby scientists with a plurality of viewpoints participate in an assessment.  This strategy has the advantage of moving science forward in areas where there are competing schools of thought.  Disagreement then becomes the basis for focusing research in a certain area, and so moves the science forward.

The rationale for parallel evidence-based analyses of competing hypotheses and argument justification is eloquently described by Kelly’s (2008) key epistemological fact:  “For a given body of evidence and a given hypothesis that purports to explain that evidence, how confident one should be that the hypothesis is true on the basis of the evidence depends on the space of alternative hypotheses of which one is aware. In general, how strongly a given body of evidence confirms a hypothesis is not solely a matter of the intrinsic character of the evidence and the hypothesis. Rather, it also depends on the presence or absence of plausible competitors in the field. It is because of this that the mere articulation of a plausible alternative hypothesis can dramatically reduce how likely the original hypothesis is on one’s present evidence.”

Reasoning about uncertainty in the context of evidence based analyses and the formulation of confidence levels is not at all straightforward for the climate problem.  Because of the complexity of the climate problem, Van der Sluijs et al. (2005) argue that uncertainty methods such as subjective probability or Bayesian updating alone are not suitable for this class of problems, because the unquantifiable uncertainties and ignorance dominate the quantifiable uncertainties. Any quantifiable Bayesian uncertainty analysis “can thus provide only a partial insight into what is a complex mass of uncertainties” (van der Sluijs et al. 2005).

Given the dominance of unquantifiable uncertainties in the climate problem, expert judgments about confidence levels are made in the absence of a comprehensive quantitative uncertainty analysis.  Because of this complexity, and in the absence of a formal logical hypothesis hierarchy in the IPCC assessment, individual experts use different mental models and heuristics for evaluating the interconnected evidence. Biases can abound when reasoning and making judgments about such a complex problem.  Bias can occur through excessive reliance on a particular piece of evidence, the presence of cognitive biases in heuristics, and logical fallacies and errors including circular reasoning.  Further, the consensus building process itself can be a source of bias (Kelly 2005).

Identifying the most important uncertainties and introducing a more objective assessment of confidence levels requires introducing a more disciplined logic into the climate change assessment process. A useful approach would be the development of hierarchical logical hypothesis models that provide a structure for assembling the evidence and arguments in support of the main hypotheses or propositions. A logical hypothesis hierarchy (or tree) links the root hypothesis to lower level evidence and hypotheses. While developing a logical hypothesis tree is somewhat subjective and involves expert judgments, the evidential judgments are made at a lower level in the logical hierarchy.  Essential judgments and opinions relating to the evidence and the arguments linking the evidence are thus made explicit, lending structure and transparency to the assessment. To the extent that the logical hypothesis hierarchy decomposes arguments and evidence into the most elementary propositions, the sources of disputes are easily illuminated and potentially minimized.
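
The structure can be made concrete with a minimal sketch (hypothetical hypotheses, evidence lines and support values; the simple averaging rule is a placeholder for whatever evidential calculus is adopted). The point is that judgments are attached to the lowest-level evidence and assembled upward, so every evidential judgment is explicit and traceable.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        claim: str
        support: float | None = None        # explicit judgment, leaves only
        children: list[Node] = field(default_factory=list)

        def assess(self) -> float:
            if not self.children:           # evidential judgment at the
                return self.support         # lowest level of the hierarchy
            # Placeholder aggregation: a real scheme would weight children
            # by sufficiency, dependence and necessity.
            return sum(c.assess() for c in self.children) / len(self.children)

    root = Node("root hypothesis", children=[
        Node("sub-hypothesis 1", children=[
            Node("evidence line 1a", support=0.8),
            Node("evidence line 1b", support=0.6)]),
        Node("sub-hypothesis 2", children=[
            Node("evidence line 2a", support=0.4)]),
    ])
    print(f"support assembled for root: {root.assess():.2f}")   # 0.55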

Bayesian Network Analysis using weighted binary tree logic is one possible choice for such an analysis.  However, a weakness of Bayesian Networks is their two-valued logic and inability to deal with ignorance, whereby evidence is either for or against the hypotheses.  An influence diagram is a generalization of a Bayesian Network that represents the relationships and interactions between a series of propositions or evidence (Spiegelhalter, 1986).  Cui and Blockley (1990) introduce an interval probability, three-valued logic into an influence diagram with an explicit role for uncertainties (the so-called “Italian flag”) that recognizes that evidence may be incomplete or inconsistent, of uncertain quality or meaning. Combination of evidence proceeds generally as a Bayesian combination, but combinations of evidence are modified by the factors of sufficiency, dependence and necessity.  Practical applications to the propagation of evidence using interval probability theory are described by Bowden (2004) and Egan (2005).
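
A minimal sketch of the “Italian flag” bookkeeping follows (this shows only the three-valued representation, not Cui and Blockley’s full combination calculus): belief in a proposition is carried as an interval, with the green band for evidence in support, the red band for evidence against, and the white band holding the remaining uncommitted belief, i.e. the ignorance that two-valued logic would force into one of the other bands.

    from dataclasses import dataclass

    @dataclass
    class ItalianFlag:
        """Belief in a proposition as the interval [support, 1 - refutation]."""
        green: float   # proportion of belief supported by the evidence
        red: float     # proportion of belief refuted by the evidence

        @property
        def white(self) -> float:
            # Uncommitted belief: incomplete, inconsistent or poor-quality
            # evidence stays visible instead of being forced for or against.
            return 1.0 - self.green - self.red

    flag = ItalianFlag(green=0.5, red=0.2)   # invented evidential judgments
    print(f"for={flag.green:.2f}  unknown={flag.white:.2f}  "
          f"against={flag.red:.2f}")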

An application of influence diagrams and interval probability theory to a climate relevant problem is described by Hall et al. (2006), regarding a study commissioned by the UK Government that sought to establish the extent to which very severe floods in the UK in October–November 2000 were attributable to climate change. Hall et al. used influence diagrams to represent the evidential reasoning and uncertainties in responding to this question. Three alternative approaches to the mathematization of uncertainty in influence diagrams were compared, including Bayesian belief networks and two interval probability methods (Interval Probability Theory and Support Logic Programming).  Hall et al. argue that “interval probabilities represent ambiguity and ignorance in a more satisfactory manner than the conventional Bayesian alternative . . . and are attractive in being able to represent in a straightforward way legitimate imprecision in our ability to estimate probabilities.”

Hall et al. concluded that influence diagrams can help to synthesize complex and contentious arguments of relevance to climate change.   Breaking down and formalizing expert reasoning can facilitate dialogue between experts, policy makers, and other decision stakeholders. The procedure used by Hall et al. supports transparency and clarifies uncertainties in disputes, in a way that expert judgment about high level root hypotheses does not.

6. Conclusions

In this paper I have argued that the problem of communicating climate uncertainty is fundamentally a problem of how we have framed the climate change problem, how we have characterized uncertainty, how we reason about uncertainty, and the consensus building process itself.  As a result of these problems, the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. Improved characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing the “noise” and animosity portrayed in the media that fuel the public distrust of climate science and act to stymie the policy process.  Moreover, an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how best to target resources to improve understanding.

Improved understanding and characterization of uncertainty is critical information for the development of robust policy options. When working with policy makers and communicators, it is essential not to fall into the trap of acceding to inappropriate demands for certainty.  Wynne (1992) makes an erudite statement: “the built-in ignorance of science towards its own limiting commitments and assumptions is a problem only when external commitments are built on it as if such intrinsic limitations did not exist.”

Acknowledgements. I would like to acknowledge the contributions of the Denizens of my blog Climate Etc.  (judithcurry.com) for their insightful comments and discussions on the numerous uncertainty threads.

Moderation note: this is a technical thread, and comments will be moderated for relevance.  I really appreciate your comments and review of this draft paper.

217 responses to “Reasoning About Climate Uncertainty – Draft”

  1. Dr. Curry

    For, “The epistemology of computer simulations of complex systems is a new and active area research among scientists, philosophers, and the artificial intelligence community.”

    Suggest, The epistemology of computer simulations of complex systems is a new and active research area among scientists, philosophers, and the artificial intelligence community.

  2. Judith

    I believe that the following paragraph is very important, and may deserve highlighting with a more prominent location in your overall excellent article:

    ‘The rationale for parallel evidence-based analyses of competing hypotheses and argument justification is eloquently described by Kelly’s (2008) key epistemological fact: “For a given body of evidence and a given hypothesis that purports to explain that evidence, how confident one should be that the hypothesis is true on the basis of the evidence depends on the space of alternative hypotheses of which one is aware. In general, how strongly a given body of evidence confirms a hypothesis is not solely a matter of the intrinsic character of the evidence and the hypothesis. Rather, it also depends on the presence or absence of plausible competitors in the field. It is because of this that the mere articulation of a plausible alternative hypothesis can dramatically reduce how likely the original hypothesis is on one’s present evidence.”’

    Rather than merely introduce the analyses of competing hypotheses with the slightly-earlier referenced consilience argument, why not begin with this subject as a central unifying theme, as a measure to set each part of your analysis against?

    This may allow you to organize your thoughts in a slightly more expedient format, which may save some space overall.

    Though one would be mad to suggest cutting anything out of what you’ve said.

  3. Thanks for the preview, Judith.

    • Indeed. Are you planning on providing status updates as it makes it way through the review process?

  4. Judy – For now, I’ll withhold comments on our areas of disagreement, where I see much less uncertainty than you regarding major conclusions about long term global temperature change. Instead, let me comment simply from a more editorial perspective on the draft.

    It’s thoughtful, well-researched, and coherent. My concern relates to my perception that it is almost uniformly written on what semanticists refer to as a high level of abstraction. For example, there is an abundance of words such as indeterminacy, confidence, ambiguity, consensus, belief polarization, organizing principle, epistemology, and other conceptual terms, but very little in the way of words such as lapse rate, radiative transfer, temperature anomaly, relative humidity, albedo, convection, tropopause, ocean heat capacity, and other items of a specific nature.

    If you were writing an ordinary research paper, the latter terms would dominate. I realize that is not your purpose, and that you are describing an overarching perspective that demands conceptualization. However, in my experience, the most convincing perspectives are those that travel up and down the ladder of abstraction to relate general concepts to specific salient examples. It is the examples that I perceive to be too few. The result is that many readers are likely to be “lost in the cloud of abstractions” – if they already share some of your perspectives, they will nod their heads in agreement, if they disagree, they will shake their heads, but if they are still waiting to be enlightened, they may have trouble incorporating many of your statements into their own thought processes. I assume it is this last group whom you most want to reach.

    Since your draft is already too long, you will need to cut ruthlessly. If you choose to travel between the abstract and specific at least a few times, your editing task becomes even harder. Perhaps you are tackling too much for a 3000 word paper, or perhaps you can resolve the issue by mentioning many of the concepts in only a few general sentences, while focusing on a small number as the core of your argument.

    I expect that others may respond differently, but my suggestions are offered simply as those of one person trying to offer my individual perception in hopes it might be helpful.

    • Thanks Fred, point acknowledged. Actually, my core audience for this paper is the IPCC, and other people writing articles for this special issue. I am trying to influence the official uncertainty guidance protocol for the IPCC.

      • Won’t those readers be responsive to specific examples used to illustrate your points – particularly examples illustrating how you see the process as having fallen short?

      • examples are given in my other uncertainty paper, not sure how to manage this given the word count.

      • That sounds like a difficult issue…because if it’s not the majority of the IPCC scientists themselves, then certainly a majority of the vocal folks within the IPCC seem to prefer the ‘tidier’ language that puts ‘conclusion-drawing’ in play on a range of fronts within the field of climate change science (if it doesn’t actually go ahead and establish what all the conclusions are that should be drawn).

      • Sounds to me like an opportunity for a second post on the same paper, but this one a technical thread specifically asking for hard examples traveling up and down Fred’s ladder.

        Even before the Internet and the convenience of URL’s, the practice of appendices and footnotes referencing supporting material was well-worn and accepted for readers who wished to do the work themselves of research to better understand an author’s conceptual framework.

        You’ve certainly done enough for us.

        Put us to work for you.

      • good idea. and i’m definitely appreciating the work you are doing on this thread

      • Judith
        I agree with Fred but only up to a point. A few examples would add some context and would certainly make it easier to read. That said, any examples should not be specified in too much detail (helps your word count) otherwise it might prove to be a distraction to readers as their minds wander away from your analysis to areas of technical and emotional disagreement. For example, somebody might regard albedo as a certain matter and therefore a done deal in scientific terms. For them, they might spend the rest of the article brooding on this minor mental disagreement with the author and therefore miss the excellent points you make. That would be a shame as I think you are trying to provide a framework rather than the answer. As ever, it’s a matter of balance. Hope this is constructive. I enjoyed it, thanks.

        Rob

      • andrew adams

        But surely Judith’s whole argument rests on her central premise that the “consensus” view significantly understates the level of uncertainty in our knowledge of how the climate works.
        This is a controversial claim – surely she should demonstrate it with real examples. I don’t see why readers should just accept this assertion on trust.

      • That’s a very fair point, Andrew. I suppose it depends on the purpose of the paper. I was merely pointing out that by diving straight into the areas of disputed scientific uncertainty, the reader might miss the fact that the scientific/policy interface could be improved by an alternative framework for providing information. In my view, there’s just not enough space within a 3000 word limit to deconstruct all the areas of scientific uncertainty and then offer an alternative process for policy makers. Judith at least makes reference to the disadvantages of the existing system and the fact that decision making is currently stymied. You could extend the paper into the scientific areas, but I feel it would be self-defeating and the paper could lose its way.

      • What about this paragraph?

        The consensus approach used by the IPCC to characterize uncertainty has received a number of criticisms. Van der Sluijs et al. (2010b) finds that the IPCC consensus strategy underexposes scientific uncertainties and dissent, making the chosen policy vulnerable to scientific error and limiting the political playing field. Van der Sluijs (2010a) argues that matters on which no consensus can be reached continue to receive too little attention by the IPCC, even though this dissension can be highly policy-relevant. Oppenheimer et al. (2007) point out the need to guard against overconfidence and argue that the IPCC consensus emphasizes expected outcomes, whereas it is equally important that policy makers understand the more extreme possibilities that consensus may exclude or downplay. Gruebler and Nakicenovic (2001) opine that “there is a danger that the IPCC consensus position might lead to a dismissal of uncertainty in favor of spuriously constructed expert opinion.”

      • This work by Esper was a good example I recently came across.

        http://www.geo.uni-mainz.de/esper/pdf/Esper_2009_CC_IPCC.pdf

  5. Do you have a rebuttal all ready to go for when some folks read this through a polarized lens that seeks to define you as someone who seeks to enhance doubt or uncertainty in various areas of climate science for its own sake? Perhaps the story goes that if enough uncertainty can be wedged into the political consciousness, no possible mitigation policy of any significance can be supportable…?

    • actually my other uncertainty paper (12,000 words) addresses this issue, among others

      • Dr. Curry,

        Does the other paper address decision-making strategies under uncertainty or will that be yet another paper?

        I’m looking forward to reading it either way.

  6. FWIW, I am impressed with Bart R’s recommendation on focus to reduce the word count. I am looking forward to reading the longer paper. Especially a comparison with Annan and Hargreaves’s Bayesian approach.

    I think their approach has merit for reducing the focus to the area of greater likelihood as long as there is a long tail approach for comparison. That may not be the proper way to look at uncertainty, but a comparison is easier for the statistically challenged.

  7. Suggest you minimize the use of adjectives. Cut directly to the key thought of the sentences and not worry about explaining nuances. Should be able to hit the 3000 word limit that way.

    • Latimer Alder

      I second Mike’s remarks. Simple direct language, short sentences where possible, please.

      ‘The cat sat on the mat’

      not

      ‘Recumbent upon the woven floor covering primarily designed to protect the underlying carpeted area from damage, was a quadrupedal domestic pet – distinguishable from a canine by its whiskers and occasional propensity to make ‘miaow’ like sounds.’

      You are not the worst offender, but as I dug through the long sentences and paragraphs with lots of abstract nouns I began to wonder when the key points were coming. It may be a mark of stylishness in academia to develop your argument at length, but it does little to influence a general reader.

      Hint: stick the (4-6) key points on Post It notes or old fashioned index cards and rearrange them on the wall/desk until you have a structure that makes sense. Then write a few sentences only about each. Organise and edit, leave it for a few hours then edit again, then edit a third time. Then write a summary para.

      Maybe grandma’s eggs, but still useful guidance for anything that is written to persuade. And here, your job is not merely to get another tick in the publications category, but to change people’s minds.

      • “Simple direct language, short sentences where possible, please.”

        I also urge you to rewrite with this in mind. The meat is there, but the prose is instant eyeglaze. (Sorry) Yes, you are an academic, and yes, not everyone can write like Freeman Dyson (or even Richard Muller), but good prose sells. You are selling a product here (Judith Curry).

        Pull down your old Strunk & White, or treat yourself to the wonderful new edition illustrated by Maira Kalman.

        Failing that, you might put up the text on a wikipage, and invite reader modifications.

        Good luck, and have fun working with Muller & Rohde out in Berserkely… http://en.wikipedia.org/wiki/Berkeley_Earth_Surface_Temperature

        And stop by Scream Sorbet on Telegraph in Oakland. Have a Thai Basil, say hello to my very tall stepson Noah, tell him it’s on me…
        http://chowhound.chow.com/topics/747478

        Cheers — Pete Tillman
        Professional geologist, Climate gadfly

      • I worked in a small economics consultancy, mainly providing policy-oriented papers to the Queensland, Australia, government. We knew that the pollies and department heads would read, at best, the Executive Summary. This had to pass the “Queensland grannie” test – to be understandable by someone who doesn’t read newspapers. The body was aimed at policy officers, few of whom would be economists.; it had to be clearly argued but not too technical. The appendix, which few would read, was at a level to convince professional economists.

        The IPCC per se is a mixed bag; it may be that some/many of those Judith seeks to convince, while hopefully not “Queensland grandmothers,” require a more terse presentation in which the major points are easily grasped.

    • A great guide for writing is “The Little English Handbook”

      It teaches how to communicate information in a minimum number of words for maximum comprehension.

      The handbook makes the point that the best written English should read like the best spoken English.

      The book deals with the issue of complex writing styles in academia and elsewhere: why they exist and why they are a barrier to communication.

    • Also, open the paper by praising the reader. If the audience is the IPCC, find something you honestly feel they have done a good job with, and use this as your opening. This will make them more receptive to what you are presenting. This technique is often used by successful presenters. see: “How to Win Friends and Influence People”.

  8. One other approach I’ve found useful when invited to write a chapter for a volume devoted to a particular theme is to try to find out what the other authors will be focusing on. That sometimes makes it possible to let someone else articulate certain views you share with that other person, allowing you to focus on points where your contribution will be unique.

  9. John Carpenter

    Dr. Curry,

    The paper is very informative and your train of logic that comes to the final conclusion is well constructed and coherent. As someone who is not as familiar with the IPCC methodology of assigning confidence levels, you explain the weaknesses of the methodology very well wrt making the tent bigger for all viewpoints on climate.

    Though the numerous quotes from Kelly you cite are very good and help to fill out your argument as to why there is overconfidence with the consensus approach, upon reflection on the “core audience” you are targeting (the IPCC) I would consider paring out this type of “extra” information. The most substantive parts of your argument, that the current methodology is not capable of analyzing crucial aspects of uncertainty (indeterminacy and ignorance) and that Bayesian network analysis lacks the flexibility to deal with either, are the points that need to be hammered home.

    Using a way to quantify the uncertainties with less subjectivity, the way you are suggesting, is the best way to win over skeptical viewpoints. As you note, this will help point the direction to areas of non-consensus where research is most vital. This can not be emphasized enough.

  10. With respect: you could drop the opening sentence, “The challenge of framing and communicating uncertainty about climate change is a symptom of the challenges of understanding and reasoning about such a complex system.” entirely. Or you could write, “Explicating the uncertainties of climate change is challenging because comprehending its complexity is very difficult.”

    • Latimer Alder

      Or:

      ‘The climate is a complex system and we don’t really understand very much about it. We’ve made a bad job of explaining the uncertainties.’

  11. The uncertainties of climate science are rather well known and include sulphates and black carbon, cloud radiative forcing, top down solar UV forcing and internal climate variability resulting from dynamical complexity. The uncertainties of models involve dynamical complexity as well – GCM having the properties of sensitive dependence and structural instability.

    ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t) where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)

    This seems so evident that no detailed example is needed for an informed readership. The policy debate is however vexed. Both sides of the issue have by now a firmly entrenched detection bias – more rudely known as cognitive dissonance. They can articulate a simple narrative and so believe that they have science (or God) on their side. They have a need for certainty (the apocryphal one handed economist) that they will not easily surrender. I suggest that it is an essential aspect of the human condition – that many are constitutionally unable to bear uncertainty and it has been thus in all human history.

    Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.
    – Lao Tzu (6th Century BC Chinese Poet)

    True knowledge exists in knowing that you know nothing. And in knowing that you know nothing, that makes you the smartest of all.
    – Socrates

    The greatest nonsense in human history has arisen from the assumption that we know much rather than the assumption that we know little – and that certainly applies to scientists as much as anyone else.

    But the other side of uncertainty is risk. In climate it is a risk described by Wally Broecker long ago as like poking a stick at a wild and unpredictable beast. Not something a sensible adult would do.

    • And yes – from now on when I see the 1997/98 El Niño referred to as an extreme outlier I insist that it is rightfully a Dragon King.

      • Pardon my ignorance, but I’m not across “black swans” and “Dragon Kings.” Could you enlighten me, please?

      • Faustino;
        They’re terms relating to “outlier” observations of great significance. One black swan observation disproves the generality that “All swans are white”. Black swans were first observed in Australia, shocking the Europeans. One black swan event can destroy plans based on white swan generalities and assumptions.

        A “Dragon King” is a major event or instance outside the “normal” distribution which is of great magnitude and importance, and is crucial to understanding the actual mechanisms at work.

      • Dragon King is a new expression to me as well – but one I like quite a lot. It appeals to the purely poetical.

        Sornette (2009) used it in the sense of extreme meaningful outliers in dynamically complex systems. As the climate system approaches a bifurcation point autocorrelation increases – climatic fluctuations such as ENSO, PNA, PDO and AO slow. At a bifurcation point there is extreme variance – like the 1997/98 El Nino/2000 La Nina. ‘We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of René Thom), or a tipping point.’
        http://arxiv.org/ftp/arxiv/papers/0907/0907.4290.pdf

        ‘These standard bearers of doubt engage in a global dance.
        Occasionally, they pirouette towards a grand crescendo and,
        then fly wildly to the ends of Earth in a new choreography,
        Tremendous energies cascading though powerful systems.

        Unless I miss my mark then this is the mark of chaos and
        a danger in its own right as climate system components
        jostle unpredictably and things settle into whatever pattern
        emerges – mayhaps a cold, cold, cold day on planet Earth.’
        from Song of a Climate Zombie

        ‘The four Dragon Kings (龙王; pinyin: Lóng Wáng) are, in Chinese mythology, the divine rulers of the four seas (each sea corresponding to one of the cardinal directions). Although Dragon Kings appear in their true forms as dragons, they have the ability to shapeshift into human form. The Dragon Kings live in crystal palaces, guarded by shrimp soldiers and crab generals.

        The Dragon Kings are:
        Dragon of the East: Ao Guang (敖廣)
        Dragon of the South: Ao Qin (敖欽)
        Dragon of the West: Ao Run (敖閏)
        Dragon of the North: Ao Shun (敖順)

        Besides ruling over the aquatic life, the Dragon Kings also manipulate clouds and rain. When enraged, they can flood cities. According to The Short Stories on the Tang People (唐人傳奇 Tangren Chuanqi), the Qian Tang Dragon King did just that when he found out his niece had been abused by her husband.’

        So sayeth the sacred hydrological texts. Cool hey?

  12. What are the chances that Schneider will even allow this to be published?

    • John Carpenter

      Interesting thought that I had too.

      Dr. Curry, what is the focus of the special issue of CC?

      • My level of surprise if this article never gets printed in the publication it is being written for will be nil.
        If it is published, my bet is it will be used by the believers and promoters as an example of how to trash skeptics and heretics or simply ignored by them.

      • John Carpenter

        Hunter,

        Your pessimism is well founded. If Dr. Curry is able to build bridges, all may not be lost. I think this is an admirable approach; I do not get an air of condescension from the paper, the tone is factually based and straightforward, making it harder to dismiss. I see this as “glass half full”.

      • The quality of the paper is everything a paper on any serious topic should be: serious, thoughtful, fact based and well documented.
        That is precisely why I will be pleasantly surprised if it is allowed to go to print in anything Schneider and pals have control of.
        This is, for ‘the team’, the worst sort of paper: credible heresy.

      • hunter,

        I also wonder about the reception this will get in review, but you should know that Steve Schneider passed away last year.

      • Gene,
        You are, of course, correct. Somehow my sleep deprived brain confused the late Dr. Schneider with Gavin Schmidt.
        But that does mean the chance of Schneider allowing the draft that this thread is based on to be published is of course 0%.
        But I do believe the question is valid for the successors.
        As to my faux pas, in the inestimable words of H. Simpson:

    • Well, Dr. Schneider is pretty unlikely to veto it now:
      http://www.nytimes.com/2010/07/20/science/earth/20schneider.html

  13. “Speigelhalter” should be Spiegelhalter

  14. That was a thoroughly interesting read. My comments for what they’re worth:

    —–the Kelly points on confirmation bias, and on the inability to spot problems in theories one holds rather than ones one doesn’t, are very important. -You don’t need to suggest this is happening- but i think this point should be highlighted further.

    – this section: “Argument justification invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher its degree of justification”
    —–i think is critical. Is there a way you can expand on this section? I think this would greatly improve the quality of the science.

    — I like the discussion on the models, but i think you needed to go further and comment on the validity and the testing of the models. The consilience of argument approach to models (more runs = more accuracy) is only true if the underlying mechanisms are accurate. We don’t know this (either way), so all the models agreeing is not evidence that the models are right- only that the assumptions made within are consistent (hardly surprising given the consensus position). If we could address this it would be a HUGE step.

    Perhaps a brief foray into model validation methods and the ‘labelling’ of models as validated/non validated in the IPCC reports?

    — The decision tree i think is an excellent idea, for a number of reasons.
    1- i think it’s important to identify just what are THE important (and interconnected) aspects of the climate for this theory
    2- you can use it to highlight areas of uncertainty THROUGHOUT, rather than using an arbitrary figure at the end. Conversely you can highlight areas of HIGH certainty as well.
    3- you can highlight areas where there is good evidence, or where ‘expert’ interpretations/judgements are made (this i think too is very important- all expert judgements should be clearly identified and labelled by the expert).

    In fact i like this logic tree method so much i may try it out on one of my own projects! I’d be tempted to add it as a recommended method for the next report.

    There’s more (little bits- nothing serious), but it’s already turning into an essay!

    Again though- it was a good read and i think you’ve made some very valid points VERY tactfully.

    • You Wrote: The consilience of argument approach to models (more runs = more accuracy) is only true if the underlying mechanisms are accurate. We don’t know this (either way), so all the models agreeing is not evidence that the models are right- only that the assumptions made within are consistent (hardly surprising given the consensus position). If we could address this it would be a HUGE step.

      Perhaps a breif forray into model validation methods and the ‘labelling’ of models as validated/non validated in the IPCC reports?

      You can’t just label a model as validated. You must provide the evidence that was used to validate the model.

      You must also provide the evidence that was used to validate the theory.

      If all the models are based on theory that is flawed, all the models will be wrong. Bad forecasts for the severe winters we have had during the past decade indicate that the theories are most likely wrong.

  15. Thanks for the preview. It makes an interesting read, although abstract in its approach. Some readers might prefer more concrete illustrations. For those readers who are not familiar with the literature you cite, it might be helpful to provide a reference list on this blog.

  16. Tomas Milanovic

    Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a PDF from climate model simulations that has much meaning in terms of establishing (?) a mean value or confidence intervals.
    The emphasis (and the “(?)”) in the quoted text is mine.

    I like your draft and have nothing to add to the overall philosophy and target.

    So this is just a short technical comment.
    There is a priori no guarantee that such an invariant (with respect to initial conditions) and time-independent PDF exists at all.
    As your audience is a priori scientific, it should be noted that the question of whether “climate” (a term with a technical definition problem, due to ambiguous metrics, especially concerning spatial scales) is ergodic or not is completely open and poorly studied, if studied at all.
    If anything, studies of nonlinear spatio-temporal systems (like plumes, advection, phase change, etc.) lean more to the non-ergodic than to the ergodic side.

    Also an epistemological point.
    Reductionism doesn’t work well with nonlinear systems, and it does not work at all with highly complex nonlinear problems.
    As you rightly noticed, it is the internal coupling and the interaction loops between different time and space scales that are responsible for this.
    Yet reductionism and “all else being equal” methods are still prominent in most climate studies.
    This can’t lead to progress or to reduced uncertainties. (A toy numerical illustration of the ensemble point follows.)
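
    As a toy numerical illustration of the ensemble point (a sketch only: Lorenz-63 stands in, very loosely, for the climate system, and every parameter here is illustrative):

      import numpy as np

      def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
          # One forward-Euler step for an array of states, shape (n, 3).
          x, y, z = s[:, 0], s[:, 1], s[:, 2]
          return s + dt * np.stack([sigma*(y - x), x*(rho - z) - y, x*y - beta*z], axis=1)

      def ensemble_mean_x(n_members, seed, steps=20000):
          # Mean final x over an ensemble of slightly perturbed initial states.
          rng = np.random.default_rng(seed)
          s = np.array([1.0, 1.0, 1.0]) + 1e-3 * rng.standard_normal((n_members, 3))
          for _ in range(steps):
              s = lorenz_step(s)
          return s[:, 0].mean()

      # Two small ensembles differing only in the perturbation seed give
      # noticeably different "mean states"; a larger ensemble is more stable.
      for n in (5, 500):
          print(n, ensemble_mean_x(n, seed=1), ensemble_mean_x(n, seed=2))

    Whether an analogous invariant distribution even exists for the real climate, i.e. the ergodicity question above, is exactly the open point.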

  17. “de Boerg (2010)” should be “de Boer (2010)”.

    I’m just reading it, but could you provide a PDF of the text (or a doc)? Thanks in advance.

  18. “The challenge of framing and communicating uncertainty about climate change is a symptom of the challenges of understanding and reasoning about such a complex system.”

    Using influence diagrams and interval probability methods would push expert judgment down a level, closer to the research, simplifying the conditions under which reasoning and judgment are exercised. This is the (partial) solution to the indicated problem (a toy sketch follows below).

    I don’t think you need to cover the history of IPCC’s treatment of uncertainty – they already know it, so just the criticisms should be fine.
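
    To make the interval-probability suggestion concrete, a minimal sketch (the propositions, numbers and independence assumption are all invented for illustration):

      def and_independent(a, b):
          # Interval for "A and B", assuming A and B independent; the product
          # is monotone in each endpoint, so endpoints give the envelope.
          return (a[0] * b[0], a[1] * b[1])

      def or_independent(a, b):
          # Interval for "A or B" under the same independence assumption.
          return (1 - (1 - a[0]) * (1 - b[0]), 1 - (1 - a[1]) * (1 - b[1]))

      forcing_adequate = (0.7, 0.9)   # invented interval judgment
      model_adequate = (0.5, 0.8)     # invented interval judgment

      print(and_independent(forcing_adequate, model_adequate))  # (0.35, 0.72)

    The width of the resulting interval is itself informative: it keeps the imprecision of the component judgments visible instead of collapsing them into a single confidence figure.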

  19. Judith,

    I learned long ago that temperature data is NOT science.
    It is a man-made concept for the creature comfort of knowing when to wear shorts or dress warm. It still will not predict precipitation or weather fronts.
    So, while scientists are on this folly, I’m looking at real physical evidence from the past. There is a great deal of unpublished material, due to the way our system has generated a peer-review process that follows a set of criteria and a mathematical formula. Also a man-made concept.
    Since no one has followed this route, I have made great strides in understanding this planet and solar system without the assistance of peer-reviewed material. The problem with this route is that I have a great deal of new research material that will NOT be publishable due to the system in place.

  20. TimTheToolMan

    Minor nitpick to save you a word :-)

    “Our understanding of the complex climate system is hampered by a myriad of uncertainties, indeterminacy, ignorance, and cognitive biases. ”

    Lose the “of” after myriad.

    • Save two. Leave off the ‘a’ after ‘by’ in addition to TTTM’s ‘of’ after ‘myriad’.
      ================

    • Actually, if Judith wants to use myriad as an adjective, she can omit the “a.”

      “Our understanding of the complex climate system is hampered by myriad uncertainties, indeterminacy, ignorance, and cognitive biases.”

      Another word saved!

    • … uncertainties, indeterminacy, ignorance, and cognitive biases.
      The comma before the “and” is bad grammar.
      Prefer … uncertainties, indeterminacy, ignorance and cognitive biases.

    • Sorry to nitpick, but if you’re going to drop “of”, you should also drop the preceding “a”. Myriad is an adjective.

  21. We don’t know what we don’t know. Seven words.
    =========

    • That is because scientists DO NOT WANT TO KNOW. This protects the cocoon of ignorance around them by letting them remain the “experts” in a broken-down system.

    • Didn’t some guy named Rumfield or Rumsberg or Rumplestiltskin say something like that once? Something about the unk-unk?

      • The unknown unknowns are also part of the standard sales pitch for the Landmark Forum.

        Accessing New Possibilities:

        Standard educational methods enhance what you know and explore what you don’t know. Landmark Education gives you access to what you don’t even know that you don’t know.

        At the time Rumsfeld said that, I wondered if he were a closet Landmark graduate.

  22. Judith,

    CO2 does create a difference in the density of the air it displaces in the atmosphere. This then affects pressure, as a denser material replaces a lighter one.
    This again has not been studied.

  23. @kim | March 25, 2011 at 8:02 am

    “We don’t know what we don’t know. Seven words”

    Lol Kim, then there’s what you don’t know and can’t understand.

  24. Judith,

    NASA has generated its own area where you cannot even ask a question or make any inquiries, in case it may interfere with the competition process.
    You cannot contribute anything unless you are government funded or belong to a government-authorized institution or business.

    I find it funny that the system currently created is circular in that any outside research that doesn’t fit the criteria of having the same mindset is to be ignored.

  25. Judith

    An excellent article. Thank you.

    …Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs

    Here you used “PDF” before defining it.

  26. Judith

    In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument.  Further, impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of its impact on the confidence level expressed in the attribution statement. 

    I think this passage would be more powerful if you included references on natural variability and solar variability.

  27. Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.

    As an engineer, if I were given the global mean temperature anomaly data and asked to verify the above statement, here is what I would do.

    1) I would calculate the maximum global warming rate for a 30-year period before the mid-20th century. [0.15 deg C/decade: http://bit.ly/eUXTX2]

    2) I would calculate the maximum global cooling rate for a 30-year period before the mid-20th century. [0.07 deg C/decade: http://bit.ly/f81KJa]

    3) I would then conclude that, for any 30-year period since the mid-20th century, as long as the global warming rate stays below 0.15 deg C/decade and the global cooling rate stays below 0.07 deg C/decade, there is no evidence of man-made climate change. As those warming and cooling rates were not exceeded in any 30-year period, there is no evidence of man-made global warming. (A sketch of this check follows.)
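
    A sketch of that check (the actual series behind the links above isn’t reproduced here, so a synthetic placeholder series is used; substitute e.g. HadCRUT annual means):

      import numpy as np

      def trend_per_decade(years, temps):
          # Least-squares slope, converted to deg C per decade.
          return np.polyfit(years, temps, 1)[0] * 10.0

      def extreme_30yr_rates(years, temps, window=30):
          # Steepest 30-year warming and cooling rates in the record.
          rates = [trend_per_decade(years[i:i + window], temps[i:i + window])
                   for i in range(len(years) - window + 1)]
          return max(rates), min(rates)

      # Placeholder anomaly series, for illustration only.
      years = np.arange(1880, 2011)
      temps = 0.005 * (years - 1880) + 0.1 * np.random.default_rng(0).standard_normal(len(years))

      pre = years < 1950
      w_pre, c_pre = extreme_30yr_rates(years[pre], temps[pre])
      w_post, c_post = extreme_30yr_rates(years[~pre], temps[~pre])
      print(f"pre-1950: {w_pre:+.2f} / {c_pre:+.2f} C/decade; since: {w_post:+.2f} / {c_post:+.2f}")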

  28. John from CA

    Improved understanding and characterization of uncertainty is critical information for the development of robust policy options.

    Quantification is fundamental in scientific and policy debates and a critical framework for appropriate policies.

    When working with policy makers and communicators, it is essential not to fall into the trap of acceding to inappropriate demands for certainty.

    Wynne (1992) makes an erudite statement: “the built-in ignorance of science towards its own limiting commitments and assumptions is a problem only when external commitments are built on it as if such intrinsic limitations did not exist.”

    Policy makers and communicators will continue to demand certainty until the intrinsic limitations and uncertainty are properly presented.

    • John from CA

      Quantification is fundamental in scientific and policy debates and a critical framework for appropriate policy decisions.

    • I’d recommend keeping this:

      When working with policy makers and communicators, it is essential not to fall into the trap of acceding to inappropriate demands for certainty.

      I don’t think it can be over-emphasized that policy makers will want certainty even where it doesn’t exist, and that giving in to that desire is both wrong and ultimately counter-productive.

      • John from CA

        Policy makers typically rely on expert sources so they don’t get “hung out to dry” when the policy is proven wrong.

        Policy makers are not likely to fall into the trap of relying on “expert” scientific conclusions that clearly state they are qualified by uncertainty. If they do, they’ll ultimately end up with policies that “seemed like a good idea at the time” and no one to blame for the mistakes.

        I do agree, “both wrong and ultimately counter-productive” as well as unethical and unscientific.

      • and no one to blame for the mistakes

        Which you would think would be enough of an incentive for very smart people to be both ethical and scientific.

        I disagree that relying on expert opinion that carries the proper caveats is a “trap”. Policy makers may not like having to assume the responsibility for a decision, but it’s theirs unless the adviser falls into the trap of saying what they want to hear rather than what they need to hear.

    • John from CA

      “Not to mention that an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how to best target resources to improve understanding. Further, improved understanding and characterization of uncertainty is critical information for the development of robust policy options.”

      Is the use of the term “ignorance” appropriate?

      If Climate Science is ignorant of its own limitations, it simply isn’t worth funding.

      I suspect they have already determined the gaps but, as you pointed out months ago, Climate Science was never funded to determine how the climate system works. They were funded to determine human impact on a system they don’t fully understand. Ignorance in this situation implies something which may not be occurring?

  29. In haste: You might consider exploiting this blog:
    1. Create a “portable document format” (PDF) copy of your published paper.
    2. To the PDF, append a list of citations and links to expanded discussions for which there is no room in your published paper.
    3. Post paper, citations and expanded discussions on this blog.
    4. Reference the posted PDF with its link in your published paper.

    Great paper! :-)

  30. I understand and applaud what you are doing. There are indeed as many ways of conceptualising complex systems as there are enquiring minds. This is illuminated by some particularly vivid recent examples that should help us learn from the hard-won experiences of others. The systemic hollowing out of the banking industry and the recent failure of fail-safe engineering in the oil and nuclear industries are examples of institutional authority and group-think dominating the governance of human affairs. In these cases the risk/reward preferences of the governance authority group appeared to have substituted for the risk/reward preferences of the rest of us. The language of risk can be helpful in bringing these unwanted biases to the surface. This approach chimes with experiences in my professional life, where the honest and thorough sharing of different understandings of risk, uncertainty and variability has often triggered the development of innovative solutions to intractable and complex business problems. Keeping the language conceptual and steering well clear of management game playing, emotive examples and polarised rhetoric can create the intellectual space for proper conversations to break out. Thank you so much for hosting the Climate etc blog and I wish you great success with this ambitious project. DavS

  31. Professor Bob Ryan

    A nice thoughtful article which works well. Obviously you will be editing it down in due course. A few notes as I read it whilst prepping for a weekend of teaching.
    The Tversky and Kahneman work, and Kelly’s, is good, but so is the work by Janis and Mann, which you may be familiar with. They are well known for their concept of ‘group think’ and for human reasoning under stress (they refer to these as ‘hot cognitive processes’). They are also noted for the development of their conflict theory of decision making under uncertainty. All of this appears germane to your topic area and the rather fraught state that climate science has reached.

    The resolution of uncertainty has been a hot topic in the social sciences for many decades. The consilience issue also caught my eye and reminded me of the debate in the literature about ‘triangulation’ and some of the epistemological arguments around it. You may be interested in the work of Sven Modell (2009), ‘In defense of triangulation: A critical realist approach to mixed methods research in management accounting’, Management Accounting Research, Volume 20, Issue 3, September 2009, Pages 208-221. I know it’s across disciplines, but it’s rather surprising what we finance people get up to on the quiet.

    PS: I delivered a public lecture here in Dubai last night talking about the powerful similarities between the development of finance and climate science. The issue of modelling dominated because, as you may know, some financial products have been appallingly difficult to model (e.g. CDOs). I have greatly appreciated the leads you and participants on your website have given me, especially in the discussions on modelling, ensemble testing and, of course, uncertainty.

  32. I’d like to suggest a small editorial technique that usually reduces word-count while simultaneously improving readability: Look very closely at phrases involving the word ‘not’. Nearly all such phrases become both stronger and simpler by finding an alternative wording that avoids the ‘not’. Useful verbs in this context include “lacks,” “omits,” “fails,” and so on.

    A few common exceptions to this exercise: “not only … but also” is just fine, as are colloquialisms such as “not to mention…”.

    My experience with technical papers is that this little technique works for about 90% of the phrases that fall outside the common exceptions, and that it strengthens the wording of the paper substantially. Not to mention that it also pushes me towards using active voice, which is naturally far stronger than passive voice.

    Some examples:
    “…the consensus approach being used by the IPCC has not produced a thorough portrayal of the complexities…” becomes
    “…the consensus approach being used by the IPCC has failed to produce a thorough portrayal of the complexities…”

    “… is associated with the questions that do not even get asked.” becomes “…is associated with the questions that are never asked.”

    “…potential indirect effects on the climate are not explored in any meaningful way…” becomes “…potential indirect effects on the climate remain unexplored in any meaningful way…”

    “This assessment strategy does not include any systematic analysis…” becomes “This assessment strategy lacks any systematic analysis…” or perhaps “This assessment strategy omits any systematic analysis…”

    “Ignorance is that which is not known” becomes “Ignorance is that which is unknown”

    “…degree of agreement (consensus) among experts do not explicitly account for indeterminacy…” becomes “…degree of agreement (consensus) among experts fail to explicitly account for indeterminacy…”

    “The consilience of evidence argument is not convincing unless it includes parallel evidence-based analyses for competing hypotheses,…” becomes “The consilience of evidence argument is unconvincing in the absence of parallel evidence-based analyses for competing hypotheses,…”

    • Xenophon

      Though I admit to the superiority of Fred Moolten’s method, as it comes out of a deeper editorial understanding of the level of the target audience, I like your suggestions too, and add moving from the passive to the active case of verbs and avoiding ‘of’ or ‘which’ where changing word order will achieve the same meaning:

      “…the consensus approach being used by the IPCC has not produced a thorough portrayal of the complexities…” becomes
      “…the IPCC consensus approach prevents thorough portrayal of complexities…”

      “… is associated with the questions that do not even get asked.” becomes “..foregoes conclusions.”

      “…potential indirect effects on the climate are not explored in any meaningful way…” becomes “…potential indirect climate effects miss meaningful study..”

      “This assessment strategy does not include any systematic analysis…” becomes “This assessment strategy lacks analytic rigor…”

      “Ignorance is that which is not known” becomes “Ignorance is unknowing”

      “…degree of agreement (consensus) among experts do not explicitly account for indeterminacy…” becomes “…indeterminacy is unaccounted for by agreement (consensus) level among experts…”

      “The consilience of evidence argument is not convincing unless it includes parallel evidence-based analyses for competing hypotheses,…” becomes “Absent parallel evidence-based analyses for competing hypotheses, consilience of evidence is unconvincing,…”

      Some of these usages will come across as awkward, and there’s always a matter of taste in writing and reading.

      • add moving from the passive to the active case of verbs

        “The Little English Handbook” covers this (why to use the active case of verbs) and many other topics:

        Written by revered composition theorist Edward P.J. Corbett with co-author Sheryl Finkle, this pocket-sized, inexpensive, elegant little handbook addresses the most prevalent writing problems students face. Featuring artful prose explanations on matters of grammar, style, paragraphing, punctuation, and mechanics, the text also serves as a guide to the conventions of research writing and documentation, providing coverage of online sources and ACW documentation style for citing those sources, as well as coverage of the MLA, APA, CBE and CMS documentation systems.

  33. Notice that, using the scheme below, Leftist global warming fearmongers are on the level of ‘meta-ignorance,’ i.e., they’re all unconscious incompetents refusing to ‘even consider the possibility of error,’ as follows:

    “Walker et al. (2003) categorize the following different levels of ignorance. Total ignorance implies a deep level of uncertainty, to the extent that we do not even know that we do not know. Recognized ignorance refers to fundamental uncertainty in the mechanisms being studied and a weak scientific basis for developing scenarios. Reducible ignorance may be resolved by conducting further research, whereas irreducible ignorance implies that research cannot improve knowledge (e.g. what happened prior to the big bang). Bammer and Smithson (2008) further distinguish between conscious ignorance, where we know we don’t know what we don’t know, versus unacknowledged or meta-ignorance where we don’t even consider the possibility of error.” ~J. Curry

    That is how we know AGW True Believers are not really scientists at all, by any definition–e.g., because for them, the easily accessible knowledge that the null hypothesis of AGW has never been rejected will forever be an unacknowledged fact.

  34. Here is my perspective on uncertainty.
    Imagine 50 years from now it is 3 degrees warmer, and there has been no long-term preparation for it because of the uncertainties in the early 21st century. Will the scientists of the current era be blamed for not conveying enough certainty to prompt advance action? What can scientists do now to avoid that future perception? I think they have to put their thoughts and certainties in writing in the IPCC reports and elsewhere, which, when looked at in the future, will absolve them of blame for not trying hard enough. The blame will then fall on the policy makers who did not take action given the obvious signs.

    • John Carpenter

      Jim,

        Regardless of what the temperature is in 50 years, 3.0 C warmer or 0.2 C cooler, the science of this era will not be judged as poor if reasonable and necessary discussions about uncertainty have been properly addressed. The converse of your hypothetical is: what if temperatures rise only insignificantly (or cool), and energy policies enacted on the certainty of AGW radically change world economies for the worse and cause lots of unnecessary strife? How would the science of this era be judged then? Your precautionary argument is a double-edged sword.

      Everything needs to be considered on both sides of the argument. Right now, it is largely biased on the AGW side, no?

      • Clearly it is not biased enough on the AGW side, because people are still talking about uncertainty in even the most fundamental aspects that should be certain by now.
        On the earlier point, yes, if it doesn’t continue to warm, many scientists will have a lot of wrong papers and they would not look good in history. They have put their reputations on the line, and they don’t do that lightly.

      • John Carpenter

        Jim,

        “They have put their reputations on the line, and they don ‘t do that lightly.”

        I could not have put that better myself. In light of that excellent comment, do some climate scientists then have a motive to not be completely open, transparent and forthright with their uncertainties? Isn’t this the crux of the whole debate when you get down to it?

      • When you read their papers and reports, you see the uncertainties mentioned. It is unfortunate that press releases don’t mention those parts. It is actually very rare for a scientist not to hedge in a paper for fear of being outright wrong, and they do this by mentioning uncertainties or limitations in data, or simplifying assumptions they have made. Science is not often as black and white as mathematics.

      • John Carpenter

        Jim,

        Uncertainty (or error) has to be presented when asserting any technical narrative involving measurement, whether it is a lab report, a standard practice or a journal article. Measurement of any metric has an associated error, right? In addition, the measuring device must also be properly calibrated against an accepted standard. Calibration and error must be known in order to accept a metric’s value for its worth… its credibility. One would think the taking of the atmospheric temperature (for example) would be a straightforward, simple exercise. But it’s not. Another example… calibrating tree ring growth as an indicator of historic temperatures against present day measurements of both should be straightforward. But it doesn’t appear to be. There are many more examples, but the point is, the certainty to which these metrics have been measured appears to be overstated by those who present them. We know this because those who are more skeptical of the results are verifying for themselves… and in the process are finding unreported errors and hidden uncertainties. This is certainly not the majority of climate science, but enough to cast a dark shadow over the IPCC reports. Unfortunately, some who are quite influential in the field and the IPCC do not wish to acknowledge this because it involves their work… furthering skepticism among those of us who are skeptical. It is a self-reinforcing feedback loop.

        I am a scientist; I deal with issues of uncertainty all the time in the course of doing my research and producing consistent product for my business’s clients, my customers. They pay good money for the services I provide through the company I work for. If I am not completely honest with them about the uncertainties we have, I am not doing my job. They have to make their decisions based on what I know and… what I don’t know. If you oversell what you know and undersell what you don’t know, it can come back and hit you real hard. I may lose my customer or, worse, I may stain the reputation of the company I work for.

        This, in a microcosm, is exactly what climate scientists and policy makers are wrestling with regarding the unknown future of the planet’s climate. There is a lot at stake on both sides of the argument… prestige, money, energy policy and our environment… nobody wants to be wrong.

        We cannot overstate what we know. We cannot understate what we don’t know. Through this draft, JC offers a better way of stating all types of uncertainties encountered in research… of measuring them in a more quantitative way and taking out the subjectivity of “consensus”.

      • John – I gather from your comments that you believe there is an excessive tendency within climate science to overstate certainty and inadequately acknowledge statistical variability. My question to you is “From what source have you derived that conclusion? Over the past few decades, hundreds of thousands of climate science papers have appeared in the literature. Have you taken a sample randomly selected from that literature and analyzed it for the appropriateness of uncertainty/variability statements, or have you based your judgment on examples “cherry-picked” from among those hundreds of thousands by partisans intent on promoting a particular agenda?”

        I expect you realize that my question was rhetorical, because I already know the answer. However, could I suggest that you actually try the random sample approach, utilizing a climate science topic that is both important and controversial? One example might be the topic of “climate sensitivity”, because the climate sensitivity value affects the magnitude of warming mediated by CO2 and other anthropogenic emissions. I believe that if you use those words as a search term in Google Scholar, you will come up with many dozens of articles whose full contents are available, typically as pdf files. You might want to skim through a sample of those to get a better sense of what is typical in climate science than is achievable through blogs and other sources that cite non-representative examples.
        I believe you will also find that the articles you read will rarely if ever address topics as general as “anthropogenic global warming”. Almost all are devoted to particular items designed to shed light on some small part of the problem so as to advance the science incrementally. Arguments about whether or not we are warming the planet are the stuff of blog debates, but the scientists are more intent on doing science, with the general conclusions flowing from the convergence of multiple different sets of data rather than the dramatic findings in any single study.

      • John, there is a delicate line between certainty and uncertainty in getting something published or added to a report (like IPCC). Too much uncertainty is also bad, because the work might be rejected on those grounds. Different types of publication may have different lines, but I believe higher-profile publications require a higher level of significance to be evident, so there is a pressure in that direction for those, which probably has influenced the well known cases in climate. Looking at the run-of-the-mill publications gives a better idea of the things that are actually certain and uncertain, in my opinion.

      • John Carpenter

        Fred,

        You say,

        “I gather from your comments that you believe there is an excessive tendency within climate science to overstate certainty and inadequately acknowledge statistical variability.”

        I don’t think you read what I said very carefully. In more than one comment I made it very clear that I do not consider all or most of climate science to be in error with respect to overstating confidence. Read for yourself.

        My opinion is… there have been some very influential scientists who have made very strong assertions from their research, assertions that gained a lot of publicity through the IPCC, which is shaping policy making. The same scientists are or have been very influential within the IPCC. They have worked closely with some journals to game the system in their favor. It is my opinion they are shaping a narrative with an attitude of complete arrogance and confidence that simply flies in the face of legitimate criticism. You are correct that I have gathered this information from certain web sites, blogs and books on the subject. IMO the sources arguing this side of the debate are far more credible, based on their behavior, than those I previously mentioned.

        Perhaps you do not think this important. You may take the position that the preponderance of the evidence points to AGW and that the actions of a few cannot spoil the overall result. You may be surprised to know I feel the same way. I am not an AGW denier; I am certain enough of the known science to accept that mankind has a profound effect on our environment. I do not think CO2 is the main driver. I do not think CO2 and temperature are well correlated. You may not agree with me; so be it.

        I have no aversion to your suggestion of reading up on more topics related to climate science. I read as much as I can, but make no mistake about my position: I have formed a biased opinion about what I read. It is due to a few badly behaved, influential, arrogant scientists. It will take a lot of incontrovertible evidence and/or humbling on their part to change that. I am skeptical. That includes sifting through the AGW denier propaganda for what is complete bunk. Please do not attempt to pigeonhole me into some preconceived category of your own imagination; I can read between the lines. The truth is up to all of us to decide on our own, based on the sources we feel most comfortable with. You may think some of the sources I use are unworthy… I have enlightened you to some of my opinions for mutual clarification. You do not know how extensively I study the subject of climate science.

      • John Carpenter

        Fred,

        Not to leave you with a completely negative feeling about me from my previous post… but I really dig your style of music. We may or may not agree with each other on climate topics now or in the future… but I am certain we both enjoy similar musical tastes based on what I heard on your website.

      • Thanks, John. Regarding negative feelings, there are none. I don’t feel offended just because someone disagrees with me.

    • Latimer Alder

      In 50 years’ time most of today’s scientists will be retired or expired. Why should they worry about the ‘blame game’ half a century hence?

      And if the signs were as obvious as you seem to believe, then we wouldn’t be discussing this topic anyway. They aren’t, so we are.

    • Scientists do care how their work is perceived in the future. This is not understood enough by people outside the field. They do not write papers that they believe will be discredited in the future, because there is nothing worse than that to a scientist. What they put in a paper is what they believe to the best of their knowledge.

      • John Carpenter

        Jim,

        No question scientists (and everyone else) want their life’s work to have a positive perception in the future. I think anybody, scientist, musician, actor, legislator, laborer, writer, poet etc… can identify with that. This trait is certainly not unique to scientists in any way. Scientists are not the only professionals who leave a written history of their ideas, life experiences, research or beliefs.

        But let’s face it, a lot of well intentioned, productive and talented scientists end up on the dust heap of history, never to be known or appreciated by the general masses. For those scientists who do become more renowned, isn’t it more likely to be said that their work was credible, thorough and good because it was transparent, open to debate and inclusive of alternative ideas (using the scientific method), than of those who were not? Won’t these types of scientists end up having good work in the long run because the work stood up to the science?

        As an aside, arrogant people, whether they end up being right or not, will still be recognized historically as being arrogant.

        What you say is really true of everyone, isn’t it?

      • Yes, it should be true for not just scientists. This is a point that is important to make for scientists, because I see a lot of people here (not you) ascribe ulterior motives to scientists, implying that they will say anything, true or false, to get funding or play along with their peers. This is not the case for the typical individual scientist, because in the end the good ones are judged by which of them were right, and they know that.

      • John Carpenter

        Jim,

        Thanks for the compliment. I have to be honest with you: I am one who does believe some in the field have ulterior motives of furthering their ideology, career or prestige in climate science. I said “some”, not “all” or even “most”, but enough to sour the milk. This also, in my experience, is not unique to climate science. We all know of this type of behavior; most people with it are either politicians or lawyers… right? ;)

      • “They do not write papers that they believe will be discredited in the future, because there is nothing worse than that to a scientist.”

        Almost every scientific discovery will eventually be proven wrong. Surely every reasonable scientist knows this. Or they should.

        There are plenty of past examples of scientists working behind the scenes to keep new discoveries from the light of day, to help prop up their own work. Often a generation of scientists must pass before the new ideas can be fairly considered. It has been argued that Moses wandered in the desert for two generations for a similar reason.

    • The blame will then fall on the policy makers who did not take action given the obvious signs.

      It is shown that the anthropogenic carbon dioxide warming should emerge from the noise level of natural climate variability by the end of the century, and there is a high probability of warming in the 1980’s.
      http://bit.ly/e0tm7w

      Let us verify the above statement of Hansen et al. (1981).

      Here is the data for global mean temperature since 1980.
      http://bit.ly/fnFCsR

      From the above chart, we see that at the end of the previous century, the anthropogenic carbon dioxide warming did NOT emerge. As a result, “the obvious signs” were wrong ones.

      • That first link (Hansen et al., 1981, Science) is actually very prophetic. He expected 0.7 C warming by 2000, and said it would be a significant two standard deviations above natural variability which he says should have stayed within plus or minus 0.2 C of the zero anomaly. We are clearly well above the 0.2 C line, and have been continuously for decades now, so he was correct. People have been trying to redefine natural variability since, but it is being stretched beyond Hansen’s common-sense view based on standard deviations.

      • He expected 0.7 C warming by 2000

        But it is only 0.4 deg C since 2000 as shown in the following chart

        http://bit.ly/h86k1W

      • This is the anomaly from 1961-90. Pre-industrial is at least -0.3 on that scale.

      • “he says should have stayed within plus or minus 0.2 C of the zero anomaly”

        A quick look at the graph shows that the anomaly went from +0.4 to -0.3 within a span of 3 years (1982-1985), which blows Hansen’s range for natural variability out of the water.

      • The simple fact that we had a -0.7 C anomaly swing over a period of 3 years (1982-85) shows how bogus the Hansen et al. (1981) prediction of 0.7 C warming by 2000 was. The temperature of the earth shows large natural variability over short periods of time. Statistically it must show even larger variability over longer periods of time, especially as it displays near-unit-root behavior.

        Unlike a coin toss (where each toss is independent), this year’s temperature is not independent of last year’s. It is more likely to be like last year’s temperature than the temperature two years ago, due to inertia.

        This means temperature will tend to wander over time, rather than seek a mean as we see with a coin toss (see the sketch below). It is more like a car with lots of play in the steering and suspension. It will wander all over the road with every bump, no matter how much you try to keep it going straight.

        Nowhere is this better illustrated than by the 100k problem. A slight change in forcing produces one of the most dramatic observed changes in climate, well outside what the models predict. Until a solution is found, we must assume that our understanding of solar forcings is fundamentally flawed.

        Thus, any predictions that assume small solar forcings can have only a small climate effect are fundamentally flawed, as they are at odds with observation.
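
        A minimal numerical sketch of that wandering (parameters invented purely for illustration):

          import numpy as np

          rng = np.random.default_rng(42)
          n = 1000
          shocks = rng.standard_normal(n)

          white = shocks.copy()          # independent "coin toss" values
          ar = np.zeros(n)               # this year remembers last year
          phi = 0.98                     # near unit root
          for t in range(1, n):
              ar[t] = phi * ar[t - 1] + shocks[t]

          for name, series in (("independent", white), ("near-unit-root", ar)):
              print(f"{name}: range over {n} steps = {series.max() - series.min():.1f}")

        Driven by identical shocks, the autocorrelated series typically spans several times the range of the independent one, which is the wandering-car behavior described above.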

      • This is why he was only considering 5-year moving averages.

  35. The key point is to recognize the uncertainty, which is a far cry from the bad old ‘debate is over’ days.

  36. Jim,

    Are you suggesting that there is more scientific certainty than is being communicated?

    Scientists cannot be blamed for presenting the facts realistically, regardless of what the policy makers do with them. They can be blamed for distorting the presentation (in either direction) for their own purposes. Conveying uncertainty is not the same as advising someone to do nothing.

    • Yes, there is more certainty if we just focus on temperature changes. There would not be such a large consensus without a degree of certainty. More than 90% of climate papers now are evaluating changes that have occurred already, or are assuming the consensus of a changing future. If the policy makers could somehow have a direct understanding of the scientific literature as it stands now, they would also have more certainty. The IPCC reports try to summarize that for them, but have not been successful yet at conveying certainty in significant warming by the end of the century. Regional effects on precipitation and other impacts are another thing. There is a lot of uncertainty about those.

      • Jim D: What do you mean by “conveying certainty”? It seems to me that the IPCC scientists convey plenty of certainty — even too much certainty according to Dr. Curry’s analysis.

        We who are not scientists, both policy makers and citizens, are aware that there is a consensus among scientists about ACC. We also know that scientists are human and can be wrong, e.g. the consensuses against continental drift and bacteria-caused ulcers.

        As annoying as it may be to consensus scientists, the rest of us are not going to shrug our shoulders and start writing huge checks and accepting all the changes Al Gore demands just because there is a consensus of scientists conveying certainty.

        Also remember that policy makers are either elected or accountable to elected officials. If big changes and sacrifices are required, ordinary citizens will have to be convinced too.

      • I don’t think an IPCC report has much to act on if there is no certainty conveyed. Its value is all in its degree of certainty on various matters, and there are things that are more certain than others, which should be separated out and stated clearly, and I think they have been in AR4. Consensus has taken on a bad meaning here, because it implies a bandwagon thinking. Most scientists can and do think for themselves, so this consensus is one of individuals coming to their own conclusions based on their own knowledge of the basics of the arguments for and against AGW.

      • I don’t think an IPCC report has much to act on if there is no certainty conveyed.

        Jim D: I don’t understand what you mean here.

        I read IPCC reports and they convey certainty — all those “likely”, “very likely”, “extremely likely”, “virtually certain” statements.

        http://www.sejarchive.org/resource/IPCC_terminology.htm

      • My reaction was to the idea in the post that uncertainty is not emphasized enough in these reports. I certainly support the level of certainty conveyed in the AR4, but this blog post seems to want to back off that level of certainty, if I read it correctly. In a nutshell, I don’t see anything wrong with the way they did it in AR4.

      • The first-order effect on climate, from the point of view of forcing, is the increase of CO2. Natural internal variability and solar variations have not been shown to be able to produce more than 0.5 C of variation, while CO2 credibly produces 3 C by just doubling. The discussion on uncertainty has to take into account the relative magnitudes of these processes, which I see sorely missing here.
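
        Back-of-envelope version of that claim, using the standard simplified CO2 forcing expression of Myhre et al. (1998); the sensitivity parameter lambda bundles all feedbacks, which is exactly where the disputed uncertainty lives (the two values below are illustrative: roughly no-feedback vs. strong net-positive feedback):

          import math

          def co2_forcing(c, c0=280.0):
              # Radiative forcing in W/m^2 for CO2 concentration c (ppm).
              return 5.35 * math.log(c / c0)

          for lam in (0.3, 0.8):                  # K per (W/m^2)
              dT = lam * co2_forcing(560.0)       # doubling from 280 to 560 ppm
              print(f"lambda = {lam}: dT = {dT:.1f} C")

        With lambda = 0.8 this reproduces the ~3 C figure; with a near-no-feedback 0.3 it gives ~1.1 C, which is why the reply below locates the uncertainty in the modelled feedbacks.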

      • JimD
        “……while CO2 credibly produces 3 C by just doubling. ”

      Yes, but only if the modelled feedbacks are correct, and therein lies much of the uncertainty that needs to be addressed.

      • Yes, talk about those uncertainties that apply to the 3 C number, rather than things that amount to only 0.5 C. The focus is lost if we are discussing wiggles that are no bigger than an El Nino, and that shouldn’t affect climate-change policy at all.

      • Basically I am disagreeing with the thesis that we don’t have enough uncertainty in the IPCC reports. They have it about right.

      • Jim D: Dr. Curry supported her thesis that uncertainty is underestimated in the IPCC reports with various arguments which I found persuasive.

        Do you disagree with her arguments or do you basically trust the IPCC scientists to have done their jobs properly and not fallen prey to the pitfalls Dr Curry describes?

  37. Dr. Curry: I offer this suggestion about the use of “consensus” based upon my 50 years of experience as a lawyer and my understanding of the scientific method. Consensus is a political/legal term that has no place in scientific jargon.

    The following definition and thesaurus references for “consensus” are instructive:
    Consensus: general agreement; a consensus of opinion among judges; as adj. a consensus view.
    1 there was consensus among delegates: AGREEMENT, harmony, concurrence, accord, unity, unanimity, solidarity; formal concord. Antonym: disagreement.
    2 the consensus was that they should act: GENERAL OPINION, majority opinion, common view.

    • Consensus, as it is said to be practiced in Japanese business: a group decision to adopt a course of action; dissenters have been heard but do not oppose the decision because the consensus process was fair and open.

    • It might be interesting, too, to consider that human involvement is not an automatic indictment and sentence of guilt for whatever may happen. As, for example:

      Shall prospects of global cooling be considered a disaster too, and if so, must it then be human-caused?

      It is interesting to note that, “The partial forecast indicates that climate may stabilize or cool until 2030-2040. Possible physical mechanisms are qualitatively discussed with an emphasis on the phenomenon of collective synchronization of coupled oscillators.” ~Nikola Scafetta

      Additionally, “… a long-term global cooling starting around 2002 is expected to continue for next five to seven decades…” ~Lu, Q.

      And, what if humans actually averted an ice age? Would that still be a disaster? “If man made global warming is indeed real, and it helps to prevent another ice age, this would be the most fortunate thing that has happened to our species since we barely escaped extinction from an especially cold period during the last ice age some 75,000 years ago.” ~Walter Starck

      And, what if global warming were to continue for 100 years? But what if, as throughout the 10,000 years of the Holocene, the global warming had nothing to do with humans? Would that still be a disaster?

      And, if all the facts were known, the reality very likely may be that “Fossil fuels will run out well before any drastic effects on climate are possible.” (Ibid.)

      And, shouldn’t we also keep in mind when answering these questions that everyone knows global warming has been much better for humanity than global cooling? “The net result of a projected doubling of atmospheric CO2 is most likely to be positive.” (Ibid.)

      • “Reducible ignorance may be resolved by conducting further research”
        Might I be so bold as to suggest looking at these areas as profitable ways to add data and find answers for the contributions of natural internal variability [surface-feature changes due to tectonics] and external orbital-parameter variability due to interactions with the rest of the solar system?

        Changes in the ion charge gradient from pole to equator, due to interactions of the Earth’s geomagnetic storms resulting from solar-wind variances, can change the nebulized cloud droplet size on a global scale, allowing large shifts in the total UV solar power reaching the surface with NO change in TSI.

        Synodic conjunctions with the outer planets have this effect, and the timing of the passage of these conjunctions as they precess through the seasons shifts the periodicity of the effects, leading to the chaotic patterns seen.

        http://research.aerology.com/natural-processes/solar-system-dynamics/

  38. Judy – Here is a “devil’s advocate” review plus suggestions for your paper. It’s deliberately a bit extreme, with the expectation that you might want to make use of only part of it (or none of it), but might find it useful to see the topic as it might be perceived from a very different perspective. Obviously, individuals will differ on this, and I offer this as only one person’s opinion.

    Before going into specifics, I would make the following few general suggestions:

    1. The notion that the climate is complex, uncertainty abounds, and underestimating it is a bad idea should be expressed very succinctly, with little elaboration and few references. Very few readers are going to be in favor of underestimating uncertainty, and if you lecture them too much on this point, they are likely to turn off, and pay inadequate attention to your other points. Much of what is now in text form could be summarized very briefly or put in a Table with references.

    2. You might want to give at least a few more examples of how confidence in conclusions can be undermined by unexpected new data. Two items come to mind regarding solar influences. The first is the recent report by Judith Lean et al on the inaccuracy of TSI data. The second, from some months ago (I don’t recall the authors), involved the observation that spectral irradiance data could diverge from overall TSI during the solar cycle, so that overall TSI reductions at cycle minima might be accompanied by actual increases in irradiance at some visible wavelengths. These conclusions, to the extent they are confirmed, will require adjustment of previous estimates of solar forcing. I’m sure there are other salient examples.

    3. Beyond black swans and dragon kings, you may also want to give more emphasis to the point that uncertainty is two-edged, with some of the uncertainties at the high end greater than at the low end (e.g., see Annan and Hargreaves).

    4. I see your most important content residing in those sections describing methods for addressing uncertainty. This leads to:

    5. I think it would be worthwhile for you to describe in more specific terms what prescription you advise the IPCC to follow. You’ve identified problems – how would you fix them? This could occupy far more space than you currently devote to it.

    1. Introduction

    ((DELETE THIS PARAGRAPH)) The challenge of framing and communicating uncertainty about climate change is a symptom of the challenges of understanding and reasoning about such a complex system. Our understanding of the complex climate system is hampered by a myriad of uncertainties, indeterminacy, ignorance, and cognitive biases. Complexity of the climate system arises from the very large number of degrees of freedom, the number of subsystems and complexity in linking them, and the nonlinear and chaotic nature of the atmosphere and ocean. A complex system exhibits behavior not obvious from the properties of its individual components, whereby larger scales of organization influence smaller ones and structure at all scales is influenced by feedback loops among the structures. The epistemology of computer simulations of complex systems is a new and active area research among scientists, philosophers, and the artificial intelligence community. How to reason about the complex climate system and its computer simulations is not simple or obvious.

    ((START HERE)) How has the IPCC dealt with the challenge of uncertainty in the complex climate system? Until the time of the IPCC TAR and the Moss-Schneider (2000) Guidance paper, uncertainty was dealt with in an ad hoc manner. The Moss-Schneider guidelines raised a number of important issues regarding the identification and communication of uncertainties. However, the actual implementation of this guidance in the TAR and AR4 adopted a subjective perspective or “judgmental estimates of confidence.” Defenders of the IPCC uncertainty characterization argue that subjective consensus expressed using simple terms is understood more easily by policy makers.

    ((CONDENSE THIS PARAGRAPH INTO A COUPLE OF SENTENCES WITH REFERENCES))The consensus approach used by the IPCC to characterize uncertainty has received a number of criticisms. Van der Sluijs et al. (2010b) finds that the IPCC consensus strategy underexposes scientific uncertainties and dissent, making the chosen policy vulnerable to scientific error and limiting the political playing field. Van der Sluijs (2010a) argues that matters on which no consensus can be reached continue to receive too little attention by the IPCC, even though this dissension can be highly policy-relevant. Oppenheimer et al. (2007) point out the need to guard against overconfidence and argue that the IPCC consensus emphasizes expected outcomes, whereas it is equally important that policy makers understand the more extreme possibilities that consensus may exclude or downplay. Gruebler and Nakicenovic (2001) opine that “there is a danger that the IPCC consensus position might lead to a dismissal of uncertainty in favor of spuriously constructed expert opinion.”

    ((DELETE PARAGRAPH EXCEPT FOR LAST SENTENCE))While the policy makers’ desire for a clear message from the scientists is understandable, the consensus approach being used by the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. While the public may not understand the complexity of the science or be culturally predisposed to accept the consensus, they can certainly understand the vociferous arguments over the science portrayed by the media. Better characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing the “noise” and animosity portrayed in the media that fuels the public distrust of climate science and acts to stymie the policy process. Not to mention that an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how to best target resources to improve understanding. Further,
    improved understanding and characterization of uncertainty is critical information for the development of robust policy options.
    Indeterminacy and framing of the climate change problem

    ((DELETE THIS PARAGRAPH))An underappreciated aspect of characterizing uncertainty is associated with the questions that do not even get asked. Wynne (1992) argues that scientific knowledge typically investigates “a restricted agenda of defined uncertainties—ones that are tractable— leaving invisible a range of other uncertainties, especially about the boundary conditions of applicability of the existing framework of knowledge to new situations.” Wynne refers to this as indeterminacy, which arises from the “unbounded complexity of causal chains and open networks.” Indeterminacies can arise from not knowing whether the type of scientific knowledge and the questions posed are appropriate and sufficient for the circumstances and the social context in which the knowledge is applied.

    ((DELETE THIS PARAGRAPH))In the climate change problem, indeterminacy is associated with the way the climate change problem has been framed. Frames are organizing principles that enable a particular interpretation of an issue. De Boerg et al. (2010) state that: “Frames act as organizing principles that shape in a “hidden” and taken-for-granted way how people conceptualize an issue.” Risbey et al. (2005)??? argue that decisions on problem framing influence the choice of models and what knowledge is considered relevant to include in the analysis. De Boerg et al. further state that frames can express how a problem is stated, who is expected to make a statement about it, what questions are relevant, and what range of answers might be appropriate.

    The decision making framework provided by the UNFCCC Treaty provides the rationale for framing the IPCC assessment of climate change and its uncertainties, in terms of identifying dangerous climate change and providing input for decision making regarding CO2 stabilization targets. In the context of this framing, certain key scientific questions receive little attention. In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument. Further, impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of its impact on the confidence level expressed in the attribution statement. In the WG II Report, the focus is on attributing possible dangerous impacts to AGW, with little focus in the summary statements on how warming might actually be beneficial to certain regions or in certain sectors.

    Further, the decision analytic framework associated with setting a CO2 stabilization target focuses research and analysis on using expert judgment to identify a most likely value of sensitivity/ warming and narrowing the range of expected values, rather than fully exploring the uncertainty and the possibility for black swans (Taleb 2007) and dragon kings (Sornette 2009). The concept of imaginable surprise was discussed in the Moss-Schneider uncertainty guidance documentation, but consideration of such possibilities seems largely to have been ignored by the AR4 report. The AR4 focused on what was “known” to a significant confidence level. The most visible failing of this strategy was neglect of the possibility of rapid melting of ice sheets on sea level rise in the Summary for Policy Makers (e.g. Oppenheimer et al. 2007; Betz 2009). An important issue is to identify the potential black swans associated with natural climate variation under no human influence, on time scales of one to two centuries. Without even asking this question, judgments regarding the risk of anthropogenic climate change can be misleading to decision makers.

    ((DELETE THIS PARAGRAPH))The presence of sharp conflicts with regard to both the science and policy reflects an overly narrow framing of the climate change problem. Until the problem is reframed or multiple frames are considered by the IPCC, the scientific and policy debate will continue to ignore crucial elements of the problem, with confidence levels that are too high.

    Uncertainty, ignorance and confidence

    The Uncertainty Guidance Paper by Moss and Schneider (2000) recommended a common vocabulary to express quantitative levels of confidence based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts. This assessment strategy does not include any systematic analysis of the types and levels of uncertainty and the quality of the evidence, and more importantly dismisses indeterminacy and ignorance as important factors in assessing these confidence levels. In the context of the narrow framing of the problem, this uncertainty assessment strategy promotes the consensus into becoming a self-fulfilling prophecy.

    The uncertainty guidance provided for the IPCC AR4 distinguished between levels of confidence in scientific understanding and the likelihoods of specific results. In practice, primary conclusions in the AR4 included a mixture of likelihood and confidence statements that are ambiguous. Curry and Webster (2010) have raised specific issues with regard to the statement “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” Risbey and Kandlikar (2007) describe ambiguities in actually applying likelihood and confidence, including situations where likelihood and confidence cannot be fully separated, where likelihood levels contain implicit confidence levels, and where interpreting uncertainty with two levels of imprecision is in some cases rather difficult.

    Numerous methods of categorizing risk and uncertainty have been described in the context of different disciplines and various applications; for a recent review, see Spiegelhalter and Riesch (2011). Of particular relevance for climate change are schemes for analyzing uncertainty when conducting risk analyses. My primary concerns about the IPCC’s characterization of uncertainty are twofold:

    • lack of discrimination between statistical uncertainty and scenario uncertainty
    • failure to meaningfully address the issue of ignorance
    Following Walker et al. (2003), statistical uncertainty is distinguished from scenario uncertainty, whereby scenario uncertainty implies that it is not possible to formulate the probability of occurrence of particular outcomes. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop in the future. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood. Wynne (1992) defines risk as knowing the odds (analogous to Walker et al.’s statistical uncertainty), and uncertainty as not knowing the odds but knowing the main parameters (analogous to Walker et al.’s scenario uncertainty).

    Stainforth et al. (2007) argue that model inadequacy and an insufficient number of simulations in the ensemble preclude producing meaningful probability distributions from the frequency of model outcomes of future climate. Stainforth et al. state: “[G]iven nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . . Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify.” Given climate model inadequacies and uncertainties, Betz (2009) argues for the logical necessity of considering climate model simulations as modal statements of possibilities, which is consistent with scenario uncertainty. Stainforth et al. make an equivalent statement: “Each model run is of value as it presents a ‘what if’ scenario from which we may learn about the model or the Earth system.” Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a PDF from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals. In the presence of scenario uncertainty, which characterizes climate model simulations, attempts to produce a PDF for climate sensitivity (e.g. Annan and Hargreaves 2010) are arguably misguided and misleading.
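
    A toy calculation can make the overconfidence point concrete. The sketch below (all numbers are invented for illustration; this is not any modeling group’s actual method) shows how a small ensemble whose members share a structural error yields a tight “PDF” whose 90% range excludes the truth entirely:

    ```python
    # Minimal sketch: an ensemble "PDF" under a shared structural error.
    # All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    true_sensitivity = 3.8   # hypothetical "truth", unknowable in practice
    shared_bias = -1.0       # structural error common to every ensemble member
    n_members = 20           # small ensemble

    # Each run = truth + shared structural bias + member-to-member noise
    runs = true_sensitivity + shared_bias + rng.normal(0.0, 0.3, n_members)

    lo, hi = np.percentile(runs, [5, 95])
    print(f"ensemble 'PDF': mean = {runs.mean():.2f}, 90% range = [{lo:.2f}, {hi:.2f}]")
    print("truth inside the range:", lo <= true_sensitivity <= hi)  # False
    ```

    The narrow range is an artifact of treating ensemble spread as the only uncertainty; the shared bias, which no amount of ensemble averaging reveals, dominates the error.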

    Ignorance is that which is not known; Wynne (1992) finds ignorance to be endemic because scientific knowledge must set the bounds of uncertainty in order to function. Walker et al. (2003) categorize several different levels of ignorance. Total ignorance implies a deep level of uncertainty, to the extent that we do not even know that we do not know. Recognized ignorance refers to fundamental uncertainty in the mechanisms being studied and a weak scientific basis for developing scenarios. Reducible ignorance may be resolved by conducting further research, whereas irreducible ignorance implies that research cannot improve knowledge (e.g. what happened prior to the big bang). Bammer and Smithson (2008) further distinguish between conscious ignorance, where we know that we don’t know, and unacknowledged or meta-ignorance, where we don’t even consider the possibility of error.

    While the Kandlikar et al. (2005) uncertainty schema explicitly includes effective ignorance in its uncertainty categorization, the AR4 uncertainty guidance (which is based upon Kandlikar et al.) neglects to include ignorance in the characterization of uncertainty. Hence IPCC confidence levels determined based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts do not explicitly account for indeterminacy and ignorance, although recognized areas of ignorance are mentioned in some parts of the report (e.g. the possibility of indirect solar effects in sect xxxx of the AR4 WG1 Report). Overconfidence is an inevitable result of neglecting indeterminacy and ignorance.

    A comprehensive approach to uncertainty management and elucidation of the elements of uncertainty is described by the NUSAP scheme, which includes methods to determine the pedigree and quality of the relevant data and methods used (e.g. van der Sluijs et al. 2005a,b). The complexity of the NUSAP scheme arguably precludes its widespread adoption by the IPCC. The challenge is to characterize uncertainty in a complete way while retaining sufficient simplicity and flexibility for its widespread adoption. In the context of risk analysis, Spiegelhalter and Riesch (2011) describe a scheme for characterizing uncertainty that covers the range from complete numerical formalization of probabilities to indeterminacy and ignorance, and includes the possibility of unspecified but surprising events. Quality of evidence is an important element of the NUSAP scheme and the scheme described by Spiegelhalter and Riesch (2011). The GRADE scale of Guyatt et al. (2008) provides a simple yet useful method for judging quality of evidence, with a more complex scheme for judging quality utilized by NUSAP.
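
    For concreteness, here is a minimal sketch of NUSAP-style pedigree scoring. The four criteria are ones commonly used in NUSAP pedigree matrices; the particular scores and the equal weighting are assumptions for illustration only:

    ```python
    # Toy NUSAP-style pedigree score for one line of evidence.
    # Experts score each criterion from 0 (weak) to 4 (strong); the normalized
    # average gives a crude quality index. Scores below are hypothetical.
    pedigree = {
        "proxy": 3,                  # closeness of what is measured to what is wanted
        "empirical basis": 2,        # amount and directness of supporting data
        "methodological rigour": 2,  # soundness of the methods used
        "validation": 1,             # degree of independent testing
    }

    score = sum(pedigree.values()) / (4.0 * len(pedigree))
    print(f"pedigree strength: {score:.2f} (0 = weak, 1 = strong)")
    ```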

    Judgmental estimates of confidence need to consider not only the amount of evidence for and against and the degree of consensus, but also the adequacy of the knowledge base (which includes the degree of uncertainty and ignorance) and the quality of the information available. A practical way of incorporating these elements into an assessment of confidence is provided by Egan (2005). The crucial difference between this approach and the consensus-based approach is that the dimension associated with the degree of consensus among experts is replaced by specific judgments about the adequacy of the knowledge base and the quality of the information available.

    Consensus and disagreement

    ((DELETE THE FOLLOWING FIVE PARAGRAPHS, DOWN TO “REASONING ABOUT UNCERTAINTY”)) The uncertainty associated with climate science and the range of decision making frameworks and policy options provides much fodder for disagreement. Here I argue that the IPCC’s consensus approach enforces overconfidence, marginalization of skeptical arguments, and belief polarization. The role of cognitive biases (e.g. Tversky and Kahneman 1974) has received some attention in the context of the climate change debate, as summarized by Morgan et al. (2009, chapter xxx). However, the broader issues of the epistemology and psychology of consensus and disagreement have received little attention in the context of the climate change problem.
    Kelly (2005, 2008) provides some general insights into the sources of belief polarization that are relevant to the climate change problem. Kelly (2008) argues that “a belief held at earlier times can skew the total evidence that is available at later times, via characteristic biasing mechanisms, in a direction that is favorable to itself.” Kelly (2008) also finds that “All else being equal, individuals tend to be significantly better at detecting fallacies when the fallacy occurs in an argument for a conclusion which they disbelieve, than when the same fallacy occurs in an argument for a conclusion which they believe.” Kelly (2005) provides insights into the consensus building process: “As more and more peers weigh in on a given issue, the proportion of the total evidence which consists of higher order psychological evidence [of what other people believe] increases, and the proportion of the total evidence which consists of first order evidence decreases . . . At some point, when the number of peers grows large enough, the higher order psychological evidence will swamp the first order evidence into virtual insignificance.” Kelly (2005) concludes: “Over time, this invisible hand process tends to bestow a certain competitive advantage to our prior beliefs with respect to confirmation and disconfirmation. . . In deciding what level of confidence is appropriate, we should take into account the tendency of beliefs to serve as agents in their own confirmation.”
    So what are the implications of Kelly’s arguments for consensus and disagreement associated with climate change and the IPCC? Cognitive biases in the context of an institutionalized consensus building process have arguably resulted in the consensus becoming increasingly confirmed in a self-reinforcing way. The consensus process of the IPCC has marginalized dissenting skeptical voices, who are commonly dismissed as “deniers” (e.g. Hasselmann 2010). This “invisible hand” that marginalizes skeptics is operating to the substantial detriment of climate science, not to mention the policies that are informed by climate science. The importance of skepticism is aptly summarized by Kelly (2008): “all else being equal, the more cognitive resources one devotes to the task of searching for alternative explanations, the more likely one is to hit upon such an explanation, if in fact there is an alternative to be found.”
    The intense disagreement between scientists that support the IPCC consensus and skeptics becomes increasingly polarized as a result of the “invisible hand” described by Kelly (2005, 2008). Disagreement itself can be evidence about the quality and sufficiency of the evidence. Disagreement can arise from disputed interpretations as proponents are biased by excessive reliance on a particular piece of evidence. Disagreement can result from “conflicting certainties,” whereby competing hypotheses are each buttressed by different lines of evidence, each of which is regarded as “certain” by its proponents. Conflicting certainties arise from differences in chosen assumptions, neglect of key uncertainties, and the natural tendency to be overconfident about how well we know things (e.g. Morgan 1990).
    What is desired is a justified interpretation of the available evidence, which is completely traceable throughout the process in terms of the quality of the data, modeling, and reasoning process. A thorough assessment of uncertainty by proponents of different sides in a scientific debate will reduce the level of disagreement.

    ((RESUME HERE))

    Reasoning about uncertainty

    ((DELETE THIS PARAGRAPH))The objective of the IPCC reports is to assess existing knowledge about climate change. The IPCC assessment process combines a compilation of evidence with subjective Bayesian reasoning. This process is described by Oreskes (2007) as presenting a “consilience of evidence” argument, which consists of independent lines of evidence that are explained by the same theoretical account. Oreskes draws an analogy for this approach with what happens in a legal case.

    ((DELETE THIS PARAGRAPH))The consilience of evidence argument is not convincing unless it includes parallel evidence-based analyses for competing hypotheses, for the simple reason that any system that is more inclined to admit one type of evidence or argument rather than another tends to accumulate variations in the direction towards which the system is biased. In a Bayesian analysis with multiple lines of evidence, you could conceivably come up with lines of evidence sufficient to produce a high confidence level for each of two opposing arguments, which is referred to as the ambiguity of competing certainties. If you acknowledge the substantial level of ignorance surrounding this issue, the competing certainties disappear (this is a failing of Bayesian analysis: it does not deal well with ignorance) and you are left with a lower confidence level.
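
    A toy numerical illustration of competing certainties (all likelihood ratios are invented): two camps start from the same prior, but each admits only the lines of evidence favoring its own hypothesis, and Bayesian updating drives both to roughly 96% confidence in opposite conclusions:

    ```python
    # Toy illustration of "competing certainties" via selective evidence admission.
    def update(prior: float, likelihood_ratio: float) -> float:
        """One Bayesian update in odds form."""
        odds = prior / (1.0 - prior) * likelihood_ratio
        return odds / (1.0 + odds)

    p_camp_a = p_camp_b = 0.5        # shared prior for hypothesis A
    for lr in (3.0, 2.0, 4.0):       # evidence lines admitted by camp A (favor A)
        p_camp_a = update(p_camp_a, lr)
    for lr in (1/3, 1/2, 1/4):       # evidence lines admitted by camp B (favor not-A)
        p_camp_b = update(p_camp_b, lr)

    print(f"camp A: P(A) = {p_camp_a:.2f}")  # 0.96
    print(f"camp B: P(A) = {p_camp_b:.2f}")  # 0.04, i.e. 96% confident in not-A
    ```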

    To be convincing, the arguments for climate change need to change from the burden of proof model to a model whereby a thesis is supported explicitly by addressing the issue of what the case would have to look like for the thesis to be false, in the mode of argument justification (e.g. Betz 2010). Argument justification invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher its degree of justification. Argument justification provides an explicit and important role for skeptics, and a framework whereby scientists with a plurality of viewpoints participate in an assessment. This strategy has the advantage of moving science forward in areas where there are competing schools of thought. Disagreement then becomes the basis for focusing research in a certain area, and so moves the science forward. ((I HAPPEN TO DISAGREE THAT THIS IS A PRACTICAL APPROACH, BUT IF YOU WANT TO MAKE THIS POINT, YOU MIGHT ELABORATE A BIT MORE ON IT, WITH EXAMPLES.))

    ((DELETE THIS PARAGRAPH – I THINK THE POINT IS SELF EVIDENT, AND READERS WON’T CONSIDER IT INFORMATIVE)) The rationale for parallel evidence-based analyses of competing hypotheses and argument justification is eloquently described by Kelly’s (2008) key epistemological fact: “For a given body of evidence and a given hypothesis that purports to explain that evidence, how confident one should be that the hypothesis is true on the basis of the evidence depends on the space of alternative hypotheses of which one is aware. In general, how strongly a given body of evidence confirms a hypothesis is not solely a matter of the intrinsic character of the evidence and the hypothesis. Rather, it also depends on the presence or absence of plausible competitors in the field. It is because of this that the mere articulation of a plausible alternative hypothesis can dramatically reduce how likely the original hypothesis is on one’s present evidence.”

    Reasoning about uncertainty in the context of evidence-based analyses and the formulation of confidence levels is not at all straightforward for the climate problem. Because of the complexity of the climate problem, Van der Sluijs et al. (2005) argue that uncertainty methods such as subjective probability or Bayesian updating alone are not suitable for this class of problems, because the unquantifiable uncertainties and ignorance dominate the quantifiable uncertainties. Any quantifiable Bayesian uncertainty analysis “can thus provide only a partial insight into what is a complex mass of uncertainties” (van der Sluijs et al. 2005).

    Given the dominance of unquantifiable uncertainties in the climate problem, expert judgments about confidence levels are made in the absence of a comprehensive quantitative uncertainty analysis. Because of this complexity, and in the absence of a formal logical hypothesis hierarchy in the IPCC assessment, individual experts use different mental models and heuristics for evaluating the interconnected evidence. Biases can abound when reasoning and making judgments about such a complex problem. Bias can occur through excessive reliance on a particular piece of evidence, the presence of cognitive biases in heuristics, and logical fallacies and errors including circular reasoning. Further, the consensus building process itself can be a source of bias (Kelly 2005).

    Identifying the most important uncertainties and introducing a more objective assessment of confidence levels requires introducing a more disciplined logic into the climate change assessment process. A useful approach would be the development of hierarchical logical hypothesis models that provide a structure for assembling the evidence and arguments in support of the main hypotheses or propositions. A logical hypothesis hierarchy (or tree) links the root hypothesis to lower level evidence and hypotheses. While developing a logical hypothesis tree is somewhat subjective and involves expert judgments, the evidential judgments are made at a lower level in the logical hierarchy. Essential judgments and opinions relating to the evidence and the arguments linking the evidence are thus made explicit, lending structure and transparency to the assessment. To the extent that the logical hypothesis hierarchy decomposes arguments and evidence into the most elementary propositions, the sources of disputes are easily illuminated and potentially minimized.

    Bayesian Network Analysis using weighted binary tree logic is one possible choice for such an analysis. However, a weakness of Bayesian Networks is their two-valued logic and inability to deal with ignorance, whereby evidence is either for or against the hypothesis. An influence diagram is a generalization of a Bayesian Network that represents the relationships and interactions between a series of propositions or evidence (Spiegelhalter, 1986). Cui and Blockley (1990) introduce interval probability three-valued logic into an influence diagram with an explicit role for uncertainties (the so-called “Italian flag”), recognizing that evidence may be incomplete or inconsistent, of uncertain quality or meaning. Combination of evidence proceeds generally as a Bayesian combination, but combinations of evidence are modified by the factors of sufficiency, dependence and necessity. Practical applications to the propagation of evidence using interval probability theory are described by Bowden (2004) and Egan (2005). A toy sketch of this three-valued bookkeeping follows.
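
    The sketch below illustrates only the “Italian flag” bookkeeping: support for a hypothesis (green), support against it (red), and an uncommitted remainder (white). The independence-based combination rule is an assumption for illustration, not Cui and Blockley’s full interval-probability calculus:

    ```python
    # Toy "Italian flag" bookkeeping for combining items of evidence.
    def combine(items):
        """Combine (support_for, support_against) pairs, assumed independent."""
        p_no_support = 1.0      # chance no item positively supports the hypothesis
        p_no_refutation = 1.0   # chance no item positively refutes it
        for s_for, s_against in items:
            assert 0.0 <= s_for and 0.0 <= s_against and s_for + s_against <= 1.0
            p_no_support *= 1.0 - s_for
            p_no_refutation *= 1.0 - s_against
        lower = 1.0 - p_no_support   # green edge of the flag
        upper = p_no_refutation      # red edge; white lies in between
        return lower, upper          # lower > upper would signal conflicting evidence

    evidence = [(0.4, 0.1), (0.3, 0.1), (0.2, 0.0)]  # hypothetical items
    lo, hi = combine(evidence)
    print(f"support interval [{lo:.2f}, {hi:.2f}], uncommitted white = {hi - lo:.2f}")
    ```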

    An application of influence diagrams and interval probability theory to a climate-relevant problem is described by Hall et al. (2006), regarding a study commissioned by the UK Government that sought to establish the extent to which very severe floods in the UK in October–November 2000 were attributable to climate change. Hall et al. used influence diagrams to represent the evidential reasoning and uncertainties in responding to this question. Three alternative approaches to the mathematization of uncertainty in influence diagrams were compared, including Bayesian belief networks and two interval probability methods (Interval Probability Theory and Support Logic Programming). Hall et al. argue that “interval probabilities represent ambiguity and ignorance in a more satisfactory manner than the conventional Bayesian alternative . . . and are attractive in being able to represent in a straightforward way legitimate imprecision in our ability to estimate probabilities.”

    Hall et al. concluded that influence diagrams can help to synthesize complex and contentious arguments of relevance to climate change. Breaking down and formalizing expert reasoning can facilitate dialogue between experts, policy makers, and other decision stakeholders. The procedure used by Hall et al. supports transparency and clarifies uncertainties in disputes, in a way that expert judgment about high level root hypotheses does not.

    Conclusions

    ((HERE IS WHERE YOU MIGHT WANT TO EXPAND CONSIDERABLY ON SOME OF THE ABOVE, UTILIZING THE REFERENCED ITEMS TO GENERATE YOUR OWN DETAILED LIST OF SPECIFIC PRESCRIPTIONS FOR REFORMULATING THE IPCC PROCESS FOR ADDRESSING UNCERTAINTY. I WOULD THINK THAT AS MUCH AS 20 PERCENT OF YOUR PAPER MIGHT BE DEVOTED TO THIS POINT.))
    In this paper I have argued that the problem of communicating climate uncertainty is fundamentally a problem of how we have framed the climate change problem, how we have characterized uncertainty, how we reason about uncertainty, and the consensus building process itself. As a result of these problems, the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. Improved characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing the “noise” and animosity portrayed in the media that fuels the public distrust of climate science and acts to stymie the policy process. Not to mention that an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how to best target resources to improve understanding.
    Improved understanding and characterization of uncertainty is critical information for the development of robust policy options. When working with policy makers and communicators, it is essential not to fall into the trap of acceding to inappropriate demands for certainty. Wynne (1992) makes an erudite statement: “the built-in ignorance of science towards its own limiting commitments and assumptions is a problem only when external commitments are built on it as if such intrinsic limitations did not exist.”((THIS PARAGRAPH IS FINE IF YOU HAVE SPACE, BUT IT DOESN’T ADD MUCH AND CAN BE OMITTED IF NECESSARY.))

    Acknowledgements. I would like to acknowledge…

    • fred thanks much for your thorough comments. i definitely agree with more specifics needed in the conclusion, will ponder the others.

      • I would not make any of these wholesale deletions. What you have is a flowing narrative that builds to your conclusion. Each of the major elements is a research program in its own right, which is just what the science needs.

        If you must shorten it then I suggest what I call a cut drill. This means trying to take a few words out of each sentence. Deciding which words are least important can be quite useful and it tightens the writing.

        Nor would I go into proposing specific changes by the IPCC. You are presenting a deep problem, not an administrative one. The most you can recommend is that the IPCC develops new guidance with an entirely new approach to uncertainty, an approach that has yet to be developed.

      • good points, thx

      • With my suggested deletions, or alternative deletions of comparable length that you choose, the word count is still about 4400, and adding specific recommendations will increase it. As I understand it, your target audience will be readers unconvinced that the IPCC is mishandling uncertainty but who are open-minded to suggestions. If I were one of those, I would be uninfluenced by generalizations about uncertainty as a problem, but I might be receptive to specific examples and concrete suggestions for improvement. In fact, reading through your draft, I could easily conceive of myself agreeing with every one of the generalizations and still finding no reason to change anything. That’s why I think a more focused approach might be more effective.

      • Fred, you are describing a very different paper, which would be a critique of the IPCC. Perhaps Dr. Curry’s longer paper does that, but this paper is scientific.

      • I lean strongly toward listening to the substance of Fred’s remarks.

        These comments clearly and demonstrably justify wholesale cutting within a reorganization of the material presented, and do not necessarily mean either changing the key point and focus of the paper or losing the substantive supporting material.

        This may seem like retraction of my earlier remark about the madness of cutting, but it indeed retrenches.

        Hitting the target word count does more than convenience publishers; it produces clarity and impact.

        Everything that is cut can be footnoted, moved to an appendix, referenced in the bibliography, or stated once and not repeated.

        The Kelly paragraph, I feel, is important precisely because of what Fred identifies as missing: a proposed solution. The process of comparing uncertainties of competing theories (as part of comparing the whole of the theories) seems to me the solution Dr. Curry was hinting at without making explicit.

        At the same time, the Kelly quote is 3-4 times longer than it needs to be to state Kelly’s main point.

        Cleanly state the problem and the proposed solution at the start, with minimal but authoritative and pertinent supporting citation. Develop the theme through the body of the essay paragraph by paragraph, either with focus on the problem (which I do not recommend) or (better) with focus on relating each step in the development to the solution and to the improvement the solution offers, even if the purpose of the paper is not to sell the proposed solution.

        The readers in this target audience are bright enough to propose for themselves as they read their own favorite solutions, and to compare in their minds their favorite to Dr. Curry’s. What they need is to be guided step by step through the development of Dr. Curry’s thought, thesis, theme and conclusion.

        In the conclusion, a more mature restatement of problem and proposed solution will tie together the paper.

        At least, that’s the traditional structural solution to too high a word count, after techniques of pruning and word arrangement have been exhausted.

      • You say “The process of comparing uncertainties of competing theories (as part of comparing the whole of the theories) seems to me the solution Dr. Curry was hinting at without making explicit.”

        The problem is that there is no such process (yet) and she is making that point as well. She is posing a problem.

      • David Wojick

        Huh.

        Game theory has had processes to compare uncertainties for at least two decades.

        At least, that was the thinking when I last studied the topic.

        Though I admit, I’m by no means conversant enough with the topic to contribute solid suggestions to an advanced paper of this type.

      • Bart R. That is just the point. Dr. Curry lays out a variety of methods and principles that might be used or combined to do the job of elucidating uncertainties. Game theory may be another, although I don’t see how. No solution is proposed.

      • Brandon Shollenberger

        Fred Moolten, I think you need to go back and look again. Your deletions do not leave the paper at about 4,400 words. The paper was about 4,400 words long before your deletions. After them, it doesn’t even clear 2,900.

        I think you must have messed up your word count.

      • I checked it twice, but will check again. Even 2900 after the deletions leaves little room for adding recommendations, but it would obviate the need for even more drastic cuts.

      • Brandon Shollenberger

        I have her paper open in an editing program, and my displayed word count is 4,429. I did a quick read-through and cut out completely superfluous words and clauses, and that dropped the word count to under 4,300. There is no way the paper would have about 4,400 words after deleting entire paragraphs.

        Incidentally, I counted well over a dozen instances of “that” being used superfluously. Something like half of all instances of “that” can be removed without consequence.

      • You are right, it’s not 4400. Microsoft Word appears to have counted words I thought I had deleted. I get 2944 (after my suggested deletions), so there’s no need to make the paper much shorter. It can’t get much longer, though, and if some recommendations are to be added, some additional slight cutting of the text would be needed.

      • Now I’m getting 2776, after I subtracted a couple of additional truncations I had suggested. There’s a little wiggle room to add, although not much.

    • Fred, I do not agree with your recommendation for deleting the Kelly paragraph on alternatives on the grounds that it is self evident and uninformative. This is the one that includes “In general, how strongly a given body of evidence confirms a hypothesis is not solely a matter of the intrinsic character of the evidence and the hypothesis. Rather, it also depends on the presence or absence of plausible competitors in the field.”

      This paragraph captures the entire problem, which is that the IPCC ignores plausible competitors and just presents the evidence for AGW. If it were self evident then we would not be here. The IPCC reports are basically one half of a true assessment, a one-way consilience if you like, which is the essence of advocacy.

  39. This is a wonderful paper! It lays the groundwork for a serious attack on the problem, without claiming to solve it. Moreover, it is intelligible at a number of different levels of expertise.

    I would not include examples as they will immediately become the focus of debate, not to mention criticism. There is, after all, no consensus on where the uncertainty lies. That we try to figure that out is what you are calling for.

  40. Poach the Feynman quote from Goddard’s “Real Science” site:

    Science is the belief in the ignorance of experts.

    Put it in bold across the top of every page.

  41. Judith Curry

    Thanks for a very insightful treatise on uncertainty in climate science today.

    In the lead-in to one of the earlier threads on uncertainty, you wrote of the IPCC:

    Overconfidence comes across as selling snake oil.

    This is, indeed, a large part of the problem, as seen by the people surveyed in the poll you cited.

    In your latest “Reasoning on Uncertainty”, you write:

    In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument. Further, impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not explored in any meaningful way in terms of its impact on the confidence level expressed in the attribution statement.

    I would add that the great uncertainty regarding the impact of clouds, as both a feedback and a natural forcing factor, could be added to the list of ignored uncertainties.

    AR4 WG1 Ch. 9 (p.681) tells us:

    The simulations also show that it is not possible to reproduce the large 20th-century warming without anthropogenic forcing regardless of which solar or volcanic forcing reconstruction is used, stressing the impact of human activity on the recent warming.

    (and on pp.685-686)

    Climate simulations are consistent in showing that the global mean warming observed since 1970 can only be reproduced with combinations of external forcings that include anthropogenic forcings. This conclusion holds despite a variety of different anthropogenic forcings being included in these models. In all cases, the response to forcing from well-mixed greenhouse gases dominates the anthropogenic warming in the model. No climate model using natural forcings alone has reproduced the observed global warming trend in the second half of the 20th century.

    This is an “argument from ignorance”, i.e. “we can only explain it if we assume…”, because it ignores both the “known unknowns” (solar, clouds, etc.) as well as the “unknown unknowns” (or real black swans).

    IPCC does acknowledge uncertainty regarding the statistically indistinguishable early 20th-century warming cycle (p.691):

    Detection and attribution as well as modeling studies indicate more uncertainty regarding the causes of early 20th-century warming than the recent warming.

    To a skeptic this sounds like the following logic:

    1. Our models cannot explain the early 20th-century warming.
    2. We know that AGW was a primary cause of late 20th-century warming.
    3. How do we know this?
    4. Because our models cannot explain it any other way.

    As far as making future projections regarding the risk of future climate change you write:

    An important issue is to identify the potential black swans associated with natural climate variation under no human influence, on time scales of one to two centuries. Without even asking this question, judgments regarding the risk of anthropogenic climate change can be misleading to decision makers.

    This appears to me to be the basic weakness of all the IPCC projections, especially those made for policymakers in the simplified Summary for Policymakers report. Policymakers are asked to place confidence in long-range projections regarding the risks related to anthropogenic climate change when these are based on a basic lack of understanding of the uncertainties regarding long-range past observations.

    To the weakness of the consensus approach you write:

    The uncertainty associated with climate science and the range of decision making frameworks and policy options provides much fodder for disagreement. Here I argue that the IPCC’s consensus approach enforces overconfidence, marginalization of skeptical arguments, and belief polarization.

    This does, indeed, appear to be a weakness of the IPCC consensus approach.

    You propose the “argument justification” approach as an opportunity for skeptics to state their arguments as quasi-“falsifications” of the hypothesis, i.e. that AGW represents a serious potential threat to mankind and/or our environment, with the reasoning:

    Argument justification invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher its degree of justification. Argument justification provides an explicit and important role for skeptics, and a framework whereby scientists with a plurality of viewpoints participate in an assessment. This strategy has the advantage of moving science forward in areas where there are competing schools of thought. Disagreement then becomes the basis for focusing research in a certain area, and so moves the science forward.

    This sounds quite reasonable to me, but wouldn’t this mean that the dangerous AGW hypothesis as stated above is de facto the “null hypothesis” which should now be falsified? (I realize that Kevin Trenberth proposed carrying this over to severe weather events, which is another argument entirely.)

    Wouldn’t it be a more logical approach to define a “null hypothesis” which states, in effect, that AGW (caused principally by human CO2 emissions) does NOT represent a serious potential threat? This could be stated, for example, as a “2xCO2 climate sensitivity<2°C” hypothesis (IOW if 2xCO2<2°C there is no potential serious risk from AGW).

    It would then be up to those who have evidence to falsify this “null hypothesis” to present this evidence, to see whether or not it really falsifies the “null hypothesis”.

    The concept of “argument justification” would still be intact, but the “shoe would be on the foot” of those postulating that there is a potential danger from AGW to falsify the null hypothesis that this is not the case, which just seems a bit more logical to me than the other way around.

    As far as your premise is concerned, this could be included with a minor modification to the text, without changing any of the other very valid points you have made.

    Just my input.

    Max

  42. The evidence for abrupt hydrological (by definition climatic) change in the 20th century is conclusive. See for instance –
    http://www.sage.wisc.edu/pubs/articles/M-Z/Narisma/NarismaGRL2007.pdf

    There is a potted history of abrupt climate change here – http://www.aip.org/history/climate/rapid.htm

    Abrupt climate change occurred around 1910, the mid 1940’s, the late 1970’s and 1998/2001. Sea surface temperature, global surface temperature, hydrology and biology all show changes around those dates.

    Abrupt climate change is a threshold concept – which is defined as:

    ‘Transformative: Once understood, a threshold concept changes the way in which the student views the discipline.

    Troublesome: Threshold concepts are likely to be troublesome for the student. Perkins has suggested that knowledge can be troublesome when it is alien, incoherent or counter-intuitive.

    Irreversible: Given their transformative potential, a threshold concept is also likely to be irreversible, i.e. they are difficult to unlearn.

    Integrative: Threshold concepts, once learned, are likely to bring together different aspects of the subject that previously did not appear, to the student, to be related.

    Bounded: A threshold concept will probably delineate a particular conceptual space, serving a specific and limited purpose.

    Discursive: Meyer and Land [10] suggest that the crossing of a threshold will incorporate an enhanced and extended use of language.

    Reconstitutive: “Understanding a threshold concept may entail a shift in learner subjectivity, which is implied through the transformative and discursive aspects already noted. Such reconstitution is, perhaps, more likely to be recognised initially by others, and also to take place over time (Smith)”.

    Liminality: Meyer and Land [12] have likened the crossing of the pedagogic threshold to a ‘rite of passage’ (drawing on the ethnographical studies of Gennep and Turner in which a transitional or liminal space has to be traversed); “in short, there is no simple passage in learning from ‘easy’ to ‘difficult’; mastery of a threshold concept often involves messy journeys back, forth and across conceptual terrain. (Cousin [6])”.’ http://www.ee.ucl.ac.uk/~mflanaga/thresholds.html

    I suggest that we are all students of abrupt change – if we wish to understand the fundamental physics of climate. The two generic characteristics of the approach to a bifurcation point are increased variance of the observed signal, following from the fluctuation-dissipation theorem [Kubo, 1966], and the corresponding increased autocorrelation, related to critical slowing down. (Ditlevsen and Johnsen 2010 – http://www.leif.org/EOS/2010GL044486.pdf)

    Increased variance is a Dragon King – a meaningful outlier such as the 1997/98 El Niño. Autocorrelation is the equivalent of the synchronisation of Anastasios Tsonis. Both are necessary to identify chaotic bifurcation – otherwise known as a climate shift.

    The autocorrelation of a system increases as the system evolves towards a bifurcation point. At the bifurcation point there are large fluctuations and the system then settles into a new state. After 1998 the planet stopped warming as fast as it did in the previous decades – a different climate state. The cooler state will persist for a while and then shift to a different state again.

    Predicting when it will shift into a different state is very difficult – predicting what that state will be is, IMHO, impossible. The changes that initiate climate shifts might include solar irradiance, clouds, ice, orbits etc. At the point of bifurcation – which it seems likely will be upon us again in a decade or 3 at most – climate is extremely sensitive to small initial changes. In this sense anthropogenic greenhouse gases are without doubt a factor that might trigger large and abrupt climate change.
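
    A minimal sketch of the two precursors named above, rising variance and rising lag-1 autocorrelation in a sliding window. The synthetic series (an AR(1) process whose memory strengthens over time) is an assumption for illustration, not climate data:

    ```python
    # Toy early-warning indicators approaching a bifurcation: sliding-window
    # variance and lag-1 autocorrelation on a synthetic series.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    x = np.zeros(n)
    for t in range(1, n):
        phi = 0.2 + 0.7 * t / n  # memory strengthens: "critical slowing down"
        x[t] = phi * x[t - 1] + rng.normal()

    width = 200
    variances, autocorrs = [], []
    for start in range(n - width):
        w = x[start:start + width]
        variances.append(w.var())
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])

    print(f"variance: {variances[0]:.2f} (early) -> {variances[-1]:.2f} (late)")
    print(f"lag-1 autocorrelation: {autocorrs[0]:.2f} (early) -> {autocorrs[-1]:.2f} (late)")
    ```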

    If you don’t understand this concept – you can understand nothing about the fundamental mode of climate change.

    • Chief

      Figure out exactly how many G&T’s you had before writing this comment.

      It was the perfect level.

      “Une seule partie de la physique occupe la vie de plusieurs hommes, et les laisse souvent mourir dans l’incertitude. “ Voltaire.

      http://www.woodfortrees.org/plot/hadcrut3vgl/from:1850/plot/gistemp/from:1850/plot/hadsst2sh/from:1850

      From 1850 to approximately 1910, you have (to cite the most brilliant hydraulics operator I know) what appears to be “..autocorrelation of a system increases as the system evolves towards a bifurcation point.” Then, from 1910 to 1997, “..at the bifurcation point there are large fluctuations and..” then after 1998 what may be happening, “..the system then settles into a new state.”

      That’s one model.

      However, you leave out analysis of what happens as the external perturbation increases.

      Keep up the yeomanlike hydraulic analysis, Chief.

  43. “Framing” means “lying”. I can’t believe academics do not perceive the signal they send when they say they should restrict the body of evidence to fit “social” parameters. That screams watermelon.

  44. I thought it might be worthwhile to cite an example from AR4 of the IPCC perspective on its treatment of uncertainty. It comes from the Technical Summary and cites approaches that are also detailed in other sections and chapters. If the Climatic Change paper is to propose improvements, it’s probably useful to see what the IPCC already thinks it’s doing right:

    “Box TS.1: Treatment of Uncertainties in the Working Group I Assessment

    The importance of consistent and transparent treatment of uncertainties is clearly recognised by the IPCC in preparing its assessments of climate change. The increasing attention given to formal treatments of uncertainty in previous assessments is addressed in Section 1.6. To promote consistency in the general treatment of uncertainty across all three Working Groups, authors of the Fourth Assessment Report have been asked to follow a brief set of guidance notes on determining and describing uncertainties in the context of an assessment.[2] This box summarises the way that Working Group I has applied those guidelines and covers some aspects of the treatment of uncertainty specific to material assessed here.

    Uncertainties can be classified in several different ways according to their origin. Two primary types are ‘value uncertainties’ and ‘structural uncertainties’. Value uncertainties arise from the incomplete determination of particular values or results, for example, when data are inaccurate or not fully representative of the phenomenon of interest. Structural uncertainties arise from an incomplete understanding of the processes that control particular values or results, for example, when the conceptual framework or model used for analysis does not include all the relevant processes or relationships. Value uncertainties are generally estimated using statistical techniques and expressed probabilistically. Structural uncertainties are generally described by giving the authors’ collective judgment of their confidence in the correctness of a result. In both cases, estimating uncertainties is intrinsically about describing the limits to knowledge and for this reason involves expert judgment about the state of that knowledge. A different type of uncertainty arises in systems that are either chaotic or not fully deterministic in nature and this also limits our ability to project all aspects of climate change.

    The scientific literature assessed here uses a variety of other generic ways of categorising uncertainties. Uncertainties associated with ‘random errors’ have the characteristic of decreasing as additional measurements are accumulated, whereas those associated with ‘systematic errors’ do not. In dealing with climate records, considerable attention has been given to the identification of systematic errors or unintended biases arising from data sampling issues and methods of analysing and combining data. Specialised statistical methods based on quantitative analysis have been developed for the detection and attribution of climate change and for producing probabilistic projections of future climate parameters. These are summarised in the relevant chapters.

    The uncertainty guidance provided for the Fourth Assessment Report draws, for the first time, a careful distinction between levels of confidence in scientific understanding and the likelihoods of specific results. This allows authors to express high confidence that an event is extremely unlikely (e.g., rolling a dice twice and getting a six both times), as well as high confidence that an event is about as likely as not (e.g., a tossed coin coming up heads). Confidence and likelihood as used here are distinct concepts but are often linked in practice.

    The standard terms used to define levels of confidence in this report are as given in the IPCC Uncertainty Guidance Note, namely:

    Confidence Terminology – Degree of confidence in being correct
    Very high confidence – At least 9 out of 10 chance
    High confidence – About 8 out of 10 chance
    Medium confidence – About 5 out of 10 chance
    Low confidence – About 2 out of 10 chance
    Very low confidence – Less than 1 out of 10 chance

    Note that ‘low confidence’ and ‘very low confidence’ are only used for areas of major concern and where a risk-based perspective is justified.

    Chapter 2 of this report uses a related term ‘level of scientific understanding’ when describing uncertainties in different contributions to radiative forcing. This terminology is used for consistency with the Third Assessment Report, and the basis on which the authors have determined particular levels of scientific understanding uses a combination of approaches consistent with the uncertainty guidance note as explained in detail in Section 2.9.2 and Table 2.11.

    The standard terms used in this report to define the likelihood of an outcome or result where this can be estimated probabilistically are:

    Likelihood Terminology – Likelihood of the occurrence/outcome
    Virtually certain – > 99% probability
    Extremely likely – > 95% probability
    Very likely – > 90% probability
    Likely – > 66% probability
    More likely than not – > 50% probability
    About as likely as not – 33 to 66% probability
    Unlikely – < 33% probability
    Very unlikely – < 10% probability
    Extremely unlikely – < 5% probability
    Exceptionally unlikely – < 1% probability

    The terms ‘extremely likely’, ‘extremely unlikely’ and ‘more likely than not’ as defined above have been added to those given in the IPCC Uncertainty Guidance Note in order to provide a more specific assessment of aspects including attribution and radiative forcing.

    Unless noted otherwise, values given in this report are assessed best estimates and their uncertainty ranges are 90% confidence intervals (i.e., there is an estimated 5% likelihood of the value being below the lower end of the range or above the upper end of the range). Note that in some cases the nature of the constraints on a value, or other information available, may indicate an asymmetric distribution of the uncertainty range around a best estimate. In such cases, the uncertainty range is given in square brackets following the best estimate.

    ‘Radiative forcing’ is a measure of the influence a factor has in altering the balance of incoming and outgoing energy in the Earth-atmosphere system and is an index of the importance of the factor as a potential climate change mechanism. Positive forcing tends to warm the surface while negative forcing tends to cool it. In this report, radiative forcing values are for changes relative to a pre-industrial background at 1750, are expressed in Watts per square metre (W m–2) and, unless otherwise noted, refer to a global and annual average value. See Glossary for further details.

    The IPCC Uncertainty Guidance Note is included in Supplementary Material for this report.”

    One question that arises from the foregoing is the extent to which the IPCC distinction between "value" and "structural" uncertainties resembles the distinction in the draft between "statistical" and "scenario" uncertainties.

    Here is a section from TS4 regarding anthropogenic vs natural (i.e., alternative) factors:
    "Evidence of the effect of external influences, both anthropogenic and natural, on the climate system has continued to accumulate since the TAR. Model and data improvements, ensemble simulations and improved representations of aerosol and greenhouse gas forcing along with other influences lead to greater confidence that most current models reproduce large-scale forced variability of the atmosphere on decadal and inter-decadal time scales quite well. These advances confirm that past climate variations at large spatial scales have been strongly influenced by external forcings. However, uncertainties still exist in the magnitude and temporal evolution of estimated contributions from individual forcings other than well-mixed greenhouse gases, due, for example, to uncertainties in model responses to forcing. Some potentially important forcings such as black carbon aerosols have not yet been considered in most formal detection and attribution studies. Uncertainties remain in estimates of natural internal climate variability. For example, there are discrepancies between estimates of ocean heat content variability from models and observations, although poor sampling of parts of the world ocean may explain this discrepancy. In addition, internal variability is difficult to estimate from available observational records since these are influenced by external forcing, and because records are not long enough in the case of instrumental data, or precise enough in the case of proxy reconstructions, to provide complete descriptions of variability on decadal and longer time scales (see Figure TS.22 and Box TS.7). {8.2–8.4, 8.6, 9.2–9.4} "

    Elsewhere, the IPCC acknowledges the use of both subjective and objective measures in ascertaining confidence in a conclusion.

    My reason for quoting these passages (among presumably others of similar theme) is that IPCC defenders are likely to assert that they are already aware of uncertainty issues and are addressing them. I think it will be important to specify where they are not in order to effect a change.

    • John Carpenter

      Fred,

      It seems to me this is nothing more than a rating system. Where does the IPCC address how the probabilities are derived? What is the method? Am I missing something in what you posted?

      • John – My comments were designed mainly as a means of asking Judith Curry how her assessment compares with actual IPCC guidelines, with the thought that the comparison might be useful to her in writing a final draft. The details are in individual chapters, but I only wanted here to call attention to the general nature of the guidelines.

      • John Carpenter

        Fred,

        I take JC’s paper as an alternative methodology to ascertain/quantify uncertainty compared to what the IPCC currently uses. I guess I miss the connection you are making.

    • John Carpenter

      Fred,

      “Elsewhere, the IPPC acknowledges the use of both subjective and objective measures in ascertaining confidence in a conclusion.”

      Does this answer my question above? What are these methods?

  45. “It is because of this that the mere articulation of a plausible alternative hypothesis can dramatically reduce how likely the original hypothesis is on one’s present evidence.”

    One would hope so – but I rather think that the committed believers are part of the problem not part of the solution.

    We are certainly looking for a greater recognition of background climate variability – by definition chaotic bifurcation – in AR5. AR4 continued to predict warming of 0.2 degrees C/decade for the first decades of the 21st century. Never going to happen. The first peer-reviewed reference below sets the scope of changes in the Pacific. The other four specifically address a continued lack of warming in the coming decade or decades.

    ‘This index captures the 1976–1977 “El Niño–Southern Oscillation (ENSO)-like” warming shift of sea surface temperatures (SST) as well as a more recent transition of opposite sign in the 1990s. Utilizing measurements of water vapor, wind speed, precipitation, long-wave radiation, as well as surface observations, our analysis shows evidence of the atmospheric changes in the mid-1990s that accompanied the “ENSO like” interdecadal SST changes.’

    Burgman, R. J., A. C. Clement, C. M. Mitas, J. Chen, and K. Esslinger (2008), Evidence for atmospheric variability over the Pacific on decadal timescales, Geophys. Res. Lett., 35, L01704, doi:10.1029/2007GL031830.

    ‘Our results suggest that global surface temperature may not increase over the next decade, as natural climate variations in the North Atlantic and tropical Pacific temporarily offset the projected anthropogenic warming.’

    N. S. Keenlyside, M. Latif, J. Jungclaus, L. Kornblueh & E. Roeckner (2008), Advancing decadal-scale climate prediction in the North Atlantic sector, Nature Vol 453| 1 May 2008| doi:10.1038/nature06921

    ‘We find that in those cases where the synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability. The latest such event is known as the great climate shift of the 1970s.’

    Anastasios A. Tsonis,1 Kyle Swanson,1 and Sergey Kravtsov1 (2007), A new dynamical mechanism for major climate shifts, GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L13705, doi:10.1029/2007GL030288

    ‘If as suggested here, a dynamically driven climate shift has occurred, the duration of similar shifts during the 20th century suggests the new global mean temperature trend may persist for several decades. Of course, it is purely speculative to presume that the global mean temperature will remain near current levels for such an extended period of time. Moreover, we caution that the shifts described here are presumably superimposed upon a long term warming trend due to anthropogenic forcing. However, the nature of these past shifts in climate state suggests the possibility of near constant temperature lasting a decade or more into the future must at least be entertained. The apparent lack of a proximate cause behind the halt in warming post 2001/02 challenges our understanding of the climate system, specifically the physical reasoning and causal links between longer time-scale modes of internal climate variability and the impact of such modes upon global temperature.’

    Swanson, K. L., and A. A. Tsonis (2009), Has the climate recently shifted?, Geophys. Res. Lett., 36, L06711, doi:10.1029/2008GL037022.

    ‘A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT rise.’

    Takashi Mochizuki, Masayoshi Ishii, Masahide Kimoto, Yoshimitsu Chikamoto, Masahiro Watanabe, Toru Nozawa, Takashi T. Sakamoto, Hideo Shiogama, Toshiyuki Awaji, Nozomi Sugiura, Takahiro Toyoda, Sayaka Yasunaka, Hiroaki Tatebe, and Masato Mori (2010), Pacific decadal oscillation hindcasts relevant to near-term climate prediction, PNAS, February 2, 2010, vol. 107, no. 5, doi:10.1073/pnas.0906531107

    • Chief

      Thanks for these valuable references that contradict IPCC’s 0.2 deg C/decade warming for the beginning of the 21st century.

    • Chief,

      Thanks, you continue to remind us that weather and climate change are both chaotic and unpredictable.

  46. Richard Hill

    There is an issue with combining uncertainties that I didn’t see in the text. If you bet on several different horse races, your chance of picking a winner is improved. However, if you are betting on just one horse in one race, it is different. If one horse has a 1 percent chance of a broken leg, a 1 percent chance of an embolism, a 1 percent chance of being tripped and a 1 percent chance of being blocked, then its chance of escaping all of these isn’t 99 percent, or 97 percent, but 0.99^4, about 96 percent.
    In the case of climate change causes, the probabilities need to be multiplied.
    I’d like to see a simple table: cause, probability, and the total.
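
    For concreteness, a minimal Python sketch of that table and the multiplicative combination, using the four hypothetical 1 percent risks above:

    risks = {"broken leg": 0.01, "embolism": 0.01,
             "tripped": 0.01, "blocked": 0.01}

    p_clear = 1.0
    print("cause         P(mishap)")
    for cause, p in risks.items():
        print(f"{cause:<14}{p:.2f}")
        p_clear *= 1.0 - p  # independent risks combine multiplicatively

    print(f"P(avoiding all four) = {p_clear:.4f}")  # 0.99**4 = 0.9606, about 96 percent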

    • Richard Hill

      There is an extensive literature on this topic in industrial engineering, starting with Deming in the USA and much further developed in Japan. If anyone is interested, I can supply more detailed references.

  47. How come HadCRUT3’s mean temperature for February has not yet come out? Does it take more than a month to publish?
    http://bit.ly/55TKj6

    Roy’s is –0.02 deg C, a drop of 0.68 deg C from the monthly maximum of 0.66 for February 1998.
    http://bit.ly/334m1a

    With a drop of 0.68 deg C, where is the global warming?

  48. I’ve said it before and I’ll say it again: any mathematical estimate of uncertainty requires an “expert panel” to provide some level of certainty to the equation. Those panels are subject to selection bias, and those biases predispose the outcome, which is frequently unhelpful. Why is it that we cannot live with uncertainty? Why do we need to give uncertainty a number?

  49. John Whitman

    Judith,

    Thanks for opening up the rough draft of your piece to us.

    Since the piece is specifically directed at the IPCC, and the message you wish to deliver appears to be centered on how the IPCC frames its premises, purpose, procedures, processes, publication, publicity and plausibility, I would suggest restructuring the piece around framing. Being more explicit about IPCC framing as the main discussion point would enhance the discussion, with a value-added idea being alternate parallel framing to provide competition, which would promote balance and mitigate single-source biasing. Here is a suggested structure for your piece; it is rather wordy, so consider it a very drafty draft. Take care.

    A. Introduction

    B. The Concept of Framing in the IPCC

    C. Independent Association for Alternate Framing Concepts

    D. Integrating an Assessment Compendium of Both IPCC and Independent Associations

    E. Competing Assessments for Balanced and Unbiased Reporting to Policy Makers

    F. Conclusions

    Best of luck.

    John

  50. ‘Competing uncertainties’ is a nonsensical phrase.

    One may have competing hypotheses; the probability of one hypothesis being the ‘truest one’, absent falsification of any plausible enunciated alternative hypothesis, is lower than before the later hypothesis was stated. But the uncertainties do not compete; they multiply, as Richard Hill alludes above (http://judithcurry.com/2011/03/24/reasoning-about-climate-uncertainty-draft/#comment-57722).

    Thus, a hundred warming hypotheses and a hundred cooling hypotheses, and all other hypotheses of radical and abrupt change reduce the certainty of the ‘status quo’ hypotheses.

    In Risk analyses, the enunciation of plausible new hypotheses has a real and immediate impact on costs.

    These costs will be in the form of Risk taken into account by real, current traders and investors.

    It will shape and inform their opinions, and it will shape the economic climate, once the credibility of these hypotheses is established and confirmed.

    Higher uncertainty will mean higher cost of money and higher inflation, reduced wealth in the market and reduced economic stimulus.

    Policy analysts do not by and large give a flying fig about the climate science, or warmer or colder or sudden abrupt change or stability.

    Policy analysts, the important ones anyway, care primarily about the investment climate and the shape of the economy.

    Therefore, activities that reduce, compensate for, address, calm, clarify or falsify uncertainty — not hypotheses — are what matter to Policy.

    As Dr. Curry has focused her attention on this topic, she will be important to Policy, eventually, if she continues and makes credible headway.

  51. Judith Curry

    I agree with Rob B’s comment that a couple of real examples are needed to get the point across that IPCC has understated uncertainty where it exists.

    I believe that the target should be the “Summary for Policymakers” report, in which this overconfidence is most evident and which has the greatest impact on policymakers as well as the general public, but with specific references to the pertinent chapters of the main AR4 report.

    You’ve written a very convincing and comprehensive treatise on what needs to change and why. It should make impact and (hopefully) get results.

    The draft as written has about 4,500 words. The target is 3,000 words or less, so some drastic surgery is required, as Fred Moolten has pointed out.

    But, unlike Fred Moolten’s suggested changes, I would think that the basic criticisms of the IPCC approach need to remain in the text, even if unnecessary verbiage and a couple of paragraphs citing references plus quotes can be cut out.

    So here goes. With the changes below, I get you down from 4,500 to 3,690 words. The rest can only be done by cutting out entire paragraphs, and this has to be done while still maintaining the overall flow.

    Lots of success with this.

    Max

    1. Introduction
    The challenge of framing and communicating uncertainty about climate change is a symptom of the challenges of understanding and reasoning about such a complex system. Our understanding of the complex climate system is hampered by a myriad of [many] uncertainties, indeterminacy, ignorance, and cognitive biases. Complexity of the climate system arises from the very large number of degrees of freedom, the number of subsystems and complexity in linking them, and the nonlinear and chaotic nature of the atmosphere and ocean. A complex system exhibits behavior not obvious from the properties of its individual components, whereby larger scales of organization influence smaller ones and structure at all scales is influenced by feedback loops among the structures. The epistemology of computer simulations of complex systems is a new and active area research [new] among scientists, philosophers, and the artificial intelligence community. How to reason [Reasoning] about the complex climate system and its computer simulations is not simple or obvious.

    How has the IPCC dealt with the challenge of uncertainty in the complex climate system? Until the time of the IPCC TAR and the Moss-Schneider (2000) Guidance paper, uncertainty was dealt with in an ad hoc manner. The Moss-Schneider guidelines raised a number of important issues regarding the identification and communication of uncertainties. However, the actual implementation of this guidance in the TAR and AR4 adopted a subjective perspective or “judgmental estimates of confidence.” Defenders of the IPCC uncertainty characterization argue that subjective consensus expressed using simple terms is understood more easily by policy makers.

    The [IPCC] consensus approach used by the IPCC to characterize uncertainty has received a number of [numerous] criticisms. Van der Sluijs et al. (2010b) finds that the IPCC consensus strategy underexposes scientific uncertainties and dissent, making the chosen policy vulnerable to scientific error and limiting the political playing field. Van der Sluijs (2010a) argues that matters on which no consensus can be reached continue to receive too little attention by the IPCC, even though this dissension can be highly policy-relevant. Oppenheimer et al. (2007) point out the need to guard against overconfidence and argue that the IPCC consensus emphasizes expected outcomes, whereas it is equally important that policy makers understand the more extreme possibilities that consensus may exclude or downplay. Gruebler and Nakicenovic (2001) opine that “there is a danger that the IPCC consensus position might lead to a dismissal of uncertainty in favor of spuriously constructed expert opinion.”

    While the policy makers’ desire for a clear message from the scientists is understandable, the consensus approach being used by the IPCC has not produced a thorough portrayal of [oversimplified] the complexities of the problem and the associated uncertainties in our understanding. While the public may not understand the complexity of the science or be culturally predisposed to accept the consensus, [but] they can certainly understand the vociferous arguments over the science portrayed by the media. Better characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards reducing [reduce] the “noise” and animosity portrayed in the media that fuels the public distrust of climate science and acts to stymie the policy process. Not to mention that an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how to best target resources to improve understanding. Further, improved understanding and characterization of uncertainty is critical information for the development of robust policy options.

    Indeterminacy and framing of the climate change problem
    An underappreciated aspect of characterizing uncertainty is associated with the questions that do not even get asked. Wynne (1992) argues that scientific knowledge typically investigates “a restricted agenda of defined uncertainties—ones that are tractable— leaving invisible a range of other uncertainties, especially about the boundary conditions of applicability of the existing framework of knowledge to new situations.” Wynne refers to this as indeterminacy, which arises from the “unbounded complexity of causal chains and open networks.” Indeterminacies can arise from not knowing whether the type of scientific knowledge and the questions posed are appropriate and sufficient for the circumstances and the social context in which the knowledge is applied.

    In the climate change problem, indeterminacy is associated with the way the climate change problem has been framed. Frames are organizing principles that enable a particular interpretation of an issue. De Boerg et al. (2010) state that: “Frames act as organizing principles that shape in a “hidden” and taken-for-granted way how people conceptualize an issue.” Risbey et al. (2005)??? argue that decisions on problem framing influence the choice of models and what knowledge is considered relevant to include in the analysis. De Boerg et al. further state that frames can express how a problem is stated, who is expected to make a statement about it, what questions are relevant, and what range of answers might be appropriate.

    The decision making framework provided by the UNFCCC Treaty provides the rationale for framing the IPCC assessment of climate change and its uncertainties, in terms of identifying dangerous climate change and providing input for decision making regarding CO2 stabilization targets. In the context of this framing, certain key scientific questions receive little attention. In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument. [A specific example is the acknowledgement that the models cannot fully explain the early 20th-century warming, yet the statistically indistinguishable late 20th-century warming is attributed to anthropogenic forcing, because the models cannot otherwise explain it.] Further, impacts of the low level of understanding of solar variability and its potential indirect effects on the climate are not [meaningfully] explored in any meaningful way [nor is the large uncertainty regarding clouds] in terms of its impact on the confidence level expressed in [of] the attribution statement. In the WG II Report, the focus is [focuses] on attributing possible dangerous impacts to AGW, with little focus in the summary statements on how warming might actually be beneficial to certain regions or in certain sectors.

    Further, the decision analytic framework associated with setting a CO2 stabilization target focuses research and analysis on using expert judgment to identify a most likely value of sensitivity/warming and narrowing the range of expected values, rather than fully exploring the uncertainty and the possibility for black swans (Taleb 2007) and dragon kings (Sornette 2009). The concept of imaginable surprise was discussed in the Moss-Schneider uncertainty guidance documentation, but consideration of such possibilities seems [was] largely to have been ignored by the AR4 report. The AR4 focused on what was “known” to a significant confidence level. The most visible failing of this strategy was neglect of the possibility of rapid melting of ice sheets on sea level rise in the Summary for Policy Makers (e.g. Oppenheimer et al. 2007; Betz 2009). An important issue is to identify the potential black swans [impacts] associated with natural climate variation under no human influence, on time scales of one to two centuries. Without even asking this question, judgments regarding the risk of anthropogenic climate change can be [are] misleading to decision makers.

    The presence of sharp conflicts with regards to both the science and policy reflects an overly narrow framing of the climate change problem. Until the problem is reframed or multiple frames are considered by the IPCC, the scientific and policy debate will continue to ignore crucial elements of the problem, with [overstated] confidence levels that are too high.

    Uncertainty, ignorance and confidence
    The Uncertainty Guidance Paper by Moss and Schneider (2000) recommended a common vocabulary to express quantitative levels of confidence based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts. This assessment strategy does not include any systematic analysis of the types and levels of uncertainty and the quality of the evidence, and more importantly dismisses indeterminacy and ignorance as important factors in assessing these confidence levels. In the context of the narrow framing of the problem, this uncertainty assessment strategy promotes the consensus into becoming a self-fulfilling prophecy.

    The uncertainty [paper] guidance provided for the IPCC AR4 distinguished between levels of confidence in scientific understanding and the likelihoods of specific results. In practice, primary conclusions in the AR4 included a mixture of likelihood and confidence statements that are ambiguous. Curry and Webster (2010) have raised specific issues with regards to [regarding] the statement “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” Risbey and Kandlikar (2007) describe ambiguities in actually applying likelihood and confidence, including situations where likelihood and confidence [these] cannot be fully separated, likelihood levels contain implicit confidence levels, and interpreting uncertainty when there are two levels of imprecision is in some cases rather difficult.

    Numerous methods of categorizing risk and uncertainty have been described in the context of different disciplines and various applications; for a recent review, see Spiegelhalter and Riesch (2011). Of particular relevance for climate change are schemes for analyzing uncertainty when conducting risk analyses. My primary concerns about the IPCC’s characterization of uncertainty are twofold:
    · lack of discrimination between statistical uncertainty and scenario uncertainty
    · failure to meaningfully address the issue of ignorance

    Following Walker et al. (2003), statistical uncertainty is distinguished from scenario uncertainty, whereby scenario uncertainty implies that it is not possible [impossible] to formulate the probability of occurrence [of] particular outcomes. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop in the future. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood. Wynne (1992) defines risk as knowing the odds (analogous to Walker et al.’s statistical uncertainty), and uncertainty as not knowing the odds but knowing the main parameters (analogous to Walker et al.’s scenario uncertainty).

    Stainforth et al. (2007) argue that model inadequacy and an insufficient number of simulations in the ensemble preclude producing meaningful probability distributions from the frequency of model outcomes of future climate. Stainforth et al. state: “[G]iven nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . . Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify.” Given climate model inadequacies and uncertainties, Betz (2009) argues for the logical necessity of considering climate model simulations as modal statements of possibilities, which is consistent with scenario uncertainty. Stainforth et al. make an equivalent statement: “Each model run is of value as it presents a ‘what if’ scenario from which we may learn about the model or the Earth system.” Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a PDF from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals. In the presence of scenario uncertainty, which characterizes climate model simulations, attempts to produce a PDF for [quantify] climate sensitivity (e.g. Annan and Hargreaves 2010) are arguably misguided and misleading.

    Ignorance is that which is not known; Wynne (1992) finds ignorance to be endemic because scientific knowledge must set the bounds of uncertainty in order to function. Walker et al. (2003) categorize the following different levels of ignorance. Total ignorance implies a deep level of uncertainty, to the extent that we do not even know that we do not know. Recognized ignorance refers to fundamental uncertainty in the mechanisms being studied and a weak scientific basis for developing scenarios. Reducible ignorance may be resolved by conducting further research, whereas irreducible ignorance implies that research cannot improve knowledge (e.g. what happened prior to the big bang). Bammer and Smithson (2008) further distinguish between conscious ignorance, where we know we don’t know what we don’t know, versus unacknowledged or meta-ignorance where we don’t even consider the possibility of error.

    While the Kandlikar et al. (2005) uncertainty schema explicitly includes effective ignorance in its uncertainty categorization, the AR4 uncertainty guidance (which is based upon Kandlikar et al.) neglects to include ignorance in the characterization of uncertainty. Hence IPCC confidence levels determined based on the amount of evidence (number of sources of information) and the degree of agreement (consensus) among experts do not explicitly account for indeterminacy and ignorance, although recognized areas of ignorance are mentioned in some parts of the report (e.g. the possibility of indirect solar effects in sect xxxx of the AR4 WG1 Report). Overconfidence is an inevitable result of neglecting indeterminacy and ignorance.

    A comprehensive approach to uncertainty management and elucidation of the elements of uncertainty is described by the NUSAP scheme, which includes methods to determine the pedigree and quality of the relevant data and methods used (e.g. van der Sluijs et al. 2005a,b). The complexity of the NUSAP scheme arguably precludes its widespread adoption by the IPCC. The challenge is to characterize uncertainty in a complete way while retaining sufficient simplicity and flexibility for its widespread adoption. In the context of risk analysis, Speigelhalter [Spiegelhalter] and Riesch (2011) describe a scheme for characterizing uncertainty that covers the range from complete numerical formalization of probabilities to indeterminacy and ignorance, and includes the possibility of unspecified but surprising events. Quality of evidence is an important element of the NUSAP scheme and the scheme described by Spiegelhalter and Riesch (2011). The GRADE scale of Guyatt et al. (2008) provides a simple yet useful method for judging quality of evidence, with a more complex scheme for judging quality utilized by NUSAP.

    Judgmental estimates of confidence need to consider not only the amount of evidence for and against and the degree of consensus, but also need to consider the adequacy of the knowledge base (which includes the degree of uncertainty and ignorance), and also the quality of the information that is available. A practical way of incorporating these elements into an assessment of confidence is provided by Egan (2005). The crucial difference between this approach and the consensus-based approach is that the dimension associated with the degree of consensus among experts is replaced by specific judgments about the adequacy of the knowledge base and the quality of the information available.

    Consensus and disagreement
    The uncertainty associated with climate science and the range of decision making frameworks and policy options provides much fodder for disagreement. Here I argue that the IPCC’s consensus approach enforces overconfidence, marginalization of skeptical arguments, and belief polarization. The role of cognitive biases (e.g. Tversky and Kahneman 1974) has received some attention in the context of the climate change debate, as summarized by Morgan et al. (2009, chapter xxx).

    However, the broader issues of the epistemology and psychology of consensus and disagreement have received little attention in the context of the climate change problem.
    Kelly (2005, 2008) provides some general insights into the sources of belief polarization that are relevant to the climate change problem. Kelly (2008) argues that “a belief held at earlier times can skew the total evidence that is available at later times, via characteristic biasing mechanisms, in a direction that is favorable to itself.” Kelly (2008) also finds that “All else being equal, individuals tend to be significantly better at detecting fallacies when the fallacy occurs in an argument for a conclusion which they disbelieve, than when the same fallacy occurs in an argument for a conclusion which they believe.” Kelly (2005) provides insights into the consensus building process: “As more and more peers weigh in on a given issue, the proportion of the total evidence which consists of higher order psychological evidence [of what other people believe] increases, and the proportion of the total evidence which consists of first order evidence decreases . . . At some point, when the number of peers grows large enough, the higher order psychological evidence will swamp the first order evidence into virtual insignificance.” Kelly (2005) concludes: “Over time, this invisible hand process tends to bestow a certain competitive advantage to our prior beliefs with respect to confirmation and disconfirmation. . . In deciding what level of confidence is appropriate, we should take into account the tendency of beliefs to serve as agents in their own confirmation.”

    So what are the implications of Kelly’s arguments for consensus and disagreement associated with climate change and the IPCC? Cognitive biases in the context of an institutionalized consensus building process have arguably resulted in the consensus becoming increasingly confirmed in a self-reinforcing way. The consensus process of the IPCC has marginalized dissenting skeptical voices, who are commonly dismissed as “deniers” (e.g. Hasselmann 2010). This “invisible hand” that marginalizes skeptics is operating to the substantial detriment of climate science, not to mention the policies that are informed by climate science. The importance of skepticism is aptly summarized by Kelly (2008): “all else being equal, the more cognitive resources one devotes to the task of searching for alternative explanations, the more likely one is to hit upon such an explanation, if in fact there is an alternative to be found.”

    The intense disagreement between scientists [supporting] that support the IPCC consensus and skeptics becomes increasingly polarized as a result of the “invisible hand” described by Kelly (2005, 2008). Disagreement itself can be evidence about the quality and sufficiency of the evidence. Disagreement can arise from disputed interpretations as proponents are biased by excessive reliance on a particular piece of evidence. Disagreement can result from “conflicting certainties,” whereby competing hypotheses are each buttressed by different lines of evidence, each of which is regarded as “certain” by its proponents. Conflicting certainties arise from differences in chosen assumptions, neglect of key uncertainties, and the natural tendency to be overconfident about how well we know things (e.g. Morgan 1990).

    What is desired is a justified interpretation of the available evidence, which is completely traceable throughout the process in terms of the quality of the data, modeling, and reasoning process. A thorough assessment of uncertainty by proponents of different sides in a scientific debate will reduce the level of disagreement.

    Reasoning about uncertainty
    The objective of the IPCC reports is to assess existing knowledge about climate change. The IPCC assessment process combines a compilation of evidence with subjective Bayesian reasoning. This process is described by Oreskes (2007) as presenting a “consilience of evidence” argument, which consists of independent lines of evidence that are explained by the same theoretical account. Oreskes draws an analogy for this approach with what happens in a legal case.

    The consilience of evidence argument is not convincing unless it includes parallel evidence-based analyses for competing hypotheses, for the simple reason that any system that is more inclined to admit one type of evidence or argument rather than another tends to accumulate variations in the direction towards which the system is biased. In a Bayesian analysis with multiple lines of evidence, you could conceivably come up with enough multiple lines of evidence to produce a high confidence level for each of two opposing arguments, which is referred to as the ambiguity of competing certainties. If you acknowledge the substantial level of ignorance surrounding this issue, the competing certainties disappear (this is the failing of Bayesian analysis, it doesn’t deal well with ignorance) and you are left with a lower confidence level.

    To be convincing, the arguments for climate change need to [should] change from the burden of proof model to a model whereby a thesis is supported explicitly by addressing the issue of what the case would have to look like for the thesis to be false, in the mode of argument justification (e.g. Betz 2010). Argument justification [This] invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher its degree of justification. Argument justification provides an explicit and important role for skeptics, and a framework whereby scientists with a plurality of viewpoints participate in an assessment. This strategy has the advantage of moving science forward in areas where there are competing schools of thought. Disagreement then becomes the basis for focusing research in a certain area, and so moves the science forward.

    The rationale for parallel evidence-based analyses of competing hypotheses and argument justification is eloquently described by Kelly’s (2008) key epistemological fact: “For a given body of evidence and a given hypothesis that purports to explain that evidence, how confident one should be that the hypothesis is true on the basis of the evidence depends on the space of alternative hypotheses of which one is aware. In general, how strongly a given body of evidence confirms a hypothesis is not solely a matter of the intrinsic character of the evidence and the hypothesis. Rather, it also depends on the presence or absence of plausible competitors in the field. It is because of this that the mere articulation of a plausible alternative hypothesis can dramatically reduce how likely the original hypothesis is on one’s present evidence.”

    Reasoning about uncertainty in the context of evidence based analyses and the formulation of confidence levels is not at all straightforward for the climate problem. Because of the complexity of the climate problem, Van der Sluijs et al. (2005) argue that uncertainty methods such as subjective probability or Bayesian updating alone are not suitable for this class of problems, because the unquantifiable uncertainties and ignorance dominate the quantifiable uncertainties. Any quantifiable Bayesian uncertainty analysis “can thus provide only a partial insight into what is a complex mass of uncertainties” (van der Sluijs et al. 2005).

    Given the dominance of unquantifiable uncertainties in the climate problem, expert judgments about confidence levels are made in the absence of a comprehensive quantitative uncertainty analysis. Because of this complexity, and in the absence of a formal logical hypothesis hierarchy in the IPCC assessment, individual experts use different mental models and heuristics for evaluating the interconnected evidence. Biases can abound when reasoning and making judgments about such a complex problem. Bias can occur through excessive reliance on a particular piece of evidence, the presence of cognitive biases in heuristics, and logical fallacies and errors including circular reasoning. Further, the consensus building process itself can be a source of bias (Kelly 2005).

    Identifying the most important uncertainties and introducing a more objective assessment of confidence levels requires introducing a more disciplined logic into the climate change assessment process. A useful approach would be the development of hierarchical logical hypothesis models that provide a structure for assembling the evidence and arguments in support of the main hypotheses or propositions. A logical hypothesis hierarchy (or tree) links the root hypothesis to lower level evidence and hypotheses. While developing a logical hypothesis tree is somewhat subjective and involves expert judgments, the evidential judgments are made at a lower level in the logical hierarchy. Essential judgments and opinions relating to the evidence and the arguments linking the evidence are thus made explicit, lending structure and transparency to the assessment. To the extent that the logical hypothesis hierarchy decomposes arguments and evidence to the most elementary propositions, the sources of disputes are easily illuminated and potentially minimized.

    Bayesian Network Analysis using weighted binary tree logic is one possible choice for such an analysis. However, a weakness of Bayesian Networks is their two-valued logic and inability to deal with ignorance, whereby evidence is either for or against the hypotheses. An influence diagram is a generalization of a Bayesian Network that represents the relationships and interactions between a series of propositions or evidence (Spiegelhalter, 1986). Cui and Blockley (1990) introduce interval probability three-valued logic into an influence diagram with an explicit role for uncertainties (the so-called “Italian flag”) that recognizes that evidence may be incomplete or inconsistent, of uncertain quality or meaning. Combination of evidence proceeds generally as a Bayesian combination, but combinations of evidence are modified by the factors of sufficiency, dependence and necessity. Practical applications to the propagation of evidence using interval probability theory are described by Bowden (2004) and Egan (2005).

    An application of influence diagrams and interval probability theory to a climate relevant problem is described by Hall et al. (2006), regarding a study commissioned by the UK Government that sought to establish the extent to which very severe floods in the UK in October–November 2000 were attributable to climate change. Hall et al. used influence diagrams to represent the evidential reasoning and uncertainties in responding to this question[,] Three alternative approaches to the mathematization of uncertainty in influence diagrams were compared, including Bayesian belief networks and two interval probability methods (Interval Probability Theory and Support Logic Programming). Hall et al. argue [arguing] that “interval probabilities [probability methods (Interval Probability Theory and Support Logic Programming)] represent ambiguity and ignorance in a more satisfactory manner than the conventional Bayesian alternative . . . and are attractive in being able to represent in a straightforward way legitimate imprecision in our ability to estimate probabilities.”

    Hall et al. concluded that influence diagrams can help to synthesize complex and contentious arguments of relevance to climate change. Breaking down and formalizing expert reasoning can facilitate dialogue between experts, policy makers, and other decision stakeholders. The procedure used by Hall et al. supports transparency and clarifies uncertainties in disputes, in a way that expert judgment about high level root hypotheses does not.

    Conclusions
    In this paper I have argued that the problem of communicating climate uncertainty is fundamentally a problem of how we have framed the climate change problem, how we have characterized uncertainty, how we reason about uncertainty, and the consensus building process itself. As a result of these problems, the IPCC has not produced a thorough portrayal of the complexities of the problem and the associated uncertainties in our understanding. Improved characterization of uncertainty and ignorance and a more realistic portrayal of confidence levels could go a long way towards [help in] reducing the “noise” and animosity portrayed in the media that fuels the public distrust of climate science and acts to stymie the policy process. Not to mention that an improved characterization of uncertainty and ignorance would promote a better overall understanding of the science and how to best target resources to improve understanding.

    Improved understanding and characterization of uncertainty is critical information for the development of robust policy options. When working with policy makers and communicators, it is essential not to fall into the trap of acceding to inappropriate demands for certainty. Wynne (1992) makes an erudite statement: “the built-in ignorance of science towards its own limiting commitments and assumptions is a problem only when external commitments are built on it as if such intrinsic limitations did not exist.”

    • It’s actually down to 3,610 words with the changes, so there is still some way to go.

      Max

  52. jens raunsø jensen

    Thanks, Judith for an inspiring website and for sharing your thoughts.

    I agree with manacker (March 27, 12:14 am) and others that real examples of understated uncertainty would be useful. One such example with considerable implications is how the IPCC handles hydrological and in particular precipitation uncertainty.

    In AR4 WG1 (The Physical Science Basis), it is concluded (p. 201) that “Precipitation, a principal input signal to water systems, is not reliably simulated in present climate models”. This statement seems to remain true, as evidenced by numerous publications since the AR4, some of which I have summarised in http://www.danishwaterforum.dk/events/Climate%20researchers%20day%20Oct%202010/Jens%20raunso%20Jensen%20Researchers%20day%202010.pdf

    However, the AR4 Synthesis Report downplays this – probably disturbing – scientific assessment when summarising on p. 73: “the confidence in projections is higher for some (temperature) than for others (e.g. precipitation)”. This is in my view the most blatant example of a poorly communicated uncertainty in the IPCC work, with damaging consequences for the climate variability and change dialogue.

    • I disagree about examples. As written, the purpose of this paper is to explore various analytical methods and epistemic principles that might be used to properly characterize uncertainty in climate science. That this problem exists is assumed and there is no room to argue its existence. A critique of the IPCC reports is a different paper.

      It is now widely accepted that there is a communication problem. People who do not believe that uncertainty is part of this problem may find nothing useful in this paper but they are not the target audience. As written, this is a methods paper.

  53. “In the detection and attribution of 20th century climate change, Chapter 9 of the AR4 WG1 Report all but dismisses natural internal modes of multidecadal variability in the attribution argument.”

    The history of climate science is the limiting factor in how uncertainty is framed. The link between CO2 and warming is straightforward in the lab, and has been known for a long time. The effect of this increasing gas on the climate system therefore depends on the climate system itself, rather than on some ye olde experiment made over a century ago. The uncertainty has NOTHING to do with the properties of this gas, and EVERYTHING to do with recent climate knowledge, knowledge that is only a few years or decades old. Climate patterns, ENSO, clouds, deep-sea cores, etc. all carry a high level of ignorance. This is where the problem should be framed. The IPCC wastes too much time and effort framing the elements that are known.

    Sometimes it is best to conclude: “we don’t know; the literature is not good enough to answer the question. We don’t need any more CO2 papers; CO2 is the problem, not the answer.”

  54. Quite right. It is important to realize that natural climate variability only became generally accepted in the late 1990s, by which time the CO2-centered research program had become entrenched. Carbon cycle research still dominates the program, at least in the US.

    Moreover, the IPCC community explanation of climate change continues to begin with CO2, and then adds on natural systems as mere modifiers. This AGW-dominated explanatory framing is conceptually backwards. Because the framing is backwards, it is easy to ignore uncertainty as a mere secondary effect, something minor to be resolved in the fullness of time. The skeptical frame is just the opposite, namely that we need to know what nature is doing before we can know what effect, if any, CO2 is having.

  55. Hansen et al. 1981

    When should the CO2 warming rise out of the noise level of natural climate variability? An estimate can be obtained by comparing the predicted warming to the standard deviation, s, of the observed global temperature trend of the past century (50). The standard deviation, which increases from 0.1 deg C for 10 year intervals to 0.2 deg C for the full century, is the total variability of global temperature; it thus includes variation due to any known radiative forcing, other variations of the true global temperature due to unidentified causes, and noise due to imperfect measurement of the global temperature. Thus if To is the current 5-year smoothed global temperature, the 5 year smoothed global temperature in 10 years should be in the range To +/- 0.1 deg C with probability of about 70 percent.

    Nominal confidence in the CO2 theory will reach about 85% when the temperature rises through the 1s level and 98% when it exceeds 2s.

    ….

    We conclude that CO2 warming should rise above the noise level of natural climate variability in this century.

    http://bit.ly/gMXzRd

    Here is my attempt to verify Hansen’s claim.

    Here are the 5-year average global mean temperatures for HadCRUT3:
    YEAR =>TEMP
    1985=>0.09
    1990=>0.16
    1995=>0.32
    2000=>0.41
    2005=>0.41
    http://bit.ly/fiOKEc

    Let us apply, “Thus if To is the current 5-year smoothed global temperature, the 5 year smoothed global temperature in 10 years should be in the range To +/- 0.1 deg C with probability of about 70 percent.”

    For 1985, To = 0.09

    For 1995, To varies from –0.01 (=0.09 – 0.1) to 0.19 (=0.09+0.1). The observed temperature for 1995 is 0.32, which is outside natural variability.

    For 2005, To varies from 0.22 (=0.32 – 0.1) to 0.42 (=0.32+0.1). The observed temperature for 2005 is 0.41, which is inside natural variability.

    Conclusion:
    CO2 warming rise is not outside the noise level of natural climate variability.

    Do you agree?
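
    For what it’s worth, a minimal Python script that automates the check, hard-coding the HadCRUT3 five-year means quoted above; it also approximately reproduces Hansen’s nominal 1s/2s confidence figures from the one-sided normal probability:

    from math import erf, sqrt

    temps = {1985: 0.09, 1990: 0.16, 1995: 0.32, 2000: 0.41, 2005: 0.41}
    SIGMA = 0.1  # Hansen et al. (1981): s = 0.1 deg C for 10-year intervals

    for year in (1985, 1995):
        t0, t10 = temps[year], temps[year + 10]
        lo, hi = t0 - SIGMA, t0 + SIGMA
        verdict = "inside" if lo <= t10 <= hi else "outside"
        print(f"{year}->{year + 10}: predicted {lo:+.2f} to {hi:+.2f}, "
              f"observed {t10:+.2f} ({verdict} natural variability)")

    for k in (1, 2):
        conf = 0.5 * (1 + erf(k / sqrt(2)))  # P(normal variable < k sigma)
        print(f"{k}s -> about {conf:.0%} nominal confidence")  # ~84% and ~98%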

    • Hansen et al (1981) are taking “natural variability” to be a trendless oscillation (mere noise) on the dec-cen scale. In other words, they assume there is no natural climate change. That view is no longer tenable.

      • It is tenable unless you are now moving the goalposts. Most people would see what he had as a reasonable definition of natural variability. Plus or minus 0.2 degrees in 5-year mean global temperature covers all known ocean oscillations, and the solar cycle.

      • I don’t know who “most people” are but yes the goal posts have moved in the last 20 years, if you like. There is now a lot of evidence that climate varies on decade to century scales, or even longer. In so-called abrupt events it seems to change by several degrees in a few decades. Your view is 20 years out of date, but many people do indeed still hold it.

      • It is a novel idea that purely internal variations can lead to half-degree shifts in decadal averaged global temperature. Can you suggest a mechanism? If you talk about PDO, note that it has an amplitude less than 0.2 C, so it doesn’t make the cut.

      • But it’s not the direct heating or cooling effects of the PDO that’s the issue. It’s the secondary effects that are important, i.e. the effect of the PDO on cloud cover, albedo and the hydrological cycle. These are the powerful agents of change.

      • Nothing novel about it, Jim D (nor did I specify “purely internal variations,” whatever that might mean). That climate may change naturally is one of the primary skeptical positions. Apparently you do not understand this, in which case you do not understand the present scientific debate. But this is not the thread to explain it on. We are discussing Dr. Curry’s draft paper.

      • That climate may change naturally is one of the primary skeptical positions.

        *may* change naturally? The skepticism is in regard to the scientific fact that it *does* change naturally?

        You think that mainstream consensus climate science goes too far when it states that variations in forcing such as solar insolation will change climate, just as variations in forcing due to dumping GHGs in the atmosphere will change climate?

        Oh, wait, let me guess … you’re unaware that mainstream science is as aware of this fact as it is aware of the existence of gravity …

      • To bring it back to topic, there is a lot of uncertainty about what natural variation means or even includes. Unless that is resolved, no other uncertainty definition makes sense.

      • dhogaza

        You write that “science” is “aware” that “variations in forcing such as solar insolation will change climate”.

        That may be so, dhogaza, but IPCC is apparently unaware of this fact.

        AR4 WG1 states that all natural forcing (including solar) from pre-industrial times to 2005 is an insignificant 0.12 W/m^2, compared with 1.66 W/m^2 for CO2 alone!

        Duh!

        Max

      • My view is the radical one that effects have causes, and until alternative causes are identified, i.e. quantified and verified, they don’t count for much.

      • Wrong. Unverified causes are called hypotheses and they count a lot. Unknown mechanisms for known phenomena also count a lot. You clearly do not understand the debate, which is based on the fact that we do not understand natural climate change, but we know it exists. The idea that only what we know counts is the basic AGW fallacy.

      • OK, if we can explain the 0.7 degree warming in terms of CO2 changes that are well observed, and effects of CO2 that are well understood, why would we be looking for another explanation, if not for the politics of disliking the obvious one? Remember that this “other” explanation also has to explain why CO2 is not having its expected effect, so two hypotheses are needed to replace one.

    • He defined natural variability as oscillations around the zero anomaly, and deviations outside +/- 0.2 C were a climate change signal. Therefore the last few decades count as a climate change signal, as he predicted would happen.

      • But this does not allow for natural climate change, as opposed to natural climate variability around a fixed temperature. In other words, the signal may be natural. For example, continued emergence from the Little Ice Age.

      • Solar variability can’t account for anything significant in the last half-century. How about the CO2 explanation?

      • Jim D

        Allow me to correct your sentence:

        Direct solar irradiance alone can’t account for anything significant in the last half-century.

        Max

      • I am puzzled by assertions that variations in energy delivered by solar irradiance (TSI or SSI) contribute little to temperature trends. We do not appear to measure the entire EM spectrum, much less its variability or power. (Granted, Svalgaard has found that proxies suggest a floor.)

        Global Warming and Weather Discussion: What is Total Solar Irradiance (Really)?
        http://solarcycle24com.proboards.com/index.cgi?board=globalwarming&action=display&thread=468

        The following links to a graph of TSI measurement coverage overlaid on the electromagnetic spectrum, as of May 5, 2009. The post also includes findings on what length of time would be required to detect a 0.1% per year long-term trend.
        http://solarcycle24com.proboards.com/index.cgi?action=gotopost&board=globalwarming&thread=468&post=18678

        Since then, SDO has launched, carrying EVE (Extreme Ultraviolet Variability Experiment). It covers 0.1–105 nm, plus 121.5 nm, which is within the range of the graph above. ( http://lasp.colorado.edu/eve/docs/eve_quick_facts.pdf and http://sdo.gsfc.nasa.gov/mission/instruments.php ) More on SDO/EVE at ( http://lasp.colorado.edu/eve/news/news.htm and http://lasp.colorado.edu/eve/docs/eve_quick_facts.pdf )

        “We’ve been seeing just the tip of the iceberg when monitoring flares with X-rays. With the complete extreme ultraviolet (EUV) coverage by SDO EUV Variability Experiment (EVE), we now see a secondary peak in the EUV that is many minutes after the X-ray flare peak. Furthermore, the total EUV energy from this broad secondary peak has about four times more energy than the EUV energy during the time of the X-ray flare peak.”

      • The question was raised: Can we explain the 0.7C warming seen over the 20th century without CO2?

        That is a good question, but a better question might be: “can we explain the warming that occurred in the early 20th century with CO2?”

        The answer here is “NO” (according to studies cited by IPCC).

        This warming period accounted for 0.53C linear warming (Delworth + Knutson 2000), and was statistically indistinguishable from the late 20th century warming period, which is used by IPCC as the essential basis for the AGW hypothesis.

        Several solar studies tell us that we can explain an average of around 0.35C of the total 20th century warming as a result of the unusually high level of solar activity (highest in several thousand years), with most of this occurring prior to 1970 (Scafetta + West 2006; Solanki et al. 2004; Geerts + Linacre 1997; Baliunas + Soon 1999; Dietze 1999; Shaviv + Veizer 2003).

        So we are left with a partial answer to the first question.

        Yes. We can explain around half of the observed 20th century warming without CO2.

        Regarding “the other half” there are all kinds of studies out there regarding ENSO, PDO, etc., which have been cited here and may or may not be related to solar activity (but are certainly not related to CO2); we know from the record that strong El Ninos in the 1990s caused some high temperatures (1998 was an example), which affected the overall temperature record.

        Then there are the data on cloud cover (reduced cover over the period 1985-2000 resulting in reduced albedo and more SW radiation getting through), followed by a recent shift to more clouds (reflecting incoming SW radiation) (Pallé et al. 2006).

        So there are “explanations”, even though the mechanisms are not fully understood.

        But I’m sure that CO2 also played a role, although I have concluded from all the data out there that there is a lot of uncertainty concerning just how significant this really was (as I think Judith has pointed out).

        Max

      • Yes, I think about half the 1910-50 change was solar. My attribution for the 20th century goes like this, FWIW.
        0.7 total = 0.9 CO2 + 0.2 solar – 0.4 aerosol

      • Jim D

        Thanks for your guess on 20th century warming attribution.

        The solar studies I cited put the solar part at around 0.3C, occurring around 80% in the first half and 20% in the second.

        The negative aerosol forcing you cite is a model-based red herring. IPCC AR4 WG1 states that all anthropogenic forcing factors other than CO2, including aerosols, other GHGs, etc. have cancelled one another out – so let’s forget these.

        Observations show there may well have been a net positive forcing from reduced cloud cover (4-5% reduction of total albedo) over the late 20th century (1985-2000), which has apparently reversed itself most recently (2-3% recovery) (Pallé et al. 2006). This could well represent some 0.5C warming over that period, which may have followed a period of increased cloudiness coinciding with the mid-century cooling cycle.

        ENSO caused some warmer years in the last 20 years of the 20th century, notably the record year 1998. Simply including the effect of those warm El Niño years in the record would account statistically for around 0.1C of the observed warming.

        So, compared to your breakdown guess, mine would be:

        0.3C Solar
        0.2C Clouds plus ENSO
        0.2C CO2 (290-369 ppmv, no net feedback)
        0.0C All other anthropogenic forcing (per IPCC)
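
        For anyone who wants to check the arithmetic, a quick Python sanity check that both informal budgets (Jim D’s above and mine; these are guesses from this thread, not published decompositions) close to the observed ~0.7C:

        budgets = {
            "Jim D": {"CO2": 0.9, "solar": 0.2, "aerosol": -0.4},
            "Max": {"solar": 0.3, "clouds+ENSO": 0.2, "CO2": 0.2, "other anthro": 0.0},
        }
        for name, parts in budgets.items():
            # each budget should sum to the observed 20th century warming
            print(f"{name}: total {sum(parts.values()):+.1f}C (observed ~ +0.7C)")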

        Max

  58. Re ’ natural variability ‘
    There is a close correlation between the PDO and ENSO, with the PDO having primacy; oscillations of the North Pacific’s currents appear to be the PDO’s driver. Both the de-trended data and the rate of change (first differential) of these oscillations show good correlation with the actual PDO. There is exactly the same type of driver in the North Atlantic, correlating with the AMO. More details will be available in a short article I am currently preparing, based on my findings as shown here:
    http://www.vukcevic.talktalk.net/PDO-ENSO-AMO.htm
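
    As a rough illustration of the two comparisons just described (correlating the de-trended series, and correlating the first differences), here is a minimal numpy sketch; the series below are synthetic stand-ins, since the actual index data are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(600)  # stand-in for 50 years of monthly index values
    pdo = np.sin(2 * np.pi * t / 240) + 0.3 * rng.standard_normal(t.size)
    enso = np.roll(pdo, 6) + 0.3 * rng.standard_normal(t.size)  # crude lagged proxy

    def detrend(x):
        # remove a least-squares linear trend
        n = np.arange(x.size)
        slope, intercept = np.polyfit(n, x, 1)
        return x - (slope * n + intercept)

    r_detrended = np.corrcoef(detrend(pdo), detrend(enso))[0, 1]
    r_first_diff = np.corrcoef(np.diff(pdo), np.diff(enso))[0, 1]
    print(f"de-trended r = {r_detrended:.2f}, first-difference r = {r_first_diff:.2f}")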

  59. Judith, The Rocking Horse Winner is somehow related to this. When the kid was sure, it was a sure thing. And when he wasn’t, he usually lost. But he at least knew the difference.

  60. Judith Curry

    You probably noted that I cut out the sentence from your draft regarding possible more rapid melting of ice sheets, which was not mentioned in AR4 WG1 SPM:

    The most visible failing of this strategy was neglect of the possibility of rapid melting of ice sheets on sea level rise in the Summary for Policy Makers (e.g. Oppenheimer et al. 2007; Betz 2009).

    Cutting this out was not an arbitrary decision.

    I think if one looks at the actual record, one can see that this was not in actual fact a failing, for two reasons.

    At the cut-off time for AR4 data, the GRACE measurements, upon which these fears are based, were still in their infancy and somewhat inconclusive due to sensitivity to bedrock motion (Thomas et al. 2006); some argue that this sensitivity remains an issue.

    At the time, there were satellite altimetry records covering a full 10+ year period (1993-2003) for both the Antarctic Ice Sheet (Davis et al. 2005, Wingham et al. 2006) and the Greenland Ice Sheet (Johannessen et al. 2004, Zwally et al. 2005).

    These studies showed that both ice sheets had gained mass over this long-term continuous period (+27 Gt/year mass gain AIS; +11 Gt/year mass gain GIS, representing a calculated reduction of sea level of –0.08 mm/year and –0.03 mm/year, respectively).
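
    As a quick Python check on that conversion arithmetic (assuming the standard figure of roughly 362 Gt of ice per millimetre of global mean sea level; the exact figure depends on the ocean area used):

    GT_PER_MM = 362.0  # approx. Gt of water per mm of global mean sea level

    for sheet, gain in (("AIS", 27.0), ("GIS", 11.0)):
        slr = -gain / GT_PER_MM  # a mass gain corresponds to a sea-level fall
        print(f"{sheet}: {gain:+.0f} Gt/yr -> {slr:+.3f} mm/yr sea level")
    # prints about -0.075 and -0.030 mm/yr, matching the quoted figures to rounding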

    Curiously, Zwally excluded from the study the data for the colder 6-month period from October 2002 to April 2003, but still showed a slight mass gain for Greenland; using Johannessen’s seasonal breakdown, adding this season would have resulted in a net gain of around 23 Gt/year (instead of the 11 Gt/year reported by Zwally).

    Despite these findings, which represented the only continuous 24/7 records of the two ice sheets over the entire 10+ year period, IPCC AR4 reported that both the Antarctic and Greenland ice sheets lost mass over the period 1993-2003 (-71 Gt/year each, contributing +0.21 mm/year to sea level rise).

    For this reason, I think it is best to leave ice sheet mass loss out of the discussion. When AR4 was published, the available data showed a slight mass gain for both ice sheets, and there were no GRACE data that could have indicated rapid melting, so there is no reason that this more rapid melting should have been shown as a possibility.

    In fact, there is a serious question as to why any mass loss at all was reported despite these data, which actually showed a net mass gain for both ice sheets. (So I think it is best not to even open this can of worms, unless one wants to address the whole issue.)

    Max

    • Max, good point, but given all these uncertainties, their confidence level in sea level rise was too high. I will probably take this sentence out.

      • Judith

        You mention confidence in sea level rates as reported by IPCC.

        This is another weak point of IPCC AR4 WG1, which bears mentioning (but I do not recommend that you add it to your paper as a specific example, because of word limits).

        AR4 SPM tells us that sea level rose at 1.8 mm/year over 1961 to 2003, but that the rate was faster over 1993 to 2003 at about 3.1 mm/year, adding the caveat that this could reflect decadal variability or represent an increase in the longer-term trend.

        From this one would conclude that there has been a late 20th century acceleration in the rate of sea level rise.

        A small print footnote tells us “Data prior to 1993 are from tide gauges and after 1993 are from satellite altimetry”.

        As we know, the two methods are totally different in measurement methodology, with both having their pluses and minuses.

        They are also totally different in scope: tide gauges measure sea level at several selected shorelines while satellite altimetry measures the entire ocean, except areas near shorelines and at the poles that cannot be measured.

        So now we have a comparison across two different time periods, using two different measurement methodologies and covering two entirely different scopes, to show an apparent acceleration.

        I would say that this goes beyond a question of confidence to just plain bad science.

        Compounding this is the fact that the tide gauge record does not show this acceleration. The tide gauge data are no longer reported regularly, but the last report (Holgate 2007) showed large decadal swings and no late-20th-century acceleration; in fact, the rate for the first half of the century was slightly higher than that for the second half. Another study estimates the 1993-2003 rate of sea level rise at 1.6 mm/year (Wunsch et al. 2007).
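
        [A small illustration of why a short window on top of decadal swings can mimic acceleration: the Python sketch below fits trends to a synthetic sea level series with a constant 1.8 mm/year rise plus an invented decadal oscillation – all numbers are hypothetical, chosen only to make the point.]

        import numpy as np

        years = np.arange(1961, 2004)
        t = years - 1961
        # constant 1.8 mm/yr trend plus a ~20-year oscillation (illustrative)
        level = 1.8 * t + 8.0 * np.sin(2 * np.pi * t / 20.0)

        long_rate = np.polyfit(years, level, 1)[0]                   # 1961-2003
        recent = years >= 1993
        short_rate = np.polyfit(years[recent], level[recent], 1)[0]  # 1993-2003

        print(f"1961-2003 fitted trend: {long_rate:.1f} mm/yr")   # close to the true rate
        print(f"1993-2003 fitted trend: {short_rate:.1f} mm/yr")  # markedly higher

        [The underlying rate never changes, yet the short window reads well above 1.8 mm/year – the decadal swing alone creates the apparent acceleration.]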

        So I would conclude that the confidence level of the 3.1 mm/year rate of sea level change reported by IPCC for 1993-2003 is extremely low.

        Just another point.

        Max

  61. Dr. Curry,

    As near as I understand it, the IPCC treats climate change as the response of a linear system within reasonably predictable bounds of certainty. The Chief Hydrologist (March 26, 2011 7:55 PM) reminds us that weather and climate change are both chaotic and unpredictable. IMO, the IPCC should consider the uncertainty associated with chaotic climate change.

    • It is interesting that last year’s summary of climate science from the Royal Society had one paragraph on ‘internal climate variability’.

      ‘In principle, changes in climate on a wide range of timescales can also arise from variations within the climate system due to, for example, interactions between the oceans and the atmosphere; in this document, this is referred to as “internal climate variability”. Such internal variability can occur because the climate is an example of a chaotic system: one that can exhibit complex unpredictable internal variations even in the absence of the climate forcings discussed in the previous paragraph.’

      This seems like slow progress of sorts since AR4 – at least it gets a mention. I, of course, would put intrinsic variability at the core of climate change, with small changes in greenhouse gases potentially influencing the dynamic evolution of climate. Dynamical complexity implies extreme sensitivity at points of bifurcation – so there are risks in greenhouse gas emissions.

      That said, the politics are hopelessly compromised. We have been in a cool phase since 1998 – and these phases tend to persist for two to four decades. If so, most people will move on to the next problem within the next decade. The economics are likewise hopelessly romantic. Ignoring my feeble attempts at humour, I discuss Pigovian taxes with Bart here – in a very long-winded way. How could it be otherwise?

      http://judithcurry.com/2011/03/15/reasoning-about-floods-and-climate-change/#comments

      • Having acknowledged that “Such internal variability can occur because the climate is an example of a chaotic system: one that can exhibit complex unpredictable internal variations even in the absence of the climate forcings discussed in the previous paragraph,” one might think that the IPCC would at least address the uncertainties associated with chaotic systems.

        Oh, well …

  62. Hansen et al. 1981

    When should the CO2 warming rise out of the noise level of natural climate variability? An estimate can be obtained by comparing the predicted warming to the standard deviation, s, of the observed global temperature trend of the past century (50). The standard deviation, which increases from 0.1 deg C for 10 year intervals to 0.2 deg C for the full century, is the total variability of global temperature; it thus includes variation due to any known radiative forcing, other variations of the true global temperature due to unidentified causes, and noise due to imperfect measurement of the global temperature. Thus if To is the current 5-year smoothed global temperature, the 5 year smoothed global temperature in 10 years should be in the range To +/- 0.1 deg C with probability of about 70 percent.

    http://bit.ly/gMXzRd

    Here is my attempt to verify Hansen’s claim for the period 1925 to 1945:

    Here are the 5-year average global mean temperatures from HadCRUT3:
    YEAR => TEMP
    1925 => -0.26
    1930 => -0.18
    1935 => -0.07
    1940 => 0.03
    1945 => -0.16
    http://bit.ly/fiOKEc

    Let us apply, “Thus if To is the current 5-year smoothed global temperature, the 5 year smoothed global temperature in 10 years should be in the range To +/- 0.1 deg C with probability of about 70 percent.”

    For 1925, To = -0.26

    For 1935, the predicted range is –0.36 (= –0.26 – 0.1) to –0.16 (= –0.26 + 0.1). The observed temperature for 1935 is –0.07, which is outside natural variability!

    For 1945, the predicted range is –0.17 (= –0.07 – 0.1) to 0.03 (= –0.07 + 0.1). The observed temperature for 1945 is –0.16, which is just inside natural variability, right at the lower end of the range.

    Conclusion:
    Global mean temperature variation outside the one-sigma range was observed before the mid-20th century (for 1935).

    Do you agree?
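
    [A few lines of Python – a sketch using only the HadCRUT3 values quoted above and Hansen’s stated one-sigma band – mechanize the same check:]

    temps = {1925: -0.26, 1930: -0.18, 1935: -0.07, 1940: 0.03, 1945: -0.16}
    SIGMA = 0.1  # Hansen’s 10-year standard deviation

    for base_year, test_year in [(1925, 1935), (1935, 1945)]:
        base, obs = temps[base_year], temps[test_year]
        lo, hi = base - SIGMA, base + SIGMA
        verdict = "inside" if lo <= obs <= hi else "outside"
        print(f"{base_year}->{test_year}: predicted {lo:.2f} to {hi:.2f}, "
              f"observed {obs:.2f} ({verdict})")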

    • You should notice that Hansen did account for the earlier colder part of the record with volcanic and solar effects.

  63. The Supreme Court ruled “uncertainty” no excuse in Mass. v. EPA, as it forced the agency to treat CO2 as a “greenhouse gas”.

    The scientists who stood with the plaintiff got away with false certainty. See below:

    [April 2, 2007]
        Justice Stevens delivered the opinion of the Court.

        A well-documented rise in global temperatures has coincided with a significant increase in the concentration of carbon dioxide in the atmosphere. Respected scientists believe the two trends are related. For when carbon dioxide is released into the atmosphere, it acts like the ceiling of a greenhouse, trapping solar energy and retarding the escape of reflected heat. It is therefore a species—the most important species—of a “greenhouse gas.”

    Judith, thank you for this effort. Lacking the kind of reality-based valuation of climate science certainties that you seek, precedents are set which have weighty impacts. If you use practical impact examples, this case is easy for policy makers to understand.

    Nor can EPA avoid its statutory obligation by noting the uncertainty surrounding various features of climate change and concluding that it would therefore be better not to regulate at this time. See 68 Fed. Reg. 52930–52931. If the scientific uncertainty is so profound that it precludes EPA from making a reasoned judgment as to whether greenhouse gases contribute to global warming, EPA must say so.

    Thanks again.
    Dave

  64. I’ve kept my comments here to a few (albeit long) ones, since I’m not aiming at being listed as a coauthor on the paper, but I’m inclined to comment on some of what seems to be happening in this thread.

    As I see it, some participants are encouraging Dr. Curry to write what would amount to a “Skeptic’s Manifesto”, using selected data to argue that the IPCC has overstated the level of anthropogenic influence rather than its confidence in that level. In my view, that would defeat her purpose. I believe she most wishes to reach readers who are often sympathetic to the main efforts of the IPCC, but who might be receptive to evidence that the IPCC should improve its handling of uncertainty. A paper that comes across as an argument for skepticism per se, rather than a prescription for improved uncertainty management, is more likely to alienate than persuade those readers. Her efforts would therefore be more effective as a non-adversarial analysis, complete with examples and guidelines, of how the IPCC would benefit from the changes she urges – including an emphasis on the risk that excessive certainty leads to neglect not only of the most benign outcomes but also of the worst-case scenarios.

    In my own experience, I’ve been most successful in convincing others of a position I hold when I’ve been able to view the issue from their perspective, and then ask myself what kind of argument would appeal to that perspective. Never mind that I don’t always manage to do that in this blog. It’s still a good idea.

    • Fred Moolten

      There may have been some who tried to encourage Judith to write a “skeptics’ manifesto” (as you put it), but as far as I can judge they are a minority of the commenters here.

      But more importantly, I do not believe that any poster here is going to change Judith’s own logical conclusions regarding the open scientific questions in the climate change debate – do you?

      Max

  65. I just submitted the paper to Climatic Change. I managed to get the word count down to 3700. I greatly appreciate your comments, and I tried to make the paper more readable (shorter sentences, etc). Will be interesting to see what the reaction is to the paper. I am now back in Atlanta (after almost a month away), and I hope to get back on track with more frequent posts at Climate Etc.

    • Congratulations and lots of luck!

      Look forward to seeing it when it is published.

      Max

  66. David Bailey

    Judith,

    I hope you will take this the way I intend it, as an attempt to help. If not, I apologise!

    I have always felt that your papers would benefit by being made more concise, with more use of examples.

    For example:

    “The consensus approach used by the IPCC to characterize uncertainty has received a number of criticisms. Van der Sluijs et al. (2010b) finds that the IPCC consensus strategy underexposes scientific uncertainties and dissent, making the chosen policy vulnerable to scientific error and limiting the political playing field. ”

    Surely this should be followed by a good example (or at least a reference to a good example) to drive the point home?

    David

  67. I don’t believe that people working on IPCC reports would disagree that the uncertainties are large. There are new guidelines for AR5, and task groups have been working to take into account the comments of the IAC Review; results from these task groups are expected soon. It is generally known that there are all kinds of biases, that applying a Bayesian or any other approach is problematic, and that this is often due to unknown unknowns. Repeating the statement that this is the situation will have practically no effect. The real problems come from weighing the uncertainties against the evidence that does exist.

    The formal discussion of the uncertainties doesn’t really help much. Different formal points of view can be chosen to justify different attitudes, but the real problems are practical and pragmatic, and can be solved only on a case-by-case basis. Examples are much more important than vague and generic academic discussion of approaches to handling uncertainties.

    Interval probabilities, fuzzy logic and other such methods do not really add to the understanding of the uncertainties. Any formally stronger approach requires weakening the claims, as the strongest justifiable claims cannot be supported formally. Therefore a formal approach is guaranteed to give suboptimal results in practice – and there will still be weaknesses in the formal arguments.

    Estimating the level of scientific knowledge about the climate is one thing; determining how the uncertainties should be taken into account in decision making is another, which may have a very different answer, as it is influenced by many factors that are not subject to climate research.

    The IPCC has so far failed to follow its own guidelines on presenting uncertainties in a systematic way – one of the main points raised by the IAC Review. There is certainly a serious attempt to improve on this in AR5, but it may turn out to be more difficult than the people doing the work presently think. Understanding uncertainties and risks has always been very difficult for everybody. There is no way of getting rid of subjective estimates, and the issue of unknown unknowns cannot really be solved.

    AR5 may well succeed somewhat better than AR4 in communicating uncertainties, but I’m afraid not much better. The issues are too difficult for any wider audience – perhaps even for the best experts on decision making under uncertainty.

    • Re: Pekka Pirilä,
      Well, Pekka, you’d better be REAL certain of your conclusion that over the crest of the next little rise in the road is a drop-off and deep cliff, before you push the button that blows the explosive bolts that remove the wheels from your car in order to stop “in time”.

      Especially considering that at your current highway speed the odds are excellent that you will roll and burn in the wreckage.

      • Brian H,
        I’m not sure I understand what you wish to say, but if I do, then I agree, at least in general terms.

        Climate policies are about risk management, but risk management without a satisfactory understanding of both the risks and the methods of risk management is futile. It is not useful to buy insurance from a company that will not be able to pay out when the accident occurs. It is not useful to pretend to mitigate climate change when the methods chosen do not work, or when their collateral damage is larger than the risks avoided.

        I have little doubt about the basic science of global warming, but I have a great deal of doubt about the climate policy choices taken at the international, national and even local levels. Many measures taken here in Europe appear both useless and costly. Policies are not chosen for their effectiveness, but on the basis of political pressure that works here in a very different, almost opposite, way than in the US.

        The really big issue is the general growth of global human consumption, which appears set to continue until it finally cannot be supported by the finite Earth. Climate change is just one aspect of that, and fixing one aspect helps only momentarily. Many resources are running out. Human ingenuity helps, again momentarily, but that cannot go on forever, and what we do about CO2 emissions is not decisive in the long run (which may mean a hundred years or a couple of hundred years). Eventually human societies will adapt to the new environment. How that happens I cannot foresee. The equilibrium population may be well below the present level.

      • Re: Pekka Pirilä
        You ‘Lukewarmers’ really get to me sometimes.
        Pekka, you seem to have absorbed, and be unable to question, a kind of Neo-Malthusian set of assumptions.
        Finite resources: false and foolish. EVERY resource that has begun to “run out” has been replaced by an economic substitute; that’s the way a free market works. When oil spikes too high, exploration and development of alternatives blossom; OPEC dials back the price to keep those activities from damaging its oligopoly, but the point of no return has been passed this time. Frac gas, notwithstanding howls of fear and outrage from the Left, has made energy cheap and available for many decades to come, and growing. “Substitutes” arise, you see.
        And beyond that, enormously cheaper energy will arrive with fusion, able to recycle minerals and metals ad infinitum with fusion torches. The timeline for the start of that is 5-50 years; the lower end would apply if, e.g., the LPPhysics.com project continues on track. Its projected cost profile is about 5% of the best-case capital and output costs of current conventional plants.

        As for the longer term, the planet is hardly the end of the line for resources. A single 1-mi. diameter nickel-iron asteroid in Earth orbit would match all of history’s precious metal supply and much of the base metal.

        As for population, the “projection” with the best track record, the lower edge of the low band of the UN FAO chart, says population peaks in 2030-2040 at 8bn +/- 50M.

        As for CO2, it is mankind’s “Thank You” to the plant world, which had dangerously depleted its supply. And the world is greening in response. A few degrees warming will only help, though unfortunately CO2 shows no signs of being able to provide that.

  68. P.S.
    About that “equilibrium” level: in 2005, the UN swore to cut world poverty in half within 10 years. As it turned out, it happened by 2010 – in half the time (5 years) – mostly due to greater use of fossil fuel energy in India and China. That will continue and accelerate, especially with the aforementioned frac gas discoveries virtually everywhere. (E.g., Algeria has found 25,000 TCF, a millennium’s supply for Europe.)

    I warn you, a massive shortage of shortages is in the cards.

  69. My dear Dr. Curry,

    I have noticed that a number of your contemporaries still seem not to get the Italian Flag thingy.

    Though Eli Rabett did come close with his post on common sense, which seems to not be so common.

    Personally, I use Monty Hall. Once you pick a door, stay with it until another door is opened, then switch to improve your odds – a statistically sound illustration of reasoning in the face of uncertainty.

    Perhaps, they just have a thing about Italians?

    • Personally, I use Monty Hall. Once you pick a door, stay with it until another door is opened, then switch to improve your odds – a statistically sound illustration of reasoning in the face of uncertainty.

      An example of Bayesian reasoning in action, updating your knowledge base with new information and changing your strategy accordingly.
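
      [For anyone who would rather see the odds than argue them, here is a quick Monte Carlo sketch in Python – not from the thread, just an illustration – showing that switching wins about 2/3 of the time and staying about 1/3:]

      import random

      def play(switch, trials=100_000):
          wins = 0
          for _ in range(trials):
              prize = random.randrange(3)   # door hiding the prize
              pick = random.randrange(3)    # contestant’s first pick
              # host opens a door that is neither the pick nor the prize
              opened = next(d for d in range(3) if d != pick and d != prize)
              if switch:
                  # move to the one remaining unopened door
                  pick = next(d for d in range(3) if d != pick and d != opened)
              wins += (pick == prize)
          return wins / trials

      print("stay:  ", play(switch=False))   # about 0.33
      print("switch:", play(switch=True))    # about 0.67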