by Judith Curry
It is now published: the Climatic Change Special Issue on Guidance for Characterizing and Communicating Uncertainty and Confidence in the Intergovernmental Panel on Climate Change.
Table of Contents
The online link for the special issue is [here]. The table of contents and links to the abstracts are provided below. Open access is noted where available, and I also provide links to a few other papers that I managed to find online.
The IPCC AR5 guidance note on consistent treatment of uncertainties: a common approach across the working groups. Michael D. Mastrandrea, Katharine J. Mach, Gian-Kasper Plattner, Ottmar Edenhofer, Thomas F. Stocker, et al. (open access)
Climate uncertainties and their discontents: increasing the impact of assessments on public understanding of climate risks and choices. Brenda Ekwurzel, Peter C. Frumhoff and James J. McCarthy (open access)
Yohe and Oppenheimer’s overview paper
The editors wrote an overview paper for the special issue. Some excerpts:
From the Introduction:
Since its inception in 1988, the Intergovernmental Panel on Climate Change (IPCC) has worked with the growing recognition that uncertainty is pervasive in our understanding of the climate system: what drives climate change, what will determine its future course, and what influence it will have on important social and ecological aspects of our world. It is not news that the IPCC has struggled, with varying degrees of success, in its efforts to describe these uncertainties and to judge the confidence with which it can offer its major conclusions. This most recent attempt, informed by the history of previous assessments, is the point of departure for the papers in this special issue of Climatic Change.
AR5 authors must do their work in a world that is marked by several recent, major changes in the climate change landscape that present larger challenges and opportunities. First of all, the Inter-Academy Council (IAC 2010) review of IPCC emphasized the treatment of uncertainty and, among other things, called for improvement in the way IPCC describes and communicates uncertainty with particular emphasis on increased consistency across working groups in order that conclusions become more comparable and more credible.
From “Description of the Special Issue”:
This Special Issue provides an opportunity for a wide-ranging discussion of IPCC’s past and possible future approaches to the evaluation, characterization, and communication of uncertainty. Authors who were invited to contribute to this collection of papers approached their assignments from a variety of perspectives. Some, like Richard Moss, Michael Mastrandrea, and Katharine Mach, were intimately involved in producing the guidance documents; their contributions describe the objectives of these documents and offer some introspective considerations of past experience and what we might expect in AR5. Others, like Kristie Ebi, Gian-Kasper Plattner, Ottmar Edenhofer, Thomas F. Stocker, Christopher B. Field, and Patrick R. Matschoss, are playing key roles as working group co-chairs or members of associated technical support units in the AR5 process; they, as well as one of us (G. Yohe), were involved in developing the AR5 Guidance document, and their contributions describe their aspirations and concerns as the AR5 authors set to work. Still others, like Granger Morgan and Baruch Fischhoff, articulate weaknesses and strengths in IPCC guidance efforts from an extraordinarily experienced and informed vantage point: that of research into uncertainty judgment and communication. Meanwhile, Marcus King and Sherri Goodman use their experience with the defense and national security communities to describe an approach to communicating and coping with profound and unique types of risk and uncertainty. James Risbey, Roger Jones, Roger Pielke, Jr., Rachael Jonassen, and Judith Curry have already contributed to the literature, discussions and evaluations of IPCC practices and procedures with regard to judging and communicating uncertainty.
Pielke and Jonassen offer an empirical evaluation of uncertainty language in the AR4 while Risbey, Jones and Curry suggest “ignorance” as another category of confidence—not one that brings the process to a complete standstill, but one that best describes the state of affairs in some circumstances. Humility, they would all argue, would be a virtue. Brenda Ekwurzel and Peter Frumhoff have worked from IPCC documents to try to communicate with broader audiences in language that is more accessible than the dense prose that IPCC prefers; their paper describes some of the challenges and opportunities that they have faced or enjoyed, respectively. John Sterman and Robert Socolow represent users of that information from within the broader research community; they express some frustration in interpreting summary statements from previous assessments and offer suggestions for reducing that burden. Finally, Richard Tol, who has been an IPCC participant for many years and has thought seriously about the structure and efficiency in the entire enterprise, offers an analogy between a standard natural monopoly in economic theory and the IPCC in practice vis-à-vis providing climate information to the international community. It allows him to offer some stark but constructive hypotheses and some novel but intriguing remedies.
From “The issue of consensus”:
To many, notably including Risbey and Curry in this special issue, the emphasis on consensus is the most troublesome limitation of IPCC assessment processes (for a general critique of the consensus approach to science, see Moore and Beatty 2010). Achieving consensus is, to be clear, one of the major objectives of IPCC activities. Paragraph 10 of the amended Procedures Guiding IPCC Work, for example, states that “In taking decisions, and approving, adopting and accepting reports, the Panel, its Working Groups and any Task Forces shall use all best endeavors to reach consensus” (http://ipcc.ch/pdf/ipcc-principles/ipcc-principles.pdf).
Two proposals have been advanced repeatedly for beginning to address the problem of creating, defending and communicating consensus results as well as departures from the consensus. The degree to which IPCC, through its working group leadership structure, resisted these proposals during the AR4 process is unsettling, given that the scientific communities from which IPCC authors are drawn are supposed to think analytically about the world as a whole. Apparently, this dictum does not extend to reflexive consideration within the IPCC process as it performs its assessments. Such reflexivity is entirely normal in social sciences, and increases, rather than decreases, the rigor of and confidence in the associated findings.
The first proposal calls for relaxing the focus on consensus and instead putting as much effort into presenting the full range of expert judgments. We and others have gone so far as to suggest that consensus on many key aspects of climate change is well known to governments; and we agree with Socolow in this issue, and others, when they argue that the value added from an assessment is in displaying the range of views. The most complete way to do so would be to present not only the range of views in the community, but also the range of views within the assessment group and perhaps even consider ripping off the mask of anonymity that cloaks our deliberations. Other ideas to increase transparency about the full spectrum of beliefs have surfaced, including opening author deliberations to scholars of decision-making, and/or the media.
The second urges that all Working Groups forgo the fiction that expert deliberations are entirely objective and that arriving at judgments by deliberation within what are usually small subgroups is the only permissible approach to assessment. Formal expert elicitation (Morgan and Henrion; Morgan, this issue) has been proposed again and again (for example, before the first uncertainty guidance in the report of the Aspen workshop (see Moss 2011 and Hassol 1996)), and it is troubling that the IPCC has repeatedly declined to explore its value. After all, there is only a sparse literature on the efficacy of IPCC’s favored approach (see below) in comparison to the relatively extensive scholarly literature on formalized elicitation of judgments. This observation raises an important question in our mind: Why is IPCC so tied to a method whose value remains largely speculative, given how little it has been subject to scholarly study? Furthermore, it is unlikely that formalized elicitation as currently practiced is the only or even the best available alternative method for assessing expert knowledge. IPCC should be encouraging research into such approaches (much as they encourage research into new emissions scenarios, for example) rather than turning its back on them. If those who do such research had a client as large and visible as IPCC, then progress might occur quickly.
I’ve read the abstracts to all of the papers, and about half of the entire articles. I’ll briefly mention the papers that I found most interesting.
From Kristie Ebi’s paper:
The process begins with an assessment of the scientific evidence and agreement supporting a finding, where evidence is defined as including mechanistic understanding, theory, data, models, and expert judgment. Further, decision-makers often find it valuable for scientists to differentiate situations where a theory is generally agreed but for which supporting data are limited, from situations where empirical data lack an explanatory theory. The paper describes the approach used by the International Agency for Research on Cancer (IARC) for assessing the relative robustness of a theory separately from the strength and quality of its supporting evidence, and then developing consensus statements of whether an agent is a human carcinogen. Although the IARC and IPCC processes are very similar, the IARC process also differs by combining theory, evidence, and agreement as equal partners in a limited set of standardized categories of confidence.
From Robert Socolow’s paper:
My principal recommendation for making the IPCC more helpful to the policy-making community is to strive in the Fifth Assessment Report (AR5) to communicate fully what the climate science community understands and does not understand about high-consequence outcomes. This will require the AR5 authors to provide vivid information about future worlds where high-consequence outcomes have emerged. It will also require the AR5 authors to reveal any disagreements persisting among them after the give-and-take of the writing process has run its course. In the Fourth Assessment Report (AR4) the presentation of high-consequence outcomes had shortcomings that can be rectified in AR5.
When knowledge is preliminary, it is also usually controversial, and the AR5 authors will need guidance regarding how to summarize discordant views. The AR5 Guidance Notes should help the authors distinguish synthesis from consensus. Disclosing only consensus should not be the objective. Rather, producing a synthesis, one that presents not only what everyone can agree upon but also important residual disagreements, is the objective.
From Risbey and O’Kane’s paper:
Ignorance is an inevitable component of climate change research, and yet it has not been specifically catered for in standard uncertainty guidance documents for climate assessments. Reports of ignorance in understanding require context to explain how such ignorance does and does not affect understanding more generally. The focus of this article is on dynamical sources of ignorance in regional climate change projections.
Note that the admission of ignorance is different from admitting uncertainty more generally. There has been a relatively frank reporting of climate uncertainties in the literature and IPCC reports. However, these documents and the reporting of uncertainties in the literature have generally shied away from the border with ignorance (Funtowicz and Ravetz, 1990).
One reflection of the avoidance of ignorance is the categorization of the level of certainty of changes in climate variables in the IPCC uncertainty guidance. In the IPCC guidance documents the degree of precision or knowledge of changes in climate variables can be expressed on a scale from “ambiguous” at the low certainty end through to quantification via a probability distribution at the high certainty end of the scale. There is no scope in the IPCC scale for professing less certainty than ambiguity. In the original schemes that the IPCC scale was based on there is an additional category to represent “ignorance” (Risbey et al., 2002) or “effective ignorance” (Risbey and Kandlikar, 2007) at the low end of the certainty scale.
One could argue that there is no great need for a category of ‘ignorance’ in summarizing and categorizing findings in the IPCC. If the goal is to summarize what we know, then by definition there is no need for a category that connotes that we don’t know. We just wouldn’t report such things in a summary. However, the IPCC is also there to address relevant questions of the science for policy, and has a mandate to assess potential environmental and socio-economic impacts. Sometimes the best answer to some of the questions of the science asked by policy may be “we don’t know”, and that is why we have need for a category of ignorance. It would be nice just to dispense science advice where we know all the answers, but that would mean ignoring some of the most critical questions.
The paper by King and Goodman looks very interesting, but I have been unable to find a copy.
The issue of expert elicitation discussed in the Granger Morgan paper will be the topic of a future post.
My paper was discussed previously [here].
And finally, if you missed our previous discussion of Richard Tol’s paper, check it out — a fascinating paper (and discussion).
JC conclusion: Kudos to Yohe and Oppenheimer for organizing this special issue; I hope it has some impact on the IPCC.