by Judith Curry
Uncertainty abounds in issues related to climate science and climate changes, the impacts of those changes, and the efficacy of strategies that might be used to mitigate or adapt to change. There are, however, a few things about which we can be quite certain. There are also a number of things about which many people are certain, but should not be.
The above words are the abstract of a paper by M. Granger Morgan in the Climatic Change special issue on uncertainty guidance for the IPCC. This special issue was discussed in a previous post. Most of the papers were behind a paywall, but the publisher, Springer, is making all of its journal publications freely accessible for the month of December, so I have downloaded the remaining papers from [here]. Here are links to the papers that were not discussed or linked to in the previous thread:
Reducing doubt about uncertainty: Guidance for IPCC’s third assessment. Richard H. Moss [link moss]
Treatment of uncertainties in IPCC Assessment Reports: past approaches and considerations for the Fifth Assessment Report. Michael D. Mastrandrea and Katharine J. Mach [link mastrandrea]
Differentiating theory from evidence in determining confidence in an assessment finding. Kristie L. Ebi [link ebi]
Applying the science of communication to the communication of science. Baruch Fischhoff [link fischhoff]
Certainty, uncertainty, and climate change. M. Granger Morgan (Carnegie Mellon University) [link morgan]
Defense community perspectives on uncertainty and confidence judgments. Marcus King and Sherri Goodman [link king]
Communicating climate change risks in a skeptical world. John D. Sterman [link sterman]
Certainty, uncertainty and climate change
I’ve selected one of these papers to highlight: the paper by Morgan.
This paper argues for the use of formal expert elicitations, as an alternative to the consensus seeking approach of the IPCC. Some excerpts:
While the IPCC has yet to make use of them, there are methods that allow an even more precise characterization of uncertainties. “Expert elicitation” involves a set of techniques first developed in the decision analytic community. Subsequently, the methods used to perform elicitation have been informed by the work of a number of experimental psychologists who have demonstrated that, without being aware of it, people use a variety of cognitive heuristics when making judgments about uncertainty. While these heuristics work well in many settings, they can also give rise to a variety of biases when making judgments under uncertainty.
There is clear experimental evidence that both experts and laypeople are systematically overconfident when making judgments about, or in the presence of, uncertainty. Expert elicitation cannot eliminate the problem of biases caused by the operation of cognitive heuristics, nor can it eliminate overconfidence. But unlike more informal methods such as group discussion (in which the same cognitive heuristics and tendency to overconfidence operate), it can work systematically to try to identify and minimize such problems.
Since the early 1990s my colleagues and I have conducted four detailed expert elicitations in which we have obtained judgments from leading climate and ecosystem scientists about uncertainty in the value of a variety of key climate variables and impacts.
One hopes that research will lead to a reduction of uncertainty, and at least some decision analysts appear to assume that this will always be the case. However, our respondents were all experienced scientists who understand that research often identifies unforeseen complexities, and thus, at least for a while, can increase rather than decrease uncertainty. Thus, for example, in Morgan and Keith (1995) we asked respondents to assess the probability that their uncertainty about the value of climate sensitivity would grow by 25% or more after a 15-year program of research at $1 billion per year. The responses we obtained ranged from 0.08 to 0.30 (average value of 0.19). We have found similar results in our more recent elicitations. While such results are not surprising to experienced scientists, they do come as a surprise to some analysts and decision makers who view research as always reducing uncertainty.
Quantitative expert elicitation can be a very useful tool to identify and display the divergence of opinion within a field. It can do so with much greater clarity than qualitative statements of the sort produced by IPCC writing teams.
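To give a concrete sense of what "displaying the divergence of opinion" can mean in practice, here is a minimal sketch. The elicited values below are invented for illustration and are not from Morgan's studies; the idea is simply that each expert supplies subjective quantiles for a quantity such as climate sensitivity, and side-by-side summaries make the disagreement explicit in a way a single qualitative confidence label cannot.

```python
# Hypothetical sketch of summarizing an expert elicitation.
# Each expert gives a (5th percentile, median, 95th percentile) judgment
# for climate sensitivity in deg C. All numbers are invented.
from statistics import median

elicited = {
    "expert_1": (1.5, 2.5, 4.0),
    "expert_2": (0.5, 1.8, 3.0),
    "expert_3": (2.0, 3.0, 6.0),
    "expert_4": (1.0, 2.8, 5.0),
}

medians = [m for (_, m, _) in elicited.values()]
print("median of expert medians:", median(medians))
print("range of expert medians: %.1f to %.1f" % (min(medians), max(medians)))

# Listing each expert's 90% interval side by side shows the spread of
# opinion far more precisely than a phrase like "medium confidence".
for name, (lo, mid, hi) in elicited.items():
    print(f"{name}: {lo:.1f} -- {hi:.1f} (median {mid:.1f})")
```

The point of the exercise is that the disagreement itself becomes data: a reader can see at a glance whether the experts' intervals overlap or whether the field is genuinely split.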
6 Selecting the right tools and the need to develop new ones
While doing a good job of characterizing and analyzing uncertainty, and of communicating uncertainty, is very important, perhaps even more important is selecting the right tools to do climate-related assessment. Most of the conventional tools of policy analysis implicitly assume that:
1. There is a single (public-sector) decision maker who faces a single problem (in the context of a single polity);
2. Values are known (or knowable), static, and exogenously determined;
3. The decision maker should select a policy by maximizing expected utility;
4. The impacts involved are of manageable size and can be valued at the margin;
5. Time preference is accurately described by conventional exponential discounting of future costs and benefits;
6. The system under study can reasonably be treated as linear;
7. Uncertainty is modest and manageable.
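Assumptions 3 and 5 in the list above can be made concrete with a short sketch. This is not from Morgan's paper; the discount rate, probabilities, and payoffs are all hypothetical. Under the conventional framework, a policy's score is its expected utility: the probability-weighted sum, over outcome scenarios, of exponentially discounted net benefits.

```python
# Hypothetical illustration of assumptions 3 and 5: a single decision maker
# scores each policy by expected utility with exponential discounting.
# All probabilities, benefits, and the discount rate are invented.

def discounted_expected_utility(scenarios, discount_rate):
    """scenarios: list of (probability, [net benefit in year 0, 1, 2, ...])."""
    total = 0.0
    for prob, benefits in scenarios:
        # Assumption 5: exponential discounting of future costs and benefits.
        pv = sum(b / (1.0 + discount_rate) ** t for t, b in enumerate(benefits))
        # Assumption 3: weight each scenario by its probability.
        total += prob * pv
    return total

# Two invented policy options: act now vs. do nothing.
policy_a = [(0.7, [-10, 5, 5, 5]),   # 70%: upfront cost, steady benefits
            (0.3, [-10, 2, 2, 2])]   # 30%: benefits turn out smaller
policy_b = [(1.0, [0, 0, 0, 0])]     # baseline: no cost, no benefit

r = 0.05  # conventional exponential discount rate
print("policy A:", discounted_expected_utility(policy_a, r))
print("policy B:", discounted_expected_utility(policy_b, r))
```

The framework then recommends whichever policy maximizes this number. Morgan's point is that every step of this calculation embeds contestable assumptions: whose utility is counted, whether values are stable over decades, and whether a single discount rate can represent time preference across very different communities.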
Despite years of modeling that seeks an optimal global climate policy, it should be obvious to all that what is optimal for the Inuit of Northern Canada, the Quechua and Aymara-speaking peoples of the Andes, or the Anglo population of Australia will not be the same. How those, or dozens of other communities, will value goods, services and ecosystems 50 to 100 years from now is also deeply uncertain, and likely to depend in critical ways on cultural and path-dependent processes.