by Judith Curry
Use of state-of-the-art statistical methods could substantially improve the quantification of uncertainty in assessments of climate change.
The latest issue of Nature Climate Change has another interesting commentary by Katz, Craigmile, Guttorp, Haran, Sanso, and Stein: Uncertainty analysis in climate change assessments [link; behind paywall]. Excerpts:
Because the climate system is so complex, involving nonlinear coupling of the atmosphere and ocean, there will always be uncertainties in assessments and projections of climate change. This makes it hard to predict how the intensity of tropical cyclones will change as the climate warms, the rate of sea-level rise over the next century or the prevalence and severity of future droughts and floods, to give just a few well-known examples. Indeed, much of the disagreement about the policy implications of climate change revolves around a lack of certainty. The forthcoming Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the US National Climate Assessment Report will not adequately address this issue. Worse still, prevailing techniques for quantifying the uncertainties that are inherent in observed climate trends and projections of climate change are out of date by well over a decade. Modern statistical methods and models could improve this situation dramatically.
Uncertainty quantification is a critical component in the description and attribution of climate change. In some circumstances, uncertainty can increase when previously neglected sources of uncertainty are recognized and accounted for. In other circumstances, more rigorous quantification may result in a decrease in the apparent level of uncertainty, in part because of more efficient use of the available information. Nevertheless, policymakers need more accurate uncertainty estimates to make better decisions.
The recent IPCC Special Report Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation, on which the IPCC AR5 relies, does not take full advantage of the well-developed statistical theory of extreme values. Instead, many results are presented in terms of spatially and temporally aggregated indices and/or consider only the frequency, not the intensity, of extremes. Such summaries are not particularly helpful to decision-makers. Unlike the more familiar statistical theory for averages, extreme value theory does not revolve around the bell-shaped curve of the normal distribution. Rather, approximate distributions for extremes can depart far from the normal, including tails that decay as a power law. For variables such as precipitation and stream flow that possess power-law tails, conventional statistical methods would underestimate the return levels (that is, high quantiles) used in engineering design and, even more so, their uncertainties. Recent extensions of statistical methods for extremes make provision for non-stationarities such as climate change. In particular, they now provide non-stationary probabilistic models for extreme weather events, such as floods.
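To make the return-level point concrete, here is a minimal sketch in Python (my own illustration, not the authors' code, and assuming scipy is available): fit a generalized extreme value (GEV) distribution to synthetic annual maxima and compare the fitted 100-year return level with what a normal fit to the same data would give.

```python
# A minimal sketch of the return-level idea, not from the paper: fit a GEV
# distribution to synthetic annual maxima and read the 100-year return
# level off a high quantile. In scipy's convention, shape c < 0 gives a
# heavy, Frechet-type upper tail.
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(42)
annual_maxima = genextreme.rvs(-0.2, loc=50.0, scale=10.0,
                               size=80, random_state=rng)

# Maximum-likelihood GEV fit to the block maxima.
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
T = 100
rl_gev = genextreme.ppf(1 - 1 / T, c_hat, loc=loc_hat, scale=scale_hat)

# A normal fit to the same data underestimates that quantile when the
# true tail is heavy: the point made in the excerpt above.
rl_norm = norm.ppf(1 - 1 / T, loc=annual_maxima.mean(),
                   scale=annual_maxima.std(ddof=1))
print(f"100-year return level: GEV {rl_gev:.1f} vs normal {rl_norm:.1f}")
```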
It has recently been claimed that, along with an increase in the mean, the probability distribution of temperature is becoming more skewed towards higher values. But techniques based on extreme value theory do not detect this apparent increase in skewness. Any increase is evidently an artefact of an inappropriate method of calculation of skewness when the mean is changing.
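As an aside on how a changing mean can masquerade as skewness, here is a small sketch (my own construction, not the specific calculation at issue in the literature): an accelerating trend plus perfectly symmetric noise produces positive raw skewness that disappears once anomalies are computed about the trend.

```python
# A minimal sketch (my own illustration, not from the paper): pooled
# anomalies can look skewed when the mean is shifting, even though the
# variability about the trend is perfectly symmetric.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n = 20_000
t = np.linspace(0.0, 1.0, n)
trend = 3.0 * t**2                    # nonlinear shift in the mean
noise = rng.normal(0.0, 0.5, n)       # symmetric: true skewness is 0
series = trend + noise

print(f"skewness of raw anomalies:  {skew(series):+.3f}")

# Remove the (quadratic) trend before computing skewness.
fit = np.polyval(np.polyfit(t, series, 2), t)
print(f"skewness about the trend:   {skew(series - fit):+.3f}")
```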
Box 1 | Recommendations to improve uncertainty quantification.
- Replace qualitative assessments of uncertainty with quantitative ones.
- Reduce uncertainties in trend estimates for climate observations and projections through use of modern statistical methods for spatio-temporal data (see the sketch following this list).
- Increase the accuracy with which the climate is monitored by combining various sources of information in hierarchical statistical models.
- Reduce uncertainties in climate change projections by applying experimental design to make more efficient use of computational resources.
- Quantify changes in the likelihood of extreme weather events in a manner that is more useful to decision-makers by using methods that are based on the statistical theory of extreme values.
- Include at least one author with expertise in uncertainty analysis on all chapters of IPCC and US national assessments.
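The trend-uncertainty recommendation above can be made concrete with a small sketch (my own illustration, assuming a simple AR(1) error model; none of this code is from the paper): when residuals about a fitted trend are autocorrelated, the naive ordinary-least-squares standard error of the slope is too small, and a standard effective-sample-size correction widens it.

```python
# A minimal sketch (my own, assuming AR(1) errors): autocorrelated
# residuals inflate trend uncertainty relative to the naive OLS estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 100
t = np.arange(n, dtype=float)

# AR(1) noise with lag-1 autocorrelation phi, plus a linear trend.
phi, slope = 0.6, 0.02
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + rng.normal(0.0, 1.0)
y = slope * t + noise

# OLS slope and its naive standard error.
b, a = np.polyfit(t, y, 1)
resid = y - (a + b * t)
s2 = resid @ resid / (n - 2)
se_naive = np.sqrt(s2 / ((t - t.mean()) @ (t - t.mean())))

# Adjust via the effective sample size n_eff = n * (1 - r1) / (1 + r1),
# a standard first-order correction for serial correlation.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se_naive * np.sqrt((n - 2) / max(n_eff - 2, 1.0))

print(f"slope {b:.4f}, naive SE {se_naive:.4f}, adjusted SE {se_adj:.4f}")
```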
If these recommendations are adopted, improved uncertainty quantification would help policymakers to better understand the risks of climate change and adopt policies that prepare the world for the future.
JC comments: I applaud the publication of this essay by Nature Climate Change. This paper comes from the Geophysical Statistics Project at NCAR, of which I am a fan.
I support most of the recommendations made in this paper. But I have one big-picture cautionary concern. Uncertainty quantification is a growing buzzword in the climate modeling community. Wikipedia defines UQ as "the science of quantitative characterization and reduction of uncertainties in applications"; it tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. What's not to like about UQ?
Sometimes the uncertainty monster is just too big to characterize, implying big areas of ignorance. The biggest area of ignorance is observations that we didn't make in the past. In this case, at best we may be able to state the sign of a trend, or put some bounds around what we are estimating, but the nature of the uncertainty and the level of ignorance should be carefully evaluated before attempting to quantify the uncertainty in the context of a pdf (which has the potential to mislead). My concerns along these lines are fully explicated in my uncertainty monster paper.
The bottom line, and in this I agree 200% with the paper, is that climate science would HUGELY benefit from greater incorporation of statistical expertise. The relative lack of this expertise is one factor that contributes to overconfidence in the IPCC conclusions.