by Judith Curry
“Climate dice”, describing the chance of unusually warm or cool seasons relative to climatology, have become progressively “loaded” in the past 30 years, coincident with rapid global warming. We conclude that extreme heat waves, such as that in Texas and Oklahoma in 2011 and Moscow in 2010, were “caused” by global warming, because their likelihood was negligible prior to the recent rapid global warming. – Hansen, Sato, Ruedy
We discussed this paper Perceptions of Climate Change: The New Climate Dice in a previous thread. The paper is getting pretty big play in the MSM, notably this article by Paul Krugman entitled Loading the Climate Dice.
Tamino on loaded dice
The idea of ‘loaded dice’ is described in Tamino’s post Craps:
An ordinary die has six faces, with a single spot on one face, two spots on another, etc. etc. up to six spots. When you roll the die, you get an essentially random result between 1 and 6. It’s not uncommon in many games (craps, for instance) to roll two dice and add their numbers to compute the result. We could even roll three, or four, or as many as the game requires (Yahtzee, anyone?).
In any case, let’s call the result you get weather. The way the faces of the dice are numbered, with the six faces having the numbers 1 through 6, let’s call that climate.
Climate (the labels on the dice) determines what you expect to get. It also determines how much you can expect your result to vary. This fits the definition of actual climate, which is the mean and variability of weather over long time spans over large areas. Although climate determines the average result and how much variation we can expect, it doesn’t by itself determine the actual result. That’s a random result of rolling the dice … or if you prefer, a random result of natural variations in weather.
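To make the dice analogy concrete, here is a minimal simulation sketch (a toy example of my own, not from Tamino’s post): a fair die versus a hypothetical “loaded” die with probability shifted toward the high faces, and the resulting chance of a high two-dice total.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rolls = 100_000

# "Climate": the labels (probabilities) on the dice.  A fair die has faces 1-6
# with equal probability; a hypothetical "loaded" die shifts probability
# toward the high faces.
fair_probs   = np.full(6, 1 / 6)
loaded_probs = np.array([0.10, 0.12, 0.14, 0.18, 0.22, 0.24])

faces = np.arange(1, 7)

def roll_totals(probs, n_dice=2):
    """Sum of n_dice rolls, repeated n_rolls times ('weather' outcomes)."""
    rolls = rng.choice(faces, size=(n_rolls, n_dice), p=probs)
    return rolls.sum(axis=1)

fair_totals   = roll_totals(fair_probs)
loaded_totals = roll_totals(loaded_probs)

# Chance of an "extreme" outcome (a total of 11 or 12 with two dice)
print("P(total >= 11), fair dice:  ", np.mean(fair_totals >= 11))
print("P(total >= 11), loaded dice:", np.mean(loaded_totals >= 11))
```

Loading the dice changes the whole distribution of totals, but the relative change is largest for the rare high totals.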
When we change either the mean value or the variance of a distribution, then relatively speaking the most profound changes in the probability are likely to occur in the tails of the distribution, i.e., for the extreme events. Let’s take a look at how this might affect a different probability function, the normal distribution (the familiar “bell curve”).
[see details of the analysis in Tamino's post]
And this illustrates one of the greatest potential dangers of global warming. If we increase the mean temperature (and we already have), of course we increase the likelihood of extreme heat waves (and we already have). But if, in addition, global warming increases the variance of regional temperatures, then we increase the likelihood of extreme heat waves by a lot. A helluva lot. The effect was profound when we only increased the standard deviation by a factor of 1.1 — what if it increases by a factor of 1.2 or even more? The increased likelihood of extreme heat would be astounding. What’s more, we would also increase the likelihood of extreme cold spells!
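Tamino’s point about the tails is easy to verify with the normal distribution. The numbers below are illustrative only (the +0.5σ mean shift and the 1.1 variance inflation factor are assumptions, not fitted values): the exceedance probability at a fixed +3σ threshold is computed for each case.

```python
from scipy.stats import norm

threshold = 3.0  # "extreme heat": 3 baseline standard deviations above the old mean

cases = {
    "baseline (mean 0, sd 1.0)":  norm(loc=0.0, scale=1.0),
    "mean shifted by +0.5 sd":    norm(loc=0.5, scale=1.0),
    "sd increased by factor 1.1": norm(loc=0.0, scale=1.1),
    "shift +0.5 sd AND sd x 1.1": norm(loc=0.5, scale=1.1),
}

base_p = cases["baseline (mean 0, sd 1.0)"].sf(threshold)
for name, dist in cases.items():
    p = dist.sf(threshold)  # P(T > threshold)
    print(f"{name:30s} P = {p:.5f}  ({p / base_p:4.1f}x baseline)")
```

The combined case raises the exceedance probability by roughly an order of magnitude relative to the baseline, which is the qualitative behavior Tamino describes.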
Hansen et al.
The main conclusion of Hansen et al. is:
“Climate dice”, describing the chance of unusually warm or cool seasons relative to climatology, have become progressively “loaded” in the past 30 years, coincident with rapid global warming. The distribution of seasonal mean temperature anomalies has shifted toward higher temperatures and the range of anomalies has increased. An important change is the emergence of a category of summertime extremely hot outliers, more than three standard deviations (σ) warmer than climatology. This hot extreme, which covered much less than 1% of Earth’s surface in the period of climatology, now typically covers about 10% of the land area. We conclude that extreme heat waves, such as that in Texas and Oklahoma in 2011 and Moscow in 2010, were “caused” by global warming, because their likelihood was negligible prior to the recent rapid global warming.
Hansen et al. illustrate a dramatic change in the historical distribution of global surface temperatures (Fig 9a):
What has been plotted are local temperature anomalies divided by the local standard deviation. These T/sigma values are collected for separate decades, and a distribution of T/sigma values is plotted for each decade.
Hansen’s widening distribution is difficult to interpret for several reasons. The analysis conflates the trend with the variance, and it is not clear whether the spatial or the temporal variance has changed. While Hansen et al. conclude that temperatures have become more variable/extreme, a widening of the T/sigma distribution could equally well be produced by the global trend acting on stations with different local standard deviations, without any change in local variability. So the widening alone is not conclusive evidence that temperatures have become more variable or extreme.
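This concern can be illustrated with synthetic data. In the sketch below (all numbers hypothetical), every station has the same common warming trend and year-to-year variability that never changes; only the local standard deviation differs from station to station. A Hansen-style T/σ distribution nevertheless widens from the first decade to the last.

```python
import numpy as np

rng = np.random.default_rng(1)

n_stations, n_years = 500, 60
years = np.arange(n_years)

# Hypothetical setup: a common 0.02 C/yr warming trend at every station,
# Gaussian year-to-year noise whose size never changes in time, but a local
# standard deviation that differs from station to station (0.3 to 1.5 C).
trend = 0.02 * years
local_sd = rng.uniform(0.3, 1.5, size=n_stations)
noise = rng.normal(size=(n_stations, n_years)) * local_sd[:, None]
temps = trend[None, :] + noise

# Hansen-style normalization: anomaly relative to a fixed early baseline
# (first 30 years), divided by each station's baseline-period standard deviation.
base = slice(0, 30)
anom = temps - temps[:, base].mean(axis=1, keepdims=True)
z = anom / temps[:, base].std(axis=1, ddof=1, keepdims=True)

# The pooled T/sigma distribution widens between the first and last decade,
# even though local variability was held constant by construction.
print("std of T/sigma, first decade:", round(float(z[:, :10].std()), 2))
print("std of T/sigma, last decade: ", round(float(z[:, -10:].std()), 2))
```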
Tamino has two posts on this topic, raising several issues.
See also the comments, very good discussions. The punchline from Tamino’s analysis:
It’s that last part which makes the variance baseline-dependent. In particular, if the baseline period is the same as the time span we’re averaging over, then all the station averages will equal their corresponding station offsets, and all the differences will be zero. This will cause the estimated variance to be minimum. If, on the other hand, the differences between station means and station offsets show large variance because different stations have warmed differently between baseline and observation intervals, then the last term will greatly increase the estimated data variance.
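A small synthetic example (hypothetical numbers) makes the baseline dependence explicit: stations with identical weather noise but different amounts of warming between two 20-year periods give a much larger estimated anomaly variance when the anomalies are computed against the earlier baseline than against the period actually being analyzed.

```python
import numpy as np

rng = np.random.default_rng(2)

n_stations = 200
# Hypothetical stations: identical year-to-year weather noise (sd = 0.5 C),
# but each station warms by a different amount (0 to 1.5 C) between the
# first and second 20-year periods.
warming = rng.uniform(0.0, 1.5, size=n_stations)
first   = rng.normal(0.0, 0.5, size=(n_stations, 20))
second  = rng.normal(0.0, 0.5, size=(n_stations, 20)) + warming[:, None]

def anomaly_variance(data, baseline):
    """Variance of anomalies of `data` relative to each station's mean over `baseline`."""
    anoms = data - baseline.mean(axis=1, keepdims=True)
    return anoms.var(ddof=1)

# Baseline = the span being analyzed: station means subtract out exactly.
print("second period, own baseline:  ", round(anomaly_variance(second, second), 3))
# Baseline = the earlier span: differing station warming inflates the variance.
print("second period, early baseline:", round(anomaly_variance(second, first), 3))
```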
Let’s break this down a bit. The recent IPCC SREX Report illustrates how a change in mean, variance, and skewness can influence the extreme tail of a distribution:
Figure SPM.3 | The effect of changes in temperature distribution on extremes. Different changes in temperature distributions between present and future climate and their effects on extreme values of the distributions: (a) effects of a simple shift of the entire distribution toward a warmer climate; (b) effects of an increase in temperature variability with no shift in the mean; (c) effects of an altered shape of the distribution, in this example a change in asymmetry toward the hotter part of the distribution.
Need for caution in interpreting extreme weather statistics
Sardeshmukh, Penland, and Compo (3 of my favorite NOAA scientists) have a good poster that explains the issues in interpreting extreme weather statistics. Their main points:
- The PDFs of daily atmospheric anomalies are not Gaussian: they are generally skewed and heavy tailed.
- Non-Gaussianity has enormous implications for the probabilities of extreme values, and for our ability to estimate their changes using limited samples
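The second point is easy to illustrate. In the sketch below a Gaussian is compared with a heavy-tailed Student-t distribution (a simple stand-in for the skewed, heavy-tailed daily PDFs the poster describes, not the authors’ actual fitted distributions), both matched to the same mean and variance; the probabilities diverge dramatically as the threshold moves further into the tail.

```python
import numpy as np
from scipy import stats

# A Gaussian and a heavy-tailed Student-t matched to the same mean and variance.
gauss = stats.norm(loc=0.0, scale=1.0)

df = 5  # variance of a standard t is df/(df-2), so rescale to unit variance
heavy = stats.t(df=df, loc=0.0, scale=np.sqrt((df - 2) / df))

for k in (2, 3, 4):
    p_gauss = gauss.sf(k)   # P(X > k standard deviations), Gaussian
    p_heavy = heavy.sf(k)   # same threshold, heavy-tailed distribution
    print(f"P(X > {k} sd): Gaussian {p_gauss:.2e}   heavy-tailed {p_heavy:.2e}"
          f"  ({p_heavy / p_gauss:5.1f}x)")
```

At 2 standard deviations the two distributions barely differ; at 4 standard deviations the heavy-tailed probability is larger by well over an order of magnitude, which is why assuming Gaussianity badly misestimates extremes.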
From Tamino’s part II post:
However, if we want to know whether or not the weather (not climate) is getting more variable, then we really want to isolate the individual-station fluctuations. Therefore I submit that in order to estimate the distribution of temperature anomalies for estimating temperature variability during some time span (perhaps each decade, or a set of 11-year periods like Hansen et al.), the baseline for the anomaly calculation should be equal to the time span being analyzed.
From Sardeshmukh et al.:
- We have demonstrated the relevance of “stochastically generated skewed” (SGS) distributions for describing daily atmospheric variability, that arise from simple extensions of a “red noise” process.
- The parameters of these SGS distributions, and of the associated linear Markov model, can be estimated from the first four moments of the data (mean, variance, skewness, and kurtosis). The model can then be run to generate not only the appropriate SGS distribution, but also to estimate sampling uncertainties through extensive Monte Carlo integrations.
- We have shown that extreme-value distributions can be estimated more accurately from limited- length records using such a Markov model than through direct GEV approaches.
- To accurately represent extreme weather statistics and their changes, it is necessary for climate models to accurately represent the first four moments of daily variability. The good news is that for many purposes this may also be sufficient. The bad news is that currently they do not adequately capture the changes of even the first moment (the mean) on regional scales.
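As a rough sketch of what moment-based fitting and Monte Carlo sampling-uncertainty estimation look like in practice, the code below computes the first four moments of a daily series and fits a plain AR(1) “red noise” surrogate. This is a deliberate simplification, not the authors’ full SGS/Markov model (a pure AR(1) process cannot reproduce skewness), and the input series is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Stand-in "daily" series; in practice this would be observed daily anomalies.
x = rng.standard_t(df=8, size=5000)

# First four moments of the data (the quantities an SGS-style fit is based on).
moments = dict(mean=x.mean(), var=x.var(ddof=1),
               skew=stats.skew(x), kurt=stats.kurtosis(x))
print(moments)

# Simplified stand-in for a linear Markov model: AR(1) red noise with the
# same lag-1 autocorrelation and variance as the data.
phi = np.corrcoef(x[:-1], x[1:])[0, 1]
sigma_eps = np.sqrt(x.var(ddof=1) * (1 - phi**2))

def simulate_ar1(n, phi, sigma, rng):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)
    return y

# Monte Carlo estimate of the sampling uncertainty in an extreme quantile.
q999 = [np.quantile(simulate_ar1(x.size, phi, sigma_eps, rng), 0.999)
        for _ in range(200)]
print("99.9th percentile: %.2f +/- %.2f" % (np.mean(q999), np.std(q999)))
```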
JC comments: Hansen’s idea for presenting extremes in this way is a powerful one, but misleading. I agree with the suggestions by Tamino and Sardeshmukh et al.; here are some additional suggestions of my own.
Surface temperature and heat wave extremes are arguably the easiest extreme weather event data to work with, having the most complete time/space coverage. I like the idea of the decadal analysis. But I would focus only on land (over oceans the data quality is worse, the extremes are moderated, and the impacts are smaller), and focus on regions, looking at the longest time series possible. This seems like an ideal application for the Berkeley Earth dataset.
The issue of trend needs to be dealt with in some way. I agree with Tamino that each decade’s anomaly should be determined from the decadal average.
A table for each decade with the 4 moments (mean, variance, skewness, kurtosis) for each region would be the most illuminating IMO. I agree with Sardeshmukh et al. that the most interesting aspect of such an analysis is likely to be changes in the skewness of the distribution. Identifying regional physical mechanisms that could account for changes in skewness (e.g. changes in precipitation, land use) would be very informative. In regions with increasing rainfall, the distribution might very well be skewed so that the long tail is toward cool rather than warm.
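A minimal sketch of how such a table could be built (the regions, dates, and anomalies below are synthetic placeholders; in practice the daily anomalies would come from a dataset such as Berkeley Earth):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical daily land-temperature anomaly records for a few regions.
dates = pd.date_range("1950-01-01", "2009-12-31", freq="D")
regions = ["Europe", "Central US", "Siberia"]
records = pd.DataFrame({
    "date": np.tile(dates, len(regions)),
    "region": np.repeat(regions, len(dates)),
    "anom": rng.normal(0.0, 2.0, size=len(dates) * len(regions)),
})

records["decade"] = (records["date"].dt.year // 10) * 10

# One row per (region, decade) with the four moments of the daily anomalies.
table = (records.groupby(["region", "decade"])["anom"]
         .agg(mean="mean", var="var",
              skew=lambda s: stats.skew(s),
              kurt=lambda s: stats.kurtosis(s)))
print(table.round(3))
```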
Digging into the data to understand decadal variation of extreme events seems much more productive than inferring extreme events from climate models. I think Sardeshmukh et al. nail it with this statement:
To accurately represent extreme weather statistics and their changes, it is necessary for climate models to accurately represent the first four moments of daily variability. The bad news is that currently they do not adequately capture the changes of even the first moment (the mean) on regional scales.