by Judith Curry
Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.
The latest issue of Nature Climate Change includes the following Opinion & Comment by Fyfe, Gillett and Zwiers: Overestimated global warming over the past 20 years. [link; behind paywall]. It's a short piece; here are some excerpts:
Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval). This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models. By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C per decade (using 95% confidence intervals on the model average). The observed rate of warming given above is less than half of this simulated rate, and only a few simulations provide warming trends within the range of observational uncertainty.
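The trend estimates quoted above (e.g. 0.14 ± 0.06 °C per decade) are decadal slopes with 95% confidence intervals. As a minimal sketch of how such a number is computed, the following fits an ordinary least-squares trend to a synthetic 20-year annual temperature series; the data are invented for illustration and are not the HadCRUT or CMIP5 series the paper uses.

```python
# Sketch: estimating a decadal temperature trend with a 95% confidence
# interval via ordinary least squares. The series is synthetic, built to
# have a true trend of 0.14 °C/decade, purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1993, 2013)          # a 20-year window, as in the paper
true_trend = 0.014                     # °C per year (0.14 °C per decade)
temps = true_trend * (years - years[0]) + rng.normal(0, 0.1, years.size)

res = stats.linregress(years, temps)
trend_per_decade = res.slope * 10
# 95% CI half-width: t critical value times the slope's standard error
half_width = stats.t.ppf(0.975, years.size - 2) * res.stderr * 10
print(f"trend = {trend_per_decade:.2f} +/- {half_width:.2f} C/decade")
```

With real data the same two numbers (slope and CI half-width) are what get reported, though the paper's intervals also account for autocorrelation in the residuals, which this sketch ignores.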
The inconsistency between observed and simulated global warming is even more striking for temperature trends computed over the past fifteen years (1998–2012). For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade. The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012.
The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years. [S]uch an inconsistency is only expected to occur by chance once in 500 years, if 20-year periods are considered statistically independent. Similar results apply to trends for 1998–2012. In conclusion, we reject the null hypothesis that the observed and model mean trends are equal at the 10% level.
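The "once in 500 years" statement is a statement about how rarely the ensemble produces a trend as low as the observed one. A simple rank-based version of that consistency check can be sketched as follows; the ensemble here is synthetic (117 trends drawn around the paper's reported ensemble mean of 0.30 °C/decade), and this is an illustration of the idea, not the exact test Fyfe et al. performed.

```python
# Sketch: rank-based consistency check between an observed trend and an
# ensemble of simulated trends. The ensemble is synthetic: 117 draws
# (matching the paper's simulation count) centred on 0.30 °C/decade with
# an assumed spread of 0.10 °C/decade.
import numpy as np

rng = np.random.default_rng(1)
observed_trend = 0.14                  # °C/decade, as reported for 1993-2012
simulated_trends = rng.normal(0.30, 0.10, 117)

# One-sided p-value: fraction of simulations with a trend at least as low
# as the observed one. A tiny fraction means the ensemble rarely produces
# warming this slow, i.e. observations and models look inconsistent.
p = np.mean(simulated_trends <= observed_trend)
print(f"fraction of simulations at or below the observed trend: {p:.3f}")
```

A probability of 1/500 per 20-year period corresponds to a p-value of 0.002 under the paper's independence assumption; the sketch above just shows where such a number comes from.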
One possible explanation for the discrepancy is that forced and internal variation might combine differently in observations than in models. For example, the forced trends in models are modulated up and down by simulated sequences of ENSO events, which are not expected to coincide with the observed sequence of such events. For this reason the moderating influence on global warming that arises from the decay of the 1998 El Niño event does not occur in the models at that time. Thus we employ here an established technique to estimate the impact of ENSO on global mean temperature, and to incorporate the effects of dynamically induced atmospheric variability and major explosive volcanic eruptions. Although these three natural variations account for some differences between simulated and observed global warming, these differences do not substantively change our conclusion that observed and simulated global warming are not in agreement over the past two decades. Another source of internal climate variability that may contribute to the inconsistency is the Atlantic multidecadal oscillation (AMO). However, this is difficult to assess as the observed and simulated variations in global temperature that are associated with the AMO seem to be dominated by a large and concurrent signal of presumed anthropogenic origin. It is worth noting that in any case the AMO has not driven cooling over the past 20 years.
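The "established technique to estimate the impact of ENSO on global mean temperature" is, in its simplest form, a linear regression of the temperature series on an ENSO index, with the fitted ENSO component subtracted out. A minimal sketch of that adjustment, using synthetic data in place of a real Niño3.4 index, might look like this (the paper's actual method may differ in detail, e.g. in lags and additional regressors):

```python
# Sketch: removing an ENSO signal from a temperature series by regressing
# on an ENSO index and subtracting the fitted component. All data here
# are synthetic stand-ins, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n = 240                                 # 20 years of monthly data
enso_index = rng.normal(0, 1, n)        # stand-in for a Nino3.4 index
trend = 0.0012 * np.arange(n)           # underlying warming signal
temps = trend + 0.1 * enso_index + rng.normal(0, 0.05, n)

# Least-squares fit of temperature on the ENSO index (plus an intercept)
A = np.column_stack([enso_index, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
temps_adjusted = temps - coef[0] * enso_index   # ENSO-removed series
```

The adjusted series retains the trend but with the ENSO-driven wiggles largely removed, which is why the paper can say ENSO accounts for only some of the model-observation difference.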
Another possible driver of the difference between observed and simulated global warming is increasing stratospheric aerosol concentrations. Other factors that contribute to the discrepancy could include a missing decrease in stratospheric water vapour, errors in aerosol forcing in the CMIP5 models, a bias in the prescribed solar irradiance trend, the possibility that the transient climate sensitivity of the CMIP5 models could be on average too high or a possible unusual episode of internal climate variability not considered above. Ultimately the causes of this inconsistency will only be understood after careful comparison of simulated internal climate variability and climate model forcings with observations from the past two decades, and by waiting to see how global temperature responds over the coming decades.
JC comments: As far as I can tell, the methods for statistically comparing observations and models and drawing inferences from this comparison are rock solid.
The selection of 20 years is interesting for several reasons. It avoids the 'cherry picking' criticism of starting in 1997 or 1998, and it also includes the big jump from 1993–1998.
In terms of reasons for model overestimation, the apparent 'preferred' explanation of 'the ocean ate it' does not get any play here, other than in the context of a brief consideration of natural internal variability. Their conclusion, that 'This difference might be explained by some combination of errors in external forcing, model response and internal climate variability', is right on the money IMO, although I don't think their analysis of why the models might be wrong was particularly illuminating. If you would like further illumination on why the climate models might be wrong, I refer you to my uncertainty monster paper.
A few words about the authors. John Fyfe and Nathan Gillett are Canadian climate modelers. Francis Zwiers literally wrote the book on climate statistics (w/von Storch): Statistical Analysis in Climate Research. Fyfe was a lead author for the AR4; Gillett is a lead author for the AR5 Chapter 9; Zwiers is Vice Chair for WG1 of AR5.
Dare we hope for sanity from the AR5 in their assessment of detection and attribution? Based upon the ‘leaks’, I am not too hopeful.