by Judith Curry
In the past 6 months or so, we have seen numerous different plots of the CMIP5 climate model simulations versus observations.
The first such plot that I saw was produced by John Christy in his Congressional Testimony last August:
The next time I encountered a similar diagram was in the leaked IPCC AR5 SOD, Chapter 1, Figure 1.4. I am not going to reproduce that figure here, since I am not sure of its legal status in the context of my agreement with wordpress.com, but you can find the link [here]. In short, the diagram shows, for the period 1990-2015, the spread of the FAR, SAR, TAR, AR4 and AR5 climate model simulations against the three main global surface temperature analyses. It also includes a gray shading that corresponds to observational uncertainty and internal variability (although I have no idea how ‘internal variability’ is taken into account).
I next encountered a version of this diagram at RealClimate, in a post dated 7 February 2013:
Oops, the first time I glanced at this I assumed it was CMIP5; it looks like it is CMIP3 instead.
I then spotted a version of this diagram in a post by Roger Pielke Jr, which came from a tweet by Ed Hawkins:
This figure motivated me to head over to Ed Hawkins’ blog to see what he is up to, and I spotted this very interesting analysis that compares the CMIP5 simulations with observations, where the CMIP5 output is masked to eliminate regions where HadCRUT4 has missing data:
The conclusion is the same as in each of the past few years; the models are on the low side of some changes, and on the high side of others, but despite short-term ups and downs, global warming continues much as predicted.
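For readers curious about the mechanics, the masking step is conceptually simple: wherever HadCRUT4 has no observation for a given month and grid cell, the corresponding model grid cell is dropped before computing the global mean, so that the model is sampled in the same way as the observations. Here is a minimal sketch of the idea in Python/xarray; the file names, variable names and prior regridding are illustrative assumptions on my part, not Ed Hawkins’ actual code.

```python
# Sketch: mask CMIP5 output to HadCRUT4 coverage before computing the
# global mean. File/variable names are hypothetical placeholders.
import numpy as np
import xarray as xr

# HadCRUT4 anomalies on a 5x5 degree grid, with NaN where there are no observations
obs = xr.open_dataset("HadCRUT4_median.nc")["temperature_anomaly"]

# A CMIP5 simulation already regridded (elsewhere) to the same 5x5 grid
model = xr.open_dataset("cmip5_tas_regridded.nc")["tas"]

# Drop model values wherever the observations are missing in that month
model_masked = model.where(obs.notnull())

# Area-weighted global means (weights proportional to cos(latitude))
weights = np.cos(np.deg2rad(obs.lat))

def global_mean(field):
    return field.weighted(weights).mean(dim=("lat", "lon"))

obs_gm = global_mean(obs)
masked_gm = global_mean(model_masked)
unmasked_gm = global_mean(model)
```

Comparing the masked and unmasked model means shows how much of any apparent model-observation mismatch can come simply from incomplete observational coverage (notably at high latitudes), rather than from the simulated warming itself.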
The fact is that the comparison of climate model predictions with the last few decades of observations is dominating the public discussion of global warming (well, alongside the issue of global warming impact on extreme weather).
There is no simple way to interpret these comparisons. I like the approach that Ed Hawkins is taking with the masking. To really understand the distribution of results, the range of model simulations needs to be presented in several different ways: spaghetti diagrams, PDFs, and block ranges for different sets of simulations and scenarios.
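To illustrate what I mean, here is a rough sketch (using synthetic numbers, not actual CMIP5 output) of showing a single ensemble three ways: individual spaghetti traces, a percentile envelope around the multi-model mean, and a PDF of recent trends across the runs.

```python
# Sketch: three views of one model ensemble. The "ensemble" here is
# synthetic, purely to illustrate the plotting choices.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = np.arange(1990, 2016)
n_runs = 40
trend = 0.02 * (years - 1990)                                      # common warming trend (K)
noise = 0.1 * rng.standard_normal((n_runs, years.size)).cumsum(axis=1)  # run-to-run variability
ensemble = trend[None, :] + noise                                   # shape (n_runs, n_years)

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))

# 1. Spaghetti: every run drawn individually
axes[0].plot(years, ensemble.T, color="grey", lw=0.8, alpha=0.5)
axes[0].set_title("Spaghetti")

# 2. Envelope: 5-95% range and the multi-model mean
lo, hi = np.percentile(ensemble, [5, 95], axis=0)
axes[1].fill_between(years, lo, hi, color="lightblue")
axes[1].plot(years, ensemble.mean(axis=0), color="navy")
axes[1].set_title("5-95% range and mean")

# 3. PDF of the 2000-2015 trend across runs
trends = np.array([np.polyfit(years[10:], run[10:], 1)[0] for run in ensemble])
axes[2].hist(trends, bins=15, density=True)
axes[2].set_title("PDF of recent trends")

plt.tight_layout()
plt.show()
```

Each view answers a different question: the spaghetti shows what individual realizations look like, the envelope shows where most of the ensemble sits, and the PDF shows how a specific diagnostic (here, a recent trend) is distributed across runs.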
In the midst of substantial public interest in this issue, there is no published analysis that I know of that compares CMIP5 simulations with observations, although it looks like Ed Hawkins’ analysis is heading towards publication. The IPCC process is actually slowing this down, since presumably those involved in producing these simulations, or otherwise involved in the IPCC AR5, are holding these results until the final AR5 report so that some ‘consensus’ can be built in terms of how to interpret these results and ‘communicate’ the uncertainty to the public. The leak of the IPCC AR5 SOD gives us a glimpse into what the IPCC is thinking, but the formal academic etiquette surrounding leaked information precludes citing or posting it in academic publications or blog posts (the timing of my wordpress blog crash occurred around the time of the SOD leak; conspiracy theorists, have at it), and it raises interesting ethical considerations in personally communicating such information to a policy maker through a briefing or testimony.
I would be most interested in any other model-observation comparisons that include CMIP5 simulations; please let me know if you have spotted anything.
