
Spinning the climate model-observations comparison. Part III

Several new analyses of relevance to interpreting the comparison of climate models with observations.

Ed Hawkins

Ed Hawkins has a new post Near-term global surface temperature projections in IPCC AR5. He refers to Figure 11.25 in the AR5 Report (link to Ch 11; relevant text starts on p. 51).

See Ed’s post for the figure and its complete caption. Two things strike me:

1) The top panel seems a much more honest comparison than Figure 1.4, discussed on the previous Spinning thread. Clearly the IPCC has not been consistent in its spin, with an apparently spin-free Figure 11.25 making it into the document.

2) The red box in the middle panel, which lowers the assessed projection relative to the raw climate model range, seems a tacit admission that the model projections of actual climate variability and change are highly uncertain.

In trying to understand where the red box comes from, here is some text from Ch 11:

Overall, in the absence of major volcanic eruptions and assuming no significant future long-term changes in solar irradiance, it is likely (>66% probability) that the GMST anomaly for the period 2016–2035, relative to the reference period of 1986–2005, will be in the range 0.3°C–0.7°C (expert assessment, to one significant figure; medium confidence). This range is consistent, to one significant figure, with the range obtained by using CMIP5 5–95% model trends for 2012–2035. It is also consistent with the CMIP5 5–95% range for all four RCP scenarios of 0.36°C–0.79°C, using the 2006–2012 reference period, after the upper and lower bounds are reduced by 10% to take into account the evidence noted under point (5) that some models may be too sensitive to anthropogenic forcing.

However, the implied rates of warming over the period from 1986–2005 to 2016–2035 are lower as a result of the hiatus: 0.10°C–0.23°C per decade, suggesting the AR4 assessment was near the upper end of current expectations for this specific time interval.

The assessment here provides only a likely range for GMST. Possible reasons why the real world might depart from this range include: radiative forcing departs significantly from the RCP scenarios, either due to natural (e.g., major volcanic eruptions, changes in solar irradiance) or anthropogenic (e.g., aerosol or greenhouse gas emissions) causes; processes that are poorly simulated in the CMIP5 models exert a significant influence on GMST. The latter class includes: a possible strong “recovery” from the recent hiatus in GMST; the possibility that models might underestimate decadal variability; the possibility that model sensitivity to anthropogenic forcing may differ from that of the real world; and the possibility of abrupt changes in climate.

This text in Chapter 11 is some of the most believable in the report, presenting a realistic assessment of the uncertainties in the climate system and the inadequacies of the climate models. Too bad the folks writing Chapter 9 (95% confidence in attribution) didn’t read Chapter 11 first.
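To see how the quoted numbers fit together, here is a quick back-of-the-envelope check. This is my own sketch of the arithmetic implied by the Chapter 11 text, not an official IPCC calculation, and it assumes the implied decadal rates are taken between the midpoints of the reference period (1986–2005) and the target period (2016–2035):

```python
# Back-of-the-envelope check of the quoted Chapter 11 numbers
# (my reading of the text, not an official IPCC calculation).

cmip5_lo, cmip5_hi = 0.36, 0.79   # CMIP5 5-95% range, 2006-2012 reference (deg C)

# Reduce both bounds by 10% for possible over-sensitivity to anthropogenic forcing.
adj_lo, adj_hi = 0.9 * cmip5_lo, 0.9 * cmip5_hi
print(f"adjusted range: {adj_lo:.2f}-{adj_hi:.2f} C")  # 0.32-0.71, i.e. 0.3-0.7 to 1 sig fig

# Implied decadal rates: warming realized between the midpoints of the
# reference period (1986-2005 -> 1995.5) and the target period
# (2016-2035 -> 2025.5), i.e. over 3 decades (assumed interpretation).
decades = (2025.5 - 1995.5) / 10.0
print(f"implied rates: {0.3/decades:.2f}-{0.7/decades:.2f} C/decade")  # 0.10-0.23
```

Both quoted ranges (0.3°C–0.7°C, and 0.10°C–0.23°C per decade) drop out directly under these assumptions.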

Chip Knappenberger

At Cato, Chip Knappenberger and Pat Michaels have a post Climate Models’ Tendency to Simulate Too Much Warming and the IPCC’s Attempt to Cover That Up. The main technical meat of their post is to compare observed and modeled 15-yr and 20-yr moving trends during the period 1951-2012.

Figure 3. The observed 15-yr moving trend during the period 1951-2012 from the Hadley Center temperature record (blue) compared to the multi-model mean trend (thick red line) and the 95 percent range of individual model simulations (thin red lines). (The model simulations consisted of 108 individual model runs downloaded from Climate Explorer that combined the RCP4.5 scenario with historical simulations).

The result isn’t very pretty for the IPCC.

Most of the time, the observed trend is either near or below the average trend from the models. At the recent end of the record, the observed trend has fallen below the model trend for 12 consecutive overlapping periods, and the discrepancy is growing larger. For the past two periods, 1997-2011 and 1998-2012, the observed trend falls below the 95 percent range of modeled trends, a statistical indication that the observed trend is not a member of the modeled set of predictions (i.e., the models do not statistically capture reality as represented by the observations).
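For readers who want to reproduce this style of comparison, here is a minimal sketch of the moving-trend calculation. It is my own reconstruction of the described method, not Knappenberger and Michaels’ actual code: it assumes annual-mean series and ordinary least-squares trends, and reads the “95 percent range” as the 2.5th–97.5th percentiles across runs. The obs and models arrays below are random stand-ins for the HadCRUT4 anomalies and the 108 CMIP5 runs.

```python
import numpy as np

def moving_trends(series, years, window=15):
    """OLS trend (deg C/decade) for each `window`-year period ending at each year."""
    end_years, trends = [], []
    for i in range(window - 1, len(series)):
        x = years[i - window + 1 : i + 1]
        y = series[i - window + 1 : i + 1]
        slope = np.polyfit(x, y, 1)[0]   # deg C per year
        end_years.append(years[i])
        trends.append(10.0 * slope)      # convert to deg C per decade
    return np.array(end_years), np.array(trends)

# Stand-in data: `obs` for annual GMST anomalies, `models` for 108 model runs.
years = np.arange(1951, 2013)
rng = np.random.default_rng(0)
obs = 0.010 * (years - 1951) + rng.normal(0, 0.1, years.size)
models = 0.012 * (years - 1951) + rng.normal(0, 0.1, (108, years.size))

end_yrs, obs_tr = moving_trends(obs, years)
model_tr = np.array([moving_trends(run, years)[1] for run in models])

# Multi-model mean and (one plausible reading of) the 95% range of runs.
mean_tr = model_tr.mean(axis=0)
lo_tr, hi_tr = np.percentile(model_tr, [2.5, 97.5], axis=0)
below = obs_tr < lo_tr   # periods where observations fall below the model range
print(end_yrs[below])
```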

Also notice that there are few excursions of the observed trend above the value of the modeled trend—a few periods ending in the late 1960s, mid-1980s, and again for the period ending in 1998 and 1999. The 1998 endpoint is the one that the IPCC decided to highlight—the fattest cherry they could find. It ends the largest positive discrepancy between observations and models during the past 40 years. Had they chosen to break their analysis in 1997, the observed trend during the period 1983-1997 would have been very close to the multi-model mean, while during the period 1997-2011, it still would have fallen below virtually every climate model simulation. 

By carefully choosing the break point in their analysis to be 1998, the IPCC concocts a narrative that sometimes the observed warming trend is less than the model average and at other times it is greater than the model average and that it all works out. The reality of the situation is that the small (in time as well as magnitude) positive excursion during the late 1990s is dwarfed by the large (in time as well as in magnitude) negative excursion of the observed trend beneath the model average trend at the end of the record. Two wrongs do not make a right.
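Continuing the sketch above (same stand-in obs, models, and years arrays), the sensitivity to the choice of break point can be checked directly by comparing fixed-period trends on either side of each candidate year:

```python
def period_trend(series, years, start, end):
    """OLS trend in deg C/decade over the inclusive period [start, end]."""
    mask = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[mask], series[mask], 1)[0]

# Trends on either side of the two candidate break points (stand-in data).
for a, b in [(1983, 1997), (1997, 2011), (1984, 1998), (1998, 2012)]:
    obs_t = period_trend(obs, years, a, b)
    mod_t = np.array([period_trend(run, years, a, b) for run in models])
    print(f"{a}-{b}: obs {obs_t:+.2f}, model mean {mod_t.mean():+.2f}, "
          f"model 5-95%: {np.percentile(mod_t, 5):+.2f} to "
          f"{np.percentile(mod_t, 95):+.2f} C/decade")
```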

Steve McIntyre

Steve McIntyre has a new post Fixing the Facts 2 that raises additional concerns about changes between the various drafts and, in particular, the deletion of Figure 1.5 from the Second Order Draft (SOD). The punchline of his post:

Richard Betts did not dispute the accuracy of the comparison in SOD Figure 1.5, but argued that the new Figure 1.4 was “scientifically better”. But how can the comparison be “scientifically better” when uncertainty envelopes are shown for the three early assessment reports, but not for AR4? Nor can a comparison between observations and AR4 projections be made “scientifically better” – let alone valid in accounting terms – by replacing actual AR4 documents and graphics with a spaghetti graph that did not appear in AR4.

Nor is the new graphic based on any article in peer reviewed literature.

Nor did any external reviewers of the SOD suggest removal of Figure 1.5, though some (e.g. Ross McKitrick) pointed out the inconsistency between the soothing text and the discrepancy shown in the figures.

Nor, in the absence of error, is there any justification for such wholesale changes and deletions after the third and final iteration had been sent to external reviewers.

In the past, IPCC authors famously deleted data to “hide the decline” in Briffa’s temperature reconstruction in order to avoid “giving fodder to skeptics”. Without this past history, IPCC might be entitled to a little more latitude. However, neither IPCC nor its supporting institutions renounced such conduct or undertook to avoid similar incidents in the future. Thus, IPCC is vulnerable to concerns that its deletion of SOD Figure 1.5 was primarily motivated to avoid “giving fodder to skeptics”.

Perhaps there’s a valid reason, but it hasn’t been presented yet.

JC summary: Sure makes the Mora et al. ‘we are toast by 2047’ argument, based on climate model projections, look pretty far off the mark.
