
Spinning the climate model – observation comparison: Part II

by Judith Curry

IPCC model global warming projections have done much better than you think. – Dana Nuccitelli

Last February, I wrote a post Spinning the climate model – observation comparison, where I introduced Ed Hawkins’ now famous graph:

The Second Order Draft of the AR5 SPM showed this version of the comparison in its Figure 1.4:

The Final Draft SPM made this statement about the model-observation comparison:

“Models do not generally reproduce the observed reduction in surface warming trend over the last 10–15 years.”

Much has been made of the lack of agreement between the model projections and observations, with the observations being perilously close to falling outside the entire range of model projections.

The final WG1 Report provides the following version of Figure 1.4:

Note that the observations are no longer outside the range of model simulations. Text from Chapter 1 states (both Final and SOD):

Even though the projections from the models were never intended to be predictions over such a short time scale, the observations through 2012 generally fall within the projections made in all past assessments.

Steve McIntyre has a post IPCC: Fixing the Facts that discusses the metamorphosis of the two versions of Figure 1.4.  McIntyre states:

For the envelopes from the first three assessments, although they cite the same sources as the predecessor Second Draft Figure 1.4, the earlier projections have been shifted downwards relative to observations, so that the observations are now within the earlier projection envelopes. You can see this relatively clearly with the Second Assessment Report envelope: compare the two versions. At present, I have no idea how they purport to justify this.

The main issue seems to be this.  The two plots make different choices about the year or period used to align the models with the observations for comparison.  Depending on which you pick, the observations end up inside or outside the envelope of projections.  How ‘best’ to do this is discussed at Tamino’s and Lucia’s threads.

Using different choices for this can be superficially misleading, but it doesn’t really obscure the underlying important point, which Ross McKitrick summarizes on the ClimateAudit thread:

Playing with the starting value only determines whether the models and observations will appear to agree best in the early, middle or late portion of the graph. It doesn’t affect the discrepancy of trends, which is the main issue here. The trend discrepancy was quite visible in the 2nd draft Figure 1.4. All they have succeeded in doing with the revised figure is obscuring it.
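
To see McKitrick’s point concretely, here is a toy sketch (all series and numbers are invented, not CMIP or observational data): re-baselining shifts an entire curve up or down, so the apparent gap at the end of the graph depends on the reference period chosen, while the least-squares trends are unchanged.

```python
import numpy as np

# Toy illustration: a "model" warming faster than "observations".
# All numbers are made up for the sketch.
years = np.arange(1980, 2013)
obs = 0.010 * (years - 1980) + 0.05 * np.sin(years / 3.0)  # ~0.10 C/decade
model = 0.025 * (years - 1980)                              # ~0.25 C/decade

def rebaseline(series, start, end):
    """Express a series as anomalies from its mean over [start, end]."""
    mask = (years >= start) & (years <= end)
    return series - series[mask].mean()

for ref in [(1980, 1985), (1990, 2000), (2005, 2012)]:
    obs_a, mod_a = rebaseline(obs, *ref), rebaseline(model, *ref)
    gap_2012 = mod_a[-1] - obs_a[-1]
    # Least-squares trends are invariant to a constant baseline shift.
    obs_tr = np.polyfit(years, obs_a, 1)[0] * 10  # C/decade
    mod_tr = np.polyfit(years, mod_a, 1)[0] * 10
    print(f"ref {ref}: 2012 gap {gap_2012:+.2f} C, "
          f"trends {obs_tr:.2f} vs {mod_tr:.2f} C/decade")
```

Running this, the gap at 2012 shrinks or grows with the reference period, while the trend difference stays fixed: exactly the distinction between where the curves *appear* to agree and whether they agree.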

JC deconstruction

Let’s take a closer look to see why all this is so confusing.  First, note that Figure 1.4 plots temperature anomalies relative to some reference year/period.  The ‘necessity’ of plotting temperature anomalies rather than actual temperatures is evident from the figure below (from Mauritsen et al.; see this previous post).

Pay attention to the gray lines prior to 2000.  These lines indicate the model temperature climatologies, many of which run 1–2°C above or below observed temperatures.   To compare climate models with observations (or even with each other), the model climatology over a reference period is subtracted to produce a temperature anomaly.
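
As a minimal sketch of that calculation (the absolute temperatures are made up, standing in for model output and observations), assuming a 1961–1990 reference period:

```python
import numpy as np

# Hypothetical series: a model whose climatology runs ~1.8 C warm relative
# to the observed global mean. All values are invented for illustration.
years = np.arange(1961, 2013)
model_abs = 15.8 + 0.015 * (years - 1961)  # hypothetical model, runs warm
obs_abs = 14.0 + 0.012 * (years - 1961)    # hypothetical observations

ref = (years >= 1961) & (years <= 1990)    # e.g. a 1961-1990 reference period
model_anom = model_abs - model_abs[ref].mean()
obs_anom = obs_abs - obs_abs[ref].mean()

print(f"absolute offset in 2012: {model_abs[-1] - obs_abs[-1]:.2f} C")
print(f"anomaly offset in 2012:  {model_anom[-1] - obs_anom[-1]:.2f} C")
```

The ~1.8°C offset in absolute temperature disappears once each series is expressed relative to its own reference-period mean, which is why the anomaly plots can look reassuring even when the underlying climatologies are far apart.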

Forget for a moment your uneasiness about model climatologies that are 1–2°C different from observations; your uneasiness might arise from wondering how these models produce anything sensible given the temperature dependence of the saturation vapor pressure over water, the freezing temperature of water, and the dependence of feedbacks on temperature parameter space. Thank goodness for tuning.

Back to the main point.  Comparing model temperature anomalies with observed temperature anomalies, particularly over relatively short periods, is complicated by the fact that climate models do not simulate the timing of ENSO and other modes of natural internal variability; further, the underlying trends might be different.  Hence it is difficult to make an objective choice for matching up the observations and model simulations.  Different strategies have been tried (as per the debate discussed above); matching the models and observations in different ways can give different spins on the comparison.

How to make the comparison depends on the hypothesis you are trying to test.  If you are trying to test the hypothesis that climate models have not predicted the pause since 1998, then you should be comparing trends between models and observations, rather than seeing if the observed temperature anomalies  lie within a broad envelope of climate model simulations.
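
A trend-based comparison might look something like the following sketch, using synthetic series rather than actual model output or observations; the ensemble size, trends, and noise level are all invented for illustration.

```python
import numpy as np

# Sketch of a trend-based test over 1998-2012 (synthetic data, not CMIP
# output): compare the observed trend against the distribution of trends
# in a model ensemble, rather than asking whether the observed anomalies
# sit somewhere inside the ensemble envelope.
rng = np.random.default_rng(0)
years = np.arange(1998, 2013)

# 30 hypothetical runs with a ~0.20 C/decade forced trend plus interannual
# noise, versus "observations" trending at ~0.05 C/decade.
ensemble = 0.020 * (years - 1998) + rng.normal(0.0, 0.08, (30, years.size))
obs = 0.005 * (years - 1998) + rng.normal(0.0, 0.08, years.size)

def trend(series):
    """Least-squares linear trend in C/decade."""
    return np.polyfit(years, series, 1)[0] * 10

model_trends = np.array([trend(run) for run in ensemble])
obs_trend = trend(obs)
frac = (model_trends <= obs_trend).mean()
print(f"observed trend {obs_trend:.2f} C/decade; "
      f"{frac:.0%} of runs trend at or below it")
```

The envelope test asks whether the observations fall anywhere inside a wide spread; the trend test asks whether any appreciable fraction of model runs produces a 15-year trend as low as the one observed. The two questions can give very different answers.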

Using Figure 1.4 and this statement:

Even though the projections from the models were never intended to be predictions over such a short time scale, the observations through 2012 generally fall within the projections made in all past assessments.

to infer that the models have been able to simulate the recent pause is arguably an example of the Texas sharpshooter fallacy.  Nor should this new version of Figure 1.4 lead you to think that “IPCC models are better than you think.”  The problem is not so much with Figure 1.4 as with the statement above that interprets the figure.  Nowhere in the final WG1 Report do we see the honest statement that appeared in the Final Draft of the SPM:

“Models do not generally reproduce the observed reduction in surface warming trend over the last 10–15 years.”

The following are the take-home statements about climate model–observation comparisons:

- Some skeptical sites are trumpeting the new Figure 1.4 as a ‘hide the decline’, a new Climategate, etc.  There may be nothing technically wrong with Figure 1.4, although it will mislead the public (and Dana Nuccitelli) into inferring that climate models are better than we thought, especially with the misleading accompanying text in the Report.

- Of the diagrams, I like Ed Hawkins’ diagram the best: it does a good job of lining up the climate models and observations in a sensible way over 1960–1990, so as to show the growing discrepancy between models and observations over the last decade.

- What is wrong is the failure of the IPCC to note that nearly all climate model simulations fail to reproduce a pause of 15+ years.

Yes, Dana Nuccitelli, climate models are just as bad as we thought – and even worse than most people think, since the inability of most models to reproduce the Earth’s average temperature is not well known.
