by Judith Curry
The Lewis and Crok paper is stimulating much discussion; unfortunately, little of it is technical. Let's devote a thread to technical discussion of the issues they raise.
Ed Hawkins’ blog
At Ed Hawkins’ Climate Lab Book, there is a post Comments on the GWPF climate sensitivity report. This includes a guest post by Piers Forster and comments from Ed Hawkins and Nic Lewis.
Summary of Piers Forster’s comments:
There are two reasons why the Lewis & Crok estimates of future warming may be biased low. Nevertheless, their methods indicate that we can expect a further 2.1°C of warming by 2081-2100 under the business-as-usual RCP 8.5 emissions scenario, much greater than the 0.8°C warming already witnessed.
From comments by Ed Hawkins:
Lewis & Crok are part of the consensus. Their estimates are well within the IPCC range. And, yes, their method is observationally based, but it still uses a model, and as described in the post this approach is no panacea.
Comment from Nic Lewis:
Hi Piers, I’ve just seen your comments on my and Marcel Crok’s report. In your haste you seem to have got various things factually wrong. You claim that the warming projections in our report may be biased low, citing two particular reasons. First:
” the Gregory and Forster (2008) method employed in the Lewis & Crok report to make projections (by scaling TCR) leads to systematic underestimates of future temperature change”
I spent weeks trying to explain to Myles Allen, following my written submission to the parliamentary Energy and Climate Change Committee, that I did not use for my projections the unscientific ‘kappa’ method used in Gregory and Forster (2008).
Unlike you and Jonathan Gregory, I allow for ‘warming-in-the-pipeline’ emerging over the projection period. Myles prefers to use a 2-box model, as also used by the IPCC, rather than my method. His oral evidence to the ECCC included reference to projections he had provided to them that used a 2-box model.
I agree that the more sophisticated 2-box model method is preferable in principle for strong mitigation scenarios, particularly RCP2.6. If you took the trouble to read our full report, you would see that I had also computed forcing projections using a 2-box model. The results were almost identical to those using my simple TCR-based method – in fact slightly lower.
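For readers unfamiliar with the two approaches being compared here, the following is a minimal sketch (all parameter values are invented for illustration, not those used in the report or by the IPCC) of projecting warming by simple TCR scaling versus integrating a 2-box energy balance model, which separates a fast surface layer from a slowly warming deep ocean and so captures some "warming in the pipeline".

```python
import numpy as np

F2X = 3.7  # W/m^2 forcing per CO2 doubling (standard approximate value)

def tcr_scaling(forcing, tcr=1.35):
    """Project warming by scaling the forcing with TCR.
    tcr value is illustrative only."""
    return tcr * forcing / F2X

def two_box(forcing, dt=1.0, lam=1.3, gamma=0.7, c=8.0, c_deep=100.0):
    """Two-box (surface + deep ocean) energy balance model, Euler-stepped.
    lam: climate feedback (W/m^2/K); gamma: ocean heat uptake coefficient;
    c, c_deep: heat capacities (W yr m^-2 K^-1). All values illustrative."""
    t_s, t_d = 0.0, 0.0  # surface and deep-ocean temperature anomalies (K)
    out = []
    for f in forcing:
        dts = (f - lam * t_s - gamma * (t_s - t_d)) / c * dt
        dtd = gamma * (t_s - t_d) / c_deep * dt
        t_s += dts
        t_d += dtd
        out.append(t_s)
    return np.array(out)

# Purely illustrative linear forcing ramp over 2014-2100
years = np.arange(2014, 2101)
forcing = 0.05 * (years - 2014)
print(tcr_scaling(forcing[-1]))   # end-of-century warming, TCR scaling
print(two_box(forcing)[-1])       # end-of-century warming, 2-box model
```

Under a steadily rising forcing the two approaches give similar answers, which is the point Lewis makes above; they diverge mainly for strong-mitigation paths where forcing stabilises and the deep-ocean term continues to draw warming out.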
So your criticisms on this point are baseless.
Secondly, you say:
“in Figure 3, the CMIP5 models have been reanalysed using the same coverage as the surface temperature observations. In this figure, uncertainty ranges for both ECS and TCR are similar across model estimates and the observed estimates. This indicates that using HadCRUT4 to estimate climate sensitivity likely also results in a low bias.”
I have found that substituting data from the NASA/GISS or NOAA/MLOST global mean surface temperature records (which do infill missing data areas insofar as they conclude it is justifiable) from their start dates makes virtually no difference to energy budget ECS and TCR estimates. So your conclusion is wrong.
Perhaps what your figure actually shows is more what we show in our Fig. 8: that is, most CMIP5 models project significantly higher warming than their TCR values imply, even allowing for warming-in-the-pipeline.
Armour et al.
One of the most insightful papers on climate sensitivity that I’ve read in a long time is a paper by Kyle Armour et al. Kyle Armour also gave an invited talk in the same APS session as mine earlier this week.
Time varying climate sensitivity from regional feedbacks
Kyle Armour, Cecilia Bitz, Gerard Roe
The sensitivity of global climate with respect to forcing is generally described in terms of the global climate feedback: the global radiative response per degree of global annual mean surface temperature change. While the global climate feedback is often assumed to be constant, its value, diagnosed from global climate models, shows substantial time variation under transient warming. Here a reformulation of the global climate feedback in terms of its contributions from regional climate feedbacks is proposed, providing a clear physical insight into this behavior. Using (i) a state-of-the-art global climate model and (ii) a low-order energy balance model, it is shown that the global climate feedback is fundamentally linked to the geographic pattern of regional climate feedbacks and the geographic pattern of surface warming at any given time. Time variation of the global climate feedback arises naturally when the pattern of surface warming evolves, actuating feedbacks of different strengths in different regions. This result has substantial implications for the ability to constrain future climate changes from observations of past and present climate states. The regional climate feedbacks formulation also reveals fundamental biases in a widely used method for diagnosing climate sensitivity, feedbacks, and radiative forcing: the regression of the global top-of-atmosphere radiation flux on global surface temperature. Further, it suggests a clear mechanism for the ‘efficacies’ of both ocean heat uptake and radiative forcing.
Published in J. Climate; [link] to the complete manuscript.
Anthony Watts has a post Notes on the reax to Lewis and Crok, along with looking at climate model output versus data. Anthony refers to two new papers that I hadn’t previously seen.
A paper published last December in the SIAM/ASA Journal on Uncertainty Quantification argues that it is more appropriate to compare the distribution of climate model output data (over time and space) to the corresponding distribution of observed data. Distance measures between probability distributions, also called divergence functions, can be used to make this comparison.
The authors evaluate 15 different climate models by comparing simulations of past climate to corresponding reanalysis data. Reanalysis datasets are created by assimilating historical climate observations into a single, fixed model version throughout the entire reanalysis period, which reduces the effect of modeling changes on the climate statistics. Historical weather observations are thereby used to reconstruct atmospheric states on a global grid, allowing direct comparison to climate model output.
View the paper:
It has been argued persuasively that, in order to evaluate climate models, the probability distributions of model output need to be compared to the corresponding empirical distributions of observed data. Distance measures between probability distributions, also called divergence functions, can be used for this purpose. We contend that divergence functions ought to be proper, in the sense that acting on modelers’ true beliefs is an optimal strategy. The score divergences introduced in this paper derive from proper scoring rules and, thus, they are proper with the integrated quadratic distance and the Kullback–Leibler divergence being particularly attractive choices. Other commonly used divergences fail to be proper. In an illustration, we evaluate and rank simulations from 15 climate models for temperature extremes in a comparison to reanalysis data.
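The two proper divergences the abstract singles out can be sketched for one-dimensional samples. The following is a minimal illustration, not the authors' implementation: the integrated quadratic distance compares empirical CDFs, and the Kullback–Leibler divergence compares binned densities. The Gaussian samples stand in for reanalysis data and model output.

```python
import numpy as np

def integrated_quadratic_distance(x, y, n_grid=1000):
    """Integrated squared difference between the two empirical CDFs
    (the integrated quadratic distance), evaluated on a uniform grid."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), n_grid)
    dx = grid[1] - grid[0]
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.sum((Fx - Fy) ** 2) * dx

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discrete histograms,
    with a small eps to guard against empty bins."""
    p = p / p.sum()
    q = q / q.sum()
    return np.sum(p * np.log((p + eps) / (q + eps)))

rng = np.random.default_rng(0)
obs   = rng.normal(0.0, 1.0, 5000)   # stand-in for reanalysis data
model = rng.normal(0.3, 1.2, 5000)   # stand-in for model output

print(integrated_quadratic_distance(obs, model))
bins = np.linspace(-5.0, 6.0, 40)
p, _ = np.histogram(obs, bins=bins)
q, _ = np.histogram(model, bins=bins)
print(kl_divergence(p.astype(float), q.astype(float)))
```

Both quantities are zero when the distributions agree and grow as they separate; the paper's contribution is the argument that only such proper divergences reward a modeler for reporting true beliefs, so rankings built on improper divergences can be gamed.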