
Sensitivity about sensitivity

by Judith Curry

. . . the IPCC’s sensitivity estimate cannot readily be reconciled with forcing estimates and observational data. – James Annan

Since the release of the AR5 SOD, there has been a flurry of blog posts on the topic of climate sensitivity.  The triggers seem to have been Nic Lewis’ analysis (discussed here), plus a press release from the Norwegians (discussed here).  Both find values of sensitivity to be significantly lower than the consensus values.

In response to this, we have been ‘reassured’ by RealClimate and SkepticalScience that nothing has really changed with regard to the consensus on sensitivity.  Gavin Schmidt states:

In the meantime, the ‘meta-uncertainty’ across the methods remains stubbornly high with support for both relatively low numbers around 2ºC and higher ones around 4ºC, so that is likely to remain the consensus range.

The ‘consensus’ range has been 1.5 – 4.5C (centered on 3C) since the 1979 Charney Report.  With all the many different ways of calculating these numbers (empirically, and from simple models and general circulation models), and the different results that have been obtained from these analyses, why haven’t this range and central value budged in over three decades?  Here are some reasons:

1.  The ‘experts’ are convinced.  Zickfeld et al. (2010) conducted face-to-face interviews with 14 leading climate scientists, using formal methods of expert elicitation.  The results were not surprising.  Apparently the results of this expert elicitation have had a substantial influence on the AR5 report.  Before you start criticizing the formal expert elicitation process, it is WAY better (less biased) than a consensus-building process (see my paper no consensus on consensus).  You are of course allowed to criticize all of this in the context of how many, and which, experts were included in this process.

2.  Anchoring devices.  Pielke Jr reminded me of this paper by van der Sluijs et al., Anchoring devices in science for policy: the case of consensus around climate sensitivity.  Excerpt:

We show how the maintained consensus about the quantitative estimate of a central scientific concept in the anthropogenic climate-change field – namely, climate sensitivity – operates as an ‘anchoring device’ in ‘science for policy’. In international assessments of the climate issue, the consensus-estimate of 1.5°C to 4.5°C for climate sensitivity has remained unchanged for two decades. Nevertheless, during these years climate scientific knowledge and analysis have changed dramatically. We propose that the remarkable quantitative stability of the climate sensitivity range has helped to hold together a variety of different social worlds relating to climate change, by continually translating and adapting the meaning of the ‘stable’ range. But this emergent stability also reflects an implicit social contract among the various scientists and policy specialists involved, which allows ‘the same’ concept to accommodate tacitly different local meanings.

3.  The ‘experts’ have taken into account the latest knowledge on external forcing and uncertainties, model uncertainties, methodological uncertainties, etc. in preparing their estimates.  Oops, looks like they forgot to do this (see James Annan’s comments below).

Andy Revkin

Revkin has two recent posts on this topic.

In these posts, Revkin has elicited some remarkable statements from IPCC authors regarding climate sensitivity, as well as some startling comments from climate bloggers.

Reto Knutti

Revkin’s post on the Norwegian press release elicited the following statement (excerpts) from Reto Knutti:

If you look at Fig. 3a in our review (red lines at the top) you see that many previous estimates based on the observed warming/ocean heat uptake had a tendency to peak at values below 3°C (that review is from 2008). The Norwegian study is just another one of these studies looking at the global energy budget. The first ones go back more than a decade, so the idea is hardly new. The idea is always the same: if you assume a distribution for the observed warming, the ocean heat uptake, and the radiative forcing, then you can derive a distribution for climate sensitivity.

What is obvious is that including the data of the past few years pushes the estimates of climate sensitivity downward, because there was little warming over the past decade despite a larger greenhouse gas forcing. Also in some datasets the ocean warming in the top 700 meters is rather small, with very small uncertainties (Levitus GRL 2012), pushing the sensitivity down further. However, in my view one should be careful in over-interpreting these results for several reasons:

a) the uncertainties in the assumed radiative forcings are still very large. Recently, Solomon et al. Science (2010, 2011) raised questions about the stratospheric water vapor and aerosol, and just days ago there was another paper arguing for a larger effect of black carbon (http://www.agu.org/news/press/pr_archives/2013/2013-01.shtml, a massive 280 pages…).

b) Results are sensitive to the data used, as shown by Libardoni and Forest DOI: 10.1029/2011GL049431 and others, and particularly sensitive to how the last decade of data is treated. Very different methods (detection attribution optimal fingerprint) have also shown that the last decade makes a difference (Gillett et al. 2011, doi:10.1029/2011GL050226).

c) The uncertainties in the ocean heat uptake may be underestimated by Levitus, and there are additional uncertainties regarding the role of deep ocean heat uptake (Meehl et al. 2011 Nature Climate Change).

Even though we have many of these studies (and I am responsible for a couple of them), I’m getting more and more nervous about them, because they are so sensitive to the climate model, the prior distributions, the forcing, the ocean data, the error model, etc. The reason for this, to a large extent, is that the data constraint is weak, so the outcome (posterior) is dominated by what you put in (prior).
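
To make the energy-budget approach Knutti describes a bit more concrete, here is a minimal sketch of the calculation, with purely illustrative numbers that are not taken from any of the studies mentioned above: assume distributions for the observed warming, the ocean heat uptake and the net radiative forcing, then propagate them through the standard relation S ≈ F_2x × ΔT / (ΔF − ΔQ).

```python
# A minimal, illustrative sketch of the energy-budget approach: assume
# distributions for observed warming (dT), ocean/planetary heat uptake (dQ)
# and net radiative forcing (dF), then derive a distribution for equilibrium
# climate sensitivity via  S ~= F_2x * dT / (dF - dQ).
# All numbers below are placeholders, not values from any published study.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

F_2x = 3.7                                   # W/m^2 per CO2 doubling (standard value)
dT = rng.normal(0.8, 0.1, n)                 # warming since pre-industrial (K), illustrative
dF = rng.normal(1.9, 0.8, n)                 # net anthropogenic forcing (W/m^2), wide uncertainty
dQ = rng.normal(0.5, 0.2, n)                 # heat uptake (W/m^2), illustrative

denom = dF - dQ
S = F_2x * dT[denom > 0] / denom[denom > 0]  # drop unphysical (non-positive) denominators

print("median S: %.1f K" % np.median(S))
print("5-95%% range: %.1f - %.1f K" % tuple(np.percentile(S, [5, 95])))
```

The long upper tail in the resulting distribution comes from dividing by an uncertain, possibly small (ΔF − ΔQ), which is why the forcing and ocean heat uptake uncertainties Knutti lists under (a) and (c) matter so much.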

 

James Annan

I found the statement by Knutti to be interesting and I think it made some good points.  This post by James Annan made my jaw drop.  Excerpts:

As I said to Andy Revkin (and he published on his blog), the additional decade of temperature data from 2000 onwards (even the AR4 estimates typically ignored the post-2000 years) can only work to reduce estimates of sensitivity, and that’s before we even consider the reduction in estimates of negative aerosol forcing, and additional forcing from black carbon (the latter being very new, is not included in any calculations AIUI). It’s increasingly difficult to reconcile a high climate sensitivity (say over 4C) with the observational evidence for the planetary energy balance over the industrial era.

Note for the avoidance of any doubt I am not quoting directly from the unquotable IPCC draft, but only repeating my own comment on it. However, those who have read the second draft of Chapter 12 will realise why I previously said I thought the report was improved :-) Of course there is no guarantee as to what will remain in the final report, which for all the talk of extensive reviews, is not even seen by the proletariat, let alone opened to their comments, prior to its final publication.

The paper I refer to as a “small private opinion poll” is of course the Zickfeld et al PNAS paper. The list of pollees in the Zickfeld paper are largely the self-same people responsible for the largely bogus analyses that I’ve criticised over recent years, and which even if they were valid then, are certainly outdated now. Interestingly, one of them stated quite openly in a meeting I attended a few years ago that he deliberately lied in these sort of elicitation exercises (i.e. exaggerating the probability of high sensitivity) in order to help motivate political action. Of course, there may be others who lie in the other direction, which is why it seems bizarre that the IPCC appeared to rely so heavily on this paper to justify their choice, rather than relying on published quantitative analyses of observational data.

Since the IPCC can no longer defend their old analyses in any meaningful manner, it seems they have to resort to an unsupported “this is what we think, because we asked our pals”. It’s essentially the Lindzen strategy in reverse: having firmly wedded themselves to their politically convenient long tail of high values, their response to new evidence is little more than sticking their fingers in their ears and singing “la la la I can’t hear you”.

Of course, this still leaves open the question of what the new evidence actually does mean for climate sensitivity. Irrespective of what one thinks about aerosol forcing, it would be hard to argue that the rate of net forcing increase and/or over-all radiative imbalance has actually dropped markedly in recent years, so any change in net heat uptake can only be reasonably attributed to a bit of natural variability or observational uncertainty.

But the point stands, that the IPCC’s sensitivity estimate cannot readily be reconciled with forcing estimates and observational data. All the recent literature that approaches the question from this angle comes up with similar answers. By failing to meet this problem head-on, the IPCC authors now find themselves in a bit of a pickle. I expect them to brazen it out, on the grounds that they are the experts and are quite capable of squaring the circle before breakfast if need be. But in doing so, they risk being seen as not so much summarising scientific progress, but obstructing it.

There’s a nice example of this in Reto Knutti’s comment featured by Revkin. While he starts out by agreeing that estimates based on the energy balance have to be coming down, he then goes on to argue that now (after a decade or more of generating and using them) he doesn’t trust the calculations because these Bayesian estimates are all too sensitive to the prior choices. That seems to me to be precisely contradicted by all the available literature. It looks rather like the IPCC authors have invented this meme as some sort of talismanic mantra to defend themselves against having to actually deal with the recent literature.
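
For readers wondering what ‘too sensitive to the prior choices’ means in practice, here is a toy Bayesian version of the same energy-budget calculation, again with invented numbers that are not from any published study. Because the likelihood constrains sensitivity only weakly at the high end, the upper percentile of the posterior depends noticeably on where a uniform-in-sensitivity prior is cut off.

```python
# Toy illustration of prior sensitivity in an energy-budget estimate of S.
# The likelihood and all numbers are invented for illustration only.
import numpy as np

def posterior(S_grid, prior):
    # Gaussian likelihood for the observed warming given S, with the
    # forcing-minus-uptake uncertainty folded into the error on the
    # predicted warming (a crude but common simplification).
    F_2x, dT_obs, sigma_T = 3.7, 0.8, 0.1
    dF_minus_dQ, sigma_F = 1.4, 0.8
    dT_pred = S_grid * dF_minus_dQ / F_2x
    sigma = np.sqrt(sigma_T**2 + (S_grid * sigma_F / F_2x) ** 2)
    like = np.exp(-0.5 * ((dT_obs - dT_pred) / sigma) ** 2) / sigma
    post = like * prior
    return post / np.trapz(post, S_grid)          # normalise

for upper in (10.0, 20.0):
    S = np.linspace(0.01, upper, 2000)
    post = posterior(S, np.ones_like(S))          # uniform-in-S prior on [0, upper]
    cdf = np.cumsum(post) * (S[1] - S[0])
    idx = min(np.searchsorted(cdf, 0.95), S.size - 1)
    print("uniform prior on [0, %g]: 95th percentile of S = %.1f K" % (upper, S[idx]))
```

Whether this kind of dependence on the prior is strong enough to undermine the published estimates is, of course, exactly the point at issue between Knutti and Annan.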

For some context on why my jaw dropped, here are some posts by James Annan where he has been harshly critical of my criticisms of the consensus conclusions and the consensus-building process.  Looks like JA has been bitten by the uncertainty monster.  The antidote is to reread my papers on the Uncertainty Monster and Reasoning About Climate Uncertainty.

Stoat

Comment from William Connolley in Revkin’s second post:

James also identifies a possible problem in the way IPCC subgroups can come to “own” a particular area, and find outside opinions — even those clearly from within Science rather than the wackosphere — unwelcome. I don’t know how serious that is: again, I’d be inclined to trust James Annan on this, but that’s all I’d be doing. Perhaps an investigative journalist might take an interest.

JC comment:  This one made me laugh. Here is how ‘outside opinions’ have been treated by Stoat (search for ‘shark jumping’). 

JC summary:  Kudos to Revkin for stimulating this blogospheric discussion.  Whether all this heralds a ‘game change’ in the ‘consensus’ remains to be seen.

The broader issue with climate sensitivity is this.  The simplistic way in which it is defined and calculated makes the whole concept an artifact of the oversimplification.  On short time scales (decades to centuries), there is no satisfactory way of sorting out forced climate variability from natural internal climate variability unless you have a really good climate model that can adequately handle the natural internal variability on the range of time scales from years to millennia.  Empirical methods have yet to do this in any sensible way, IMO.

Further, it is misleading to think of sensitivity in terms of a pdf (see my previous post on this here).  Talking about the probability of a climate sensitivity fat tail is meaningless in my opinion.  What is meaningful is the possibility of a scenario of abrupt climate change.  While abrupt climate change is regarded as a possibility based upon paleoclimatic evidence of previous events, climate models are incapable of producing such emergent phenomena.  The concept of abrupt climate change does not figure into any estimate of equilibrium sensitivity that I am aware of.

Until we better understand natural internal climate variability, we simply don’t know how to infer sensitivity to greenhouse gas forcing.  The issue of how climate will change over the 21st century is highly uncertain, and we basically don’t know whether different scenarios of greenhouse gas emissions will be the primary driver on timescales of a century or less.  Oversimplification and overconfidence on this topic have acted to the detriment of climate science.  As scientists, we need to embrace the uncertainty, the complexity and the messy wickedness of the problem.  We mislead policy makers with our oversimplifications and overconfidence.
