by Judith Curry
There are several recent estimates of climate sensitivity that are worth taking a look at.
For reference and context, see these previous posts:
- Probabilistic (?) estimates of climate sensitivity
- Probabilistic estimates of climate sensitivity
- WHT on Schmittner on climate sensitivity
Hansen and Sato
Jim Hansen has announced a new draft manuscript that is open for comments:
Climate sensitivity estimated from Earth’s climate history
James E Hansen and Makiko Sato
Abstract. Earth’s climate history potentially can yield accurate assessment of climate sensitivity. Imprecise knowledge of glacial-to-interglacial global temperature change is the biggest obstacle to accurate assessment of the fast-feedback climate sensitivity, which is the sensitivity that most immediately affects humanity. Our best estimate for the fast-feedback climate sensitivity from Holocene initial conditions is 3 ± 0.5°C for 4 W/m2 CO2 forcing (68% probability). Slow feedbacks, including ice sheet disintegration and release of greenhouse gases (GHGs) by the climate system, generally amplify total Earth system climate sensitivity. Slow feedbacks make Earth system climate sensitivity highly dependent on the initial climate state and on the magnitude and sign of the climate forcing, because of thresholds (tipping points) in the slow feedbacks. It is difficult to assess the speed at which slow feedbacks will become important in the future, because of the absence in paleoclimate history of any positive (warming) forcing rivaling the speed at which the human-caused forcing is growing.
[link] to manuscript
From the summary:
There is a widespread perception that climate sensitivity should be represented by a probability distribution function that is extremely broad, a function that includes rather small climate sensitivities and has a long tail extending to very large sensitivities. That perception, we argue, is wrong. God (Nature) plays dice, but not for such large amounts. We note here several key reasons for perceptions about our knowledge of climate sensitivity.
First, there is an emphasis on climate models for studying climate sensitivity with an implicit belief that as long as climate models are deficient in their ability to simulate nature, climate sensitivity remains very uncertain. Model sensitivity is uncertain, to be sure, as illustrated by recent discussion of the difficulty of modeling clouds (Gillis, 2012). Aerosol feedbacks and the effect of these on clouds make a strict modeling approach a daunting task. However, climate science has a number of tools or approaches for assessing climate sensitivity, and the accuracy of the result will be set by the sharpest tool in the toolbox, a description that does not seem to fit pure climate modeling.
Second, there is, understandably, an emphasis on analysis of the period disturbed by human climate forcings, especially the past century, and it is found that a broad range of climate sensitivities are consistent with observed climate change, because the net climate forcing is very uncertain. Focus on the era of human-made climate change is appropriate, but, until the large uncertainty in aerosol climate forcing is addressed with adequate observations, ongoing climate change will not provide a sharp definition of climate sensitivity.
Third, there is a perception that paleoclimate changes are exceedingly complex, hard to understand, and indicative of a broad spectrum of climate sensitivities. To be sure, as we have emphasized, the huge climate variations in Earth’s history emphasize the dependence of climate sensitivity on the initial climate state as well as the dependence on the magnitude and sign of the climate forcing. However, the paleoclimate record, because of its richness, has the potential to provide valuable, and accurate, information on climate sensitivity.
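As a rough illustration of how a paleo estimate of the fast-feedback sensitivity is constructed, the glacial-to-interglacial temperature change is divided by the corresponding forcing (with slow feedbacks such as ice sheets treated as forcings) and then scaled to a standard 4 W/m2 forcing. The sketch below uses illustrative placeholder numbers, not the values or method of the Hansen and Sato manuscript:

```python
# Minimal sketch of a paleo estimate of fast-feedback climate sensitivity.
# The numbers are illustrative placeholders, NOT the values used by
# Hansen and Sato; only the structure of the calculation is the point.

delta_T_glacial = 4.5    # K, assumed glacial-to-interglacial warming
delta_F_glacial = 6.0    # W/m^2, assumed forcing (GHGs + ice-sheet albedo
                         # treated as forcings for the fast-feedback case)
F_reference = 4.0        # W/m^2, reference forcing used in the abstract

S = delta_T_glacial / delta_F_glacial      # K per (W/m^2)
print(f"fast-feedback sensitivity ~ {S * F_reference:.1f} C per "
      f"{F_reference:.0f} W/m^2 forcing")
```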
Gillett et al.
Improved constraints on 21st-century warming derived using 160 years of temperature observations
N. P. Gillett, V. K. Arora, G. M. Flato, J. F. Scinocca, and K. von Salzen
Projections of 21st century warming may be derived by using regression-based methods to scale a model’s projected warming up or down according to whether it under- or over-predicts the response to anthropogenic forcings over the historical period. Here we apply such a method using near surface air temperature observations over the 1851–2010 period, historical simulations of the response to changing greenhouse gases, aerosols and natural forcings, and simulations of future climate change under the Representative Concentration Pathways from the second generation Canadian Earth System Model (CanESM2). Consistent with previous studies, we detect the influence of greenhouse gases, aerosols and natural forcings in the observed temperature record. Our estimate of greenhouse-gas-attributable warming is lower than that derived using only 1900–1999 observations. Our analysis also leads to a relatively low and tightly-constrained estimate of Transient Climate Response of 1.3–1.8°C, and relatively low projections of 21st-century warming under the Representative Concentration Pathways. Repeating our attribution analysis with a second model (CNRM-CM5) gives consistent results, albeit with somewhat larger uncertainties.
[link] to complete paper
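Schematically, the regression-based scaling fits the observations against the model's responses to individual forcings and then applies the resulting scaling factors to the model's projection. The sketch below uses ordinary least squares and synthetic global-mean series standing in for the single-forcing simulations and the observations; the actual analysis uses a more sophisticated optimal-fingerprint regression rather than the plain least squares shown here.

```python
import numpy as np

# Sketch of regression-based scaling of a model projection. Synthetic
# global-mean series stand in for the model's single-forcing historical
# responses and for the observations; amplitudes are illustrative.

rng = np.random.default_rng(0)
n_years = 160
t = np.arange(n_years)                     # years since start of record

x_ghg = 1.2 * (t / n_years)**2             # accelerating GHG response, K
x_aer = -0.5 * (t / n_years)               # growing aerosol cooling, K
x_nat = 0.15 * np.sin(2 * np.pi * t / 11)  # solar-cycle-like variation, K

# "Observations": the model over-predicts the GHG response here (true
# scaling factor 0.8), plus internal-variability noise.
obs = 0.8 * x_ghg + 1.0 * x_aer + 1.0 * x_nat + rng.normal(0.0, 0.05, n_years)

# Ordinary least squares for the scaling factors (real detection and
# attribution studies use optimal fingerprinting, not plain OLS).
X = np.column_stack([x_ghg, x_aer, x_nat])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print("scaling factors (GHG, aerosol, natural):", np.round(beta, 2))

# Scale the model's raw 21st-century GHG-attributable warming accordingly.
raw_projection = 3.0    # K, hypothetical unscaled projection
print("scaled projection:", round(beta[0] * raw_projection, 2), "K")
```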
Isaac Held
Isaac Held has a new post at his blog entitled Estimating TCR from recent warming. The main point:
Here’s an argument that suggests to me that the transient climate response (TCR) is unlikely to be larger than about 1.8C. This is roughly the median of the TCR’s from the CMIP3 model archive, implying that this ensemble of models is, on average, overestimating TCR.
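Stripped to its essentials, this style of estimate is a ratio: attribute the observed warming to the net forcing realized so far, and scale by the forcing for doubled CO2. A minimal sketch with illustrative inputs (not Held's actual numbers):

```python
# Sketch of an observationally constrained TCR estimate of the kind
# discussed in Held's post. The input numbers are illustrative only.

F_2xCO2 = 3.7        # W/m^2, canonical forcing for doubled CO2
delta_T_obs = 0.8    # K, assumed forced warming over the historical period
delta_F = 1.8        # W/m^2, assumed net forcing over the same period

TCR = F_2xCO2 * delta_T_obs / delta_F
print(f"TCR ~ {TCR:.1f} C")   # about 1.6 C with these inputs
```

Most of the action is in the assumed net forcing, which is where the aerosol uncertainty enters.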
Paul K at Lucia’s
At Lucia’s Blackboard, Paul K has a lengthy and interesting post entitled The arbitrariness of the IPCC’s feedback calculations. The gist of Paul K’s article is about problems associated with the linearized analysis of the nonlinear flux response. The comments on this thread are also well worth reading.
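To see the kind of problem at issue, note that the standard linearization writes the TOA imbalance as N = F - λΔT with a constant feedback parameter λ; if the actual flux response is nonlinear in ΔT, the λ recovered from a linear fit depends on the temperature range over which it is evaluated. A toy illustration (the quadratic term and coefficients are purely illustrative, not Paul K's formulation):

```python
import numpy as np

# Toy illustration: if the TOA flux response is nonlinear in the warming
# delta_T, the feedback parameter recovered by a linear fit depends on the
# temperature range used. Coefficients are illustrative only.

lam = 1.2    # W/m^2/K, linear part of the flux response
a = 0.1      # W/m^2/K^2, illustrative quadratic (nonlinear) term
F = 3.7      # W/m^2, forcing

def toa_imbalance(dT):
    return F - lam * dT - a * dT**2

for dT_max in (1.0, 2.0, 3.0):
    dT = np.linspace(0.0, dT_max, 50)
    slope = np.polyfit(dT, toa_imbalance(dT), 1)[0]   # dN/d(delta_T)
    print(f"fit over 0-{dT_max:.0f} K: apparent lambda = {-slope:.2f} W/m^2/K")
```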
JC comments
There has been a general trend in recent analyses of climate sensitivity to eliminate the long tail on the high end. There has also been an increased emphasis on observationally based estimates. However, I fail to see how observations can be used in any sensible way to infer equilibrium climate sensitivity; transient climate sensitivity seems to be a better match for observational determinations.
I have a problem with all of these analyses, owing to two issues: linearization and natural internal variability. Note: Zaliapin and Ghil address some of the issues associated with linearization.
The determination of sensitivity S
S = ΔTeq/F
from an equation relating a change in equilibrium surface temperature to a change in top-of-atmosphere forcing rests on a variety of explicit and implicit assumptions that are questionable at best (a worked example follows the list below). Some issues that I have with this simple formulation are:
- The earth’s climate is never in equilibrium in the sense that infrared and shortwave fluxes at the top of the atmosphere are exactly in balance for any significant period of time. While averaging over some time period may be useful in the context of some analyses, the frequency dependence of sensitivity and multiple modes of natural internal variability make it very difficult to define a suitable period for such averaging.
- Individual feedbacks (e.g. water vapor, ice albedo, cloud) are not additive, implying nonlinearity.
- Sensitivity is frequency dependent: there are fast-response feedbacks (e.g. water vapor and clouds) and slow-response feedbacks (e.g. ice sheets), implying nonlinearity.
- Sensitivity is dependent on the climate regime (e.g. temperature and other elements of the climate), implying nonlinearity.
- Sensitivity varies regionally (and hemispherically), implying nonlinearity.
- Sensitivity is asymmetric for cooling and heating perturbations, implying nonlinearity.
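For concreteness, here is the worked example referred to above. The formula is just a ratio: an assumed equilibrium warming of 3 K for the canonical 3.7 W/m2 doubled-CO2 forcing (illustrative numbers, not a result from any of the papers above) gives S of about 0.8 K per W/m2, and every issue in the list is absorbed into that single number.

```python
# Worked example of S = delta_T_eq / F with illustrative numbers: an
# assumed 3 K equilibrium warming for the canonical 2xCO2 forcing.

F_2xCO2 = 3.7         # W/m^2
delta_T_eq = 3.0      # K, assumed equilibrium warming

S = delta_T_eq / F_2xCO2
print(f"S = {S:.2f} K per (W/m^2)")   # about 0.81 K per W/m^2
```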
The simple model
ΔTeq = S * F
is designed to evaluate the response of surface temperature to forcing at the top of the atmosphere. It assumes that heat is distributed within the atmosphere-ocean-solid earth system in such a way that a TOA forcing translates directly into a change in surface temperature; the problems with this are addressed only crudely, by some approximation for ocean heat storage.
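A minimal sketch of that crude treatment, assuming a single effective ocean mixed-layer heat capacity (all parameter values illustrative): the surface temperature relaxes toward S*F with a lag set by the heat capacity, rather than jumping there instantly.

```python
# Minimal sketch of the crude treatment: a zero-dimensional energy balance
# with one effective ocean heat capacity, C dT/dt = F - T/S. The surface
# temperature relaxes toward S*F instead of jumping there instantly.
# Parameter values are illustrative only.

S = 0.8      # K per (W/m^2), assumed equilibrium sensitivity
C = 13.0     # W yr m^-2 K^-1, roughly a 100 m ocean mixed layer
F = 3.7      # W/m^2, step forcing applied at t = 0
dt = 0.05    # yr, Euler time step

for horizon in (10, 50, 200):                 # years after the step
    T = 0.0
    for _ in range(round(horizon / dt)):
        T += dt * (F - T / S) / C             # ocean heat uptake delays warming
    print(f"after {horizon:3d} yr: T = {T:.2f} K  (S*F = {S * F:.2f} K)")
```

With these values the e-folding time is C*S, about a decade, which is why the 10-year value lags well behind the equilibrium response.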
However, we know that large reorganizations of heat within the atmosphere and ocean occur that show up as surface temperature changes but are not driven by TOA radiative forcing. Further, such changes in surface temperature may not change the TOA fluxes in a way that is consistent with the relation ΔTeq = S * F, depending largely on the cloud response and on regional responses. I think this is generally understood in the context of ENSO. However, for a 60-70 year natural internal oscillation (e.g. AMO, PDO), or for longer term natural internal variability, using this equation to determine sensitivity from observations will give extremely misleading results.
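A toy experiment makes the point (entirely synthetic, not an analysis of any actual record): generate a temperature series containing a forced trend plus a roughly 65-year unforced oscillation, then regress temperature on forcing over the last 60 years as if all of the change were forced.

```python
import numpy as np

# Toy experiment: regress temperature on forcing when part of the change
# is an unforced multidecadal oscillation. All parameters are illustrative;
# this is not an analysis of any actual record.

rng = np.random.default_rng(1)
years = np.arange(1850, 2011)

S_true = 0.5                                  # K per (W/m^2), true sensitivity
F = 0.02 * (years - years[0])                 # slowly ramping forcing, W/m^2
T_forced = S_true * F                         # forced response (no lag, for simplicity)
T_internal = 0.15 * np.sin(2 * np.pi * (years - 1850) / 65.0)  # ~65-yr oscillation
T = T_forced + T_internal + rng.normal(0.0, 0.05, len(years))

# Regress T on F over the last 60 years as if all of the change were forced.
recent = years >= 1950
S_est = np.polyfit(F[recent], T[recent], 1)[0]
print(f"true S = {S_true:.2f}, inferred S = {S_est:.2f} K per (W/m^2)")
```

With this particular phasing the inferred sensitivity comes out well above the true value; reversing the phase of the oscillation biases the estimate low instead.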
In discussions of the hockey stick handle, in response to the criticism that the handle shows too little variability, it has been argued that greater variability in the handle would imply a much higher sensitivity. This is NOT true if the variability is unforced and associated with natural internal variability.
The big issue is how heat is distributed within the atmosphere and ocean and how this distribution varies temporally and spatially. While climate modelers acknowledge natural internal variability on time scales from ENSO up to about a decade, they regard it as noise on top of a large secular trend. They neglect the possibility that the secular trend cannot easily be disentangled from longer term natural internal variability, which the models reproduce at much lower amplitude than is observed.
So what do sensitivity values derived from the equation S = ΔTeq/F actually mean? Not much, as far as I can tell.
Moderation note: this is a technical thread, comments will be moderated for relevance.
