by Judith Curry

An important new paper on this topic has been published in *J. Climate* that raises the bar in terms of uncertainty analysis.

**Probabilistic Estimates of Transient Climate Sensitivity Subject to Uncertainty in Forcing and Natural Variability**

Lauren E. Padilla, Geoffrey K. Vallis, Clarence W. Rowley

**Abstract.** In this paper we address the impact of uncertainty on estimates of transient climate sensitivity (TCS) of the globally averaged surface temperature, including both uncertainty in past forcing and internal variability in the climate record. We provide a range of probabilistic estimates of the TCS that combine these two sources of uncertainty for various underlying assumptions about the nature of the uncertainty. We also provide estimates of how quickly the uncertainty in the TCS may be expected to diminish in the future as additional observations become available. We make these estimates using a nonlinear Kalman filter coupled to a stochastic, global energy balance model, using the filter and observations to constrain the model parameters. We verify that model and filter are able to emulate the evolution of a comprehensive, state-of-the-art atmosphere-ocean general circulation model and to accurately predict the TCS of the model, and then apply the methodology to observed temperature and forcing records of the 20th century.

*For uncertainty assumptions best supported by global surface temperature data up to the present time, we find a most-likely present-day estimate of the transient climate sensitivity to be 1.6 K with 90% confidence the response will fall between 1.3–2.6 K, and we estimate that this interval may be 45% smaller by the year 2030. We calculate that emissions levels equivalent to forcing of less than 475 ppmv CO2 concentration are needed to ensure that the transient temperature response will not exceed 2 K with 95% confidence. This is an assessment for the short-to-medium term and not a recommendation for long-term stabilization forcing; the equilibrium temperature response to this level of CO2 may be much greater. The flat temperature trend of the last decade has a detectable but small influence on TCS. We describe how the results vary if different uncertainty assumptions are made, and we show they are robust to variations in the initial prior probability assumptions.*

Padilla, L. E., G. K. Vallis, and C. W. Rowley, 2011: Probabilistic estimates of transient climate sensitivity subject to uncertainty in forcing and natural variability. *J. Climate*, doi:10.1175/2011JCLI3989.1.

Full manuscript is online [link]

Some background information from the Introduction:

*The steady-state response of the global-mean, near-surface temperature to an increase in greenhouse gas concentrations (e.g., a doubling of CO2 levels) is given, definitionally, by the equilibrium climate sensitivity (ECS), and this is evidently an unambiguous and convenient measure of the sensitivity of the climate system to external forcing. However, given the long timescales involved in bringing the ocean to equilibrium the ECS may only be realized on a timescale of many centuries or more and so its relevance to policy makers, and indeed to present society, has been debated. Of more relevance to the short and medium term — that is, timescales of a few years to about a century — is the transient climate response (TCR), which is the global and annual mean surface temperature response after about 70 years given a 1% per year increase in CO2. (Sometimes an average may be taken from 60 to 80 years or similar to ameliorate natural variability.) Although the detailed response of the atmosphere to a doubling in CO2 will likely depend on the rate at which CO2 is added to the atmosphere, recent work with comprehensive models suggests that surface temperatures respond quite quickly to a change in radiative forcing, reaching a quasi-equilibrium on the timescale of a few years (in part determined by the mixed-layer depth) prior to a much slower evolution to the true equilibrium (e.g., Held et al. 2010). In the quasi-equilibrium state, the rate of change of surface temperature is a small fraction of its initial increase, and the response following a doubling of CO2 may be denoted the transient climate sensitivity (TCS). The TCS may be expected to be very similar to the TCR, but its definition does not depend so strictly on there being a particular rate of increase of greenhouse gases.
As long as the CO2 doubles over a time period short enough for deep ocean temperature to remain far from equilibrium (less than 100 years, for example), the response to that doubling will likely be nearly independent of the emissions path. The ECS, in contrast, will take centuries to be fully realized. Given the timescale separation between the transient and equilibrium responses, the TCS is a useful parameter characterizing the climate system and it is this quantity that is the focus of this paper.*

*In addition to its relevance, the TCS may be easier to determine from observations than the ECS, in part because there are fewer free parameters to constrain. When estimating the TCS, we sum the atmospheric feedback strength and the rate of ocean heat uptake [also an uncertain quantity (Hegerl et al. 2007; Forest et al. 2002)], rather than constraining each factor separately. The overall response uncertainty, however, may still be dominated more by uncertainty in atmospheric feedbacks than the uptake of heat by the ocean (Knutti and Tomassini 2008; Baker and Roe 2009).*

*[T]he way that we shall proceed is to construct a simple but physically based model and then to try to constrain the parameters that determine the model’s transient climate sensitivity by a direct comparison with observations. Specifically, we will constrain a simple energy balance model by observations of the 20th century surface temperature record, using a particular nonlinear form of the Kalman filter as a way of estimating parameters. This approach allows us to explicitly examine the way in which probability distributions depend on the underlying assumptions and length of the observed record. *
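The approach quoted above, constraining a simple energy balance model with a nonlinear Kalman filter, can be sketched with a toy "twin experiment": generate a synthetic temperature record from a one-box model with a known feedback parameter, then recover that parameter with an ensemble Kalman filter. This is not the authors' code (they use a more sophisticated filter and model), and every number below is an illustrative assumption.

```python
# Toy twin experiment: estimate the feedback parameter lambda in a
# stochastic energy-balance model,  C dT/dt = F(t) - lambda*T + noise,
# with a perturbed-observation ensemble Kalman filter. The model, filter
# variant, and all numbers are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
C, dt, nyears = 8.0, 1.0, 100        # heat capacity (W yr m^-2 K^-1), step, record length
lam_true = 2.3                       # "true" feedback parameter (W m^-2 K^-1)
F = np.linspace(0.0, 3.7, nyears)    # idealized forcing ramp up to 2xCO2 (W m^-2)
sig_proc, sig_obs = 0.05, 0.1        # internal variability and measurement noise (K)

# Generate a synthetic "observed" temperature record from the model itself.
T, obs = 0.0, np.empty(nyears)
for k in range(nyears):
    T += dt / C * (F[k] - lam_true * T) + rng.normal(0, sig_proc)
    obs[k] = T + rng.normal(0, sig_obs)

# Ensemble Kalman filter on the joint state (T, lambda).
N, R = 500, sig_obs ** 2
ens_T = np.zeros(N)
ens_lam = rng.uniform(0.5, 5.0, N)   # broad prior on lambda
for k in range(nyears):
    # Forecast: each member integrates the model with its own lambda.
    ens_T += dt / C * (F[k] - ens_lam * ens_T) + rng.normal(0, sig_proc, N)
    # Analysis: update lambda via its sample covariance with T.
    var_T = np.var(ens_T)
    cov_lT = np.cov(ens_lam, ens_T)[0, 1]
    innov = (obs[k] + rng.normal(0, sig_obs, N)) - ens_T
    ens_lam += cov_lT / (var_T + R) * innov
    ens_T += var_T / (var_T + R) * innov

lam_est = ens_lam.mean()
print(f"posterior mean lambda {lam_est:.2f} (true {lam_true})")
```

The filter narrows the broad prior on lambda only as the forcing ramp makes the temperature record informative, which is the same mechanism by which the paper's uncertainty intervals shrink as more observations accumulate.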

Their methodology is complex and not summarized here in detail, but the key issue IMO is their analysis of the uncertainty of their uncertainty estimate:

*For comparison with the results described above, which were obtained with assumptions about uncertainty detailed in section 5a that we consider most plausible, we also consider three limiting cases for past uncertainty: forcing uncertainty 50% larger, forcing uncertainty 50% smaller – both with our plausible estimate of unforced variability, and plausible forcing uncertainty with larger natural variability in the temperature record. These uncertainty scenarios are summarized in table 1 along with the corresponding 90% confidence intervals after assimilation of surface temperature data up to 2008 and 2030. The confidence intervals are also shown in the inset plot of figure 6. The combined range of these intervals is an indication of the effects of uncertainty in our uncertainty estimates. As expected, the larger the forcing uncertainty and natural variability, the broader becomes the spread in the estimated [sensitivity].*

From the Conclusions:

*Although our estimates are certainly sensitive to these uncertainties and to natural variability, they may be sufficiently narrow as to still be useful. For uncertainties ranging from very large forcing uncertainty to very small forcing uncertainty, our confidence intervals for TCS range from 1.2–2.6 K to 1.4–2.6 K. With a much larger portion of the observed temperature change attributed to natural variability, our TCS interval increases to 1.1–5.5 K. Our probabilistic estimate of the range of TCS that we believe to be best justified by data, namely 1.3–2.6 K with a most probable estimate of 1.6 K, is broadly consistent with the TCR range of IPCC AR4 climate models whose median and mean are 1.6 K and 1.8 K, with 90% confidence interval of 1.2–2.4 K (Randall et al. 2007; Meehl et al. 2007). Figure 15 summarizes our range of probabilistic estimates given data from 1900 to 2008 and 2030. The collection of probability densities and corresponding confidence intervals indicate both the state of TCS uncertainty today and potential for improved understanding 20 years in the future.*

JC comments: There are a number of things that I like about this paper:

- observationally based analysis of climate sensitivity
- sophisticated uncertainty analysis
- focus on transient response (rather than equilibrium response), which is more appropriate for an observationally based analysis
- explicitly includes a term for natural (unforced) internal variability
- affiliations of two of the authors (mechanical and aerospace engineering); brings new blood and new ideas to the topic!

**Moderation note:** This is a technical thread and comments will be moderated for relevance.

Thank you, Professor Curry, for this link and for working to increase awareness of uncertainty in climatology.

With kind regards

Oliver K. Manuel

■ observationally based analysis of climate sensitivity:

“…we may not be dependent on uncertain models to ascertain climate sensitivity. Observations can potentially directly and indirectly be used to evaluate climate sensitivity to forcing of the sort produced by increasing CO2 even without improved GCMs…

“…the common assertion that even small changes in mean temperature can lead to major changes in climate distribution is ill-founded and, likely, wrong.”

~Colloquium Paper by Richard S. Lindzen, Can increasing carbon dioxide cause climate change? Proc. Natl. Acad. Sci. USA 94 (1997)

That looks like an excellent paper. It will take a long time for me to read it. My favorite part so far was from the references:

Evensen, G., 2007: Data Assimilation: The Ensemble Kalman Filter. Springer, 280 pp.

I always look for opportunities to buy new books.

mmm I really do not understand the obsession with surface temperatures when the vast majority of the heat energy in the earth system resides in the oceans.

Excellent point, Gary. Giant squid and other deep sea denizens would appear to be safe from global warming.

LOL

Perhaps Gary was referring to the issue of transfer of heat from the ocean’s surface to the deep ocean, and how little of that issue is understood.

Vaughn is being funny today

Yeah, it’s a travesty the way he seems to care more about polar bears than giant squid and weird fish with blinky lights.

“how little of that issue is understood.”

That’s for sure. But we do at least understand that heat can reach 1 km or deeper much faster in the ocean than the land, thanks to convection in the former.

Near the surface there is more ocean than land, but this reverses at a depth of 5 km or more. That and the higher specific heat of land means that the top 10 km of land has a much higher heat capacity than that of the oceans. The obstacle to heating the land is the much higher thermal resistance.

The situation can be modeled electrically as follows. The numbers used here are only to give a qualitative idea of what happens and do not correspond to actual thermal resistances and heat capacities.

The 5V supply represents warming of the surface. The resistors represent the obstacles to heating the ocean and the land respectively, with the 1 megohm resistor representing convection and the 3 megohm resistor representing the higher resistance of the land (not to scale). The 10 and 20 microfarad capacitors represent the thermal capacity of respectively the ocean and the land. The current through the resistor represents the rate at which heat flows into the surface, while the charge on the capacitor represents the accumulated heat in respectively the ocean and the land.

This graph plots the current (red graphs with Y axis on the left) and charge (blue graphs with Y axis on the right) of respectively the “ocean” (thin curves) and “land” (thick curves), as a function of time in seconds (think of a second as somewhere between 2 and 10 years). This is what would be observed with this actual circuit when the 5V is applied at time 0.

Initially heat flows at a much greater rate into the ocean than the land, and accordingly the ocean’s charge (heat energy) accumulates more rapidly. As the ocean heats up the heating process slows, until at about 40 seconds heat is flowing equally into the ocean and the land. Very slowly the land heats up too. In the limit both the ocean and the land are at 5V, with the ocean and land charged to respectively 50 and 100 microcoulombs, modeling the fact that they eventually reach the same temperature, but with a lot more heat energy in the land than the ocean. (For this purpose I’m counting everything below the ocean floor as land since the thermal resistance there is essentially that of land.)

In the real world the heat is not being switched on suddenly but is gradually increasing, which changes the shapes of the curves without however changing the basic idea that the ocean heats more quickly than the land but also saturates more quickly.
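The analogy above is easy to check numerically. The sketch below uses the illustrative component values stated in the comment (5 V step, 1 MΩ with 10 µF for the "ocean", 3 MΩ with 20 µF for the "land") and locates the point where the two currents cross.

```python
# Numerical check of the RC analogy, using the illustrative values from
# the comment: a 5 V step, ocean = 1 Mohm + 10 uF (tau = 10 s),
# land = 3 Mohm + 20 uF (tau = 60 s).
import numpy as np

V = 5.0
R_o, C_o = 1e6, 10e-6        # "ocean" resistor and capacitor
R_l, C_l = 3e6, 20e-6        # "land" resistor and capacitor
tau_o, tau_l = R_o * C_o, R_l * C_l   # time constants: 10 s and 60 s

t = np.linspace(0.0, 60.0, 60001)
I_o = V / R_o * np.exp(-t / tau_o)         # current = rate of heat flow in
I_l = V / R_l * np.exp(-t / tau_l)
Q_o = C_o * V * (1 - np.exp(-t / tau_o))   # charge = accumulated heat
Q_l = C_l * V * (1 - np.exp(-t / tau_l))

# The two currents cross at t* = 12*ln(3), about 13.2 s.
i = int(np.argmin(np.abs(I_o - I_l)))
print(f"currents equal near t = {t[i]:.2f} s")
print(f"charges at crossing: ocean {Q_o[i]*1e6:.1f} uC, land {Q_l[i]*1e6:.1f} uC")
print(f"asymptotic charges: ocean {C_o*V*1e6:.0f} uC, land {C_l*V*1e6:.0f} uC")
```

The crossing at about 13 s matches the later correction in the thread, and the asymptotic charges of 50 and 100 µC reproduce the long-run behavior described above.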

“That ~~and the higher specific heat of land~~ means that the top 10 km of land has a much higher heat capacity than that of the oceans.”

Sorry, strike as shown. The volumetric heat capacity of water is comparable to that of the Earth’s crust, so it boils down to the crust having considerably more volume than the oceans.
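The corrected claim is a quick back-of-envelope calculation. The sketch below uses textbook-level property values (my round numbers, not the commenter's) to compare the heat capacity of the top 10 km of crust with that of the whole ocean.

```python
# Back-of-envelope check: the volumetric heat capacities of water and
# crustal rock are within a factor of ~2, so the comparison is mostly
# about volume. Property values are textbook-level approximations.
cp_water = 4.18e6    # volumetric heat capacity of water, J m^-3 K^-1
cp_rock = 2.2e6      # volumetric heat capacity of crustal rock, J m^-3 K^-1 (approx.)
area = 5.1e14        # Earth's surface area, m^2
ocean_frac = 0.71    # fraction of the surface covered by ocean
mean_depth = 3700.0  # mean ocean depth, m

heat_cap_ocean = cp_water * area * ocean_frac * mean_depth   # J K^-1
heat_cap_crust = cp_rock * area * 10_000.0                   # top 10 km, J K^-1

ratio = heat_cap_crust / heat_cap_ocean
print(f"crust (top 10 km) / ocean heat-capacity ratio: {ratio:.1f}")
```

With these approximations the top 10 km of crust holds roughly twice the heat capacity of the oceans, consistent with the corrected statement.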

“until at about 40 seconds heat is flowing equally into the ocean and the land.”

Another mistake. I was comparing the slopes of the two currents when I should have been comparing their values, which are equal at around 13 seconds.

This is also where the slopes of the two charges are equal, current being the derivative of charge with respect to time, I = dQ/dt.

Gary,

“I really do not understand the obsession with surface temperatures when the vast majority of the heat energy in the earth system resides in the oceans.”

Actually you could have added the Earth’s core too. Don’t forget that. There is a lot of heat there too.

Yes, it’s a tough one, isn’t it? The only answer I can come up with is that most of us, and our fellow animals, plants, etc., live on the surface. It’s the surface temperatures which drive weather systems, determine polar ice extents and ultimately sea levels.

You might think that’s a feeble answer. But maybe someone can help me out with a better one you might find slightly more convincing.

Temp:

Not quite: since the “vast majority of earths available energy exists in the oceans”, it is those oceans that drive the “weather systems, ice extents and sea levels.” It cannot be otherwise, as the caloric content of the oceans dwarfs that of the atmosphere by a wide margin.

It has already been determined that the heat inside the Earth contributes very little to oceanic or atmospheric energy. Honestly, do not ask for a cite on this; a Google search will help with that better than I can, and this point has been previously discussed.

Roy Weiler

It depends heavily on the time scale. When considering one century the ocean dominates because the land’s thermal resistance isolates the crust from the surface. But at a scale of millennia this reverses because the ocean saturates while at the same time flow of heat into the crust becomes observable. See my illustrated discussion of this point just above.

Vaughn,

This is what we observe in boreholes, correct?

Bill

and always not forgetting that temperature and heat energy are 2 very different animals

Correct. Below the land surface there is no radiation and all transport is via conduction and convection, and the fourth-power radiation law does not apply there. Furthermore the electrical relationship Q = CV, charge equals capacitance times voltage, has its counterpart in the formula Q = CT where Q is heat energy, C is heat capacity, and T is the temperature difference (just as V is voltage difference).

Sometimes T in this context is written ΔT to emphasize that it’s a difference, and logically V should likewise be written ΔV. However unlike temperature there is no such thing as absolute zero for voltage whence no ambiguity can arise from abbreviating ΔV to V. When context does not resolve this ambiguity in the case of temperature one should write ΔT for the difference in temperature. This does not undermine the analogy because the formula for electrical charge should properly be written Q = CΔV, or better yet ΔQ = CΔV (and likewise for temperature: ΔQ = CΔT).

Nice paper.

The treatment of observational uncertainty is a weaker aspect of the paper. They assume that the errors in global temperature are white noise with a fixed standard deviation. This is not what Brohan et al. 2006 state. That the errors are not white noise is doubly clear from Thompson et al., where they get their global temperature series. The analysis is clearly sensitive to the discontinuity in observed global SSTs in 1945 that Thompson et al. highlighted. They also note the recent slowdown in warming, but don’t note that this is a particular feature of HadCRUT3. GISS and NCDC analyses don’t level off to the same degree.

“They also note the recent slowdown in warming, but don’t note that this is a particular feature of HadCRUT3. GISS and NCDC analyses don’t level off to the same degree.”

Over the last three decades GISS has certainly been steadily ahead of HadCRUT3 by around 0.1 °C. But looking at the difference between the two over those 30 years, I find it hard to see any significant pattern in their difference (green curve).

In any event HadCRUT3 is hardly “levelling off.” This graph shows the month-to-month deltas in millikelvins since 1975 for HadCRUT3 smoothed to a 15-year moving average. The idea that the period 1975–2000 climbed steadily while 2000–2010 held constant or declined is contradicted by the greater number of negative monthly deltas during 1975–1980 and 1985–1990 than at any time since.

Granted one can find more downturns with smaller windows, but the smaller the window the less meaningful the downturn. One needs 15-year windows to eliminate most of the noise.

Won’t the multi-century long term equilibrium likely include a return of CO2 concentration to something like the pre-industrial level? There probably isn’t enough fossil fuel on the planet to sustain a 500-600 ppm CO2 concentration for centuries.

While I won’t be around to see it, I wouldn’t be at all surprised to see 280 ppmv reached by 2100. Extrapolating the Keeling curve via Hofmann’s raised-exponential model predicts 1000 ppmv for 2100. However this ignores the possibility of nuclear fusion coming online around 2060, which seems likely to me (I’m betting on inertial confinement, not magnetic). I’m also more on Jacobson’s side than Archer’s on the question of CO2 residence time. In fact if anything I’m even further in that direction than Jacobson.
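A raised-exponential extrapolation of the kind Hofmann proposed is easy to reproduce. The parameters below are anchored by eye to two round-number points (280 ppmv preindustrial, roughly 390 ppmv in 2010) rather than taken from Hofmann's published fit, but the 2100 value lands near the quoted 1000 ppmv.

```python
# Rough reproduction of a "raised exponential" Keeling-curve extrapolation:
# a preindustrial base plus an exponentially growing anthropogenic excess.
# Anchor points and doubling time are my assumptions, not Hofmann's fit.
base = 280.0    # preindustrial CO2, ppmv
tau = 32.5      # assumed doubling time of the anthropogenic excess, years
A = (390.0 - base) / 2 ** ((2010 - 1958) / tau)   # anchor: ~390 ppmv in 2010

def co2(year):
    """Raised-exponential CO2 concentration in ppmv."""
    return base + A * 2 ** ((year - 1958) / tau)

print(f"extrapolated CO2 in 2100: {co2(2100):.0f} ppmv")
```

With these assumptions the 2100 value comes out near 1030 ppmv, in line with the roughly 1000 ppmv figure mentioned above.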

Judith, you say:

I fear for your reading skills, Judith. This study is not “observationally based” in any sense. According to the paper:

So it’s not about observations as you claim, Judith. It’s not even about complex climate models. The story starts with the building of a “simple but physically based model”, which (as far as I can tell) has not been tested, and (again AFAIK) for which the code is not available.

After beginning by stating that their study is “not observationally based, but model based,” they go on to say:

So rather than use an existing model, they build a new, “simple” model … and then rather than testing the model, they try to “constrain the parameters” of the model by a comparison with observations … and in your world and your words, this is an “observationally based analysis”?

The sound you hear is me rolling on the floor and laughing my okole off … really, Judith? “Observationally based”? I begin to despair about the ability of our top scientists …

w.

PS – There are huge problems with their “simple but physically based model”; it uses the same bad math I’ve pointed out elsewhere, but that’s an issue for another thread.

For now, it suffices to point out that this study is not “observationally based” in any sense of the word. You seem not to realize that all they are doing is DESCRIBING THE NORMAL PROCESS USED TO TUNE A MODEL by comparing it to observations and adjusting the parameters … which establishes absolutely nothing about the uncertainties in the sensitivity, only about the uncertainties in the model.

This is a recurring problem with trying to establish uncertainty in a brand new field like climate science, where there is very little in the way of solid landmarks to compare things to. These folks are trying to compare the climate to a model … but we know nothing about the uncertainty of the model itself.

PPS – Their “simple but physically based model” assumes that the heat storage in the ocean is proportional to the temperature difference … given that heat in the ocean is stored and given up by very different mechanisms, this is a joke, right? Please tell me you don’t believe that, Judith. Since heat storage in the ocean is a function (inter alia) of things like cloud cover and ocean turbidity and wind-governed evaporation rates that are not simply related to temperature, claiming ocean heat storage is proportional to ∆T is an unjustifiable assumption made solely to ensure the mathematical tractability of the problem.

Judith, perhaps you could give us the 95% CI on that assumption made solely to simplify the problem … I’m sure you see the problem. As is all too frequent in climate science, their “uncertainty” calculations only touch the top layer of assumptions, and ignore all deeper assumptions.

Which seems like an extremely uncertain way of going about things.

Perhaps the editor should resign, since it appears the paper is much less important than the authors seem to claim?

I think Judith should resign – just because.

‘Although lakes cover less than 2% of the Earth’s surface, compared to 71% covered by oceans, lake sediments accumulate each year almost half as much organic carbon as do marine sediments. However, it is not yet known precisely how this effective carbon sink operates in lakes.’

I have not been able to find out how much carbon is sequestered in ocean sediment – presumably twice that of lakes.

Willis

in your PPS you have elucidated the point that I was fumbling to grasp in my comment at 1.47 pm, for which I am most grateful

I’ll suggest you look at the guts of a Kalman filter before you say anything.

steven mosher | September 15, 2011 at 12:00 am

I’ve looked at a number of them. I’ll suggest you look at the paper, where they say:

They’re using a Kalman filter to estimate parameters on a simple energy balance model … color me unimpressed. Are you saying that is “observationally based”? Not sure what your point is here, but yes, I do understand the use of the Kalman filter. I first heard about Kalman filters from my older brother, who is the smart one in the family, winner of a “Scientist of the Year” award from Discover Magazine in 1991 for his work developing the first civilian version of the GPS system.

Anyhow, he was using (hardwired) Kalman filtering to solve the question of integrating all the delays and timing errors and atmospheric lags and including all the input from multiple satellites and the like to provide the best 3-D position estimate.

So Kalman filters are not news to me.

w.

Ah, so a model used to simulate reality is then used to assign a figure to reality. Sigh.

But on the bright side they tried to include natural forcings, which is great as far as I’m concerned (no comment on the validity of their specific addition yet though; I’ll need more time to read this paper thoroughly).

Judy – I agree that this is an informative paper. Having read through it once, I will have to reread it to appreciate the nuances, and perhaps some of the pitfalls. However, the potential value of the work, as you indicate, resides in its examination of the sensitivity of the transient climate sensitivity parameter λ to uncertainties in temperature, forcing, natural variability, and ocean mixed layer depth, alone or in combination (see, e.g., figures 7 and 8). The power of using a recursive Bayesian method to narrow uncertainty margins as observations accumulate is illustrated in Figure 6. Whether further narrowing by 2030 will relax pressure to maintain CO2 concentrations below stringent limits (e.g., 475 ppm if a goal is to avoid a 2 °C rise over pre-industrial temperatures) remains to be seen. Also, as noted, transient climate responses represent only a fraction of long term equilibrium responses, and are most informative for the coming decades of this century, but less so beyond that. For the purpose of transient responses, the equation C(dT/dt) = F – λT suffices, because heat transfer to the deep ocean can be assumed to be very small relative to quasi-equilibration with the ocean mixed layer. For longer term persistent forcings, the deep layer heat capacity, Co, and temperature response will also need to be included.
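The one-box equation in the comment above can be integrated directly to show the distinction between the warming at the moment of CO2 doubling (TCR-like) and the quasi-equilibrium warming F2x/λ (TCS-like). The parameter values below are illustrative assumptions, not the paper's estimates.

```python
# Direct integration of the one-box model C dT/dt = F - lambda*T under a
# 1%/yr CO2 increase, to illustrate the TCR/TCS distinction. Parameter
# values are illustrative assumptions, not the paper's estimates.
import numpy as np

C = 8.0      # mixed-layer heat capacity, W yr m^-2 K^-1 (assumed)
lam = 2.0    # feedback parameter lambda, W m^-2 K^-1 (assumed)
F2x = 3.7    # radiative forcing for doubled CO2, W m^-2

dt = 0.1
t = np.arange(0.0, 70.0 + dt, dt)           # ~70 yr to doubling at 1%/yr
F = F2x * t * np.log(1.01) / np.log(2.0)    # forcing grows linearly in time

T = np.zeros_like(t)
for k in range(1, len(t)):
    T[k] = T[k - 1] + dt / C * (F[k - 1] - lam * T[k - 1])

tcr = T[-1]        # warming at the time of doubling (TCR-like number)
tcs = F2x / lam    # quasi-equilibrium warming after doubling (TCS-like number)
print(f"warming at doubling: {tcr:.2f} K; quasi-equilibrium F2x/lambda: {tcs:.2f} K")
```

The transient value lags the quasi-equilibrium one by roughly τ·(dF/dt)/λ with τ = C/λ, which is why the TCS comes out slightly larger than the TCR here.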

For perspective, I think it’s worth noting that in theory, a perfect knowledge of forcing and temperature responses, free of confounding variables, would enable us to compute an extremely accurate value of climate sensitivities, whether they are ECS, TCR, or TCS, and would obviate the need to estimate feedbacks, since the latter would be implicitly determined by the forcing/response relationship. The obstacle has been the difficulty of estimating forcings and unforced variability accurately, and in choosing the appropriate representations of global temperature responses. In that sense, the paper translates this simple concept into a probabilistic estimate within defined boundaries that are narrow enough for at least tentative judgments of future temperature trajectories as a function of future GHG and aerosol emissions. The use of twentieth century data on temperature, CO2, aerosols, volcanoes – in combination with estimates of natural variability based on the standard deviation of the detrended century long temperature record – is an important strength, particularly when combined with the assessment of the effects of possible errors in the data, which is the main strength of the paper.

One concern was noted above by Nebuchadnezzar. The twentieth century temperature record is characterized by substantial heterogeneity in terms of the dominant causes of temperature change during different intervals, with the middle of the century most problematic for at least some of the variables – probably aerosols and perhaps natural variability as well. Some of this shows up in Figure 6, where the availability of additional data increased rather than reduced the uncertainty limits. Nevertheless, the TCS estimates from the paper already provide a good sense of the contributions to climate change of future anthropogenic inputs of CO2, other GHGs, and aerosols, regardless of how responses to those inputs will be modified by other climate factors. In this regard, it should be noted that the paper itself is not one more attempt to model the future trajectory of global temperature, except in a very limited sense. Rather, it uses twentieth century observations along with uncertainties about those observations to derive a range of transient climate sensitivity values that are likely to include the true value of that parameter. Applying TCS to the future will be relevant only in regard to the variables examined in the paper.

Fred, thanks for your thoughts. You say:

Fred Moolten | September 14, 2011 at 4:26 pm

Well … you describe how things are “in theory”, but surely that depends on the theory you’re invested in, doesn’t it?

For example, I have provided evidence here that climate sensitivity is not a constant, but is a function of temperature.

In fact, this claim is trivial to establish, in that the hotter the earth becomes, the harder it becomes to drive it warmer yet. However, as cited above I have provided much more detail to establish the point, showing the changes in local tropical circulation regimes that radically change the climate sensitivity … which means that lambda, the climate sensitivity, is not a constant, even in theory. It is a function of temperature f(T), and not a pretty function, since it has thresholds and is wildly non-linear, particularly near the thresholds.

Given that nature loves to run at the edge of turbulence, saying “non-linear near the thresholds” is another way of saying “non-linear most of the time” … all of which puts a floor-to-ceiling limit on the uncertainty of their method.

w.

I don’t think the authors would disagree that climate sensitivity is a function not only of temperature but also the nature of the forcing. In that sense, the TCS values derived in the paper apply to the kind of climate we had in the twentieth century, and would be inapplicable, for example, to an ice age (where ice/albedo feedback would be much stronger) or a hothouse climate with no ice at the poles, (where there would be no ice/albedo feedback). If the current century is characterized by CO2 as the principal anthropogenic greenhouse gas, industrial aerosols as the principal source of long term aerosol variation, and volcanic eruptions at about the rate of recent centuries, and if no tipping points are breached, the reported TCS range should include the true value for the remaining years of the century. As the authors indicate, a strength of their method is the recursive (Bayesian) recalculation of probabilities as data accumulates, and so the method is flexible enough to accommodate a fair amount of deviation, but not an extreme amount.

I am inclined to think that the latest tipping point – to cooler conditions – happened after 1998.

‘This paper provides an update to an earlier work that showed specific changes in the aggregate time evolution of major Northern Hemispheric atmospheric and oceanic modes of variability serve as a harbinger of climate shifts. Specifically, when the major modes of Northern Hemisphere climate variability are synchronized, or resonate, and the coupling between those modes simultaneously increases, the climate system appears to be thrown into a new state, marked by a break in the global mean temperature trend and in the character of El Nino/Southern Oscillation variability. Here, a new and improved means to quantify the coupling between climate modes confirms that another synchronization of these modes, followed by an increase in coupling occurred in 2001/02. This suggests that a break in the global mean temperature trend from the consistent warming over the 1976/77–2001/02 period may have occurred.’ https://pantherfile.uwm.edu/kswanson/www/publications/2008GL037022_all.pdf

Pretty much seems to have happened – http://i1114.photobucket.com/albums/k538/Chief_Hydrologist/earthshine.gif – sourced from Project Earthshine.

Fred Moolten | September 14, 2011 at 8:09 pm

One man’s constant is another’s variable.

It’s often the case that a value is usable as a constant only within a range of conditions. In this case, it is a climate not dramatically different from that of the present and recent past. That qualification doesn’t invalidate the estimated value as something to be applied to similar climates in the future.

Certainly a pertinent fact is that the sensitivity between 190 ppm and 280 ppm coming out of the ice age seems to have been in excess of 10 degrees per doubling, and we are asking what it is now in the 280 plus range. Obviously it was helped by ice albedo feedback, but some claim it is now close to 1 degree per doubling which seems a bit of a sudden drop, especially as we still have some ice left.

True enough, but it’s hard to make any reliable analogies with the ice ages because CO2 is changing orders of magnitude faster now than then. We will be doubling CO2 in far too short a time to see anything like the rise in temperature back then.

Fred,

You wrote:

“For perspective, I think it’s worth noting that in theory, a perfect knowledge of forcing and temperature responses, free of confounding variables, would enable us to compute an extremely accurate value of climate sensitivities, whether they are ECS, TCR, or TCS, and would obviate the need to estimate feedbacks, since the latter would be implicitly determined by the forcing/response relationship.”

This statement is not true. Although it may seem counter-intuitive, you cannot derive ECS from temperature and forcings even if they are known exactly. The explanation is that any ECS will match the data provided that its underlying model asymptotes to the appropriate value of d(dH/dt)/dT at “small” values of T (where T is temperature change from start of perturbation, t is time and H is energy gain). This is broadly equivalent to having the same linear coefficient for the feedback term, which means that one can still vary the non-linear coefficients without changing the match to temperature, which in turn gives an open range of ECS values. For a clear demonstration of this, see

http://rankexploits.com/musings/2011/equilibrium-climate-sensitivity-and-mathturbation-part-2/
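A minimal numerical illustration of this non-uniqueness (my own sketch, not the linked analysis; the heat capacity C, forcing F, linear feedback lam and the quadratic coefficient b are all assumed values): two energy balance models share the same linear feedback coefficient, so they track each other almost exactly over the observable transient, yet they settle at noticeably different equilibria.

```python
# Two models: C dT/dt = F - lam*T + b*T^2, with b = 0 (linear) and b > 0.
# Both have the same linearized feedback at small T, so "perfect" transient
# data barely distinguishes them, but their equilibrium responses differ.
C = 200.0    # effective heat capacity, W yr m^-2 K^-1 (assumed)
F = 3.7      # 2xCO2 forcing, W m^-2
lam = 1.85   # shared linear feedback parameter, W m^-2 K^-1

def integrate(b, years, dt=0.1):
    """Euler-integrate C dT/dt = F - lam*T + b*T**2 from T = 0."""
    T = 0.0
    for _ in range(int(years / dt)):
        T += dt * (F - lam * T + b * T * T) / C
    return T

T50_lin, T50_nl = integrate(0.0, 50), integrate(0.12, 50)       # transient
Teq_lin, Teq_nl = integrate(0.0, 3000), integrate(0.12, 3000)   # near-equilibrium
print(round(T50_lin, 3), round(T50_nl, 3))   # nearly identical after 50 yr
print(round(Teq_lin, 3), round(Teq_nl, 3))   # clearly different equilibria
```

The transient trajectories agree to within a few thousandths of a degree over 50 years while the equilibrium responses differ by several tenths, which is the ambiguity being described.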

Paul

Paul – Although it’s not germane to this paper, I disagree – somewhat. If sufficient temperature and forcing data are available, it should be possible to arrive at an accurate estimate of ECS. If you mean that you can’t do it from just a small range of temperature/forcing relationships, then there is no disagreement. I was also careful in my wording to use the word “perfect” for temperature and forcings, but only “extremely accurate” for sensitivity. My main point in my comments, though, was that we lack anything resembling perfect data and must utilize the kind of probabilistic analysis reported by the authors to estimate a sensitivity range that is in fact fairly wide.

I visited the web page you cited a while back, but I will revisit it to review the points you made.

Having briefly reviewed the analysis you linked to (but I need to go over it further), I think we are more in agreement than disagreement. I certainly agree that we need to know something about the shape of the forcing/temperature curve to constrain ECS estimates from the data, in the absence of prior assumptions about ocean heat transfer relationships. That would take a long time, but in fact the point I was making is that we can’t do something like this in any case because the data are not known with sufficient accuracy, and so the practical obstacles to estimating ECS from “perfect data” are irrelevant.

“I certainly agree that we need to know something about the shape of the forcing/temperature curve to constrain ECS estimates from the data,…”

I think you’re getting it, Fred. The shape that you refer to includes the timing and strength of all of the temperature-dependent feedbacks – the big unknowns. If one could demonstrate from the physics that the feedback is linear (for example), then one could unambiguously calculate an ECS from perfect temperature and forcing data. However, since anyone arguing for a high climate sensitivity must support the view that feedback is non-linear, then ECS cannot be unambiguously calculated even from perfect temperature and forcing data. Additional information must be used.

One of my favoritest post…

Judith Curry

The new Padilla et al. paper is interesting, but I’m afraid I can’t get as excited about it as Fred Moolten has done, because it basically does not tell us anything we do not already know (except maybe that there is great uncertainty regarding GCM outputs).

First of all, as Willis Eschenbach has remarked in detail, it is not “observationally based”, but rather simply more model work based on theoretical deliberations.

Based on a very simple calculation using the logarithmic function and the “observational basis” one would have arrived at a 2xCO2 “transient” climate sensitivity of 1.44°C, assuming that the IPCC AR4 model-based estimate (based on an admitted “low level of scientific understanding” of “natural forcing components”) is correct, namely that 93% of the forcing since pre-industrial days was caused by anthropogenic factors (only 7% by natural forcing) and that all other anthropogenic forcings other than CO2 cancelled one another out.

Observed data:

0.66°C = HadCRUT3 temperature change (linear) from 1850 to 2010

290 ppmv = Vostok ice core CO2 concentration in 1850, per IPCC

390 ppmv = Mauna Loa CO2 concentration in 2010

For every percentage point by which the IPCC underestimated natural forcing, this TCS is reduced by 0.01544°C, so that the calculated TCS would be:

at 10% natural: 1.39°C

at 15% natural: 1.31°C

at 30% natural: 1.08°C

at 50% natural: 0.77°C (as several solar studies have suggested)

So one could say that the “observed TCS” based on various assumptions on natural forcing would lie between 0.7 and 1.4°C.
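The arithmetic behind these figures can be reproduced in a few lines. This is a sketch of the back-of-envelope method described above, not anything from Padilla et al.; the input numbers are the ones quoted in the comment.

```python
import math

# Simple logarithmic TCS estimate from the observed data quoted above.
dT = 0.66        # HadCRUT3 linear temperature change 1850-2010 (deg C)
c_1850 = 290.0   # ppmv CO2, Vostok ice core, per IPCC
c_2010 = 390.0   # ppmv CO2, Mauna Loa

# Fraction of a CO2 doubling realized between 1850 and 2010 (~0.43)
doublings = math.log(c_2010 / c_1850) / math.log(2.0)

def tcs(anthropogenic_fraction):
    """2xCO2 transient sensitivity if only this fraction of the observed
    warming is attributed to (CO2-equivalent) anthropogenic forcing."""
    return anthropogenic_fraction * dT / doublings

for nat in (0.07, 0.10, 0.15, 0.30, 0.50):
    print(f"natural {nat:.0%}: TCS = {tcs(1 - nat):.2f} K")
```

With 7% natural forcing this reproduces the 1.44°C headline figure, and the other attribution fractions reproduce the 1.39/1.31/1.08/0.77°C values in the list.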

The second objection I would have to the paper is that it assumes an “equilibrium” climate sensitivity (ECS) will only be reached centuries after the CO2 increase has occurred.

This is an assumption that is based on the “hidden in the pipeline” postulation of James E. Hansen et al. which, in turn, is based on model deliberations and circular logic: i.e.

From 1880 to 2003 we have observed between 0.6 and 0.7°C warming, equivalent to a forcing of ~ 0.95 W/m^2

Our models tell us we should have seen 1.2°C, corresponding to modeled forcing of 1.8 W/m^2

Therefore:

The “missing” heat was supposed to be “hiding” in the upper ocean, as confirmed by very dicey data prior to 2003 from XBT (expendable bathythermograph) devices.

These devices were later shown to introduce a “warming bias”, according to paper co-writer Josh Willis. Since the installation of more comprehensive and reliable ARGO devices in 2003, the record shows that the ocean has cooled, instead of warming, at the same time as the atmosphere has also not warmed, effectively falsifying the “hidden in the pipeline” hypothesis and the notion of a long equilibrium time (see Pielke and others).

As other posters here have noted, the Padilla et al. study has used models, which are very poor in simulating the impacts of clouds (IPCC has conceded that this is its “largest source of uncertainty” regarding the ECS).

So I’d agree with Willis that this is simply another model exercise, which tries to rationalize the notion of a shorter-term transient climate response to added CO2 (more or less in line with the IPCC model estimates when adding in a modeled delay factor), in order to confirm the notion of a much higher ECS to be reached some time in the far distant future, at the same time underplaying the significance of the flat temperature trend of the last decade.

Not much to get excited about there, I’m afraid.

Max

Contrary to some statements above, I believe readers who go through the paper carefully will find that the authors’ approach is very much observationally based – utilizing data on temperature, forcings, and natural variability from twentieth century observations, and assessing the effect of uncertainties in these data on the TCS estimates. Guessing at the accuracy of IPCC allocation of anthropogenic vs natural forcings as an alternative would fail to exploit the wealth of data in the observational record and would be of no value in estimating a TCS range. In using statistical methods to estimate a parameter (in this case λ), an appropriate approach is to develop a model, apply the observational data, utilize the results to test and refine the model, and then compute the desired parameter range from the output. This is exactly what was done here, and in fact, there is no way to use the observational data without some form of model. The word “model” for that purpose should not be confused with developing a GCM for the purpose of climate simulations, as some of the above comments may have done.

That equilibrium climate sensitivity would not be achieved for centuries is not based on Hansen’s “pipeline” metaphor but on the enormous heat capacity and slow warming of the deep ocean, and is not in doubt for persistent forcings. It is also somewhat irrelevant to the paper, which estimates TCS and not ECS, although the paper is correct in stating that ECS will be substantially greater and take far longer to be reached.

Clouds are also irrelevant, because they are feedback responses. In fact, a strength of the paper is the estimate of sensitivity from forcing and temperature data, so that a need to get clouds or other feedbacks right is avoided.

This paper is not the last word on the subject, but I see it as a useful addition, with novel features that advance beyond some of the simpler approaches used in the past or that have been attempted in these threads. At this point, its estimates of a TCS range of 1.3 to 2.6 K are a useful starting point, to be adjusted as further climate data become available. Whether that equates to an equilibrium sensitivity in the often-quoted 2 to 4.5 C range remains to be determined, but is less important for understanding changes during the remainder of this century.

This is a paper that requires careful scrutiny. I plan to reread it to see if I have overlooked important elements, but I think the general principles will be apparent to readers who take time to go through it even once.

“Clouds are also irrelevant, because they are feedback responses. In fact, a strength of the paper is the estimate of sensitivity from forcing and temperature data, so that a need to get clouds or other feedbacks right is avoided.”

I don’t think we can conclude that now. As we learn more about clouds, as we shall, we may find that the knowledge clears up some of the uncertainty and corrects bias in the climate sensitivity estimate.

Clouds are a forcing.

We are pretty sure this is the case because SW up was negative (relative warming) and LW up was positive (relative cooling) – leading to net warming which is seen in ocean heat content.

Odd to say the least. It seems so obvious that only a space cadet in the Kool-aid queue could mistake it for anything else.

Clouds don’t change?

~~Wh@t @ W@nker he is~~.

Fred, aren’t you then just performing a retrofit on the model, rather than actually TESTING said model (if you’re using observational results to ‘adjust it’)?

I’ll qualify this immediately by saying I need to re-read the paper in more detail, but it strikes me that using the observational results to ‘force’ their model into some sort of agreement (incidentally suggesting that the model was way off to begin with) suggests that their model is lacking – or at least is working off assumptions that may not be borne out in reality.

Or to put it another way, how do you tell one model with good in-built assumptions from one with bad in-built assumptions if all you’re going to do is ‘adjust’ both at the end point anyway??

Labmunkey – You may be thinking about “models” in the GCM sense. Here, the model is simply a necessary tool onto which the observed values can be imposed to derive an estimate of the TCS range. The model didn’t start with any TCS estimate, and so it was neither “on” nor “way off”, but rather an application awaiting data. Among its “assumptions” were that it included the relevant parameters (forcing, natural variability) and used appropriate statistical methods, including the Kalman filter discussed in some of these comments. You can read the paper for the authors’ justification for these, which I found reasonable, but they did not “adjust” the assumptions. What they did do, and it is one of the strengths of the method, is continually update the estimates on the basis of additional data acquired after the starting point. This permits the method to narrow the range of estimated TCS within which the true value is likely to lie, as more and more data become available. Notice, though, that although the range narrowed, the most likely value for TCS stayed relatively constant, and it was the uncertainties that contracted – see Figure 6.

Fred Moolten

Huh?

Max

As long as clouds are feedbacks generated by some previous forcing of any type (GHGs, solar changes, aerosols, etc.), they should be irrelevant. Remember that climate sensitivities are a relationship between forcing and temperature, and when that can be estimated from data, we don’t have to know which feedbacks were responsible for the relationship or to what degree. That is one of the virtues of the authors’ approach.

And if they are not?

The forcings used by the authors are anthropogenic (GHGs and aerosols), and natural (solar and volcanic). That appears to cover all major sources of forcing. The Spencer/Braswell thread addressed a claim for some type of “internal forcing” mediated by clouds and playing an initiating role in ENSO. Reasons for doubting the claim can be found in that thread, so it would be redundant to restate them here. Even if a case could be made for this additional type of “forcing”, the claimed responses are probably too short term (months) to significantly affect long term climate sensitivity estimates. At this point, it’s reasonable to stick with the recognized forcings. As the paper noted, forcing uncertainty is a source of variation in the estimates of the sensitivity parameter, but the latter is not dramatically altered by different values for the forcings.

For clarity, it’s worth pointing out that both forcings and feedbacks can alter temperature by changing the radiative balance at the tropopause or the TOA. In some cases, it would be interesting to estimate “sensitivity” to feedbacks, but if one is interested in CO2 or other major forcings, the data used in this paper would be the appropriate choice.

We have gone over your errors in some detail. It is of course pointless to respond to any of your prevarication and misrepresentation.

You quote out of context to imply that things are the reverse to what is said – and then boast of ‘tormenting me’. Believe me – I don’t give you that sort of power. It is just juvenile nonsense and you deserve no detailed response because you are not honest and will argue nonsense just for the hell of it.

Fred,

You persist in mis-stating SB2011, despite the error having been brought to your attention.

“The Spencer/Braswell thread addressed a claim for some type of “internal forcing” mediated by clouds and playing an initiating role in ENSO.” Even Dessler has agreed to modify his introductory statement to avoid this mis-representation. You are being willfully provocative and tiresome on the issue. Please stop digging the hole you are standing in.

More interestingly, for the present paper, it would have been interesting to see a fuller decomposition of the forcing data, rather than its (highly restrictive) representation as the weighted sum of just two fixed input series. If the authors had solved for a short vector of parameters which summed a range of high and low frequency decomposed elements of the forcing data, then we might avoid completely going back over the question of whether cloud variation should be treated as a forcing in its own right.

Paul – This is an easy response to make. Readers who visit the paper will see the following:

From the abstract:

Here we present further evidence that this uncertainty from an observational perspective is largely due to the masking of the radiative feedback signal by internal radiative forcing, probably due to natural cloud variations.

From the text:

We have shown clear evidence from the CERES instrument that global temperature variations during 2000–2010 were largely radiatively forced… much of the temperature variability during 2000–2010 was due to ENSO [9], we conclude that ENSO-related temperature variations are partly radiatively forced.

As you know, Paul, Spencer was even more explicit and emphatic in his blog in insisting that clouds can play a forcing role in which clouds change first and temperature follows, and not merely a feedback role (although he agrees that cloud feedback also occurs). Also note that in all of these discussions, the term used is “play a role”, with no implication that any of these phenomena is the sole cause of some effect.

In my response above, the “paper” I refer to is Spencer/Braswell 2011, and not the Padilla paper. I was providing evidence to support my statement of what SB-11 claims for “internal forcing” due to clouds.

‘The Spencer/Braswell thread addressed a claim for some type of “internal forcing” mediated by clouds and playing an initiating role in ENSO.’

Typical prevaricating – there is no suggestion by anyone that clouds cause ENSO. It is a meaningless red herring.

‘During El Niño, the warming of the tropical eastern Pacific and associated changes in the Walker circulation, atmospheric stability, and winds lead to decreases in stratocumulus clouds, increased solar radiation at the surface, and an enhanced warming…’ http://www.cgd.ucar.edu/cas/Staff/Fasullo/refs/Trenberth2010etalGRL.pdf

Low level stratocumulus clouds increase in a La Niña.

This seems a cause of TOA flux change on decadal scales which shows up in ocean heat content.

It is the mechanism that explains the lack of warming over the last decade.

I get the impression that many people would prefer to think of it as noise and hope it will just go away.

@Max,

I agree with Fred M and Dr Curry. The paper is observation-based. You might argue about the priors used, but the authors have sought to use what was there.

@Max,

mmm. They have also used some stuff that wasn’t there. The temperature match is Hadcrut3 with ENSO effects “removed”, but I can’t see any reference to how it was removed. To judge by the sudden changes in parameter estimates through the mid-century drop in temperature and the start of the post 1980 climb in temperature, it seems that the elimination of ENSO might not have been done very cleanly.

Paul K

Yeah. The paper is “observation-based”

“except for…”. (Which means it’s not really “observation-based”.)

Tossing in a bit of observation and then adjusting, massaging and manipulating the data to fit a model assumption is not “observation-based” IMO.

Max

At steady state influx = efflux, so the rates are the same. You can measure the rate of either influx or efflux if you block one.

Blocking efflux is very difficult, but blocking influx is easy. During every total solar eclipse we remove the input from the steady state temperature, and the decay of the temperature curve and the upwardly radiated flux should inform us as to what is going on.

if you go to

http://www.climate4you.com/

The urban heat island

Other local meteorological phenomena

it shows you this plot of changes in ground temperature and air temperature during an eclipse

The 10 degree drop in ground temperature vs 1 degree in the air is very interesting

“At steady state influx = efflux”

In climate science, we never know whether we have steady state or not. However, we know that the climate is a nonlinear dissipative system in many dimensions, and we know that input in each locale is time-varying, and not the same in all locales, so we know that steady state is extremely unlikely.

“The 10 degree drop in ground temperature vs 1 degree in the air is very interesting”

Yes it is.

I don’t know if I fully understand what’s going on here, but I get the impression that Padilla et al. (2011) are trying to model sensitivity as a dynamic “parameter”. If I understand this correctly, they’re saying that their “back-modeling” is providing a change to a lower sensitivity after 2000. This resonates interestingly with Gent et al. (2011).

I wonder what would happen if CCSM4 were run with a sensitivity change in 2000 appropriate to the sensitivity change noted by Padilla et al. (2011)? And if the results followed the data closely, what would that mean for the current crop of models, which AFAIK assume sensitivity as a parameter? (Am I wrong Prof. Curry?)

And what does that mean for the immediate future of political recommendations? I can see a whole new round of politicization of Science here as people fight over how to “tune” the changing sensitivity over the next century for “predictive” models. Especially since we don’t know why it changed around 2000, if it did.

Ref:

Gent, P.R., G. Danabasoglu, L.J. Donner, M.M. Holland, E.C. Hunke, S.R. Jayne, D.M. Lawrence, R.B. Neale, P.J. Rasch, M. Vertenstein, P.H. Worley, Z.L. Yang, and M. Zhang (2011), The Community Climate System Model Version 4. J. Clim., doi:10.1175/2011JCLI4083.1

Those are interesting points, AK. If I recall correctly, CCSM3 did a better job with some of the simulations than CCSM4.

On a small point, I wouldn’t interpret the paper as concluding that sensitivity “changed” after 2000, but rather that the estimated range of values likely to include the true value was shifted slightly downward by the post 2000 data. I think this is probably what you meant.

Fred Moolten

Did the sensitivity “change” after 2000?

Or were the model assumptions on sensitivity simply proven wrong by the post-2000 data?

I’d agree with you, Fred, that it was most likely the latter.

Max

Max – The analysis used by the authors didn’t make “assumptions” about sensitivity that later changed. Rather, their TCS value was constantly updated as new observations were entered into the computation. One of the strengths of their method is that this process can be continued in the future as we continue to gather data, and so there will never be a “final” version of TCS, although the range within which it is likely to lie narrows with time and will become less and less susceptible to dramatic deviations from the value already estimated.

Fred

If you were pressed for an answer, you would have to agree that Max’s statement was accurate when he wrote-

“the model assumptions on sensitivity simply proven wrong by the post-2000 data”

The degree of the error was substantial and was not a simple adjustment of the trend.

Rob – Max was wrong for the reasons I explained. The model didn’t “assume” any value for TCS but continually updated the range of values within which the true value is likely to fall based on accumulating data. When only data starting in 1970 are used rather than all the data, the result is more heavily weighted by post-2000 data, and results in a small downward adjustment, but the result is not the final word, and there is no “error” involved. We can’t say that the 1900-2008 estimated range was “wrong” and the 1970-2008 range is “right”. The range will continue to change as more observations are made. This might involve an upward adjustment in the future, a downward adjustment, or no change at all – this is the essence and the power of a recursive method. Based on the narrowing that arises from increased observations, it’s likely that the estimates will change not very much from now on, but they will never be either “right” or “wrong”. Rather, they will converge more and more on what is probably a true value for TCS in a climate such as ours.
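The recursive character being described can be caricatured with a toy scalar filter. This is my own illustration in the spirit of, not identical to, the paper’s sigma-point machinery, and every number in it is invented: each new observation shifts the point estimate a little, while its variance (the uncertainty range) only ever shrinks.

```python
import random

# Toy recursive estimate of a constant parameter from noisy annual
# observations: no single update is "right" or "wrong", each is simply
# the best estimate given the data so far, and the range narrows.
random.seed(1)
true_lam = 1.6
obs_sd = 0.5                # assumed observation noise std deviation
obs_var = obs_sd ** 2

mean, var = 1.0, 4.0        # broad prior: mean 1.0, variance 4.0
widths = []
for year in range(60):
    y = true_lam + random.gauss(0.0, obs_sd)
    k = var / (var + obs_var)        # gain for a static parameter
    mean += k * (y - mean)           # nudge the point estimate
    var *= (1.0 - k)                 # posterior variance always shrinks
    widths.append(var ** 0.5)

print(round(mean, 2), round(widths[-1], 3))
```

After 60 synthetic “years” the estimate sits near the true value with a posterior spread far below the prior’s; the point estimate drifts little late in the run, which mirrors the behavior seen in the paper’s Figure 6.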

So evidently Padilla et al. (2011) are assuming the sensitivity to be a constant (parameter) and what changes is their estimate (and confidence) of its value. This is the assumption that all the major models make (AFAIK): a constant “sensitivity”. As opposed, for instance, to one that changes occasionally due to unknown factors.

But that begs the question: what justification is there for this assumption besides “it’s easier”?

AK – If you ask most modelers, I think you’d find them saying that sensitivity is not an absolute constant, but probably varies somewhat with the nature of the forcing. It certainly varies with the state of the climate, and here is assumed to be close to a constant for our current climate, but would differ for an ice age or a climate with no snow or ice at all.

The reason, though, that it is fairly constant for a given climate is that for a given forcing, sensitivity will be determined by feedbacks. These, in turn, are not a response to the forcing itself but to the temperature change induced by the forcing. Since forcing is defined in a way that specifies its ability to change temperature, feedback will therefore respond relatively independently of the cause of the forcing. This is particularly true for persistent long term forcings (e.g., from CO2), but may differ for forcings that only operate for very short intervals.

I don’t agree with Gent et al. that

‘if CCSM4 twentieth-century runs had ended in 2000 rather than 2005, then the comparison would have looked much better.’

Or rather, if it is technically accurate it is quite meaningless, because a visual assessment is clouded by the extreme outlier of the 1998 observations, and also by the larger-than-observed model cooling associated with the Mt. Pinatubo eruption. If you follow the graph from 1970 a clear picture emerges of a systematically increasing discrepancy between observations and the model output. I don’t see anything significantly different about the post-2000 period.

An important consideration when analysing output from the CCSM4 model is that it doesn’t make any attempt to include the indirect effects of aerosols on clouds (albedo and lifetime). Numerous assessments indicate that these indirect effects are likely the largest negative forcing components over recent decades so a model which doesn’t include them would carry an expectation that it would significantly overestimate warming.

I noticed that, but chalked it up as one more defect in this generation of models. One problem I can see with aerosols is that it isn’t only human activity that would count as “forcing”. Yearly and longer term variation in pollen and other spores would also count as “forcing” (AFAIK), until vegetation models reach the point that they can provide a coupling feed (is that the correct term?) so it can be treated as a “feedback”. And even then, anthropogenic changes to vegetation (e.g. deforestation, crop changes) would have to count as a forcing for future projections, wouldn’t they?

Moreover, the effect of aerosols, especially pollen, doesn’t necessarily combine with CO2 in a linear manner, which adds complexity to the whole assumption of a constant “sensitivity”. And isn’t there an alga that produces a gas that affects low cloud condensation?

Not all GCMs exclude indirect aerosol effects. I’m not sure exactly which do or don’t, or how they’re implemented, but echam5 is coupled to a complex aerosol model (Published description here).

The paper has some strengths

1. I liked the representation of background variation by a stochastic differential equation.

2. the nonlinear Kalman filter is a powerful enough method that it might actually fit the data meaningfully and estimate the value aimed at.

3. they validated the modeling approach by fitting the model to “data” generated by a model that was known, and whose outputs were known without measurement error; they showed that in that case the model produced an accurate estimate of the parameter. This is like a calibration step in calibrating measuring instruments. This result provides some assurance that the model might be capable of coming up with an accurate estimate.

Because of 2 and 3, I expect that other people will take the modeling approach and use it with other options.
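Point 1 refers to an Ornstein-Uhlenbeck process. As a sketch of what such a background-variability term looks like when discretized (the decorrelation time and noise amplitude here are mine, purely illustrative, not the paper’s fitted values):

```python
import math
import random

# Ornstein-Uhlenbeck background variability, dS = -(S/tau) dt + sigma dW,
# discretized with Euler-Maruyama. Unlike a pure random walk, the process
# is mean-reverting: its long-run standard deviation tends to
# sigma * sqrt(tau / 2) instead of growing without bound.
random.seed(0)
tau, sigma, dt = 8.0, 0.12, 0.1   # decorrelation time (yr), noise amplitude
S, path = 0.0, []
for _ in range(int(1000 / dt)):
    S += -(S / tau) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    path.append(S)

sd = (sum(x * x for x in path) / len(path)) ** 0.5
print(round(sd, 3))   # close to sigma * sqrt(tau / 2), i.e. about 0.24
```

MattStat’s suggested alternative (a stochastic dynamical model with a 40–70 year mean frequency) would replace this first-order decay with an oscillatory kernel, but the simulation skeleton would be the same.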

Some limitations:

1. Removing the effect of ENSO from the data seems unmotivated, and it must surely have the effect of biasing the estimate and enlarging its variability. Exactly how they did it is not explained.

2. For the observational data they have used only the model that was used to generate the simulated data. To the degree that they have omitted important drivers from the model, they will have biased their estimate and increased its uncertainty. The effect of cosmic rays on clouds, to take one possible omission, has regained life recently after a pretty moribund existence. What about a possible periodicity in the random variation? That could potentially be handled with a stochastic dynamic model in place of the O-U process. It isn’t hard for a reader to think of other omissions. I expect that such omissions will be addressed by others after they read this paper.

3. To the degree that you have suspicions about the accuracy and integrity of the HADCRUT data, you’ll have suspicions about these results. I think that the random error in the data is likely underestimated, but that is a problem that all modeling attempts share.

If I had to choose one model/analysis result out of all I have read for guidance as to what is likely to happen next, this is the one I would choose. But I would recommend, based on this study, that other analyses like this be done that include larger effects of some solar variables, and other noise processes such as a stochastic dynamical model with a mean frequency in the range 40-70 years.

I meant to add: if I had to guess the effects of the omitted or underrepresented drivers, I would guess that the authors have over-estimated the transient climate sensitivity, and underestimated its uncertainty.

The authors have done a very good job, and I expect the paper to have impact.

Another strength:

4. The use of multiple priors demonstrates that the final result does not depend much on what prior is chosen — the posterior distribution is dominated by the likelihood.

Keep lowering that bar…

(untenable assumptions all the way…)

Here is the Thompson method for removing ENSO – compared to another method. Both use linear regression.
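For readers unfamiliar with these regression-based removals, the shared idea is roughly the following. This is a toy sketch on synthetic series with invented coefficients; the actual Thompson method uses a more carefully constructed ENSO proxy and lag structure.

```python
# Regress global temperature anomalies on a lagged ENSO index by ordinary
# least squares, then subtract the fitted ENSO contribution, leaving a
# residual in which any underlying trend is easier to see.

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def remove_enso(temp, nino, lag=3):
    """Subtract the lag-shifted, regressed ENSO signal from temp.
    The index is assumed to lead temperature by `lag` steps."""
    shifted = nino[:-lag] if lag else nino
    t = temp[lag:]
    beta = ols_slope(shifted, t)
    resid = [tv - beta * nv for tv, nv in zip(t, shifted)]
    return beta, resid

# Synthetic check: temp = small trend + 0.08 * lagged ENSO index.
nino = [((i * 7) % 13 - 6) / 3.0 for i in range(120)]   # toy oscillating index
temp = [0.001 * i + 0.08 * (nino[i - 3] if i >= 3 else 0.0) for i in range(120)]
beta, resid = remove_enso(temp, nino, lag=3)
print(round(beta, 3))   # recovered ENSO coefficient, near the planted 0.08
```

The questions in the comment, e.g. why a residual still shows interannual variation, are exactly about what such a single linear regression fails to capture.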

There are a couple of imponderables.

* why the interannual variation in the residual?

* why no trend to 1976, a 0.1°C trend to 1998 and no trend since? As someone once said – calling it noise is no explanation.

The ‘anthropogenic influence’ is shown here – increasing over the century. http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=lean_2008.gif&newest=1

How is the solar influence handled?

What about multiple feedbacks and non-linearities?

If we have doubts about the ability of complex models to handle some of these issues – why does a simple model succeed? How does this not gloss over the problems with the data?

Cheers

Are the observations of the 20th century interpreted as accelerating warming?

http://bit.ly/b9eKXz

Or are they interpreted as cyclic warming?

http://bit.ly/cO94in

They also don’t take into account the statistical effects of the Great Dying of the Thermometers in 1990, that severely truncated the data flow, and resulted in an immediate 1.6°C increase in “average” measurements.

From the start of Section 3 “Assimilation of observations with a nonlinear Kalman Filter”

“In order to estimate the parameter λ from past observations of temperature and forcing we use an adaptation of the Kalman filter applicable to nonlinear systems called the sigma-point Kalman filter based on Julier (2002). (The term λT in (2.3) is formally nonlinear because both λ and T are regarded as state variables. Physically λ is a constant parameter, but the Kalman filter adjusts its value to find the best fit.)”

This may not be innocuous. My understanding is that the filter can estimate state variables but not constants such as model parameters, so it seems tempting to make this switch. It estimates state variables based on the data and the model dynamics. To do this the state variable has to satisfy some state equation or other; these exist for state variables but not for parameters. They illustrate equations for T (or λT) and for S, but I have seen nothing I consider sufficient for λ separate from T. It may be that they have assumed some sort of smooth motion, or I have simply missed where this behaviour is described.

It is clear that their estimate for λ varies with time and it could do so benignly or it could act to “steal” variance from T and S in a nefarious manner. Allowing λ to be part of the state vector would seem to allow such behaviour. Put simply, I suspect wiggles in λ may be tracking or compensating for variance not well explained by the modelled dynamics of S and T or the uncertainty in F. That is not definitive and my knowledge of Kalman filters is slight. Its use here is justified on the basis of Annan and perhaps such matters are addressed there.

This technique seems crucial to the whole approach but I find it suspect. From the quote above:

“Physically λ is a constant parameter, but the Kalman filter adjusts its value to find the best fit.”

If the best “dynamic” fit (which is what is found by the filter for λ as a state variable) converges to the best “static” fit (which is what is required for λ as a system parameter) all may be well and good, but does it? If Figure 5 is used as a guide, it seems to indicate that after some 50 years of settled behaviour λ took a 1-sigma leap between the late 1990s and the end of the observational record. To me this does not seem conclusive either way, particularly as I do not know the dynamics controlling the development of λ.

This is a good paper and I am not a Kalman Filter expert so I am not saying they can’t do this, merely that I do not know that they can. Perhaps the next passing KF expert would like to comment.

Alex

Alex,

I’m not a Kalman filter expert either, so I may miss some essential points. My impression is in any case that the analysis is strongly model dependent. The authors make several comments that support the interpretation that they would agree on that.

They describe their approach in the abstract:

“We make these estimates using a nonlinear Kalman filter coupled to a stochastic, global energy balance model, using the filter and observations to constrain the model parameters.”

This indicates that the model is an essential factor in the analysis.

In 3a. they end the chapter:

“While it may seem more straightforward for λ to be Gaussian (or uniform—see Frame et al. 2005) the most appropriate formulation remains an open question; in any case, we find that with more observations the importance of the skewness of λ to our TCS estimates diminishes.”

They obviously don’t claim that what’s more straightforward is necessarily correct. This point appears to be closely related to the question of whether the prior should be flat in climate sensitivity or in its inverse λ. They seem to have it uniform in λ over the range of relevance.

Nothing in the above means that the paper would not be interesting, in particular in its more detailed analysis of the sensitivity of the results to various aspects of the data.

Pekka,

I am reasonably sure that the deterministic part of the “model” comes directly from Eq 2.3 and 2.4 (without the noise term). They give us the state vector [T λ S] and the deterministic update matrix will be 3×3 with possibly the following non-zero values (I have added the update due to F):

where dt = the time step.

T’ = (1 – dtλ/C)T + dtS/C + dtF/C

λ’ = λ

S’ = (1-dt/τ)

That would fully determine the “model” dynamics; add the noise matrices and I think that defines the “model”.

This would agree with their “…nonlinear Kalman filter coupled to a stochastic, global energy balance model …”.

It is basically AR[1] for T and for S with the parameter λ determined by the observations.

λ’ = λ is I think the obvious way to treat a constant deterministically, but not so obvious that I realised this when posting above.
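To make the dynamics concrete, here is a minimal numerical sketch of that update scheme with λ held fixed; all parameter values (C, τ, λ, the forcing ramp) are illustrative guesses, not those of the paper:

```python
import numpy as np

# Discretized stochastic energy-balance sketch following the update rules
# above: T' = (1 - dt*lam/C)T + dt*S/C + dt*F/C and S' = (1 - dt/tau)S + noise,
# with lam' = lam (the constant parameter has trivial dynamics).
# All numbers here are illustrative guesses, not values from the paper.
rng = np.random.default_rng(0)
dt, C, tau, lam = 1.0, 8.0, 4.0, 1.3   # step (yr), heat capacity, S decay time, feedback
n = 150
T = np.zeros(n)                        # temperature anomaly
S = np.zeros(n)                        # stochastic internal forcing
F = np.linspace(0.0, 2.5, n)           # assumed smooth external forcing (W/m^2)
for k in range(n - 1):
    T[k + 1] = (1 - dt * lam / C) * T[k] + dt * S[k] / C + dt * F[k] / C
    S[k + 1] = (1 - dt / tau) * S[k] + rng.normal(0.0, 0.1)
print(round(T[-1], 2))                 # transient response lagging behind F/lam
```

With λ fixed, T relaxes toward (F + S)/λ with time constant C/λ; the filter’s job is then to run an ensemble of such models and infer λ from the mismatch with observations.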

I believe λ cannot be simply derived but is inferred from differences between this and the other ensemble members which differ by sigma in their values.

The filter has only a very limited state vector and the update does not look any further back in time. If, as it seems to me, they never reanalyse the data with the improved value for λ and improved noise matrices, I suspect that this may allow λ to assume the most promising value based most heavily on the most recent data. This would seem desirable when the filter is more commonly used for purposes where one really doesn’t care about the past but needs an efficient method of getting the best possible current estimate of the system state and λ is a true variable.

AR[1] is not a particularly good predictor for a series which, like T, has persistence, but given that they consider that their results are not much affected by C this may not matter.

I agree with you with respect to the priors and I think that λ inherits a Gaussian distribution from the noise used in the filter. For TCS this does not matter so much but it still matters.

I am warming to their approach but I still think that it would be of interest to rerun the filter using the final posterior for λ as its initial prior. I say this because λ is meant to be a constant and I think it should not really be varying during the run in the way that it does. They would lose their estimates for the rate of tightening but might get either a better value for λ or an indication that the filter is not happy if λ is more tightly constrained.

Much of the above is guesswork and I suspect at least some of it is probably wrong. I am not sure that Kalman filters are optimal, for reasons given above, but they are inexpensive. I think that there is another non-iterative filter that would do a similar and perhaps better job, treating all the data in one go, but it is much more computationally expensive and might not handle the non-linear (λT) issue.

Alex

Correction

S’ = (1-dt/τ) should read S’ = (1-dt/τ)S

Oops

Alex

The Kalman filter is really a family of filters. The salient idea is that you can have a model of the noise and the model is adaptively improved through Bayesian techniques over time. So yes the traditional technique of a feedback loop is to minimize the error between the intended value and the observed value, but with a Kalman filter one also minimizes the error in the noise model estimation through Bayesian updates.
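That predict/correct loop can be shown with a toy scalar filter — not the paper’s sigma-point version, just the simplest linear case with made-up numbers — estimating a constant that has been placed in the state:

```python
import numpy as np

# Toy illustration of the Kalman idea described above: a constant parameter
# is placed in the state vector (state equation x' = x), and each noisy
# observation triggers a Bayesian update of the estimate and its variance.
# All numbers are made up for illustration.
rng = np.random.default_rng(1)
true_lam = 1.3
x, P = 0.0, 10.0          # prior mean and variance (deliberately vague)
R, Q = 0.25, 0.0          # observation noise variance; Q = 0 -> truly constant
for _ in range(200):
    y = true_lam + rng.normal(0.0, np.sqrt(R))   # noisy "observation" of the constant
    P = P + Q                                    # predict (no dynamics for a constant)
    K = P / (P + R)                              # Kalman gain
    x = x + K * (y - x)                          # Bayesian mean update
    P = (1 - K) * P                              # posterior variance shrinks
print(round(x, 2), round(P, 4))
```

With Q = 0 the variance shrinks monotonically and the estimate settles; making Q nonzero lets the filter discount the past and track a drifting parameter, which is the behaviour discussed elsewhere in this thread.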

This can rankle people who don’t believe in the power of having a prior that is subjective, but the proof is in the pudding in situations where the Kalman filter is applied to solving some practical electro-mechanical engineering control problem. I was just talking to a colleague about the subject of who had written the shortest doctoral thesis of all time, and apparently Kalman is in the running. His thesis was 17 pages and no one on his committee could figure out what he was talking about. From what my colleague said, Kalman appeared smart so they gave him a degree. I suspect the reason his thesis was so short is that it is very much a conceptual way of looking at things.

Anyone who does modeling and gets criticized for updating their model’s estimation parameters after receiving some new data can blame Kalman. The difference is when you do this in an automated loop and it seems to improve the responsiveness of the control laws, people stop complaining. Now you understand why climate scientists get a bad rap, as they are constantly updating their models as new information becomes available. This looks like they are being wishy-washy but they are just doing what Kalman suggested years ago. The controls engineer just wants to get the plane to land safely by applying a Kalman filter, and the climate scientist is looking for the same “soft landing”. It’s not that difficult a concept to understand. No wonder it took some mechanical engineers to write the paper.

WHT,

Thanks for your comments.

“No wonder it took some mechanical engineers to write the paper.”

Bizarrely, they could have asked the weather modelling people; their data assimilation may now use just these types of methods. I think that the medium range folks at ECMWF reckon that half or more of their computing time is spent on assimilating data, including vast amounts of satellite data, with the weather model running the dynamical part of the filter and noise and Bayes running the corrected update part.

Perhaps you must upgrade the weather modellers to engineering status. It is techniques such as data filtering and using stochastic correlated noise perturbations in their ensemble runs that have pushed out the period of validity over the last decade or so. They are not so dumb I think.

Alex

Weather model data assimilation is quite sophisticated, and ensemble Kalman filtering is one technique that is used.

Don’t the climate models do that as well only much less frequently due to run times?

Judith,

There are some really smart people in weather/climate modelling. I do not know what the overlap is like in the US but in Europe, actually the UK, actually Southern England the overlap appears to be significant.

ECMWF is world class and arguably the class act, Hadley and the MET Office are not so shabby and they talk to each other and to academics like M Goldstein who I mentioned elsewhere.

Many if not all of the issues that are discussed here as if they were defects that the modellers were blissfully or willfully ignorant of, seem to be known, appreciated, and worked on.

MWF, data assimilation and reanalysis, and perhaps some of the specialised climate stuff are not so much ivory tower as business enterprises. Maybe not all models and modelling groups are born equal, but to suggest this is not PC.

Alex

Since we are really entering the realm of control theory with the talk of Kalman filters, let’s see if we can place it in the context of the data available and what a controls engineer might understand.

If I use a Proportional-Derivative (PD) model to estimate historical Mauna Loa [CO2] against Hadcrut Temperature (12 month averaged, but still monthly discretized), the cross-correlation looks like the following:

This has a very strong zero lag correlation of 0.9. The PD model is

The first term is a Proportional term and the second is the Derivative term. I chose the coefficients to minimize the variance between the measured Temperature data and the PD model for [CO2]. In engineering this is a common formulation for a family of feedback control algorithms called PID control (the I stands for integral).
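A least-squares version of that fit can be sketched as follows; the series here are synthetic stand-ins (an assumed trend plus a seasonal cycle), not the actual Mauna Loa or HadCRUT records:

```python
import numpy as np

# Hedged sketch of the PD fit described above: choose (k, B) to minimize
# the variance between temperature and k*[CO2] + B*d[CO2]/dt.
# Synthetic series stand in for Mauna Loa / HadCRUT; the coefficients
# 0.004 and 0.05 below are arbitrary "true" values for this illustration.
rng = np.random.default_rng(2)
t = np.arange(600)                                       # months
co2 = 315 + 0.1 * t + 2.5 * np.sin(2 * np.pi * t / 12)   # trend + seasonal cycle
temp = 0.004 * (co2 - 315) + 0.05 * np.gradient(co2) + rng.normal(0, 0.02, t.size)
dco2 = np.gradient(co2)
A = np.column_stack([co2 - co2.mean(), dco2 - dco2.mean()])
k, B = np.linalg.lstsq(A, temp - temp.mean(), rcond=None)[0]
print(round(k, 4), round(B, 3))        # recovers the proportional and derivative terms
```

On the real records WHT reports a zero-lag correlation near 0.9 for this form.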

The question is what is controlling what.

It is intuitive for me to immediately think that the [CO2] is the error signal, yet that gives a very strong derivative factor which essentially amplifies the effect. The only way to get a damping factor is by inverting this and thereby assuming that Temperature is the error signal and then we use a Proportional and an Integral term to model the [CO2] response. This would then give a similar form and likely an equally good fit.

This brings up the issue of causality and control, and the controls community have a couple of terms for this.

There is the aspect of Controllability and that of Observability (due to Kalman).

So it gets to the issue of two points of view:

1. The people that think that CO2 is driving the temperature changes have to assume that nature is executing a Proportional/Derivative Controller on observing/estimating the [CO2] concentration over time.

2. The people that think that temperature is driving the CO2 changes have to assume that nature is executing a Proportional/Integral Controller on observing/estimating the temperature change over time, and the CO2 is simply a side effect.

What people miss is that it can be potentially a combination of the two effects. Nothing says that we can’t model something more sophisticated like this:

This is a PI controller on temperature, mixed with a PD controller on CO2.

The Laplace transform transfer function (for Temperature/CO2) for this case is:

$ \frac{s(k + Bs)}{cs + M} $

Because of the $s$ in the numerator, the derivative is still dominating but the other terms can modulate the effect.

Now how do we bring the idea of the Kalman filter into this? The issue is that a PID controller does work but it is not an optimal controller, in the sense that it won’t work as well under all conditions of environmental noise as another controller might. So what the control engineer (i.e. Nature) will do is try out various ideas including an adaptive control scheme such as the Kalman filter. This will improve the estimation objective, i.e. in particular estimating the observable states in the presence of noise, and maintain whatever control objective is needed.

So the problem boils down, in the simple “bunny-rabbit” mind of us engineers, to optimizing the (k,B,c,M) parameters to best match what the data says and what the corrupting environmental noise might be. We want to do this so we can estimate what the future trajectory of Temperature with CO2 might be. An adaptive approach whereby we continuously track the error and apply a Bayes rule on updating parameters as new data comes in is the heart of the adaptive control techniques. Since engineers are very pragmatic about these things and try to incorporate their control laws in real-time, that sets a baseline.

But at this point, the scientists and engineers part ways. The engineer is only trying to figure out how to control the system, while the scientist is trying to figure out what the future uncertainty might be. The control laws are essentially fixed but largely unknown because nature tends to keep its secrets. Furthermore, in the scientific case, the “noise” is actually partly manifested by the uncertainty in our estimates of what the parameter values are, and then we need ways of propagating that uncertainty for future trajectories.

That is the area where I think the smart weather forecasters have started to adapt the classic Kalman filter — originally intended for solving control problems — by extending it to interpret trajectories only, treating whatever nature has provided as a feedback control system.

So the next step is that if someone has a more complicated model that will account for more than 90% of the effect (like my PID-like model does), then we can try to understand that model’s parameters and how they can be adaptively modified to give us good error bounds on future trajectories.

BTW, I am writing this up to try to understand this climate science thing and will be archiving my thoughts at http://theoilconundrum.blogspot.com/2011/09/sensitivity-of-global-temperature-to.html

Missing cross-correlation figure here:

Missing Laplace Transform transfer function below:

$$ \frac {s(k + B s)}{c s + M} $$

Making errors in latex is so easy, that I test almost all of that on a site that allows previewing the message that includes latex without posting – i.e. my own site.

After two other trials I took in use the latex interpretation offered by wordpress.com also for those WordPress sites that are located elsewhere. Thus the testing should be valid for this site as well.

“My understanding is that the filter can estimate state variables but not constants such as model parameters”.

Alex – One of the nice things about this thread is that it has motivated me to expand my knowledge of Kalman Filters from almost non-existent to a level slightly greater than almost non-existent. Hence, a naive question/comment.

As I understand it, the authors are using the KF not to estimate how λ varies according to the data (it’s a constant at least for these purposes, with a true value), but rather to estimate the probability that if the true value were known, it would be found within a specified interval. I see the latter – i.e., probability density – as a variable that will change according to the input data. Isn’t this a perfectly respectable use of the KF principle, or am I missing something?

Fred,

There is an apocryphal study into shoaling that showed that it was governed by the fish with the worst eyesight. Now that may be myth but I identify with that poor unguided fish. My knowledge is very limited but my motivation to get somewhere lingers on. What I do not wish to do is misguide the shoal.

I don’t think your question is naive and I am not sure I know the answer. I am a naïf and I prefer people to know that. That I may often be wrong would bother me less than being uninteresting. Here is my “interesting” response.

I am fond of saying “Children, Dogs, and Neural Networks, you may know what you have taught them yet know not what they’ve learned.”

Kalman filters seem to fall under the broad heading of learning machines. They infer a world view (state vector) based on imperfect data.

There is a slightly less apocryphal study in using learning machines to spot tanks lurking in German forests. We taught them about tank recognition using surveillance images which “obviously” differed between scenes with tanks and scenes without tanks. We were teaching the machine about warfare; it was learning about botany and human behaviour, and what was “obvious” were changes in the flora between when we were playing around in tanks and when we weren’t.

If I were this Kalman filter what would I be learning from the temperature and forcing histories?

I can argue about special cases.

Perhaps a crucial one is: if the forcing history was uniformly zero what could I infer about λ?

It seems clear to me that the answer is nothing. λ and S would I think be correlated but without a forcing they would be free to wander around in tandem using a scale of opportunity.

A second question is how concerned would I be for reconciling the present burden on λ to its burden in the distant past?

It is my suspicion based on the nature of the state vector (it only contains the immediately previous state) and the retained error matrices (which are persistent but subject to updates), that I would be encouraged to discount the past. It is but suspicion but if true, defining λ as a state variable would allow λ to be weighted or biased with regard to a plausible preference for getting the best current estimate over the best long term estimate. Perhaps we should look out for λ showing hunting behaviour.

What precisely is it that I, the machine, have been instructed to optimise? I am giving an answer but what precisely was the question?

It is my suspicion that I would only be answering the “intended” question optimally when both the forcing and the temperature were moving strongly and in tandem and hence relatively scaled by λ, sub-optimally when only one or the other was moving, and an unintended or unanswerable question if the forcing history was flat for a sufficient period. I also think that this could be discovered if one investigated the behaviour with more test cases, which is what we did with the tanks.

If I, the machine, were not discounting the past perhaps I would by now be sticking closely to a smooth of λ = (F(t)-F(0))/(T(t)-T(0)), the ratio of the total changes over the record, and not going on a fishing expedition if that is what λ is doing post the late 1990s
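That smoothed ratio estimator is simple enough to write down; here it is on synthetic records (an assumed forcing ramp and a noisy response with a known λ of 1.3, purely for illustration):

```python
import numpy as np

# Sketch of the ratio estimator lambda ~= (F(t) - F(0)) / (T(t) - T(0)),
# with the endpoints smoothed over a window to tame the noise.
# Synthetic data only; the real calculation would use observed records.
rng = np.random.default_rng(4)
years = np.arange(1900, 2001)
F = 0.02 * (years - 1900)                        # assumed forcing ramp (W/m^2)
T = F / 1.3 + rng.normal(0.0, 0.05, years.size)  # noisy response, true lambda = 1.3
w = 10                                           # smoothing window (years)
lam_hat = (F[-w:].mean() - F[:w].mean()) / (T[-w:].mean() - T[:w].mean())
print(round(lam_hat, 2))                         # close to the true value of 1.3
```

Unlike the filter, this estimator weights the whole record through its endpoints only, so it cannot go on the kind of fishing expedition described above.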

Also I would question whether my confidence in λ, is my confidence in the range about its value as a constant, or my confidence in the width of the corridor along its trajectory as a variable. This is very important and a variant on your question.

Am I, the machine, correctly answering an unintended question?

The intended question may also be up for grabs. I think that this thread may show that we are arguing about the inferred question, each with our own inference as to what is the intention.

My inference is currently:

If λ is constant, what does history tell us about its value and our uncertainty in that value?

Whereas the question asked of the machine seems plausibly to be: if λ were a variable, what does history tell us about an optimal estimate of its trajectory and our uncertainty in that trajectory, according to some implicit criterion for what is optimal?

The filter may have acted in precisely the way that you described, for that is I think the intention and the belief of the authors, but I am not sure that it is answering the intended question. I think that this is decidable by the simple expedient of more fully exploring its responses to special cases, or by using logic to figure out how it would interpret the task that we think we have set it.

I am warming to this paper; I suspect it is important but not necessarily in the way that the authors intended. It seems likely that it is coming up with a worthwhile result, but one requiring some caution and a dose of the Held multiplier to get from an estimate of TCS to an estimate of ECS. FWIW my opinion would be that the TCS values are worryingly high with respect to the very long run but that is a country perhaps best left undiscovered. We need a working value for TCS much more than an argument about ECS.

This has been a long old missive but I have not tried to convey just my answer which may well be incorrect but my illustration of the process that leads me.

I have read much of what you have written elsewhere on this thread and I think I agree with what you have said, if you imply the impossibility of inference without some model or other, no matter how implicit. I find it poetic that some who reject models seem to do so from their very narrow model of reality.

Finally, “My understanding is that the filter can estimate state variables but not constants such as model parameters”.

In its simplest form the filter does not seem to be able to update a constant unless it is an observable (when it becomes trivial). Here they have a scheme that correctly required using an ensemble of filters to infer the constant, but it requires that it be treated as a variable, albeit one whose unperturbed dynamics is to remain constant. I think they have done the correct thing and that their result is in some sense optimal. That said, I cannot see that this optimisation is necessarily what we think it is. That would not necessarily be a big problem; it would still be a correct answer, but to a related and different problem. I don’t know the answer but I know I need to question.

Alex

Alex,

Great comment. You are re-treading some well-worn arguments between Bayesians and frequentists.

Well over 30 years ago, I developed some geostatistical mapping packages, largely Bayesian, but frustratingly could not sell their usage to my own management at the time. (The basic methodology is now widely used, after extensive third-party development and usage clarified the benefits, which I guess does not reflect well on my salesmanship.) The packages were designed to estimate unbiased mean and variance of certain key properties, using spatially varying conditional pdfs built from sparsely distributed sample points. The main reason I could not convince my management of their utility? They could not accept that our confidence in a critical estimated value could decrease (i.e. increased uncertainty) with the acquisition of more data, especially since they spent a lot of their time justifying expensive data acquisition by quantifying the benefits of uncertainty reduction.

This apparent paradox (the possibility of increasing the variance in a conditional pdf with the introduction of new data) is a common feature of most Bayesian approaches, and stems directly from the need to start with a model. The structural form of the model is then a major determinant in how one estimates the statistics of the random variables one is seeking to estimate. If the structural form of the model is well proven by multiple applications to many similar datasets, then this is not necessarily a big drawback. For a one-off application, however, it is difficult or impossible to separate model uncertainty from information uncertainty.

Anyway, great post.

Paul,

Thanks, I am glad someone liked it. It took an age to write.

I fear that we might come to blows elsewhere. I do not wish to praise this paper but I come not to bury it either. Beware that I commonly don’t take myself all that seriously.

BTW, I have checked Eq 3.6 and it is I think correct but unclear. Ft appears to be a constant, commonly taken to be 3.7 W/m^2, and Tt is what others term S. With that in mind the rest looks OK. It might be a cack-handed way of saying that a flat prior in λ leads to a 1/S^2 prior in S, but I think this is what is going on.
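For what it is worth, the change of variables behind that reading of Eq 3.6 can be written out explicitly: with S = Ft/λ and a flat prior on λ, the standard Jacobian rule gives

```latex
p_S(S) \;=\; p_\lambda(\lambda)\left|\frac{d\lambda}{dS}\right|
       \;=\; \mathrm{const}\times\frac{F_t}{S^2}
       \;\propto\; \frac{1}{S^2},
```

which is the 1/S^2 behaviour in S described above.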

Alex

Alex,

See my added note below re Eq 3.6.

Incidentally, I was not suggesting that this paper should not be published. My view was, and remains, that some major revisions would have been desirable.

Doesn’t it come to “things may be crazier and fuzzier than we/you thought”? And the way you discover this is with more data — which declines to conform to your original guess/model.

There are several points brought up in this thread, but two that really stand out to me.

1. Clouds… That makes sense since they are one of the larger known unknowns. Clouds may always be a feedback to some forcing, but if that forcing is not CO2 increase, then they could be called a forcing. Right or wrong?

2. UHI… The Urban Heat Island effect is adjusted in the records, how well is debatable, but to me it is minor compared to the human heat island effect. Irrigation, both farm related and suburban sprawl related, and land use change have a large impact. Can UHI be corrected without considering the total impact?

And since the merchants of doubt theme always pops up, we now have “Big Gerber” playing the good arsenic / bad arsenic card in apple juice. If you have a limit on one thing that is poorly explained it can lead to confusion with other items in comparison. This boils down to commonly accepted statistical methods, something that is part of the AGW debate.

Dallas – As long as clouds are a feedback to any of the forcings incorporated into the analysis of forcing/temperature relationships, their behavior is already taken into account and need not be ascertained separately.

For this paper that is correct. For the Spencer/Dessler debate it is not. Which gets us back into the same old tired disputes caused by language. Technically, sensitivity is the climate’s response to CO2 doubling because that is the more common usage. In reality, it is the climate’s sensitivity to any change in forcing/feedback.

I agree, and in fact, some of the forcing entered into the analysis did not come from CO2 – e.g., aerosols, solar changes, etc. The authors have assumed all persistent forcings to have similar sensitivities. This is probably wrong, but close enough not to matter too much.

Spenser, not Spencer!!

>:(

The latter is a turn-of-the-19th C. poet.

Yes its Spenser as in Spenser: For Hire.

Unfortunately, unlike the Spenser with no first-name, good ol’ Roy boy has no code of ethics :) :) :) :) :)

Actually, my post is wrong. It’s Roy Spencer. At least, he thinks so.

:(

;)

Dallas, clouds can also be part of the aerosol forcing, which is where I think their main variation is coming from. Aerosol forcing varies mostly with the rate and area of aerosol production, since its effects are short-lived and mostly short-ranged.

Regarding the use of climate sensitivity as a constant. This is a concept that is disputed by the results of studies on irrigation. You may be able to calculate the effects of ice albedo but it gets much more complicated once the ice becomes liquid water. In order to understand exactly what the climate sensitivity is one would first need to know why the effects of irrigation vary according to latitude and apply those concepts to a warming world in the process of deglaciation. This difference is not insignificant.

“Irrigation cooled the northern mid-latitudes; the central and southeast United States, portions of southeast China and portions of southern and southeast Asia cooled by ~0.5 K averaged over the year. Much of northern Canada, on the other hand, warmed by ~1 K.”

Imagine if the Sahel region came into big bucks and started big time irrigation.

From a quick read through of the paper, I’m not sure that much reliance can be placed on its results. Isn’t the basic energy balance model set out in equation 2.1 wrong? It models heat flow into the deep ocean (from the mixed layer) as a physically constant parameter multiplied by the difference in temperature of the mixed layer and the deep ocean – presumably the depth averaged mean of the latter. This is not consistent with a diffusion or upwelling-diffusion model, and I doubt it can be seen as a reasonable approximation to reality over the time periods involved in this study.

I am also dubious about the paper’s results on constraining aerosol forcing, given that it is based on global mean temperatures. Evaluating the time evolution of 20th century temperatures over several latitude bands, as done in Forest 2006, provides much better discrimination as to aerosol forcing, given the strong latitudinal variation thereof.

I also agree with the comments about this being a model dependent rather than a pure observational study. Although that doesn’t necessarily mean that its results will fail to reflect reality, the less direct linkage to observations must at the very least reduce the confidence that can be placed on the study’s results.

Finally, with any study like this that depends on complex mathematical methods, no reliance should be placed on any of its results until the authors have publicly archived the full computer code, as well as the data sets, used to generate the results. Any non-obvious shortcomings in the methods, or errors in implementation thereof, are only likely to be revealed by examining and running the computer code.

Nic – I don’t think equation 2.1 is wrong in theory for long term equilibration of the deep ocean with the mixed layer. Ultimately, the laws of thermodynamics require heat to flow from the warmer to the cooler body until they equilibrate, regardless of the mechanisms involved. Over intervals short of equilibration, there may be deviations from an average rate constant, as you suggest. However, the deep ocean component is largely irrelevant to the time periods analyzed, which involved the cumulative effect of annually computed changes. Because the mixed layer heat change dwarfs any deep ocean transfer on these timescales, variations in the latter were appropriately excluded from the analysis.

The effect of uncertainties in forcings (aerosols and others) is described in the text, Table 1, Figure 7, and elsewhere, and changes the TCS estimates relatively slightly.

The use of a model in this case is not an obstacle to deriving conclusions from observational data but the only way those conclusions can be reached. This is typical of any phenomenon involving a multitude of interacting variables, and is a standard approach. One can’t simply take the observations and do anything with them without a mathematical model. The relevant question is not whether a model is used (it has to be) but whether the model included the appropriate inputs and relationships, and applied the appropriate statistical treatments.

I agree with you that there may still be non-obvious shortcomings in the method. Whether the computer code would unearth these is hard to predict, but replication of the results by independent groups will be a good test.

The equation 2.1 is a crude expression of the presence of several time scales. A detailed analysis cannot be represented by two coefficients, as the system is not a combination of two well defined and well mixed subsystems with a weak link between them. It’s clear from the paper that the Transient Climate Sensitivity is also an abstraction that is not a natural well defined parameter of the Earth system. On the other hand, the paper contains sufficient justification for its implied assumption that the temporal scales happen to be such that the TCS is a natural parameter to use.

This conclusion appears to be rather robust and should not be a cause of worry.

More problematic is the dependence of the results on the selected model and on other details of the methodology. Understanding how far the results are really enforced by the data, and how far they are influenced by the methodology, would require quite a lot more information than a paper like this can provide. The authors did some experimentation by varying the inputs artificially, but more would be needed, and that should include changes in the methodology as well. As an example, one could try what would happen if the climate sensitivity were used as a variable when the Kalman filter was applied; or, if the method failed with that variable, one could look at how far one can go towards it and still get results.

Another thing to vary could be the noise model, where significant autocorrelation could be added, although I cannot tell what that would do to the solvability of the problem.

Some of the changes might influence the expectation value or the mode; others might make the tail much fatter. (Using climate sensitivity as a variable should lead to a fatter tail on the side of high sensitivity.)

“As an example one could try what would happen if the climate sensitivity were used as a variable.”

That’s an interesting suggestion. I see this paper as one with potentially important implications for our ability to quantify climate responses to CO2 and other forcings, which means that it deserves to be subjected to considerable “stress testing”. If you were to vary TCS as a test, how would you proceed in practice – for example, how would equation 3.6 be rewritten? Given that the method yields a confidence interval for TCS (see, for example, Figure 6), it’s clear that an entire range of TCS values are compatible with the observational data. What would be the criteria for a successful test?

I could summarize the impression that I have got without going deeply in the methodology.

1) The paper examines what can be learned from a very limited set of data that has already been studied by many others.

2) The general rule is that the more assumptions a model introduces, and the stronger they are, the more accurately a model parameter can be determined. Here the essential assumptions are the structure of the basic model and the model of the stochastic noise.

3) When equivalent assumptions on the model structure and the nature of the noise have been made in earlier analyses, very similar results have been obtained. Here one essential point is the use of λ as the parameter to be determined directly by the method.

4) The approach introduces the Bayesian approach in a technically well defined way, which allows presenting figures that show how the evidence on the values of the model parameters has accrued over time. It seems obvious that the method is reasonably correct, but whether it’s really correct and without some subtle bias is a matter that I cannot judge either way. I only note as a general observation that there is often a risk of badly understood (but usually rather weak) methodological bias in nonlinear methods.

5) Testing with the GCM provides evidence that the particular GCM is defined in such a way that the present model assumptions are valid for it. It’s a matter of judgment how much that tells about the validity of the present model assumptions for the real Earth system.

========

I personally find the basic structure of the model plausible, but the assumptions made about the “noise” questionable. Finding the basic structure plausible should not be interpreted as its being almost certain to be right. It just corresponds to the standard way of looking at the energy balance. The dynamics of the Earth system is, however, so complex that the standard way might lead to misleading results, in particular if there are essential nonlinearities in the feedback-like responses on the scale of potential future changes.

Nic,

I have no problem with Eq 2.1. (I do have a major problem with the text surrounding the derivation of Eq 2.3.)

However, whether one likes 2.1 or not is largely irrelevant, since this model is not used anywhere in the paper. The model used is basically the linear feedback equation with a single heat capacity term – the single box model.

Equation 2.3 says that the internal variability is essentially a mean-reverting random walk, where the “mean” is the value produced by the deterministic equations. The annual stochastic variations are independent, as in a normal random walk, but the mean reversion term keeps the value within the specified range of variability.
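For concreteness, this kind of mean-reverting annual noise is, in discrete time, an AR(1) process (a discretized Ornstein–Uhlenbeck process). A minimal generic sketch — the reversion rate and noise amplitude below are arbitrary illustrations, not the paper's values:

```python
import random

def simulate_mean_reverting(n_years, mean=0.0, reversion=0.3, noise_sd=0.07, seed=1):
    """Discrete-time mean-reverting random walk (AR(1)): each year an
    independent Gaussian shock is added, while the reversion term pulls
    the value back toward the deterministic mean."""
    rng = random.Random(seed)
    x, path = mean, []
    for _ in range(n_years):
        x += reversion * (mean - x) + rng.gauss(0.0, noise_sd)
        path.append(x)
    return path

path = simulate_mean_reverting(200)
```

With reversion strength r, this is x[t+1] = (1 - r)·x[t] + shock, so the only autocorrelation it can produce is the geometric decay (1 - r)^lag — exactly the "trivial" autocorrelation structure being discussed here.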

In my message above I stated that I find the choice of the stochastic model questionable, as it excludes all autocorrelations except for the trivial one related to the mean reversion. Thus the model cannot produce any oscillatory behavior or shifts related to internal dynamics. Excluding all that is a sure way of getting rather narrow confidence ranges for the remaining two model parameters (λ and S; the aerosol scale factor is added at a later stage). The paper contains some tests on what happens when fluctuations are added to the data, but that doesn’t fully establish the validity of their results.

Pekka,

I agree with all you have written. At best the authors have demonstrated that their model has the ability to incorporate a noise term with some (restricted) autocorrelation. They make no attempt to demonstrate that this term matches actual non-radiative flux variations. Similarity of fluctuations to those in models with known poor ability to match ENSO hardly makes a good case to support the structure of the term. And in fact, it becomes largely irrelevant in their match to “observed” temperature data, which actually turns out to be data modified to exclude ENSO variation.

Paul & Pekka,

I agree but differ. If the authors intend that their results indicate the “answer” then you are correct, and I suspect that they do. However, if the question is whether the result is useful, then I think it probably is.

Useful in the sense that it gives us the answer to a question that we may need to answer going forward.

Viewed as a problem in navigation, in guiding planet Earth, a wobbly answer based on an imperfect model with restricted and noisy data is useful provided that it comes with commensurate confidence. The imperfections in the dynamic and noise models and in the data should be reflected in the confidence. These types of filter are, I think, better suited to the task of navigation than to deep analysis. They are both synthetic and analytic, but their analysis is primitive and largely hidden.

In terms of the dynamics it would be better if they were more capable of producing persistence, and this could be done in a statistical sense. It would, I think, be unwise to include too much questionable dynamics. If the dynamics that you may seek are not predictable from the history (which seems to be the case), adding them would cause the filter’s dynamics to give worse predictions. Now that would cause the confidence intervals to balloon, so if that is your wish that is fine, but I would see this as an artificial method of achieving it.

Given the paucity of the data and its vagaries, I wonder whether it is up to extracting more information than is done here.

It would be interesting to see whether the complexity of their model is in any way optimal. I strongly suspect that it should not be any weaker but I suspect that there is little to be had from improving its dynamics.

It is my prejudice that results such as these are little different from what the grandest of simulators produce, and that as a practical tool for navigation they are plausibly more useful as of this date.

If the question is: given the current circumstances, what is our best working estimate of the TCS? Then this gives a plausible answer.

I agree with your lists of imperfections, but I might say that given such imperfections these methods provide an answer as opposed to no answer. It is easy to suggest that it must be getting its confidence intervals wrong, but that depends on what we mean by wrong. If we mean: are they sufficient to provide ongoing guidance, I suspect they are, and may in fact be in some way optimal. That is what I believe these methods are intended to achieve.

I suspect you are both right, but is being right optimal?

Alex

Alex,

I don’t propose that they should try to justify any other specific internal dynamics, but rather that the stochastic model should also allow for more complex autocorrelations. That would widen the confidence interval for λ. The other proposal that I have made is to change the basic model in a way where the new parameter(s) would have an essentially nonlinear relationship with the present one(s).

The sensitivity of the results (values and confidence intervals) on such changes would tell something on the influence of specific choices on the outcome.

At present I think that this is a new method for getting essentially the same results that have been obtained before when equivalent assumptions are made. Some other assumptions have led to different results, and I would expect that the Kalman filter approach could reproduce those as well, when the assumptions are modified correspondingly.

Pekka,

Sometimes my ramblings are a bit of what I think is called dialectic. If I seem to put words in mouths, that is a hazard. Mostly I am arguing with myself.

We do have a point of debate. I agree about the autocorrelations, but I cannot be as sure as you that they would affect the confidence intervals in the way you say. I see it as a balance of effects. I see improving the covariance structure as leading to a better predictor for T, which may lessen the burden on S and on the assumed observation noise in T. This in itself should lead to a better prediction for λ. The big question is whether a better predictor for λ means a better predictor for λ as a constant or for λ as a variable. It is, I think, the latter, and I suspect that the expectation of λ may not exist either usefully or even actually, i.e. λ may not converge but wander. I have written elsewhere that the confidence intervals express confidence in a trajectory for λ, so I suppose I must agree that they must be too narrow if λ is not convergent. That said, I cannot be sure that improving the predictor for T would not make matters worse. This is getting too speculative, I think. These are issues better addressed by someone who could build such a filter.

Alex

As far as λ goes, I agree with Alex. Actually, I cannot see how λ could not be variable, due to water vapor. The lapse rate will likely change, but with more water vapor and fairly constant solar forcing there will be changes in precipitation rates and cloud cover. The odds of those changes being linear are pretty small, other than for short time frames. Even without a GCR aerosol seeding effect, more water vapor with the same cloud formation mechanism will result in more clouds during favorable conditions and not much change during unfavorable conditions.

Alex,

We are referring to somewhat different modifications of the method. You mention “improving the covariance structure”, while I have in mind more freedom in it, thinking that the analysis cannot really pick any particular solution over the others. Adding more variables and flexibility would then mean that there are more alternative ways of reaching agreement within the chosen confidence level.

There is of course the possibility that the additional freedom would result in a solution of significantly better agreement with the data, and that this solution would lead to rather narrow confidence intervals for the parameters, but I consider this possibility very unlikely, taking into account that the present model already reaches a fairly good agreement with the data. Is this second alternative what you have in mind?

Pekka,

Yes I think I mean your second alternative and I should have said lagged covariance. I can see now that you clearly had other thoughts in mind.

Although maybe neither hugely practical nor sensible, there is no inherent bar to keeping the last few T in the state vector and requesting optimisation over all the resulting λs. A large number of extra Ts would more or less guarantee over-fitting/optimisation crimes against the data. By “improving the covariance” I meant simply going from AR[1] to AR[n] and optimising on that. My usage of the term improvement is dubious now that I think about it.

I have looked at this before and FWIW AR[1] is a pretty awful model, and I am not closed to the idea that perhaps AR[n] would be noticeably better for some n ~ 6. That said, there are other approaches. It is possible to approximate the amplitude/frequency spectrum with a slope (in the log-log domain) using a single parameter with less risk of over-fitting, to convert that directly to a lagged covariance matrix, and then to optimise on the parameter. Actually that wouldn’t be so shabby, at least as a second point of reference; it doesn’t capture structure (no phase coherence) but it does provide LTP. FWIW I have been down that path and I think it has merit. It is easiest seen as an approximation to the impulse response function as modelled by the power law t^n with n ~ -0.5.

I really have looked at this and I know of no better single parameter stochastic emulation for the climate noise power spectrum.

Now that I think about this I might be seeing something like your point. This power law method might give a truer optimisation and at the same time a weaker fit. I will have to think about it but I know that in general using a power law decouples λ for periods much longer than AR[1] does (two or three decades as opposed to around 5 years) which would give rise to wider confidence intervals and slower convergence.

Thanks I obviously needed to think about this.

Alex
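The power-law route Alex sketches above — an impulse response h(t) ~ t^n with n ≈ −0.5, converted into a lagged covariance matrix — can be illustrated roughly as follows. The truncation length and normalization are my own arbitrary choices, not anything from the thread or the paper:

```python
def power_law_covariance(n_lags, n_exp=-0.5, sigma=1.0, kmax=1000):
    """Lagged covariance matrix of a process driven by white noise through
    a power-law impulse response h(k) ~ k**n_exp (long-term persistence).
    The autocovariance at lag L is sigma^2 * sum_k h(k) * h(k + L),
    truncated at kmax; the matrix is Toeplitz by stationarity."""
    h = [k ** n_exp for k in range(1, kmax + 1)]
    cov = []
    for lag in range(n_lags):
        c = sigma ** 2 * sum(h[k] * h[k + lag] for k in range(kmax - lag))
        cov.append(c)
    return [[cov[abs(i - j)] for j in range(n_lags)] for i in range(n_lags)]

C = power_law_covariance(5)
```

The resulting covariance decays much more slowly with lag than the geometric decay of an AR[1] process, which is the mechanism by which this kind of noise model would decouple λ from T over multi-decadal rather than ~5-year periods.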

Ooops a mistake!

Two parameters are obviously required, not one, so it is some n ~ 0.5 and some λ, and both would need to be optimised.

Also by “decoupled” I simply referred to the approximate period before T can imply anything meaningful about λ.

Alex

Going back to Fred Moolten’s comment:

“Nic – I don’t think equation 2.1 is wrong in theory for long term equilibration of the deep ocean with the mixed layer, regardless of the mechanisms involved. … However, the deep ocean component is largely irrelevant to the time periods analyzed, which involved the cumulative effect of annually computed changes. Because the mixed layer heat change dwarfs any deep ocean transfer on these timescales, variations in the latter were appropriately excluded from the analysis.”

I don’t think that this is true at all. The deep ocean is not a single box. A more realistic model is of heat being transferred to the top layer of the deep ocean by diffusion from the bottom of the mixed layer, and permeating downwards. The depth of the layer of deep ocean to which the mixed layer is effectively coupled depends strongly on the timescale involved. In electrical terms, diffusion into the deep ocean would be modelled as a ladder of low value resistors in series with small capacitors attached at every rung, or a lossy RC transmission line, not as a single high value resistor in series with a high value capacitor (which is effectively what has been done in this study).

Taking an average heat transmission coefficient for the deep ocean, as the authors have done, is therefore not appropriate. Their method fits both gradual temperature changes over the 20th century and the temperature response to short lived volcanism using the same heat transmission coefficient for the deep ocean, whereas the effective coefficients for these two cases will be quite different. Without obtaining working computer code and modifying it to reflect more realistic assumptions about the deep ocean, it is simply not possible to say what the effects of such a change on the study’s results would be.

Nic,

All those details that you discuss would be important if the issue were the warming of the deep ocean, or what happens at longer time scales than the paper considers.

Concerning heat transfer to the deep ocean, diffusive heat conduction is really slow and almost insignificant in comparison with the convective processes of the thermohaline circulation or meridional overturning circulation. Those parts of the deepest ocean that are not touched by these convective processes can be forgotten in most considerations, as their warming is slow even by the standards of the deep ocean.

What’s important for the paper is a separation of processes of time scales well below 100 years from those of longer time scale. Such a separation is certainly not complete or accurate, but the relative strength of the processes with time scale in the problematic range is not too large for the definition of transient climate sensitivity to make sense.

Pekka,

May I ask if you have actually done any calculations to support your assertions that heat transfer to the deep ocean (being all except the mixed layer) can be ignored over the timescales the paper considers? I have, and come to quite different conclusions.

The paper assumes a mixed layer 60m deep. If one assumes an effective mean deep ocean diffusivity of 0.0004 m^2/s (in line with that which matches a typical AOGCM) then the time constant associated with the next 250m depth is only 10 years, a fraction of the time period covered by this study. It follows that the coefficient (beta) of heat flow into the ocean will vary over the period of the study, depending on how much heat has diffused into the upper layers of the deep ocean (thermocline) and raised the temperature of the interface of the deep ocean with the mixed layer, as well as being different for short-lived volcanic forcings.

Accordingly, I don’t think what you assert is true.
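As a back-of-envelope check on the timescales being argued over here, the diffusive timescale for a layer of depth d with diffusivity κ goes as τ ~ d²/κ. This is order-of-magnitude only (the exact prefactor depends on geometry and boundary conditions), and the 1.4e-7 m²/s figure is simply the well-known molecular thermal diffusivity of still water:

```python
def diffusive_timescale_years(depth_m, diffusivity_m2_s):
    """Order-of-magnitude diffusive timescale, tau ~ d^2 / kappa, in years."""
    seconds_per_year = 3.156e7
    return depth_m ** 2 / diffusivity_m2_s / seconds_per_year

# 250 m layer with the AOGCM-like effective (eddy) diffusivity quoted above
tau_eddy = diffusive_timescale_years(250.0, 0.0004)        # a few years
# same layer with purely molecular conduction in still water
tau_molecular = diffusive_timescale_years(250.0, 1.4e-7)   # ~10^4 years
```

The contrast between the two numbers is the point of the disagreement: with an effective eddy diffusivity the upper thermocline responds within the study period, whereas molecular conduction alone would be utterly negligible.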

Nic,

I didn’t make the claim that you disprove. I only said that convection is very much stronger than heat conduction in warming the deep oceans. The coefficient in the diffusion differential equation for heat conduction in water is only 0.00000014 m^2/s when no convection or turbulent mixing is present.

Their model is certainly crude in how the heat transfer to the deep ocean is handled, but they do check the influence of using a mixed layer of 120m or 180m rather than 60m. The results change, but not dramatically.

Nic,

0.0004 m^2/s is rather large and might explain a few things.

I do recall values around 0.0001 m^2/s for a simplified upwelling-diffusive model ocean from a paper published in the 1990s.

The amount of heat uptake scales with the diffusivity.

FWIW the upwelling has a major effect in these model oceans: it creates a scale depth of ~1000m and a scale period of ~300 years.

The scale height is for the exponential variation of temperature at medium depths 1km – 2km+

The scale time is just the e-folding to that profile.

After a few scale times the diffusion is matched by the upwelling and the down flux stops.

That is how the oceans were modelled and they got reasonable results. In my opinion that figure is way too high and would end up with way too much flux.

Alex

Nic – In reviewing this discussion, including the comments of mine you quoted, I realize that the terms I used were chosen poorly, or at least ambiguously. In particular, when I referred to “heat change”, I should have said “temperature change”. Heat will of course continue to flow from the mixed layer into the deeper ocean, but as long as the temperature of the latter changes only imperceptibly during the observation interval, the authors argue it can be ignored in going from equation 2.1 to 2.2 and 2.3, because changes in the quantity (T – To) will be almost identical to changes in T alone, and it is the relationship between changes in T and changes in F that defines sensitivity.

(I also notice that the paper describes To as being not very different from 0. I’m not sure what was meant. T itself must be in degrees K, and deep ocean temperature tends to be around 2 deg C. However, heat flow into the deep ocean will be a function of the temperature difference between the heat source and the deep ocean, and will cease when the source temperature reaches the freezing point, which is only slightly below 0 C. In any case, (T – To) can’t be equated with T. Nevertheless, when changes in T are measured, the scale (C vs K) is irrelevant. All in all, I’m not sure the temperature scales are relevant to the outcome of the calculations, but it still may be incorrect to refer to λ as the sum of γ and β.)

Fred,

I haven’t reread that section but there is a logic in assigning a value to To equal to the unperturbed surface temperature. This is on the basis that in the unperturbed state the temperature profile is steady. If that be the case it is only the difference between T and To that matters. Viewing it that way has some advantages and significantly it shows the importance of the assumption that they seem to make that the system was in equilibrium at the start of the run. In truth there is an uncertainty in our knowledge of To.

Theirs is a simplification, and in general I found they raised an issue and then muddied rather than clarified it.

When I read that section, I read what made sense to me which may be a different thing to what they said.

Alex

Alex – From the text:

“consider a minimal model of the climate system that might be appropriate for determining both ECS and TCS. Such a model contains two independent variables representing perturbation surface temperature (T) and deep ocean perturbation temperature (To), namely

C(dT/dt) = F – γT – β(T – To), Co(dTo/dt) = β(T – To)

where γ and β are positive parameters, C and Co are heat capacities of the mixed layer and deep ocean, respectively, with Co much greater than C, and F is the net perturbation to the climate forcing (including both natural and anthropogenic factors). In final equilibrium, T = To, and the temperature response (the ECS) to a specified forcing, say F for 2xCO2, is given by T(ecs) = F(2xCO2)/γ. On decadal timescales the response of the deep ocean is small and To is approximately 0. The system then obeys C(dT/dt) = F – λT, with λ = γ + β.”

While I agree that their approach makes sense in general, I’m not sure about the statements about the value of To, or that λ = γ + β if To is not zero. I speculated that this would not affect the final results, however.
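The quoted two-box model is easy to integrate numerically, which makes the two limits concrete: T(ecs) = F/γ in final equilibrium, and T relaxing toward F/(γ + β) on short timescales while To stays near 0. All parameter values below are arbitrary illustrations, chosen only so that Co is much greater than C:

```python
def two_box_step(T, To, F, gamma, beta, C, Co, dt):
    """One Euler step of the quoted two-box energy balance model:
    C dT/dt = F - gamma*T - beta*(T - To);  Co dTo/dt = beta*(T - To)."""
    dT = (F - gamma * T - beta * (T - To)) / C
    dTo = beta * (T - To) / Co
    return T + dt * dT, To + dt * dTo

# arbitrary illustrative values; Co >> C so the deep ocean responds slowly
gamma, beta, C, Co, F = 1.3, 0.7, 8.0, 800.0, 3.7
T, To = 0.0, 0.0
for _ in range(200000):                   # long integration under constant F
    T, To = two_box_step(T, To, F, gamma, beta, C, Co, 0.01)
# Early on, T relaxes toward the TCS-like value F/(gamma + beta) with To ~ 0;
# at final equilibrium T = To and T -> F/gamma (the ECS-like value).
```

After a long but finite run, T sits between the transient value F/(γ + β) and the equilibrium value F/γ, with To lagging behind T — which is exactly the separation of timescales the paper's TCS definition relies on.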

Fred,

OK, I did warn that I had not reread it.

Rereading confirms my muddying metaphor. Not having a clue what they mean by e.g. “… deep ocean perturbation temperature (To) …” I lapsed to my default setting. Ignore the verbiage, make some sense of the equations, and move right on down the line. Not brilliant, but better than getting overwrought as others have. I do read stuff sometimes, honest I do. I read some of the references that Isaac H. gives several times, and likewise his posts.

A lot of time and electric ink, could have been spared by something like:

“We have assumed that the net heat flux to and from all parts of the thermal system other than the slab ocean is small compared to F, and on that basis we have ignored it. This is, we feel, justified when determining estimates for TCS, and marks its distinction from ECS.”

Alex

Alex and others – I’ve added additional commentary downthread at a standalone location in hopes of attracting further opinions on some of these issues.

Nic,

Eq 2.1 brings To (the deep ocean temperature) into the argument mostly for the purpose of chucking it out again. For some reason they thought it worth discussing the difference between ECS and TCS, and they do so in a way that is over-simplified.

It is a bit of a red herring as their approach can have nothing of substance to say about ECS or To.

It seems that the authors have spread a good deal of confusion about what they have and have not done. Here I think that it may not matter but they would have been better off leaving some things unsaid.

Alex

I like a number of things about this paper. However, having now been through it in more detail, my opinion is that it should not have been published without major revision.

It took me several reads of the paper to work out what the authors were actually doing, because of misleading references to governing models which were not actually used in their final attempt to match data – very frustrating. And what they actually did does not correspond well with the claims made about the paper in the abstract and introductory paras. It would have been better, in my view, if the authors had stated clearly that their intention was to show the potential value of a Kalman filter approach applied to a simple feedback equation, rather than give the impression that the values derived from their tests were in any way meaningful per se.

My summary of the paper would be:-

“Using a single-capacity linear feedback model, the authors demonstrate the potential utility of the application of a Kalman filter to temperature time-series, which permits the simultaneous estimation, continuously updated in time, of two parameters, one of which is the linear feedback coefficient, and the second of which controls total forcing. The temperature series are taken from selected GCM results to allow direct comparison with the GCM model parameters, and, in one instance, a modified version of HadCrut3 observational data which was preprocessed to remove ENSO variability. Although the paper shows that it is potentially feasible to include a noise term which may be adapted to reflect natural variability in the form of non-radiative flux changes, no attempt was made at this stage to model ENSO variations. In these applications the adjustment of total forcing is limited at this stage to controlling the relative contributions (via a single parameter) of two predefined forcing time-series – notionally a non-aerosol and an aerosol series. THIS SEVERELY RESTRICTS THE ABILITY OF THE MODEL TO MODIFY FORCING INPUTS AND MAKES THE RESULTS HIGHLY DEPENDENT ON THE PRIOR SELECTION OF FORCING SERIES. This makes the present simple model perform well as an emulator/corrector of GCM results, but it should not at this stage be seen as a reliable estimator of feedback uncertainty when applied to real-world data.”

In addition to the above, I would note that the paper has a number of statements which are highly dubious, too numerous to detail here, and some of which are just flat wrong. These should have been picked up in peer review.

I would draw particular attention to Equation 3.6 in the paper. This is wrong in two distinct respects. The substitution λ = Ft/Tt is not correct, nor is it even asymptotically correct. It introduces serious mean bias. The calculation of the marginal distribution is also mis-stated, and would give rise, if the derivative term were correctly calculated, to a negative pdf for Tt.

Overall, I found this to be a frustrating read. It takes a lot of time to find the small worthwhile items amongst the BS.

Paul,

my first response is misplaced below:

https://judithcurry.com/2011/09/14/probabilistic-estimates-of-transient-climate-sensitivity/#comment-112566

Given that there seems to be such a large gulf between what I have inferred from this paper and what others have, there really does seem to be something amiss with the way it is written. It is not from lack of words; of those it has very, very many. I have read some more of the comments above, and although I doubt that it is intentional, they have managed a sleight of hand, as it is clear that eyes have been deceived, if only initially.

Alex

In light of Alex’s comments on Equation 3.6, I revisited the issue. There is still a mistake in the paper, but it is not as wrong as I thought it was.

The authors are trying to calculate the marginal distribution of Tt. I thought on first reading that Tt was the temperature response at time t to the forcing at that time, Ft. It is not. It is evidently meant to be the EQUILIBRIUM temperature response from the linear feedback equation to a forcing, Ft, evaluated using the statistics of λ accumulated up to time t, and assuming a Gaussian prior in λ. With this understanding of the definition of Tt, my comments on mean bias were incorrect.

The equation is missing a minus sign. The first part should be:-

Pr(Tt) = Pr(λ) × (–dλ/dT)

Thanks to Alex.
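That change-of-variables relation is easy to verify numerically: if λ is Gaussian and T = F/λ, then p(T) = p_λ(F/T) · F/T², where F/T² = |dλ/dT| (the minus sign in −dλ/dT reflects the reversed orientation of the mapping, and taking the absolute value keeps the pdf positive). A Monte Carlo sketch — every number below is illustrative, none are taken from the paper:

```python
import math
import random

def analytic_pdf_T(T, F, mu, sd):
    """p(T) = p_lambda(F/T) * |dlambda/dT|, with lambda = F/T Gaussian."""
    lam = F / T
    p_lam = math.exp(-0.5 * ((lam - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return p_lam * F / T ** 2

# Monte Carlo check: sample lambda ~ N(mu, sd), transform to T = F / lambda
F, mu, sd = 3.7, 1.5, 0.2
rng = random.Random(0)
samples = [F / rng.gauss(mu, sd) for _ in range(200000)]

lo, hi = 2.3, 2.5                         # a bin near T = F/mu ~ 2.47
frac = sum(lo < t < hi for t in samples) / len(samples)
approx = analytic_pdf_T(0.5 * (lo + hi), F, mu, sd) * (hi - lo)
# frac and approx should agree to within sampling and midpoint-rule error
```

The sampled histogram mass in a bin matches the analytic density, and, as noted elsewhere in the thread, the resulting distribution of T is visibly skewed even though λ itself is symmetric.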

Paul,

You mention a “Gaussian prior in λ”. It could be Gaussian with a large enough standard deviation, but basically I think their approach corresponds to a flat prior in λ over the whole range of possible values.

What’s essential is that such a prior in λ corresponds to a particular (non-flat) type of prior for the climate sensitivity. This is a point discussed some time ago in relation to Figure 9.20 of the IPCC AR4 WG1 report, which summarizes the IPCC view on the evidence obtained on the value of the climate sensitivity.

Pekka,

The initial value of lamda is a point value – being the prior “state” for the first timesteps. If I understand correctly, the sample statistics from successive timesteps are then collected and a normal distribution (my “Gaussian prior”) is assumed in order to calculate the pdf of lamda at time t, and the (as you point out, skewed) pdf of the temperature at notional equilibrium for the linear model form. The authors claim that the method is insensitive to the initial choice of lamda.

Paul,

I was using the term prior with the meaning that it has in Bayesian probabilities. While the Kalman filter approach is essentially Bayesian, there is an obvious technical difference at this point.

My knowledge of the method used in this paper is not at a level that would allow me to tell technically how the transition from the point value to the range shown for λ at the beginning of the time range occurs.

The choice of the Bayesian prior doesn’t manifest only in that; other details of the method are also based in part on that choice. At least to first order, the choice of variables tells which distributions are taken as flat in choosing the prior. The precise relationship of the method to the Bayes equation would require more thought before I could state more precisely what I have in mind.

Paul,

It is clear to me that they had an opportunity to state the components of the deterministic update matrix and hence define the dynamics in a way that would be immediately clear.

My guess, which is stated somewhere above, is that they used Eq 2.3 and the deterministic part of 2.4. That is, both T and S are AR[1], and the dynamics are almost certainly more primitive than most were expecting, yet this is not at all uncommon in such papers. I took these equations to be the deterministic model. Is your understanding different?

I may be benefitting from a tendency to save my eyes and not read more of their paper than I thought strictly necessary. Given your disappointing experience, I may have been wise on this occasion.

Alex

“Is your understanding different?”

No. I think we are very close.

In application to the “observed” data, Eq 2.3 is effectively re-stated as

C dT/dt = F1(t) + α·F2(t) – λ·T + S

where F1 and F2 are fixed time series – notionally non-aerosol and aerosol forcings.

The system is then solved for four state variables: T, λ, S and α. However, S is almost irrelevant to the results. To clarify this, S is strongly covariant with T and α, but for the match to observed data, ENSO is removed from the temperature data by preprocessing, and then the variance of S is calculated to match the detrended residual:

“To account for natural variability we assume for our base case that the natural variability [S in equation (2.3)] has a standard deviation of σ_S = 0.07 W m^-2, which gives σ_T ≈ 0.13 K, this being the standard deviation of the observed detrended 20th century temperature record.”

Since it is excursive with small variance, the presence of S has the predictable effect (only) of increasing the apparent uncertainty of the results by a small amount.

One reason I suggested the paper would have benefited from major revisions is that I don’t think any of the above is very clear on first reading.
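For readers less familiar with the mechanics, the augmented-state idea Paul describes — carrying the parameters in the state vector alongside T so that each temperature observation updates them through the ensemble covariance — can be sketched with a stripped-down two-variable version (T and λ only) on synthetic data. This is a generic ensemble Kalman filter illustration, not the paper's algorithm, and every number in it is made up:

```python
import random

def enkf_estimate_lambda(F_series, T_obs, C=8.0, obs_sd=0.1, n_ens=500, seed=0):
    """Toy ensemble Kalman filter with augmented state (T, lambda).
    Each member advances the single-box model C dT/dt = F - lambda*T
    (Euler, dt = 1); each observation of T then updates both T and lambda
    through their ensemble covariance (perturbed-observation form)."""
    rng = random.Random(seed)
    ens = [[0.0, rng.uniform(0.5, 3.5)] for _ in range(n_ens)]  # wide lambda prior
    for F, y in zip(F_series, T_obs):
        for m in ens:                            # forecast step
            m[0] += (F - m[1] * m[0]) / C
        mT = sum(m[0] for m in ens) / n_ens
        mL = sum(m[1] for m in ens) / n_ens
        vTT = sum((m[0] - mT) ** 2 for m in ens) / (n_ens - 1)
        vLT = sum((m[1] - mL) * (m[0] - mT) for m in ens) / (n_ens - 1)
        kT = vTT / (vTT + obs_sd ** 2)           # Kalman gain for T
        kL = vLT / (vTT + obs_sd ** 2)           # Kalman gain for lambda
        for m in ens:                            # analysis step
            innov = y + rng.gauss(0.0, obs_sd) - m[0]
            m[0] += kT * innov
            m[1] += kL * innov
    return sum(m[1] for m in ens) / n_ens

# synthetic truth: lambda = 2.0, slowly ramped forcing, noisy observations
rng = random.Random(42)
true_lam, C, T = 2.0, 8.0, 0.0
F_series, T_obs = [], []
for t in range(150):
    F = 0.03 * t
    T += (F - true_lam * T) / C
    F_series.append(F)
    T_obs.append(T + rng.gauss(0.0, 0.1))

lam_hat = enkf_estimate_lambda(F_series, T_obs)  # drifts toward the true value
```

Because λ is only observed through its effect on T, the λ update rides entirely on the cross-covariance vLT, which is exactly why the assumed noise structure (the subject of this whole exchange) controls how fast, and toward what, the λ estimate converges.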

Paul,

Thanks, that seems fine.

Alex

Alex and Paul,

In my experience, it has been difficult to get a statistical analysis published in an “applications” journal in which the model and estimation procedure are sufficiently described. For completeness and clarity it is necessary to write at least two papers, one like this in which the method description is truncated, and another in a more statistical journal such as Statistics in Medicine, Annals of Applied Statistics, or Journal of the American Statistical Association. Note as well that the authors do not (yet, anyway — maybe the published version will be different) provide their code or data — in fact, they do not even name the programming language in which they did their computations (unless I missed it — please correct me if so.) I hope that the authors expand their presentation to include much more detail, of the model itself and of the algorithms and programming used.

Your comments about its incompleteness are accurate and on point.

The preferred procedure now appears to be that the details of the methods are published separately on the net as supplementary material to the paper, unless they are really so novel and of such scientific value that they warrant a separate paper in a journal of that area of methodology.

I agree that is “preferred”, but multiple standards are in play all over the place. And there isn’t a uniform standard for what qualifies as “so novel” and “such scientific value”. An applications paper in an applications journal might refer to something like AOAS, which in turn will refer to Annals of Statistics, which in turn will refer to Annals of Applied Probability.

You write many good posts, so I am interested in your opinion: what do you think of my comment that the paper will have impact?

MattStat,

You must have noticed that I have presented a rather similar critique to Paul_K's. Thus I'm not so sure about the value of the paper, although it's certainly good that methods which have shown great value in other fields, on problems with at least some similarity to the present one, are being considered seriously.

On the other hand, such methods are usually well understood and formally justified only when certain rather strict conditions are satisfied, and neither the Earth system nor the data on it usually satisfies such conditions in all respects. That makes it difficult to judge what kind of hidden assumptions are included in the use of such methods.

In this case I feel that one very essential assumption is hidden in the choice of λ as the state variable that is assumed Gaussian throughout the analysis. The fact that the distribution is bound to be Gaussian is also a serious limitation. The appendix comments that the width of the initial prior doesn't essentially affect the final outcome, but the limitations I mention above may well have a very essential influence.

Another major problem is in the stochastic process chosen. Allowing for processes that could lead to quasiperiodic internal variations with periods of several decades would essentially reduce the power of the method in determining what is forcing and what is internal variability. As many people consider such internal variability plausible, or even likely to be significant, the analysis cannot tell at all how such processes would influence the conclusions.

One should not put too much weight on the results of any formal analytic method as long as it is not capable of taking into account all plausible and potentially important factors. This is indeed to me one of the most important guidelines in applying formal methods to complex problems. You should never believe results from a method unless you trust that you understand all important mechanisms involved in the formation of the real data, and unless you are also sure that the method is applicable when the mechanisms are those you trust to be correct. Furthermore, you should be certain that all requirements for the validity of the method are satisfied at the level required.

At present the analysis of the paper doesn't satisfy any of my most essential requirements. The paper contains some related tests and, through those tests, suggests that an improved analysis along its general lines might be useful. (The reliance on Gaussian distributions may, however, make these hopes permanently moot.)

Pekka, you wrote:

“Another major problem is in the stochastic process chosen. Allowing for processes that could lead to quasiperiodic internal variations of periods of several decades, would reduce essentially the power of the method in determining, what’s forcing and what internal variability. As many people consider such internal variability plausible or even likely to be significant, the analysis cannot tell at all, how such processes would influence the conclusions.”

I certainly agree with that.

I agree that your reservations are technically correct and, to repeat myself, “on point.” I never have read a paper that wasn’t flawed, however, and I think this paper is worth the while of someone (some of us?) to follow up.

Thank you for answering my question.

You may be offering profound truth, grasshopper, but it’s still a pain in the ass for the readers. When I was a lad (violins in background), it was pretty normal for authors to state clearly what the intent of their paper was e.g. – new postulate, new science observation, development of new methodology or a new application of known methodology. It was also pretty normal at pre-review stage for authors to state exactly what was done, why it was done, and what conclusions could be safely drawn. Untested hypotheses, conjecture, opinion – all had to be clearly identified as such. Typically after that, one of the roles of cynical, grumpy reviewers was to prevent authors from stretching a counter-intuitive observation (say) into a conclusion that the universe was running backwards in time, or stretching the development of a new application of the binomial theorem into a proof that Moriarty really was a mathematical genius. Anecdotal evidence only, but I reviewed several hundred papers during my career, and I cannot think of any occasion where I wrote that the authors were grossly understating the importance of their work.

I think that this could have been a great paper. Great idea – let’s try applying a Kalman filter to climate data. The authors could have shown the potential power of this tool in this application (in a third of the length of the existing paper) – with an articulate exposition of what was done, the results of the tests, the conclusions about the validity and the potential benefits. Instead, the authors got drawn into trying to solve the great challenge of modern times – whether Moriarty really was a mathematical genius. Er, sorry, I think I mean, what is climate sensitivity? The result is a very long paper that does not explain what it does very coherently, and which produces results for a model open to challenge, based on assumptions open to challenge, and which includes a number of naive statements which are open to challenge – putting it generously.

Will the paper have an impact? I hope so, because the basic concept is still sound, but the authors have buried the good bits deep enough that they just may remain undiscovered.

Paul_K:

“Typically after that, one of the roles of cynical, grumpy reviewers was to prevent authors from stretching a counter-intuitive observation (say) into a conclusion that the universe was running backwards in time”

That’s funny, because Paul Dirac’s most famous paper produced an equation whose solutions had an electron moving backward in time. Also, Einstein had by then shown that “simultaneous” events were not as simultaneous as previously thought.

But that’s just snarky.

I think that the paper you describe would have to have been at least twice as long.

And thank you for your comment.

The reference monograph has been updated, here:

http://www.springer.com/earth+sciences+and+geography/computer+%26+mathematical+applications/book/978-3-642-03710-8?changeHeader

Notice what they describe as the “modest” mathematical level to which the book is pitched.

Having reread this paper as well as recent comments, I’d like to summarize some impressions along with the acknowledgement that my rudimentary understanding of the Kalman Filter application may have led me astray. I hope any misconceptions will be corrected by those with a more sophisticated understanding of the technique.

On a general level, I see the paper as a significant step forward in quantifying climate responses to CO2 and other persistent forcings, with the caveat that the estimates of the sensitivity parameter λ are likely to be valid only for a climate resembling that of the past century in its distribution of forcings and natural variability. A climate radically different from the recent past would probably require a different value for λ (or more precisely for probability estimates of the range within which λ is likely to reside).

A number of reservations have been expressed that imply a potentially greater range than reported. I found those arguments plausible, but the question is really how much greater uncertainty is warranted. My guess is that for twentieth century conditions or those that are similar, the answer is probably “not too much”.

One issue relates to the implication of linear feedback in equation 2.3: C(dT/dt) = F – λT (if I understand the argument correctly). Linearity is almost certainly an oversimplification. On the other hand, it is not clear that nonlinearity radically alters the shape of pdfs for climate sensitivity – see Roe and Armour 2011, and would have little impact on a most likely value for λ. Is this a reasonable inference?

A second concern relates to estimates of natural variability based on the standard deviation of the detrended twentieth century temperature record. Again, we have to conclude that a century very much unlike the last one might yield different variances. However, there is an additional concern that the paper did not address – the possibility that some of the natural variability contributed to the trend that was removed and was therefore underestimated as a consequence of detrending. In this regard, I think we may have been fortunate in that by chance, the most likely candidates for contributing to a long term trend as opposed to short term fluctuations – the AMO and PDO – appear more or less to have averaged out over the century. If that is correct, that circumstance would have fortuitously eliminated a potential source of error.
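Fred's detrending concern is easy to demonstrate numerically: if part of the natural variability has a multidecadal period comparable to the record length, removing a linear trend also removes some of that variability, so the residual standard deviation understates it. A minimal sketch, where the 65-year "AMO-like" period and both amplitudes are illustrative choices, not estimates from data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100.0)                       # a 100-year record
amo = 0.1 * np.sin(2 * np.pi * t / 65.0)   # slow "AMO-like" component (K)
noise = rng.normal(0.0, 0.1, t.size)       # interannual noise (K)
natural = amo + noise

# Remove a least-squares linear trend, as in the paper's pre-processing
coef = np.polyfit(t, natural, 1)
residual = natural - np.polyval(coef, t)

# Whenever the slow component projects onto the trend line, the
# detrended std understates the true natural variability
true_std = natural.std()
detrended_std = residual.std()
```

If, as Fred suggests, the AMO and PDO happened to average out over the century, the projection onto the trend would be small and the detrended estimate would be nearly unbiased; the sketch simply shows the mechanism when they do not.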

Finally, from looking at the paper (see for example Figure 6), it appears that the breadth of the confidence intervals involves mainly the height of the upper limit, with much less effect on the most likely value, which resides near the bottom. With that in mind, which alternative choices in the way the model is derived would most likely affect the lower limit and the most likely value, and which the upper limit? In the text following equation 3.6, the question is raised as to whether it would be most appropriate for λ to be Gaussian or for a variable that is a reciprocal of λ to be Gaussian. How would the latter affect the distribution of probabilities in a manner different from Figure 6?

I have learned a great deal from the paper and the ensuing discussion. In my view, this has been one of the most informative of recent posts in this blog.

As per my recent reply to Pekka, I don't believe the paper's assumption – that heat transfer into the deep ocean can be ignored over the period it covers – is valid. If one assumes an effective mean deep-ocean diffusivity that matches a typical AOGCM, the time constant associated with diffusion into a layer of deep ocean 4 times thicker than the assumed mixed layer is only ~10 years, a fraction of the time period covered by this study.

It follows that the coefficient (beta) of heat flow into the ocean will vary over the period of the study, contrary to the authors’ assumption of a fixed coefficient. The results of the study are therefore suspect, IMO, even leaving aside all the other issues that have been raised.
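Nic's timescale can be roughed out directly. The diffusivity below is an assumed illustrative value (AOGCM-effective diffusivities vary considerably), not a figure from the paper; the point is only how the h²/κ scaling works:

```python
# Rough diffusive time constant tau ~ h^2 / kappa for a deep layer of
# thickness h below the mixed layer. kappa is an assumed effective
# diffusivity; published AOGCM-equivalent values span a wide range,
# so the resulting tau is sensitive to that choice.
SECONDS_PER_YEAR = 3.15e7

def diffusion_timescale_years(h_m, kappa_m2_s):
    return h_m**2 / kappa_m2_s / SECONDS_PER_YEAR

h = 4 * 100.0   # a layer 4x an assumed 100 m mixed layer, in metres
kappa = 5e-4    # m^2/s, an illustrative upper-range effective value
tau = diffusion_timescale_years(h, kappa)   # roughly a decade
```

With a more commonly quoted κ of ~1e-4 m²/s the same layer gives τ of order 50 years, so whether the coefficient can be treated as fixed over the study period really does hinge on the assumed diffusivity.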

Nic – As I read equation 2.1, the issue involves the rate at which deep ocean temperature is changing, not the rate of heat flow. Because of the huge heat capacity of the deep ocean, that rate appears to be very small compared with the mixed layer, which is why it was ignored. If one used a more complex model with multiple layers, the result would probably be slightly different, but from a realistic viewpoint, a distinction between a mixed layer and a much more inertial deep ocean, with the two separated by a thermocline, seems to be a reasonable approach.

Fred and Crew,

It sure is a lot easier to let you guys hash this out. I do have a couple of, I hope, not too idiotic questions.

1. By detrending the observational data, the value of λ would tend to be predominantly dependent on CO2 forcing. So that value of λ could be used as an initial value for a new model run with the normal observational data, to provide an estimate of natural variability versus CO2 forcing?

2. Since the ocean temperature change is assumed to be small relative to the surface temperature change, this model's results could be used as initial values for another model to determine ECS, or this model could be used as a kernel for a more elaborate model?

Nic,

Your concern is justified in that the authors have not presented evidence on the starting point: that the whole concept of Transient Climate Sensitivity at a time scale of 70 or 100 years makes sense. The concept makes sense if most heat transfer from the surface is due to mechanisms with much shorter or much longer time scales. There is certainly some mixing in the oceans with a time scale not very different from 70 years, but the assumption is that those mechanisms are relatively unimportant in comparison with the faster and slower ones. I find this assumption plausible, and likely to be true, but giving better evidence for it would be desirable.

The paper does discuss the long time needed to get very close to ECS, but that fact alone doesn’t prove the existence of good separation of the short and long timescales. The two-compartment model has a clear separation by definition, but a more realistic multi-compartment model might differ significantly in that.
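Pekka's point about timescale separation can be made concrete with the eigen-timescales of a generic two-compartment (mixed layer + deep ocean) model. All parameter values below are illustrative choices of the usual order of magnitude, not numbers from the paper:

```python
import numpy as np

# Two-compartment energy balance model:
#   c_m dT/dt   = -(lam + gam)*T + gam*T_d + F
#   c_d dT_d/dt =  gam*T - gam*T_d
# Heat capacities c_m, c_d in W yr m^-2 K^-1; lam, gam in W m^-2 K^-1.
# All values are illustrative.
c_m, c_d = 13.0, 270.0
lam, gam = 1.3, 0.7

A = np.array([[-(lam + gam) / c_m, gam / c_m],
              [gam / c_d, -gam / c_d]])
timescales = np.sort(-1.0 / np.linalg.eigvals(A).real)  # years
fast, slow = timescales
# The two eigen-timescales come out well separated (a few years vs.
# centuries) -- the clean separation the two-box model has by construction.
```

A multi-compartment model would fill in intermediate eigen-timescales between these two, which is exactly the possibility Pekka says the two-box structure rules out by definition.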

As their method explicitly allows λ to vary, and determines its estimate as a function of time, the method does not depend on the constancy of β. One possible interpretation of the result that λ converges in the analysis to an essentially constant value is that the variation of β is small in absolute terms and that γ is essentially constant, but the analysis of the paper doesn't go that far.

The quantitative results are plausible. They are in agreement with other plausible results based on the same prior. The prior is the one that appears most plausible to me, and I hope that I'm right on that. Of the largest uncertainties left unanswered by the analysis, the one concerning the prior opens up the possibility that high climate sensitivities are more likely than the results indicate. That's exactly the same issue that was discussed in the thread on Figure 9.20 of the IPCC AR4 WG1 report, and the issue considered in the paper of Annan and Hargreaves. I don't think this paper presents anything new on that point, because the whole method is deeply linked to that particular prior. Changing the prior to one flat in climate sensitivity, and combining that with a strongly skewed and fat-tailed distribution, cannot be done in a method based on Gaussian distributions.

The other uncertainty left unanswered is related to internal variability. That uncertainty is likely to be more symmetric and would probably also lower the lower limit obtained for climate sensitivity.

Fred,

“In my view, this has been one of the most informative of recent posts in this blog.”

It is also one of the most technical and least political, with a slower posting rate and perhaps longer-than-normal comments. It would be nice to hope that it has been as well visited and read as others, but I doubt that vanity.

This is a blog, and it is my view that, compared with other formats such as the older message/bulletin boards and fora, blogs are not so adept at supplying cogent answers to enquiring questions in fields of interest. That said, climate is no ordinary field of interest, so it is refreshing to note, as you have, that sometimes, just sometimes, a less febrile atmosphere can be sustained and information can pass more easily.

Alex

Fred,

You pose the question:-

“One issue relates to the implication of linear feedback in equation 2.3: C(dT/dt) = F – λT (if I understand the argument correctly). Linearity is almost certainly an oversimplification. On the other hand, it is not clear that nonlinearity radically alters the shape of pdfs for climate sensitivity – see Roe and Armour 2011, and would have little impact on a most likely value for λ. Is this a reasonable inference?”

Not only does nonlinearity radically alter the shape of pdfs for climate sensitivity, but it is ONLY the non-linearity in feedback which PERMITS the possibility of a high climate sensitivity. If feedback is truly linear, then any derivation of λ from observational data – be it from temperature match, from OHC or from net flux evaluation – yields a value of the linear feedback coefficient close to 3.0, the Planck response. The pdf of feedback under the assumption of linearity is lower-bounded (or ECS is upper-bounded) by observational data. Conversely, the assumption of non-linearity removes the upper bound on climate sensitivity.

The link below considers the generalised non-linear form of feedback equation (without making any assumptions about heat distribution), and explains why this is so.

http://rankexploits.com/musings/2011/equilibrium-climate-sensitivity-and-mathturbation-part-1/

Paul – Roe and Armour 2011 appear to disagree with you on this point. These authors emphasize the asymmetric (“skewed”) nature of the pdf for the temperature response to forcing, stating that it is “an inevitable result of the fractional uncertainty in F[observed] being much larger than the fractional uncertainty in T[observed] [e.g., Gregory et al., 2002]: the skewed tail towards high climate sensitivity is because the observations allow for the possibility that the relatively well constrained observed warming might have occurred with little or no net climate forcing.” In other words, they do not attribute the asymmetry to nonlinearity.

For the latter, they conclude that “The nonlinearity of climate feedbacks is calculated from a range of studies and is shown also to have very little impact on the asymmetry”. They are of course referring specifically to ECS but I assume their argument would apply to TCS as well.

As I understand them, their conclusions are based on the assumption of normally distributed pdfs for observations and forcings, and so a different distribution would yield a different symmetry pattern. I think this could reduce the long tail of high sensitivity values, but would have relatively little effect on the lower end. From a practical perspective, I believe there are physical reasons why the very high sensitivity values at the top end are unrealistic, and so it is the values lower down that should interest us more.

The argument of Roe and Armour can be restated to say that the observational data cannot strongly exclude the possibility that feedbacks would be strong enough to create instability. That possibility is rather far in the tail, but using a prior flat in climate sensitivity strengthens that tail by a factor proportional to the square of the climate sensitivity, i.e. by a factor that diverges strongly for small λ (it diverges as 1/λ²).
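That Jacobian argument can be checked numerically: if the equilibrium sensitivity is S = F2x/λ, a prior flat in S induces a density in λ proportional to |dS/dλ| = F2x/λ², diverging toward small λ. A minimal Monte Carlo sketch, where F2x = 3.7 W m⁻² is the usual 2xCO2 forcing and the prior range is an illustrative choice:

```python
import numpy as np

# Prior flat in sensitivity S on [S_lo, S_hi] (K); the induced density
# in lam is the Jacobian F2x / (lam^2 * (S_hi - S_lo)), i.e. ~ 1/lam^2.
F2x = 3.7
S_lo, S_hi = 0.5, 10.0
rng = np.random.default_rng(0)
S = rng.uniform(S_lo, S_hi, 1_000_000)  # samples flat in sensitivity (K)
lam = F2x / S                           # implied lambda (W m^-2 K^-1)

# Compare the empirical density of lam against the analytic Jacobian
edges = np.linspace(0.5, 3.0, 26)
counts, _ = np.histogram(lam, bins=edges)
emp = counts / (lam.size * np.diff(edges))
mid = 0.5 * (edges[:-1] + edges[1:])
analytic = F2x / (mid**2 * (S_hi - S_lo))
```

The empirical histogram reproduces the 1/λ² shape, which is why a Gaussian assumption on λ simply cannot represent a prior flat in sensitivity.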

Nonlinearities become essential in allowing significant probabilities for high climate sensitivity when a non-divergent prior in λ is used.

Fred,

I think that the perceived wisdom is that the very high sensitivities (if they come to pass) are associated with very long stabilisation times. The trajectories are much the same for the first centuries, but the high-S worlds simply keep going up for longer. That is a simplification, but it is along those lines. They form a class that is, for all intents and purposes, indistinguishable from the norm over the times that civilisations take to rise and fall.

Something like TCS is believed to be both more useful and more knowable.

It is my opinion that the field and in particular the IPCC should get over its obsession with ECS navel gazing and move on.

They try to run GCMs to post-2xCO2 stabilisation and commonly do not get there, at huge cost in terms of computer time. As far as I am aware it produces little of practical use beyond IPCC fodder. That may be a bit harsh, but were it I, the choice between a dozen or so runs telling us something about what we might live to experience, and just one probing the 4th or 5th millennium, would be an easy one to make.

Alex

The paper of Padilla et al ends with the above sentence, which contains an essential point about very long term analyses. It doesn't make sense to discuss what happens when the CO2 level stays high forever, if the CO2 level cannot stay high forever. Some part of the added CO2 may indeed be removed only very slowly, but that's a relatively small fraction.

My view is that we can with good reason concentrate totally on what's expected to occur in 100, or at most 200, years. Even if we err on that, future generations will have so much more influence over the longer-term development that our influence will ultimately be minimal.

Pekka,

I agree but might put it more strongly.

It seems that we are taking an unhealthy interest in something that is unlikely, unknowable, and irrelevant.

We set this trap for ourselves early on, before or at least by the Charney and MacDonald reports. Then it was of interest, and guesstimates were made of the sensitivity which have not been much improved on since.

What I liked about this paper, my take on it if you will, was that it could be viewed as dealing with trajectories. It is my view that we are in need of navigational tools and assimilation methods.

If I project myself into a future where mitigation is taking place I would wish for the answer to some simple questions.

Are we winning?

Should we go more quickly or more slowly?

Is it worthwhile?

Should we switch to adaptation?

How will we know when we have won?

I suspect that policy will continue to surf on the fluctuations. Inflicting misery to combat global warming is much easier when the planet is inflicting climate grief.

A prior indication that the last decade would be so stable could have prevented much political capital being wasted.

We do need a range of projections fit for purpose, and that must, I think, mean putting emphasis on the shorter term. Judith seems quite strong and consistent on this point.

There are a lot of dots that need to be joined together to get us from now to 2050 and climate is only one player. To me the notion that we can have any real idea about the then critical issues beyond 2050 is unfounded.

We do know that there are natural horizons defined by the expected lifetimes of the infrastructure and the implied replacement rates. We need to make decisions as and when they come due. We need to keep our estimations of what is optimal up to date. We need to navigate the future and we need to do it in chunks.

It is my view that starting mitigation is not the bigger issue, maintaining mitigation would be the nightmare.

Science does have a significant role in this but I am not sure from what is visible to me that we are building the tools that we will need in a timely manner. Were this a war, we would have different priorities.

Alex

Alex,

Continuing on the same line as your latest comment. It's naive and erroneous to think that right policies can be based on "proving" that risks are large, and combining that with the precautionary principle to conclude that putting maximal pressure on actors through "ambitious goals", low quotas in cap-and-trade, or very high carbon taxes is the solution.

That kind of one-sided approach is in contradiction with other essential development goals, including goals considered essential also by the very people promoting that erroneous approach. Climate change is not presently the most pressing problem of human societies, and, contrary to those simplistic views, many other problems remain more pressing than, or at least comparable to, the effects of climate change, also far into the future. Therefore right policies cannot be based on one-sided goals, but must balance often contradictory requirements.

Letting incalculable very distant risks dominate such balancing is counterproductive. Some long term issues may be essential, but they must be included in the analysis in a different way. Basically the only approach for that that I can see as even remotely feasible is expressed in the principle of sustainable development:

We must avoid actions that are likely to worsen the overall state of the world over a period that can really be planned. The longer-term issues must be included in the estimation of changes in the overall state of the world. The future generations will know more, and they'll make their decisions based on the knowledge available then. They can build on what we leave to them, and their choices are restricted by the limits of the world as they receive it from us.

The approach of the previous paragraph is not easy to implement, but it can be formulated in a way that is not totally dominated by the unknowable and by erroneous logic, as are all ideas of planning the future beyond the period of direct influence of the present generation.

Thanks for the response, Fred. It forced me to go back through Roe & Armour 2011 (AR11) in some detail. You are certainly correct in saying that they disagree with my strongly worded assertion :-

“Not only does nonlinearity radically alter the shape of pdfs for climate sensitivity, but it is ONLY the non-linearity in feedback which PERMITS the possibility of a high climate sensitivity.”

AR11 apparently shows that you can get a high climate sensitivity from a linear model based on observational data, and assuming Gaussian distributions for (cumulative net heat flux – cumulative forcing) and the total temperature change. As you indicate, they infer that one cannot rule out the possibility that a low value of forcing within measurement sampling error might have given rise to a high temperature change within measurement sampling error; this pushes the statistical realisation of feedback coefficient to a low value. The inverse relationship between feedback coefficient and climate sensitivity then assures (a) the realisation of high values of climate sensitivity appearing within the resulting distribution of ECS and (b) asymmetry in the distribution of climate sensitivity. Obviously, as already noted the assumption of symmetry in Tobs controls the asymmetry in ECS here.

AR11 also notes: “The nonlinearity of climate feedbacks is calculated from a range of studies and is shown also to have very little impact on the asymmetry”. To deal with this point first, this statement derives from a comparison of, on the one hand, the distribution of feedbacks extracted from GCMs, where the inputs are deterministic and the spread comes from structural differences in the models' non-linear feedbacks, with, on the other hand, the distribution of ECS obtained from their own model, where the structural form of feedback is fixed (and linear) and the inputs vary within measurement error. This is an apples-to-bananas comparison of dubious validity. If it has any meaning at all, it is reliant on the validity of AR11's own model results.

So, the question comes down to: are their observation-model results valid? I have two comments. Firstly, they are not using an integrative form of model, which means that they are discarding a huge amount of information. Comparison of their values with many other observation-based studies would suggest that a more sophisticated approach would reduce both the mode and the variance of their final distribution. As I have noted before, most observation-based studies founded on linear models seem to produce low values of ECS. (I will dig out some references to support this.) AR11 mean values start off suspiciously high.

The second point is that they make the assumption that the measurement errors for Tobs and Fobs are independent. In reality, the SST estimates which go into their Tobs are also used to compute their observed net heat flux, so these variables are positively correlated, a fact ignored by AR11. If correctly calculated, the variance of their distribution of ECS should therefore be reduced.

Notwithstanding the above, one consequence of this additional homework you set me is that I have decided I have probably underestimated the effects of measurement uncertainty on ECS.

Paul,

Non-linearity does not mean the same things to all people. I know that what I commonly consider linear others consider non-linear.

I take the narrower view that linearity is demonstrated by additivity.

If R1(t) = f(I1(t)) and

R2(t) = f(I2(t)), then

R1(t) + R2(t) = f(I1(t) + I2(t)).

The function f may be some complex function of its input that evolves in time, such as the exponential decay from an impulse, or some resonant behaviour where the output is commonly a convolution of the input with some other function. I think that, provided that "other" function does not depend on the input function or other system variables, the result would be linear in my terms, and that would include functions that would permit the slow drift from TCS to ECS. The general class is defined as LTI (linear time-invariant), but it does include feedbacks that evolve with time provided they are not functions of system variables. My "evolve with time" measures time back to the original impulse that caused the response, so it is actually time-invariant with respect to the individual impulses that make up the final response.

My usage is not superior and seems to be a minority choice; I use it because it separates the strictly non-linear functions, such as R(t) = I(t)^2, from the merely temporally rich.

I don’t “necessarily” see the evolution to ECS as non-linear (the long tail and slow amplification can be achieved while maintaining linearity), although it probably is.

Alex

Hi Alex,

It had never occurred to me (honestly) that there could be different definitions of linearity.

The feedback equation with a single unchanging value of the feedback coefficient λ is an ordinary linear differential equation. It is linear in T, the temperature perturbation term, using the standard mathematical definition of linearity, the only one with which I'm familiar.

The test of “additivity” then says that if T1(t) is a solution to the equation and T2(t) is another solution, then T1 + T2 is also a solution. This allows solution of the equation under conditions of time varying forcing by superposition, something I have done many times.

If λ is made a function of time, then the equation is still linear.

If λ is made a function of T, then the equation becomes non-linear (apart from the trivial replacement λ = 1/T). If the feedback term λT is replaced by a polynomial of the form λ1·T + λ2·T² + λ3·T³ + …, then the equation becomes non-linear and falls into a class known as Riccati equations. Superposition (your additivity test) no longer works for these equations.
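The additivity test is easy to verify numerically: for C dT/dt = F − λT, the response to F1 + F2 equals the sum of the separate responses, while adding a quadratic feedback term λ2·T² breaks superposition. A small sketch with purely illustrative parameter values:

```python
import numpy as np

def respond(F, lam, lam2=0.0, C=5.0, dt=0.01):
    """Forward-Euler integration of C dT/dt = F(t) - lam*T - lam2*T**2
    from T = 0. Units are arbitrary; all values are illustrative."""
    T = 0.0
    out = []
    for f in F:
        T += dt / C * (f - lam * T - lam2 * T**2)
        out.append(T)
    return np.array(out)

n = 5000
F1 = np.full(n, 1.0)              # step forcing
F2 = np.linspace(0.0, 2.0, n)     # ramp forcing
lam = 1.2

# Linear case: superposition holds to floating-point precision
lin_sum = respond(F1, lam) + respond(F2, lam)
lin_combined = respond(F1 + F2, lam)

# With a quadratic (Riccati-type) feedback term, superposition fails
nl_sum = respond(F1, lam, lam2=0.5) + respond(F2, lam, lam2=0.5)
nl_combined = respond(F1 + F2, lam, lam2=0.5)
```

In the linear case the two trajectories agree to rounding error; in the quadratic case the combined run settles visibly below the sum of the separate runs, which is the loss of superposition Paul describes.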

Paul,

It may seem bizarre to most here, but the evidence for non-linearity in the climate system at temperatures above those that are considered ice ages is a bit sparse.

It is my opinion that others, not you, mistake complex feedback behaviour where the signal that is fedback is transformed in a linear way due to time delays etc. with non-linearity.

People talk about non-linearity, but I have no idea what they mean by it, nor what their evidence is.

It is commonly assumed that Stefan-Boltzmann leads to pronounced non-linearity, but this has been looked at and it appears to be minimal. This is perhaps a fluke, due to a tendency for RH to stay reasonably constant, but it seems to be the case.

In the short term, but over large swings in temperature, we know from the climatology (30-year temperatures by month) that the system is nearly linear with respect to temperature.

I do not understand what people are talking about.

It is true that the weather is chaotic and that the climate has persistence and possibly some oscillatory behaviour but people have looked for chaotic behaviour at the climate scale and they have not found much evidence for the current epoch.

There is a perception that the climate is non-linear but for all intents and purposes simple linear emulators produce results that can match the simulators and match the record.

I wrote what I did to try and attract some comment, so thanks. I have a suspicion that some people may be mistaking richness of behaviour for non-linearity.

Perhaps it might be better to pose the issue another way.

What is the significant non-linearity in the climate system with respect to the temperatures? What are its sources, and will its effects be distinguishable, e.g. material, in the next one hundred years?

One answer is that there seem to be no significant non-linearities according to that test. That answer I understand.

The common answer seems to be that there must be significant non-linearities; the system is non-linear, complex, and chaotic, so they must exist.

Alex

Lambda is equal to gamma plus beta, with the beta associated with T sub Ocean. Since the heat capacity of the ocean is very large compared to the surface, lambda for rising T would be different from lambda for falling T. For a short time period lambda can be assumed constant, but for a longer, and not necessarily all that long, period lambda can vary. So some portion of lambda is a function of temperature, T Ocean.

That may be simplistic, but in the northern hemisphere it seems obvious to me, comparing surface temp to SST versus southern hemisphere surface temp to SST.

So is a global lambda all that meaningful?

‘An abrupt climate change occurs when the climate system is forced to transition to a new state at a rate that is determined by the climate system itself, and which is more rapid than the rate of change of the external forcing’ http://www.nap.edu/openbook.php?record_id=10136&page=1

Weather has been known to be chaotic since Edward Lorenz discovered the ‘butterfly effect’ in the 1960’s. Abrupt climate change on the other hand was thought to have happened only in the distant past and so climate was expected to evolve steadily over this century in response to ordered climate forcing.

More recent work is identifying abrupt climate changes working through the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation, the Southern Annular Mode, the Arctic Oscillation, the Indian Ocean Dipole and other measures of ocean and atmospheric states. These are measurements of sea surface temperature and atmospheric pressure over more than 100 years which show evidence of abrupt change to new climate conditions that persist for up to a few decades before shifting again. Global rainfall and flood records likewise show evidence of abrupt shifts and regimes that persist for decades. In Australia, there was less frequent flooding from early last century to the mid 1940s, more frequent flooding to the late 1970s, and again a low-rainfall regime to recent times.

Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.

It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

Four multi-decadal climate shifts were identified in the last century coinciding with changes in the surface temperature trajectory. Warming from 1909 to the mid 1940’s, cooling to the late 1970’s, warming to 1998 and declining since. The shifts are punctuated by extreme El Niño Southern Oscillation events. Fluctuations between La Niña and El Niño peak at these times and climate then settles into a damped oscillation. Until the next critical climate threshold – due perhaps in a decade or two if the recent past is any indication.

James Hurrell and colleagues in a recent article in the Bulletin of the American Meteorological Society stated that the ‘global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’

Chief,

On the global scale, and restricted to temperatures as in the case of this paper, are the variations distinguishable from noisy fluctuations, and do they amount to a non-linearity in lambda?

If the Tsonis paper was the one that had an autoregressive predictor for phase, then it is one that did not make a lot of sense to me, for phase does not behave like that. The notion of a phase angle regressing towards a mean does not seem physical.

I am not saying that the system is not rich and complex. My question is whether the system when averaged globally is detectably non-linear in its temperature response to increasing forcings. I think that to show this requires more than the existence of fluctuations that come and go.

I realise that I am making a narrow point concerning averaged temperatures only but that is the context for almost all discussions on sensitivity.

Alex

The Tsonis ‘new dynamical method for major climate shifts’ relies on a property of complex systems – an increase in autocorrelation ahead of a shift.

Earth surface temperature changed trajectory after 1910, the mid 1940’s, the late 1970’s and after 1998.

http://www.pnas.org/content/105/38/14308

‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.

Abrupt climate changes were especially common when the climate system was being forced to change most rapidly. Thus, greenhouse warming and other human alterations of the earth system may increase the possibility of large, abrupt, and unwelcome regional or global climatic events. The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.’

http://www.nap.edu/openbook.php?record_id=10136&page=1

I am not sure what else I can say – if you want to define these as decadal to millennial ‘fluctuations that come and go’ just by chance – as noise – as meat pies – whatever.

Nic,

You raise a perfectly valid question. However, check out the link I gave to Fred M above:

http://rankexploits.com/musings/2011/equilibrium-climate-sensitivity-and-mathturbation-part-2/

You will see that there is a direct comparison of the impact of assuming a single capacity model vs a model which accounts for deep ocean diffusivity. These results are not comprehensively definitive since they were applied only to one GCM dataset, but they provide a pretty strong indication that estimation of the linear feedback coefficient is remarkably insensitive to ocean model assumptions even on a 120 year timescale.

Paul,

I did read with interest the posting you refer to at the time you made it, but I wasn’t sure that I agreed with all of it.

My main point in relation to this new study is that it is simultaneously using the same heat flux into the ocean per unit temperature change for both slow WMGG etc forcings and rapid volcanic forcings. I don’t think that is valid, because of the diffusive nature of heat flux below the mixed layer.

Nic, is it not valid at all, or only valid for some number of months or years? Surface air temperature response to forcing should be pretty quick. For five or six months it should be valid, two years maybe, but beyond that? It is an interesting look at transient climate sensitivity. To me, the question is how long it would take to realize the majority of “equilibrium” climate sensitivity. Thirty years?

Nic,

What do you mean by “diffusive”? Heat conduction is a real diffusive process, but it’s really slow. Faster processes involve some mixing, either by vertical convection or by turbulence caused by convective currents, which may be horizontal. Thus none of the other processes is really diffusive in a strict sense, and the conduction is too slow to have much effect.

Pekka,

I think that it is assumed that there is sufficient scale separation to allow turbulent mixing to be modelled as a diffusive process at and below a certain level, ~1 km. By that I mean that the turbulence is of a smaller vertical scale than the 1 km or so of depth down towards the lower layers, which have a temperature inversion. This eddy diffusivity is considered to be ~1000 times the molecular diffusivity of a standard sample of water.

Ultimately this is all a bit bogus but sort of works.

FWIW a diffusive ocean seems a better mimic of observations than a slab ocean if global averages are taken. It is more capable of reproducing, or rather mimicking, the spectral properties. When averaged globally, the response of a slab that varies in thickness in space and time seems likely to be better mimicked by a diffusive layer than by a uniform slab. In my opinion, for whatever reason, a diffusive assumption outperforms a slab assumption in terms of mimicry of observations. That said, the diffusive model is but one special case in a family of similar response functions, and it is not the best mimic in the pack. There exists a function in this group that produces residuals, when the predictions are subtracted from observations, that are in a sense the whitest, the least correlated. I think that it is useful in terms of statistical analysis to be able to reduce a highly correlated time sequence to something resembling an uncorrelated set of basis vectors.

Because such a function would be the best mimic of the set, it is not surprising that when driven by white noise it produces a faux temperature history that is spectrally and also visually similar to observations. Obviously it would not be expected to reproduce structures where both frequency and phase are important.

Alex

It is of course impossible to account for all the particular variabilities in the system, and anybody that thinks they can will never be believed in any case. So what you do is assume the maximum uncertainty in spread that you can justify and go from there. Fortunately, one can use the maximum entropy principle to come up with good estimates of the spread based on mean values one can perhaps estimate.

This can correspond to the MaxEnt solution of a Fokker-Planck diffusion equation where the diffusion coefficient has a spread of values with maximum uncertainty around a mean:

integral exp(-1/(x*t))*exp(-x)/sqrt(x*t)*dx from x=0 to x=infinity

Note that the 1/sqrt(t) is retained after the smearing is applied. This is really a broad-brushing of the mathematics, but IMO that is the only possible way we have of solving these problems while maintaining an intuitive sense of what is happening.
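As a rough check that the 1/sqrt(t) dependence does survive the smearing, the integral can be evaluated numerically (my own sketch; the substitution x = u² removes the integrable singularity at x = 0). For large t the value approaches sqrt(π/t), i.e. still a 1/sqrt(t) fall-off.

```python
import math

def smeared(t, n=20000, umax=12.0):
    """Evaluate integral_{x=0}^{inf} exp(-1/(x*t)) * exp(-x) / sqrt(x*t) dx
    by the midpoint rule after substituting x = u^2 (dx = 2u du)."""
    du = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        total += 2.0 * math.exp(-1.0 / (u * u * t)) * math.exp(-u * u) * du
    return total / math.sqrt(t)

for t in (10.0, 100.0, 1e4, 1e6):
    # multiplying by sqrt(t) should approach sqrt(pi) ~ 1.7725 for large t
    print(t, smeared(t) * math.sqrt(t))
```

The product smeared(t)·sqrt(t) climbs toward sqrt(π), confirming that the smeared profile still decays like 1/sqrt(t) at long times.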

BTW, one can do the same thing with a mix of diffusion and convection and it also works out relatively easily. Further, the Fokker-Planck master equation works for both heat and for particle motion so that there is a bit of unification of the principles as it may apply to globally averaged heat evolution as well as CO2 concentration evolution. That is the kind of change of thinking I am after. Forget about applying the Wiener process as conventionally done to bring in uncertainty and noise, and instead think about a coarse-grained smearing of the parameter space. I have applied this to several other smaller-scale problems and it does work well (IMO), yet I realize that applying it to the earth as a whole is taking a big bite, so I am looking for others that may be able to make some progress in this direction.

Pekka

You’re right, it’s mainly not diffusive in the strict sense, rather movement along surfaces of constant density (but not constant depth) and mixing. However, modelling the process as diffusion with an “effective” diffusivity of somewhere between 0.5 and 4 x 10^-4 m^2/s – roughly three orders of magnitude greater than the molecular diffusivity – seems to work well, dealing with the effects of upwelling as well, and is pretty standard. I think it stems from one of Hansen’s papers in the 1980s, but is still in general use.

Hansen originally reckoned that about 1 x 10^-4 m^2/s was the right figure, and subsequent research supports that estimate, but most AOGCMs generate a much higher effective diffusivity, I think typically 4 x 10^-4 m^2/s but some as high as 24 x 10^-4, due presumably to shortcomings in their ocean circulation modelling. Hansen’s long 2011 paper deals with this issue in some detail. He concludes that it means AOGCMs have aerosol forcing that is not nearly negative enough, but the (to my mind more likely) alternative is that they have climate sensitivity much too high.

It’s always albedo, albedo.

==============

Or both. “Sensitivity” to CO2 is likely net negative.

Nic,

My main complaint with that is that modelling something as diffusion, when it is not, can be expected to work only where it is empirically confirmed. One should never extend such an approach to anything else.

This must be related to the issue of “ventilation”. The introductory book on Atmospheric Science by Wallace and Hobbs states on that:

“Still at issue is just how this ventilation occurs in the presence of the pycnocline. One school of thought attributes the ventilation to mixing along sloping isopycnal (constant density) surfaces that cut through the pycnocline. Another school of thought attributes it to irreversible mixing produced by tidal motions propagating downward into the deep oceans along the continental shelves, and yet another to vertical mixing in restricted regions characterized by strong winds and steeply sloping isopycnal surfaces, the most important of which coincides with the Antarctic circumpolar current, which lies beneath the ring of strong westerly surface winds that encircles Antarctica.”

All the mechanisms described in the above quote also affect temperature profiles, although the quote is presented in connection with chemical composition. One should, however, be careful in conclusions concerning the relative strengths of the effects on salinity, CO2 concentration and temperature, because the density depends on both salinity and temperature, and that may make the apparent diffusion different for each of the factors.

It’s not like we can put breakthrough tracers on heat like we can on contaminants in a solution. So I don’t know how this will ever get empirically confirmed. I think it will only ever get modeled as a superstatistics averaging of convection with random-walk elements that will end up looking somewhat like diffusion. It would seem that was what Hansen was getting at?

If I am talking past you guys I am sorry. My intuition tells me that the GCM models don’t have a lot to add in terms of the average global temperature. Perhaps it has some second order effects, but the big hitter is still the amount of CO2 in the atmosphere. If we need the GCM to discriminate between short-term variations and the long-term trend then I think I see why it is necessary to model.

A lot of people are concerned about the simplified ocean used in their model and a lot of people would be right. The question is what could be done about it.

I will take the time to try and boil the issue down at least to my own satisfaction.

A generalisation can be made of which their slab ocean is a special case.

Consider some function R(t) which gives a temperature series in response to a unit pulse of heat energy entering the system. Eventually R(t) may decay to zero and if λ is meaningful it must decay to zero.

For a unit pulse

1 = Integral[λR(t)dt] {0,∞}

which states the conservation of energy (heat in at t=0 the unit pulse eventually equals heat out by t=∞) where the flux is given by the temperature R(t) times λ as in their paper.

At any time (t) the proportion of the unit pulse still stored is given by

1 – Integral[λR(t)dt] {0,t},

In their slab ocean case

R(t) = Exp(-λt/C)/C

and the proportion stored

1 – Integral[λExp(-λt/C)dt/C] {0,t} = Exp(-λt/C) = R(t)C as expected

That the proportion of the heat stored is so simply related to the instantaneous temperature makes the use, and perhaps overuse, of a slab ocean so tempting.
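That bookkeeping is easy to verify numerically. A minimal sketch with arbitrary illustrative values of λ and C (not taken from any paper): integrate λR(t) on a fine grid and confirm that the stored fraction 1 − ∫λR dt equals C·R(t).

```python
import math

lam, C = 1.1, 7.0          # arbitrary illustrative values
dt, n = 1e-4, 200000       # integrate out to t = 20

R = lambda t: math.exp(-lam * t / C) / C   # slab-ocean impulse response

# midpoint-rule integral of the heat flux lam*R(t) out of the system
absorbed = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    absorbed += lam * R(t) * dt

t_end = n * dt
stored = 1.0 - absorbed
print(stored, C * R(t_end))   # the two agree: stored fraction = C*R(t)
```

Both numbers come out as exp(−λt/C), confirming the stated identity for the slab case.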

The other main 1-box candidate for the ocean is the diffusive case.

Here there is a limiting case for λ=0

R(t,0) = α/Sqrt(t) where α can be derived from the diffusivity and thermal capacity.

The general case R(t,λ) is not easily stated, but can be calculated by the deconvolution of R(t,0) with the unit impulse function to which an impulse term 1/λ at t=0 is added, the whole being deconvoluted with the unit impulse function. This is, I think, the same as can be achieved using Laplace transforms in the continuous case.

My point is that we can model using response functions with some simple parameterizations. We can by the same method we combined λ with the diffusive ocean, combine a surface slab above a diffusive deep ocean etc.

These response functions can in turn be transformed into more model friendly autoregressive functions AR(∞) on T.

The infinity is an issue. Generally, with the exception of a few special cases (slab ocean or no thermal mass at all), the current temperature is dependent on the entire prior temperature history and, of course, the current net flux. Not having a complete temperature history is thus a source of uncertainty, but this could be constrained analytically or by adding plausible prehistoric temperature histories from some distribution.

What this amounts to is that we could embed a state machine into our model consisting of a state vector holding all available prior temperatures and an AR based update mechanism in a way that is parameterisable from a small number of identifiable physical parameters and constants (slab depth, diffusivity, thermal mass) plus our estimate for λ.

Although one stage removed, this AR(∞) function is related back to response functions from which we could calculate the retained heat, and hence there is an opportunity to constrain our response function by using the OHC record. Alternatively one could calculate the oceanic heat flux history.

Also it is possible to transform these response functions to the frequency domain allowing comparison to the real world temperature spectra given a choice of a forcing history.

Finally one can deduce the statistical properties of the system and so obtain uncertainty intervals for test functions, like deviation of the mean and of the slope, given some other assumptions, maybe Gaussian noise residuals.

All of this is in the realm of the possible and could perhaps be used in a Kalman filter type learning machine. It is not perfect but it should be possible to produce a wealth of plausible and constrained thermal models to add to the mix, weighted according to their expert plausibility and how well they meet the constraints.
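A toy version of the response-function machinery sketched above can be written as a discrete convolution (my own sketch, not the paper's model; parameter values are arbitrary, and the diffusive kernel is the λ = 0 limiting case α/sqrt(t) with the t = 0 singularity crudely truncated):

```python
import math

def emulate(forcing, kernel, dt):
    """Temperature as the discrete convolution of a forcing series with an
    impulse-response kernel R: T(t_k) = sum_j F(t_j) * R(t_k - t_j) * dt."""
    n = len(forcing)
    return [sum(forcing[j] * kernel[k - j] for j in range(k + 1)) * dt
            for k in range(n)]

dt, n = 0.1, 400
lam, C, alpha = 1.0, 5.0, 0.5     # illustrative parameters only

# slab-ocean kernel exp(-lam*t/C)/C and truncated diffusive kernel alpha/sqrt(t)
slab = [math.exp(-lam * k * dt / C) / C for k in range(n)]
diff = [alpha / math.sqrt(max(k, 0.5) * dt) for k in range(n)]

ramp = [0.01 * k * dt for k in range(n)]   # linearly increasing forcing
T_slab = emulate(ramp, slab, dt)
T_diff = emulate(ramp, diff, dt)
print(T_slab[-1], T_diff[-1])
```

The same convolution code serves any R(t), so swapping in a combined slab-over-diffusive kernel, or constraining the kernel against OHC, is just a change of the `kernel` list.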

Hopefully I have not made any silly mistakes with the equations or at least none that will cause confusion. Obviously there are more details but I hope this will give some indication of how I see a possible resolution to some of the many concerns relating to the thermal model that have been expressed.

I will deal briefly with one. The notion of what is and is not the TCS is not really fixed analytically by what I have described. Rather, it could be deduced from the model’s response to the type of forcing profile that is considered appropriate for defining the TCS.

R(t) functions can be produced for many responses, including resonances; these could be added, but I think that, given we have only one long-run observable (the temperature history), keeping the number of not directly observable parameters to a minimum might be a sensible idea.

I hope this is of interest but as I got something out of the writing it is already a benefit.

Alex

I will save this comment for my archives because it is the closest to my own thinking on the topic. I have been pushing diffusion and the 1/Sqrt(t) profile as the explanation for the “fat-tail” in the residence cum adjustment time that the IPCC has been publishing.

I punched these numbers in last night and this is what it looks like in comparison to a disordered diffusion profile:

The boxes they talk about are simply a model of maximum entropy disorder. If the maximum entropy disorder is governed by force (i.e. drift), the profile will go toward 1/(1+t/a), and if it is governed by diffusion (i.e. random walk) it will go as 1/(1+sqrt(t/a)). Selecting a weighted sum of exponential lifetimes is exactly the same as choosing a maximum entropy distribution of rates, and I bet that whatever their underlying simulation is, it will asymptotically trend toward this result. The lifetime they choose for infinity is needed because otherwise the fat tail will eventually disappear.
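The equivalence claimed here between a weighted sum of exponential lifetimes and a maximum-entropy distribution of rates can be checked directly: averaging exp(−rt) over an exponential rate distribution with mean r0 gives exactly 1/(1 + r0·t). A minimal numerical sketch (my own):

```python
import math

def mixture(t, r0=1.0, n=100000, rmax=50.0):
    """Average exp(-r*t) over an exponential (MaxEnt) distribution of decay
    rates with mean r0, by midpoint quadrature over r."""
    dr = rmax / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += math.exp(-r * t) * math.exp(-r / r0) / r0 * dr
    return total

for t in (0.5, 2.0, 10.0):
    print(mixture(t), 1.0 / (1.0 + t))  # the two columns agree
```

With r0 = 1 the mixture reproduces 1/(1+t) to quadrature accuracy, which is the drift-limit profile referred to above.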

WHT,

It seems to me to be a question of what works and what physical justification we can give for it working.

I think I have read your thoughts elsewhere with respect to the use of power laws, and 1/sqrt(t) is a special case which has some justification, some logic, in it. In terms of finding some optimal predictor of the temperature at each timestep, power laws seem to be the simplest (have the least number of parameters) with the most suitable properties. FWIW I have found that exponents closer to -0.3 rather than -0.5 (1/sqrt(t)) are the better mimic, give rise to more optimal predictors.

It does not surprise me that multibox solutions like the one you describe are similar to a power law. I do not know why, but viewed on the basis of selecting things that produce better results over inferior ones, power laws seem to be a simple but flexible way of capturing LTP-type behaviour.

For those of us that have used and marvelled at the flexibility of certain functions, as exemplified by French curves: we would keep the power laws, not the exponentials, if we wished to illustrate any but the most trivial of response functions.

The set of conductance λ, capacitance C and diffusivity k, is the minimal basis set. There seems to be no true analogue of inductance or thermal inertia/momentum other than what it might inherit due to real motion of the fluid.

Forced to choose I would keep diffusivity over capacitance but prefer the complete set of power laws.

BTW the 1/λ term I used is I think a typo (stupid mistake) and should be λ.

I am glad you liked it. There are professionals in the climate field that do work with response functions (usually a response to a step rather than an impulse). This makes a lot of sense to me. Hansen does, but I think he made a bit of a hash of his ad hoc modification to their simulator’s (ModelE?) response function to decrease the thermal mass, in his paper concerning the determination of the sensitivity from ice age data.

It seems to be the case that some, commonly amateurs, that have used power-law-derived response functions or similar to match the observables (temp and OHC) have concluded that the simulators (GCMs) have too much thermal mass in their oceans. I did, and I still stick to the line that the heat cannot be missing; it simply was not there in the suspected amounts.

Alex

Definitely the fewer the parameters one can use to fit some behavior, the better. When it comes time to use some metric criteria such as AIC or BIC, the fewer parameter model will score higher. That is independent of the fact that it might also explain the situation better.

Well, in the time domain, an inverse power law is the heat kernel solution to the heat equation. Near x=0, the exp() part quickly goes to one and all that is left is the 1/sqrt(t) dependence. This is the impulse response to a delta-function sheet placed on a surface layer, which then diffuses in both directions.

In reality the reflecting boundaries are asymmetric, as in air vs. water, but I believe that the only change is to the exp(), which becomes an erf(). The connection is that the multi-box solution is simply a finite element analysis used to solve the partial differential equation.

I am not really sure who the amateurs are and who the people that know something are. I am improving my understanding in a vacuum, and my comments may strike some as sounding too pedantic.
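The multi-box-equals-discretised-diffusion point can be illustrated numerically (my own sketch, with arbitrary grid parameters): a chain of identical boxes exchanging heat with neighbours is an explicit finite-difference solution of the heat equation, and the temperature of the box that received a unit pulse decays as 1/sqrt(t).

```python
nbox, k = 128, 0.2          # number of boxes and exchange (diffusion) coefficient
T = [0.0] * nbox
T[nbox // 2] = 1.0          # unit heat pulse injected into the middle box

samples = {}
for step in range(1, 4001):
    # explicit finite-difference diffusion step (periodic boundaries)
    T = [T[i] + k * (T[i - 1] - 2 * T[i] + T[(i + 1) % nbox])
         for i in range(nbox)]
    if step in (250, 1000, 4000):
        samples[step] = T[nbox // 2]

# quadrupling the elapsed time should halve the peak temperature,
# i.e. the impulse response decays as 1/sqrt(t)
print(samples[250] / samples[1000], samples[1000] / samples[4000])
```

Each ratio comes out close to 2, the signature of 1/sqrt(t) decay; the small drift at late times is wrap-around from the periodic boundary, the discrete analogue of the image terms mentioned above.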

Alex,

What you write seems to make sense to me. As you say, the model needs to keep track of the temperatures at all relevant depths of the ocean, not just the mixed layer or mixed layer + one-slab deep ocean, or alternatively of the entire history of heat flux into the ocean (although further computations would then be needed, I think).

No doubt the analysis could, as you say, instead be performed in the frequency domain, but I imagine that would involve greater complications.

The zero-dimensional case of global average temperature can easily be transformed to the frequency domain, but for anything more complex that may be very difficult or impossible in practice.

Nic,

I think I intended to indicate that it could be constrained in the frequency domain rather than performed in that domain. We could check to ensure that we had something approaching the observed balance between high and low frequency components.

We can do what I laid out, I have checked that the various transformations can be carried out. The parameters can be constrained by OHC if one makes some guess at the build up of HC below the ~700m cutoff.

It is a model, in the sense that it is a mimic of the globally averaged observables. The averaging seems to have the effect that things that we know to be true, such as the surface mixed layer, are a better physical model but a poorer mimic of observations, in my opinion.

On another point, we commonly consider that we know the temperatures more accurately than the forcings. That is probably very true but may be deceptive. In a sense we need to form a quotient of expected or underlying values of the forcings and temperature; in that case we may not know temperatures any better than we know the forcings. Models or mimics such as I proposed produce considerable variation or wandering in the temperature, maybe +/- 0.2C, and can be off by that amount at decadal scales when driven with noise of the same amplitude as the noise implied by the residuals when the model is presented with the observations. This is due to a tendency to find and mimic large amounts of low-frequency variation, some sort of LTP, in the observations. This is satisfying in the sense that it nudges us towards caution when setting confidence intervals for the relationship between forcing and its temperature response. I am not sure that there is much physics in this, but if it looks like a duck today it just might look and behave like a duck tomorrow, next week and for the next decade in some statistical sense, but it isn’t a duck. It is my suspicion that this model is in large part a mimic of the global averaging process; that makes it sound trivial, but although insufficient it is necessary.

Alex

This is a continuation of an upthread discussion with Alex Harvey on the interpretation of the equations 2.1 to 2.3 in Padilla et al. Some of the ambiguities are clarified by reference to a very informative 2008 paper by Gregory and Forster, who used a similar set of equations to estimate transient climate sensitivity responses.

Probably the first thing to note is that both papers arrived at similar estimates – for GF-08, the TCR range was 1.3 – 2.3 K (95% uncertainty limits), and for Padilla et al, the related TCS range is 1.3 – 2.6 (90% confidence).

I found the rationale in GF-08 more clearly explained because the terms were less ambiguous, but otherwise the equations were similar, with GF-08 referring to the Padilla sensitivity parameter λ as ρ. GF-08 did not attempt to separate out the heat capacities of the deep ocean vs the mixed layer, but simply noted that the climate system will gain heat as a function of forcing F from which is subtracted the radiative heat loss due to a temperature increase. The heat will be stored almost entirely in the ocean, and so a temperature change ΔT due to F will result in a loss of surface heat both to space and to the ocean at a rate defined by ρ. GF-08 regressed ΔT against F to derive their value of ρ, from which TCR (at 2xCO2) can be calculated as F(2xCO2)/ρ.
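The regression step described here can be sketched in a few lines with synthetic numbers (purely illustrative; nothing here is GF-08's data, and F(2xCO2) ≈ 3.7 W m^-2 is the conventional value): generate ΔT = F/ρ plus noise, fit ρ by least squares through the origin, and convert to TCR.

```python
import random

random.seed(0)
rho_true = 2.0                          # W m^-2 K^-1, illustrative only
F = [0.04 * yr for yr in range(100)]    # synthetic ramp forcing, W m^-2
dT = [f / rho_true + random.gauss(0.0, 0.1) for f in F]

# least-squares slope of dT on F through the origin estimates 1/rho
slope = sum(f * t for f, t in zip(F, dT)) / sum(f * f for f in F)
rho_hat = 1.0 / slope
TCR = 3.7 / rho_hat                     # K per CO2 doubling
print(rho_hat, TCR)
```

With these made-up numbers the fit recovers ρ near 2 and hence a TCR near 1.85 K; the point is only to show how ρ and TCR are related, not to reproduce either paper's estimate.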

In addition to results of model simulations, GF-08 used linear regression from observed values of F and T, with their associated uncertainties. Padilla et al, by using a recursive Kalman filter operation that allowed each estimate of a TCS range to be employed as a Bayesian prior on which the next set of observations was imposed, were able to progressively update and (over most intervals) narrow the 90% confidence limits for the range of TCS values. The method allows the process to continue with future observations. It also enabled the authors to quantify the sensitivity of their estimated probability ranges to the magnitude of various uncertainties in forcings and natural variability, both alone and in combination. This may be more efficient than the GF-08 attempts to evaluate the effects of uncertainty, but I’m not familiar enough with the method to judge this.
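The recursive narrowing described here can be illustrated with the simplest scalar case (my own sketch, emphatically not the authors' nonlinear filter): each noisy observation of a fixed parameter is blended with the prior by precision weighting, and the posterior spread shrinks with every update.

```python
import random

random.seed(1)
truth = 1.6                 # "true" sensitivity, illustrative only
obs_sd = 0.5
obs_var = obs_sd ** 2

mean, var = 3.0, 4.0        # deliberately poor, wide prior
widths = []
for _ in range(50):
    y = truth + random.gauss(0.0, obs_sd)
    gain = var / (var + obs_var)        # Kalman gain for a static scalar
    mean = mean + gain * (y - mean)     # posterior mean update
    var = (1.0 - gain) * var            # posterior variance shrinks
    widths.append(var ** 0.5)

print(mean)                    # converges toward 1.6
print(widths[0], widths[-1])   # posterior spread narrows with each update
```

This is the scalar, static-parameter limit of Kalman filtering: the same prior-to-posterior recursion that, in Padilla et al's far richer setting, lets each new year of data tighten the TCS interval.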

The fact that these two similar but non-identical approaches yielded similar results adds weight to their individual findings, but there are also differences in methodology that should be noted. I will be interested in the opinions of others who compare the two papers as to their relative advantages and limitations.

Hey Fred,

The Padilla et al model resembles a simplified adaptive feed forward control design. Without the code it is hard to say much about it other than what y’all have already said. I would imagine this would be a first step or kernel that could be used to develop a serious model. Like Alex said, it is a learning machine.

AFF has made tons of advances since I saw my first system in the early nineties. As long as their assumptions are in the ballpark, it will approach a reasonable answer. The next kernel will probably be a lower frequency input and then it is just a matter of money how far you want to go. It can be a much superior model since it learns on the fly.

In the paper the authors mentioned a twenty-year time frame for significant change in the T_o variable. With a kernel for that variable, the model should be able to determine, to a point, the degree of non-linearity of lambda, if there is any.

To be useful for accurate forecasts it would take a good deal of sprucing up. It would be outstanding for reanalysis of data with just the added T_o kernel. I am guessing of course, but they look to be on the right path.

Annan and Hargroves model work includes the Kalman filter, http://www.jamstec.go.jp/frsgc/research/d5/jdannan/C-GOLDSTEIN_ENKF_2.pdf

Their results in this paper were 1.6 C for 70 years, with 2.9 C at equilibrium. There is a caveat about the THC overturning stopping. Theirs is still considered a simple model.

Thanks for the link. The first author is Hargreaves, but my actual question is what does the “C-GOLDSTEIN” in the name refer to? There is no Goldstein in the author list.

E&M(2003) and Annan et al(2004). See 2.l sup.


I don’t know why I keep typing Hargroves. C-Goldstein is the name of the model. Don’t know all the history behind that.

Thank you for the Gregory and Forster link.

Fred Moolten wrote this:

I will be interested in the opinions of others who compare the two papers as to their relative advantages and limitations.

I am trying to organize a session on this topic for next year’s Joint Statistical Meetings, July–Aug 2012 in San Diego, CA (check out http://www.amstat.org). If anything comes of this I’ll let everyone know.

I wrote to the corresponding author, Geoff Vallis, and suggested that they produce a longer more technical article for the Annals of Applied Statistics. I also mentioned the possible sharing of data and code. If anything comes of this I’ll let everyone know.

How important are these uncertainties compared to the uncertainty in TSI of ~ 0.8K?

See:

Uncertain, impaired, models

Accurate radiometry from space: an essential tool for climate studies

Nigel Fox et al.

TRUTHS (Traceable Radiometry Underpinning Terrestrial- and Helio- Studies)

TRUTHS

Seeking the TRUTHS about climate change video

With two papers addressing similar phenomena – the Padilla et al 2011 paper (“transient climate sensitivity”) and Gregory and Forster 2008 (“transient climate response”) – it may be worthwhile to step back and consider the virtues and disadvantages of these approaches in comparison with more familiar estimates of equilibrium climate sensitivity. Below is my attempt to do this. Comments will be welcome.

The most obvious advantage is practical – TCS and TCR (henceforth simply TCR for short) estimate temperature responses to a persistent and/or increasing forcing over a period of decades rather than centuries for ECS and are thus more relevant to expectations for the remainder of the current century. The temperatures they estimate at a chosen time point (e.g., about 70 years for a CO2 doubling if CO2 rises at 1% per year) will be only a fraction of a final equilibrium temperature for a persistent forcing, and so concerns for subsequent centuries, while much less pressing, can’t be dismissed entirely.

Can we translate TCR values into ECS or vice versa? The question is relevant mainly because we already have many ECS estimates and so it is worth knowing whether the TCR estimates in the papers are what might be expected from previous ECS estimates – in other words, do the TCR values suggest that climate sensitivity is greater or smaller than we otherwise might have thought?

I’m not aware of any simple method for interconverting the values, because that involves assumptions about ocean heat uptake and other variables. However, based on plausible estimates for the shape of the equilibration curve, I believe we can roughly estimate that to translate a TCR range into one for ECS would require multiplying the lower bound by about 1.3 and the upper bound by something in the neighborhood of 2. For example, a TCR range of 1.3 to 2.3 deg C (see GF-08) might translate into an ECS of about 1.7 to 4.6 deg C. Thus, the TCR values are commensurate with the typical ranges quoted for ECS.

There is a second potential advantage of TCR over ECS that is less obvious – a lesser dependence on assumptions that are difficult to verify. For this purpose, I’ll use the symbols from GF-08 rather than Padilla et al. For comparison, however, the following are the GF-08 symbols with the more or less equivalent Padilla symbols in parentheses: α (γ), κ (β), and ρ (λ). During a persistent climate forcing, F, and before equilibrium is reached, the climate system is gaining heat to raise surface temperature. This surface heat gain is reduced by radiative heat loss to space in proportion to the temperature change, expressed as αΔT, and also by heat loss to the ocean at a rate given by N = κΔT, where N is the net heat influx into the climate system after subtracting loss to space (N = F – αΔT), and κ describes the proportionality of ocean heat transfer to temperature change. At equilibrium, N is zero by definition, and so ECS can be estimated simply from α: for doubled CO2, ECS = F(2×CO2)/α.

However, when we are not at equilibrium, getting κ and α right is daunting because of the difficulty of making very precise measurements of either ocean heat transfer or net change in radiative flux at the tropopause, and this difficulty complicates estimates of ECS, requiring modeled estimates of ocean heat transfer rather than being able to rely only on observations. The TCR method circumvents this problem by combining α and κ into a new parameter, ρ = α + κ, which indicates how much heat is being lost in both directions (space and ocean) without specifying how that heat loss is divided up. From the above, we have F = N + αΔT = κΔT + αΔT, which is to say that F = ρΔT.

The beauty of this formulation is that its calculation of a sensitivity value – the relationship between forcing and temperature change – can be based on surface temperatures without requiring flux measurements. This eliminates a major uncertainty surrounding attempts to estimate ECS from flux changes.
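As a worked example of the relations above, with round illustrative numbers (these are convenient figures for the sketch, not values taken from GF-08 or Padilla et al):

```python
# Round illustrative numbers, not values from GF-08 or Padilla et al.
F_2x = 3.7      # W m^-2, canonical forcing for doubled CO2
alpha = 1.3     # W m^-2 K^-1, radiative restoring to space
kappa = 0.7     # W m^-2 K^-1, ocean heat-uptake efficiency

rho = alpha + kappa     # combined parameter: F = rho * dT while warming
TCR = F_2x / rho        # transient response, from surface temperature alone
ECS = F_2x / alpha      # equilibrium response, where N = kappa*dT -> 0
print(f"TCR = {TCR:.2f} K, ECS = {ECS:.2f} K, TCR/ECS = {TCR/ECS:.2f}")
```

Note that the ratio TCR/ECS = α/(α + κ) in these symbols is the same quantity written as λ/(λ + β) in the Padilla notation further down: the stronger the ocean uptake κ, the smaller the transient response relative to equilibrium.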

The utility of the F = ρΔT approach is limited to circumstances of persistent forcing and particularly forcing that is increasing with time, as would be the case with rising CO2. It breaks down near equilibrium or with transient forcings that typically fluctuate up and down, where there would be no constant or near constant value for ρ.

Certain recent attempts to estimate ECS from radiative flux data have yielded values much lower than typically estimated, exemplified by papers from Lindzen/Choi 2009 and 2011, and Spencer/Braswell 2010. Without asking whether those estimates are correct, we can ask whether, if correct, they are inconsistent with the TCR values reported in these papers. The apparent answer is that they are not. Most of those data were derived from fluctuating radiative imbalances due to ENSO changes, and there is no reason to expect that the climate would respond similarly to those changes and to increasing CO2 concentrations. At this point, it is reasonable to consider the TCR values as reinforcing the level of responsiveness of climate to CO2 forcing as derived from most ECS estimates computed from long term data, but this remains to be evaluated further.

Finally, TCR is not a prediction but only an “if/then” exercise.

*If* CO2 is steadily increasing, and *if* nothing else is changing, the TCR predicts that a CO2 doubling over 70 years will probably result in a warming between 1.3 and 2.3 C, with additional warming beyond that time as the climate continues to move toward equilibrium. This is of course unrealistic, but it is probably not unrealistic to conclude that if CO2 rises in that fashion, the temperature in 70 years will most likely be about 1.3 to 2.3 C warmer than it would have been otherwise.

That is a very good summary, Fred. One note though:

The IPCC range is based on the Hansen/Manabe compromise. The estimates of ECS listed on

http://bartonpaullevenson.com/ClimateSensitivity.html

made since 2000 average 2.7 C (using only one Boer et al estimate, 2.8 if you include both). Hargreaves and Annan, not what I would call skeptics, recommend an upper limit of 4 C.

The reason for the relatively low upper limit for the likely range given by Annan and Hargreaves lies in the prior that they use and that’s used also by Padilla et al. They both prefer a non-divergent prior for the feedback strength. In the case of Padilla et al. that prior is chosen as Gaussian, for which they have tried three different widths, all wide enough to have little influence on the final outcome. Annan and Hargreaves discuss a Cauchy distribution for the prior on climate sensitivity, which also leads to a non-divergent prior for feedback strength.

The non-divergence of the prior in feedback strength is essential, because the high upper limits for the range of climate sensitivity are due to a fat tail that implies significantly non-zero probabilities for very high climate sensitivity. Such a tail corresponds to a diverging PDF for the feedback strength.

The fact that the climate excursions have not been very strong in the warmer direction appears to be evidence for a significantly positive λ in the Padilla notation, and even for the response over periods much longer than the 70 years they consider. This observation supports the use of a non-divergent prior for λ, in agreement with both Annan & Hargreaves and Padilla et al.

The ratio TCS/ECS is given approximately by λ/(λ+β) in the notation of Padilla et al. Thus the difference is larger the stronger the heat transfer to the deep ocean is over the transient period.

It’s also worthwhile to notice that the rate of transfer of CO2 to the deep ocean is likely to be proportional to the rate of heat transfer to the deep ocean. Even with a Revelle factor around 10, the deep ocean takes up at equilibrium some 70–80% of all CO2 that participates in the carbon cycle, when slow geological processes are excluded from the calculation. This observation supports the comment of Padilla et al that the warming due to peaking CO2 emissions may stay close to the TCS over much longer periods as well, because the assumption of constant CO2 concentration that is the basis for ECS will be violated, with a decay time constant similar to the one involved in approaching ECS in the absence of this decay in CO2 concentration.

The rate of CO2 uptake in the deep ocean is an interesting problem. A few more years of a neutral trend should clear that issue up somewhat. If the ocean heat content data were more accurate, that would be a huge help. The rate of uptake by the mixed layer should increase with the differential between the atmosphere and the mixed layer, which in turn should increase the rate of deep-ocean uptake from the mixed layer. How much of that would be related to change in ocean heat uptake, I am not sure.

The quality of the data we do have for the past decade does tend to indicate that beta is small and that OHC is neutral now. With the more recent information on cloud feedback and change, I believe the choice of priors should include lower values, including negative ones, which should reduce the fat tail. Then again, I have felt that way for some time because of my system-controls perspective.

If a Cauchy, which is a very fat-tailed distribution (no moments defined), doesn’t make what it is applied to diverge, then likely nothing will.

True, but you know how frequentists are, it can’t be correct without a fat tail.

The divergence that I’m talking about is the one related to the simultaneous distributions of x and 1/x. The Cauchy distribution has a finite integral. Thus the PDF of y=1/x doesn’t diverge at y=0, while a flat distribution cannot be integrated and corresponds to a non-integrable divergence at zero for the inverse.

In practice the divergence is killed artificially by choosing some arbitrarily large cutoff, but that doesn’t solve the problem if the final outcome depends on the value of that cutoff. This is exactly what happens for the upper limit of climate sensitivity when people use flat priors with a cutoff for the climate sensitivity. This was a central point in the paper of Annan and Hargreaves. The empirical data supports values lower than 4 K or so, but it doesn’t exclude large values strongly enough to cancel the infinite weight given by a flat distribution to all values of climate sensitivity above some positive value, while the Cauchy distribution falls fast enough to make the problem disappear.

We have the problem of a non-integrable flat prior distribution for climate sensitivity, and this manifests itself as a non-integrable divergence at zero in the prior for the inverse, which is the (negative of the) total feedback strength including the so-called Planck response. For the feedback strength excluding the Planck response, the divergence is moved to 1.0. The arbitrary cutoff at high climate sensitivity corresponds to a cutoff at some small distance from zero for the total feedback strength.

If we accept that the prior for climate sensitivity should be integrable over all positive values without artificial cutoffs, we have no divergence or an integrable divergence for the inverse at zero. If the prior of the climate sensitivity has a Cauchy distribution or any other distribution that falls at least as strongly for large climate sensitivity, there’s no peaking for the prior of the inverse at zero.
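The cutoff dependence can be made concrete with a small change-of-variables calculation. For simplicity this sketch uses a half-Cauchy centered at zero; the actual prior discussed by Annan and Hargreaves differs in detail, and the threshold 0.2 for "near-zero feedback" is an arbitrary choice for illustration:

```python
from math import atan, pi

# Prior mass on near-zero total feedback lam = 1/S implied by a flat prior
# on climate sensitivity S ~ Uniform(0, S_max] (illustrative numbers only).
def mass_below(lam0, S_max):
    # P(lam < lam0) = P(S > 1/lam0) under the flat prior with cutoff S_max
    return max(0.0, (S_max - 1.0 / lam0) / S_max)

for S_max in (10.0, 100.0, 1000.0):
    print(f"S_max = {S_max:6.0f}:  P(lam < 0.2) = {mass_below(0.2, S_max):.3f}")
# The answer keeps climbing toward 1 as the cutoff grows: the fat tail in S
# is a divergence at lam = 0 whose weight is set by the arbitrary cutoff.

# A half-Cauchy prior on S, p(S) = (2/pi)/(1 + S^2), induces via lam = 1/S
# the same half-Cauchy density on lam, finite at lam = 0 and cutoff-free:
p_small_lam = (2.0 / pi) * atan(0.2)   # P(lam < 0.2), fixed and small
print(f"half-Cauchy prior: P(lam < 0.2) = {p_small_lam:.3f}")
```

The flat-prior column of numbers is the point being made above: with the flat prior the probability assigned to very high sensitivity (near-zero feedback) is whatever the cutoff makes it, whereas the Cauchy-type prior gives a cutoff-independent answer.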

I wonder if many denizens are aware of the work of Mary Selvam? I have been looking at the issues surrounding non-ergodic systems, of which climate, fluid dynamics and economic cycles are some of the main ones that come to mind.

This link is a good read and while I am not quite up to some of the proofs, the conclusions seem to be particularly of interest to climatology modelling and prediction. http://arxiv.org/ftp/physics/papers/0503/0503028.pdf

This link more specifically addresses probabilistic modelling of climate variability over time.

http://xxx.lanl.gov/html/physics/0105109

I like it. The periods predicted are very interesting. Nothing like blending classic physics and quantum physics, the inverse square law of climate variability.

What I find really interesting in Selvam’s body of work is the way that the work of others is synthesised and the way their results fit with observed phenomena in both small and large systems.

The papers are well referenced and represent a distinct step forward in dealing with non-linear chaotic systems and how the problems of prediction might be controlled using ordinary stochastic techniques.

The model postulated seems to provide some reasonable fits with short term and decadal fluctuations in observed climatic conditions over time, but I am still not clear how reliable longer term climate change prediction might occur.

It’s similar to the unpredictability of future changes in genetic mutations in evolutionary biology, but with natural selection giving reasonable fits to short-term biological modifications in species responding to environmental circumstances.

The hits keep coming.

http://theoilconundrum.blogspot.com/2011/10/temperature-induced-co2-release-adds-to.html

This is a model of positive feedback based on temperature-induced CO2 release. Once again, don’t really know how much people are pursuing this angle.