WHT on Schmittner et al. on climate sensitivity

by WebHubTelescope
Schmittner et al. have written a paper titled “Climate Sensitivity Estimated from Temperature Reconstructions of the Last Glacial Maximum” (Science, Nov. 24, 2011).


Abstract [link]


Assessing impacts of future anthropogenic carbon emissions is currently impeded by uncertainties in our knowledge of equilibrium climate sensitivity to atmospheric carbon dioxide doubling. Previous studies suggest 3 K as best estimate, 2 to 4.5 K as the 66% probability range, and nonzero probabilities for much higher values, the latter implying a small but significant chance of high-impact climate changes that would be difficult to avoid. Here, combining extensive sea and land surface temperature reconstructions from the Last Glacial Maximum with climate model simulations, we estimate a lower median (2.3 K) and reduced uncertainty (1.7 to 2.6 K 66% probability). Assuming paleoclimatic constraints apply to the future as predicted by our model, these results imply lower probability of imminent extreme climatic change than previously thought.


The main theme of the paper is estimating climate sensitivity more accurately. Several blogs have already interpreted the findings (see links below), with the common revelation or concern being the lack of a fat tail in the sensitivity distribution.

The lack of fat tails is visible when we plot Schmittner’s climate sensitivity PDFs on a semi-logarithmic scale, as shown in the figure below. Note the steep fall-off on all the PDF curves apart from the land-only curve.

[Figure: Schmittner et al. climate sensitivity PDFs plotted on a semi-logarithmic scale]

Fat-tail distributions are quite common in natural phenomena and usually derive from the propagation of uncertainty when ratios of parameters enter the model of an observable. Some well-known examples come under the heading of Ratio Distributions. Applying parametric ratios leads to the asymmetric, right-skewed climate sensitivities that occur in many studies (as shown below, taken from the Roe and Baker 2007 Science article).
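To make the mechanism concrete, here is a toy Monte Carlo sketch in the spirit of Roe and Baker: a symmetric Gaussian uncertainty in a feedback factor f turns into a right-skewed, fat-tailed sensitivity S = S0/(1 − f) because of the ratio. All numbers are illustrative, not taken from any particular study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Roe-and-Baker-style toy model: sensitivity S = S0 / (1 - f), where f is a
# feedback factor with symmetric Gaussian uncertainty. The ratio maps that
# symmetric spread into a right-skewed, fat-tailed distribution for S.
# All numbers here are illustrative, not taken from any particular study.
S0 = 1.2                              # no-feedback (Planck-only) sensitivity, K
f = rng.normal(0.65, 0.13, 1_000_000)
f = f[f < 1.0]                        # keep the stable branch; f >= 1 is a runaway
S = S0 / (1.0 - f)

print(f"median = {np.median(S):.2f} K")
print(f"mean   = {np.mean(S):.2f} K")        # mean > median: right skew
print(f"P(S > 6 K) = {np.mean(S > 6):.3f}")  # non-negligible fat tail
```

The symmetric spread in f comes out heavily skewed in S, which is the generic route by which ratio distributions produce the fat right tails seen in many sensitivity studies.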

[Figure: right-skewed climate sensitivity distributions, from Roe and Baker (2007)]

Even something as well known as the Planck’s-law distribution, when graphed as a function of wavelength instead of frequency, skews heavily to the right. This is just another example of a ratio distribution, since the wavelength is inversely proportional to the frequency (λ = c/f) of the photon.
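A quick numerical check of that claim, using only standard physical constants (nothing here comes from the paper): the same Planck spectrum peaks at different wavelengths depending on whether it is expressed per unit wavelength or per unit frequency, because of the Jacobian in the change of variables λ = c/f.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
T = 5800.0                                 # roughly the solar photosphere, K

# Planck's law per unit wavelength
lam = np.linspace(1e-7, 3e-6, 200_000)
B_lam = (2*h*c**2 / lam**5) / np.expm1(h*c / (lam*k*T))
peak_lam = lam[np.argmax(B_lam)]

# Planck's law per unit frequency, with its peak converted back to a wavelength
nu = np.linspace(1e13, 2e15, 200_000)
B_nu = (2*h*nu**3 / c**2) / np.expm1(h*nu / (k*T))
peak_from_nu = c / nu[np.argmax(B_nu)]

print(f"peak per unit wavelength: {peak_lam*1e9:.0f} nm")    # ~500 nm (Wien)
print(f"peak per unit frequency:  {peak_from_nu*1e9:.0f} nm")  # ~880 nm
```

The two peaks disagree by nearly a factor of two for the same physical spectrum; the 1/f change of variables drags weight toward long wavelengths.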

The observation as it relates to the Schmittner paper is that they have somehow propagated their uncertainties such that all the fat tails get truncated. Whether this is realistic, or whether they have prior information on the probabilities, one can’t tell from the paper.

The real concern is whether the truncated probabilities no longer modulate the land-only PDF. With the little information they give, I can infer that the Markov chain algorithm they apply (perhaps an HMM or something similar) mixes the probabilities together according to Bayesian rules. If a probability down at the 1e-16 level mixes multiplicatively with a larger probability, it will clearly truncate the larger one. They do admit that a “mis-specification of the statistical model” might have happened (see line 170, p. 8). It is also possible that the simulation sampling (Monte Carlo presumably, likely not importance sampling) was insufficient to generate enough statistics for the empty tails. These are questions I would ask as part of the peer-review process.

Related to the sensitivity is the fact that Schmittner relies on the Last Glacial Maximum (LGM) for the model analysis. Coming from a statistical physics background, I think of temporal climate changes as a random walk in a shallow energy well, as shown in the figure below. From ice core data that I have looked at, the random walk appears to be constrained by an Ornstein-Uhlenbeck or semi-Markov process, whereby it tends to revert to the mean set by the minimum of the well.

In reality these energy barriers are feedback boundaries, as shown in the figure below. The upper temperature side is governed by the negative feedback provided by the Stefan-Boltzmann law, which has a logarithmic sensitivity to a CO2 forcing function. On the low temperature side, we have a latent barrier resulting from the huge heat of fusion that must be removed to freeze large amounts of sea ice. During this process the temperature remains more-or-less constant, providing an escape hatch should the process reverse. The random walk wanders between these barriers during interglacial periods, with subtle forcings and noise accompanying the rather stable solar insolation.

[Figure: random walk in a shallow potential energy well bounded by feedback barriers]

The potential issue is that Schmittner’s study occupies the maximum-glaciation time frame, which sits in the low-temperature left corner of the energy well. This is where the climate sensitivity gets tested via simulation and where it is compared against the paleoclimate data. They don’t have the data for the higher-temperature regime where we reside today and where the climate sensitivity may differ. During the LGM the random walk was alternately bumping up against the cold-temperature latent barrier while making excursions to warmer temperatures. In other words, it wasn’t occupying the shallow warm part of the potential energy well.

This skewing of the potential well may make a difference in the results, just as skewing in the propagation of uncertainties in climate sensitivity can make a difference.

Links
1. http://www.skepticalscience.com/Schmittner-climate-sensitivity-goood-bad-ugly.html
2. http://julesandjames.blogspot.com/2011/11/more-on-schmittner.html
3. http://wattsupwiththat.com/2011/11/09/climate-sensitivity-lowering-the-ipcc-fat-tail
4. http://motls.blogspot.com/2011/11/science-climate-sensitivity-is-17-26-c.html

JC note: The origin of this post was suggestions by commenters that we discuss the Schmittner et al. paper. Based upon his comments on the thread and also his work on sensitivity, I invited WHT to do a guest post. I would like to thank WHT for preparing this at my request. This guest post does not imply any endorsement by me of either the paper or the content of WHT’s comments. BTW I am too busy this week to dig deeply into this.

270 responses to “WHT on Schmittner et al. on climate sensitivity”

  1. Web, the energy well uses the latent heat of fusion as a latent boundary, and you state that we are in unknown territory with current temperatures, which is true. What I was noticing is that there appears to be a latent-heat-of-evaporation boundary as well, which is the main difference between the ocean-only and land-only responses. Too much evaporation dampens radiant forcing at the surface, i.e. in the tropics and in the upper-level convection situation, which reduces the available energy for radiant forcing, or the water vapor/CO2 feedback relationship previously discussed by Lacis.

  2. This is not a direct comment on Schmittner’s paper. However, for those interested in the relationship of ancient climates to CO2 concentration in general, and more specifically in the implications of the comparison of the LGM to recent times regarding the earth’s climate sensitivity, I have written an extensive review of the literature that is available at:
    http://www.spaceclimate.net/Ancient.climates.and.CO2.pdf

  3. Several climate scientists hang out at the Azimuth blog, a very fine deep mathematical science blog/wiki site run by Prof. John Carlos Baez. I posted some comments to Nathan Urban from Princeton U, who is one of the co-authors of the Schmittner paper and is hanging out at the Azimuth.

    This is what I asked, followed by Professor Urban’s response:
    “Why are the tails so thin on all those PDFs ?
    Except for the Land PDF they drop down to the 1e-17 level very rapidly. Doesn’t this modify the Land contributions when you work out the Markov chain reconstruction using Bayes rule?

    Aren’t we sitting in the ice-age corner of the climate energy potential well using data from the Last Glacial Maximum? How does this translate to climate sensitivity in a warmer climate when we are almost on the other side of the well currently? It seems like the sensitivity is bumping up against the latent heat of fusion for making lots of sea-ice when we are working in an ice-age climate.”

    Nathan Urban responds:

    The tails are thin because the likelihood of the data in those areas of parameter space is very small, under the likelihood function we assumed. The land and ocean contributions modify each other when combining the two data sources, but the ocean data are more influential on the final result.

    Your second set of questions have to do with state-dependence of climate sensitivity. We should expect some state dependence, but it’s not clear how much (this may be rather model-dependent).

    Our model does not completely ignore state dependence: it shouldn’t predict exactly the same warming for a CO2 doubling starting from glacial or interglacial states. For example, we have an interactive sea ice model, and its sea ice (and associated temperature changes) will do different things in response to increased CO2 depending on what state you start from.

    However, to conclude something about modern CO2 changes from paleo CO2 changes, we do have to assume that there is some physics in common between glacial and interglacial states. Our methodology assumes that glacial and interglacial states obey the same relationship between temperature and outgoing longwave radiation.

    Thanks to Professor Urban for clarification. It always takes time to fully digest many of these papers, and I am sure others will build on the research.

    • Some more comments from the paper’s co-author Nathan Urban:

      A few remarks on your analysis:

      Fat tails of climate sensitivity are most common in analyses using short-term transient data. They may diminish in paleo studies when transient effects are less important. However, they can reappear in the guise of forcing uncertainties, especially if you use a linearized model when forcing uncertainty includes zero or negative forcings. You’ll get a ratio distribution that hits a divide-by-zero.

      The Markov chain algorithm is an adaptive Metropolis Markov chain Monte Carlo (MCMC) algorithm. It is not a HMM.

      The default prior for the paper is bounded uniform on the range 0.26 to 8.37 K (the range of the ensemble members), as we state in the paper. We also considered a prior that is bounded uniform in feedback factor instead of ECS (see the SOM).

      The thin tails are almost certainly not a result of Monte Carlo sampling. I also tried the analysis where I evaluated the posterior directly on a 1D grid, without sampling (fixing the error variances so I only had to consider ECS uncertainty). I got a similar result as the MCMC analysis.

      One of the interesting insights I get from these models is that they are partly for establishing uncertainty and partly for finding the best set of parameters to fit the observations. These two objectives are often at odds, and you have to make a choice in deciding which is most important, either narrowing down the best estimate or drawing a boundary around the range of possibilities.

      So the default uniform prior of 0.26 to 8.37 K allows one to search the space for best fit to the data, and this will place hard constraints on how far the tails can extend, unless uncertainty is also placed on other context parameters. You can almost see the 0.26 to 8.37 range on the Land sensitivity semi-log plot.
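      A minimal sketch of how a bounded uniform prior clamps the tails (my own toy Gaussian likelihood centered near the reported median; only the prior bounds come from the paper, and the paper’s actual algorithm is an adaptive MCMC, not this plain Metropolis):

```python
import numpy as np

rng = np.random.default_rng(0)
LO, HI = 0.26, 8.37          # bounded uniform prior on ECS from the paper

def log_posterior(s):
    # Hypothetical Gaussian likelihood centered near the reported median;
    # outside the prior bounds the posterior mass is exactly zero.
    if not (LO <= s <= HI):
        return -np.inf
    return -0.5 * ((s - 2.3) / 0.4) ** 2

# Plain random-walk Metropolis sampler
s, samples = 2.3, []
for _ in range(50_000):
    prop = s + rng.normal(0.0, 0.5)
    if np.log(rng.random()) < log_posterior(prop) - log_posterior(s):
        s = prop
    samples.append(s)
samples = np.array(samples)

# No sample can ever land outside [0.26, 8.37]: the tails are hard-truncated
# by the prior, regardless of what the likelihood would prefer.
print(f"min = {samples.min():.2f}, max = {samples.max():.2f}")
```

      Whatever the data say, zero prior mass outside the bounds means zero posterior mass there, which is one way the tails can get cut off.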

  4. WHT – GREAT graphs.

    Question: How does one add other potential sources of negative feedback to the potential well? Only the S-B feedback is shown.

    Finally, no one responded to this when I put it up on a previous thread, relating to the “original” WHT (yes, sorry, it’s on MySpace):

    http://www.myspace.com/blogurt/music/songs/wht-live-on-the-force-15374169

    • BillC, Added thermal energy as a forcing function will push the steady state point to the right (warming) while reduction in thermal energy will push it to the left (cooling). I would think other potential sources of negative feedback can be superposed on the first-order energy barriers shown.

      There is another implicit barrier way to the left, which is the fact that the earth has to maintain at least some temperature thanks to solar insolation. That would occur if we punched through the latent heat of fusion barrier and the climate turned into “Snowball Earth”. In other words, Snowball Earth would have a temperature according to the Stefan-Boltzmann law with the appropriate albedo. The Schmittner paper indicates that is very unlikely: High sensitivity models (ECS2xC > 6.3 K) show a runaway effect resulting in a completely ice-covered planet. Once snow and ice cover reach a critical latitude, the positive ice-albedo feedback is larger than the negative feedback due to reduced longwave radiation (Planck feedback), triggering an irreversible transition (Fig. S4) (15). During the LGM Earth was covered by more ice and snow than it is today, but continental ice sheets did not extend equatorward of ~40°N/S, and the tropics and subtropics were ice free except at high altitudes. Our model thus suggests that large climate sensitivities (ECS2xC > 6 K) cannot be reconciled with paleoclimatic and geologic evidence, and hence should be assigned near-zero probability.

      • Sorry, that last bit after the colon should be quoted, as it is an excerpt from the paper.

      • Kinda figured.

      • WHT – “I would think other potential sources of negative feedback can be superposed on the first-order energy barriers shown.”

        I agree.

      • Relating to this and to Dallas’s heat of evaporation comment above:

        1) Dallas – how is what you are referring to different from the lapse rate feedback?

        2) I wonder if the lapse rate feedback is a first or second order feedback. Considering the mechanism, when we add deltaGHG to the atmosphere, it absorbs deltaIR, “back-radiation” occurs, and additional radiation and convective energy reach the surface, providing additional energy to a) warm the surface and b) simultaneously evaporate surface water. (?)

      • BillC,

        It is more like a discontinuity in the lapse rate feedback. The ratio of surface to atmospheric absorption of solar is one factor and the altitude of the average atmospheric absorption another. (Trust me, in a pseudo-chaotic system there is much more to consider.)

        More water vapor increases the amount of absorbed solar and the amount of reflected solar. It also increases the average altitude of the absorbed solar and OLR/returned OLR.

        So the upper level warms, which is not as effective for increasing the lapse rate as surface warming would be. It is a tug of war between the surface and atmospheric lapse-rate impacts.

        Another way to look at it is from available energy. With 390 Wm-2 available at the surface and 79 Wm-2 shifted upwards by latent cooling, the effective radiant flux for CO2 return is 311 Wm-2. More latent cooling reduces the effective temperature for radiant return.

        So it is an energy well, but neglecting other forms of energy transfer for a simple S-B model is a faux pas, IMHO.

      • Note: Fat fingers and a small screen makes for lots of typos :(

      • Web,

        Perhaps you could do an attachment on the Planck feedback with examples of how it works in the tropics and each pole. I am sure there must be some subtle differences in each region.

  5. This is all mathematically quite interesting. But I consider “equilibrium climate sensitivity to atmospheric carbon dioxide doubling” to be a highly misleading scientific fiction, the source of the scare in fact, so I do not find the discussion useful. It is what I call AGW science. I am pretty sure that climate is a far from equilibrium system.

    It does sound as though all the probabilities may be subjective (or Bayesian), especially the fat tails. If so, then that I can agree with.

    • a post on nonsteady equilibrium coming soon (next few weeks)

      • is it fair to say that “random walk in a potential well” is an acceptable metaphor for nonsteady equilibrium, at least in some situations?

        A master equation formulation is typically used to describe perturbations to the steady state. The term dP/dt is there to describe the time evolution of the states of the system, and this goes to zero at steady state or quasi-equilibrium. The random walk metaphor comes out of the master equation pretty cleanly if one turns it into a set of difference equations. It is straightforward to do a classical random walk or an Ornstein-Uhlenbeck constrained walk from the difference equation and a random number generator.

        So I would say the answer to your question is yes. I use the potential well metaphor because it fits in well with an Ornstein-Uhlenbeck visualization, and also because I come from the semiconductor physics world where that is the fundamental way to think about things.
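        For concreteness, here is what that difference-equation walk looks like as a sketch (the parameters are arbitrary, chosen only to show the mean reversion):

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler discretization of the Ornstein-Uhlenbeck SDE:
#   dT = -theta * (T - mu) * dt + sigma * dW
# theta sets the strength of the mean reversion (the "walls" of the well)
# and mu is the well minimum. The values are illustrative.
theta, mu, sigma, dt = 0.05, 0.0, 1.0, 1.0
n = 200_000

eps = sigma * np.sqrt(dt) * rng.normal(size=n)
T = np.empty(n)
T[0] = mu
for i in range(1, n):
    T[i] = T[i-1] - theta * (T[i-1] - mu) * dt + eps[i]

# Unlike a free random walk (variance growing linearly in time), the
# constrained walk settles into a stationary band around the well minimum.
print(f"sample mean     = {T.mean():.2f}")
print(f"sample variance = {T.var():.1f}  (theory ~ sigma^2/(2*theta) = {sigma**2/(2*theta):.1f})")
```

        Setting theta to zero recovers the classical unconstrained random walk, so the same difference equation covers both cases.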

      • WHT

        I’m squinting at the graphs and the data, and wondering if there’s a hidden bimodal curve (or an ensemble of bimodal curves superimposed), and whether the middle of the range really ought to be considered overstated?

        During the LGM the random walk was alternately bumping up against the cold temperature latent barrier while making excursions to warmer temperatures. In other words, it wasn’t occupying the shallow warm part of the potential energy well.

        Doesn’t this suggest a climate hates its own middle?

      • Doesn’t this suggest a climate hates its own middle?

        There could be all sorts of local minima within the larger span in which the climate temporarily hangs out; we may be in one of these now, as defined by the Holocene. If there were an exclusion area, an analogist might suggest that the equivalent of a bandgap exists, or maybe just a metastable energy hump leading to the bimodality you describe.

        There are all sorts of interesting probability distributions that one can generate. I infer your suggestion is to look at the PDF of temperatures in a time series. That sounds good. What I also want to do next is look at PDF’s of rates of temperature changes, and split those up into warming and cooling rates, and maybe something about the asymmetry can be deduced from this.

        Perhaps this has all been done before, but the paleoclimate data is so easy to get at, it might be worthwhile to look at this in more detail.

      • Web,

        The rates of change of temperatures should be interesting. Also the rates of change of CO2 with temperature related to both poles, since there is some question about how accurate the CO2 concentrations in ice cores may be.

      • “Nonsteady equilibrium”? I’ve heard of steady-state non-equilibrium, but that’s a new one. Not having any luck on google with that. It just comes up with hits for steady-state non-equilibrium.

      • The terms steady state, quasi-equilibrium and quiescence are used to describe a behavior that is stationary in time with respect to the parameters one is measuring. That’s the context for applying some perturbation or non-steady-state forcing function and to evaluate how the parameters respond. I assume that is the context of the upcoming post.

      • It’s also important to distinguish radiative equilibrium from thermodynamic equilibrium. Radiative equilibrium, which is one form of steady state dynamics, exists when incoming energy (solar) is equal to outgoing radiation (emitted longwave radiation plus reflected solar radiation). On a global average basis, our climate tends toward radiative equilibrium and is often not too far from it on a W/m^2 basis, although this may not pertain on a regional basis. Unlike thermodynamic equilibrium, radiative equilibrium does not require that the energy flowing from one object to another (e.g. sun to Earth) be equal to the energy flowing between the two in the opposite direction.

        Have we ever been exactly at radiative equilibrium in the global average sense? The answer is obvious – at times of past cooling, outgoing radiation exceeded incoming, while during warming, the reverse was true. At some point, therefore, the line was crossed, even if only very transiently.

      • Perturbation is something else completely. I don’t see the connection to equilibrium.

        Either that was a typo, or there’s a serious disconnect in terminology between climate science and the rest of science and engineering. I’ve already witnessed plenty of examples of the latter.

      • Fred, that’s a good example of the issue I’m having with climate nomenclature. When I studied thermo, it was made completely clear that equilibrium was a thermodynamic term, period. The “radiative equilibrium” that you speak of isn’t equilibrium, but steady-state, which is a transport/conservation phenomenon. They’re not the same thing. But I see climate scientists mixing up the distinct concepts of equilibrium and steady-state by using that kind of terminology. I think this is a bigger issue than most insiders realize, because what they think is clear isn’t.

        When I was in school, this nomenclature was clear and non-negotiable. Now, like with the loose use of English, things aren’t as clear.

        One of the books that I like is “Modern Thermodynamics” by Dilip Kondepudi and Ilya Prigogine. It contains an introduction to non-equilibrium thermodynamics. It also contains an introduction to non-linear dissipative systems, with examples of dynamic structures (spiral waves, traveling waves, etc.) even in fairly simple systems with steady input. They end with a shallow presentation of a climate example: fluctuations in global ice volume over the last million years, and the comment (p.466): “The fact that our ecosystem is unstable makes it difficult to separate the ‘anthropic signal’ from the spontaneous evolution of the system.”

        It gets even harder if we have a stochastic dynamical system.

        I thought that might interest you.

    • The Stefan-Boltzmann equation makes the global energy budget an ‘equilibrium’ system (I think technically it would be described as a non-equilibrium steady state system, but pedantry aside…). If global surface temperatures decrease due to some non-deterministic ‘random walk’, less longwave radiation will be emitted by the planet, which will push it towards warming, and vice versa. To be clear, you could theoretically get massive changes across regional climates due to internal circulation shifts whilst maintaining equilibrium if they didn’t affect the global energy flux – i.e. the regional changes all balance each other out internally.

      I’m not sure how ‘equilibrium climate sensitivity’ is the source of the scare. It basically says that the warming should stop at some point – shouldn’t that reduce any fright?
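      The strength of that Stefan-Boltzmann restoring push is easy to put a rough number on by linearizing OLR = σT⁴ (a standard back-of-envelope calculation, not anything from the paper):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_feedback(T):
    """d(OLR)/dT for a blackbody: the strength of the restoring (negative) feedback."""
    return 4.0 * SIGMA * T**3

# At the effective emission temperature (~255 K), each 1 K of warming adds
# roughly 3.8 W/m^2 of outgoing longwave, pushing back toward balance.
print(f"Planck feedback at 255 K: {planck_feedback(255.0):.2f} W/m^2/K")
```

      That few-W/m²-per-K restoring term is what keeps the random walk from running away on the warm side.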

    • You see, try as I might, I just have no confidence in the idea of a climate sensitivity. Go to the link below, select ‘Global Temperatures’ and scroll down ‘the overview’ at the beginning of the article to reach Diagrams 2 and 3. These really put into historical perspective the changes in global temperature over the last 40 years and make the notion of a ‘x 2’ climate sensitivity seem rather irrelevant in my view.
      look here as described to see the graphs I am referring to

    • David,
      “equilibrium climate sensitivity to atmospheric carbon dioxide doubling”

      All I can think about is the department of redundancy department :) Defining “Climate Sensitivity” as the surface temperature response to a doubling of CO2 always made me chuckle. It is like the whole problem was solved before the analysis started.

    • David –

      I am pretty sure that climate is a far from equilibrium system.

      As near as I can tell, then you disagree with the type of comment made by Herman below:

      http://judithcurry.com/2011/11/29/wht-on-schmittner-et-al-on-climate-sensitivity/#comment-144334

      Now I have seen similar comments made very frequently on “skeptical” blogs (for example, if I’m not mistaken I’ve seen quite a few posts up at WUWT that argue that the climate is an equilibrium system).

      Are you saying that you think that all of those “skeptics” are wrong, or is this a matter of my not understanding the technical aspects of the terminology being used?

      • “Are you saying that you think that all of those “skeptics” are wrong, or is this a matter of my not understanding the technical aspects of the terminology being used?”

        It is common in the ‘Climate Science’ field to refer to a steady state as an equilibrium. As has been pointed out MANY times here and elsewhere, a steady state is not an equilibrium. The misuse of the term ‘equilibrium’, and also the application of equilibrium thermodynamics to non-equilibrium systems is both common and wrong.

      • Joshua, my conjecture is that they are wrong. I call it the chaotic climate hypothesis. The global temperature oscillations may be due to constant solar input, plus nonlinear feedbacks. A lot of people do not understand nonlinear dynamics well enough to take this hypothesis seriously. Others do but prefer to consider other possibilities.

      • David said, “A lot of people do not understand nonlinear dynamics well enough to take this hypothesis seriously.”

        It sure makes the calculations easier when you don’t consider the nonlinear parts :) As long as you estimate “climate sensitivity” greater than 3C most things are negligible. Around 1.5C just about everything has some impact under some conditions. Before long it is hard to figure out what is going on.

      • “It sure makes the calculations easier when you don’t consider the nonlinear parts’

        Dear Captain, safer to skipper the Nostromo than to fit first-order processes to zero-order ones.

  6. WHT,
    Thank you for putting this together. It is a lot to digest and I appreciate your taking the time to offer it so concisely.

  7. WHT has given a critique of the Schmittner study on climate sensitivity. I will not go into WHT’s analysis, as I am no expert on statistics, as WHT apparently is.

    The Schmittner study moves the “climate sensitivity goalpost” slightly (to a smaller number) and constrains the upper limit to a lower value, but I would have two main caveats.

    First of all, it is based on model analyses of paleo-climate reconstructions (from the Last Glacial Maximum). These kinds of data are dicey, at best, for several reasons, which are known and I will not repeat here. It appears to me that WHT has covered this problem in more detail.

    The second caveat is summarized in this concluding remark from the study:

    Our limited model ensemble does not scan the full parameter range, neglecting, for example, possible variations in shortwave radiation due to clouds. Non-linear cloud feedbacks in different complex models make the relation between LGM and 2×CO2 derived climate sensitivity more ambiguous than apparent in our simplified model ensemble. More work, in which these and other uncertainties are considered, will be required for a more complete assessment.

    Based on satellite observations on the impact of cloud variations with warming (Spencer and Braswell 2007), it appears very likely that the incorporation of these impacts would result in an even lower 2xCO2 climate sensitivity range than that determined in the study.

    Despite these shortcomings (and those cited by WHT), the Schmittner et al. study does give some new insight into the ongoing debate on climate sensitivity IMO.

    Max

    • one estimates the ECR, the other estimates the TCR. two different animals.

      • For those keeping score
        TCR=transient climate response
        ECR=equilibrium climate response

        Further down in the thread, Vaughan Pratt mentions his concerns about the lack of discussion on TCR with this paper.

      • Web it might be interesting and informative to do a little toy example of the difference between a transient response and a “steady state” or equilibrium response. I find people making this mistake over and over again, and it gets rather annoying. I’ll look at what Vaughan has to say.

      • That would be solving a first-order heat equation with a heat sink providing the reservoir. This toy example gives a damped exponential transient response.
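        A sketch of that toy (all parameter values are illustrative round numbers, not tuned to any study):

```python
import numpy as np

# Zero-dimensional energy balance: C dT/dt = F - lam * T
# A step forcing F at t=0 gives the damped exponential
#   T(t) = (F/lam) * (1 - exp(-t / tau)),  with tau = C / lam
C   = 8.0 * 3.15e7   # heat capacity, J m^-2 K^-1 (~60 m ocean mixed layer)
F   = 3.7            # forcing for doubled CO2, W m^-2
lam = 1.6            # feedback parameter, W m^-2 K^-1

ECS = F / lam        # equilibrium response, reached only as t -> infinity
tau = C / lam        # relaxation time, s (here about 5 years)

t = 5 * 3.15e7       # evaluate the transient after 5 years
T_transient = ECS * (1.0 - np.exp(-t / tau))

print(f"equilibrium response: {ECS:.2f} K")
print(f"response after 5 yr:  {T_transient:.2f} K")  # well short of equilibrium
```

        The transient response always sits below the equilibrium value, which is the basic reason TCR and ECS are two different animals.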

        I wanted to also update a comment based on some suggestions that were made pertaining to the climate sensitivity.

        I plotted the rates of change for the 30,000 to 80,000 Greenland temperature data in the following set of charts.
        http://3.bp.blogspot.com/-1oQduRUfN8I/TtcR5WZMKBI/AAAAAAAAAo8/jO-4ubmG0aA/s1600/greenland_ice_core_rates.gif
        The histogram of cooling and warming rates is tabulated in various ways: (a) across the extremes, (b) on the cool side below average, and (c) on the warm side above average. Note the fast regime for warming on the warm side. Where the histogram flattens out, a range of these rates is equally likely, indicating a fast transition between warm and extremely warm. I think this is a “frictionless” regime, perhaps abetted by a positive feedback mechanism.

        I think this demonstrates that the climate well is skewed and likely dynamically affected according to its operating region. Cooling seems very predictable, but the warming side shows that interesting hiccup in the temperature rise. I am pretty sure it occurs in the Antarctic ice core data as well.

        My question is whether anyone else has analyzed this data in any other interesting ways. I can also see plotting acceleration of the temperature changes.
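        For anyone who wants to reproduce that kind of rate analysis, here is the skeleton on a synthetic series (a mean-reverting walk standing in for the ice-core record; the real analysis would load the actual data instead):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for an ice-core temperature series; a mean-reverting
# walk is qualitatively similar. Swap in real data (e.g. GISP2) here.
n, theta, sigma = 5000, 0.02, 0.5
eps = sigma * rng.normal(size=n)
T = np.zeros(n)
for i in range(1, n):
    T[i] = T[i-1] - theta * T[i-1] + eps[i]

rates = np.diff(T)                 # temperature change per time step
warming = rates[rates > 0]
cooling = -rates[rates < 0]        # magnitudes of the cooling steps

# Separate histograms for warming vs cooling rates; an asymmetry between
# the two would hint at different dynamics on each side of the well.
w_hist, edges = np.histogram(warming, bins=20, density=True)
c_hist, _     = np.histogram(cooling, bins=edges, density=True)

print(f"mean warming rate: {warming.mean():.3f}")
print(f"mean cooling rate: {cooling.mean():.3f}")
```

        On this symmetric synthetic walk the two histograms should nearly coincide, so any warming/cooling asymmetry seen in the real data would be a genuine signature rather than an artifact of the method.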

  8. Clouds make hot temperatures cooler in the real world. Clouds are not being discussed.

    No clouds, no cookie.

    • Norm Kalmanovitch

      Actually, the combination of clouds and water vapour can account for the entire insulating effect of the atmosphere. This brings up the question, for CO2 climate sensitivity, of how much of the effect from CO2 is just the effect already achieved by clouds and water vapour that has been taken over by CO2, and how much of the effect, if any, is additional.
      It is clear from three decades of OLR measurements that there has been no detectable increase in the Earth’s atmospheric insulation in spite of the 57.1% increase in CO2 emissions, so we can be reasonably sure that there is no climate sensitivity to CO2 emissions. With the increase in CO2 concentration from 336.78 ppmv in 1979 to 389.78 ppmv in 2010 (MLO), the 53 ppmv increase does not appear to show any discernible effect on OLR, indicating that the climate sensitivity to CO2 is rather close to zero, with most of the assumed effect simply being taken over by CO2 from the existing effect from clouds and water vapour.
      Apparently WHT knows a lot about statistics but is a little weak on the understanding of the actual physical process expressed by these statistics.

      • Norm
        Any links to OLR?
        By “insulation” I gather you mean reducing outward long wave radiation (aka heat).

      • Norm Kalmanovitch

        Heat is defined in terms of kinetic energy, which implies mass. Electromagnetic radiation does not have mass per se (only an e=mc^2 equivalent mass), so it is energy but not heat.
        http://www.climate4you.com has graphs of OLR in the temperature section showing a response to the annual cyclic surface temperature variation of approx 3.9°C but no response to increased CO2.

  9. Doesn’t the whole analysis depend on the assumption that ice cores portray temperature driving CO2? Given that the temperature proxy, the CO2, and methane all track together in, say, the Vostok core (http://en.wikipedia.org/wiki/File:Vostok_420ky_4curves_insolation.jpg ), isn’t it much more plausible that temperature is driving the gases?

    • Yes, something other than a CO2 change can kick off the initial warming, but then the oceans start outgassing more CO2 and H2O. Since CO2 is the non-condensing gas of the two, this continues as a limited positive feedback until it reaches the S-B negative-feedback limit for CO2 sensitivity. It can also reverse course at any time: if CO2 can be liberated by outgassing from the oceans, it can also be re-absorbed upon cooling.

      The crucial distinction that we have right now (and one that hasn’t occurred historically) is that anthropogenic CO2 is pure excess concentration that will only sequester back in the environment at a geologically slow pace.

      In other words, the thought is that this process has started in motion and can’t easily reverse itself. By analogy, the climate has always had the sensitivity to accelerate in either direction but now we have lost the braking power to shift into reverse. That’s the way I understand it, and YMMV.
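      As a toy sketch of that limited positive feedback (illustrative numbers of my own, not from any paper):

```python
# Toy illustration of a limited positive feedback: an initial warming dT0
# is amplified by a feedback gain f < 1, converging geometrically to
# dT0 / (1 - f) instead of running away. Numbers are illustrative only.
dT0 = 1.0   # initial (e.g. orbitally triggered) warming, K
f = 0.4     # fraction of each warming increment returned as further warming

dT = 0.0
for _ in range(50):
    dT = dT0 + f * dT   # each round of outgassing adds f times the previous total

print(f"{dT:.3f}")      # converges to dT0 / (1 - f) = 1.667
```

With f < 1 the series converges (the "limited" feedback); with f >= 1 it would diverge, which is the runaway case the S-B limit prevents.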

      • Sounds like you are still maintaining that the relation between CO2 and temperature proxy in the ice cores involves a positive feedback of CO2 on temperature, which can be teased out to derive climate sensitivity. But the ice core data don’t have to be interpreted that way. It seems to me to be projecting uncertain assumptions in the models onto the ice core data.

      • Bruce – As far as I can tell, the papers under discussion are unrelated to the feedback of CO2 on temperature. They use a fixed value for LGM CO2 (e.g., 190 ppm) as a boundary condition, and ask what would happen to temperature if preindustrial climate conditions (with a CO2 concentration of about 280 ppm) were exposed to that lower CO2 level instead. They don’t ask how CO2 would change as a climate response over the course of their simulations, and so their climate sensitivity estimates don’t depend on a CO2 response to temperature.

      • WHT’s explanation of his understanding referred to feedbacks, but if it is a red herring, I apologize for muddying the waters. My essential point is just what WHT states in a comment below: “you can’t read sensitivity directly from the data, rather only indirectly through a model.” It seems to me there is an assumption being made that is not explicitly acknowledged.

  10. WHT – Thanks for your thoughtful analysis of the statistical underpinnings for the probability distributions. I’ll try to comment more generally on what has been attempted in these LGM simulations as a complement to your description, hoping that it may be useful to readers seeking to understand the process…

    Two recent papers, in fact, are items of great interest for this topic. The first is the above-referenced 2011 Science paper by Schmittner et al (hereafter S11), which has received recent media attention, and the second is the 2010 Climate Dynamics paper by Holden et al (hereafter H10). I’ve read both for my own edification, and I offer my interpretation below, but anyone who wants the views of climate science professionals who design and implement relevant climate models should visit the posts at James Annan’s blog and at RealClimate.

    Equilibrium climate sensitivity (ECS – the temperature change for a doubling of atmospheric CO2) has been estimated by a multiplicity of approaches. Methods combining forcing and feedback estimates constrained by recent observational data have yielded a range of 2.1 to 4.4 C, as cited in AR4 WG1 Chapter 8. A different approach involves the application of climate models to paleoclimatologic data; WG1 Chapter 9 describes many examples and cites a likely range of about 1.5 to 4.5 C with a median value of about 3 C. The two papers are updated examples of the latter approach. Both use models of intermediate complexity and focus on the Last Glacial Maximum (LGM) for the data underlying their conclusions. Each arrives at its own range for a likely ECS value (66% confidence interval) that is within the 1.5 to 4.5 interval, but the two papers differ on where their range fits in that interval. The median values also differ. As detailed below, S11 derives a value of 2.3 C, while H10 arrives at a peak probability value of 3.6 C.

    The Basic Method. H10 and S11 start with estimates of forcings that operated during the LGM, based on available data. Of these, the most important two are greenhouse gas concentrations (CO2, methane, and others) and albedo effects due to the greater extent of ice during the LGM. Other, lesser forcings are treated differently between the papers.

    These forcings are fixed as boundary conditions for the model simulations. The models are assigned a variety of input parameters that dictate how the climate will respond to the forcings (including the major feedback responses to forcing). These parameter combinations thus yield an ECS estimate (temperature response to forcing) whose accuracy depends on how well each combination can cause the model to simulate actual LGM climate. Specifically, model results are compared with actual LGM temperature reconstructions based on proxy data to determine how well different parameter choices simulate the actual difference between “modern” (preindustrial) and LGM temperatures, including regional as well as global differences. A Bayesian formula is used to calculate the range over which the best-performing parameter combinations can vary while leaving the ECS they represent likely (66% confidence limits) or very likely (90% confidence limits) to include the true ECS responsible for the actual temperature data. These are then cited as ECS ranges.
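    That Bayesian weighting can be caricatured in a few lines of code. Everything below – the proxy values, the linear "forward model", the prior range – is invented for illustration and is not from S11 or H10:

```python
import numpy as np

# Hypothetical proxy reconstruction: observed LGM cooling (K) at four sites,
# with measurement noise sigma. All values are illustrative only.
obs = np.array([-2.0, -2.5, -1.8, -2.4])
sigma = 0.5

def simulate(ecs):
    """Toy forward model: site cooling scales linearly with ECS -- a gross
    stand-in for running a climate model with LGM boundary conditions."""
    return -ecs * np.array([0.9, 1.1, 0.8, 1.0])

# Uniform prior over a plausible ECS range, evaluated on a grid.
ecs_grid = np.linspace(0.5, 6.0, 1101)
log_like = np.array([-0.5 * np.sum((obs - simulate(e)) ** 2) / sigma ** 2
                     for e in ecs_grid])
post = np.exp(log_like - log_like.max())
de = ecs_grid[1] - ecs_grid[0]
post /= post.sum() * de          # normalize the posterior PDF

# Median and 66% credible interval from the posterior CDF.
cdf = np.cumsum(post) * de
lo, med, hi = (ecs_grid[np.searchsorted(cdf, q)] for q in (0.17, 0.50, 0.83))
print(f"median ECS ~ {med:.1f} K, 66% range ~ [{lo:.1f}, {hi:.1f}] K")
```

The real papers do the same thing over many parameters at once, with an actual climate model in place of `simulate`; the point here is only the shape of the procedure: prior × likelihood-against-proxies → posterior PDF → credible intervals.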

    S11 Results. S11 simulations were run for 2000 years to an approximate equilibrium. The ECS estimates resulted in a 66% ECS probability range of 1.7 to 2.6 K, a 90% range of 1.4 to 2.8 K, and almost no probability below 1.0 K or above 3.2 K. The median value was 2.3 K. The authors were careful to describe the uncertainties surrounding these estimates, including possible errors in the estimates of forcing. Their models were unable to completely reconcile ECS values derived from ocean data (low ECS) with those from land (higher ECS). Non-linearity in feedback interactions was a further source of uncertainty – “Non-linear cloud feedbacks in different complex models make the relation between LGM and 2×CO2 derived climate sensitivity more ambiguous than apparent in our simplified model ensemble”. (Despite a comment above mine in the thread, nothing in the paper implies that it is “very likely” S11 overestimated climate sensitivity.)

    H10 Results. H10 simulations were run for 5000 years. The ECS estimated 66% probability range was 2.6 to 4.4 K, and 90% probability was between 2.0 and 5.0 K, with a peak probability value of 3.6 K. H10 was more complex and detailed than S11 in its parametrizations, and readers should go through the paper for a description of the precalibration process designed to select realistic parameters that can then be tested in a set of climate simulations. Because of greater emphasis on the combination of parameters it used for best estimates, and the consequent increase in computational demand, H10 made some sacrifices in terms of other influences that it chose not to estimate directly – an example is the assignment of an arbitrary error term to compensate for an estimate of dust forcing. H10, like S11, was careful to acknowledge the limitations of its approach to an accurate ECS estimate – “Our approach does not represent an attempt to reduce uncertainty in ∆T2x – an unrealistic objective in view of the greatly simplified atmospheric model and outgoing longwave radiation (OLR) feedback parameterisation – but is rather an attempt to investigate the contribution of different components of the Earth system to uncertainty in ∆T2x.”

    H10/S11 Comparisons. The parametrization differences between the two papers are numerous. Nevertheless, the main source of the different values for median or most probable ECS (2.3 K vs 3.6 K) lies in the difference in estimated LGM temperatures. S11 estimated that on a global average, the LGM was only about 2.2 K cooler than the modern baseline temperature, while H10 estimated the LGM to be about 6.2 K cooler. This disparity reflects primarily different sources of proxy data for the temperature reconstructions. Figure 8 in H10 shows that unsurprisingly, ECS estimates were highly sensitive to estimates of LGM sea surface temperature anomalies compared with the modern record. For almost all SST estimates in that figure, ECS lies at 3 C or above, and only declines below 3 C for the lowest SST estimates.

    Is a 2.2 C LGM cooling consistent with sea level differences between then and now? As typically estimated, and as acknowledged in S11, LGM sea levels were approximately 120 meters lower than current levels. Neither S11 nor H10 asks whether so small a temperature rise from then to now could have melted enough ice to raise sea level 120 meters. At first glance, 2.2 C (S11), or even 6.2 C (H10), seems rather puny for this purpose, but the phenomenon isn’t a simple one. Much of the LGM ice was at latitudes that are currently warm enough to be mainly ice-free, and it’s likely that this ice was more susceptible to melting from even a small temperature change than is ice currently restricted to high latitudes. Even so, there seem to be problems with attributing 120 meters to a 2.2 C average temperature difference. For perspective, current global ice is roughly enough to raise sea levels another 120 meters – i.e., an amount comparable to the rise from the LGM to modern times. No one expects that an additional 2.2 C warming will raise sea levels by 120 meters, and so it would be necessary to postulate a fairly steep transition from melt-susceptible ice to melt-resistant ice. To do justice to this concept would require sophisticated modeling, but we have data from the previous (Eemian) interglacial that illustrate the difficulty. Eemian temperatures probably averaged about 2 C higher than today, with a sea level that was perhaps about 8 meters higher (Kopp et al 2009). This suggests that climates roughly similar to ours would respond to 2.2 C warming with only a small fraction of 120 meters of sea level rise, and so a greater warming would be needed to achieve the full 120, even allowing for increasing resistance to melting as the remaining ice became increasingly confined to high latitudes. Of course, it’s conceivable that we are poised at a breakpoint, whereby 2.2 C warming from now would raise sea level by only 8 meters or less, whereas the 2.2 C warming up to the current level had an enormously greater effect. Also, this evidence is quite indirect. Still, it suggests that a very small difference between LGM and current temperatures is harder to reconcile with the observed sea level rise than a larger difference would be.

    Summary. Two recent papers attempting to estimate ECS from LGM data have arrived at different estimates. One cites a median value of 2.3 C while the other cites a peak probability of 3.6 C. These estimates bracket the oft-cited 3 C median value. The probability range for each lies within the 1.5 to 4.5 C range typically quoted for paleoclimatologic ECS estimates. In each case, values below 1.5 C are found to be improbable, and very high values are also very improbable, although the cutoff is lower for one paper (3.2 C) than the other (5.0 C for 90% probability). Importantly, each paper is scrupulous in describing the limits of its method for estimating ECS, emphasizing the sources of uncertainty that tend to be neglected in media reporting. Finally, these estimates of ECS for long term forcings, such as from persistently elevated CO2, are well within the range of previous estimates. I’m unaware of any recent data outside of that range. Although this has been mentioned before, sensitivity values for short term responses to ENSO, such as addressed by Lindzen/Choi, Spencer/Braswell, or Dessler, are uninformative for long term CO2 responses, because unlike the latter, they represent phenomena arising regionally rather than globally, originating in ocean rather than atmospheric warming, and alternating between positive and negative phases on a frequent basis rather than persisting over decades. The dynamics are too different for quantitative extrapolation, and shouldn’t be confused with sensitivity to CO2.

    I recommend that anyone interested in an accurate understanding of what these papers tell us, and what they can’t, should read the papers rather than relying on second hand descriptions in the media or blogosphere.

    • Fred,

      On the amount of cooling, there seems to be quite a bit of confusion about different figures around the blogs.

      SkS went with 2.6ºC, but Andreas Schmittner pointed out that is the land SAT + ocean SST change. Land+Ocean SAT should be 3ºC. At RC, they’ve stated a 3.5ºC cooling using an area-weighted average.

      The genesis of the 2.2ºC figure seems to be very confusing. Schmittner stated at SkS that this was the correct amount of cooling for their SAT + SST estimate, rather than SkS’s stated 2.6, then changed his mind.

      Doesn’t change that these values are at the low end of all previous estimates though.

      • Yes, there is some confusion about this figure.

      • Gavin’s cleared it up at RC. -2.2ºC is the average amount of cooling only in the areas where they had data. Gavin thinks the global average which was used to constrain sensitivity was -3.5ºC though the authors themselves seem less sure.

      • Urban stated in the discussion that the temperature change is 3.3; the 2.2 is a typo (!) from an earlier draft.

      • Urban’s reference to a typo concerned the ‘3.5’ figure. I believe it’s established now (?) that -2.2ºC was the average cooling from the areas containing data. But, yes, Urban stated that their best estimate for global (i.e. the whole globe) surface-air temperature cooling was -3.3ºC derived from a model run at ECS 2.22 ºC (Not to be confused with the earlier -2.2ºC cooling in the data…hopefully). Is everyone getting this ok?

      • HR – The 2.2 C figure appears in the article published online in Science Express: “The best-fitting model (ECS2xC = 2.4 K) reproduces well the reconstructed global mean cooling of 2.2 K.”

        However, it doesn’t appear to be simply a typo, as seen in the following: “Averaging over all grid points in our model leads to a higher global mean temperature (SST over ocean, SAT over land) change (–2.6 K) than using only grid points where paleo data are available (–2.2 K)” – note that –2.6 K is in fact a larger temperature change than –2.2 K.

        It may be an error, but I’m not sure whether this can be changed in the print edition.

    • Thanks Fred,

      I’d also suggest that people read the interview that Urban did with “thingsbreak”. Things did a great job.

      • Here’s a link to the Interview. I notice that Urban does try to explain some of the different figures for cooling that they found, and mentions a 3.3 C value. He also addresses a point I had raised earlier regarding sea level:

        “with only 3.3°C of global cooling, one might wonder whether it is possible to grow the large ice sheets that existed at that time, with the accompanying large fall in sea level. For that, global averages can be deceiving. You have to look at how cold the ice sheets are, not the planetary average. If we look specifically at land temperatures north of 40°N latitude, our model simulates a cooling of 7.7°C. We compared this to the scientific literature and found a study which reported that a cooling of 8.3 ± 1°C is sufficient to generate the LGM ice sheets. So our study appears consistent with glaciological constraints.”

        This partially reassures me, but is a bit vague, mentioning (without a reference) that they “found a study” that didn’t require more cooling than they estimated to generate the ice sheets. Also, and perhaps more problematic, Urban refers to their land data, but those data were consistent with a higher ECS than their ocean data, so I’m not yet convinced they can account for the change in ice volume via their global ECS estimates. Perhaps that will be explained further in future discussions.

      • Fascinating:

        The author being interviewed makes an excellent point about self-identified “skeptics” who do a disservice to the tradition of scientific skepticism.

        Q: Any other thoughts on the skeptics’ reception of your paper?

        One blog did surprise me. World Climate Report doctored our paper’s main figure when reporting on our study. This manipulated version of our figure was copied widely on other blogs. They deleted the data and legends for the land and ocean estimates of climate sensitivity, and presented only our combined land+ocean curve:

        Upper: World Climate Report’s manipulated image removing the Land and Ocean data.

        Lower: The actual figure as it appears in Science, with the Land and Ocean curves included.

        They did note that their figure was “adapted from” ours, and linked to our paper containing the real figure. On the other hand, Pat Michaels duplicated this doctored version of our figure again in an article at Forbes, and didn’t mention at all that it had been altered. (A side note with respect to the Forbes article: Science didn’t “throw a tantrum” about posting our manuscript on the web. They never contacted us about that. I took it down myself as a precaution, due to the journal’s embargo policy.)

        I find this data manipulation problematic. When I created the real version of that figure, it occurred to me that it would be reproduced in articles, presentations, or blog posts. Because I find the difference between our land and ocean estimates to be such an important caveat to our work, I made sure to include all three curves in the figure, so that anyone reproducing it would have to acknowledge these caveats. I didn’t anticipate that anyone would simply edit the figure to remove our caveats. I can’t say why they deleted those curves. If you were to ask them, I’d guess they’d say it was to “clarify” the figure by focusing attention on the main result we reported.

        Regardless of their intent, I find the result of their figure manipulation to be very misleading, especially since their blog post strongly implies that our study eliminates the “fat right tail” of the climate sensitivity distribution, and has proven the IPCC’s climate sensitivity range to be incorrect. Our land temperature curve, which they deleted, undermines their implication. They intentionally took our figure out of the context in which it was originally presented, a form of “selective quotation” which hides data that does not support their interpretation.

        In summary, I find World Climate Report’s behavior very disappointing and hardly compatible with true skeptical inquiry.

        I can only imagine how they would respond if they found a climate scientist intentionally deleting data from a figure, especially if they deleted data that undermined the point of view they were presenting.

        Nailed it.

      • Sorry – that very last paragraph (not indented) was the author speaking, not me. There is an element of “Mommy, mommyism” there that I don’t agree with – although his point about false “skepticism” is right on the money.

      • I agree with the author. Deliberately misrepresenting data is despicable, whether done by a blog or by a scientist.
        That the doctored figure entered the MSM shows the standards of journalism have not changed since Walter Duranty won the 1932 Pulitzer Prize.

      • Regarding my earlier comment, the paper relating ice volume to ice age temperature was referenced in S11 and is Bintanja et al 2005. The data refer primarily to land surface air temperatures, and so the S11 land estimates, which require a higher ECS than the final global average derived from land plus SST, would still appear to be the appropriate figures for comparison.

    • Thanks. I think the truncated tails in the PDFs are a consequence of the confidence intervals (66%, 90%, etc.) they placed on the prior parameter uncertainties. In other words, above a certain value they assumed zero probability of occurrence, and that gets propagated to the results.

      Proponents of extreme value analysis may not agree totally with this approach, but it probably does get the mean and median values in the ballpark. In general the mean value is defined by
      \mu = \int_0^\infty x \, p(x) \, dx
      and this mean can drift high if there is a meaty or fat tail. That is partly the rationale for the Black Swan concerns that people are sensitive to these days.
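      To make the tail effect concrete, here is a quick sketch (toy numbers of my own, not the paper’s PDFs) comparing a thin-tailed and a fat-tailed distribution that share the same median:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Thin-tailed stand-in: a normal centered on 2.3 K. Its tail decays faster
# than exponential, so the sample mean stays pinned to the median.
thin = rng.normal(loc=2.3, scale=0.5, size=n)

# Fat-tailed stand-in: a lognormal with the same median of 2.3 K, whose
# right tail decays slower than any exponential.
fat = rng.lognormal(mean=np.log(2.3), sigma=0.5, size=n)

print(f"thin: median {np.median(thin):.2f}, mean {np.mean(thin):.2f}")
print(f"fat:  median {np.median(fat):.2f}, mean {np.mean(fat):.2f}")
# The medians match, but the fat tail drags the mean upward
# (lognormal mean = exp(mu + sigma^2/2), about 13% above the median here).
```

That upward drift of the mean relative to the median is exactly why truncating the tail, as the S11 priors effectively do, pulls the headline numbers down.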

    • Fred, thank you for this analysis

    • One point I would add to all this is just how much the estimates can vary depending on forcing assumptions. If you look at Figure 3 in the Schmittner paper, you will see that some forcing scenarios have peaks at about 1.3 K, others at about 2.1 K, and others at 2.9 K. While I think all of us should hope for a low sensitivity (unless we are hoping for increased human suffering), this figure does show just how uncertain all these things are. It of course makes my main point about climate science, which is that we don’t know what the real situation is and we had better get busy and find out. That means above all admitting that there is a problem with the science and trying to get people to honestly address it. Are you listening, Fred? Certainly, the Climategate emails make the point that climate scientists in private know that there is a problem with the science. However, their public statements give a different impression. And that is why the literature should not be treated as a source of truth, but as a target of validation and analysis.

    • Obviously, it’s a complex issue. Given that we have such incomplete information, it’s hard to be sure; hence the high level of ambiguous language here. I had thought that the orbital theory of the ice ages was that a change in the distribution of solar forcing caused the Arctic to cool and ice to accumulate at high latitudes. Then various feedbacks, such as the albedo feedback, caused the ice sheets to expand. As I recall from the various ice cores, sometimes the warm-ups and reglaciations happened rather quickly; e.g., from about 9,500 BP there is a warm-up in Greenland of 4 K in only a hundred years. In general it seems as if the warm-ups were more sudden than the glaciations. This raises in my mind the question of unknown forcings such as solar variability, effects of cosmic rays, aerosols, possible big volcanic eruptions, etc. It tells me that all the pseudo-scientific certainty about sensitivity estimates is probably wrong. The sensitivity clearly is highly variable, and there were almost certainly changes in forcings and feedbacks that we don’t understand. You know, we still don’t understand why the Roman Empire collapsed. Do we really understand how paleoclimate works? I think not. I personally believe that the human species will figure out how to adapt to climate change. It does seem to me that, given the last 100,000 years, we need to be more fearful of a new ice age than of a warmer planet.

    • Something else that seems clear (and was pointed out first by another denizen, I’ve forgotten exactly who) is that if you examine the climate record going back millions of years, it appears that climate changes had much higher amplitudes during relatively cold periods and much smaller amplitudes during warm periods, suggesting that in a warmer world sensitivity is a lot lower than at the Last Glacial Maximum.

      • If you include long term changes in ice sheets as feedbacks, then glacial/interglacial transitions are characterized by a higher climate sensitivity (about 6 C) than the 3 C typically cited for climate change during an interglacial such as ours. (For one detailed exposition on this, look up the previous thread of months ago on this blog involving Hansen’s long paper on the subject that discussed different types of sensitivities).

    • Fred,

      That’s a good post. Thanks.

  11. It may be estimating a lower climate sensitivity than the IPCC numbers, but it still supports the idea that a 2 C rise in global temperature could have catastrophic implications.

    This paper shows proxy temperatures during the LGM at -4.9 C relative to the current interglacial maximum, which may be 2 C warmer than now.

    http://www.geology.wisc.edu/~acarlson/Other/Shakun_Carlson_QSR_2010.pdf

    So doubling CO2 will put the world warmer than the maximum of the current interglacial (assuming the median climate sensitivity of Schmittner et al), and one can assume sea levels similar to the maximum of the interglacial would result.

    So how is that for nothing to worry about?

    • Bob,
      Are the words ‘catastrophic implications’ in the study?

    • bob droege

      doubling CO2 will put the world warmer than the maximum of the current interglacial (assuming the median climate sensitivity of Schmittner et al), and one can assume sea levels similar to the maximum of the interglacial would result.

      So how is that for nothing to worry about?

      The 2.3C rise for a “doubling of CO2” may or may not be reasonable, but let’s take a look at it.

      We now have 390 ppmv CO2 in the atmosphere.

      IPCC estimates that with the UN estimate of world population levelling off at around 10 billion over this century, moderate economic growth and a “business as usual” scenario with no “climate initiatives” (scenario and storyline B1), we will reach around 580 ppmv CO2 by 2100.

      This is not a “doubling”, but an increase of 48%.

      Using Schmittner’s median 2xCO2 CS of 2.3C and the logarithmic relation, we arrive at an equilibrium warming from today to 2100 of 1.3C.

      This is around twice what we saw from 1850 to today.

      I’m not worried, Bob.

      Are you?

      Max
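      The arithmetic above can be checked against the standard logarithmic CO2 relation; a quick sketch using the comment’s own figures:

```python
import math

# Check of the arithmetic above, using the standard logarithmic CO2 relation:
# dT = ECS * ln(C/C0) / ln(2). Inputs are the figures quoted in the comment.
ECS = 2.3     # Schmittner et al. median, K per CO2 doubling
C0 = 390.0    # current CO2 concentration, ppmv
C = 580.0     # assumed 2100 concentration, ppmv

increase = 100.0 * (C / C0 - 1.0)
dT = ECS * math.log(C / C0) / math.log(2.0)
print(f"CO2 increase: {increase:.0f}%")            # just under 49%
print(f"equilibrium warming to 2100: {dT:.1f} K")  # ~1.3 K
```

Since the relation is logarithmic, a 48–49% increase counts as about 0.57 of a doubling, which is why the result is roughly half the 2.3 K per-doubling figure.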

      • Brandon Shollenberger

        manacker, you have to remember there is warming “in the pipeline.” This means the warming we’ve observed won’t be the total warming that increases in greenhouse gases (up to this point) will cause.

        Conversely, you have to remember CO2 isn’t the only thing which has changed. If you convert all the increases in greenhouse gases to an effective rise in CO2, you’ll get far more than a 48% increase.

        These two points work in opposite directions, so to some extent they will cancel out, but they still need to be considered.

      • Here are some numbers for you.

        At the peak of the last glaciation, sea levels were 120 meters below what they are now, with global average temperatures 5 C less than today, giving a rather imprecise estimate of 24 meters of sea level rise per degree C of warming.

        But since there are only about 70 meters or so, sea-level-wise, of ice left to melt, and it will probably take 5 C to melt all of it, that gives about 14 meters of sea level rise per degree C of warming going forward. Obviously, linear extrapolations are wrong, and there is a considerable time lag between the increase in temperature and the eventual sea level rise, but given that, and your estimate of a 1.3 C temperature increase between now and 2100, how much do you think sea levels will rise?
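        Those ratios follow from simple division of the round numbers quoted above (linear scaling only, which, as noted, is wrong in detail):

```python
# Back-of-envelope ratios from the comment above, using its own round numbers.
# Linear scaling only: real ice-sheet response is strongly non-linear.
drop_lgm = 120.0       # m of sea level rise since the LGM
cooling_lgm = 5.0      # K colder than today at the LGM
print(drop_lgm / cooling_lgm)      # 24.0 m per K, LGM to present

ice_left = 70.0        # m of sea-level-equivalent ice remaining today
warming_to_melt = 5.0  # K assumed needed to melt all of it
print(ice_left / warming_to_melt)  # 14.0 m per K going forward
```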

    • bob
      Thanks for the link.
      As I understand Shakun & Carlson, the Last Glacial Maximum (LGM; peak glacial conditions) was ~(22.2 +/-4.0 ka) ago, and the Altithermal (peak interglacial conditions) was ~(7.8 +/-3.4 ka) ago.

      Altithermal includes maximum temperature – i.e., the peak interglacial temperature was reached ~7,800 years ago!

      That means we are headed down towards the next ice age!
      Message – its going to get colder (- despite Mann’s manipulations and protestations.)
      Market advice: Invest in long underwear.

      But with growing population, we need more food.
      We need more plant food – also known as CO2.
      We need more warmth to improve agriculture.

      No I am not worried about the relatively small warming predicted – it helps fend off the cooling.
      Conclusion – bring on all the global warming we can get!

      • The thing is, we have the technology to prevent a glaciation, and that is exactly what we are doing. There are better and easier methods to do that, though without the side benefits of electricity and mobility.

        One medium sized plant injecting fluorocarbons into the atmosphere would provide enough greenhouse gases to prevent a glaciation. But you have to take Hansen’s word on that.

  12. Why do you use a log scale graph and then say, look at the steep fall off? The reason you have a steep fall off is because you used a log scale.

    This is just more statistical shell games and chicanery. Where’s the beef?

    • Why do you use a log scale graph and then say, look at the steep fall off? The reason you have a steep fall off is because you used a log scale.

      A fat tail will show up as a concave-up curve on a semi-log scale (a smile), and a thin tail will show as a concave-down curve (a frown). The dividing case, an exponential decline, will be a straight line. Since one of the findings is the lack of fat tails, this is a good way to show it.

      Many of the PDFs show what look like exponential declines but then drop off as if they were truncated. As Nathan Urban says, “The tails are thin because the likelihood of the data in those areas of parameter space is very small, under the likelihood function we assumed.” That is the authors’ rationale for their model.

      So it has nothing to do with “shell games and chicanery” but with knowledge of the lingo and experience in how to look at the data.
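      For anyone who wants to verify the smile/frown rule numerically, here is a sketch using idealized tails (my own construction, not the paper’s PDFs):

```python
import numpy as np

# Numerical check of the semi-log rule: on a log-y plot a fat tail curves
# upward (smile), a thin Gaussian tail curves downward (frown), and an
# exponential decline is a straight line. These are idealized log-densities
# with additive constants dropped, evaluated out in the right tail.
x = np.linspace(4.0, 10.0, 200)

log_exp = -x                                       # exponential tail
log_gauss = -0.5 * x ** 2                          # Gaussian (thin) tail
log_lognorm = -0.5 * np.log(x) ** 2 - np.log(x)    # lognormal (fat) tail

def curvature_sign(y, tol=1e-9):
    """+1 if concave-up, -1 if concave-down, 0 if linear (2nd differences)."""
    c = np.diff(y, 2).mean()
    return 0 if abs(c) < tol else int(np.sign(c))

print(curvature_sign(log_exp))      # 0  -> straight line
print(curvature_sign(log_gauss))    # -1 -> frown (thin tail)
print(curvature_sign(log_lognorm))  # 1  -> smile (fat tail)
```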

  13. WHT-

    Thanks for this post.

    Could supplementation with hot-house paleo-data be useful for better identifying the cusp of the well and a potential point of transition out of the current ice-house climate?

    Thanks,
    bi2hs

  14. Why do you all try to make this so complicated?
    When earth is warm and sea ice is melted, it snows a lot and ice advances and earth cools.
    When earth is cool and sea ice is frozen, it don’t snow much and ice melts and retreats and earth warms.
    We are warm now. Sea Ice Extent is low. The Snows have started. We will cool.

    • Well, that is a good Chauncey Gardiner explanation, but then why do science at all? I.e., we are born, we get ill, and then we die; why do medical science, then?

      It’s because of the time scales involved: we wonder whether all this will happen over a human time span and whether future generations will be impacted.

      • Web, a quick correction: We are born, we have one hellava good time, then we die happy. It is another option I decided not to explore :)

        Seriously, there is a little logic in what Herman says, warmer objects can cool more rapidly. For example; if you want crystal clear ice for your cocktail, start with warm water, it has less absorbed gas :) Funny thing about ice, what?

      • I agree they can cool more rapidly, as Fourier’s law and Stefan-Boltzmann say that the rate of cooling is proportional to the temperature difference (Fourier) and to the fourth power of absolute temperature (S-B).
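        A minimal numerical illustration of the S-B part (made-up temperatures):

```python
# Illustration of the Stefan-Boltzmann point above, with made-up temperatures:
# radiated power goes as T^4, so warmer surfaces shed heat disproportionately.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated(T, emissivity=1.0):
    """Radiative flux (W m^-2) from a grey body at absolute temperature T."""
    return emissivity * SIGMA * T ** 4

warm, cool = 300.0, 280.0   # K
ratio = radiated(warm) / radiated(cool)
print(f"{ratio:.2f}")   # ~1.32: a ~7% warmer surface radiates ~32% more
```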

      • Science is necessary. That helps us advance. A lot of times it is right and we go forward on the right path. A lot of times it is wrong and we go forward on the wrong path. I could give a lot of examples here, but I will not . Find your own if you care. Science is ahead of what we had one hundred years ago. Science is behind what we will have one hundred years in the future. That science, one hundred years in the future, will still not have all the answers. That science will still not be settled.

      • The difference being the very clear identification of things that kill people leading to scientific activity aimed at countering these things. The Climate Research Industry has, in my opinion, become a self-eating watermelon fueled by partisan politics and the creation of irrational fears and guilt among the general populace.

        An interesting strawman might be constructed around the creation of a tax, of the same magnitude as Carbon taxes being proposed, aimed at research into mitigating death from “natural causes”, that is, old age.
        At least the results might be tangible and measurable. Which just doesn’t seem to hold for ACC, which hasn’t yet reached the point of habeas corpus.
        Except in models, of course.

  15. “Climate sensitivity cannot be quantified”
    In email Tom Wigley of UCAR clarifies:

    Keith and Simon (and no-one else),

    Paleo data cannot inform us *directly* about climate sensitivity
    (as climate sensitivity is defined). Note the stressed word. The whole
    point here is that the text cannot afford to make statements that are
    manifestly incorrect. This is *not* mere pedantry. If you can tell me
    where or why the above statement is wrong, then please do so.

    Quantifying climate sensitivity from real world data cannot even be done
    using present-day data, including satellite data. If you think that one
    could do better with paleo data, then you’re fooling yourself. This is
    fine, but there is no need to try to fool others by making extravagant
    claims.

    Tom

    See WUWT: Senior NCAR scientist admits: “Quantifying climate sensitivity from real world data cannot even be done using present-day data…”

    • The insight is that certain time and space segments of paleoclimate data can be used to calibrate models in specific regimes and then those same models are used to estimate sensitivity across the board.
      The problem with paleo data is one of varying resolution, precision, and accuracy, while present-day data suffer from short time series.
      I agree that you can’t read sensitivity directly from the data, rather only indirectly through a model, and even that is challenging.

    • David, you don’t get it.

      Wigley is careful to use the word DIRECTLY.

      So when you see a skeptic trying to calculate the ECS from the data, when they try to do this DIRECTLY, they will get the wrong answer, every time.

      Anybody who has worked in the field of control systems understands that you can only estimate the sensitivity or gain of a system by using data and a MODEL of the system you are trying to understand. You can’t look at the data and read the sensitivity off it DIRECTLY.

      For observational data (like temperature series) you would use something like a two-box model. With volcanic data you try to estimate the relaxation response and then work your way from that to the TCR and ECS.

      With long-term paleo data you use a GCM. You absolutely need a model of the process to work backwards from the response to the gain, given an ensemble of forcings.
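For readers unfamiliar with the two-box idea mentioned above, here is a minimal sketch; the parameter values (feedback parameter, heat capacities, exchange coefficient) are hypothetical placeholders chosen to illustrate the structure, not fitted estimates:

```python
# Minimal two-box energy-balance sketch: a low-heat-capacity surface box
# coupled to a high-heat-capacity deep-ocean box. All parameter values
# are hypothetical placeholders, for illustration only.
def two_box_response(forcing, lam=1.3, gamma=0.7, c_s=8.0, c_d=100.0, dt=1.0):
    """Return the surface temperature anomaly series for a forcing series.

    lam   : feedback parameter (W/m^2/K); equilibrium warming = F / lam
    gamma : surface-to-deep heat exchange coefficient (W/m^2/K)
    c_s   : surface-box heat capacity (W yr / m^2 / K)
    c_d   : deep-box heat capacity (W yr / m^2 / K)
    """
    ts = td = 0.0
    out = []
    for f in forcing:
        flux = gamma * (ts - td)            # heat drained to the deep ocean
        ts += dt * (f - lam * ts - flux) / c_s
        td += dt * flux / c_d
        out.append(ts)
    return out

# Abrupt, sustained 2xCO2-like forcing (~3.7 W/m^2): the transient response
# lags the equilibrium value F/lam because the deep box soaks up heat.
temps = two_box_response([3.7] * 1000)
print(temps[9], temps[-1], 3.7 / 1.3)
```

The point of the exercise: the data alone give you the transient curve, and only the model structure lets you work backwards to the gain.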

  16. Next week’s AGU meeting is keeping me out of mischief (I’m presenting on Thursday afternoon in session GC43B) so I’ll be brief.

    While I’m sure there are plenty of details to investigate in this paper, such as those WHT has talked about, two things about the paper leap out at me that I don’t believe have been mentioned here yet.

    1. No mention of expected CO2 in 2100. Assuming their estimate of 2.3 K/doubling, whether it’s 500 ppmv or 1000 ppmv will make a difference of 2.3 degrees. On what basis do they project the temperature in 2100 to any greater precision than that?

    2. No mention of the different temperature rise times then and now. For then, assume a 10 °C rise taking 5000 years (out of the 100,000-year cycle; the decline is more than an order of magnitude slower). The fastest temperature could have risen then would therefore be 2 °C per millennium. Temperature was rising at 1 °C per century a few decades ago and is now rising at 2 °C per century, a growth that is well correlated with the impressively growing human population and fossil fuel consumption. If this keeps up it could reach 4 °C per century during this century.

    The difference is that there was an order of magnitude more time then than now for both the oceans and the land to heat-sink the rising temperature. Heat sinks are more effective at draining slowly rising heat than fast-rising heat. Hence whatever climate sensitivity obtained during the deglaciations needs to be multiplied by a suitable factor in order to make it relevant to modern climate sensitivity.

    Has anyone seen any attempts at estimating this factor? Taking it to be unity, as seemingly done in this paper, is surely naive.
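On point 1 above, the standard logarithmic forcing relation makes the dependence on the 2100 concentration explicit. A quick sketch (the concentrations are the hypothetical endpoints from the comment, not projections):

```python
from math import log2

# Equilibrium warming for a per-doubling sensitivity S: dT = S * log2(C/C0).
# 280 ppmv is the usual preindustrial baseline; S = 2.3 K is Schmittner
# et al.'s median. The 2100 concentrations are hypothetical endpoints.
def warming_k(s_per_doubling, c_ppmv, c0_ppmv=280.0):
    return s_per_doubling * log2(c_ppmv / c0_ppmv)

s = 2.3
low, high = warming_k(s, 500.0), warming_k(s, 1000.0)
# 500 -> 1000 ppmv is exactly one doubling, so the two projections
# differ by exactly S, matching the "difference of 2.3 degrees" above:
print(round(low, 2), round(high, 2), round(high - low, 2))  # prints: 1.92 4.22 2.3
```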

    • Dr. Pratt,
      Your question seems to skip over the evidence that in fact there have been ‘rapid’ temperature changes in the past on the same scale as we are experiencing now.
      Best wishes for a successful AGU presentation and for staying out of mischief.

    • Vaughan Pratt

      You speculate that atmospheric CO2 will rise to “500 ppmv or 1000 ppmv” by 2100.

      The lower estimate probably makes sense (IPCC has estimated 580 ppmv for “storyline and scenario B1”: population leveling off at around 10 billion over the century, medium economic growth and a “business as usual” world with no “climate initiatives”).

      On this basis, using Schmittner’s mean value for 2xCO2 CS of 2.3C, we would have 1.3C warming above today by 2100.

      The 1,000 ppmv figure is stretching credibility a bit, Vaughan, since all the fossil fuels on our planet, as estimated by WEC in 2010, would only get us slightly above this level when they are all gone (and it is extremely unlikely that this will occur by 2100).

      As far as the rate of warming is concerned, it is absolutely untrue that we are now at a warming rate of 0.2C per decade. The long-term warming rate has been around 0.04 to 0.05C per decade. Over the past decade the global temperature (HadCRUT3) actually cooled slightly, rather than warming, despite the fact that IPCC had predicted warming of around 0.2C per decade.

      Max

      • You keep insisting that the natural 280 ppmv is growing exponentially, with not a shred of evidence and plenty of data to refute it. It is the anthropogenic component of CO2 that is growing exponentially, namely at a CAGR of 2.2%/yr, for which there is much stronger evidence. The CAGR of fossil fuel consumption has been that in recent decades, although earlier this century it was 5% and last century it was 15%!
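The compounding behind those CAGR figures is easy to make concrete. A small rule-of-thumb sketch using the growth rates quoted above:

```python
from math import log

# Doubling time implied by a constant compound annual growth rate (CAGR).
def years_to_double(cagr_pct):
    return log(2.0) / log(1.0 + cagr_pct / 100.0)

# At the growth rates quoted above for fossil fuel consumption:
for rate in (2.2, 5.0, 15.0):
    print(rate, round(years_to_double(rate), 1))
# prints:
# 2.2 31.9
# 5.0 14.2
# 15.0 5.0
```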

  17. CO2 is not the temperature driver. When you estimate a climate temperature sensitivity based on something that has nothing to do with temperature, you will get something that means less than nothing.

  18. It looks to me as if everyone out there has their mind made up and none of you read and think about the possibility that you could be wrong about anything. You read other opinions only with the thought of how you can prove someone else wrong. I am guilty; I do that too. Read my opinions and tell me why you think I am wrong. Really why, not the standard answers.
    http://popesclimatetheory.com/

  19. there have been ‘rapid’ temperature changes in the past on the same scale as we are experiencing now.

    Indeed, and the same scaling factor should be applied when estimating climate sensitivity for any period rising at the modern rate. The 2.3 K/doubling figure obtained by Schmittner et al was for the order-of-magnitude slower deglaciations.

    In the case of the rapid rises 15,000 and 12,000 years ago, conceivably the climate sensitivity then might even have been greater than today. Heaven knows Earth needed it back then, in some places it was 20 degrees colder than today. The 30,000 years or so prior to that was also pretty cold, which may have been a contributory factor to the extinction of the Neanderthals, though the main cause is still hotly (so to speak) debated.

    • “In the case of the rapid rises 15,000 and 12,000 years ago, conceivably the climate sensitivity then might even have been greater than today.”

      This is exactly right. So why would we presume that the average climate sensitivity since the LGM has any bearing on what the sensitivity is today? It could be higher than the average or it could be lower and the deviation from the average could be large. The only way to measure sensitivity in the modern age is to measure it in the modern age.

      • steven,
        Good luck with that.
        So far climate science is doing an extremely expensive post-normal version of “Waiting for Godot”.

      • The only way to measure sensitivity in the modern age is to measure it in the modern age.

        That was my position until recently, when it occurred to me that if climate sensitivity was a function primarily of rate of change of temperature, paleoclimate CS might have some predictive value for modern CS after all, if only we could work out what that function should be. Until we can however I agree with your statement above.

    • Dr. Pratt,
      Thanks for the reply. It is becoming common for conversations to get peppered oddly down the threads.
      I think the Neanderthals met the fate of many extinct species from the Cro-Magnon/ early H. Sapiens period: they tasted great and were not too filling.
      I do have a question about TCR: Transient Climate Response sounds like an oxymoron: if it is really transient, it is weather.

  20. Oh dear, missing the wood for the trees comes to mind.

    More GHGs slow down the rate of energy loss to space, but a faster or larger water cycle speeds it up again via increased non-radiative energy transfer from surface to tropopause.

    We see a shift in the latitudinal position of the permanent climate zones and that is perceived as climate change.

    Evidence here:

    http://climaterealists.com/index.php?id=8723&linkbox=true&position=10

    “CO2 or Sun?”

    and the detailed mechanisms are set out in the linked articles towards the end for those who are interested.

    It really is very simple once the chaff is cleared out of the way.

  21. This paper is part of what I like to call the climate sensitivity trickle-down effect.

    You can’t go out on limb and come up with climate sensitivity numbers between 0.5 and 1C, like Spencer and Lindzen, and not expect to be viciously attacked by the climate mafia. But, you can produce numbers that are a little lower than those of the IPCC and still overlapping with the IPCC numbers and get away with it. The team is unlikely to expend much energy claiming that someone is crazy for producing climate sensitivity numbers that are just a little lower. As a result, we are now seeing many studies of this type come out. And with time, these new studies will become an anchoring point from which other authors can afford to show their studies in which they found results that are even a little lower still. And they will be able to do it without putting their careers and professional reputations at stake. It’s a travesty that the process has to follow this path in order to avoid the self appointed AGW gatekeepers, but eventually it will get us to the real climate sensitivity numbers.

    • a very perceptive analysis of post-normal science, and one that rather confounds the paradigmatic view of science.

    • Tilo,

      You have Hansen to thank for that. He couldn’t get Venus to fly without a dash of Arrhenius. It is kind of interesting watching Arrhenius get derailed for a second time.

      • First, the climate sensitivity estimate was based on a compromise, not science. It would automatically be incorrect for the overall range. Voting is not science.

        Second, reviewing Arrhenius, the forcing due to CO2 is dependent on the average radiant layer based on the temperature of that layer. An average 246K radiant layer would not exist in the Antarctic, where it could have an impact on the surface.

        Third, there is more to heat flow than radiant flux. The Antarctic and the tropopause are thermal sinks which would have to vary if the impact of a change in radiant forcing were changing significantly globally. Their stability tends to lead to questions.

        Other than that, the theory is perfect :)

    • What makes you think that the “real” climate sensitivity numbers are outside the IPCC estimate?

    • Tilo –

      This seems to me to be essentially describing a rather normal process of scientific research leading to more knowledge and hence greater understanding, specificity, and a decrease in uncertainty.

      Tribalism among climate scientists may be an obstacle to that process – but so is the motivated conspiratorial perspective of some “skeptics” who seek to politicize the process of science in the climate debate.

      TomFP –

      Why is what Tilo described “post-normal”? Is there something new about a paradigmatic view of science?

  22. WebHubTelescope, in all humility and seriousness, may I ask two general but post-related questions:
    1) How do we get into and out of ice ages?
    2) What happens after massive cooling events, like supervolcanoes and meteor impacts?

    I ask the second because I can’t see why a year or two of ‘volcanic winter’ doesn’t tip us into a long-term freezing event, unless the system has huge inertia. Cooling positive feedbacks (snow/ice albedo) and heating positive feedbacks (water-driven evaporation) are both plausible, but temperatures ARE pretty stable, even after huge geological events.
    So do the oceans act as huge thermal sinks?

    • I just assume these are all metastable points and a forcing function will start the ball rolling in one direction or the other.

      It must be that the long gradual or sustained forcing functions have a bigger influence than a short-term shock. Sure, the ocean could act like a capacitive buffer to transients.

      • This is the bit I don’t get. If we have an enormous thermal buffering capacity, why does the atmospheric temperature bounce around so much?
        Air temperatures must be tightly coupled to sea surface temperatures, or a modest aerosol injection into the atmosphere would lead to runaway cooling. The positive feedback of the snow/ice albedo effect would mean one big eruption/impact starts the glaciers rolling towards the equator.
        Perturbations in the atmosphere must surely be a proxy for heat-exchange fluxes between the water and the atmosphere. We can see this in the various decade-long warming/cooling cycles in the thermometer record.

      • The atmosphere has a small heat capacity, and so locally, air temperatures vary rapidly depending on whether it’s clear or cloudy, night or day, or winter or summer. On a global average, they vary little from one year to the next except for the effect of climate forcings such as changes in greenhouse gases or solar radiation, or certain internal climate modes such as ENSO. Ocean heat capacity, being far greater, exhibits smaller short-term fluctuations overall, but sea surface temperature does tend to fluctuate considerably on a local basis. However, note that air temperature is more stable over the oceans than over land areas, reflecting to some extent the thermal inertia of the oceans.

        Your second question, concerning feedbacks, involves a different set of concepts. The climate is stabilized by the fact that net feedbacks are negative. This is universally understood within climate science, but the terminology used has been confusing, leading to much misunderstanding. A temperature response to a forcing (e.g., an increase in CO2) leads to a direct warming effect combined with the effect of feedbacks. Some of these (e.g., water vapor) tend to amplify the direct effect, whereas others (e.g., the lapse rate feedback) tend to diminish the response.

        Here is where the confusion enters. When all of the commonly estimated feedbacks of importance are evaluated (water vapor, ice/albedo, clouds, lapse rate), the net effect is generally estimated to be one of amplification, and this has led to the concept of net positive feedback. However, all of these effects are eventually constrained by the so-called Planck Response, which refers simply to the tendency of a warmer body to shed more heat according to the Stefan-Boltzmann law. The Planck Response ultimately returns the system to a stable state by limiting any warming from the forcing plus the cited feedbacks, preventing a runaway state. It puts a limit on how much warming is possible directly from the forcing, and by operating on the positive feedbacks as well, it limits their ability to amplify the direct forcing response.

        In this sense, the Planck Response can be thought of as a negative feedback that dominates in the long run. The problem is that in climate science terminology, it is typically not included in the formal definition of feedbacks even though its effects are included in the calculations. Once it’s understood that the Planck Response prevents a runaway by allowing the climate to shed extra heat until it stabilizes at some new temperature, the confusion is averted. Unfortunately, the terminology tends to obscure this.

      • Note that the Planck Response also operates to limit cooling – as the climate cools, it sheds less heat, again tending to stabilize.

        On the cooling side, additional complexity arises (as WHT points out) from the effect of phase changes (water to ice) that occur without a temperature change. This tends to limit the cooling that would come from a negative flux imbalance (less energy in than out of the climate system). However, there is another effect operating in the opposite direction – as more ice forms, it increases planetary albedo, thereby amplifying the tendency to lose heat and cool. The resultant of these conflicting tendencies varies according to climate state, but with very severe cooling, can in theory lead to a runaway cooling that eventuates in an “icehouse” climate, with ice reaching near the equator or even freezing the oceans entirely.
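The size of the Planck Response described above can be roughed out by linearizing the Stefan-Boltzmann law around Earth’s effective emission temperature. A back-of-envelope sketch (blackbody only; model-derived estimates of the Planck feedback are somewhat smaller):

```python
# Linearized Planck response: d(sigma*T^4)/dT = 4*sigma*T^3.
# Blackbody back-of-envelope only; real-atmosphere estimates differ.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_lambda(t_emit_k):
    """Extra outgoing flux per kelvin of warming (W/m^2/K)."""
    return 4.0 * SIGMA * t_emit_k**3

# Around Earth's ~255 K effective emission temperature, each kelvin of
# warming sheds roughly 3.8 W/m^2 more to space -- the stabilizing
# tendency described above:
print(round(planck_lambda(255.0), 2))  # prints 3.76
```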

      • Fred.
        1) Either stop using the term forcing or assume that I will not bother to read your reply posts.
        2) Do not state suppositions as facts. This will just lead people to assume that when you do actually state something that is established to be truthful, it is just arm-waving like the rest of your discourse.
        3) Try to understand some basic kinetic descriptions.

        “The climate is stabilized by the fact that net feedbacks are negative. This is universally understood within climate science, but the terminology used has been confusing, leading to much misunderstanding.”

        Different feedbacks cannot simply be added or subtracted. A steady state is the sum of influxes and effluxes, but the mechanisms that change influxes and effluxes, the feedbacks, are not, and cannot be, added or subtracted.
        That this misconception is ‘universally understood within climate science’ is rather depressing.

      • Doc – I’m trying to help you learn some of this material, but if you’re not interested, I won’t bother. Incidentally, no-one has suggested that feedbacks can simply be added or subtracted.

        However, you’re not the only one confused on these general topics, and so if my descriptions are helpful to others, I’ll have accomplished something.

      • Well there’s gratitude for you!

        I found it helpful anyway.

      • Fred:
        Could you please give a short refresher on the “lapse rate feedback” and how it is different from the Planck Response.

      • Leigh – The Planck Response simply refers to the tendency of a body to radiate heat in proportion to the fourth power of its absolute temperature, as described by the Stefan-Boltzmann law. If, for example, an increase in atmospheric CO2 causes less heat to leave the planet than it receives, the Earth’s surface will warm until it reaches a new temperature at which the amount of energy it radiates upward is sufficient to restore a balance between incoming and outgoing energy. That limits how much warming will occur from a given warming perturbation, which is why it can be thought of as a negative feedback even if it isn’t always included in formal definitions of climate feedback.

        The lapse rate feedback refers to a reduction in lapse rate as a function of a warmer planetary surface. The lapse rate refers to the reduction in air temperature with increasing altitude. For a general description, see lapse rate. When the Earth warms, more water evaporates, and as this water vapor is convected upward due to the greater buoyancy of warm air, it starts condensing along the way as it reaches cooler altitudes, releasing latent heat of vaporization. This causes higher altitudes to become warmer relative to lower altitudes than previously, which is reflected in a reduced lapse rate – i.e., the higher altitudes are still cooler than lower ones, but the difference has diminished. The net result of all of this is that thermal energy is now redistributed on average so that it is closer to space than before, and thus less easily intercepted by greenhouse gases before escaping. This makes it easier for the Earth to cool, which is why it is a negative feedback.

        An increase in water vapor also mediates a positive “water vapor feedback” because water vapor is a greenhouse gas that can intercept infrared radiation from the surface and redirect some of it downward, heating the surface and atmosphere. This is one of the positive feedbacks that accompanies warming by CO2. Interestingly, although the magnitude of the positive water vapor feedback (due to its greenhouse effect) and the negative lapse rate feedback due to vapor condensation are individually somewhat uncertain, the difference between them appears to be less uncertain, and is generally estimated to be slightly positive – i.e., the greenhouse warming effect of water vapor outweighs its tendency to move heat upwards where it can leave the atmosphere more easily.
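The lapse-rate mechanism described above can be made concrete with a toy calculation; all the numbers below (the 6.5 K/km baseline and a hypothetical shift to 6.3 K/km) are schematic, not values from the paper:

```python
# Toy illustration of the lapse-rate feedback: with a linear temperature
# profile T(z) = T_surface - lapse * z, a slightly reduced lapse rate
# concentrates warming aloft, where heat escapes to space more easily.
# All numbers are schematic, for illustration only.
def temp_at_altitude(t_surface_k, lapse_k_per_km, z_km):
    return t_surface_k - lapse_k_per_km * z_km

t_before = temp_at_altitude(288.0, 6.5, 10.0)  # baseline 10 km temperature
t_after = temp_at_altitude(289.0, 6.3, 10.0)   # +1 K surface, reduced lapse

# The 10 km level warms by ~3 K while the surface warmed only 1 K,
# enhancing radiation to space (a negative feedback):
print(round(t_after - t_before, 1))  # prints 3.0
```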

      • Doc Martyn: This is the bit I don’t get. If we have an enormous thermal buffering capacity, why does the atmospheric temperature bounce around so much?

        first of all, if you look at the Kelvin scale, Earth’s near surface air temperature only bounces around at most 20% above and below its mean. That’s not a lot from one perspective, but a great deal from the perspective of the transition temperatures of water, and requirements for respiration and metabolism in living creatures.

        Secondly, insolation is not constant: (1) each slice of longitude rotates into shade and back into sunlight each day; (2) the spheroidal shape of the earth makes insolation greater in the “middle” than at the poles; (3) the slant of the axis of rotation makes the N. and S. hemispheres different distances from the sun; (4) the revolution of the earth about the sun changes the relative nearness of the N. and S. hemispheres to the sun; (5) the orbit of the earth moves it closer to and then further from the sun. All of these variations produce temperature changes and gradients, i.e., disturbances from equilibrium.

        thirdly, earth albedo is always changing: water freezes and thaws; clouds form and dissipate; deciduous trees and shrubs fill out and shed.

        fourth, although the heat capacities of the solid and liquid surfaces of the earth are not too different, their absorptivity is greatly different: on solid ground, most of the sunlight is absorbed, and the temperature increased, on the surface (but there are great and important inhomogeneities of cover and altitude); in water, the sunlight penetrates deeper, so the warming is more uniform, and the surface is not as hot.

        Perhaps you are asking a more subtle question.

      • Doc Martyn: This is the bit I don’t get. If we have an enormous thermal buffering capacity, why does the atmospheric temperature bounce around so much?

        Perhaps you were referring to the temperatures that occur with glaciation and deglaciation.

    • DocMartyn:

      I have to admit that I’m a little puzzled by that myself. We have supposedly had the condition of snowball earth. With the entire planet being one giant ball of albedo, how did we get back from that? It would seem like the planet would be so cold at that point that no small forcing change would be enough to get it to start melting. I guess if the entire planet were frozen, then there would be no clouds and no further snow would fall since evaporation from the surface would virtually cease. As a result, any volcanic activity would deposit soot on the surface of the snow and ice and that soot would not be covered by fresh snow. Instead it would build up until the albedo disappeared. After that, temperatures could rise and snow and ice could melt.

      That also leads us to the question – is part of our current temperature rise due to reduced albedo from soot? Take that together with the UHI that is not accounted for in all surface temperature sets, together with the natural recovery from an LIA, and together with a likely cosmic-ray temperature effect, and you have a lot of room for a very low climate sensitivity number.

  23. Chaotic Climate Science.

    http://www.realclimate.org/index.php/archives/2004/12/antarctic-cooling-global-warming/

    “Furthermore, there are actually good reasons to expect the overall rate of warming in the Southern Hemisphere to be small. It has been recognized for some time that model simulations result in much greater warming in the high latitudes of the Northern Hemisphere than in the South, due to ocean heat uptake by the Southern Ocean.”

    From Bob Tisdale’s site, http://i53.tinypic.com/wce7it.jpg and http://i55.tinypic.com/qq965t.jpg

    Ocean heat uptake looks a little out of sequence. Must be those Rossby waves?

    No wait!

    http://www.realclimate.org/index.php/archives/2006/03/significant-warming-of-the-antarctic-winter-troposphere/

    Won’t be long now!

    http://www.realclimate.org/index.php/archives/2009/02/antarctic-warming-is-robust/

    Tah dah!

    No Wait! We need more press!

    http://www.realclimate.org/index.php/archives/2009/01/warm-reception-to-antarctic-warming-story/

    Ah, that’s better.

    Oh crap! Somebody deal with O’Donnell

    http://www.realclimate.org/index.php/archives/2011/02/west-antarctica-still-warming-2/

    Well, we did say that the Antarctic is consistent with the models. Instead of polar amplification, mono-polar amplification. What happened with the 0.7C per decade warming at 600 hPa in the troposphere? That could have been consistent with the models too?

    Hey! Ray!, That Schmittner guy’s paper is not consistent with the models enough.

    http://www.realclimate.org/index.php/archives/2011/11/ice-age-constraints-on-climate-sensitivity/#comment-220643

    Fixed Gav!

    Crap!

    http://www.realclimate.org/index.php/archives/2011/11/ice-age-constraints-on-climate-sensitivity/#comment-220701

    Well, anyway, it is consistent with the models, just not our models :)

  24. World Climate Report has a post on the Schmittner paper that allegedly presents a misleading interpretation of some of the paper’s results.
    Anyone have information on this?

    • Read the section of the Interview fairly near the end – around the place where two graphs are shown, one on top of the other.

      • Fred,
        Thanks for the link. The interview is interesting.
        As usual, Joshua misrepresents things, but that is expected.
        I think that Nathan’s attribution of motive is a pretty big leap, especially regarding the World Climate blog.
        I do like this quote from the interview:
        “Q: Does this study overturn the IPCC’s estimate of climate sensitivity?

        No, we haven’t disproven the IPCC or high climate sensitivities. At least, not yet. This comes down to what generalizations can be made from a single, limited study. This is why the IPCC bases its conclusions on a synthesis of many studies, not relying on any particular one.”

        I also like this part:
        “Q: Your paper got a lot of positive attention from climate skeptic blogs like “Watts Up With That?”. What’s your reaction to all that?

        I haven’t followed these blogs too closely, but I skimmed the comments on a few that were pointed out to me. The responses I saw were fairly predictable, veering from uncritical acceptance of our findings, to uncritical dismissal of any study that involves computer models or proxy data. But some comments did seem to find an appropriate middle ground of, well, skepticism.”

        It is notable that not much was brought up about how people like Joe Romm reacted.

        There is a maturity emerging in climate science that may be able to undo some of the damage done by the fear mongers. It will be interesting to see.

      • What did I misrepresent, hunter?

        Do you doubt a willful manipulation of data with the intent to deceive? Do you think it was just a coincidence that WCR reproduced a graph that eliminated the data that undermined their point? Do you accept a climate scientist reproducing an altered graph without referencing the alterations?

        Do you really think that WCR and Michaels are so incompetent that they didn’t realize the implications of their data manipulation, and thus weren’t acting out of a tribal agenda?

      • Joshua,
        Actually it is not clear at all, and since it is from two minor blogs, I frankly see it as a non-event.
        Yes, I think it is entirely possible that they made what they thought were inconsequential changes. The graphs are not really different in terms of presenting global climate data from the original.
        You, however, by your selective quoting from the interview, do in fact seem to seek to deceive people.
        So now we all await your predictable whine.

      • Yes, I think it is entirely possible that they made what they thought were inconsequential changes.

        If they thought the changes were inconsequential, then why did they deliberately eliminate certain data when reproducing the graph?

        It wasn’t accidental. It was deliberate. What was their deliberate intent, if not to eliminate certain data – the data that undermined their point?

        And further, it’s fair to assume that they know perfectly well that reproducing a graph with certain data eliminated is something which should be referenced.

        By what criterion do you determine what are “minor” and “major” blogs? Does an influential climate scientist, often quoted and referenced by “skeptics,” get a pass if he manipulates a graph that he posts on his blog because his blog gets fewer hits than another blog?

        Outstanding, hunter. Outstanding.

      • Joshua, I answered you in full below.
        As usual, you are practicing your deception.

      • And hunter, it wasn’t just WCR that manipulated data deliberately to deceive. Other “skeptics” did it also – notably the prominent “skeptic,” Pat Michaels.

        tsk. tsk. Guess they aren’t really “skeptics” after all; just tribalists giving “skepticism” a bad name.

        ’Tis a shame.

      • Joshua,
        You do smarmy jerk really well, but then it is your one trick.

      • steven,
        The WCR graph states it was derived from the Schmittner paper.
        I overlooked Briffa’s disclosure.
        Joshua, of all people, is not going to get to try and set the parameters on fair and reasonable.
        If one actually reads what the WCR post says, it is clear why they did what they did and the explanation does not involve hiding anything or bad faith.
        Briffa required bad faith. Mann required bad faith.
        Climategate shows bad faith.
        WCR merely simplified a graph to show its global component, but it is all Joshua has, along with his bleating about Urban’s clear misread of WCR.

      • Joshua,
        Further re-reading of the World Climate post shows it is actually not misleading at all: All they did was to isolate the land & sea combined graph of the Schmittner paper from the land and sea in isolation to show total climate- they did not change the paper’s work at all.
        Nathan did not like it, and is obviously touchy about people using his graph in ways he did not, but it is his graph.
        I wonder if he actually read the blog post.
        I am not familiar with World Climate- this is the first visit there for me. But it is not surprising how quickly you
        1- managed to parse the Nathan Urban interview to leave out inconvenient parts
        2- misrepresented the WC blog post.
        Keep up the predictable work, Joshua.

      • Dude –

        it is actually not misleading at all: All they did was to isolate the land & sea combined graph of the Schmittner paper from the land and sea in isolation to show total climate- they did not change the paper’s work at all.

        in contrast to

        They deleted the data and legends for the land and ocean estimates of climate sensitivity, and presented only our combined land+ocean curve:

        Outstanding.

      • Joshua,
        Read it for yourself.
        If you can.

      • Hunter, they clearly deleted those portions of the chart which tend to give a confusing message and beg for explanation.
        Same thing Briffa did.

        You do your position no good if you excuse these practices.

      • hunter –

        1. A skeptic suspends judgment until reasonable doubts are settled properly.

        2. WCR’s erasure of uncertainty gave unjustified confidence to claims of lowered sensitivity.

        See? 1. does not equal 2. Either mistake or not ‘true’ skeptical scientist.

        bi2hs

  25. Willis Eschenbach

    I read the paper. It says inter alia:

    Initial estimates of ECS2xC = 3 ± 1.5 K suggested a large uncertainty (2), which has not been reduced in the last 32 years despite considerable efforts (1-10).

    Folks, this is the most important statement in the document.

    When hundreds and hundreds and hundreds of humanoids in universities and research centers all around the globe spend three decades and hundreds of thousands of hours and literally billions of dollars trying to get a better estimate of a value and make no advance at all in 32 years, I can only draw one conclusion.

    The theory is wrong somewhere.

    Seriously. Can you offer an alternate explanation of the failure of so huge an effort of so many dedicated, educated, even gifted people? Can you name any other observational variable which shows that total lack of progress despite such an enormous effort?

    I have pointed out in “The Cold Equations” exactly where the theory is wrong, and where the mathematics goes off the rails. The underlying problem is that climate sensitivity is not a constant. Climate sensitivity is a non-linear function of instantaneous temperature. Which means arguments about fat and thin tails miss the point completely.

    Any flow system far from equilibrium such as the climate is constantly changing and evolving. The system responds to changing instantaneous temperatures by changing the climate sensitivity.

    The result is that averages for the climate sensitivity in this kind of naturally governed system don’t mean a whole lot, and fat or thin tails mean even less.

    w.

    • Non sequitur. The fact that they can’t make any progress doesn’t mean the theory is wrong. It just means they can’t get their hands around it. It’s one of those “just around the corner” things like sustained fusion or the TOE.

      Allah didn’t say that all theories have guaranteed methods of solution.

      • P.E. That is so true. Perhaps more Buddhists are required :)

      • Further, it represents a limited view to say that a lack of reduction of uncertainty = a lack of progress. For example, a similar degree of uncertainty could mean understanding the problem better by way of eliminating some variables from consideration while discovering other new potentially important (but not fully understood) variables. There may be other ways that a lack of reduction in uncertainty doesn’t necessarily imply a lack of increased understanding of the problem being studied (and hence, a lack of progress). As you imply, the years of study may ultimately translate into progress toward a solution.

        It reminds me of when students think that they will just sit down to write a paper and their thesis and proof will spontaneously appear (and if they don’t just appear, the students think they’ve failed). They don’t realize that all the work they do thinking about the problem and constructing their argument are a necessary component of the overall process.

        Concluding a “lack of progress” as Willis does likely reflects a tribal agenda.

      • P.E.
        I would suggest that your examples actually make Willis’ point rather well: if the theory were correct, we would, after 50 years, not be concerned with CO2, because controlled fusion would have made electricity so cheap as to require only an annual meter fee and line-maintenance cost.
        Fusion, sort of like the climate crisis, has been a few years away for decades.

      • That doesn’t mean it’s wrong, it means we don’t know. You know, that uncertainty critter Dr. C. keeps yammering about? Shows up in a lot of places.

      • P.E.
        The theory is that if we do certain things we will reach controlled fusion.
        We have not done it, despite decades and billions. That means the theory, at some level, is wrong.
        The theory on climate as presented in 1988 was that CO2 was going to cause certain things to happen. They have not. The theory, at some level, is wrong.
        Uncertainty critter is just another name for being wrong.
        It is also nicer to delicate academia.

      • That wasn’t what I was saying in the example. What I was saying is that fusion has been one of those things that can’t be ruled in or out for probably 60 years. And it’s probably going to stay that way for quite some time.

        These things are like the twilight zone of technology.

      • Willis Eschenbach

        P.E. | November 29, 2011 at 10:53 pm | Reply

        That doesn’t mean it’s wrong, it means we don’t know. You know, that uncertainty critter Dr. C. keeps yammering about? Shows up in a lot of places.

        If your claim is that we have the right theory … why haven’t we made more progress? What is it you are saying that “we don’t know”?

        I mean, I know we don’t know the answer. But other than that, you seem quite clear that there is something that we don’t know … what is it?

        w.

      • Willis Eschenbach

        Sustained fusion is trying to contain a little bit of the sun in a physical structure.

        You are claiming that somehow, that is equivalent to measuring a climate variable to better than ± 50%???

        Non sequitur. We’re not trying to build something incredibly physically difficult in the real world. We’re just trying to measure something, and we have failed miserably.

        w.

      • Agreed, P.E. Huge non sequitur.

        http://en.wikipedia.org/wiki/Ratio_distribution

        tells you all you need to know. I had one of the devils once.

        http://en.wikipedia.org/wiki/Loss_Exchange_Ratio

        haha, guess what happens when none of your people die.
        That didn’t mean the science of warfare was wrong. It just meant a key metric was very tough to nail down, especially since the numerator and denominator are negatively correlated.
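        The fat-tail point behind the ratio-distribution link can be seen in a few lines. Below is a minimal sketch (my own illustration, not anything from the post): the ratio of two independent standard normals is Cauchy-distributed, so its tails carry vastly more mass than those of the normals it was built from.

        ```python
        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Ratio of two independent standard normals is Cauchy-distributed:
        # no finite mean or variance, i.e. an extreme fat tail.
        num = rng.normal(0.0, 1.0, n)
        den = rng.normal(0.0, 1.0, n)
        ratio = num / den

        # Compare tail mass beyond 10 for a plain normal vs the ratio.
        tail_normal = np.mean(np.abs(num) > 10)   # essentially zero
        tail_ratio = np.mean(np.abs(ratio) > 10)  # several percent

        print(f"P(|N| > 10)     = {tail_normal:.5f}")
        print(f"P(|N1/N2| > 10) = {tail_ratio:.5f}")
        ```

        The analytic Cauchy tail, 2/π · arctan(1/10) ≈ 0.063, is what the sampled `tail_ratio` approaches; a Gaussian puts effectively zero mass out there.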

    • Willis –

      If only those hundreds and hundreds and hundreds of humanoids in universities and research centers all around the globe, spending three decades and hundreds of thousands of hours and literally billions of dollars to examine this problem had your insight and freedom from bias, we would be so far in advance of where we are.

      Hopefully, your insight about how they all wasted time, expertise, and money will be your accomplishment, your contribution to society.

      • Joshua,
        You would have made a fine inquisitor.

      • Willis Eschenbach

        How would we be far in advance of where we are? Sometimes you have to go down the wrong road for a while before you can realize you are on the wrong road. How would my “freedom from bias” have changed that in the slightest?

        I couldn’t put us “far in advance of where we are”, Joshua. We had to try for a long time before I became convinced that there must be theoretical problems. You can’t conclude that on day one.

        w.

    • I got to the first equation and it was wrong, and easily identified as being dimensionally incorrect:
      dH/dt = Q – E
      Unless E is meant as power doing work and Q similarly as a heat flow, but that is not standard notation.

      I think the main issue with predictions on this scale and what makes it challenging is that we are looking at a few degrees on the baseline of 300 degrees K. This is the standard way of looking at statistical physics in that the current temperature is the thermal background level and we are trying to gauge small perturbations about this point in what amounts to a metastable system.

      Then we have the seasonal and spatial variations and all the problems of deducing changes from the differences of large numbers.

      On top of that, I don’t see much dynamic range in any of the changing observables either, except for a select few. And you can probably guess which are the observables I am referring to.

    • When hundreds and hundreds and hundreds of humanoids in universities and research centers all around the globe spend three decades and hundreds of thousands of hours and literally billions of dollars trying to get a better estimate of a value and make no advance at all in 32 years, I can only draw one conclusion. The theory is wrong somewhere.

      Tch, tch. So much talent, Willis, so little imagination.

      Talent without imagination is like a match without oxygen.

      With a little more imagination, another conclusion might have occurred to you.

      The quantity whose value is in question is undefined.

      If you disagree, please state what you believe to be:

      THE DEFINITION OF CLIMATE SENSITIVITY

      If you have such a thing, we would all be much the wiser if you would kindly share it with us.

      If not, then what the h*** are you talking about? (Your favorite consonant in place of h there.)

      PS. By “humanoid” did you mean to exclude yourself?

      PPS. Regarding your “billions and billions of dollars,” unless you know otherwise there are fewer than 50 estimates of climate sensitivity in the literature. If these cost even just one single billion dollars in total, then that would be more than $20 million apiece. Can you give even one single furshlugginer example of an estimate of climate sensitivity that cost $20 million to make?

      PPPS. I only write this sort of c**p when someone really p***** me off. C*piche?

      PPPPS. You are such an *******.

      • Willis Eschenbach

        Vaughan Pratt | November 30, 2011 at 12:46 am


        PS. By “humanoid” did you mean to exclude yourself?

        No, I’m a humanoid like the rest of us. None of us are truly human yet …


        PPPS. I only write this sort of c**p when someone really p***** me off. C*piche?

        PPPPS. You are such an *******.

        Goodness, my friend, you should cut down on the coffee. I don’t recall insulting you, if you think so please quote what I said that has you so upset.

        w.

      • You were insulting our intelligence with your logic, which leaves a lot to be desired. Billions and billions of dollars being spent on determining climate sensitivity? At least you weren’t claiming AGW proponents are costing the economy 47 trillion dollars, as some people have been doing.

      • Billions and billions have been spent on alleged climate research, the goal of which is to prove that the climate is dangerously sensitive to a doubling of CO2. Catch up.

      • Rockets could not care less about the atmosphere. No need to study it.

    • It’s simple: they treated complex thermodynamic systems as if they were simple static models. If the models are wrong, the proof is in the pudding.

    • Willis: When hundreds and hundreds and hundreds of humanoids in universities and research centers all around the globe spend three decades and hundreds of thousands of hours and literally billions of dollars trying to get a better estimate of a value and make no advance at all in 32 years, I can only draw one conclusion.

      Why the insulting word “humanoid”? It’s a senseless insult.

      Why only 1 conclusion? How about “All the relevant quantities are hard to measure”?

      • Willis Eschenbach

        MattStat | November 30, 2011 at 7:20 pm | Reply

        Willis:

        When hundreds and hundreds and hundreds of humanoids in universities and research centers all around the globe spend three decades and hundreds of thousands of hours and literally billions of dollars trying to get a better estimate of a value and make no advance at all in 32 years, I can only draw one conclusion.

        Why the insulting word “humanoid”? It’s a senseless insult.

        My bad, I didn’t realize it was a hot button word for you. I use it for myself or anyone.

        Why only 1 conclusion? How about “All the relevant quantities are hard to measure”?

        “Hard to measure”? Thirty years and hundreds of thousands of hours of human and computer time and millions and millions of dollars spent trying, without being able to decrease the uncertainty, is a whole lot more than “hard to measure” on my planet.

        When there is a solid theoretical understanding, increased accuracy of e.g. satellite measurements for hosts of variables and the increased speed and capability of computers and the increasing numbers of ground based measurements and the development of sophisticated computer models should have at least reduced the uncertainty.

        But that hasn’t happened. An estimate made before the satellite measurements, made before the internet, made before the computer revolution, made before huge complex climate models, is unimproved for thirty years.

        Perhaps “it’s tough” suffices for you in that situation. For me, the absolute lack of progress is a clear indication of theoretical problems.

        w.

  26. I am not sure I agree with this potential well idea. The way I see it, the cold side is shallower because of the ice-albedo feedback. This is why small forcing changes produced bigger temperature swings in the ice ages, while now with less polar ice, this feedback is weaker, so the barrier is higher and it takes more forcing change to get a given temperature change. The shape of the well changes with the equilibrium temperature according to the positive feedback strength, and the center of the well is determined by forcing (atmospheric composition, solar input, surface albedo distribution).

    • There is also some debate about whether ice albedo feedback should be included in 2xCO2 sensitivity. I think (but am not sure) in this paper it was not, but others like Held and Soden (2000) include this feedback as one of their factors. I think it is also relevant for future climate that the melting of summer sea-ice affects the albedo as a positive feedback, so sensitivity should include it too.

      • Jim – I believe sea ice was included, but land ice sheets were fixed boundary values used to define the forcing due to the albedo difference between modern times and the LGM. They were thus defined in the model runs as an unchanging feature of the LGM.

      • Fred, OK, so I think by including it in the forcing it is not in the sensitivity, when in reality at least some of the ice loss would be expected to be due to the temperature rise. So this might mean that their sensitivity is not the full amount which would be given if ice-albedo feedback had been a predicted component of their model.

        I think the principle is that the LGM can be considered a quasi-equilibrium state, so that it can be used to test the role of albedo and ghg differences between modern times and the LGM for their contribution to the temperature difference. The flux imbalances resulting from those differences can be calculated and used as the forcing terms in the model simulation (Schmittner also included a dust forcing term). The assumption is that changes in land ice sheet extent were so slow during the LGM that they could be neglected, and an average figure for albedo used. Of course, ice sheets changed extensively going into and out of that ice age, but that occurred over millennia and allowed the models to treat ice sheet extent as fixed when simply comparing LGM temperatures with “modern” (preindustrial) temperatures.

        Note that if one wants to look at very long term climate sensitivity, as Hansen has done, then the ice sheet changes can be considered feedbacks on orbital forcing. Hansen derived long term climate sensitivity values of about 6 C for this, but if we’re considering the feedbacks that operate only on century scales or less, the ice sheets can be considered boundary values with no feedback effects.
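        The quasi-equilibrium accounting described above can be put in round numbers. Everything below is illustrative: the temperature and forcing figures are placeholders chosen for the sketch, not the paper’s reconstructed values; the point is only the arithmetic of dividing a temperature difference by a forcing difference and scaling to a CO2 doubling.

        ```python
        # Back-of-envelope sketch of an LGM-based sensitivity estimate:
        # treat the LGM as a quasi-equilibrium state, divide the temperature
        # difference by the total forcing difference (greenhouse gases +
        # ice-sheet albedo + dust), then scale to a CO2 doubling.
        # All numbers are illustrative round figures, not the paper's values.

        F_2XCO2 = 3.7        # W/m^2 per CO2 doubling

        dT_lgm  = -3.0       # K, global-mean LGM cooling (illustrative)
        dF_ghg  = -2.8       # W/m^2, lower greenhouse gases (illustrative)
        dF_ice  = -1.5       # W/m^2, ice-sheet albedo boundary forcing (illustrative)
        dF_dust = -0.5       # W/m^2, dust (illustrative)

        dF_total = dF_ghg + dF_ice + dF_dust
        lam = dT_lgm / dF_total          # K per (W/m^2)
        ecs = lam * F_2XCO2              # K per CO2 doubling

        print(f"lambda = {lam:.2f} K/(W m^-2), implied ECS = {ecs:.1f} K")
        ```

        With these placeholder inputs the implied sensitivity comes out near the paper’s median; the real estimate of course comes from a full model-data comparison, not this division.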

      • Fred, I generally agree, but would expect the reduction in Arctic sea ice to even affect the sensitivity in this century. Yes, this is not like retreating glaciers which do affect long-term sensitivity (e.g. when we lose Greenland’s then Antarctica’s glaciers which would take centuries to complete but can be committed with CO2 levels we will reach soon enough).

    • The complications don’t stop with ice albedo. The melting ice increases the soil moisture in the run-off area. This reduces dust, increases soil moisture and surface water, and changes vegetation. Once the glaciers have retreated far enough all these factors are likely to change yet again. So the likely response is high albedo, to low albedo, to an albedo somewhere between the two in the area irrigated by glacial melt.

      • Albedo changes are primarily a result of cloudiness changes which are in turn a result of the meridionality/zonality of the jet streams. Jets waving about latitudinally in larger loops creates more cloud overall by increasing the length of the lines of air mass mixing between different air mass types.
        I think that is a better idea than the Svensmark cosmic ray suggestion.

        The ice/snow cover effect is relatively minor and follows, rather than leads, the much more rapid cloudiness changes.

        Such meridionality/zonality is driven by solar variability affecting the surface characteristics of the polar vortices, changing the ozone quantities differentially at different levels of the atmosphere so as to push polar air further equatorward across the mid latitudes.

        That reduces solar input to the oceans for overall system cooling.

        See here:

        http://climaterealists.com/index.php?id=6645

        “How The Sun Could Control Earth’s Temperature”

        especially comment by Joanna Haigh:

        “our findings raise the possibility that the effects of solar variability on temperature throughout the atmosphere may be contrary to current expectations.”

        from here:

        http://www.nature.com/nature/journal/v467/n7316/full/nature09426.html

        The stratosphere stopped cooling in the late 90s following the peak of cycle 23. If it now starts to warm with a quiet sun then my hypothesis will be proven.

        Some say it has already begun to warm:

        http://www.jstage.jst.go.jp/article/sola/5/0/53/_pdf

        “The evidence for the cooling trend in the stratosphere may need to be revisited.
        This study presents evidence that the stratosphere has been slightly warming
        since 1996.”

      • I can’t find a mismatch between hypothesis and observations, although the time period seems to be short, both in the observations and in my reading. Have you found any, and if so, how do you explain them?

    • I am not sure I agree with this potential well idea. The way I see it, the cold side is shallower because of the ice-albedo feedback.

      In the context of trying to apply a bounded random walk such as with the Ornstein-Uhlenbeck process, the well is a useful abstraction. We are actually entering the realm of stochastic differential equations with this formulation, which means any of the parameters and coefficients can change and have their own probability distributions.

      I placed a steep barrier on the cold side as it demonstrates that we won’t ever get to a snow-ball earth. Apart from that, I agree that modifications to the well are important to consider.
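      For readers unfamiliar with the abstraction, here is a minimal sketch of the kind of bounded random walk being described: an Euler-Maruyama integration of an Ornstein-Uhlenbeck-style process in an asymmetric well, with a (purely illustrative) stiffer restoring force on the cold side. The parameter values are my assumptions, not anything fitted to climate data.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical asymmetric potential well: a steep wall on the cold side
      # (negative anomaly), a gentler restoring force on the warm side.
      def drift(x):
          k = 4.0 if x < 0 else 1.0   # stiffness of the restoring force
          return -k * x

      dt, sigma, steps = 0.01, 1.0, 200_000
      noise = sigma * np.sqrt(dt) * rng.normal(size=steps)

      x = 0.0
      xs = np.empty(steps)
      for i in range(steps):
          # Euler-Maruyama step of the stochastic differential equation
          x += drift(x) * dt + noise[i]
          xs[i] = x

      # The stationary distribution is skewed: excursions to the warm
      # (positive) side are larger than to the cold side, and the
      # time-average sits above zero.
      print(f"mean = {xs.mean():.3f}, min = {xs.min():.2f}, max = {xs.max():.2f}")
      ```

      The steep cold-side wall is what suppresses excursions toward a snowball state while still permitting a long warm tail, which is the qualitative shape under discussion.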

  27. Note that climate sensitivity can cut two ways.

    I think climate is highly sensitive to any forcing effect that tries to change the temperature differential between sea surface and surface air temperatures.

    The climate system immediately applies a negative reaction by altering the surface distribution of the permanent climate zones to try to recover the ‘natural’ equilibrium.

    That is then what we perceive as climate change.

    It is that high negative sensitivity to potentially disruptive forcing effects that makes the climate system insensitive enough for the oceans to have remained liquid and approximately the same temperature for some 4 billion years.

    That ‘natural’ equilibrium is a consequence of the interaction between surface atmospheric pressure and the physical properties of the bonds between water molecules with their capacity for phase changes (solid to liquid to gas and back again).

    The air temperature then follows sea surface temperatures rather than atmospheric GHG content. All that the latter achieves is to alter the speed of the water cycle from surface to space by shifting the surface pressure distribution a miniscule amount as compared to that resulting from natural solar and oceanic variability.

    • It is that high negative sensitivity to potentially disruptive forcing effects that makes the climate system insensitive enough for the oceans to have remained liquid and approximately the same temperature for some 4 billion years.

      What’s your range for “approximately the same?” Plus or minus 0.2 degrees? 2 degrees? 20 degrees?

      Increasing by 20 degrees would keep the oceans liquid. It might not keep your bank account liquid though. (This gratuitous political statement paid for through the nose, which as a scientist I know I should work harder on keeping clean, sorry Judy.)

      • Lets repose the problem ie The faint sun paradox.

      • Shifting climate zones would resolve the faint sun paradox.

        As the sun became stronger, the zones moved poleward, with increasing width of the tropical air masses.

        Thus did the faster/larger water cycle keep ocean temperatures stable despite higher solar input.

      • It’s far more complex than it seems, e.g.

        http://i255.photobucket.com/albums/hh133/mataraka/faintearthparadox.jpg

      • Based on paleoclimatic data it would appear that the variation for the oceans is less than 5C.

        “When we first looked at the paleoclimatic data, I was struck by the small cooling of the ocean,” Schmittner said. “On average, the ocean was only about two degrees (Celsius) cooler than it is today, yet the planet was completely different – huge ice sheets over North America and northern Europe, more sea ice and snow, different vegetation, lower sea levels and more dust in the air”

      • Based on paleoclimatic data it would appear that the variation for the oceans is less than 5C.

        Each of the past several glaciations took at least 90,000 years to decrease the surface temperature by ten degrees. Is your null hypothesis that when the surface temperature declines by 10 °C over 90ky, the ocean temperature declines by only 5 °C?

        If so then this would be an interesting null hypothesis to work on. Your response eagerly awaited. (Personally I don’t give a t**d about the politics of all this, only the numbers interest me.)

      • Vaughan, “Is your null hypothesis that when the surface temperature declines by 10 °C over 90ky, the ocean temperature declines by only 5 °C?”

        5C is probably too high an estimate of decline in ocean temperature. Tropical solar radiance probably reduced less than 1Wm-2 at the surface on average for the 90K years and change in tilt would increase the solar ocean/land absorption ratio due to land mass distribution. Southern hemisphere temperature change was about 1/2 to 1/4 that of the northern hemisphere. Major ice sheets extended to what, latitude 40 to 45 north? That is only a 5 to 10% increase in albedo which could actually be less because of reduced NH incident TSI due to tilt change and cloud cover changes.

        Snowball Earth makes a neat visual, but is pretty unrealistic.

  28. Willis Eschenbach

    WebHubTelescope | November 29, 2011 at 10:49 pm | Reply

    I got to the first equation and it was wrong, and easily identified as being dimensionally incorrect:
    dH/dt = Q – E
    Unless E is meant as power doing work and Q similarly as a heat flow, but that is not standard notation.

    So you gave up because the notation wasn’t standard. That’s impressive.

    w.

    • So you gave up because the notation wasn’t standard. That’s impressive.

      I guess I was too curt and it sounded dismissive. I assumed you would clarify since you are right there. I can follow derivations but I usually like to derive it on a piece of paper at the same time, and if my own interpretation is off, I will end up going in the wrong direction.

      • Q is the incoming solar energy, E is the outgoing thermal energy, and Q-E is the energy imbalance at the top of the atmosphere. Think “E” for emission.

        The notation Willis is using is borrowing existing variable names for these two basic quantities used in HEAT CAPACITY, TIME CONSTANT, AND SENSITIVITY OF EARTH’S CLIMATE SYSTEM, by Stephen E. Schwartz, published in JGR, June 2007. This is given near the top of his post on “Cold Equations”.
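        To make the bookkeeping concrete, here is a minimal one-box sketch of dH/dt = Q − E in the sense just described, with Q the absorbed solar power and E the emitted thermal power, both in W/m². The emissivity and heat-capacity values are illustrative stand-ins of my own, not Schwartz’s fitted parameters.

        ```python
        # Minimal one-box energy-balance sketch of dH/dt = Q - E.
        # Parameter values are illustrative, not from the paper or Schwartz.
        SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
        S0 = 1361.0        # solar constant, W/m^2
        ALBEDO = 0.30
        EPS = 0.61         # effective emissivity standing in for the greenhouse effect
        C = 2.0e8          # effective heat capacity, J m^-2 K^-1 (~50 m mixed layer)

        Q = (1 - ALBEDO) * S0 / 4      # ~238 W/m^2 absorbed solar power

        def step(T, dt=86400.0):
            """One explicit Euler day of C dT/dt = Q - eps*sigma*T^4."""
            E = EPS * SIGMA * T**4     # emitted thermal power at temperature T
            return T + dt * (Q - E) / C

        T = 255.0                      # start well below equilibrium
        for _ in range(365 * 50):      # integrate 50 years
            T = step(T)

        # Equilibrium where Q = E:  T_eq = (Q / (eps*sigma))^(1/4)
        T_eq = (Q / (EPS * SIGMA)) ** 0.25
        print(f"T after 50 yr = {T:.2f} K, analytic T_eq = {T_eq:.2f} K")
        ```

        Both Q and E here are powers per unit area, while H is an energy per unit area, so dH/dt = Q − E is dimensionally consistent, which was the point of contention above.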

      • Congratulations on shedding light on dark corners. Several billion photons there.

        Any chance of a translation into English?

      • Er, translate what? WebHubTelescope asked for clarification of one equation, so I gave it. I’m not out to translate anything more than this.

      • Q is the incoming solar energy, E is the outgoing thermal energy, and Q-E is the energy imbalance at the top of the atmosphere. Think “E” for emission.

        I still think those should be stated in terms of power and not energy, since power has units of energy per time, and the LHS of the equation is dH/dt, which looks like a change of enthalpy with time.

        I guess the other bit of confusion is that this “Cold Equation” looks a lot like a differential 1st law of thermodynamic heat equation
        dH = dQ – dW
        with all the baggage that carries.

        So it looks like the “Cold Equation” is intentionally crafted to look the way it does. I guess it was meant to confuse Vaughan, myself, and others who have been thinking with a different vocabulary.

      • I still think those should be stated in terms of power and not energy

        They are. Q and E are power terms.

        It was my error to use the word energy; sorry. Although it is quite normal for an “energy balance relation” to be actually describing quantities stated in power. (Eg, Trenberth et al)

      • There is a lot of baggage involved. The first and second laws will help prevent energy creation rabbit holes, though pondering perpetual motion can be fun with the right recreational medication.

        Without the laws of thermo how could you account for a 40C temperature inversion in the antarctic? Arrhenius’ equation is supposedly based on the surface response. When the air is warmer than the surface CO2 returns that energy to the warmer layer more efficiently, unless there are Harry Potter photons. Does C-C and Arrhenius cover that?

      • http://redneckphysics.blogspot.com/2011/11/surface-tropopause-connection.html

        The three disc model I used just estimates the average temperature of the radiant layer. The 246.9 with 3.7Wm-2 agrees with Arrhenius’ 246K estimate. You can trick it up with the actual area of the CO2 spectrum instead of the S-B envelope I used, but what happens when the source disc is -79C and the sink disc is -60C? What happens when evaporation causes a cloud to form and the cloud top temperature is source at -30C and the sink disc is still -60C?

        http://redneckphysics.blogspot.com/2011/10/hows-that-choice-of-temperature.html
        My estimates are just estimates, but if you are using thermo equations you should remember the basics of thermo: frame of reference, don’t assume, and KISS.

    • So you gave up because the notation wasn’t standard. That’s impressive.

      So you equate “dimensionally incorrect” with “nonstandard.” That’s, uh, unimpressive.

      • They were dimensionally correct. I guess the confusion is that the variable “E” is usually used for an energy term, and “Q” for a power term. But in this case, “E” was also a power term, being the thermal emission from top of the atmosphere.

        You can’t just assume units from the names of single letter variables; there isn’t one consistent standard, and Q and E in this case were simply following the lead from a paper published in JGR by Schwartz. The immediate context did define the variables and their units.

      • “E” is usually used for an energy term, and “Q” for a power term.

        Must be a generation gap thing. I’m from an earlier millennium where Q was an energy term, synonymous with and generally preferred to E. Pointer please to a 21st century textbook that uses Q as a power term.

        You can’t just assume units from the names of single letter variables;

        That’s only true for letters with a low Scrabble score.

      • Pointer please to a 21st century textbook that uses Q as a power term.

        Physics 1 for Dummies. (Seriously. The title leaped out at me when I did a quick search, and I couldn’t resist. But there are plenty more.)

        All you need to find Q as a power term is to focus on books where power is being treated prominently, such as treatments of an equilibrium process. It’s not universal of course; there is no one standard; but it is very common in, for example, statements of the Stefan-Boltzmann relation: Q = e.σ.T^4

        Look guys, this isn’t important. There’s a lot that’s wrong with Willis’s post on the cold equations — especially (as Vaughan has noted I think?) with the way he is implicitly defining and estimating climate sensitivity. A critique might be useful, perhaps, and WebHubTelescope would be a good person to do it too, if he felt it worth the time. It’s not really worth the time for me at present; and I would understand if others gave it a low priority as well. But distractions about variable usage are not legitimate criticisms. Especially when Willis DOES explain the variables, and is simply following the conventions already used in a perfectly good journal paper by a legitimate and useful researcher.

        All I was doing was explaining the variable usage when it was asked! Having to go into all this meta-discussion is a waste of time. Can we please drop snark about variable naming and either deal with his claims on their real content — or just ignore them as not particularly important?

      • PS. I goofed: I’m the dummy. Physics 1 for Dummies uses Q = e.σ.A.t.T^4, so Q is energy and time t is explicit.

        But Thermodynamics of solar energy conversion does use the convention I am more used to; and even then Q is actually power per unit area. W/m^2 is what I use most often; also used by Willis, and in his reference Schwartz.

        Advances in finite time thermodynamics: analysis and optimization does use Q for power, straight up.

        Moral. You (I) have to actually read the context and not jump to conclusions based on a variable name you’ve seen before in a different context. I’ll stop now….
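
        For concreteness, here is the Stefan-Boltzmann relation in the “power per unit area” convention being discussed (a quick sketch; the temperature and emissivity values are illustrative only):

```python
# Stefan-Boltzmann relation in the power-per-unit-area convention:
#     Q = e * sigma * T^4   (W/m^2)
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def q_flux(T, emissivity=1.0):
    """Radiated power per unit area (W/m^2) for temperature T in kelvin."""
    return emissivity * SIGMA * T**4

# For a blackbody near Earth's mean surface temperature of ~288 K:
print(q_flux(288.0))  # roughly 390 W/m^2
```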

      • Willis Eschenbach

        Thanks, Chris. You mention that someone should take a look at it. I would welcome that, it is why I post things.

        The common statement of the relationship between forcing and temperature is

        Change in surface air temperature = climate sensitivity times change in forcing.

        This is an equation which is supposed to be derived from basic principles. It contains two simplifications which I think are incorrect.

        The larger problem is the assumption that ocean storage is a function of surface air temperature. Variations in ocean heat gain are mostly a function of insolation. Variations in ocean heat loss are mostly a function of evaporation, which in turn is mostly a function of wind. Neither wind nor insolation is much affected, other than indirectly, by surface air temperature. (Wind is mostly a function of spatial temperature differences, but that’s a different beast than temperature.)

        As a result, the substitution that they make is not supported by either theory or observations. Their substitution is

             dH/dt = C dTs/dt

        which translates as

             Change in ocean heat storage = a constant C times change in surface air temperature

        But that relationship is simply not true.

        w.

      • Change in surface air temperature = climate sensitivity times change in forcing.

        Not quite. It is change in temperature given enough time to come to equilibrium with the new forcing.

        It is therefore not given simply by observing the change following a step change in forcing; since it takes significant time to come to equilibrium with the new forcing.

        This is the major problem I have with your discussion; you seem to miss entirely this aspect of “equilibrium response”.

      • It is therefore not given simply by observing the change following a step change in forcing; since it takes significant time to come to equilibrium with the new forcing.

        This is the major problem I have with your discussion; you seem to miss entirely this aspect of “equilibrium response”.

        I am just trying to play along here, but one could get a transient response if the equation was modified into a law of cooling if Ts represents a difference of temperature from a reservoir it is in contact with:
        dH/dt = C dTs/dt + k Ts

        Then you could start a compartment model if you treat the temperatures of the volume and the reservoir it is attached to as coupled equations. You would see this kind of formulation as a first-order solution to a heat sink thermal management problem, and how fast a heat sink can dissipate heat. The term k would relate to diffusivity/thermal conductivity and you might also have a convection term.
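
        Playing along a bit further: with a constant forcing F on the left-hand side, the formulation above, dH/dt = C dTs/dt + k Ts, becomes C dTs/dt = F - k Ts, which relaxes exponentially toward Ts = F/k with time constant C/k. A minimal numerical sketch (all parameter values are illustrative, not taken from any paper):

```python
# Forward-Euler integration of C dTs/dt = F - k*Ts, the transient
# response of the single-compartment "law of cooling" model above.
# F, C, k, dt are illustrative values only.

def step_response(F, C, k, dt, n_steps):
    """Integrate C dTs/dt = F - k*Ts from Ts = 0; returns the trajectory."""
    Ts, out = 0.0, []
    for _ in range(n_steps):
        Ts += dt * (F - k * Ts) / C
        out.append(Ts)
    return out

temps = step_response(F=3.7, C=8.0, k=1.85, dt=0.1, n_steps=2000)
# Ts rises from 0 and levels off at the equilibrium value F/k = 2.0,
# with e-folding time C/k.
```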

      • A critique might be useful, perhaps, and WebHubTelescope would be a good person to do it too, if he felt it worth the time.

        Also I think Captain Dallas is a go-to guy. If I am not mistaken, I believe he is a MechE with a lot of experience on the HVAC side of things. Heat capacities and thermal conductivities are second nature to those guys.

        I gave a short outline of a path forward, but I am not sure if that is the same direction the “cold equations” discussion is headed.

      • Change in ocean heat storage = a constant C times change in surface air temperature… But that relationship is simply not true.

        For small changes in surface air temperature, the relationship is linear enough to be considered true. It’s also a fundamental characteristic of the geophysics of radiative forcing from greenhouse gases such as CO2, and can be calculated from observational data and the radiative transfer equations. Obviously, air temperature is not the only variable capable of affecting the magnitude of ocean heat storage – for example, a given air temperature change will result in more W/m^2 delivery into the ocean at high CO2 concentrations than at lower ones because of the greater atmospheric emissivity in IR frequencies. Also, forcings from other sources such as solar changes, and changes in internal ocean dynamics will affect ocean heat storage, but the air temperature/heat transfer relationship is an important one, and particularly relevant to long term (multidecadal) changes in ocean heat content, where the shorter term fluctuations tend to average out.

      • Web said, “I am just trying to play along here, but one could get a transient response if the equation was modified into a law of cooling if Ts represents a difference of temperature from a reservoir it is in contact with:
        dH/dt = C dTs/dt + k Ts”

        You can but it is tricky because rate of flow across a boundary can be three or four orders of magnitude different depending on direction. Sea water to air is 500-1000:1; air has a much lower thermal capacity. Also CO2 has a nonlinear conductivity impact, which while small, is not negligible especially in the Antarctic. http://redneckphysics.blogspot.com/2011/11/radiantion-versus-conductivity.html

        That would be negligible if climate sensitivity were over 1.5C in the region. The Antarctic temperatures, air and ocean, indicate it is not.

        Oh, in the new thread on synchronization, instead of energy well, the chaos term would be stability saddle. Thought you might like that. :)

  29. Judith,

    I like the way science is sticking to their guns and NOT including the different velocities. This means each latitude has its own different volume of atmosphere.
    There NEVER was an equilibrium on any planet due to the constant changes of slowing, evaporation to space, planetary positioning changes, etc.

  30. Willis Eschenbach

    Fred Moolten | November 30, 2011 at 11:46 am |

    Change in ocean heat storage = a constant C times change in surface air temperature… But that relationship is simply not true.

    For small changes in surface air temperature, the relationship is linear enough to be considered true. It’s also a fundamental characteristic of the geophysics of radiative forcing from greenhouse gases such as CO2, and can be calculated from observational data and the radiative transfer equations.

    What is this, science by emphatic claim? I gave both theoretical and observational reasons, along with a variety of citations, as to why it wasn’t true.

    You come back, wave your hands, and say it is too true, and that all the shorter term fluctuations tend to average out. Presumably you’ve done the averages to verify that the short term fluctuations average out, and are simply too humble to reference them …

    Citations? Observations? Theoretical discussion of the mathematics? You don’t bother with that kind of thing … in fact, so far nobody’s been willing to try to say where the math in “The Cold Equations” is wrong.

    But y’all are certainly convinced that I’m wrong. WebHubTelescope makes the very, very foolish claim that my units and my notation are wrong. Immediately, I’m jumped on by a bunch of folks who clearly didn’t bother to read any more of my paper than WHT did before making his risible statement.

    In fact, as I stated very clearly in my analysis, I was discussing a paper published in JGR, and for that reason I was using their units and their notation. So all of your high-falutin abuse of me for being such an idiot, all of your claims that my notation is not in common use, all your statements about what a tool I was, about how I couldn’t get my units straight?

    Turned out you were talking about yourselves and what tools and idiots you are to not do your homework.

    You were wrong about the units. You were wrong about the notation. Not just a little bit wrong. 100% wrong.

    All you did was prove yourselves both foolish and lazy. You demonstrated beyond doubt that you deserve every epithet you hurled at me. You didn’t read what I wrote, you stupidly just assumed I was wrong.

    And it’s curious, but I haven’t noticed anyone apologizing for sending me a nastygram claiming I was a stupid jerk before they went to check the facts. But heck, you guys are AGW supporters, you learned your manners from the Climategate conspirators, so no surprise there …

    w.

    • Hi Willis – This isn’t the place for a course in radiative transfer, but your conclusions were wrong for the reasons I indicated. Air surface temperature is a major determinant of ocean heat transfer on a global long term basis, although regional and short term effects are influenced by many other factors. I’ll let other readers judge what I stated and your response and leave it at that. However, you tell us that you offer your conclusions in hopes of understanding where you might be wrong, but I haven’t seen much evidence of that. In any case, what you eventually decide matters less to me than it should to you, and so you can consider what you want to do with the information at your leisure without a need to respond here.

  31. Willis Eschenbach

    Fred Moolten | November 30, 2011 at 2:22 pm | Reply

    Hi Willis – This isn’t the place for a course in radiative transfer, but your conclusions were wrong for the reasons I indicated.

    Well, gosh. My math is wrong, but you won’t tell me where because “this isn’t the place”.

    Fred, I hope you are aware of how pathetic that excuse is, particularly since you haven’t indicated a single reason, you just claimed you have indicated them. Oh, you waved your hands and said very impressive things like “For small changes in surface air temperature, the relationship is linear enough to be considered true.”

    But since you didn’t do anything but put your “FRED BELIEVES THIS” sticker on your claims, and you provided us with no citations, no explanations, no logic, and no math to back up what you believe … what are these mysterious “reasons you indicated” that make my conclusions wrong?

    w.

    PS—you say that “Air surface temperature is a major determinant of ocean heat transfer on a global long term basis, although regional and short term effects are influenced by many other factors.” Surely you wouldn’t make such a definitive statement without evidence to back up the idea that air temperature determines ocean temperature. Heck, even the guys that made that up call it an “Ansatz”, but no, Fred thinks it’s real …

    So where’s the citation, where’s the math for that one? I provided a host of information on this question, including showing that the distribution of surface temperature and ocean heat content changes are very different, to back up my ideas.

    You have provided nothing but your own loud mouth to back up your claims.

    • Willis – As I mentioned, whether or not you want to understand this should be of more interest to you than to me. It’s all pretty basic stuff, but if you want to start with the fundamentals, I recommend chapters 4 and 6 in Raymond Pierrehumbert’s “Principles of Planetary Climate”, as well as several of the posts at Isaac Held’s blog.

      There are poorly understood aspects to climate science, but this isn’t one of them.

      • Willis Eschenbach

        In other words, Fred, you are telling us you can’t point to what I’ve done wrong, so you are reduced to waving your hands and pointing at an entire chapter of some book …

        Fail.

        w.

      • Held’s blog posts 4 and 16 are a good start, including the reference to Gregory and Forster 2008. See also the Padilla et al paper in the thread on Probabilistic Estimates of Transient Climate Sensitivity in this blog, and the references therein. I’ll be glad to answer questions on specifics, but you need to understand the fundamentals first.

      • Willis Eschenbach

        Fred, sorry for the lack of clarity in my words, but you missed the point.

        First, you have to point to what you think I did wrong by quoting my words that you object to.

        Then, you need to tell me why you think my words are wrong.

        As a part of that, you can point to some references, like blog posts X and Y.

        You can’t just say “you’re wrong, go read this chapter”. I’m not going on a treasure hunt for an unspecified “error” because you are unwilling to 1) quote what I said that you think is wrong, and 2) tell me why you think it is wrong, including your citations and references to back up your claims.

        All I have from you so far is handwaving.

        w.

      • Willis – I thought I already did that, when I quoted, “Change in ocean heat storage = a constant C times change in surface air temperature… But that relationship is simply not true.”

        I pointed out that additional ocean heat as a function of a higher air temperature is linear enough to make the statement true rather than false. I also stated that many factors alter ocean heat storage rates on a regional and short term basis, but that long term, a warmer atmosphere is a dominant factor. It’s more important for warming due to ghgs than for long term changes in solar irradiance, but even the latter operate in part through atmospheric warming (direct absorption of near IR by atmospheric GHGs and increased redirection downward of surface-emitted energy coming from a solar-warmed surface). The addition to ocean heat can continue indefinitely as long as an imbalance exists because atmospheric energy is replenished directly and indirectly by solar radiation, and that is why the atmosphere, despite its small heat capacity, can transfer huge amounts of energy over long time intervals. This is basic thermodynamics relating air temperature to its ability to transfer energy downward by radiation and other means. Details are in the book I cited, but even a simpler text on basic geophysics will give you the same information, albeit with a less mathematically rigorous approach. Isaac Held’s blog items discuss some of the quantification, and cite a few relevant references. The Trenberth energy budget diagram also provides some quantitative data.

        The basic message is that air temperature is important long term, and that if you want to understand climate energy balance long term, warming via the atmosphere will be a critical element. If you simply want to know why ocean heat content has changed from one year to the next, many other factors, some unidentified, must be considered, but much of what you read about transient or equilibrium climate sensitivity will inevitably depend on much longer intervals than that.

        I probably won’t respond further unless new information is introduced, or you or others have specific questions beyond whether atmospheric warming is important.

      • Willis:
        Fred tried that crap on me. He looks down his long nose and expresses his supercilious, arrogant view that whatever you did is wrong, and either blames it all on the Clausius-Clapeyron equation, or if he really needs to roll out the heavy artillery, he says go read Pierrehumbert’s book. By the way, I glanced through that book and did not find it very illuminating.
        I spent the past five years reading 1,000 papers on climate change and I didn’t find one paper with Fred’s name on it. I had come to the conclusion that climate change was beyond human understanding but I no longer have to worry. Fred has all the answers.

      • Donald Rapp – Even if you don’t want to give me credit for knowing something about geophysics, it’s extraordinary that you would dismiss Pierrehumbert’s book. It’s already being recognized as a classic in the field.

      • Donald Rapp: By the way, I glanced through that book and did not find it very illuminating. I spent the past five years reading 1,000 papers on climate change

        Perhaps if you have read 1,000 papers on climate change then Pierrehumbert’s book might not be illuminating. It does, however, contain a great deal of pertinent information about climate, stated with appropriate qualifications on the accuracies of diverse assumptions and approximations. It states clearly that it is principally about equilibrium conditions, and frequently directs attention to the inaccuracy (generally small) of the equilibrium assumption. What it therefore lacks principally is the presentation of the dynamism to be found in Marcel Leroux’s book “Dynamic Analysis of Weather and Climate” and the chapters on dissipative systems in the book “Modern Thermodynamics” by Kondepudi and Prigogine.

        Because of the limitations frequently addressed, Pierrehumbert’s book does not permit a calculation of the probable effects on the temperature of Earth of a doubling of the CO2 concentration: neither transient nor quasi-steady state. The effects are smaller than the documented errors of approximation. For this and other reasons, I recommend the book heartily to any skeptic interested in the physics and mathematics of climate. It is a treasure trove of reasons not to believe AGW. Not that AGW is wrong, but that it is on the extant evidence untested and untestable.

      • That is Fred’s style. He refers to vast knowledge in books and in his education, but he omits relevant details on particular points. He simply does not quote what he thinks you wrote in error, and then quote the passage from Padilla et al or Pierrehumbert or Isaac Held that specifically disputes what you wrote that he thinks is in error. Then when he misses your specific point, he asserts that you have not mastered the basics.

        I wrote this because Fred wrote up above that he leaves it to his readers to determine whether he is correct in this instance. He is vague and imprecise as usual in this instance, as far as this reader can determine.

      • Matt – It’s reassuring that you appreciate the value of Pierrehumbert’s book. However, you state,

        “It is a treasure trove of reasons not to believe AGW. Not that AGW is wrong, but that it is on the extant evidence untested and untestable.”

        It appears that the author strongly disagrees with you, as illustrated by his discussion of global warming and its potential consequence on pages 58-66. If you want to argue that we haven’t yet narrowed the climate sensitivity range to a precise value, I don’t think anyone would disagree, but that’s not the same as doubting the existence of AGW. If you find evidence in the book for the latter, you should cite it. I don’t think most who read it will discover evidence of that sort.

      • I lean towards Matt’s position, though without putting the words “untested” and “untestable” in RP’s mouth.

        Regarding “untested,” on p.62 we read

        “The ultimate test of the theory, though, is to verify it against the uncontrolled and inadvertent experiment we are conducting on Earth’s own climate. Can we see the predicted warming in data? This is not an easy task. For one thing, the atmospheric CO2 increase is only a small part of the way towards doubling, and the climate has not even fully adjusted to the effect of this amount of extra radiative forcing: oceans take time to warm up, and delay the effect for many years (for reasons to be discussed in Chapter 7). Thus, so far the signal of the human imprint on climate is fairly small.”

        and a few sentences later

        “The fact that the signal is hard to detect does not mean that global warming is of little consequence. The difficulty arises precisely because we are trying to detect the signal before it becomes so overwhelmingly large as to be obvious.”

        Clearly RP accepts that the AGW hypothesis is right, but seems to be saying that testing has not yet begun in earnest, though we’re seeing some evidence already.

        Regarding “untestable”, on p.65 he says,

        “Typical predictions of equilibrium global average warming for a doubling of CO2 range from a low of around 2 °C to a high of around 6 °C, with some potential for even greater warming with a low (but presently unquantifiable) probability. Because of other uncertainties in the system (particularly the magnitude of the aerosol effect and especially the indirect aerosol effect on cloud brightness) simulations with a range of different cloud behaviors can all match the historical climate record so far, but nonetheless yield widely different forecasts for the future.”

        If it only goes up 2 °C, one might want to claim that the testing has been at best mildly convincing but perhaps not conclusive. If on the other hand it goes up 6 °C then many would argue that this confirmation of their worst fears constituted more than adequate testing of the AGW hypothesis.

      • Vaughan Pratt, you quoted R. Pierrehumbert as follows: “The fact that the signal is hard to detect does not mean that global warming is of little consequence. The difficulty arises precisely because we are trying to detect the signal before it becomes so overwhelmingly large as to be obvious.”

        I would rewrite that as “The fact that the signal is hard to detect does not mean that global warming would be of little consequence, should it occur. It means that we can not tell whether it is occurring.”

        That is, accepting his premise that the signal is hard to detect, I would point out that there is more than one compatible conclusion.

        I did not mean to imply that Dr. Pierrehumbert would agree with my overall evaluation of the evidence concerning AGW; only that his steadfast and thorough honesty in addressing the limitations of knowledge reveals where the theory behind AGW is inconclusive on the most important point (what difference does anthropogenic CO2 and CH4 make?). I should add, out of respect to Dr. Pierrehumbert, that he is the only commentator at RealClimate (that I know of) who will admit that the knowledge is inconclusive on important points, such as changes in cloud cover. The others either deny or trivialize the gaps in knowledge.

      • Fred wrote: It appears that the author strongly disagrees with you, as illustrated by his discussion of global warming and its potential consequence on pages 58-66. If you want to argue that we haven’t yet narrowed the climate sensitivity range to a precise value, I don’t think anyone would disagree, but that’s not the same as doubting the existence of AGW. If you find evidence in the book for the latter, you should cite it. I don’t think most who read it will discover evidence of that sort.

        Yes, I do disagree with him. Skepticism is a “norm” in science, in which propositions are to be doubted until evidence is sufficient to rule out alternative propositions. This doubt is active and probing. Dr. Pierrehumbert recurrently addresses the errors in the approximations — how different the Earth climate is from equilibrium, how different real emissions are from the Stefan-Boltzmann equation, etc. The doubling of CO2 is supposed to increase Earthward long-wave radiation at the Earth surface by about 1% (with variations depending on what the downward radiation is, e.g. 300 – 600 W/m^2 in the Equatorial Pacific), which is less than 1% of total insolation (daytime short wave radiation is as much as 1200 W/m^2 in the Equatorial Pacific). The mathematics summarizing knowledge of the climate system is not accurate enough to determine what the effect of such a slight change will be. In the entire book, there is not sufficient evidence to rule out other explanations of observed climate change, so it does not on the whole support AGW. It’s a fine summary of equilibrium conditions. Within the amount of error in the approximations, CO2 could decrease surface and low troposphere temperature and raise upper troposphere temperature, or the reverse, and the approximations would be equally correct.

        Even if the Earth were a disk with uniform surface and uniform lighting, Dr. Pierrehumbert’s book (that is, the knowledge summarized within) is entirely unable to deal with the structures that emerge in dissipative systems of the sort displayed in the book on thermodynamics by Kondepudi and Prigogine, but the magnitude of such structures is important for human civilization and all of biology. Complicate the situation with all the sources of non-equilibrium in the Earth system, and you see that the book is almost completely irrelevant to the question of AGW. The evidence of this (claimed) irrelevance is recurrently supplied within the text itself.

      • The doubling of CO2 is supposed to increase Earthward long-wave radiation at the Earth surface by about 1% (with variations depending on what the downward radiation is, e.g. 300 – 600 W/m^2 in the Equatorial Pacific), which is less than 1% of total insolation (daytime short wave radiation is as much as 1200 W/m^2 in the Equatorial Pacific.)

        I think your 1% figure is inaccurate, or at least misleading. Total solar radiation at the surface averages about 218 W/m^2. A CO2 doubling (before feedbacks) increases the TOA radiative imbalance by about 3.7 W/m^2, but because of greenhouse effects, the increase at the surface is almost twice that. This is closer to a 3% change than a 1% change.

        My other point is semantic and probably relatively minor, but I don’t really think you can say that Pierrehumbert’s book provides “reasons not to believe AGW”. The book wasn’t intended as a treatise on AGW, but the data on ghgs are supportive, rather than a reason “not to believe”. Unless you can cite specific passages providing reasons not to believe, I’ll have to side with the author on that issue. To say that an issue isn’t settled until there’s conclusive proof is reasonable, but it’s not the same as a reason not to believe, which implies evidence for falsification. I suspect that if anyone buys the book looking for those reasons, he or she might be disappointed, but would learn a lot of climate physics in the process.

      • The 3% is in reference to solar. For DLR, it’s about 2%.

      • An updated version of the energy budget diagram shows surface solar to be even less – 184 W/m^2.
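
        (The percentages traded in this subthread are easy to check with quick arithmetic, using only the figures quoted above: a surface increase of about 2 × 3.7 W/m^2 against surface solar of 218 or 184 W/m^2.)

```python
# Arithmetic check of the percentage figures quoted in this subthread.
surface_increase = 2 * 3.7  # W/m^2: "almost twice" the 3.7 W/m^2 TOA value
for surface_solar in (218.0, 184.0):
    pct = 100 * surface_increase / surface_solar
    print(f"{surface_increase:.1f} / {surface_solar:.0f} W/m^2 = {pct:.1f}%")
# Either way the result is ~3-4%, i.e. closer to 3% than to 1%.
```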

      • I agree with Donald and MattStat that Fred is not as helpful as he could be. His encyclopaedic memorization of the literature gives no hint of the uncertainties and how complexity can enter. Also I think Fred was born without the skepticism gene so essential to scientific progress. This subsequent discussion of Pierrehumbert has been interesting, but I note that the interesting parts were not brought out by Fred. I am wondering if perhaps Fred shares the environmentalist perspective that Ray is so ardent about. Having shared this view 30 years ago, I can attest that it leads to some rather strange prejudices, such as that man is “contaminating” nature, which is good. Shades of Rousseau, who perhaps originated this strange dogma in the modern world. It is amply deconstructed by Bertrand Russell. Russell has some great sarcastic comments about the idea that whatever is natural must be “good” and that whatever man does that goes “against” nature must be evil.

        Sorry to pile on, but I still have high hopes that Fred can help us understand the flaws that need to be addressed in climate science and show where the most obvious problems are that need attention. As the recent emails show clearly, the paleoclimate records are a very good place to start.

      • David – Thanks for the comment and your confidence that I could still be helpful if only I would reform my ways. I feel less comfortable discussing myself than scientific issues, but i disagree with two of your characterizations.

        First, however flattering, your description of my “encyclopedic” knowledge of the literature is way off the mark. I just struggle to keep up, and, aware of my limitations, I sometimes wonder why people who don’t even try are sometimes so dogmatic in their pronouncements about what’s wrong with the literature.

        Second, skepticism is in the eye of the beholder. If you wanted to waste your time going back through this blog reviewing my comments, you’d find very many examples of my skepticism. It might just be, though, that what I’m skeptical about isn’t always the same as what some others are skeptical about, given the constituency of this blog.

        In any case, if you read my long description of the relevant papers near the start of this thread, you’ll find that I commended the authors for their acknowledgment of uncertainty surrounding their conclusions. I’m not averse to the notion that there’s still a lot that needs to be understood better.

      • It is a question of emphasis. If you are trying to rationalize the current climate science literature, then you minimize the differences and uncertainties. If you think, apparently like Pierrehumbert, that the data is noisy and the signal is small, then you might be suspicious.

        An example is the Schmittner paper. One can choose the 66% confidence interval as Fred did, and that gives a lower bound of 1.7K which is not inconsistent with IPCC ranges, or you can look further and see that depending on forcing assumptions you can get peaks at 1.3K, 2.1K and 2.9K. That is a huge range, at least in terms of its consequences. I seriously doubt that we have much of a handle on the forcings 23,000 years ago.

        Likewise for the Trenberth et al paper on the energy budget that Fred cites. One can just quote the numbers while failing to note that there are several estimates of all the numbers, and the only one that seems to be well constrained is solar insolation. In any case, averaging these crude numbers over the whole globe surely is totally inadequate. After all, the assumption is that it was a change in the distribution of forcing and not a change in total forcing that caused the ice ages.

        So, it all depends on what your goal is when you read the literature. Some want to rationalize it because of their unquestioned faith in the authority of science and this particular band of very flawed climate scientists. Some want to advance the science by asking questions. The latter is much more useful.

  32. Willis Eschenbach

    Chris Ho-Stuart | November 30, 2011 at 6:44 am |
    Change in surface air temperature = climate sensitivity times change in forcing.

    Not quite. It is change in temperature given enough time to come to equilibrium with the new forcing.

    Thanks, Chris. You are referring to “equilibrium climate sensitivity”. There’s a variety of climate sensitivities depending on the time span involved. My statement, however, is just an English translation of the canonical equation

    ∆T = λ ∆F

    where ∆T is the change in temperature, λ (lambda) is the climate sensitivity, and ∆F is the change in forcing. As long as the time frames are congruous (meaning that the sensitivity corresponding to the time span of interest is used) there’s no problem with my statement.
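
    Numerically the canonical relation is straightforward to apply. A sketch (the λ value here is an assumption chosen only for illustration; it roughly reproduces the oft-quoted 3 K best estimate):

```python
# Illustrative use of the canonical relation dT = lambda * dF.
# lam below is an assumed sensitivity, not a value from this thread.
lam = 0.8  # climate sensitivity, K per (W/m^2)
dF = 3.7   # canonical forcing for a doubling of CO2, W/m^2
dT = lam * dF
print(f"dT = {dT:.2f} K")  # prints "dT = 2.96 K", i.e. about 3 K
```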

    It is therefore not given simply by observing the change following a step change in forcing; since it takes significant time to come to equilibrium with the new forcing.

    This is the major problem I have with your discussion; you seem to miss entirely this aspect of “equilibrium response”.

    I’m not clear how that affects the math, or how it makes my claims incorrect. Whether you are talking long or short term, the math is the math, it doesn’t change. I did not specifically deal with the “equilibrium response” because it didn’t enter either into the paper I was analyzing, or my analysis of the paper.

    For the record, however, I was (and have to be) talking about equilibrium response, as my paper was discussing the nature of the system at equilibrium. I still don’t see, however, what difference that makes to either my analysis, or the paper that I was deconstructing.

    All the best,

    w.

    PS—You say above:

    There’s a lot that’s wrong with Willis’s post on the cold equations — especially (as Vaughan has noted I think?) with the way he is implicitly defining and estimating climate sensitivity.

    I used the definition of climate sensitivity which was used in the JGR paper I analysed. This is, as far as I know, the standard definition—climate sensitivity is the temperature change resulting from a given forcing change, over the time period of interest (i.e. transient, or equilibrium). Let me go see what the IPCC says … OK, I find these:

    In IPCC reports, climate sensitivity usually refers to the long-term (equilibrium) change in global mean surface temperature following a doubling of atmospheric CO2 (or equivalent CO2) concentration. More generally, it refers to the equilibrium change in surface air temperature following a unit change in radiative forcing (°C/W m-2).

    and

    As defined in previous assessments (Cubasch et al., 2001) and in the Glossary, the global annual mean surface air temperature change experienced by the climate system after it has attained a new equilibrium in response to a doubling of atmospheric CO2 concentration is referred to as the ‘equilibrium climate sensitivity’ (unit is °C), and is often simply termed the ‘climate sensitivity’

    If you (or Vaughn) think that definition is wrong, where is your correction to JGR and the IPCC informing them of that fact? Or alternatively, if you think I am using another definition, what do you think that definition is? (please provide quotations from my paper to establish what I’ve done wrong, and don’t do what Vaughn does where he just makes the claim that I’ve done something wrong and provides no evidence at all).

    Thanks.

    • For the record, however, I was (and have to be) talking about equilibrium response, as my paper was discussing the nature of the system at equilibrium. I still don’t see, however, what difference that makes to either my analysis, or the paper that I was deconstructing.

      Thanks for stating which definition you’re working with. It’s one that would appear to have more bearing on what the temperature is likely to do between now and 3000 than 2100. The latter impact is of more immediate interest, at least to our immediate descendants, and also has a better chance of coming true (note I did not say “good chance.”)

      Your analysis and deconstruction may be fine, but are they of any practical interest?

      If you (or Vaughn) think that definition is wrong, where is your correction to JGR and the IPCC informing them of that fact?

      Neither science nor the IPCC works that way. Disagreements of this kind with the state of the art, for example the greater relevance of transient response over equilibrium sensitivity, start out “in the air,” make their way into conference presentations, then more archival literature. The scientific community gets to ponder these arguments and adjust its views as it sees fit. The IPCC’s task is not to respond to every individual with a dissenting viewpoint but to see that each report faithfully reflects the state of the art at the time of writing.

      • The IPCC’s task is not to respond to every individual with a dissenting viewpoint but to see that each report faithfully reflects the state of the art at the time of writing.

        Would it not be better to suggest that the IPCC and its contributing authors interpret the “literature” at the level of their appropriate “skill”?

        Take, for example, Ghil 2008, which asks the correct questions (more subtle than Willis’s), posed as well-posed problems, e.g.:

        As the relatively new science of climate dynamics evolved through the 1980s and 1990s, it became quite clear from observational data, both instrumental and paleoclimatic, as well as model studies, that Earth’s climate never was and is unlikely to ever be in equilibrium. The three successive IPCC reports (1991 [2], 1996, and 2001 [3]) concentrated therefore, in addition to estimates of equilibrium sensitivity, on estimates of climate change over the 21st century, based on several scenarios of CO2 increase over this time interval, and using up to 18 general circulation models (GCMs) in the fourth IPCC Assessment Report (AR4) [4].

        The GCM results of temperature increase over the coming 100 years have stubbornly resisted any narrowing of the range of estimates, with results for Ts in 2100 as low as 1.4 K or as high as 5.8 K, according to the Third Assessment Report. The hope in the research leading up to the AR4 was that a set of suitably defined “better GCMs” would exhibit a narrower range of year-2100 estimates, but this does not seem to have been the case.

        The difficulty in narrowing the range of estimates for either equilibrium sensitivity of climate or for end-of-the-century temperatures is clearly connected to the complexity of the climate system, the multiplicity and nonlinearity of the processes and feedbacks it contains, and the obstacles to a faithful representation of these processes and feedbacks in GCMs. The practice of the science and engineering of GCMs over several decades has amply demonstrated that any addition or change in the model’s “parametrizations”, i.e., of the representation of subgrid-scale processes in terms of the model’s explicit, large-scale variables, may result in noticeable changes in the model solutions’ behavior.

        Hence it is difficult to implement change when error is “entrenched” in the model family’s core dynamics, i.e., irreducible imprecision (McWilliams 2007).

        When standard approaches from a dynamical point of view (read: geometry), such as Ghil 2001, 2008 and Chekroun 2010, are used, the impervious divide arises.

        As these are nonstandard from the climate-science point of view, Held, for example, invokes the Captain Oates option:

        This is not an easy topic, and one that I have a lot to learn about. But I wouldn’t advise you to cancel your summer vacation plans just yet.

        Should interpreters be provided for the IPCC, as the information on climate sensitivity often seems to get lost in translation, say with Annan or Colose on ZG10?

  33. WebHubbleTelescope, that was a good post.

  34. Oops, I mean WebHubTelescope. I have a bad habit left over from the Carter years.

  35. They don’t have the data for the higher temperature regime where we reside today and where the climate sensitivity may differ.

    I think that the history of science teaches that untested assumptions are usually wrong. Is there a way, with present data, to test whether the climate sensitivity of the near future (next few decades or next hundred years) is the same as the (recent) past?

    • Unless near future forcings differ radically from those of the recent past, near future climate sensitivity will resemble that of the recent past. The article by Padilla et al already discusses some of this in anticipating a narrowing of transient sensitivity confidence intervals over the coming decades, as a continuation of the Bayesian recursive process that was implemented starting around 1900.

      Climate sensitivity is not climate prediction. It describes a relationship between forcing and temperature, but doesn’t tell us what the forcing magnitude will be. In part, the latter is up to us, because it will involve how much CO2, other greenhouse gases, and aerosols we put into the atmosphere.
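      The “Bayesian recursive process” mentioned above can be sketched in miniature with a conjugate Gaussian update; the prior, the “observations”, and the noise level below are invented purely for illustration, and this is only the generic mechanism, not Padilla et al.’s actual method:

      ```python
      # Minimal sketch of recursive Bayesian updating with a conjugate
      # Gaussian prior and likelihood. All numbers are hypothetical.
      def gaussian_update(prior_mean, prior_var, obs, obs_var):
          gain = prior_var / (prior_var + obs_var)
          post_mean = prior_mean + gain * (obs - prior_mean)
          post_var = (1.0 - gain) * prior_var
          return post_mean, post_var

      mean, var = 3.0, 2.0            # broad prior (K, K^2)
      for obs in [2.0, 2.4, 2.1]:     # hypothetical successive estimates
          mean, var = gaussian_update(mean, var, obs, obs_var=1.0)

      # Each update shrinks the posterior variance: the interval narrows.
      print(mean, var)
      ```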

      • Fred Moolten: Unless near future forcings differ radically from those of the recent past, near future climate sensitivity will resemble that of the recent past.

        That is undoubtedly true. And if near future climate sensitivity is a little different from the climate sensitivity of the recent past, then none of the predictions from the GCMs will be accurate enough to use as a basis of policy. A 2% increase in cloud cover would produce a slight (not radical) difference in surface “forcings”, for example (this is a tangent, admittedly). Does someone have data ruling out such a possibility?

      • Fred,

        Isaac Held’s blog has good videos of simulations of cloud formation and dissipation over ocean: http://www.gfdl.noaa.gov/blog/isaac-held/2011/10/26/19-radiative-convective-equilibrium/

        What we need to know is: How does a doubling of the CO2 concentration affect this process?

        We need to know other details as well, such as: could an increase in the mean troposphere temperature occur with a reduction of near surface temperature? It could in principle (knowledge to the contrary does not exist), but would the kinetics of energy transport from lower to higher troposphere produce such a result?

        Now reconsider your comment: The article by Padilla et al already discusses some of this in anticipating a narrowing of transient sensitivity confidence intervals over the coming decades, as a continuation of the Bayesian recursive process that was implemented starting around 1900.

        It’s irrelevant to the heat transfer processes that will change in response to an increase in CO2.

      • Isaac Held’s blog has good videos of simulations of cloud formation and dissipation over ocean… What we need to know is: How does a doubling of the CO2 concentration affect this process?

        We only have model data on CO2 doubling, which estimates a significant positive feedback. We have observational data on how clouds have behaved over recent decades that were characterized primarily by warming. They show a tendency for low (cooling) clouds to diminish, and/or for the ratio of high (warming) clouds to low clouds to increase, although with considerable interannual variability. All of these data are consistent with a positive feedback although it is never possible to conclusively exclude other variables that coincidentally mediated the observed changes. They appear to fairly conclusively exclude a strong negative feedback long term, although not necessarily negative short term feedback responses. Relevant links are:

        Cloud Cover Over Oceans

        ISCCP Cloud Types

        ISCCP Cloud Analysis

        Trends In Cloud Cover

      • Those are good references. They do not address the specific point that I made.

        Your comment “We only have model data on CO2 doubling” is correct. We don’t have measured data (are you never bothered by the oxymoron “model data”?), and we don’t have model results for the specific point that I mentioned. It has not yet been demonstrated that the models are accurate enough to model the actual effects of CO2 increases.

        About this: They appear to fairly conclusively exclude a strong negative feedback long term, although not necessarily negative short term feedback responses.

        Because they are observational studies, they do not exclude any possibilities about an increase in CO2, either globally, over any particular region, or over any particular time span (e.g., morning vs. afternoon; winter vs. summer; annual vs. multidecadal.) They merely summarize observed trends (although “merely” does not do justice to the work involved: the 2011 paper on cloud cover over oceans summarizes 54 million data points; any model purporting to be relevant to public policy discussions should be able to reproduce their results in model hindcasts.) That paper is still behind a paywall, so I only could read the abstract.

        A fairly weak negative feedback (for example, a 4% increase in cloud cover) remains a possibility not excluded by any current knowledge. By “long term”, are you referring to the equilibrium result, or do you mean something like the 50-year result?

      • The cloud data can’t be said to “prove” anything, but they are consistent with a positive long term (multidecadal) cloud feedback and exclude a strong long term negative feedback. They don’t exclude a weak negative feedback but make it appear unlikely. When it comes to feedbacks and the related concept of climate sensitivity, it is the convergence of data from many independent sources that lends strength to the conclusions and not conclusive evidence from any single source.

    • But Mattstat, that is the fun part. It is like any puzzle, solving it is fun, figuring out other ways to solve it is fun and watching it baffle other people is fun. So figuring out whether climate sensitivity of the near future will change or be about the same gets down to the physics, not the statistics, and physics is phun :)

      As you mentioned up above, Ray Pierre’s book should be a must read for skeptics. I am waiting on the paperback myself. One thing that seems missing from the radiant physics part is variation in local emissivity. It plays hell on direct flux measurements in the polar regions and seems to be a bit of a bugger in the tropopause.

      It seems that there are plenty of spectra looking up from the surface and plenty of spectra looking down from space. Spectra in the tropopause looking up and down are rare. I happened on one, taken from space, of the emission spectrum of a deep convection cloud top at about -90C, just above the tropopause. The spectrum of that cloud top was the atmospheric window spectrum. That means that there were no CO2-friendly photons to amount to much. Without photons to absorb, CO2 can’t emit much energy.

      Since I am just a fisherman that happened to stay at a Holiday Inn Express once, I can’t say with much authority, but that could be an issue. It might even cause the upper troposphere temperatures to not be as high as predicted. Some scientists at the University of Wisconsin, Milwaukee are working on the issue though. I wonder if that was in Ray’s book?

      • Captain: Spectra in the tropopause looking up and down are rare.

        That is an interesting comment.

  36. I suspect that the figures of 2.3K median and 1.7 to 2.6 K, for 66% probability, won’t be low enough to cause any significant numbers of so-called climate sceptics to accept this paper. They’ll have noticed that these figures are lower than IPCC estimates, so they won’t want to be too hostile. There’ll be a few mutterings about a ‘useful contribution to the debate’. That sort of thing.

    If only Schmittner et al, could have come up with a way of justifying knocking off about a degree from these figures. That would have caused some excitement on the denialist blogosphere. They would have all been heroes then!

  37. Willis Eschenbach

    Fred Moolten | November 30, 2011 at 4:54 pm |

    Willis – I thought I already did that, when I quoted, “Change in ocean heat storage = a constant C times change in surface air temperature… But that relationship is simply not true.”

    Ah. I see the problem. I had stated a conclusion in this thread. In my analysis, I listed the reasons. You had said that my overall conclusion is wrong. That’s not what I was asking for. I wanted specifics.

    I pointed out that additional ocean heat as a function of a higher air temperature is linear enough to make the statement true rather than false.

    You “pointed that out”, but you haven’t supported it. Call me skeptical, but I’ve looked at the numbers and reported what I found. If you want to make that claim, your unsupported word is inadequate. The data is all publicly available. I cite the data and my methods.

    However, your statement reveals a different problem. This is that you think that the atmosphere is driving the temperature changes in the ocean. In fact the causation is the other way around. The boundary layer rapidly takes up the temperature of the ocean. Because of the huge thermal mass of the ocean, it is hardly affected by the temperature of the boundary layer.

    Of much greater importance in oceanic heat loss is evaporation. Evaporation varies linearly with the wind. So if the wind goes from 1 m/sec (2 knots) to 2 m/sec (4 knots), the evaporation rate doubles. It doesn’t go up a little. It doubles.

    And as a result, when the wind goes up, ocean heat loss increases greatly regardless of the temperature.
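    The linear-in-wind claim above corresponds to the standard bulk aerodynamic formula for evaporative moisture flux, E = ρ_air · C_E · U · (q_s − q_a); the density, exchange coefficient, and specific-humidity values below are illustrative assumptions, not measurements:

    ```python
    # Bulk aerodynamic formula for evaporative (moisture) flux.
    # The flux is linear in wind speed U, so doubling U from 1 m/s
    # to 2 m/s doubles the evaporation rate. Parameter values are
    # illustrative only.
    def evap_flux(wind_speed, q_surface, q_air,
                  rho_air=1.2, exchange_coeff=1.3e-3):
        return rho_air * exchange_coeff * wind_speed * (q_surface - q_air)

    e1 = evap_flux(1.0, 0.020, 0.015)  # kg m^-2 s^-1 at 1 m/s
    e2 = evap_flux(2.0, 0.020, 0.015)  # at 2 m/s
    print(e2 / e1)  # 2.0: the flux doubles with wind speed
    ```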

    I also stated that many factors alter ocean heat storage rates on a regional and short term basis, but that long term, a warmer atmosphere is a dominant factor.

    I know you STATED that everything averages out except radiation. But until you can DEMONSTRATE using observational data that all those variables average out, I’m not interested.

    It’s more important for warming due to ghgs than for long term changes in solar irradiance, but even the latter operate in part through atmospheric warming (direct absorption of near IR by atmospheric GHGs and increased redirection downward of surface-emitted energy coming from a solar-warmed surface). The addition to ocean heat can continue indefinitely as long as an imbalance exists because atmospheric energy is replenished directly and indirectly by solar radiation, and that is why the atmosphere, despite its small heat capacity, can transfer huge amounts of energy over long time intervals.

    If you run some warm air out over a cold ocean, first the boundary layer will cool, and from there the atmosphere will cool upwards. At equilibrium, the entire air mass will be cooler because of the cool ocean. This in turn will reduce the amount of DLR.

    So … is the ocean temperature regulating the amount of DLR, or the other way around?

    This is why the small heat capacity of the atmosphere matters. The temperature of the air is ruled by the ocean, because of the trivial heat capacity of the atmosphere. In turn, the amount of DLR is controlled by the air temperature … which means that ocean surface temperature is controlling DLR.

    This is basic thermodynamics relating air temperature to its ability to transfer energy downward by radiation and other means. Details are in the book I cited, but even a simpler text on basic geophysics will give you the same information, albeit with a less mathematically rigorous approach. Isaac Held’s blog items discuss some of the quantification, and cite a few relevant references. The Trenberth energy budget diagram also provides some quantitative data.

    Been there, understand that. If you have a specific point, you’ll have to make it and support it. I know that air can transfer energy by radiation, and by conduction/convection of sensible/latent heat. You’ll have to be more specific.

    The basic message is that air temperature is important long term, and that if you want to understand climate energy balance long term, warming via the atmosphere will be a critical element. If you simply want to know why ocean heat content has changed from one year to the next, many other factors, some unidentified, must be considered, but much of what you read about transient or equilibrium climate sensitivity will inevitably depend on much longer intervals than that.

    No. Your basic claim is that air temperature “is important long term”. Your claim is that air temperature is the only important variable, because you say that everything else (wind, insolation, radiation, everything) simply averages out.

    You repeatedly claim that they average out. You have not demonstrated that.

    I probably won’t respond further unless new information is introduced, or you or others have specific questions beyond whether atmospheric warming is important.

    The issue was never whether “atmospheric warming is important”, that’s your straw man. Your claim is that atmospheric warming is the only variable worth considering and that everything else averages out. You have not even begun to substantiate that.

    w.

    • is the ocean temperature regulating the amount of DLR, or the other way around?

      Both, but for a CO2-mediated forcing, the process starts in the atmosphere. This transfers thermal energy into the ocean, which warms, thereby reinforcing atmospheric warming through increased upward energy transfer. In other words, atmospheric temperature change is the initiator that results in ocean warming. More details are in Pierrehumbert pages 413-414 and the Trenberth energy budget diagram.

      Willis – Your original statement that I quoted was wrong. I’ve described the actual process, with reference to sources. Whether you want to cling to your original belief or move on is up to you. I’ll also let you work out for yourself why over multidecadal intervals, the other factors you mention don’t drive long term global trends. I’m going to assume though, that because you’re a bright guy, you’ll want to arrive at an accurate understanding, even if you do it privately without conceding anything in public.

    • Freddy has never been to San Francisco in July. He would be just like the other naive turistas: cold. But he could take comfort in knowing that if he returns sweaterless, year after year, it will eventually all average out.

  38. The following article is relevant to the type of process used in the climate sensitivity estimates by Schmittner et al and Holden et al – Probabilistic Projections Using Imperfect Climate Models. It will take me a while to get through all of the details, but I cite it here because of its general interest not only for this thread but for similar efforts in the future.

    • Freddy,

      You have proven in this thread that you are the Premier Purveyor of the Party Line. Stop in at RC asap to collect your plaque and honorary PhD in Big Climate Obfuscation.

    • Here is Part 2 of the article referenced above.

    • The paper is ok, but it does assume that the models are accurate enough, and that the climate sensitivity is constant. Since neither has yet been demonstrated to be true, it’s another empty Bayesian enterprise.

    • Fred

      I did a fairly quick read of the paper by Sexton et al. and find their idea of identifying “parametric uncertainty” between models and using it as a means to identify what is accurate on an overall basis deeply flawed. In essence the premise seems to be to take a bunch of outputs of models that cannot predict the future accurately, see where these models agree, and make those agreements the basis of summaries for policy makers.

      Why do those involved in atmospheric modeling seem to believe they need processes for model development completely different from standard engineering practices for model development? In reality, isn’t a general circulation model being used for policy implementation just a long-term weather model?

      It does not matter if the output is from a global model or a series of regional models, it only matters if the model(s) can accurately forecast temperature and rainfall at a local level as a function of CO2. Why is it unacceptable to acknowledge that we don’t know what will happen with enough certainty to understand the net impacts?

      • Rob – For me, a “quick read” will be inadequate to evaluate the method, but I think it has promise. I still have to go over it in more detail.

        There are similarities and differences between weather models and climate models.

        It’s not correct to say that “it only matters if the model(s) can accurately forecast temperature and rainfall at a local level as a function of CO2.”

        What we know and don’t know are generally acknowledged adequately in my experience with the literature – there’s some of both.

        Climate understanding and prediction are aided by GCM-type models, but also by many other sources of information. Blogosphere commentary sometimes attributes more dependence of inferences about climate on GCMs than is actually the case. I see this as a common misconception, and I’ve cited examples in the past of how various climate sensitivity estimates can be derived without deriving them from GCM simulations.

        For decision making, the question of how certain we need to be within what margin of error is a complex one, and any simple recipe is likely to be wrong.

  39. Stephen Wilde

    The atmosphere cannot warm the oceans. I have been into that issue in tiresome detail here:

    http://climaterealists.com/index.php?id=4245

    “Suppose one gets a vastly increased infra red input from above to the ocean surface.
    If that infra red energy input is high enough the few microns involved in evaporation could reach boiling point but there would still be a layer below that is cooler than the ocean bulk because the speed of evaporation just ramps up and takes the vastly increased latent heat of evaporation it needs from the layer below the evaporating layer. The temperature discontinuity between the cool layer at the top of the ocean and the ocean bulk will intensify and go deeper because there is an increased gap between the energy sucked into the air and the energy coming up from below.

    Fourier’s Law will be of no consequence in the face of such an energy hungry process. The energy flow will never go downwards however hot the evaporative region gets because of the energy needs of the faster and faster evaporative process.

    One needs to see that evaporation needs more than the infra red energy available in the evaporating layer because of the net cooling effect. There is not enough energy in the evaporating layer even with the extra IR to supply the evaporative process. That is why the cooler layer at the top of the ocean is there in the first place. The evaporative process is taking the energy it needs from where it is most readily available. There is not enough in the evaporative layer so it is taken from below.

    That is the physical mechanism. It renders Fourier’s Law irrelevant in the locality of ongoing evaporation.

    One has to see that the infra red coming down is all used up in provoking the extra evaporation leaving a deficit that has to be supplied from elsewhere and that elsewhere is the ocean skin. The simple existence of a cooler ocean skin below the evaporative layer is the proof of it.

    This principle applies to ALL downward IR and not just any extra bit supplied by human activity. Downwelling IR always accelerates energy flow from water to air. It can never slow it down despite Fourier’s Law.

    I think that’s as far as I can go. If one still doesn’t get it then there is nothing more I can say save to point out that if one thinks there is enough energy from the downwelling infra red to provide all or more than the energy needed by evaporation then one is denying all the text books that clearly describe evaporation as a net cooling effect. One’s description would be of a neutral or net warming effect.”

  40. JC and WHT thank you for this very informative post and thank you to all who contributed to the discussion too.

    I have a long way to go to digest all this. My initial reaction is that we will be learning about and refining our understanding of climate sensitivity for decades to come. It is interesting to note how little the uncertainty on climate sensitivity has changed, in the IPCC assessment reports, over the past 20 years. A friend did this summary:

    “First IPCC Assessment (1990)
    Paleoanalogue method gives “a sensitivity of 3.0 deg C for a doubling of CO2” (p.83)
    Modelling indicates climate sensitivity is uncertain but “unlikely to lie outside the range of 1.5 to 4.5 deg C. There is no compelling evidence to suggest in what part of this range the correct value is likely to lie” (p. 139).

    Second IPCC Assessment (1995)
    The various coupled models had sensitivities ranging from 2.1 to 4.6 deg C (p.300).

    Third IPCC Assessment (2001)
    “Climate sensitivity is likely to be in the range of 1.5 to 4.5°C” (WG1 Technical Summary p. 67).

    Fourth IPCC Assessment (2007)
    “‘equilibrium climate sensitivity’, is likely to lie in the range 2°C to 4.5°C, with
    a most likely value of about 3°C. Equilibrium climate sensitivity is very likely larger than 1.5°C.” ( Ch. 6, Box 6.2).
    This box also notes that for the previous assessments “….equilibrium climate sensitivities simulated by atmospheric GCMs coupled to non-dynamic slab oceans…..were:
    3.8°C ± 0.78°C in the SAR (17 models),
    3.5°C ± 0.92°C in the TAR (15 models) and, in this assessment,
    3.26°C ± 0.69°C (18 models).”
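    The ensemble summaries quoted above (mean ± standard deviation over N models) are straightforward to reproduce in principle; the sensitivity values below are hypothetical placeholders, since the per-model AR4 numbers are not listed in the comment:

    ```python
    import statistics

    # Reproducing the "mean +/- standard deviation over N models" style
    # of summary from AR4 Box 6.2. These sensitivities (in deg C) are
    # invented for illustration, not the actual per-model AR4 values.
    sensitivities = [2.1, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.6, 4.1, 4.4]

    mean = statistics.mean(sensitivities)
    sd = statistics.stdev(sensitivities)  # sample standard deviation
    print(f"{mean:.2f} C +/- {sd:.2f} C ({len(sensitivities)} models)")
    ```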

  41. Web,
    Steve McIntyre has noted some new Arctic o18 records,

    http://climateaudit.org/2011/12/05/kinnard-arctic-o18-series/#more-15137

    You may be able to compare some more recent stuff.
