Mainstreaming ECS ~ 2 C

by Judith Curry

Humanity has a second chance to stop dangerous climate change. Temperature data from the last decade offers an unexpected opportunity to stay below the agreed international target of 2 °C of global warming. – New Scientist

A new article is in press at Nature Geoscience [link]:

Energy budget constraints on climate response

Alexander Otto, Friederike E. L. Otto, Olivier Boucher, John Church, Gabi Hegerl, Piers M. Forster, Nathan P. Gillett, Jonathan Gregory, Gregory C. Johnson, Reto Knutti, Nicholas Lewis, Ulrike Lohmann, Jochem Marotzke, Gunnar Myhre, Drew Shindell, Bjorn Stevens & Myles R. Allen

Read the list of authors, read it carefully.  Note the presence of a number of names that are well known as IPCC lead authors and otherwise.  Also note the presence of Nicholas Lewis.

The punchline is this:

The most likely value of equilibrium climate sensitivity based on the energy budget of the most recent decade is 2.0 °C, with a 5–95% confidence interval of 1.2–3.9 °C, compared with the 1970–2009 estimate of 1.9 °C (0.9–5.0 °C; grey, Fig. 1a). Including the period from 2000 to 2009 into the 40-year 1970–2009 period delivers a finite upper boundary, in contrast with earlier estimates calculated using the same method. The range derived from the 2000s overlaps with estimates from earlier decades and with the range of ECS values from current climate models (ECS values in the CMIP5 ensemble are 2.2–4.7 °C), although it is moved slightly towards lower values. Observations of the energy budget alone do not rule out an ECS value below 2 °C, but they do rule out an ECS below 1.2 °C with 95% confidence.

The best estimate of TCR based on observations of the most recent decade is 1.3 °C (0.9–2.0 °C). This is lower than estimates derived from data of the 1990s (1.6 °C (0.9–3.1 °C)) or for the 1970–2009 period as a whole (1.4 °C (0.7–2.5 °C); grey, Fig. 1b). Our results match those of other observation-based studies and suggest that the TCRs of some of the models in the CMIP5 ensemble with the strongest climate response to increases in atmospheric CO2 levels may be inconsistent with recent observations — even though their ECS values are consistent and they agree well with the observed climatology. Most of the climate models of the CMIP5 ensemble are, however, consistent with the observations used here in terms of both ECS and TCR.
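
The arithmetic behind these headline numbers is simple. Energy-budget estimates of this kind reduce to two relations: TCR = F2x·ΔT/ΔF and ECS = F2x·ΔT/(ΔF − ΔQ), where ΔT, ΔF and ΔQ are the changes in decadal-mean surface temperature, radiative forcing and Earth-system heat uptake relative to a reference period, and F2x is the forcing from a doubling of CO2. A minimal sketch, with illustrative placeholder inputs chosen only to land near the paper's headline values (these are not the paper's actual data):

```python
# Energy-budget relations of the kind used in the paper; the inputs below are
# illustrative placeholders, not the paper's actual numbers.
F_2x = 3.44   # W/m^2, assumed forcing from a doubling of CO2
dT = 0.75     # K, assumed change in decadal-mean temperature vs. the reference period
dF = 1.95     # W/m^2, assumed change in radiative forcing
dQ = 0.65     # W/m^2, assumed change in Earth-system heat uptake

TCR = F_2x * dT / dF          # transient response: heat uptake still in progress
ECS = F_2x * dT / (dF - dQ)   # equilibrium response: subtract the unrealised uptake
print(f"TCR ~ {TCR:.1f} C, ECS ~ {ECS:.1f} C")   # roughly 1.3 C and 2.0 C
```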

Nic Lewis has a post on this at Bishop Hill, some excerpts:

The Nature Geoscience paper, although short, is in my view significant for two particular reasons. First, using what is probably the most robust method available, it establishes a well-constrained best estimate for TCR that is nearly 30% below the CMIP5 multimodel mean TCR of 1.8°C (per Forster et al. (2013), here). The 95% confidence bound for the Nature Geoscience paper’s 1.3°C TCR best estimate indicates some of the highest-response general circulation models (GCMs) have TCRs that are inconsistent with recent observed changes. Some two-thirds of the CMIP5 models analysed in Forster et al. (2013) have TCRs that lie above the top of the ‘likely’ range for that best estimate, and all the CMIP5 models analysed have an ECS that exceeds the Nature Geoscience paper’s 2.0°C best estimate of ECS. The CMIP5 GCM with the highest TCR, per the Forster et al. (2013) analysis, is the UK Met Office’s flagship HadGEM2-ES model – see their webpage “Advanced climate modelling for policymakers” and their document “Advance: Improved advice for science mitigation advice”. The uncertainty distribution for the Nature Geoscience paper’s best TCR estimate of 1.3°C indicates that it is extremely unlikely that real-world TCR is as high as that of the HadGEM2-ES model. It has a TCR of 2.5°C, nearly double 1.3°C and 0.5°C beyond the top of the 5–95% uncertainty range. The paper obtains similar, albeit less well constrained, best estimates using data for earlier periods than 2000–09.

Secondly, the authors include fourteen climate scientists, well known in their fields, who are lead or coordinating lead authors of IPCC AR5 WG1 chapters that are relevant to estimating climate sensitivity. Two of them, professors Myles Allen and Gabi Hegerl, are lead authors for Chapter 10, which deals with estimates of ECS and TCR constrained by observational evidence. The study was principally carried out by a researcher, Alex Otto, who works in Myles Allen’s group.

The New Scientist has an article on this paper that is worth reading.

James Annan has a post that says what really struck me also:

The analysis itself is not particularly novel or exciting: what makes it newsworthy in my view is the list of authors, which includes some who had previously been trying to talk down these recent estimates. Even though this paper is too late for the IPCC AR5, I hope it reflects a change in thinking from the IPCC authors involved.

They argue that the new result for sensitivity “is in agreement with earlier estimates, within the limits of uncertainty”. The contrasting claim that the analysis of transient response gives a qualitatively different outcome (being somewhat lower than both the previous IPCC assessment, and the range obtained from GCMs) is just weird, since both their ECS and TCR results are markedly lower than the IPCC and GCM ranges.

This looks like a pretty unreasonable attempt to spin the result as nothing new for sensitivity, when it is clearly something very new indeed from these authors, and implies a marked lowering of the IPCC “likely” range. So although the analysis does depend on a few approximations and simplifications, it’s hard to see how they could continue to defend the 2-4.5C range.

JC comments:  James Annan’s blog post starts with this sentence: “At last the great and the good have spoken.”  I.e., some IPCC lead authors are paying attention to the lower sensitivity estimates.  It will be very interesting to see how the IPCC AR5 plays this.  I suspect that the uncertainty monster will become their good friend, ‘not inconsistent with.’  It will be very interesting indeed to see if the IPCC budges from the 2-4.5 C range that has remained unchanged since the 1979 Charney report.

714 responses to “Mainstreaming ECS ~ 2 C”

  1. I just want to make sure now. We are excited because some folks finally saw the light. We are not excited because we can claim a bunch of experts say sensitivity is lower. After all, that would be falling back into the claim of authority trap wouldn’t it?

    • We should be excited by both – because a ‘bunch of experts’ shifting their position like this enables governments to think more sensibly about energy policy. Ain’t gonna lade the poor no more.

    • Gary, authority backed up by strong data is good. Authority that is used to smooth over weaknesses in the data is bad.

      We have better measurements of more of the climate system now than ever before. That is a stronger basis on which to make these sorts of estimates. Can it be improved? Certainly. More years of this type of data should help.

      Paleo is another line of inquiry for this type of estimate. It seems like now would be a good time to reassess the weaknesses in that approach and identify the ways to strengthen that with better data.

      • HR,
        Good summary.

        It is going to be an incredible challenge to deal with the Earth’s exploding population. Having seen the World’s population increase from two billion to over seven billion in my lifetime is stunning. An increase almost too big to believe. The brightest in all the fields, including government, have a worthy challenge. People have to learn to state the problem honestly.

        Where do you look and act next? Obviously energy sustainability is front and center. Getting one’s head focused and out of the sand is never too easy.

      • garry, just imagine what Malthus would be like if he were still alive. I see him blogging alongside Romm with the same level of hysteria.

        I’m relaxing in the sun, with the sound of my family in the background and no other humans in earshot, and I’m thinking there’s room for a few more.

      • There is much energy in the universe, and it is not likely that man will ever use all of it.
        =========

    • David Springer

      GARRYD

      Darwinian evolution will meet the challenge if anthropogenic efforts fail. Nature is a harsh mistress when it comes to population control. Speaking personally I have faith that science and technology are up to the challenge of maintaining rising living standards for a growing global population. They haven’t failed us yet in that regard and I see no reason to believe they will fail in the future. I eschew defeatism. It’s un-American.

  2. Why so many contributors to this article? It seems more like it was written by a couple of collaborators and the others have lent their names to it as a form of peer approval.

    • They all want to jump into the life boat

    • Steven Mosher

      read who did what parts and you will understand.

    • The energy budget for the whole climate system is derived from several data sets (for the ocean, land, air and cryosphere). Several authors are credited for providing these. I agree though with Annan that this is recognition that the better data coverage of the past decade is giving us new insights.

  3. Judith, I commented elsewhere on this important paper today. Tried to extend those ideas in a format compatible with your scientific blog, but failed (old iPads have limitations that apparently the ghost of Jobs reincarnate cannot fix, or at least I cannot conjure his ghost forth).
    I have to go about real business tomorrow early AM, will not try here twice. Suffice it to say this is a litmus test of the IPCC. They accepted ‘grey’ papers for AR4. Why was this one a smidgen late and so a bit grey?

    The tell is that it echoes at least seven other papers, plus my book chapter, since AR4. So if its authors are required to ignore their own paper in the AR5 chapters they head, that is beyond a tell. It would be proof positive of rigged science.

    • Nic Lewis has been writing about the lower ECS and exposing the errors in the statistical analysis of ECS in papers included in AR4 for two years. So why did it take until after the AR5 deadline to get this paper accepted? Conspiracy?

      • Steven Mosher

        peter. Nic did a version of this earlier this year. In the interim he worked on another paper. coordinating all the details takes time. Why assume the worst in people.

      • Wasn’t Nic Lewis’s ECS centered on 1.6C?

        The central estimate is dropping from 3C to 2C based upon recent observations. If warming resumes with a little more vigor than this study is predicting, the new observational estimate for ECS would presumably be higher, right?

      • Mosher,

        I am not assuming the worst of Nic Lewis. Why would you assume that?

        But I am wondering if unwelcome papers take longer to get through the review process (when it suits) than papers that are welcomed – cf. Michael Mann’s hockey stick papers versus Steve McIntyre’s critiques (I presume you’ll get the point without diverting into being a pedant).

        And regarding your usual unnecessary, unpleasant, loaded comment (typical of an IT geek), it is rather hypocritical, don’t you think, for you to be making such a comment?

      • Nic Lewis comment at WUWT:

        “There seems to be a general assumption at most blogs that because the new Nature Geoscience paper has only just appeared, it is too late to be cited in AR5. That is wrong. The paper was accepted by the journal, after peer review, by the AR5 WG1 acceptance deadline (15 March 2013). It will be cited in AR5 WG1.”

      • JCH

        Yeah. And if the “lack of warming” continues for another decade or three, it will be even lower.

        Right?

        Max

      • JCH

        Yeah. And if the “lack of warming” continues for another decade or three, it will be even lower.

        Right?

        I don’t know, but my hunch would be it would depend largely on what happens in the oceans with respect to OHC and SLR.

      • About 1/3 of the global temperature anomaly is being diverted into OHC. So a TCR of 2C translates to an ECS of 3C per doubling of CO2.
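
        Read charitably, the claim is that about a third of the current forcing is going into ocean heat uptake rather than surface warming; under the simple energy-budget relation ECS = TCR/(1 − f), the commenter’s numbers then follow. A minimal sketch of that arithmetic (f and the TCR value are the commenter’s own, used only for illustration):

        ```python
        # Sketch of the implied arithmetic; values are the commenter's, not established results.
        f = 1.0 / 3.0          # assumed fraction of the forcing still going into ocean heat uptake
        TCR = 2.0              # C per doubling, the commenter's transient value
        ECS = TCR / (1.0 - f)  # equilibrium response once that uptake is realised
        print(ECS)             # 3.0 C per doubling, matching the comment's claim
        ```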

      • Webster, “About 1/3 of the global temperature anomaly is being diverted into OHC. So a TCR of 2C translates to an ECS of 3C per doubling of CO2.”

        About? That stratosphere approach curve that you avoid like the plague is an indication of the rate of change of ocean heat uptake. The rate of ocean heat uptake started slowing prior to the pause. The current rate of ocean heat uptake is 0.3 Wm-2 +/- a touch, down from about 0.6 Wm-2 prior to 2000.

        ECS is related to OHC*e^(-t/RC); TCR is decreasing because the rate of ocean heat uptake is decreasing.

        Write that down :)

      • Here ya go Webster,

        https://lh6.googleusercontent.com/–Igxvx79vsQ/UZzPrY7sckI/AAAAAAAAINw/0NwzadXj53Y/s912/Oceans%2520and%2520Atmosphere.png

        I did torture that. Using the Reynolds OIv2 absolute temperature I converted all three to “relative” energy anomaly. That just scales their responses to the actual changes in “surface” energy flux instead of temperature anomaly. Oh, I did invert the lower stratosphere so it is easier to compare. Since the stratosphere “mirrors” the changes in the rate of ocean heat uptake, you can get a rough approximation to compare with the ARGO data, which while short also indicates a similar “approach” curve.

        It is just like checking your battery charger Webster, it ain’t rocket science.

      • Where is the statistical noise analysis?

      • Webster, “Where is the statistical noise analysis?”

        Haven’t done one that could be called rigorous. Just using comparisons: UAH and RSS agree better with steric sea level variation and annual CO2 variation by region, using Barrow, Mauna Loa and South Pole. The surface temperature records seem to have a problem with either polar temperature interpolation, seasonal signal removal or a combination of the two. That standard deviation plot shows something is off a touch.

        I haven’t really dug into that since Mosher was planning a comparison of surface stations with major-body-of-water proximity. Plus there are so many other people looking at it I thought someone would have done something. So far, Wayne2 and I are the only people I know of that have questioned the changes in standard deviation and first differences.

        I showed the SD issue to you over a year ago. Wayne2 picked up on the first difference shift at ~1955 a couple of months ago.

      • Cappy,
        Graeme Stephens made an estimate of 0.58 W/m^2 energy imbalance for the Earth as a whole. Divide this by the ocean’s 0.7 share of the Earth’s surface area and we get 0.58/0.7 = 0.83 W/m^2 concentrated as an OHC imbalance. This value of 0.83 W/m^2 is close to the one quoted by Trenberth of 0.85 ± 0.15 W/m^2 originally estimated by Hansen et al. in 2005.

        According to the skeptics, Stephens is in disagreement with Trenberth and Hansen. So why are they in so much agreement with 0.83 versus 0.85? Is it the error bars that they are arguing over? What kind of error bars are you contributing to the discussion?

      • Webster, “Cappy,
        Graeme Stephens made an estimate of 0.58 W/m^2 energy imbalance for the Earth as a whole. Divide this by the ocean’s 0.7 share of the Earth’s surface area and we get 0.58/0.7 = 0.83 W/m^2 concentrated as an OHC imbalance. This value of 0.83 W/m^2 is close to the one quoted by Trenberth of 0.85 ± 0.15 W/m^2 originally estimated by Hansen et al. in 2005.”

        I don’t have a problem with Stephens because he uses realistic margins of error. +/- 0.15 was not a realistic margin of error, especially when that K&T Energy Budget had a glaring 18 Wm-2 error. Stephens’ margin of error at the surface is +/- 17 Wm-2 and nearly +/- 0.5 Wm-2 at TOA.

        Heck, I even estimated a 1 Wm-2 imbalance. The question is whether, and how quickly, that imbalance is changing. The battery charger issue.

      • Graeme Stephens is being held up by the skeptics as an independent thinker who bucks the analysis of the climate science orthodoxy. Yet it remains a fact that Stephens agrees with Trenberth, Hansen, Levitus on the OHC uptake rate.

        Why is that, Cappy? Or are YOU the guy that the skeptics are supposed to rally around? Not Graeme Stephens, but Cappy the guy with the authoritarian title. Aye, Captain!

    • Steven Mosher

      It missed the deadline. The decision is not that hard. note, I’m on a paper that missed the deadline. rulz are rulz. no cookie.

    • I am not assuming the worst of Nic Lewis. Why would you assume that?

      But I am wondering if unwelcome papers take longer to get through the review process (when it suits) than papers that are welcomed – e.g. Michael Mann’s hockey stick papers versus Steve McIntyre’s critiques (I presume you’ll get the point without diverting into being a pedant).

      And regarding your usual unnecessary, unpleasant, loaded comment (typical of an IT geek), it’s rather hypocritical, don’t you think, for you to be making such a comment – dope!

      • Steven Mosher

        sorry peter, when exactly do you think Nic and the others started on the letter, and when do you think it got accepted, and how long did the process take? I’m not suggesting that you think ill of Nic. I’m talking about your conspiracy thing WRT others. You have no idea about the timeline. You might want to consider the possibility that others do.
        Since you don’t know the timeline, when they started and when they submitted, you might pour some ice water on your fevered imaginations, or Joshua might show up and pester you.

  4. I would suspect that a number of those author names were added specifically for AR5. It would be interesting to find out what names were omitted :) .

  5. Steven Mosher

    well its interesting to see more lukewarmers…

    willard, the pope of lukewarmers blesses you.

    hey look there is hope! go figure. Hope it doesnt spoil MT’s day.

    http://www.newscientist.com/article/dn23565-a-second-chance-to-save-the-climate.html

    • That is a messenger who will be shot by everybody. Even his own mother.

    • Personally I am pleased to see this paper. I think it is a step in the right direction. I also think the numbers will keep creeping down as the “hiatus” continues and as the understanding of natural variations expands and get factored in.

      So no cigar for lukewarmers just yet ;)

    • michael hart

      I have a sneaking suspicion that we are going to get 3rd helpings (& more) of chances-to-save-the-planet rammed down our throats.

      There are plenty of people who don’t want the banquet to end yet. Journalists not least.

  6. The Skeptical Warmist (aka R. Gates)

    A few general questions:

    1) TCS is fairly easy to estimate and requires far fewer integrations than ECS, but how can anyone have a good grasp of ECS without knowing all the feedbacks involved? Certainly we are far from understanding all the feedbacks. For example, considering how far off the models have been so far in even getting the decline of Arctic sea ice correct (which will impact ECS, certainly), how can anyone with any certainty level above about 50% say anything meaningful about ECS?

    2) Isn’t what we really care about ESS? (Earth System Sensitivity), see for example: http://www.nature.com/ngeo/journal/v3/n1/abs/ngeo706.html

    The slower feedbacks such as cryosphere and biosphere will take centuries to fully reach some kind of equilibrium response, and this is ultimately the climate humans will be forced to deal with. In this regard, love him or hate him, Hansen et al. are completely correct in looking at the paleoclimate data alongside the models and coming up with an Earth System Sensitivity estimate. Recently, the paleoclimate data from Lake E has been especially helpful in this regard.

    • Agreed, see my points below. Even the tropical ocean surface hasn’t fully responded yet. Armour et al. talks about this delayed response issue in short-term sensitivity estimates.

    • TSW,

      “Certainly we are far from understanding all the feedbacks.

      …how can anyone with any certainty level above about 50% say anything meaningful about ECS?”

      Now if only you could apply this perfectly reasonable grasp of uncertainty to the other areas of CAGW dogma.

      Including your point 2. Try this variation for example:

      “Certainly we are far from understanding all the feedbacks.

      …how can anyone with any certainty level above about 50% say anything meaningful about climate model projections?”

      You can do the same thing with global average temperature, paleo proxies, sea level rise….

    • Whenever the ECS is considered as opposed to TCR, it’s essential that the removal of carbon from the atmosphere is also taken fully into account. While a fraction of carbon stays very long, that’s only a small fraction. As the total amount of carbon to be released is limited, the ECS that corresponds to the maximal CO2 concentration will never be reached. TCR may be more relevant as a guideline even for very long term considerations.

      • Pekka

        Agree fully that TCR is a more relevant parameter, if we consider centuries to reach “equilibrium”.

        This would mean that the new data are suggesting that AGW is even less of a future threat than one would estimate, based on the ECS of ~2C, rather than the TCR at ~1.3C.

        Max

  7. Unimportantly transitory paper with too much trumpet, not enough Mozart.

    By AR6, will have shrunk to insignificance, and too late for AR5, so all it has is blogplaints and bumpf.

    • Steven Mosher

      It will make for a good test of peoples principles with regard to papers that miss deadlines. thats a knuckleball in the fine sport of climateball.

      • Steven Mosher | May 20, 2013 at 12:26 am |

        I suspect it’s more a test being floated in anticipation of other outcomes, and possibly a bit of a trap.

        I’ll hold off on further comment until after AR5.

      • Steven Mosher

        ya well, for me its easy. rulz are rulz, miss the deadline, see you at Ar6. by which time I suspect the waters will be equally muddy

      • With two lead authors on the list I would expect at least a blurb mentioning some paper that just missed the deadline that increases the uncertainty monster. They did downplay the results a tad for a CYA though.

      • Fair point about the deadline for AR5, but this paper makes it harder to dismiss Nic’s earlier paper

      • Steven Mosher

        looks like it made the deadline.

      • The deadline issue isn’t a curve ball. It’s a spit ball.

        The IPCC trumpeted its rules for inclusion in the AR4 as making it the “gold standard.” But it ignored the “rule” against gray literature on a wholesale basis while keeping to the “deadline” rule. Why? Because the solons of the CAGW smart set (the consensus) controlled the review process and rushed through the papers they wanted, and stalled those they didn’t. There was no need to violate that rule, so it was enforced.

        But there was so much good PR available in the gray literature, which did not meet the “rule” for AR4, but would help make the sale on the need for decarbonization. So they ignored the rule and picked and chose at will what pseudo science (disappearing glaciers and rain forests being just two examples) they wanted to include.

        You either are an ethical, rule obeying person, or you aren’t. If you follow one rule slavishly, but ignore another in toto, we have established what you are, and all we are doing is dickering about the price.

      • I don’t know Mosher. For one so fond of Socratic type inquiry, I’m surprised to hear you call it a simple issue. “rulz are rulz” is just too easy. Isn’t this a case where competing values must be sorted? If there’s a sign on a burning building saying “no trespassing,” aren’t you going to consider rushing in anyway to save the fair damsel?

      • It is listed as Correspondence. Does that get peer-reviewed or is it just the Editor who decides to publish it? I suspect the latter, so it may not count as peer-reviewed.

      • Steven Mosher

        Socratic inquiry? that’s a bizarre notion to throw into the mix.

        In the first place I was wrong. I thought the letter had to pass both the submitted-by date and the accepted-by date. Looks like it only had to clear the accepted-by date. For me it is pretty simple. The rulz are the rulz. If one wants to make burning-building exceptions, then those exceptions need to be well thought out, well reasoned, and applied consistently.

        So, let’s suppose that the Letter had missed the deadline. I don’t think we have a burning-building exception where the good produced by breaking the rule outweighed the good of keeping the rule. It’s one Letter in a sea of science. Breaking the rulz to include it would apportion a level of importance to it that is not yet justified. Folks need time to assess it, push on it, and we should see how it stands. The same was true for the Jesus paper. AR4 would not have been harmed by its exclusion. AR5 would not be harmed by the exclusion of this Letter. It would have put the authors in a tricky situation, and perhaps folks will wise up and change the silly rulz and make the AR process a living document. But until they do, rulz are rulz and exception making needs to be done with the greatest of care.

      • Mosh, Yes, you’re right on my poor choice of words. I simply meant that you’re fond of close examinations of philosophic issues…and sometimes appear to use a Socratic technique.

        This issue is to my way of thinking a philosophic question… i.e. ethics….or “moral philosophy” if you prefer. In my opinion your “rulz are rulz” though not without a folksy appeal, does not do the situation justice.

      • Steven Mosher

        pokerguy

        As one of the folks who complained about the fiddling with the dates and the crowbar applied to get the Jesus Paper in AR4, I’m certainly not going to be inconsistent and go around special pleading, even for a friend like Nic. And whats more this paper supports what I think. So, I think I show a willard of integrity.. nay two willards ( thats the official unit of measure ). Thankfully no conspiracy theorist has come along to claim that I knew it made the deadline all along

      • Anything that can help bait and switch on the Jesus paper is lukewarmingly good.

        Claiming an aura of INTEGRITY provides a nice bonus.

        INTEGRITY ™ — Bait and Switch, Sweet Jesus

    • Somebody remind me, is Bart a skeptic or a believer?

      Is the point that, given our limited understanding and limited data, most of climate science ATM is transitory? The one thing we can start to say is that we now have measurements of the energy budget for most of the climate system, which makes the present CS estimates more meaningful than past ones.

      Bart, your comment is like that of the worst sort of contrarian who wants to ignore the fact that the science is moving on.

      • I agree with your history.

        In 1981 Hansen had no ocean option. That’s why they built and deployed ARGO. He had a semblance of a surface temperature record. It has largely slaughtered the attacks upon it. The ocean data is now coming in, and Hansen quickly agreed with the conclusion that the oceans in the models were taking up too much energy, and that would mean there is less heat in the pipeline. The orbiting stuff largely agrees with ARGO.

      • > Somebody remind me is Bart a skeptic or believer?

        Who cares. He’s a lukewarmer, like we all are, except for irrational people. Does this mean we’re rational?

        More trumpets, please.

      • Who cares. He’s a lukewarmer, like we all are, except for irrational people. Does this mean we’re rational?

        I’m not a lukewarmer, I’m an agnostic. Does that make me irrational?

      • Steven Mosher

        AK,

        you are a lukewarmer you just dont know it yet.

      • AK,

        A lukewarm baptism does not need water.
        Nor does it need blood.
        All it needs is some kind of blog presence.
        And even then, who cares, really.

      • @Steven Mosher, willard (@nevaudit)…

        you are a lukewarmer you just dont know it yet.

        Actually, both ECS and TCR are myths. Just like “forcing” they involve forcing (heh!) a very complex non-linear system into a Procrustean bed made of unwarranted assumptions based on linear intuitions. IMO there’s really no good reason to assume either response is linear. Perhaps the TCR will suddenly jump to twice its value when the GHG forcing crosses 2.5 W/m^2. Or drop to zero. And pick your value.

        I’m also skeptical that CO2, or GHG’s in general, constitute a “control knob” anywhere close to the importance of geological features, and the ocean currents they influence. Models based on that assumption haven’t been able to explain the Pliocene Paradox, even when they’re allowed to do all sorts of “current fitting”.

        So am I really a “luke-warmer”, or an agnostic like I said?

      • Steven Mosher

        yes AK, nothing you have said is inconsistent with lukewarmerism.

        see I even used a Hansenesque double negative

      • Steven Mosher

        yes willard lukewarmerism is not unimportant

      • So why not set up new categories over whether or not somebody advocates “solutions” that involve significant increases to energy prices? You could have “warmists” who say there’ve got to be, “luke-warmists” who say do all sorts of things, but don’t raise energy prices, and “deniers” who say there clearly isn’t a problem and nothing needs to be done. That way people could argue both what and why.

      • HR | May 20, 2013 at 8:44 am |

        I’d think a criticism of a paper ought be most clear absent personal bias of any sort.

        I am most definitely a skeptic of the question-everything-always variety, not a ‘skeptic’ of the climate camp cults, and while I have beliefs, they are nothing I really care to impose on strangers on a blog, other than by my conduct.

        It is of course a silly standard to hold a paper up to, whether it is or is not included in this AR or that one. The length of time it takes to publish, and the arbitrary nature of committee rules for inclusion, and the relative unimportance of committee rules compared to the drive to get the Science right, is a patent bit of nonsense.

        It is also silly to prejudge how a paper might fit into this AR or that one, until the final publication. What AR5 looked like in leaks will hardly be what the final polished product will be, or perhaps it will, but the world it is released into will be different. If this paper is included, the increasing pace of climatology publication dictates dozens of others more important will be extant and not included. Papers ought be considered on their merits and our interest in them, not on how some committee rules.

        So why all the trumpet about this single paper that largely is consilient with the consensus of the rest of the field? There’s nothing brilliant, revolutionary or inspirational about it. It’s of solid, workmanly quality so far as it goes.. which is pretty much noplace. Unless you’re telling me you’ve worked its conclusions into a large and novel model and it has tipped your conclusions from this new unrevealed grand unification to completion explaining all we had up to now been ignorant and uncertain of. In which case, you’re likely Girma.

        Get six hundred years more data, or produce six thousand years of well-validated simulations, and apply the techniques of the paper to that, and then we’ll have something to talk about. Find an insight that lets the rigor claimed be applied in new and interesting ways that extend the field. This is mere extension of decimal points of precision, which is a practice that has had little rank in science for over a century due to its overloud trumpeting as all that was left to do at the turn of the previous century.

        I’d love if the science were moving on, in this paper. That’d be something worth trumpeting. But all I see here is tribal squabble and toldjaso’s.

        willard (@nevaudit) | May 20, 2013 at 10:00 am |

        I’m most definitely no variety of lukewarmer, and would much rather be counted among the irrational, if I had to choose between lukewarm and nuts.

        Warming is absolutely happening, and will until it stops, which is a point we cannot predict and in a manner we cannot know.

        Human activity is absolutely a key, uncontrolled and net harmful factor in warming and uncounted other widespread changes. Human policy at the national, multinational and local levels is largely contrary to good sense and often unjustifiable economically or on arguments of rights, law, democracy, fairness, justice or prudence.

        This costs me money. That it costs you money, for nine out of ten of you, is not my real concern. Why it isn’t yours, I cannot fathom.

        I want my money. If you don’t want yours, that’s fine. I’m not above accepting it from you, if you’re really that eager to give it up.

        That’s my stance. That’s what I’m warm under the collar about. And believe me, when it’s about my money, it’s not lukewarm.

      • > I’m also skeptical that CO2, or GHG’s in general, constitute a “control knob” anywhere close to the importance of geological features, and the ocean currents they influence.

        This is stronger than agnosticism, AK.

        An agnostic like me would not care less about entertaining a personal belief regarding all this.

      • Steven Mosher

        Bart you believe in GHGs?
        you are lukewarmer material.
        you have to be really nutty to get outside the big tent

      • Bart R,

        You don’t understand.
        You can’t even decide to be out.
        This would be rational.
        You can’t rationally decide to be irrational.

        Only the Pope decides:

        If I choose to divide the world into 3 classes: wacked out alarmists. Wacked out skeptics; and the sane middle ground, you dont get to challenge my classification. You simply dont get to challenge it. And in the end you will see that 97% of people are
        in the middle, as I define it.

        http://rankexploits.com/musings/2013/on-the-consensus/#comment-113304

        Enjoy your baptism,

        w

      • HR | May 20, 2013 at 8:44 am |

        *sigh*

        Reading me this long, and not nearly hard enough to know otherwise?

        GHG’s are mere detail, and too much detail.

        You miss some substance of what AK proposes, I suspect.

        Warming, cooling or standing still as far as GMT is concerned, the Forcing will out. It will arise in restructuring of climate bodies like ocean circulations and jet streams, blocking patterns and oscillations until they break out of their old attractors. It will arise in different patterns of extremes, new distributions. It will come at us sideways, because we think there is a forward and there just isn’t.

        So all the discourse in the world about temperature sensitivity to CO2E levels is liable to miss the point. The luddites who believe history has precedents for this because history has seen some of these random changes in some basins at some times are lulling themselves comfortably into a slumber, and doing it in a calculated manner to leave them slumped at the wheels with throttles on full.

        Why?

        For some, sheer contrariness.

        They’ve been in charge for so long, they’re not letting anyone wrest control from them.

        Or they’ve never had any sense they’ve been in charge and now that they can do something that even feels like control, they’re not going to give it up.

        Or they really believe they’re doing right, not wanting to be convinced of some folly by people just like the last group that convinced them to do wrong.

        Or they’re right, by every measure they find right, those measures being built on considerations of other worlds than this real physical one, and they will never be convinced by those who do not share their values.

        So you’ll excuse me if I don’t fit the categories you declare me to belong to, by simple act of declaring I do not subscribe to your frame of reference. Or you won’t excuse me, and live in constant surprise as your reference frame fails to come to grips with the real world over and over again.

        Boil or freeze or remain stuck on some plateau, I simply don’t care. The Forcing is the thing that matters.

        The control knob isn’t one of temperature, but of the level of Forcing.

        It may be CO2E emission alone, or land use alone, or some combination of the two.

        It may lead to cataclysm or unbidden utopia.

        But if you’re selling unbidden utopia, I’m not buying any: it’s too much freedom to give up to the tyranny of someone else making those decisions on my behalf without my consultation and without compensation to me on my own negotiated terms. What fairness is there in someone else deciding for everyone on the planet what is right for the air they breathe without paying a price? If you want to slump over the console with the throttle on, you pay me for what you take from me.

        Not warmist. Not denier.

        Capitalist.

        I want my money.

    • Bart R

      Agree with you that AR5 (and AR6) “will have shrunk to insignificance” by the time of their publication, as will IPCC itself.

      We appear to agree.

      Max

      • manacker | May 20, 2013 at 5:58 pm |

        By demonstration of inability to comprehend what you read, or will to miscomprehend so purposely, we can be seen to agree that your comments on the writing of others are not worthwhile as a guide of what was originally written.

        As we can’t trust the premise of what you claim, we can never trust the conclusion.

        We agree, it’s not worth reading what you write.

  8. My question here about their linear ECS extrapolation is whether the recent trends in temperatures, which are mostly over drier areas (land, polar regions), can be projected into the future, or whether the tropical ocean surface will eventually warm faster and release more water vapor to increase the H2O feedback and hence ECS. Their assumption that the land will continue to warm as it has been is worrying when you work out that its recent transient sensitivity (1980-2010) is 4 C per doubling. I suspect their low ECS is because the warming has not yet affected the tropical ocean surface much, but if that continues it is not a good scenario for the land and Arctic, which have to warm more to compensate. Hopefully someone will address the assumptions behind linear extrapolation. I seem to recall that the Armour et al. paper was relevant to this issue, taking into account different regions warming at different rates due to their thermal inertias, which indicates short-term sensitivities may be underestimated.

    • Part of the abstract from Armour et al. (2012, J. Climate)
      “Time-variation of global climate feedback arises naturally when the pattern of surface warming evolves, actuating regional feedbacks of different strengths. This result has substantial implications for our ability to constrain future climate changes from observations of past and present climate states. The regional climate feedbacks formulation reveals fundamental biases in a widely-used method for diagnosing climate feedbacks and radiative forcing—the regression of the top-of-atmosphere radiation flux on surface temperature.”
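
      The “widely-used method” the abstract refers to is the regression of the net top-of-atmosphere (TOA) flux on surface temperature change, often called the Gregory method: N ≈ F + λΔT, so the intercept estimates the forcing and the slope the feedback. A minimal sketch with synthetic, made-up numbers, shown only to illustrate what is being regressed (not a reproduction of any study’s data):

      ```python
      # Sketch of the TOA-flux-on-temperature regression (Gregory-style diagnostic).
      # All numbers are synthetic and for illustration only.
      import numpy as np

      rng = np.random.default_rng(0)
      true_F, true_lam = 3.7, -1.2                # W/m^2 and W/m^2/K, assumed "true" values
      dT = np.linspace(0.0, 4.0, 150)             # K, warming after an abrupt forcing step
      N = true_F + true_lam * dT + rng.normal(0.0, 0.3, dT.size)  # W/m^2, noisy TOA imbalance

      lam_hat, F_hat = np.polyfit(dT, N, 1)       # slope ~ feedback, intercept ~ forcing
      print(F_hat, lam_hat, -F_hat / lam_hat)     # zero-crossing approximates equilibrium warming
      ```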

    • JimD, that was the point in the Stephens energy budget. With latent heat underestimated, the “potential” warming was greater. Each Wm-2 of additional latent heat offsets ~0.8 Wm-2 of GHG warming at the surface. With latent heat underestimated, precipitation was also underestimated, cloud absorption of OLR was underestimated and convection was underestimated. These new estimates are based on the energy budget models; that is why there is such a large impact.

      The question of the oceans and “pipeline” ECS hinges on paleo. If there is a significant amount of the warming during the past 100 years due to recovery, then ECS will be lower. That IPWP reconstruction I showed you indicates there has been substantial longer term ocean recovery from ~1700 AD in that region, which represents a large amount of OHC. The shift in the lower stratosphere cooling preceding the 1998/99 El Nino indicates a change in the rate of ocean heat uptake contributing to the “pause” as the tropical ocean warming slows. That curve in the stratosphere cooling slope, resembling the approach to an asymptote, is lagged by a similar slope change in the surface temperatures. Each of those slope changes is an indication of a change in “sensitivity”. All the wiggles can actually mean something.

      https://lh5.googleusercontent.com/-yCVnY6nXIiQ/UZmVEhGt-oI/AAAAAAAAIJs/EozQSkgn614/s817/IPWP%2520spliced%2520with%2520cru4%2520shifted%2520anomaly%2520from%25200ad.png

      It’s great news Jim! We have a second chance to save the world.

      • But, as I pointed out, their linear extrapolation means the land will fry unless the ocean surface starts warming more quickly to take some of the response load.

      • JimD, “But, as I pointed out, their linear extrapolation means the land will fry unless the ocean surface starts warming more quickly to take some of the response load.”

        Linear extrapolation from what altitude? You are back to the initial “no feedback” sensitivity and a new initial condition after allowing for the recovery. All of that recovery energy “appears” as an ln(2) function amplified by the ln(cf/ci). Since the warming due to CO2 over land is greater than over ocean at the new initial conditions, you have to allow for the difference in Cp with altitude. Basically, you have to start the modeling over, get rid of most of the aerosol “adjustments”, and now every “other” forcing has a larger impact relative to CO2. That was Santer’s appraisal, oops!

      • Linear extrapolation in time from the recent past through the next century. Armour et al. say the evolution is nonlinear due to varying thermal inertias. Basically you get the wrong gradient with short records and accelerating forcing changes. The paper also refers to Armour et al. as a reason its ECS is a lower limit, but they make no attempt to correct for it.

      • JimD, “Armour et al. say the evolution is nonlinear due to varying thermal inertias.” Right, varying thermal inertia aka Cp or specific heat capacities. The paleo will provide some insight on the ocean inertia and you need to allow for the land altitude, specific humidity etc. and back out the natural recovery ln(2) portion of the land warming trend. That is why it is a model do over. That is the “Land Amplification” due to differences in specific heat capacity I have been talking about.

        The Team has one serious mess on their hands.

      • Paleo does give much higher ECSs (greater than 4 C), but that is only because of glaciers melting and vegetation spreading into polar regions which are longer term effects. It doesn’t have the century resolution to say how quickly this value evolves, but much of the ocean response should be delayed by less than a century, I would think.

      • JimD, “Paleo does give much higher ECSs (greater than 4 C), but that is only because of glaciers melting and vegetation spreading into polar regions which are longer term effects. It doesn’t have the century resolution to say how quickly this value evolves, but much of the ocean response should be delayed by less than a century, I would think.”

        Start with your first sentence, the “greater than 4 C”: paleo sensitivity estimates range from greater than 6 C to less than 2 C, as low as 1.1 C. They are all over the place. Why is that? http://onlinelibrary.wiley.com/doi/10.1029/2009PA001809/abstract

        Now how much glacial recession do you think is left relative to the different paleo eras? What is the real impact of OHT? Since the resolution is not fine enough to pick out 30 to 60 year fluctuations, you have to start at a point with less uncertainty and rely on the physics.

        That is “no feedback” sensitivity and brand new initial conditions. I am afraid that it is a do-over, JimD, time for a Climate Science Mulligan.

        But life is good, I am in the cheap seats watching the largest scientific train wreck in human history.

    • The New Scientist article focuses on the 2 C target that they say is now reachable with this new estimate if policies get enacted by 2020. I would caution that a 2 C global average rise is misleading if the land amplification is a factor of two, which is what it appears to be. The Arctic amplification is well known to be even greater, so prospects for Greenland’s glaciers look dim.

      • JimD, Land amplification is close to two because it is responding to ocean heat uptake, part natural, part “other” and CO2 forcing. So what the “final” value is depends on the percentage Natural versus the percentage “other”. That is why you end up with the 50% +/-20 or 1/3, 1/3, 1/3 splits on attribution estimates. Since CO2 is actually the best known, it can be used to estimate the ratios. Kind of ironic, but Hansen, Best and even Webster’s estimates are then “worst” case scenarios.

        As for Greenland glacier prospects looking dim, the ~150 melt “cycle” is another clue that there is longer term variability. If there is longer term natural variability or recovery in this case, there is Joy in Mudville. If there is not, we are screwed anyway, might as well do some partying.

        You should really look at that Indo-Pacific Warm Pool reconstruction, that is the hot side of the ENSO and the longer term ENSO reconstructions pretty clearly show long term variability. Once the peak is reached, then the system is near enough to equilibrium that you can make more accurate estimates. Still tons of uncertainty, but at least you will have a better idea of how much more you don’t know.

      • captd, one thing they have not acknowledged is that there is a land amplification of the global 2 C rate. This factor of two may not be sustainable because hopefully the ocean will take up some of the Planck response burden that is building up, but that doesn’t get us out of trouble, because accelerated ocean surface warming puts more H2O into the atmosphere and adds to the water vapor feedback, as this also blocks heat from escaping. It seems no-win either way: (1) the land continues to rise by 4 C per doubling (Arctic even more) even as ECS remains near 2 C, or (2) this slows down as the ocean surfaces warm faster and add H2O feedback, leading to a higher ECS, perhaps up to 3 C. (1) is just a linear forward extrapolation of what has been happening in the last few decades, which is what the article says it assumes.

      • JimD, “captd, one thing they have not acknowledged is that there is a land amplification of the global 2 C rate.”

        I am not sure who “they” are, but Tsonis did acknowledge there was some “land” and other amplification, Curry acknowledges it, and most everyone that is not addicted to the Kool-Aid acknowledges it.

        This paper can’t “acknowledge” it because the paper is based on energy flux balance. You would need some method to “tease” out the Natural impact and time frame to put a number on that part of the problem. It is a huge leap forward just having some of the authors recognize that past energy budgets were worse than useless.

        Now, if all the superseded and useless “peer reviewed” papers were culled, like would happen in a real problem solving situation, we could get down to bidness.

      • Jim D

        You are apparently not getting the message here.

        Your fear mongering no longer works.

        There is no existential threat from AGW.

        Get used to it.

        Max

      • manacker, no, the message is very clear that they think the current state can be extrapolated forwards. Consider the current state. The land has a TCR of 4 C per doubling, so they say we won’t get as much water vapor feedback, but the other areas will warm as they already are. It adds up to a 2 C global sensitivity. Given their assumptions, this is what it means.

      • I would add that the land warming twice as fast as the ocean also naturally leads to a reduction of relative humidity over land, less rain and less clouds, possibly a positive cloud feedback there too. Be cautious what you wish for with the consequences of a drier world’s 2 C sensitivity.

      • Jim D

        In AR4 the threat was from an increase in the “globally and annually averaged land and sea surface temperature anomaly” due to AGW.

        Now that this threat appears to have been a paper tiger, we have a new threat: a posited increase in “the NH land temperature” due to AGW.

        Gimme a break, Jim. You can’t simply move the goal posts and still frighten people.

        NO SALE.

        Max

      • manacker, as I mentioned above, they extrapolate current sensitivity linearly, so the logical conclusion is that the land (and Arctic) sensitivity will be maintained at its current value. I am not saying that will happen unless 2 C really is the ECS, but that sensitivity does restrict how much the ocean can do to offset the land’s role in warming. I think they made a mistake by not listening to Armour about nonlinearity in short-term estimates of ECS, which leads to an underestimation.

  9. R Gates, “the slower feedbacks will take centuries”. Could you explain what the cryosphere and biosphere actually did 200 years ago to cause the temperature rise in 1970-2000, and what happened next to cause the pause? Just curious is all.

    • “The slower feedbacks will take centuries” is CAGW code for “maybe there are no slower feedbacks at all”

      Max

      • Gad, if only we could rely on the ‘slower feedbacks’ to kick in as the glaciers advance. But then, proper skepticism wonders at the existence let alone the effect.
        ==========

    • R. Gates aka Skeptical Warmist

      You are putting the cart before the horse here. Certainly the tropospheric temperature rise during the 1970-2000 timeframe is partly natural cycles and partly anthropogenic. Moving forward, the anthropogenic forcing will have effects that will take centuries to fully work out. The GH forcing is waking up the sleeping ice masses in both Greenland and Antarctica. Once these start to move, as they now are, that thermal inertia will last centuries. See, for example:

      http://phys.org/news/2013-05-northern-hemisphere-region.html

  10. This one may have “missed the AR5 deadline”, but the other seven papers that average around 1.6C for 2xCO2 ECS did not.

    Let’s see how IPCC addresses these.

    Max

    • Peter Lang

      Manacker,

      I have to praise you for so consistently and diligently emphasising that recent, observation-based estimates of ECS are well below the estimates in AR4 and the leaked draft AR5.

    • Alexej Buergin

      When people are climbing down (and they still have some way to go), they have to save face and keep their dignity. So they will do it slowly, and not in one giant step. They will say something like “there are some signs that…” and in AR6 they will say “as we explained in AR5…”.
      In AR7 they will confirm what Girma and DocMartyn know now.

      • Alexej Buergin | May 20, 2013 at 4:54 am |

        In the same sense that the technology of WW IV will be rocks?

    • Nic Lewis says the letter was accepted before the deadline.

  11. Not strictly relevant.. but not completely irrelevant:

    http://www.guardian.co.uk/environment/2013/may/17/global-warming-not-stalled-climate

    If ECS is so low, and yet AGW is still chugging along normally through its low phase (which is due to end shortly, as phases do), then what does this say of the high phase?

    • Who cares. We had the science discussion. Now we can cut to the policy chase.

      Thank you for playing the lukewarm pea and thimble game.

      • willard (@nevaudit) | May 20, 2013 at 10:13 am |

        Quite welcome.

        Thank you for participating in Three Forcing Monte.

    • Steven Mosher

      Hansen

      ‘”It is not true that the temperature has not changed in the two decades.”

      haha. where’s willard

      • Sometimes, it’s politeness.
        Sometimes, it’s something else.

      • Steven Mosher

        ‘”It is not true that the temperature has not changed in the two decades.”

        write that down willard, on your yumblr.

      • For now, let’s see what Hansen said:

        In the last decade it has warmed only a tenth of a degree compared to two-tenths of a degree in the preceding decade, but that’s just natural variability. There is no reason to be surprised by that at all. If you look over a 30-40 year period the expected warming is two-tenths of a degree per decade, but that doesn’t mean each decade is going to warm two-tenths of a degree: there is too much natural variability. […]

        Our understanding of global warming and human-made climate change has not been affected at all. It’s because the deniers [of the science] want the public to be confused. They raise these minor issues and then we forget about what the main story is. The main story is carbon dioxide is going up and it is going to produce a climate which is going to have dramatic changes if we don’t begin to reduce our emissions.

        Our first emphasis underlines the subject of a previous hurly burly at Judy’s. The second one underlines that even Hansen seems lukewarm to the idea of keeping business as usual. Speaking of which, Moshpit’s playing with someone else’s business, yet again.

        So be it.

        It might be interesting to compare this double negative with Judge Judy’s.

        So much to do, so little time.

      • Steven Mosher

        ‘”It is not true that the temperature has not changed in the two decades.”

        thats pretty funny don’t you think willard.

      • Steven Mosher

        willard

        “, but that doesn’t mean each decade is going to warm two-tenths of a degree:”

        strawman. those folks who point at natural variability know that there will be ups and downs. In fact, they listened to folks when they said 10 years without warming would be rare.. no wait, 13 years. no wait 17.. and now Ray is saying as many as 25 years (kinda like moshpit, wonder if he read me).

        The assumption that the recent muting of warming is “just noise” or “natural variability” of course insulates folks from looking for real causes. Like TCR being too high, or aerosol forcing being wrong..
        Just noise.. nothing going on here, move along.. why, that was Willis’s argument about the warming being nothing surprising.
        The current observations are at the boundary of 95%.. a 1 in 20 occurrence is interesting.. and one can always assume it’s noise.. shit happens.. but some folks are more curious

      • Steven Mosher

        “The main story is carbon dioxide is going up and it is going to produce a climate which is going to have dramatic changes if we don’t begin to reduce our emissions.”

        Well, we have begun to reduce our emissions. In fact the US met its obligations under a treaty we never signed. Go figure. While those who signed it didn’t do so hot. That’s evidence for the weakness of treaties, not evidence in favor of treaties. You have skeptics calling for reductions in black carbon: crickets on the other side. Folks calling for fracking, which buys us more time: crickets. A push for nuclear: crickets. Hell, it took Sandy to get folks to say the word adaptation. At some point folks may own up to their misguided push for global treaties and they may own up to wasting decades of political capital. If they care about the planet it’s a small price to pay.

      • > thats pretty funny don’t you think willard.

        Yeah, very. But my point was not about funniness.

        It was about something else.

    • David Springer

      The low phase is 30+ years long. It’s got a long way to go. It’s basically a sine wave and we reached the apex probably in 2005 and GAT began to decline in 2010. The decline, or possibly plateau, will likely continue IMO until about 2035 when we cross through the nadir. We won’t have a complete cycle of the AMDO in the satellite record (which is the only reliable record IMO) until 2040 give or take a few years.

      This could very well be the first in a series of ECS downgrades over the next 20-30 years forced by observations diverging too far from model projections.
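
      (A quick illustration of the cycle Springer describes: the sketch below, in Python, generates a toy anomaly series from a 60-year sinusoid peaking in 2005 plus a steady background trend. The amplitude and trend values are purely illustrative assumptions, not estimates from any dataset.)

import numpy as np

years = np.arange(1950, 2061)
period = 60.0      # assumed full cycle length in years (half-cycle 2005 -> 2035 nadir)
apex = 2005        # assumed peak year, per the comment above
amplitude = 0.15   # assumed oscillation amplitude in deg C (illustrative only)
trend = 0.01       # assumed background trend in deg C per year (illustrative only)

# toy global temperature anomaly: linear trend plus a 60-year cosine peaking at the apex
gat = trend * (years - 1950) + amplitude * np.cos(2 * np.pi * (years - apex) / period)

# decade-by-decade linear rates show how the oscillation can mask or exaggerate the trend
for start in range(1970, 2040, 10):
    mask = (years >= start) & (years < start + 10)
    rate_per_decade = 10 * np.polyfit(years[mask], gat[mask], 1)[0]
    print(f"{start}s: {rate_per_decade:+.2f} deg C per decade")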

      • R. Gates aka Skeptical Warmist

        Your assumption seems to be that GH gas concentrations have no effect on ocean cycles? I am doubting any serious downgrades to ECS as we move forward, just as I doubt the overall accuracy of predicting ECS without the use of paleoclimate data. Too many unknown feedbacks in a very complicated chaotic system. The paleoclimate data is the only way to capture all these feedbacks, even if we don’t know exactly what they are. This kind of study seems to be the best way to move forward:

        http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.177.6584&rep=rep1&type=pdf

    • Bart R

      Yep. Not relevant at all.

      Hansen pontificates that it has warmed “over the past two decades”.

      Duh!

      Sure, it warmed more the first decade (1993-2002) than it cooled the second decade (2003-2012).

      Since this blurb, Hansen has conceded that there is a current pause in the warming (earlier thread here).

      Max

      • manacker | May 21, 2013 at 2:02 am |

        Did it?

        Warm more, I mean?

        How do you know?

        Perhaps it warmed less in the first decade (1973-2002) than it warmed in the second decade (2003-2032).

        See, the second decade’s 30-year climate hasn’t happened yet. It’s not a statistic.

        It’s a fable, a fiction, a nonsuch.

        And let’s look at David Springer | May 20, 2013 at 2:23 pm | for where the 30 year phase actually started.

        If the cooling phase is even large enough to overcome the warming in the system (a huge if, not yet established), then we must wonder when it really started?

        What’s your candidate start year? 16 years ago? 17? 25?

        And why, given the amount of new Forcing in the system, should we expect this part of the climate system to maintain its periodicity (if there even is any) while all over the climate basins other structures fall aside, or are shown to have never been?

        Are you relying on yet another thinly-disguised rehash of AMO-PDO-Solar?

        http://www.woodfortrees.org/plot/hadcrut4gl/mean:11/mean:13/from:1900/plot/esrl-amo/mean:11/mean:13/scale:1.25/from:1900/offset:-0.2/plot/jisao-pdo/mean:11/mean:13/scale:-0.25/offset:-0.4/plot/sidc-ssn/mean:11/mean:13/scale:-0.001/from:1900/offset:-0.5

        Sure, if you manipulate enough curves enough for short spans you can make them add up to something that looks a bit like GMT. The problem is, these curves fall apart when you extend them further back, or try to apply them forward, or split GMT into hemispheres.

        The 60-year model is defunct. It was defunct when Girma was debunked on it. It’s defunct now.

    • The so-called AGW itself is just a phase, which ended around the beginning of this century.

  12. What struck me is that this “marked lowering of the IPCC “likely” range”, as James Annan put it, only came about after observations started to deviate from the expected path. Climate sensitivity is a derived quantity – i.e. it cannot be measured directly but has to be derived from other observables. Hence, with every new extension of the observational record – either up or down – the climate sensitivity can/will be adjusted. It is thus observation-driven, not theory-driven, even though it appears to work quite well in the numerical climate model environment. But then again, a numerical climate model is nothing more than a numerical representation of climate theory. Thus, climate sensitivity is ‘just’ a construct, and ultimately, if observations deviate in unexpected directions, it is not unimaginable (although currently unlikely, given its importance) that in the end the concept of climate sensitivity will be judged unrealistic.

    The crux now is to try to figure out why climate models appear to deviate from what is happening in reality (and for the moment let’s assume that this is accepted as fact, as I am sure not everyone will surrender so easily). If not, then when the next bunch of ‘share prices’ has materialized, a new value of climate sensitivity will be determined, but without advancing understanding of the climate system.

    Given the bunch of uncertainties as expressed in the Uncertainty Monster, it remains to be seen if it can be easily determined what exactly is going on.

    Interesting times.

    • Climate Theory says it snows less when oceans are warmer.
      They promised that snow would be a thing of the past.
      Climate Data shows it snows more when oceans are warmer.
      October 2012 to May 2013 is more than ample proof.
      When models are based on Theory that is Flawed they will Fail and they did.

  13. Chief Hydrologist

    ‘Although climate sensitivity is usually used in the context of radiative forcing by carbon dioxide (CO2), it is thought of as a general property of the climate system: the change in surface air temperature (ΔTs) following a unit change in radiative forcing (RF), and thus is expressed in units of °C/(W/m2).’
    Wikipedia

    What do we really know about radiative forcing and how and why it changes?

    http://s1114.photobucket.com/user/Chief_Hydrologist/media/AdvancesinUnderstandingTop-of-AtmosphereRadiationVariability-Loebetal2011.png.html?sort=3&o=30
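
    (As a back-of-envelope illustration of that Wikipedia definition, and nothing more: with a canonical forcing of roughly 3.7 W/m2 for doubled CO2, each assumed value of the sensitivity parameter maps directly onto a warming per doubling. The lambda values below are arbitrary examples, not estimates.)

# climate sensitivity parameter (lambda) in deg C per W/m^2, per the definition quoted above
F_2X = 3.7  # approximate radiative forcing for doubled CO2, W/m^2

for lam in (0.3, 0.5, 0.8, 1.1):  # arbitrary illustrative values of lambda
    print(f"lambda = {lam:.1f} C/(W/m^2) -> warming for doubled CO2 ~ {lam * F_2X:.1f} C")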

    • David Springer

      The IPCC AR4 glossary defines ECS as response to doubling CO2 including CO2 equivalents. It DOES NOT define it as a generic response to change in power. Anyone who defines it as a generic response to change in power ignores the fact that how the power is delivered makes a difference. The frequency of electromagnetic radiation and the surface it illuminates both make a huge difference. Different surfaces respond differently to the same frequency and the same surface responds differently to different frequencies. Ignoring these differences causes erroneous answers. A critical difference being overlooked is the difference in response between water and rocks to the downwelling longwave infrared produced by greenhouse gases.

      • Steven Mosher

        David Chief is talking about LAMBDA

        write that down.

        climate sensitivity = lambda
        ECS is not lambda
        write that down.

      • David Springer

        Mosher, Ellison quoted wikipedia on definition of climate sensitivity. I pointed out the IPCC AR4 definition differs from wikipedia. I’m not sure what you think you were correcting but whatever it is it isn’t germane.

      • David Springer

        Given it was climate science and it came from wikipedia it’s probably William Connolley’s personal definition of climate sensitivity. In any case it is not the consensus definition if we presume that IPCC AR4 represents the consensus. AR4 has a glossary. I quoted the glossary. Deal with it.

      • That’s a qwarty interesting question there, Dave, re: Connolley and his peculiar manifestation of logomania.
        ============

      • Chief Hydrologist

        ‘Although the definition of equilibrium climate sensitivity is straightforward, it applies to the special case of equilibrium climate change for doubled CO2 and requires very long simulations to evaluate with a coupled model. The effective climate sensitivity is a related measure that circumvents this requirement. The inverse of the feedback term α is evaluated from model output for evolving non-equilibrium conditions as

        1/α_e = ΔT / (F − dH_o/dt) = ΔT / (F − F_o)

        and the effective climate sensitivity is calculated as

        T_e = F_2x / α_e

        with units and magnitudes directly comparable to the equilibrium sensitivity. The effective sensitivity becomes the equilibrium sensitivity under equilibrium conditions with 2xCO2 forcing. The effective climate sensitivity is a measure of the strength of the feedbacks at a particular time and it may vary with forcing history and climate state.’ http://www.ipcc.ch/ipccreports/tar/wg1/345.htm

        And still the point is what we know about TOA radiative forcing.

        ‘The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf
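
        (To make the arithmetic in the quoted definition concrete, here is a minimal sketch of an effective-sensitivity calculation in Python. The input numbers are placeholders chosen only to be in the ballpark of recent energy-budget studies; they are not the inputs of any particular paper.)

# effective climate sensitivity from the relation quoted above:
#   1/alpha_e = dT / (dF - dHo/dt),  T_e = F_2x / alpha_e
F_2X = 3.44  # forcing per CO2 doubling, W/m^2 (assumed)
dT = 0.75    # change in global mean surface temperature, deg C (placeholder)
dF = 1.95    # change in total radiative forcing, W/m^2 (placeholder)
dQ = 0.65    # rate of heat uptake, mostly into the ocean, W/m^2 (placeholder)

alpha_e = (dF - dQ) / dT        # feedback parameter, W/m^2 per deg C
T_eff = F_2X / alpha_e          # effective (approximately equilibrium) sensitivity, deg C
T_transient = F_2X * dT / dF    # transient-style estimate: ignore the heat-uptake term

print(f"alpha_e = {alpha_e:.2f} W/m^2/C")
print(f"effective sensitivity ~ {T_eff:.1f} C, transient estimate ~ {T_transient:.1f} C")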

      • Steven Mosher

        Dont confuse Springer, Chief. he done read the IPCC.

    • Chief Hydrologist

      And the only point is what we don’t know about radiative flux and how it changes.

    • Chief Hydrologist

      Was this linked by someone else before – ’cause I remember seeing it. The problem remains one of attribution. How much of the recent warming was natural? The toa radiative flux data says damn near all of it.

      http://s1114.photobucket.com/user/Chief_Hydrologist/media/WongFig2-1_zps2df93e8b.jpg.html?sort=3&o=62

      • David Springer

        TOA radiative flux data has a margin of error ten times greater than the supposed imbalance. And that’s further complicated by supposing that solar output is constant, which it isn’t.

    • BartR

      R Gates posted this a few days ago to sceptical reviews…

      Here is the Pilchard Inn Devon dating from 1336

      http://www.theaa.com/pubs/bigbury-on-sea-pilchard-inn-376027

      There is also pilchard point and various other reminders of a warm era when pilchard was caught in abundance.

      We went through the cold water herring era in intervening centuries and are back to where we were 700 years ago. The 1000 year old fisheries records at nearby Plymouth illustrate the changing species as the waters warm and cool
      tonyb

    • They say shifts could have negative effects.
      The fail to say shifts could have positive effects.

    • Chief Hydrologist

      ‘When the four ERBS error sources are combined, the total stability uncertainty (1-sigma) in the 60°N to 60°S and tropical annual mean radiation for the ERBS WFOV 15-yr dataset from all three sources combined is on the order of 0.3 to 0.4 W m-2.’ http://www.image.ucar.edu/idag/Papers/Wong_ERBEreanalysis.pdf

      You might note that these are anomalies. The anomalies show changes in reflected and emitted radiation at TOA. A rising net is planetary warming by convention, whereas rising LW and SW show increased emission or reflection of energy. The ERBS showed a decrease of 2.1 W/m2 in reflected SW and a decrease in emitted LW of 0.7 W/m2 between the 80’s and 90’s. Solar TSI changes are minor in comparison.

      • David Springer

        Solar constant is variously defined as either 1362 W/m2 or 1366 W/m2. It was recently changed and not all sources have been updated. Thus incoming solar energy is subject to pencil-whipped revisions on the order of 4 watts. Incoming energy is the EASY measure because the sun is a point source, so you only have to measure a single point. Outgoing energy at TOA is far, far more difficult to measure because it isn’t a point source and varies a great deal diurnally and seasonally.

        The ostensible imbalance is on the order of 0.5 W/m2. Calling the margin of error in measurement of the TOA imbalance ten times greater than the imbalance is already being charitable. The fact of the matter is that even its polarity isn’t reliable. Deal with it. We go to war with the data quality we have, not the data quality we wish we had. Pencil whipping can sometimes improve it at the margins, but this isn’t the margins. The instruments we have are simply not up to the task of measuring sub-watt mean annual power changes in both incoming and outgoing energy at TOA.

      • Chief Hydrologist

        The revision of the TSI absolute value is a calibration issue. What can it be compared to? The change, regardless, is a little over 1 W/m2 at TOA over a solar cycle – or about a quarter of a watt at the surface. Not all that significant, and not much more over the longer term.

        The outgoing radiation is not reported as absolute numbers at all – so you can be assured that the problem of absolute calibration is well known. So what is important is what anomalies can tell you and how stable the instruments are over time.

        ‘This paper highlights how the emerging record of satellite observations from the Earth Observation System (EOS) and A-Train constellation are advancing our ability to more completely document and understand the underlying processes associated with variations in the Earth’s top-of-atmosphere (TOA) radiation budget.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf

        Write that down springer.

      • Matthew R Marler

        Chief Hydrologist, thank you for the Loeb reference. Here is a quote from the abstract: CERES data show that clouds have a net radiative warming influence during La Niña conditions and a net cooling influence during El Niño, but the magnitude of the anomalies varies greatly from one ENSO event to another.

        Almost like a “thermostat”, eh?

        I look forward to reading the whole thing.

      • David Springer

        Blah blah blah Ellison babbles on.

        http://www.aos.wisc.edu/~tristan/publications/2012_EBupdate_stephens_ngeo1580.pdf

        Nature GeoScience 23 September 2012

        An update on Earth’s energy balance in light of the latest global observations
        Graeme L. Stephens1*, Juilin Li1, Martin Wild2, Carol Anne Clayson3, Norman Loeb4, Seiji Kato4, Tristan L’Ecuyer5, Paul W. Stackhouse Jr4, Matthew Lebsock1 and Timothy Andrews6

        For the decade considered, the average imbalance is 0.6 = 340.2 − 239.7 − 99.9 Wm–2 when these TOA fluxes are constrained to the best estimate ocean heat content (OHC) observations since 2005 (refs 13,14). This small imbalance is over two orders of magnitude smaller than the individual components that define it and smaller than the error of each individual flux. The combined uncertainty on the net TOA flux determined from CERES is ±4 Wm–2 (95% confidence) due largely to instrument calibration errors12,15. Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change11.

        Publish your rebuttal in Nature GeoScience. Oh wait. You’re a blogger not a scientist. Babble on, Garth.

      • David Springer

        Marler,

        Loeb’s an author on this recent Nature Geoscience paper which reads (my emphasis):

        For the decade considered, the average imbalance is 0.6 = 340.2 − 239.7 − 99.9 Wm–2 when these TOA fluxes are constrained to the best estimate ocean heat content (OHC) observations since 2005 (refs 13,14). This small imbalance is over two orders of magnitude smaller than the individual components that define it and smaller than the error of each individual flux. The combined uncertainty on the net TOA flux determined from CERES is ±4 Wm–2 (95% confidence) due largely to instrument calibration errors12,15. Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change.

        See if you can get Ellison to explain which part of the bolded text he doesn’t understand. I’m an electronics guy and was professionally repairing meteorological instrumentation (military) when I was 18 years old. Ellison doesn’t seem to understand data quality issues. The scientists above are painfully aware and write exactly what I’ve been trying to pound into Ellison’s thick skull for months. It’s not rocket science. Well actually it sort of is but you know what I mean.

      • Chief Hydrologist

        Actually being trained in environmental science and engineering as compared to an obsolete gamer?

        Not strictly my area but I notice that the two studies – yours and mine – contain at least two authors in common.

        This might suggest that the two ideas are not mutually exclusive? Tracking radiant imbalances might be difficult, but the variations can enable understanding of the processes involved.

        Didn’t take you long to revert to your usual charming self – but thanks for playing. You lose as usual.

      • The Graeme Stephens article shows that a globally averaged 0.6 W/m^2 is going into the heat sink of OHC. It can’t go anywhere else unless it is a measurement error, and the Chief has utmost belief in the validity of satellite-based measurements.

        That 0.6 applied to the ocean’s 70% areal coverage explains nicely the distinction between ECS and TCR measurements. Subtract the 0.6 discrepancy in the denominator of Mosh’s equation for ECS and you will find agreement with what Jim D has stated is a doubling of the land temperature anomaly over the SST anomaly.

        Hansen, Trenberth, and all those guys could explain this just as well, but they seem to not take a keen interest in the misapprehensions of skeptical commenters on some arbitrary climate science blog.

      • Chief Hydrologist

        ‘• We calculate that during the past decade Earth has been accumulating energy at the rate 0.58±0.43 W/m2, suggesting that while warming of Earth’s surface has slowed during the 2000s, energy continues to accumulate in the subsurface ocean.

        • The apparent decline in ocean heating rate after 2004 noted in other studies is not statistically robust.

        • Our results do not support the claim that there is missing energy in the system.

        • Synergistic use of satellite and in-situ ocean heat content anomaly measurements provide critical data for quantifying short and longer-term changes in the Earth’s net TOA radiant imbalance.

        • New cloud data from satellite (MODIS, CALIPSO) key to unraveling role of clouds in a changing climate.’

        http://ceres.larc.nasa.gov/documents/STM/2011-10/16_.Loebpptx.pdf

        As I keep saying – the ocean heat content data is consistent with changes in the TOA flux. It has nothing to do with greenhouse gases. It is an order of magnitude greater than the theoretical increase in greenhouse gas forcing. It all happened in shortwave.

        Stephens et al don’t actually calculate anything on OHC but use studies by Willis and one by Norris(? – from memory). But there is also von Schuckmann who first integrated to 2000m.

        webby and springer seem peas in a pod – superficial and abusive. One might wonder wtf their problem is, if I gave a rat’s arse.

      • Webby

        a globally averaged 0.6 W/m^2 is going into the heat sink of OHC

        Never to be seen again.

        Let’s see.

        0.6 W/m^2 would warm the atmosphere by 0.16°C.

        and it would warm the upper ocean (top 700 m) by 0.0007°C

        Wow!

        Should I be frightened, Webby?

        Fuggidaboudit.

        Max
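
        (Max does not say what time span lies behind those two numbers. A rough Python sketch with round constants suggests they correspond to roughly one month of a 0.6 W/m^2 global imbalance deposited entirely into the atmosphere or entirely into the top 700 m of ocean; the one-month window is my guess, not Max’s statement, and every constant below is a round figure.)

# back-of-envelope heat-capacity arithmetic for a 0.6 W/m^2 global-mean imbalance
SECONDS = 30 * 24 * 3600             # about one month (assumed; no period is given above)
EARTH_AREA = 5.1e14                  # m^2
energy = 0.6 * EARTH_AREA * SECONDS  # joules accumulated over that period

atmos_heat_capacity = 5.1e18 * 1004                          # atmosphere mass (kg) times c_p (J/kg/K)
ocean_700m_capacity = 0.7 * EARTH_AREA * 700 * 1025 * 3990   # top 700 m: volume x density x c_p

print(f"atmosphere-only warming: {energy / atmos_heat_capacity:.2f} C")
print(f"top-700m-ocean-only warming: {energy / ocean_700m_capacity:.4f} C")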

      • Matthew R Marler

        David Springer, thank you for the link to the Stephens et al paper, which I sometimes cite because Dr Curry posted it here pre-pub (as I recall). The abstract also includes: This additional precipitation is sustained by more energy leaving the surface by evaporation — that is, in the form of latent heat flux — and thereby offsets much of the increase in longwave flux to the surface.

        that is not exactly the point I make (which is that increased CO2 may cause increased latent heat flux), but it does address the heat flow that is ignored by what Steven Mosher and WebHubTelescope refer to as “the science” or “the physics”.

        Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change11.

        That is one of the important “known unknowns”. It isn’t just that knowledge and models are imperfect (we acknowledge that perfection will never be obtained), but that the imperfections (aka “cavities” as some statisticians wrote) are known to be large enough that the desired effects can’t be known.

      • Matthew R Marler

        WebHubTelescope: It can’t go anywhere else unless it is a measurement error,

        Once in a while you write something truly insightful.

        The globally spatio-temporally averaged differential heat flux is not known with great accuracy in part because it is not constant in time and space, but fluctuates, and the “measurements” (besides being based on an in-built estimate of a functional relationship, which itself has some calibration error), are samples, with an imperfectly known sampling bias and sampling variability.

        The oceans are the latest, but probably not the last, refuge of the true believers. Models can be modified to put the excess heat into the deep oceans, but the expected change in temperature attributable to recent CO2 increases is so slight that it can’t be detected by measurements, at least not yet. That the ocean temperature change “is compatible with” measured TOA heat fluxes is hardly surprising when neither is estimated with sufficient precision.

      • In their 2009 paper Trenberth, Fasullo, and Kiehl wrote:

        There is a TOA imbalance of 6.4 W m−2 from CERES data and this is outside of the realm of current estimates of global imbalances (Willis et al. 2004; Hansen et al. 2005; Huang 2006) that are expected from observed increases in carbon dioxide and other greenhouse gases in the atmosphere. The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85 ± 0.15 W m^−2 by Hansen et al. (2005) and is supported by estimated recent changes in ocean heat content (Willis et al. 2004; Hansen et al. 2005).

        Stephens et al write

        The combined uncertainty on the net TOA flux determined from CERES is ±4 Wm^–2 (95% confidence) due largely to instrument calibration errors. Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change.

        Thus both agree that the accuracy of the measurements of the absolute TOA energy balance is far too poor for determining its value at the required level. The size of that imbalance is better constrained by even the most rudimentary determination of the heat content of the Earth system (dominated by the OHC). The value 0.85 presented by TF&K as well as the values presented by Stephens et al have been derived in that way rather than from observations of the energy balance.

        Stephens et al. do, however, continue

        Despite this limitation, changes in the CERES net flux have been shown to track the changes in OHC data. This suggests that the intrinsic precision of CERES is able to resolve the small imbalances on interannual timescales, thus providing a basis for constraining the balance of the measured radiation fluxes to time-varying changes in OHC (Supplementary Information). The average annual excess of net TOA radiation constrained by OHC is 0.6 ± 0.4 Wm^–2 (90% confidence) since 2005 when Argo data became available, before which the OHC data are much more uncertain. The uncertainty on this estimated imbalance is based on the combination of both the Argo OHC and CERES net flux data.

        Here they argue that changes in the TOA balance can be determined much more accurately than the absolute value. I have understood that CH has made this point many times over the last year or two.

        On the absolute imbalance we remain dependent on the accuracy of the determination of OHC changes including all depths.
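
        (A toy illustration of the distinction CH and Pekka are drawing between absolute accuracy and stability: a fixed calibration offset of a few W/m^2 makes the absolute imbalance unrecoverable, while a much smaller year-to-year noise term still lets the anomalies track changes. Everything below is invented for illustration; only the ±4 W/m^2 and ~0.6 W/m^2 figures echo the quotes above.)

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2005, 2013)
true_imbalance = 0.6 + 0.05 * (years - 2005)   # invented "truth": small imbalance, small trend (W/m^2)

calibration_offset = rng.normal(0, 2.0)              # one fixed absolute error (~2 W/m^2 1-sigma, i.e. roughly +/-4 at 95%)
interannual_noise = rng.normal(0, 0.2, years.size)   # assumed instrument noise, much smaller than the offset

measured = true_imbalance + calibration_offset + interannual_noise

print("measured absolute values (useless for the imbalance):", np.round(measured, 2))
print("measured anomalies (still track the changes):        ", np.round(measured - measured.mean(), 2))
print("true anomalies for comparison:                       ", np.round(true_imbalance - true_imbalance.mean(), 2))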

      • Graeme Stephens made an estimate of 0.58 W/m^2 energy imbalance for the earth as a whole. Divide by the ocean’s areal fraction and we get 0.58/0.7 = 0.83 W/m^2 concentrated as an OHC imbalance. This value of 0.83 W/m^2 is very close to the one quoted by Trenberth of 0.85 ± 0.15 W/m^2, originally estimated by Hansen et al. in 2005.

        According to the skeptics, Stephens is supposedly in violent disagreement with Trenberth and Hansen. So why are they in so much agreement with 0.83 versus 0.85?

        Is it the error bars that they are arguing over? If that is the case, we can add in Levitus as an independent estimate.

  14. David Springer

    This paper is certainly a step in the right direction. Another step or two is yet to be made. The next surprise in store is that transient climate sensitivity may be *larger* than equilibrium sensitivity. I believe they are close to equal, but that’s just me. In principle TCS can be larger than ECS. You’ve probably all seen the bumper stickers and T-shirts and coffee mugs with “SH!T HAPPENS” on them. Well guess what boys and girls, OVERSHOOT HAPPENS too. Heat isn’t going to sequester in the deep ocean to jump out at some later date to cause additional land warming. That’s nutty. COLD that was stored in the deep ocean during 100,000 years of glaciation preceding the Holocene interglacial is going to jump out to cause additional land cooling. Brass monkeys beware. Interglacial warmth is no more than a fleeting shallow layer of warm water floating atop a globe-spanning ocean with a bone-chilling average temperature of 4C.

    • Springer, “This paper is certainly a step in the right direction. Another step or two is yet to be made. The next surprise in store is that transient climate sensitivity may be *larger* than equilibrium sensitivity.”

      That is the problem with the definition of climate sensitivity. Temperatures can overshoot a set point or rate-uptake limit, and it can “look like” TCR is greater than ECS, since there may not be anything close enough to an equilibrium state to be useful with the current definitions. If you allow for bi-stable operation, there would be a range of “sensitivities” between those bi-stable points, which looks more like reality.

      Then if you mention “bi-stable” the CAGW aficionados start blathering about some geological period that is more uncertain than today as some kind of proof of some nonsense.

      • The rapid response of temperature to day and night and to summer and winter show that the temperature has some lag but not a lot of lag. Earth is always very close to Equilibrium

      • HAP, “The rapid response of temperature to day and night and to summer and winter show that the temperature has some lag but not a lot of lag. Earth is always very close to Equilibrium”

        Very close is true, but “very” isn’t very useful. The denser the energy storage medium, the longer the delay.

        Looking at the paleo deep ocean “average” temperatures, the range of change is about 2 C with a lag of about 1700 years. It is those longer term lags that make your “when it warms it cools” theory work. The system basically just overshoots the setpoint. As the energy density decreases, the lag time decreases. Since the oceans have stratification layers, you can have periods of days, years, decades, centuries and millennia, so technically there is no “equilibrium” if you need an accuracy of better than about 2 C. The CAGW crew is freaking out over tenths of degrees.

      • Over shooting the set point is a natural part of the cycle.
        The temperature is still near the changing equilibrium.
        When oceans are warm it snows and builds ice volume.
        The ice advances and lowers the equilibrium temperature for a long time.
        It gets cold enough to freeze the oceans and stop the snowfall.
        Ice still advances until it hits the bottom of a Little Ice Age.
        Then the lack of snowfall lets the ice retreat. Then the equilibrium temperature rises until the sea ice melts. This starts the snowfall but there is still not enough until the Arctic Opens so warming continues until about now.

      • Ice Albedo is the key to this.

        A Little Ice Age is colder than a Medieval Warm Period because there is more ice extent.

        The ice did not happen because it got colder.

        The ice advance made it colder and ice retreat makes it warmer.

        This is very clearly shown by the actual ice core data.

      • Ice Extent does lag Ice Volume. Ice Volume reaches its min at the peak of the warm time and reaches the max during the cooling, and ice volume starts to decline while ice is still advancing and more cooling is occurring. Ice melts and retreats until the oceans warm enough to open the Arctic. This is the low point for ice volume. Then it snows enough to build the ice volume and start the ice advance. It keeps snowing during the advance until the oceans get cold and freeze, cutting off the snow.

      • Consensus Climate Theory reaches max ice volume at the coldest time, but it can’t happen that way. Ice would still be advancing if it was still snowing.

      • Ice advances because of snow that fell on the top of the ice sheets and glaciers, years earlier, and not because of snow that fell at the edges and tails.

      • Snow Cover Extent does advance and retreat with the ice packs and glaciers as a powerful feedback that helps.

    • David,
      Sobering point!

      • David Springer

        Yes in the immortal words of Rick Perry, Governor of the Great State of Texas, the global ocean is one cold mofo.

        If there’s a tipping point to be afraid of, it’s the warm shallow surface layer of the global ocean mixing with the frigid abyss at an increased pace. Fortunately for us, the rapid side of interglacial/glacial transitions is the warming side, which is violent in speed and environmental upheaval – glacier melt feeds on itself, accelerating the whole time until only the polar ice caps remain, then it stops after overshooting by about 3C. This is then followed by a long slow march over tens of thousands of years of rebuilding NH glaciers until the tipping point into the next melt, making for a cycle time of about 110,000 years.

        The Holocene interglacial was atypical. There was no 3C overshoot at the end of the melt and sea level stopped rising 6-9 meters lower than other interglacials. The big difference appears to be that Greenland was spared and retained its ice sheet. Pope hypothesizes that an ice dam near the US/Canadian Great Lakes broke before Greenland’s glacier had been taken out which flooded the north Atlantic with fresh water temporarily ending the melt in an event known as the Younger Dryas. The meltdown then continued after a few hundred years but it had lost its inertia and Greenland’s ice sheet remained largely intact and sea level rise was halted early.

        I’ve added some of my own detail to that scenario. The lower sea level at the end of the melt left the global ocean with less surface area. The ocean’s surface is close to black and thus absorbs almost all the solar energy that reaches it. Rocks are not as dark and reflect quite a bit more solar energy instead of absorbing it. Thus the earth’s average albedo didn’t sink as low and the overshoot failed to happen.

        Pope’s theory is that with Greenland’s glacier intact the earth is stuck in a never-ending interglacial.

        The never-ending interglacial is speculative to say the least but it’s clear after 12,000 years the long slow march of rebuilding NH glaciers hasn’t begun so it’s still a valid hypothesis as far as contrary evidence is concerned. I find it plausible after having added in the effect of a reduced global ocean surface area due to the incomplete melt. The breaking of the ice dam triggering the Younger Dryas isn’t original with Pope and other theories have been proposed to explain it but the ice dam theory appears to be the most widely accepted.

    • Yes, the transient sensitivity over land is over 4 C, so it may be true that it exceeds the ECS, even if the global TCR doesn’t.

      • Have you measured it?
        The BEST land data and Keeling’s curve from 1959 to 2010 give a climate sensitivity of 3.46 degrees, with 1.7 degrees of heating already felt, and 1.85 still to come. Without a putative cyclical component this falls to about 2.3 degrees.

      • I gave this on another thread.
        1980-2010 CO2 ppm went from 337 to 388.
        CRUTEM4 or BEST land temperature rose ~0.84 C
        Transient Sensitivity from these numbers: 4.1 C per doubling.
        The NH land trend is 15% higher.
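
        (Jim D’s arithmetic checks out in a couple of lines; nothing is used below beyond the numbers quoted in this sub-thread, and it implicitly attributes all of the land warming to CO2 alone, as the comment does.)

from math import log

co2_1980, co2_2010 = 337.0, 388.0  # ppm, as quoted above
dT_land = 0.84                     # deg C of land warming over the same period, as quoted

doublings = log(co2_2010 / co2_1980) / log(2)
print(f"fraction of a CO2 doubling, 1980-2010: {doublings:.3f}")
print(f"implied land transient sensitivity: {dT_land / doublings:.1f} C per doubling")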

      • For 1980-2010, with BEST and Keeling’s curve, you get 3.9 degrees for 280 to 560 ppm, and the fit is lousy.

      • DocMartyn did his analysis wrong for BEST. From the start of carbon emission records and the entire extent of BEST, the ECS sensitivity for land is 3.1C for doubling of CO2. The BEST folks did this themselves and obtained 3 C, which my estimate substantiates.

        I suggest that DocMartyn write a post on why the BEST analysis is WORST.

    • @DS: COLD that was stored in the deep ocean during 100,000 years of glaciation preceding the Holocene interglacial is going to jump out to cause additional land cooling.

      The ocean has no temperature memory whatsoever of the last glaciation. The thermohaline circulation (THC), aka the meridional overturning circulation (MOC), aka the ocean conveyor belt (Wallace Broecker‘s term for it), refreshes the deep ocean in one to two millennia. The temperature of the deep ocean is governed by the temperature of the melt from the icecaps, which being colder and heavier sinks as the circulation carries it away from the icecaps and around the globe.

      • David Springer

        Yeah right. Nice just-so story. Did you make it up yourself or get it from someone else?

      • David Springer

        Here’s a narrative for you to construct.

        The average temperature of the ocean mixed layer today is 14C and has presumably been 14C for 10,000 years. How did the average temperature of the entire basin get to 4C without that being the average temperature of the top layer?

        A follow-up: how does anyone know what the conveyor belt was doing, say, 1,000 years ago, 10,000 years ago, and 50,000 years ago?

        Spin me a yarn, Pratt.

      • Chief Hydrologist

        ‘The world’s oceans, however, differ from the atmosphere in important ways.

        1.Water is most dense at 4° C. It expands when cooled below this temperature, and when heated above this temperature.

        2.The oceans are heated from above.

        These two facts have important implications for ocean circulation. Because water becomes less dense (expands) when heated above 4°C, heating from the surface will form a warm, buoyant, and stable layer at the surface. There is no tendency towards deep convection and mixing. Conversely, if water is cooled below 4° C, it becomes less dense, so a cold, buoyant and stable layer forms on the surface. Water also undergoes a dramatic expansion on freezing, increasing in volume by 9%, so pack ice also forms a stable upper layer. Thus, in many circumstances, vertical motions of the oceans are suppressed. Only where ocean surface waters with temperatures over 4° C undergo cooling can surface temperature changes cause vertical motions, and these tend not to extend to very great depths.

        However, ocean-water density is also influenced by salinity. The sea contains dissolved ions (charged molecules or atoms), mainly sodium and chlorine, and is consequently denser than fresh water (typically 1035 kilograms per cubic metre compared with 1000 kilograms per cubic metre). The salinity of sea-water is rather variable, however. Evaporation at the sea surface will increase the salinity (as water is evaporated off but dissolved ions are left behind), whereas inputs from rain or river run-off will decrease the salinity. Salinity will also be increased by the formation of sea ice. As ice forms, salt is rejected from the freezing water, so the remaining liquid water is enriched in salt.

        The density of sea-water is thus a combination of its temperature and salinity. Saline waters at around 4° C will be dense, and will tend to sink, whereas warm, less saline water will be less dense, and thus will be buoyant. This has important implications for vertical motions in the sea. Deep convection, whereby surface waters are transported to the deep oceans, is only possible where saline waters are cooled. This occurs only in a few locations in the North Atlantic off Greenland, and in the Weddell Sea, part of the Antarctic Ocean south of the Atlantic. In both of these locations, saline surface waters are cooled and sink, a process known as downwelling. The surface waters are made saline by evaporation nearer the equator in the case of the North Atlantic, and by the formation of sea-ice – which also expels salt -in the case of the Weddell Sea. Shallower downwelling, to intermediate depths in the oceans is also possible in areas of high evaporation.’

      • David Springer

        Chief Hydrologist | May 20, 2013 at 8:06 pm |

        ‘The world’s oceans, however, differ from the atmosphere in important ways.

        1.Water is most dense at 4° C. It expands when cooled below this temperature, and when heated above this temperature.

        Imbecile. Seawater reaches maximum density at -1.8C.

        http://www.nature.com/nature/journal/v428/n6978/fig_tab/428031a_F1.html

      • David Springer

        The ocean convection lecture makes a germane point to Pratt’s prattle (I wonder if that’s where the word prattle came from?).

        Deep convection, whereby surface waters are transported to the deep oceans, is only possible where saline waters are cooled. This occurs only in a few locations in the North Atlantic off Greenland, and in the Weddell Sea, part of the Antarctic Ocean south of the Atlantic.

        So Pratt wants us to believe that a few small locations where the ocean is AIR COOLED enough to sink to the floor and flow equatorward can fill nearly the entire ocean basin in 1000-2000 years. Right. AIR COOLED. Imagine the giant sucking sound at these locations as cold air rushes in from God only knows where to replace the air heated by the ocean. The water would have to be sinking fast enough to cause gigantic vortexes like a huge toilet flushing. I’ll tell you what, the flushing toilet starts and ends with Pratt’s prattle. The ocean basin’s average temperature is a consequence of the average temperature of the mixed surface layer over an entire glacial/interglacial cycle. Most of the mixed layer today is simply too warm to allow the vast bulk of the ocean (90% of its volume) to have a temperature of 3C.

      • Chief Hydrologist

        ‘The ocean water is constantly churning underneath, bringing nutrients up to the top. The difference in density of cold water versus density of warmer water is responsible for ocean currents and upwelling. Warm seawater floats and cold (4°C), dense (1 g/cm3), seawater sinks, so ocean temperatures also vary across the surface and into the depths.

        Seawater is saturated with salts at 35 ppt and at 4°C the salinity causes the density to actually be 1.0278 g/cm3. This slightly heavier density is another contributing factor to upwelling as it causes the water molecules to roll over each other.’ http://marinebio.org/oceans/temperature.asp

        Water is densest at 4 degrees C – and at less than that salts precipitate out leaving it less dense in 2 ways. Unlike springer who is more dense in all ways.

        Salt water freezes at -1.8 degree C – but… ain’t got nothin’ to do with maximum density.

        And yes – the residence time of water in the deep ocean is said to be 1500 years give or take.

        ‘Deep water circulation is driven by density differences. Density is in turn dependent on the temperature and salinity of the water sample. Because of this dependence, deep water circulation is sometimes referred to as thermohaline circulation. Thermohaline circulation is very slow. Once water sinks from the surface, it can spend a long time away from the surface until that water rises again to become part of surface circulation. Some scientists estimate that the residence time of deep water, the time spent away from the surface, can be about 200-500 years for the Atlantic Ocean and 1,000-2,000 years for the Pacific Ocean.’ http://www.windows2universe.org/earth/Water/deep_ocean.html&edu=high

        What else was there? Who can be really bothered with someone who makes it up from whole cloth – and is so charming at the same time. Just yesterday he was chiding me for talking about space cadets. What is this bizarre behavior? Par for the course for an intelligent design ubertwit?

      • David Springer

        Chief Kangaroo Robert “Skippy” Ellison writes that salts precipitate out of seawater when it cools below 4C.

        Still making things up as you go along? See if you can find a link to support what you just made up. Good luck. :-)

      • The 4C phenomenon is one of the more perplexing puzzles. That is the maximum density for fresh water but salt changes the density/temperature relationship. If you slowly freeze salt water more brine is expelled than if you rapidly freeze the salt water. Sea ice extent varies the volume and rate of freezing changing the mixing depths. All combined, the “average” ocean temperature range is ~2C to ~4C with approximately a 1700 year lag between surface and deep ocean temperature.

        Because of the asymmetry of the polar region ocean areas, the temperature of the subsiding NH deep water flow is warmer and more saline than the southern hemisphere.

        BTW, the freezing point of brine is approximately -18 C degrees. Coincidence?

      • @…

        Water is densest at 4 degrees C – and at less than that salts precipitate out leaving it less dense in 2 ways.

        Salt water freezes at -1.8 degree C – but… ain’t got nothin’ to do with maximum density.

        Actually, that’s not true. I’d always thought it was, but an actual look at the Salinity dependence of the freezing point of water and the temperature of maximum density shows otherwise. “The intersection of the curves, at a salinity of 0.025, represents a change in the freezing properties of saline water.” (For reference purposes, the salinity of most ocean water is around 0.035 (3.5%).) Fresh water, and water with a salinity of less than about 0.025, actually reaches its point of maximum density at a higher temperature than the freezing temp. This means it will tend to sink, and if cooled from beneath, water at the freezing temp will tend to rise to the surface, where it will freeze.

        This is not true of sea-water. Of course, if ice forms, it will be pretty much salt-free, with a lower density than the (saline) liquid, so will still float to the top. The fact that the depths of the oceans are at around 4°C, which is also about the temperature of maximum density of fresh water, may be a coincidence. In any event, there doesn’t seem to be any “hard” barrier to small changes in the temperature of the ocean depths.

        (Contrary to what I had always thought. Fortunately, I actually went and looked before posting what I thought. Always a good idea.)
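
        (AK’s crossover can be reproduced with the usual rough linear approximations for the freezing point and the temperature of maximum density as functions of salinity. The coefficients below are approximate textbook values, not taken from the linked figure, and the sketch is only meant to show the qualitative picture.)

def freezing_point(S):
    """Approximate freezing point (deg C) of water at salinity S (parts per thousand)."""
    return -0.054 * S

def temp_max_density(S):
    """Approximate temperature of maximum density (deg C) at salinity S (parts per thousand)."""
    return 3.98 - 0.216 * S

for S in (0.0, 10.0, 24.7, 35.0):
    print(f"S = {S:5.1f} ppt: T_freeze = {freezing_point(S):+.2f} C, T_max_density = {temp_max_density(S):+.2f} C")

# the curves cross near S ~ 24.6 ppt (AK's ~0.025): below that, water behaves like fresh
# water (maximum density above the freezing point); at ocean salinity (~35 ppt) the maximum
# density lies below the freezing point, so cooled surface water just keeps sinking.
crossover = 3.98 / (0.216 - 0.054)
print(f"crossover salinity ~ {crossover:.1f} ppt")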

      • AK, “This is not true of sea-water. Of course, if ice forms, it will be pretty much salt-free, with a lower density than the (saline) liquid, so will still float to the top.”

        “Pretty much” is one of those things that may matter. The ice of course will float, but what happens with the sinking, more saline water? The rate of freezing changes the rate of brine rejection. That, and the dependence of density on temperature and salinity, changes the sink rate and temperature. Now you have the mother of all layered cocktail problems.

        The mixing of the layers depends on the rate and angle of flow relative to the density layer. This problem is no pycnic, it is diapycnal versus isopycnal mixing at a convergence layer. I believe there is one of those unclaimed million dollar prizes if you are interested.

        Changing the rate of mechanical mixing, surface winds, tides etc. changes the mixing efficiency. Since the hemispheres are 180 degrees out of phase seasonally and fairly well isolated by Coriolis forces, you have long term delays in surface impact depending on the average depth of mixing: 15, 30, 60, 90, 120, 150, etc., up to the length of a precessional cycle.

        It is a “travesty” that these lags were not considered at the start of the problem, but that’s what you get when you over-simplify.

      • Chief Hydrologist

        ‘Seawater is saturated with salts at 35 ppt and at 4°C the salinity causes the density to actually be 1.0278 g/cm3.’

        http://marinebio.org/oceans/temperature.asp

        Supersaturated solutions will crystallize salt – but I think, on reflection, this is not quite what it says.

        Deep water formation is driven by salinity and temperature. At about 4 degrees C and 35 ppt salt – water in the North Atlantic sinks. The retention time of deep water is some 1000 years.

        http://www.st-andrews.ac.uk/~dib2/GE1001/oceans.html

    • David Springer

      Correction: “male brass monkeys beware”

      Max

  15. I suspect those hoping for some softening of the IPCC position with respect to climate sensitivity will be disappointed. They can’t climb down without admitting that they were wrong, and a bureaucracy by its nature just about never does that. Let’s not forget who’s in charge of this dog and pony show, a fraud named Pachauri who’s already demonstrated he’d rather lie than admit to being mistaken, as he did with his famous “voodoo science” smear.

    More and more individual scientists will jump ship, however, in order to save their reputations and careers.

    • Rud Istvan

      That is why I suggested upthread how significant this paper is. Two of the authors are leads on AR5 sensitivity. This paper echoes several others reporting significantly lower sensitivity, based on actual observations, not just models. So if the IPCC does not climb down on ECS, they prove their scientific dishonesty. And Nic Lewis says it made the inclusion criteria. Now we have to see what changes from the AR5 SOD, which was pretty much an echo of AR4. There is now a clear, simple test of the IPCC process. If the ECS best estimate remains around 3, the process is scientifically unsound and dishonest. If the best estimate falls to 2 or below, the process is sound, but then all the policy conclusions should change significantly.

  16. I have the feeling that if the analysis was done in roving 13-year chunks (1970-1983, then 1971-1984, etc.), the returned values for CS would trace out the sine wave of heat storage/release that I and others have modeled as part of the temperature record.
    The very low CS during the 70s indicates heat storage and, 30 years later, the high CS in the noughties shows the release of that accumulated heat.
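
    (The roving-window idea is easy to prototype once annual series of temperature change, forcing and heat uptake are in hand. A sketch of the bookkeeping in Python; the formula is the standard energy-budget deltaT/(deltaF - deltaQ) form scaled by F_2x, and the demo series at the bottom are smooth placeholders, not data.)

import numpy as np

def rolling_energy_budget_sensitivity(years, dT, dF, dQ, window=13, F_2x=3.44):
    """Energy-budget sensitivity estimated in roving `window`-year chunks.
    years, dT, dF, dQ are equal-length arrays of annual values; the first window is the reference."""
    estimates = []
    ref = slice(0, window)
    for i in range(len(years) - window + 1):
        cur = slice(i, i + window)
        ddT = dT[cur].mean() - dT[ref].mean()
        ddF = dF[cur].mean() - dF[ref].mean()
        ddQ = dQ[cur].mean() - dQ[ref].mean()
        if abs(ddF - ddQ) > 1e-6:  # skip the degenerate reference window
            estimates.append((int(years[i]), F_2x * ddT / (ddF - ddQ)))
    return estimates

# usage with invented, perfectly smooth placeholder series (structure only, not a result)
yrs = np.arange(1970, 2010)
demo = rolling_energy_budget_sensitivity(
    yrs,
    dT=0.015 * (yrs - 1970),
    dF=0.040 * (yrs - 1970),
    dQ=0.010 * (yrs - 1970),
)
print(demo[:3])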

  17. From New Scientist

    After heating rapidly in the late 20th century, Earth warmed only slowly in the last decade, partly as a result of natural cycles in the climate system.

    Web, who has been a denier until now?

    Are we ever going to get an apology for the abuse we received for telling the truth?

  18. Who is going to author the paper that challenges this finding?

    • The first team that finds missing heat under the ice!

      • I guess it needs something like that, because given the robustness of this result over a multidecadal period, it looks unlikely that adding new data to the present data sets is going to change things much for the decades to come.

      • Well, there is this persistent thing about the heat buried in the depths of the oceans going Jason on us in the future.

        So anyway, the articles about this letter contain words of caution from a scientist named Sherwood.

  19. Barry Woods

    How times have changed – there is audio of Prof Myles Allen at the Royal Institution giving the BBC’s Roger Harrabin a very hard time about how irresponsible, even libelous, the BBC were to go on about ‘Hide the Decline’ – Harrabin was really on the back foot!

    Actually, I’ll dig out the transcript; I’m sure there is one.

    Additionally, Myles is on the same side of a debate as Benny Peiser and David Rose next week, at the Oxford Union.

  20. Pingback: Why the new Otto et al climate sensitivity paper is important – its a sea change for some IPCC authors | Watts Up With That?

  21. Temperature data is not following CO2.
    There is nothing in the temperature data that indicates it will ever get warmer than it was in 1998. Rutgers Snow Extent showed that 2012 had a higher Snow Cover Extent than 11 of the other years in the past. When the oceans are warm and the Arctic Opens, it snows like crazy. It snows like it has from October 2012 until May 2013. This puts a lid on temperature, no matter how much CO2 makes green things grow better.
    Does anyone ever just look at the data and just think?
    Climate Models continue to remove Snow Extent in the face of data that is clearly not supporting the Alarmism.

  22. You can use Model Output that disagrees with Actual Data, for a while.
    After a Decade, people get suspicious.
    After two Decades even some Climate Scientists get suspicious.

  23. One thing is clear: it’s becoming harder and harder to be a full-on alarmist, at least for those with some integrity. As has been pointed out by WUWT and others, the Otto et al. paper is critically important because it’s a climbdown for some IPCC authors.

    Joshua and others love to sneer, but the great unwinding has begun.

    • Steven Mosher

      Joshua will surely diss this paper because as he will tell you Nic Lewis is no climate scientist. One wonders what the motivated reason of the other 14 was.

    • R. Gates, Skeptical Warmist, etc.

      In the full scheme of things this paper will be fairly meaningless. Some authors may use it as a “climbdown” (though even that suggestion is dubious), but more importantly, the myopic focus of climate sensitivity on the troposphere will be adjusted in the light of the overwhelming energy being stored in the global oceans. We may be tropospheric dwelling creatures, but the oceans are the masters of both our climate past and climate future and will call the shots on the future warming of the troposphere which is inescapably where the energy being stored in the oceans will eventually have to pass through on its way back to space.

  24. I may need the help of David Springer or Arno Arrack on this one but is there any chance our planet may begin to extract more CO2 than we emit and what would be the consequences if this were to occur?

  25. To clarify my previous question: I realize there are the natural extractions between the atmosphere, the oceans, etc. on an everyday basis. But are we in danger of first getting to equilibrium and then having the extractions overwhelm our emissions to an extent where our croplands and forests are starved of the nutrition required for a healthy and vibrant planet?

    • Actually, AK and Springer are deviously devising programmable bacteria as we speak to steal atmospheric carbon dioxide in order to corner the synfuel and carbon nano-fiber market. BartR should be along shortly to include both carbon and carbon dioxide in his carbon commons legislation so that revenue-neutral taxation might prevent this travesty. The fate of the world hinges on the creative tax policy of a lone British Columbian statistical superhero as he fights against the evils of big biotech.

      • captdallas 0.8 or less | May 20, 2013 at 11:57 am |

        I realize satire need make no sense.

        However, if AK and Springer were to grow the Carbon Cycle synthetically by new technology, they’d arguably be recipients of double dividends, not prevented from going ahead.

        In BC, Australia or any other jurisdiction with a fee and dividend system, the new high tech jobs created by AKSpringer Inc. would thereby be paid more than elsewhere in the world.

        See how that works?

        Until AKSpringer Inc. draws down so much CO2E that the demand for lower CO2 vanishes, they’d be rolling in dough due to their entrepreneurial capitalist spirit.

        And BC, or Australia, or wherever would be able to grab up massive royalties through carbon trading markets, gaining a positive trade position and bursting reserves, allowing the government to eliminate all other taxes, as well as government debt.

        As corporate headquarters and the ultra wealthy the world over migrate with their fabulous wealth and high-paying service sector jobs, Australia, or BC, or wherever would become a beacon of power, and home to secret cabals controlling the lives of serfs and proles everywhere, in a carbonless utopia.

        See how satire is meant to be done?

      • As CEO of my new carbon bio-capture company, I’ll concentrate on building my carbon capture factories in regions where there are subsidies for solar power, and I can acquire carbon credits I can trade to fossil-carbon burning energy utilities. I’ll create methane plants for use in subsidized power plants, or feed-in to the consumer market, I’ll create sugar/starch plants for creating fodder for the meat trade, I’ll create feedstock plants for use making graphite and carbon nano-fibers, I’ll create biofuel plants for making liquid fuels for vehicles, and so on.

        As economies of scale and Swanson’s law bring the price of my hydrogen-fueled biosynthesis down exponentially, I’ll reinvest much of the profits from selling methane, fodder, graphite and other construction materials, and fuel into creating ever more solar-powered feedstock plants, fueling a revolution in construction, and generating an ever-growing market for my product.

        As the atmospheric CO2 reaches pre-industrial levels, I’ll ready my plan to privatize CO2 capture and release, so that the level will be maintained at a steady 280ppm, just like 1700. And then…

        The board of directors will remove me as CEO and replace me with somebody who’ll turn my company into a devouring monster that keeps dragging more CO2 out of the air and drags the world into the next ice age.

        And that’s only semi-satire. Similar things have happened.

    • But are we in danger of first getting to equillbrium and then the extractions overwhelm our emissions to an extent where our croplands and forests are starved of the nutrition required for a healthy and vibrant planet?

      Yes.

  26. Sorry, Mosher… you long ago lost any credibility with me or with those who need some honest enlightenment. You need to join your friend Hansen on the picket line, or anywhere else the crazies gather.

  27. Thank you Herman Pope, I also depend on people like you for an honest evaluation for my concern.

  28. I’ll offer a few tentative comments on this subject, since I recently commented here on some of the relevant principles. The material in Nature Geoscience has already provoked controversy in the blogosphere, but also within the climate science community, including a variety of criticisms. What I’ll suggest is that their reported values for equilibrium climate sensitivity (ECS, range 1.2 – 3.9, best value 2C) and TCR probably underestimate the “true” values, perhaps substantially. I’ll suggest with more confidence that there is not at this point much reason to deviate from the standard ECS estimates in the IPCC reports (range 2 – 4.5, with a most probable value close to 3C). I suspect that IPCC estimates of TCR may need to be revised downward slightly, but I won’t address that issue further. Here are some points to ponder.

    1. A minor point – what appeared in Nature Geoscience is a Letter to the Editor rather than a “paper” or “article”. I imagine that it received some editorial attention, but perhaps not the level of critical review reserved for accepted papers or articles. The extensive authorship by well known figures may have hastened its publication.

    2. I think Jim D and others have referred to the hazard of estimating ECS based on data from only recent decades, energy balance calculations, and an assumption of a linear radiative response to a specified surface temperature change. Those estimates are more correctly described by the term “effective climate sensitivity”, and are likely to underestimate ECS when the intervals were characterized by significant forcing, mainly because of delayed feedbacks that over time gradually reduce the rate of radiative cooling below the initial rates. A number of recent papers follow this same pattern, and so the same reservations apply to them.

    3. The authors of the Letter appropriately caution against over-interpretation of their data in the light of many uncertainties about the magnitude of forcing (and also of ocean heat uptake, which I won’t address further here). What I suspect to be the most serious flaw in the estimates relates to the values for aerosol forcing that led to their somewhat low ECS and TCR estimates. What they did was to take multimodel estimates of aerosol negative forcing and then somewhat arbitrarily add 0.3 W/m-2 to them, making them less negative and thus increasing total positive forcing – the stronger forcing led to reduced ECS and TCR estimates. They justified the addition based on satellite observational data. I suggest that this added forcing may well have been a step in the wrong direction. Instead of making the modeled aerosol forcings less negative, it’s possible to justify increasing their negativity, which would significantly increase estimated ECS and TCR values. The basis of this conclusion is a different set of observational data, not derived from satellite estimates of aerosol behavior but from observed values of the extent to which sunlight has been impeded from reaching the Earth’s surface, including values observed under clear-sky conditions (i.e., areas not covered by clouds). The relevant reference is Inferred Underestimation Of Aerosol Direct Effects Over Europe, China, Japan, and India. The degree of underestimation is striking in all regions, but particularly so over China and India, “where the multi-model mean yields a decrease in clear sky proxy radiation of -1.3 ± 0.3 and -1.2 ± 0.2 W m-2 decade-1, respectively, compared to observed decreases of -6.5 ± 0.9 and -8.2 ± 1.3 W m-2 decade-1.” At least part of the decreases is attributed to underestimation of aerosol emissions. If underestimation of these magnitudes has been a general phenomenon, even IPCC estimates might be on the low side.

    I don’t wish to argue that to be the case, although I wouldn’t exclude it. In fact, as with the Nature Geoscience Letter, over-interpretation would be risky. The data come from one group (although references to earlier data are included). The data are not global, due to limited information from many other regions for the studied intervals. Equally important, these are not TOA forcings but surface changes in energy. To some extent, the surface reductions represent solar energy absorbed in the atmosphere by aerosols (particularly black carbon), and this will contribute to surface warming, although less so than if received at the surface. Certainly, caution is warranted in quantifying the significance of the reported results. At a minimum, however, it might be conservative to avoid correcting modeled aerosol forcing in either a more negative direction (Allen et al) or a more positive one (the Nature Geoscience Letter). That would leave an ECS estimated range not too different from the IPCC’s, and a “best value” of about 2.4C. We obviously haven’t heard the last word on this.

    • Lukewarm words indeed.

      The audit never ends.

    • Steven Mosher

      Fred.

      your point 1 is silly speculation.
      your point 2 is embracing the uncertainty monster.
      your point 3 is addressed in a sensitivity study

      ‘I suggest that this added forcing may well have been a step in the wrong direction. Instead of making the modeled aerosol forcings less negative, it’s possible to justify increasing their negativity, which would significantly increase estimated ECS and TCR values.’

      This is covered in the SI; no need to speculate.
      In sensitivity scenario D, without the aerosol adjustment, TCR is 1.5 °C and ECS is 2.4 °C.
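
      For anyone who wants to see how the energy-budget arithmetic produces numbers like these, here is a minimal Python sketch of the standard formulas. The inputs are illustrative assumptions chosen to land near the quoted values, not the Letter’s exact figures.

```python
# Energy-budget estimates of TCR and ECS.
# All inputs are illustrative assumptions, not the paper's exact values.

F2X = 3.44          # assumed forcing for doubled CO2, W/m^2

def tcr(dT, dF):
    """Transient climate response: scale observed warming by F2X / forcing change."""
    return F2X * dT / dF

def ecs(dT, dF, dQ):
    """Effective/equilibrium sensitivity: subtract system heat uptake from forcing."""
    return F2X * dT / (dF - dQ)

dT = 0.75           # assumed warming relative to the reference period, K
dQ = 0.65           # assumed Earth system heat uptake, W/m^2

for label, dF in [("weaker total forcing (more negative aerosols)", 1.7),
                  ("stronger total forcing (aerosol adjustment applied)", 2.0)]:
    print(f"{label}: TCR = {tcr(dT, dF):.1f} K, ECS = {ecs(dT, dF, dQ):.1f} K")
```

      With the weaker forcing the sketch lands near TCR ≈ 1.5 and ECS ≈ 2.5; with the stronger forcing it lands near 1.3 and 1.9 – the direction of the aerosol-adjustment effect that Fred and Mosher are debating.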

      • Steve –

        In order:

        1. Silly speculation is one of my favorite activities.
        2. No, the underestimation of ECS by “effective climate sensitivity” estimates due to non-linear radiative responses has experimental support. The magnitude is uncertain.
        3. My point 3 is based in part on the SI – i.e., the 2.4C ECS without the aerosol adjustment, but that’s the less important part of my point 3. The more important part is derived from my link to the JGR paper by Allen et al, indicating that aerosol negativity has actually been significantly underestimated. That goes beyond the SI and could theoretically justify an ECS estimate well above 2.4 and possibly above 3C. It underscores the importance of better quantifying aerosol forcing.

      • Steven Mosher

        Fred

        ‘The more important part is derived from my link to the JGR paper by Allen et al, indicating that aerosol negativity has actually been significantly underestimated.’ Over China. With no firm diagnosis of the problem, I don’t know how you can conclude anything except ‘more study, please.’

        It would also be nice to actually look at the GEBA data; the links were busted.

    • Fred, so which method for estimating CS do you favour? Because they all look like they have big holes in them.

      Is this estimate method better than Hansen’s back in 1988?

      • HR – Yes, of course they all have holes, and I don’t know enough to give a wise answer as to which is best (or least worst). Looking at more than one method is probably a good idea. In any case, TCR may be of more practical use than any of the ECS estimates. As for the latter, my naive preference would be to take the generally well accepted value for doubled CO2 forcing of about 3.4 – 3.7 Wm-2 and then independently try to estimate feedbacks by using model-based feedback estimates constrained by observational data on water vapor, lapse rate change, albedo change, clouds, etc. Conversely, some of the recent efforts based on simple energy balance principles strike me as quite shaky, including the new Nature Geoscience correspondence and the related blog material from Nic Lewis on WUWT.

        What I most worry about is the cherry-picking in the blogosphere that seizes on some single recent report to support preconceived positions on the subject.

        On Hansen 1988, if I remember correctly, his model generated an ECS value of about 4.2C or so for doubled CO2, which now seems too high as a most probable value. I’m not sure the current range of 2 to 4.5C is far off, though, as a way of defining reasonable boundaries.

      • Heh, ‘naive’. Gotta luv ya, Fred.
        =========

      • Fred Moolten

        “A single report”?

        Well, yes, the one in the lead post is, indeed, a “single report”.

        But, in addition to this study, there have been several reports since 2011, all pointing to a 2xCO2 ECS below 2°C. Most of these have been (at least partly) based on actual physical observations rather than simply model simulations.

        Recent studies on 2xCO2 ECS
        Lewis (2013) 1.0C to 3.0C
        Berntsen (2012) 1.2C to 2.9C
        Lindzen (2011) 0.6C to 1.0C
        Schmittner (2011) 1.4C to 2.8C
        van Hateren (2012) 1.5C to 2.5C
        Schlesinger (2012) 1.45C to 2.01C
        Masters (2013)* 1.5C to 2.9C
        * not yet published

        The average range of these recent studies is 1.2°C to 2.4°C, with a mean value of 1.8°C, or about half of earlier model-based predictions cited by IPCC and very close to the Otto et al. study in the lead post.

        Max
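
        A quick check of the arithmetic in the table above, treating each study simply as a (low, high) pair; whether averaging across different methodologies is meaningful is a separate question, taken up in the next comment.

```python
# Average of the quoted ECS ranges; midpoint mean as a rough "best estimate".
studies = {
    "Lewis (2013)":       (1.0, 3.0),
    "Berntsen (2012)":    (1.2, 2.9),
    "Lindzen (2011)":     (0.6, 1.0),
    "Schmittner (2011)":  (1.4, 2.8),
    "van Hateren (2012)": (1.5, 2.5),
    "Schlesinger (2012)": (1.45, 2.01),
    "Masters (2013)":     (1.5, 2.9),
}

n = len(studies)
low  = sum(lo for lo, hi in studies.values()) / n
high = sum(hi for lo, hi in studies.values()) / n
mid  = sum((lo + hi) / 2 for lo, hi in studies.values()) / n

print(f"average range: {low:.1f} C to {high:.1f} C, mean of midpoints: {mid:.1f} C")
# -> roughly 1.2 C to 2.4 C, with a midpoint mean of about 1.8 C
```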

      • The average range of these recent studies is 1.2°C to 2.4°C, with a mean value of 1.8°C, or about half of earlier model-based predictions cited by IPCC and very close to the Otto et al. study in the lead post.

        Hmmm. Calculating a mean across studies that use different methodologies. Now I know less than nothing about statistics, and apparently no one else is bothered by your statistical methodology, but IMV it creates a statistically meaningless result.

      • Joshua, “Hmm. Calculating a mean across studies that use different methodologies. Now I know less than nothing about statistics, and apparently no one else is bothered by your statistical methodology, but IMV it creates a statistically meaningless result.”

        Careful now, that is exactly how the canonical range was established. In the world of “leave no scientist behind” all results can be meaningful and used to determine physical elements of our biosphere.

      • Careful now, that is exactly how the canonical range was established.

        Well, if it isn’t statistically valid, it doesn’t matter who’s doing it.

      • Joshua, “Well, if it isn’t statistically valid, it doesn’t matter who’s doing it.”
        That would depend on how stubbornly the ones saying that it is statistically valid defend their position.

        http://redneckphysics.blogspot.com/2013/05/monty-hall-or-not.html

        I use Monty Hall. Dr. Curry uses the Italian Flag. Part of the uncertainty is the “robustness” of the statistics. Once you get into the “games” Monty Hall is just as statistically valid as any for decision making in the face of uncertainty. Since 3C was “picked” it is best to switch doors. Which door?
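
        Since the Monty Hall analogy keeps surfacing, here is a minimal simulation of the classic problem itself (nothing climate-specific is implied): switching wins about two-thirds of the time.

```python
import random

def monty_hall(trials=100_000):
    """Classic Monty Hall game: the host always opens a losing, unchosen door."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"stay: {stay:.3f}, switch: {switch:.3f}")   # roughly 0.333 vs 0.667
```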

      • Steven Mosher

        Fred,

        You didn’t answer the question.
        The question was “which method?”
        You answered that TCR was most relevant.
        Well, paleo doesn’t give you TCR, and model sensitivity is nothing
        more than an expression of the uncertainty in aerosols.
        So what method? Preferably one that uses observational data.

      • Steve – I partially answered the “which method” question by referring to a method that starts with no-feedback sensitivity (generally agreed to be about 1.2C, for doubled-CO2 forcing of somewhere around 3.4 – 3.7 Wm-2), and then adds feedbacks based on modeling constrained by observations on water vapor, ice-albedo, clouds, etc. (see e.g., AR4 Chapter 8). All methods involve both models and observations. The most recent tendency to use simple energy balance models makes the models less complex but the results more dependent on assumptions and on the accuracy of observational data on aerosols and ocean heat uptake, neither of which is important for the method I referenced above. In fact, the energy balance estimates aren’t ECS estimates at all, but “effective climate sensitivity” estimates that probably understate ECS (see earlier discussions in the “more on the pause” thread). That’s the best I can do, because all methods are flawed.

        As for TCR, I don’t know any way around the use of observational data on aerosols, which are highly uncertain. The paper estimates TCR to be below 2C, but that’s not a new estimate – Isaac Held had a post months ago justifying 1.8C as an upper limit.
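
        A worked sketch of the forcing-plus-feedbacks route Fred describes above: take the no-feedback (Planck) response to doubled-CO2 forcing and amplify it with feedback terms constrained by observations. The feedback values below are round illustrative assumptions, not AR4’s exact figures.

```python
# Forcing/feedback route to ECS: dT = F2X / (lambda_Planck - sum of feedbacks).
# Feedback values are illustrative assumptions, not AR4's exact numbers.

F2X = 3.7               # forcing for doubled CO2, W/m^2
LAMBDA_PLANCK = 3.2     # no-feedback (Planck) response, W/m^2 per K

feedbacks = {           # W/m^2 per K; positive = amplifying (assumed values)
    "water vapour":    1.8,
    "lapse rate":     -0.8,
    "surface albedo":  0.3,
    "clouds":          0.7,   # the dominant source of uncertainty
}

lam_net = LAMBDA_PLANCK - sum(feedbacks.values())
print(f"no-feedback warming: {F2X / LAMBDA_PLANCK:.1f} K")   # ~1.2 K
print(f"ECS with feedbacks:  {F2X / lam_net:.1f} K")         # ~3 K with these inputs
```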

    • Matthew R Marler

      Fred Moolten,

      2 & 3 are worthy comments, but 1 is irrelevant.

    • Fred –

      I’ll suggest with more confidence that there is not at this point much reason to deviate from the standard ECS estimates in the IPCC reports (range 2 – 4.5, with a most probable value close to 3C). I suspect that IPCC estimates of TCR may need to be revised downward slightly, but I won’t address that issue further.

      It isn’t clear to me how to reconcile those two statements. If there is not much reason to deviate, then why do you suspect that the estimates may need to be revised downward slightly?

      • much reason to deviate from IPCC ECS…

        may need to revise downward slightly IPCC TCR…

        Clearer?

      • Yes. Thanks.

      • Steven Mosher

        Joshua, you might want to inform those 14 authors that they published with a guy who is an accountant and not a climate scientist.

      • steven –

        At the point where you repeatedly mis-characterize what I do and don’t say, the question of whether it is a simple misunderstanding on your part, or deliberate, becomes relevant.

      • Oh, and despite having been corrected, btw.

  29. That is what Hansen said:

    no missing heat
    models get ocean uptake wrong
    negative aerosols underestimated

    And

    No “non-warming” in the last two decades.

  30. Please note: if you read the New Scientist article, you will understand that they did not publish it to say that the skeptics/lukewarmers were right. Their take-home message is that, while previously they claimed that ‘something’ would have to be done by now in order to save the planet, now they are saying that there’s just enough time to ‘do something’ with the next Kyoto-type agreement.

    So rather than admit that their scare stories were wrong, they’ve just moved the goalpost back a bit, and now they’re preaching the identical message – delayed. It’s like the person who predicts that the world will end on June 13th, and when the day comes and goes, re-does the math, and announces that the actual date is August 23rd.

    • When it comes to grown-up debate

      “save the planet” needs to be relegated to the same level as “just like the Nazis” and “think of the children”

      On your general point I’d see it differently. Annan, who is an insider, seems to be suggesting this offers an opportunity for some sort of break with the past. But Gravy Trains are the hardest to derail.

  31. Richard Verney has posted some good comments at Bishop Hill:

    http://bishophill.squarespace.com/blog/2013/5/20/ecs-with-otto.html#comments


    No disrespect intended to Nic Lewis, but I consider all the claims regarding the ability to assess climate sensitivity disingenuous, even bordering on the dishonest.

    It may be possible to calculate how CO2 behaves in laboratory conditions and hence to calculate a theoretical warming in relation to increasing CO2 levels in laboratory conditions. But that is not the real world.

    In the real world, increased concentrations of CO2 would theoretically block a certain proportion of incoming solar insolation so that less solar radiance is absorbed by the ground and oceans, and it would also increase the rate of out going radiation at TOA. Both of these are potentially cooling factors. Thus the first issue is whether in real world conditions the theoretical laboratory ‘heat trapping’ effect of CO2 exceeds the ‘cooling’ effects of CO2 blocking incoming solar irradiance and increasing radiation at TOA and if so, by how much? The second issue is far more complex, namely the inter-relationship with other gases in the atmosphere, whether it is swamped by the hydrological cycle, and what effect it may have on the rate of convection at various altitudes and/or whether convection effectively outstrips any ‘heat trapping’ effect of CO2 carrying the warmer air away and upwards to the upper atmosphere where the ‘heat’ is radiated to space. None of those issues can be assessed in the laboratory, and can only be considered in real world conditions by way of empirical observational data.

    The problem with making an assessment based upon observational data is that it is a hapless task since the data sets are either too short and/or have been horribly bastardised by endless adjustments, siting issues, station drop outs and polluted by UHI and/or we do not have accurate data on aerosol emissions and/or upon clouds. Quite simply data sets of sufficiently high quality do not exist, and therefore as a matter of fact no worthwhile assessment can be made..

    The nub of the issue is that it is simply impossible to determine a value for climate sensitivity from observation data until absolutely everything is known and understood about natural variation, what its various constituent components are, the forcings of each and every individual component and whether the individual component concerned operates positively or negatively, and the upper and lower bounds of the forcings associated with each and every one of its constituent components.

    This is logically and necessarily the position, since until one can look at the data set (thermometer or proxy) and identify the extent of each change in the data set and say with certainty to what extent, if any, that change was (or was not) brought about by natural variation, one cannot extract the signal of climate sensitivity from the noise of natural variation.

    I seem to recall that one of the Team recognised the problem and at one time observed: “Quantifying climate sensitivity from real world data cannot even be done using present-day data, including satellite data. If you think that one could do better with paleo data, then you’re fooling yourself. This is fine, but there is no need to try to fool others by making extravagant claims.”

    We do not know whether at this stage of the Holocene adding more CO2 does anything, or, if it does, whether it warms or cools the atmosphere (or for that matter the oceans). Anyone who claims that they know and/or can properly assess the effect of CO2 in real world conditions is being disingenuous.

    For what it is worth, 33 years worth of satellite data (which shows that temperatures were essentially flat between 1979 and 1997 and between 1999 to date and demonstrates no correlation between CO2 and temperature) suggests that the climate sensitivity to CO2 is so low that it is indistinguishable from zero. But that observation should be viewed with caution since it is based upon a very short data set, and we do not have sufficient data on aerosols or clouds to enable a firm conclusion to be drawn.

    May 20, 2013 at 10:28 AM | richard verney
    Further to my above post, in which I suggest that it is impossible to assess climate sensitivity until we fully understand natural variation, consider a few examples from the thermometer record (Hadcrut 4) before the rapid increase in manmade CO2 emissions.

    1. Between 1877 and 1878, the temperature anomaly change is positive 1.1degC (from -0.7 to +0.4C). To produce that change requires a massive forcing. That change was not caused by increased CO2 emissions, nor by reduced aerosol emissions. Maybe it was an El Niño year (I have not checked, but no doubt Bob may clarify), but we need to be able to explain what forcings were in play that brought about that change, because those forcings may operate at other times (to more or less extent).

    2. Between 1853 and 1862 temperatures fell by about 0.6degC. What caused this change? Presumably it was not an increase in aerosol emissions. So what natural forcings were in play? Again one can see a similar cooling trend between about 1880 and about 1890, which may to some extent have been caused by Krakatoa, but if so what would the temperature have been but for Krakatoa?

    3. Between about 1861 to 1869 there was an increase in temperatures of about 0.4degC. What caused this warming that decade? It is unlikely to be related to any significant increase in CO2 emissions and/or reduction in aerosol emissions. How do we know that the forcings that brought about that change were not in play (perhaps to an even greater level, since we do not know the upper bounds of those forcings) during the late 1970s to late 1990s?

    4. Between about 1908 and 1915 again there is about 0.4degC warming. What caused this warming during this period? Are they the same forcings that were in play during the period 1861 to 1869, or are they different forcings? It is unlikely to be related to any significant increase in CO2 emissions and/or reduction in aerosol emissions. How do we know that the forcings that brought about this change were not in play (perhaps to an even greater level, since we do not know the upper bounds of those forcings) during the late 1970s to late 1990s? If the forcings that were operative during the period 1908 to 1915 were different to those that were operative during 1861 to 1869, can all these forcings collectively operate at the same time, and if so, how do we know that they were not in play during the late 1970s to late 1990s?

    One could go through the entire thermometer record and make similar observations about each and every change in that record. But my point is that until one fully understands natural variability (all its forcings and the upper and lower bounds of such and their inter-relationship with one another), it is impossible to attribute any change in the record to CO2 emissions (or for that matter manmade aerosol emissions). Until one can completely eliminate natural variability, the signal of climate sensitivity to CO2 cannot be extracted from the noise of natural variation. Period!”

    May 20, 2013 at 10:30 AM | richard verney

    • curryja | May 20, 2013 at 3:49 pm |

      Well, that’s one mainly internally consistent way to go, though far more ambitious than it appears to realize.

      It appeals to the Fallacy of Impossible Perfection, of course, but that’s hardly uncommon.

      More internally consistent would be to draw the opposite conclusion from the same fact pattern, that natural variability is impossible to extract from the CO2 signal, at this point.

      Ergo, we ought to dismiss any natural variability that is not strong enough to produce a compelling independent signal in the GMT record.

      Volcano-related particulates do this for spans of up to about a half-decade. The solar cycle once did this for about a century and a half reliably up to about 1955. But that one’s gone, now.

      The rest? Nonsensical supposed waves we have too little data to confirm or disconfirm, and too little grasp of the mechanics to stand on.

      (See, one can demand possible perfection, without resorting to fallacy.)

    • Judith

      In the same thread Richard kindly said this about my CET reconstruction to 1538

      climatereason says:
      May 20, 2013 at 12:16 am
      /////////////////////////////////////////////////

      Tony

      You are undertaking an extremely worthwhile exercise with CET, I am very impressed.

      I consider that many fail to appreciate the extent of natural variability (and the underlying strength of forcings that have brought about that change).

      I have often posted to the effect that the holy grail of climate science is the proper appreciation and understanding of natural variability. We need to fully know and understand this and its bounds. It is only after we possess a full understanding of natural variability that we can begin to eliminate it from the various data sets and thence extract a response signal (if any) to CO2.

      ——————–
      I agree with his comments on natural variability: we need to know the extent of past natural variability before we can extrapolate any CO2 element. It looks like at 280/300 ppm CO2 ceases to have any noticeable effect, according to observational evidence.

      However, I appreciate people like Mosh who will argue that radiative physics means the CO2 effect has not yet made itself felt to any extent. Many people such as Dr Mann, though, try to use the past to scare us about the future.

      Tonyb

      • Btw

        I meant to say that, whether or not I agree with Nic, he has done a very fine piece of work and deserves our admiration and thanks for all his hard work.
        Tonyb

    • Richard Verney writes “Anyone who claims that they know and/or can properly assess the effect of CO2 in real world conditions is being disingenuous.”, and is quoted by our hostess. I have been trying to say precisely this for years on Climate Etc. It is nice to know that I have company. All one can hope to do from the empirical data is to establish a maximum value for climate sensitivity.

      Now if someone whom our hostess trusts would just add that the estimations done by whatever means shed no light whatsoever on what the numeric value of climate sensitivity is, I would be more than happy.

      Please note, in theory, climate sensitivity can be measured, as I have stated many times. It is just that we do not have the technology at present to make the measurements.

      • …which is why in the paper they call them ‘estimates’.

      • Jim D and Jim Cripwell

        in theory, climate sensitivity can be measured, as I have stated many times. It is just that we do not have the technology at present to make the measurements.

        Then why in hell aren’t we spending at least three-fourths of the multibillion dollar/year “climate science budget” on developing that technology and making those measurements?

        Are we afraid of the answer we might get?

        Is it more convenient to simply hide behind model predictions which can be controlled than to take the chance that actual measurements might show even lower 2xCO2 ECS?

        Hard questions.

        No answers.

        As a rational skeptic, I smell a rat.

        Max

      • Max, you write “Then why in hell aren’t we spending at least three-fourths of the multibillion dollar/year “climate science budget” on developing that technology and making those measurements?”

        I was a little short in my remark. Not only do we not have the technology, the technology is not even at the initial research stage. We have no idea how to develop the necessary technology. It could be decades, even centuries, before we can develop it.

    • I don’t like to be too harsh, but Richard Verney appears to have almost no understanding of climate dynamics, as evidenced by his suggestion that cooling from CO2 absorption of solar radiation might approach or exceed warming due to its ability to absorb longwave radiation emitted from the Earth and atmosphere (see http://www.globalwarmingart.com/wiki/File:Atmospheric_Transmission_png).

      • I had a similar reaction. It is almost not worth reading the rest when he demonstrates his lack of physics knowledge up front with his first references being to cooling and not warming effects.

      • Well, it’s not a universal truth that ‘wrong in one, wrong in all’ works at the restaurant at the end of the known truth.
        ==========

      • You just have to skip the crazy parts, I think. His argument is that the data are too uncertain to determine climate sensitivity. If I actually believed him that climate sensitivity to 2xCO2 was unknown, I would be calling for immediate action to curtail CO2 emissions.

      • OK, if it is not unknown, then it must be known. Please, lolwot, stop with the coy act; some of us are dying to know the climate sensitivity. Jim C’s passed into acceptance, and that’s ominously omening omega.

        Please tell. You know, so we’ll know how to act.
        ==================

      • Chief Hydrologist

        It is obvious from Fred’s absorption spectra that radiation is absorbed in some frequencies emitted by the sun. There is about a 5% overlap of absorption frequencies and solar emission.

      • The paragraph that starts “In the real world” is not about this world but is full of serious errors. That’s so bad that I cannot take seriously anything written by a person who shows such lack of understanding.

      • Fred,
        1)The latest climate energy diagram from Wild, Folini and Dutton, is showing that the downward IR flux of 345 W/m2 has a virtual temperature of 279 K and has a peak at 10.38 microns.
        2) Water has an extinction coefficient of just over 1000 at 10 microns;

        http://frienergi.alternativkanalen.com/Water_Atoms_filer/watopt.gif

        3) 50% of the photons that come from the sky that hit the oceans are absorbed in the first 10 microns, 90% in the top 20 microns and 99% in the first 30 microns.

        Now if you have faith in physics, tell me what happens to the energy content of these IR photons in the first 30 microns of the ocean. Are they thermalized, increasing the temperature, or do they jiggle the molecules on the surface, making it easier for them to break free of the water and become gaseous?
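
        For readers wondering where figures like these come from, here is a minimal Beer–Lambert sketch with an assumed absorption coefficient for liquid water near 10 µm. The exact percentages depend on the coefficient used (the 50/90/99 figures above imply a somewhat different effective value), but the qualitative point, absorption within a few tens of microns, is the same.

```python
import math

# Beer-Lambert attenuation of ~10 micron IR in liquid water.
# alpha = 1000 cm^-1 is an assumed round number for the absorption coefficient.
alpha_per_um = 1000 / 1e4        # 1000 per cm expressed per micron = 0.1 um^-1

for depth_um in (10, 20, 30, 50):
    absorbed = 1 - math.exp(-alpha_per_um * depth_um)
    print(f"{depth_um:>3} um: {absorbed:.1%} of the incident IR absorbed")

# The e-folding depth is 1/alpha = 10 um, so essentially all of the downward
# IR is deposited within a few tens of microns of the surface.
```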

      • Matthew R Marler

        Chief Hydrologist: It is obvious from Fred’s absorption spectra that radiation is absorbed in some frequencies emitted by the sun. There is about a 5% overlap of absorption frequencies and solar emission.

        The energy flow diagrams of Trenberth and Fasullo and Graeme Stephens provide estimates of how much incoming radiation is absorbed in the upper atmosphere. It is not a trivial amount.

      • Graeme Stephens found that a 0.5 W/m^2 to 0.6 W/m^2 energy imbalance was going into the OHC as a heat sink. That is quantitatively consistent with what James Hansen has been finding for an ECS 3 C temperature rise for a CO2 doubling.

        http://www.iac.ethz.ch/doc/publications/Wild_etal_GlobalEnergyBalance_ClimDyn2012.pdf

        http://www.aos.wisc.edu/~tristan/publications/2012_EBupdate_stephens_ngeo1580.pdf

        To understand this, you have to place this in the context of what Mosh has pointed out as the difference between ECS and TCR.

        Hansen has found the forcing function to be about 1.5 W/m2.
        A value of 0.5 to 0.6 W/m^2 applied just to the ocean (roughly 70% of the surface) makes it about 0.75 W/m^2 concentrated. This means that any fit to global temperature data is completely consistent with the land temperature rise being 50% bigger than the SST rise.

        Now think what Mosh posted. This means that 0.75 W/m^2 is NOT acting as a forcing function over just the surface of the ocean. This goes in the denominator of the ECS calculation as a subtractive offset.

        It is a very straightforward proportional engineering calculation.
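
        One way to read the “straightforward proportional calculation” being gestured at here is the same energy-budget identity used earlier in the thread, with inputs in the ballpark cited in this comment; all the numbers below are assumptions, and this is not a reproduction of the commenter’s exact steps.

```python
# Energy-budget identity with roughly the inputs cited above (all assumed):
# net forcing ~1.5 W/m^2, system heat uptake ~0.6 W/m^2, warming ~0.8 K.
F2X, dT, dF, dQ = 3.7, 0.8, 1.5, 0.6
ecs = F2X * dT / (dF - dQ)
print(f"implied effective sensitivity: {ecs:.1f} K per CO2 doubling")   # ~3.3 K
```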

      • Chief Hydrologist

        ‘• We calculate that during the past decade Earth has been accumulating energy at the rate 0.58±0.43 W/m2, suggesting that while warming of Earth’s surface has slowed during the 2000s, energy continues to accumulate in the subsurface ocean.

        • The apparent decline in ocean heating rate after 2004 noted in other studies is not statistically robust.

        • Our results do not support the claim that there is missing energy in the system.

        • Synergistic use of satellite and in-situ ocean heat content anomaly measurements provides critical data for quantifying short- and longer-term changes in the Earth’s net TOA radiant imbalance.

        • New cloud data from satellite (MODIS, CALIPSO) are key to unraveling the role of clouds in a changing climate.’

        http://ceres.larc.nasa.gov/documents/STM/2011-10/16_.Loebpptx.pdf

        As I keep saying – the ocean heat content data is consistent with changes in the TOA flux. It has nothing to do with greenhouse gases. It is an order of magnitude greater than the theoretical increase in greenhouse gas forcing. It all happened in shortwave.

      • > OK, if it is not unknown, then it must be known.

        Let’s not forget that it could be an unknown known:

        In the case of the disappearing bees, there are things we know that we know (their vulnerability to pesticides) and things we know that we don’t know (say, how the bees react to human-caused radiations). But there are, above all, the unknown unknowns and the unknown knowns. There are dimensions of how bees interact with their environs which are not only unknown to us, but which we are not even aware of. And there are many “unknown knowns” in our perception of bees: all the anthropocentric prejudices that spontaneously colour and bias our study of them.

        http://en.wikipedia.org/wiki/Unknown_known

    • Matthew R Marler

      curryja: The second issue is far more complex, namely the inter-relationship with other gases in the atmosphere, whether it is swamped by the hydrological cycle, and what effect it may have on the rate of convection at various altitudes and/or whether convection effectively outstrips any ‘heat trapping’ effect of CO2 carrying the warmer air away and upwards to the upper atmosphere where the ‘heat’ is radiated to space.

      How it warms my heart to read that!

      • i once tried to get C. to comprehend that warming by CO2 might somehow be more effectively transported poleward, but he convinced me I was probably wrong.
        ==========

      • It’s nonsense. Also you make it look like judith curry said that. She didn’t. She was quoting someone else.

      • Matthew R Marler

        lolwot: She was quoting someone else.

        She included it in a list of “good” comments.

    • HR | May 20, 2013 at 5:00 pm |

      Until possible to prove, they’re unproven, and impossible to prove.

      How ought we proceed upon finding an impossibility?

      Agree it’s impossible, and that to the degree of its effects we have Uncertainty.

      When we account for Uncertainty, we must take into account equally the high and the low side in all cases.

      Will the future produce heavy snow in winter, or mild seasons with no snow pack, and consequent arid springs and summers? Large uncertainty demands we prepare for both in prudent policies, and in all years, thereby suffering the opportunity cost of overpreparing for each Risk.

      Floods and droughts in plenty and absent entirely, we must prepare for, under broad uncertainty ranges. Heatwaves and cold spells, blankets and air conditioners, the whole gamut become equally probable because they are equally uncertain. And while the temptation is there to listen to prognosticators with some harebrained hypothesis that we’ll have such-and-such weather this year, we’d be morons under Uncertainty to listen to them and plan only for what they predict while abandoning preparations for the equal other likelihoods.

      Verney’s argument is pure wish-fulfillment, the most abject form of denialism possible.

      While what the mathematics tells us is quite pessimistic, in that it demands we be pessimistic about mutually exclusive outcomes because we can’t know which is likelier, it is what it is. The cost of such Uncertainty is the cost of burning enough carbon to make the math turn out that way. It’s the cost of that one extra Forcing in the climate that wasn’t there before 1950. It’s the cost of the ‘cheap energy’ policy that drives the fossil industries by subsidy and favorable legislative climate and failure to price the Commons by the law of supply and demand.

      In short, it’s the cost of corporate communism.

    • @jc: increased concentrations of CO2 would theoretically block a certain proportion of incoming solar insolation so that less solar radiance is absorbed by the ground and oceans,

      What is that proportion? I would have thought it extremely low. In any event it heats the atmosphere, albeit at an average altitude of 7 km.

      and it would also increase the rate of out going radiation at TOA.

      I would have thought it decreased the rate, at least while not in equilibrium—once equilibrium is restored the outgoing rate increases again to become exactly equal to the incoming radiation. How do you figure it increases the rate?

      • Matthew R Marler

        Vaughan Pratt: I would have thought it decreased the rate, at least while not in equilibrium

        Outgoing long wave radiation is what is radiated in the outbound direction by upper atmosphere CO2; double that CO2, other things being equal, and you double the net outbound radiation rate. As usual, other things will not be equal, but the fraction of the upper atmospheric heat that is not a result of radiation of ILWR from the surface (e.g. from thermals and the hydrologic cycle) might be radiated outward at an increased rate if CO2 concentrations double.

        Doubling CO2 concentration will probably change the vertical distribution of heat over land and water in a way not quite modeled by the equilibrium assumptions.

      • The effective radiating level rises with more CO2, and the new level is colder, so it reduces the outgoing long wave, all things being equal.

      • @MRM: double that CO2, other things being equal, and you double the net outbound radiation rate

        Spoken like a statistician. :)

        What you’re ignoring here is that when the CO2 doubles in any given parcel of air, it also doubles in the whole air column above it. So yes twice as much radiation is emitted by a (sufficiently thin) parcel, but the extra CO2 it now has to pass through will trap it before it can pass to TOA.

        A good way to think about this is to put yourself in the position of a satellite observing the TOA from above. At any given wavelength absorbed by CO2 you can only see down so far into the atmosphere before it looks opaque. The opaque gas you see at that wavelength has an apparent surface amounting to the photosphere for that wavelength. Increasing the amount of CO2 increases the altitude of that surface.

        @Jim D: The effective radiating level rises with more CO2, and the new level is colder, so it reduces the outgoing long wave, all things being equal.

        Exactly right (“effective radiating level” is what I was calling “photosphere”).
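
        A back-of-envelope sketch of the point Jim D and Vaughan Pratt are making: raise the effective emission level along a fixed lapse rate and the colder new level emits less, so outgoing longwave drops until the column warms enough to restore balance. The 150 m rise per doubling is an assumed round figure; the rest are textbook numbers.

```python
# Effective-radiating-level picture of the CO2 greenhouse effect.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
LAPSE = 6.5e-3        # mean tropospheric lapse rate, K per m
T_EMIT = 255.0        # current effective emission temperature, K

def olr(t):
    """Outgoing longwave radiation from an effective emission temperature t."""
    return SIGMA * t ** 4

dz = 150.0            # assumed rise of the emission level for doubled CO2, m
t_new = T_EMIT - LAPSE * dz

print(f"new emission temperature: {t_new:.1f} K")
print(f"OLR deficit before the column warms: {olr(T_EMIT) - olr(t_new):.1f} W/m^2")
print(f"warming needed to restore balance: {LAPSE * dz:.1f} K")  # no-feedback response
```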

      • Matthew R Marler

        Vaughan Pratt: What you’re ignoring here is that when the CO2 doubles in any given parcel of air, it also doubles in the whole air column above it. So yes twice as much radiation is emitted by a (sufficiently thin) parcel, but the extra CO2 it now has to pass through will trap it before it can pass to TOA.

        I think that the net effect on TOA outbound radiation that would result from doubling CO2 can’t be calculated on present evidence. Recall that doubling the upper atmosphere CO2 concentration would increase (double?) the amount of the incoming solar radiation that is absorbed in the upper atmosphere before reaching the ground or troposphere. According to the Graeme Stephens NatGeosci article linked elsewhere by David Springer, that is about 22% of all incoming solar radiation: 75 W/m^2, nearly equal to the 88 W/m^2 carried into the upper atmosphere via convection of latent heating. 75 W/m^2 is also 45% of the energy flux that is absorbed by the Earth surface. It is too large an amount of energy to ignore.

        Since you like logic and philosophy, I thought I’d make two more general comments.

        1. High dimensional non-linear dissipative systems seldom have equilibria, but equilibrium calculations are frequently made without the possibility of equilibrium being shown. If an equilibrium or approximate equilibrium can exist, it is the result of all of these energy flows, not their driver. The equilibrium approximation may be a “boundary condition”, but there are an infinite number of ways that the boundary condition can be met. What actually will be the result of a CO2 doubling can’t be known until the effects of CO2 doubling on all the fluxes are known.

        2. The transient climate sensitivity depends entirely on the changes of these heat fluxes during the transient. For the transient climate sensitivity, the equilibrium calculation does not, even in the best of circumstances, provide “boundary conditions”.

        I am glad that you drew attention to an omission from my comment. Statisticians usually write about omissions (model bias) and random variation. I am happy to write as a statistician, as I nearly always address model inadequacies and natural variation.

      • Mattstat, “Recall that doubling the upper atmosphere CO2 concentration would increase (double?) the amount of the incoming solar radiation that is absorbed in the upper atmosphere before reaching the ground or troposphere. ”

        That doesn’t sound right. Assuming you mean double the amount of SW absorbed in the atmosphere by CO2, it wouldn’t. You would have the same Ln relationship but be on a flatter region of the curve. Water vapor interaction with both solar and increased CO2 is more interesting. The increase in upper level convection increases the percentage of ice crystals and the potential for mixed phase shenanigans. I call it the atmospheric heat pipe effect, but ice crystals, solar and ozone can produce a major negative feed back. In fact, mixed phase clouds were a large part of the atmospheric absorption missed by Trenberth, ~20Wm-2 worth of OLR.

      • Matthew R Marler

        Capt Dallas: That doesn’t sound right. Assuming you mean double the amount of SW absorbed in the atmosphere by CO2

        No, I meant double the amount of CO2, uniformly through the atmosphere (it is assumed to be rapidly well mixed — otherwise no calculations are possible at all.) The transient climate sensitivity to a doubling of CO2 presupposes a doubling of CO2, so that is what I assumed.

      • Mattstat, right, CO2 is well mixed enough to be approximated as uniform. When you mention SW and CO2 at the same time, that kinda throws things off. Pratt is right that additional solar absorption by CO2 is likely negligible. Other solar absorption and the ratio of total atmospheric absorption to surface absorption is a more significant issue.

      • Matthew R Marler

        Capt Dallas: Pratt is right that additional solar absorption by CO2 is likely negligible. Other solar absorption and the ratio of total atmospheric absorption to surface absorption is a more significant issue.

        You and I have dueling conjectures. I look forward to some research. Unless almost all incoming lwr is absorbed by CO2 at high altitudes, I do not see how doubling the CO2 concentration could have a “negligible” effect.

        Who here knows what fraction of incoming ilwr is already absorbed by CO2 at high altitudes? According to the CO2 absorption spectrum posted by Fred Moolten, it is unlikely to be 0%.

        “Negligible” is hard to define: a doubling of CO2 is hypothesized to produce a slight increase in net radiation at the surface, producing a 1% increase in Earth equilibrium surface temperature in the distant future, and a ~0.4% increase in earth surface temperature 70 years hence (the target article). Potentially biologically important and intuitively meaningful temperature changes are small in absolute (physical) terms, where the computations have to be carried out.
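
        For what the percentages in the previous paragraph refer to, a two-line check of the arithmetic on the absolute (Kelvin) scale; the 288 K mean surface temperature and the 3 K / 1.3 K warming figures are the usual round numbers, assumed here for illustration.

```python
# Warming expressed as a fraction of the absolute surface temperature (assumed 288 K).
T_SURF = 288.0
for label, dT in [("equilibrium (~3 K)", 3.0), ("transient (~1.3 K)", 1.3)]:
    print(f"{label}: {dT / T_SURF:.2%} of the absolute surface temperature")
# -> roughly 1.0% and 0.45%
```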

    • Steven Mosher

      Huh Judith?

      “The nub of the issue is that it is simply impossible to determine a value for climate sensitivity from observation data until absolutely everything is known and understood about natural variation,”

      That’s hardly a defensible position.

      1. It is possible to determine a value. Nic just did.
      2. There isn’t a single field where we know absolutely everything.
      3. At this stage in his argument he calls natural variation a thing to be explained, and at the end he calls it noise. In truth you don’t have to understand natural variation, you just have to be able to bound it, and then only if you are interested in doing attribution, which I think is overrated. Attribute all the warming since 1750 to today to “natural variation” and it’s still a bad idea to burn all the carbon. Attribute all the warming since 1750 to today to natural variation and it’s still a bad idea to rely on coal.

    • The “nub” of Richard Verney’s comments (as I see it):

      The nub of the issue is that it is simply impossible to determine a value for climate sensitivity from observation data until absolutely everything is known and understood about natural variation, what its various constituent components are, the forcings of each and every individual component and whether the individual component concerned operates positively or negatively, and the upper and lower bounds of the forcings associated with each and every one of its constituent components.

      Uncertainty writ large (and the key problem with all ECS estimates, including the latest ones).

      Max

      • It’s not possible to determine TCR with absolute precision until everything is known.

        It’s by now fully possible and practical to do it reliably enough to serve as a basis for decision making. There are difficult problems in decision making, but this is not one of them.

      • Peter Lang

        Pekka Pirla,

        There are difficult problems in decision making,

        The only real difficulty is educating the so called ‘Progressives’ to stop blocking progress. If we could get through their thick skulls – such as their representatives who blog on this site – then they could start educating their comrades, and the “difficult problems in decision making” could be overcome within a decade or less.

      • The world would be simple if either the pure libertarians or the regulators were fully right. If either pure view were correct, it would have won already.

        Unfortunately the reality is not that simple; we must search for compromises, and finding wise compromises is always difficult.

      • Peter Lang

        Pekka,

        I’ll try to put it another way because you seem an expert at obfuscation.

        You said in the comment I replied to:

        There are difficult problems in decision making,

        .

        My point is that the ‘difficult problems‘ are largely due to ideology. They are not technical.

        Your continual advocacy for uneconomic renewable energy is a classic example. If you refuse to accept the facts on the economics and prospects of renewable energy, and can’t be educated on this (despite holding a chair of energy economics), how on Earth can we expect to make progress with those even more strongly attached to their ideological beliefs?

        The ‘difficult problems’ you refer to are clearly ones of ideology blocking progress.

    • Steven Mosher

      Let’s take another example of Verney overstating the case. What you see is an unjustified certainty on his part about observations:

      “The problem with making an assessment based upon observational data is that it is a hapless task since the data sets are either too short and/or have been horribly bastardised by endless adjustments, siting issues, station drop outs and polluted by UHI and/or we do not have accurate data on aerosol emissions and/or upon clouds. Quite simply data sets of sufficiently high quality do not exist, and therefore as a matter of fact no worthwhile assessment can be made..”

      1. The datasets are not too short to give us an estimate. He merely asserts this. This tactic is akin to those who tell us that 15 years is “too short” to tell us anything. The length of a dataset informs your uncertainty; it does not preclude making conclusions with the proper error bars. (A numerical sketch of this point follows after this comment.)

      2. He claims the datasets have been bastardized by adjustments when all the published science shows otherwise.

      3. Siting issues. There is one paper on siting issues, Fall et al. It showed no effect.

      4. Station drop outs: the great thermometer drop out is a myth. It has been shown numerous times that the estimates do not change by either
      using fewer stations, or more stations, or only those stations which have complete records.

      5. UHI. All published papers on UHI find values of effectively zero bias in the record. Being generous, we can say there is an upper bound on the UHI bias of 0.1C per decade in the period that this paper used. Since the land is 1/3 of the total, this bias is within the error bands of the paper; they used ±0.2C.

      It is not a matter of fact that no worthwhile assessment can be made. If folks like, they can adjust the temperature down for any of these factors or adjust the error bars. The net result will be to shift the PDF to the right, as these issues are focused on moving the numerator down from 0.75C (see the SI). Moreover, the paper used HadCRUT4, which we know is biased low.
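
      As referenced in point 1, a minimal sketch of how record length feeds into the error bars on a trend rather than forbidding a conclusion. The trend, noise level and record lengths are assumed round numbers, and the white-noise assumption understates the real uncertainty (temperature series are autocorrelated); only the scaling with length is the point.

```python
import random

# Fit an ordinary least-squares trend to synthetic "temperature" series of
# different lengths and watch the 2-sigma error bar on the trend shrink.
# True trend 0.015 K/yr, white noise sigma 0.1 K (assumed round numbers).
random.seed(0)

def trend_and_se(n_years, slope=0.015, sigma=0.1):
    t = list(range(n_years))
    y = [slope * ti + random.gauss(0.0, sigma) for ti in t]
    tbar = sum(t) / n_years
    ybar = sum(y) / n_years
    sxx = sum((ti - tbar) ** 2 for ti in t)
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    resid = [yi - ybar - b * (ti - tbar) for ti, yi in zip(t, y)]
    se = (sum(r * r for r in resid) / (n_years - 2) / sxx) ** 0.5
    return b, se

for n in (15, 40, 100):
    b, se = trend_and_se(n)
    print(f"{n:>3}-yr record: trend = {b:+.3f} +/- {2 * se:.3f} K/yr (2 sigma)")
```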

      • 1. The datasets are not too short to give us an estimate. He merely asserts this. This tactic is akin to those who tell us that 15 years is “too short” to tell us anything. The length of a dataset informs your uncertainty; it does not preclude making conclusions with the proper error bars.

        This seems worthy of repetition for emphasis – and the logic of it seems well-abused by combatants on both sides of the debate.

        So steven, have you seen anyone attach error bars to the "global warming has stopped/paused" claims – which are based on drawing conclusions from relatively short-duration data?

  32. Judith wrote: “The New Scientist has an article on this paper that is worth reading.” What ($*^$)%#&** !!!

    Paragraph 3 includes: “Temperatures are currently rising faster than they have been for 11,000 years.” Over what period is this true? Temperatures haven’t risen over the last 17 years. Do we really have data for the past 11,000 years with the accuracy and resolution to show that 20th century warming is unprecedented? The warming rate in the second half of the 20th century was comparable to earlier warming rates in the instrumental record (that haven’t been attributed to GHGs).

    Paragraph 4 says: “Governments have promised to limit the world to 2 °C of warming – the agreed threshold for dangerous climate change.” SOME governments currently have a 2 degC objective, but no government on its own can promise to limit warming to 2 degC. That requires an international treaty. No treaty can limit warming to 2 degC, because no one can specify the maximum GHG concentration consistent with keeping warming below 2 degC. At best, a probability density function can be used to estimate that 450?, 500?, or 600? ppm of carbon dioxide will provide a 95%, 90%, or 50% chance of keeping warming below 2 degC (a sketch of this kind of calculation follows at the end of this comment). Due to uncertainty about carbon sinks, we can’t accurately calculate how much CO2 we can emit and still keep atmospheric levels below these thresholds. And finally we can’t keep anthropogenic warming below 2 degC, because that requires accurate knowledge of the global temperature before the Industrial Revolution, smack in the middle of a “naturally cold” period, the LIA, and we don’t have a clear understanding of the natural variability that produced the LIA.

    Paragraph 9: “The team focused on how much hotter the planet will be in the year that carbon dioxide concentrations reach double their pre-industrial value. On current trends, that will happen between 2050 and 2070. Previous studies had suggested temperatures would rise up to 1.6 °C, but Otto found a temperature increase of 1.3 °C.” In AR4, the central estimate for TCR was 2.0 degC. New Scientist is trying to pretend that this is a minor refinement of previous estimates, not a dramatic reassessment.

    Paragraph 10: “It might buy us five or ten years,” agrees Chris Forest of Penn State University in University Park, although he cautions that the problem hasn’t gone away.” Laughable.

    Paragraph 16: “‘The observations are telling us one thing and the climate models are telling us another,’ says Forest. He thinks the most likely range is between 2.5 and 3 °C.” Why? Without a scientific rationale, this is simply a random opinion.

    Paragraph 17: “For the last few years, governments have been planning to sign a deal in 2015 that will come into force in 2020.” The EU may be HOPING to negotiate such a deal.
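
    As flagged in the Paragraph 4 point above, here is a minimal sketch of the kind of PDF calculation being described: assume a distribution for ECS and compute the chance that eventual equilibrium warming stays under 2 °C at a given CO2 concentration. The lognormal parameters are arbitrary assumptions, and the calculation uses equilibrium rather than realized warming, so the probabilities are not comparable to the illustrative figures in the comment; only the mechanics are the point.

```python
from math import log
from statistics import NormalDist

# Assumed lognormal ECS distribution: median 3 K, geometric sd 1.5 (arbitrary).
MU, SIGMA = log(3.0), log(1.5)
norm = NormalDist()

def p_below_2C(ppm, baseline=280.0, target=2.0):
    """Chance that equilibrium warming at this CO2 level stays under `target` K."""
    doublings = log(ppm / baseline) / log(2.0)
    ecs_needed = target / doublings          # ECS must be below this value
    return norm.cdf((log(ecs_needed) - MU) / SIGMA)

for ppm in (450, 500, 600):
    print(f"{ppm} ppm: P(equilibrium warming < 2 K) = {p_below_2C(ppm):.0%}")
```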

    • Fisk a fillet of
      Physicly fishy science.
      Sensitive knife edge.
      ============

      • Kim,
        NIce.
        PG

      • David Springer

        Mosher the air temperature above melting ice (presuming surface melt) is 0C or higher. The question was trivial and pointless so I ignored it. Ice mass is overwhelmingly lost by melting at the bottom of the glacier from geothermal heat and secondarily from calving where it meets the ocean. Surface melt is insignificant in comparison. Mass is gained or lost by snowfall atop the glacier being more or less than the loss from the geothermal melt and calving both of which are relatively constant. Snow accumulation rate is the primary free variable.

      • Steven Mosher

        David, the issue I raised was the change in Greenland air temps.
        This is not about bottom melt, which nobody questioned.

        Write that down.

    • “Temperatures haven’t risen over the last 17 years”

      Not so; check out the mass loss of the Greenland ice sheet:
      http://www.arctic.noaa.gov/reportcard/greenland_ice_sheet.html

      • David Springer

        I poured a glass of icewater yesterday at sunset. The temperature was falling but the ice still melted. So how does Greenland ice loss show that temperature was rising while it was melting?

        You say some pretty dumb things but that was dumb even for you.

      • lolwot

        The subject was global temperature, not the Greenland Ice Sheet.

        Duh!

        Max

      • It’s not ME saying that. It’s the esteemed Pulkovo Astronomical Observatory in Russia.

        “In 2005 data from NASA’s Mars Global Surveyor and Odyssey missions revealed that the carbon dioxide “ice caps” near Mars’s south pole had been diminishing for three summers in a row.

        Habibullo Abdussamatov, head of space research at St. Petersburg’s Pulkovo Astronomical Observatory in Russia, says the Mars data is evidence that the current global warming on Earth is being caused by changes in the sun.”
        http://news.nationalgeographic.co.uk/news/2007/02/070228-mars-warming.html

        See, according to a famous Russian scientist, mass loss of ice at the pole of a planet is a measure of whether that planet is warming or cooling.

        Don’t shoot the messenger Springer!

      • Mike Jonas

        lolwot – You say ““Temperatures haven’t risen over the last 17 years”
        Not so, check out the mass loss of the greenland ice sheet”

        On Earth, mass loss of an ice sheet over a period of time most likely indicates rising temperature. But the ice sheet will then continue to lose mass after the air temperature stops rising, because the oceans will still be warm. (Changes on an annual basis are mainly caused by winds and currents – I can find paper(s) to quote if needed.)

        On Mars, there are no oceans, so I expect the rules are different. What Abdussamatov has concluded may well be correct even though his logic would be incorrect if applied on Earth.

      • The claimed warming on Mars is between 2002 and 2005. But solar output dropped from 2002 to 2005.

        And did the Earth really warm between 2002 and 2005?

        Whichever way you cut it the claim that there is evidence here for the Sun causing warming on Earth is rubbish.

      • David Springer

        How does that prove it was getting warmer on Mars during those three seasons? It may have gotten warmer in the first season then held a constant temperature thereafter.

        In fact that’s what happened on the earth. It got warmer during 1997 and 1998 from an extreme El Niño, then held the higher temperature constant for the next ten years; then, for the next three years (to now), temperature has declined a lot from back-to-back La Niñas.

        As far as you being a messenger, that would be a compliment you don’t deserve, as you didn’t transmit the message but rather your own mistaken inferences taken from it.

      • Steven Mosher

        Temperature change over Greenland has been pretty low relative to other land masses at its latitude.

        What’s the temperature above melting ice? Hehe.

      • David Springer

        Greenland’s ice sheet predominantly melts from below not from above. Calving is the second largest source of ice loss. Melting from the top down is a marginal phenomenon. This is encyclopedic (i.e. basic) knowledge. Ice sheets are warmer at the bottom than at the top due to geothermal heat.

        Contrary to urban legend ice sheets grow when the air temperature is warmer not colder. That’s because, calving aside, mass is added at the top from snowfall and lost from the bottom from geothermal melting. Warmer temperatures cause more snowfall and greater mass accumulation.

        I can’t tell from Mosher’s drive-by snark if he’s aware of how ice sheets grow and shrink or not but clearly loltwat hasn’t a clue.

      • Alexej Buergin

        This reminds me of the dumb fellow who said he is ten times as smart as Burt Rutan.

      • Steven Mosher

        You didn’t answer the question, David. What’s the air temp above melting ice, or melting snow for that matter?

        But for grins, tell me about the seasonal changes in Greenland’s temperature compared to other areas at the same latitude.

        Come on, tell me.

      • Hi Mosh

        We have some interesting coastal and ice sheet data for Greenland

        Professor Phil Jones illustrated temperatures in Greenland during 1784-2004 (published 2006), which usefully complement the NOAA and NASA (Arctic-wide) temperatures. His graphic showing coastal instrumental records comes from page 10, figure 10, of the following link.

        http://www.cru.uea.ac.uk/cru/data/greenland/vintheretal2006.pdf
        This link provides the composite data used: http://www.cru.uea.ac.uk/cru/data/greenland/swgreenlandave.dat

        “The warmest year in the extended Greenland temperature record is 1941, while the 1930s and 1940s are the warmest decades. Two distinct cold periods, following the 1809 (‘‘unidentified’’) volcanic eruption and the eruption of Tambora in 1815 make the 1810s the coldest decade on record.”

        The Figure 10 composite raises some intriguing issues. The warmest two consecutive decades in the Arctic record, the 1930s and 40s, did not apparently cause as much melting as the current shorter, but becoming warmer, Arctic warm period commencing around 2000. That data can be seen in the NOAA and NASA graphs and also in the link below, where the data were brought up to 2011.

        http://www.skepticalscience.com/Greenland_ice_sheet_summer_temperatures_highest_in_172_years.html

        Note the different parameter used here, ‘Greenland ice sheet summer surface air temperatures: 1840-2011’, rather than the coastal temperatures used in the CRU study; whilst not like for like, the general trends can be seen.

        It will be interesting to see if the period 2000-2020 eventually exceeds the two warmest consecutive decades, the 1930s and 1940s. Perhaps everyone can bet on it like they do on annual ice extent?
        tonyb

    • Frank

      +100

    • Peter Lang

      +200

  33. David Springer

    Less red meat and more science here written by one of the paper’s authors Nic Lewis:

    http://wattsupwiththat.com/2013/05/19/new-paper-shows-transient-climate-response-less-than-2c/

    Question of the Day:

    Is the science settled yet?

  34. dennis adams

    Many have called the paper a lot of different things. I call it progress. It is always progress when some on this thread start moving the targets. They will remain nameless, but it is enjoyable to watch nonetheless.

  35. The journey is slow and tortuous, but 2xCO2 temperature response estimates (at some arbitrarily established “equilibrium”) are slowly but surely coming down from their past lofty values.

    We had 3.2C (with a big fat tail) for many years, as an almost holy number engraved in stone; now we have ~2C (with no tail). Other recent estimates are around 1.6C.

    Where is the journey headed?

    If we believe Richard Lindzen, we still have quite a way to go (to a value well below 1C), but the journey will continue to be slow and painful.

    And, as it continues, the once highly lauded, Nobel Peace Prize winning IPCC will become increasingly irrelevant as its CAGW premise disappears into oblivion.

    Sic transit gloria.

    Max

    PS It couldn’t happen to a more deserving bunch.

  36. Mike Jonas

    ECS ~ 2 C
    “Humanity has a second chance to stop dangerous climate change.”

    This is plain daft. ‘ECS ~ 2 C’ means there is no dangerous climate change. There is nothing that needs stopping.
    [PS. I’m not claiming that the paper is correct.]

    • ECS ~2C means CO2 is and will be the dominant driver of climate of the 20th, 21st and 22nd century.

      It means the Earth is scheduled to warm more than 4C over a matter of a few centuries, which is far higher than natural variation and takes Earth into a super-interglacial state it hasn’t been in for millions of years.

      Saying that there is no dangerous climate change implies that you know what this uncharted territory entails.

      • ECS ~2C means CO2 is and will be the dominant driver of climate of the 20th, 21st and 22nd century.

        Probably not.

        It means the Earth is scheduled to warm more than 4C over a matter of a few centuries, which is far higher than natural variation and takes Earth into a super-interglacial state it hasn’t been in for millions of years.

        Only if you accept the IPCC’s tech-denying “business as usual” scenario. That should be “business as we can see with our eyes closed”: most technological change follows some sort of exponential path, as in Moore’s “Law”, which says that the cost of computing power is cut in half every ~18 months. PV has been on a similar cycle with a halving time of somewhere between 2 and 4 years, so 40 years from now the cost of PV hardware will be around or under 1/1000 of its current cost, assuming real “business as usual”.

        Why go to the trouble of digging coal out of the ground, or methane hydrate off the ocean floor, when sunlight is everywhere?
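
        A quick check of the halving arithmetic above (a minimal sketch; the 2-4 year halving time and the 40-year horizon are the commenter’s assumptions, not established facts): cost after t years is just cost_now * 0.5 ** (t / halving_time).

        ```python
        # Relative PV hardware cost after 40 years for assumed halving times (in years).
        def relative_cost(years, halving_time):
            return 0.5 ** (years / halving_time)

        for halving_time in (2, 3, 4):
            print(halving_time, relative_cost(40, halving_time))
        # A 4-year halving time gives ~1/1000 of today's cost, which is where the
        # "around or under 1/1000" figure comes from; a 2-year halving time gives ~1e-6.
        ```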

      • lolwot

        ECS ~2C means CO2 is and will be the dominant driver of climate of the 20th, 21st and 22nd century.

        It means the Earth is scheduled to warm more than 4C over a matter of a few centuries, which is far higher than natural variation and takes Earth into a super-interglacial state it hasn’t been in for millions of years.

        Nope, lolwot.

        You are mistaken.

        CO2 is at 394 ppmv today.

        It might rise to around 640 ppmv by the end of this century – possibly less, if the world starts building more nuclear plants instead of new coal plants.

        It could theoretically rise to around 980 ppmv when all inferred possible recoverable fossil fuel resources on our planet have been exhausted, some day in the far distant future.

        So this means “equilibrium” warming of:

        2.0°C * ln(640 / 394) / ln(2) = 1.4°C by 2100 (Yawn!)

        And

        2.0°C * ln(980 / 394) / ln(2) = 2.6°C (maximum ever warming, some day in the far distant future when all fossil fuels are 100% used up)

        Sorry, lolwot.

        Your fear mongering doesn’t work anymore.

        NO SALE.

        Max

      • remember to add the warming so far

      • I’ll do it:

        2.0°C * ln(980 / 280) / ln(2) = 3.6°C
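
        For what it is worth, the arithmetic in this exchange checks out; a minimal sketch (equilibrium warming = ECS times log2 of the concentration ratio, with the ppmv values used above):

        ```python
        import math

        def equilibrium_warming(c_final, c_reference, ecs=2.0):
            """Equilibrium warming in C for a CO2 rise from c_reference to c_final (ppmv)."""
            return ecs * math.log2(c_final / c_reference)

        print(round(equilibrium_warming(640, 394), 1))   # 1.4 C (640 ppmv vs today's 394)
        print(round(equilibrium_warming(980, 394), 1))   # 2.6 C (980 ppmv vs today's 394)
        print(round(equilibrium_warming(980, 280), 1))   # 3.6 C (980 ppmv vs pre-industrial 280)
        ```

        Which baseline is the relevant one is exactly what the “remember to add the warming so far” exchange is about.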

      • lolwot

        Remember to add the warming so far

        Why?

        Max

      • David Springer

        Actually Moore’s Law states that the number of components on integrated circuits doubles approximately every two years. Computing power is closely related to component density but that’s a corollary to Moore’s Law stated by David House (senior VP at Intel in charge of microcomputer development) wherein CPU performance doubles every 18 months due to greater number of components plus higher clock speed.

      • Steven Mosher

        AK

        “Only if you accept the IPCC’s tech-denying ‘business as usual’ scenario. That should be ‘business as we can see with our eyes closed’: most technological change follows some sort of exponential path, as in Moore’s ‘Law’, which says that the cost of computing power is cut in half every ~18 months. PV has been on a similar cycle with a halving time of somewhere between 2 and 4 years, so 40 years from now the cost of PV hardware will be around or under 1/1000 of its current cost, assuming real ‘business as usual’.”

        That’s doubtful. If you look at the total system, panel + inverter, then you can see that the panel side will go down in cost, albeit more slowly than Moore’s law, but the components in the inverter, last I looked, had slim chance of following a similar cost curve. Since the system cost is dominated by the panel cost, you will see a drop that will reach an asymptote at the cost of the inverter.
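
        A toy illustration of that asymptote argument (a sketch with made-up starting costs, not Mosher’s numbers): if the panel cost halves every few years while the inverter cost stays roughly flat, the total system cost flattens out at the inverter cost.

        ```python
        def system_cost(years, panel0=1.0, inverter=0.3, halving_time=3.0):
            """Total system cost per watt (arbitrary units) after `years`."""
            return panel0 * 0.5 ** (years / halving_time) + inverter

        for t in (0, 10, 20, 40):
            print(t, round(system_cost(t), 3))
        # 1.3 -> ~0.4 -> ~0.31 -> ~0.3: the decline stalls near the assumed
        # inverter cost of 0.3, i.e. the asymptote described above.
        ```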

      • @Steven Mosher…

        […] the components in the inverter, last I looked had slim chance of following a similar cost curve.

        And what about the components in Nocera’s artificial leaf? The designs I’m looking at cut out a lot of the “middlemen”.

      • @Steven Mosher…

        On another thread, Chief Hydrologist posted a comment including a video of Carbon Engineering’s air capture approach. In a later comment, I posted a screenshot of their main capture technology, which includes drawing air over a capture fluid, then using energy (presumably heat) to extract the CO2.

        I also posted a screenshot of their vision of the overall process creating fuel from CO2 and hydrogen. I tried to post a picture of the process I envisioned, but accidentally linked to the wrong picture.

        But here’s the process I envision. It uses hydrogen, presumably from Nocera’s artificial leaves or some other very efficient process, and feeds it directly into a bio-reactor where tailored “bacteria” extract the CO2 from the carbonate solution, and combine it with hydrogen to produce methane. The process is fairly energy efficient, just enough downhill to drive the CO2 extraction process.

        I don’t know what sort of capture solution Carbon Engineering is using, but I envision something alkaline, with a pH around 9. It will contain large amounts of an enzyme called carbonic anhydrase (CA), which catalyses the transition between CO2 and carbonic acid. There are alkaliphile “bacteria” (actually archaea) that appear to produce CAs that work best at about that pH; some gene surgery ought to allow it to be produced in industrial quantities.

        There are also methanogens that operate happily at that pH, so the entire reaction could take place at that pH, although there would probably have to be a controlled diffusion membrane separating the carbonate solution from the bio-nutrient solution. Hydrogen could be inserted as a gas in high-diffusion capillaries, and the methane could be allowed to bubble out in the bio-nutrient solution. (Of course, the “wild” methanogens would probably require substantial GM in order to do the needed job.)

    • Mike Jonas

      “I’m not claiming that the paper is correct.”

      But your conclusion is 100% correct:

      ‘ECS ~ 2 C’ means there is no dangerous climate change. There is nothing that needs stopping.

      But in the eyes of the IPCC, the only thing that “needs stopping” is the inclusion of this new data into its AR5 report, because “there is no dangerous climate change” is NOT a message that IPCC wants to send.

      But, hey – the new data came out after the IPCC “deadline”!

      Whew! Saved by the bell!

      Max

      • Manacker, can you explain what makes 3C ECS dangerous but 2C ECS not dangerous, in your opinion?

      • lolwot

        You ask:

        can you explain what makes 3C ECS dangerous but 2C ECS not dangerous in your opinion?

        See my post above for why 2C ECS is not “dangerous”.

        I never believed that 3C ECS was “dangerous”, as IPCC wanted us to believe in its AR4 report.

        Hope that answers your question.

        Max

      • Lolwot, you write as if “dangerous” were not a relative term. Life is dangerous. Walking across the street is dangerous. Driving a car. Flying in an airplane. Jumping out of said plane in a parachute.

        And yet it’s only the last one that most people won’t do, deeming the possibility of death or injury too high given the non-essential nature of the activity.

        As atmospheric sensitivity trends lower, so does the relative danger. Making an assessment as to whether anything should be done about that is something about which reasonable people might disagree.

      • “I never believed that 3C ECS was “dangerous”, as IPCC wanted us to believe in its AR4 report.”

        So what ECS would you consider dangerous?

        4C? 6C? Come on what number begins to enter the dangerous zone for you? Maybe you can appreciate why I think 2C is dangerous if you consider perhaps what 10C would mean to you.

      • exactly pokerguy, that is what I am getting at. Claiming that 2C ECS isn’t dangerous makes no sense because danger is relative not absolute.

      • Peter Lang

        lolwot, you are clutching at straws.

        The wheels are falling off the CAGW cart. Only a few years ago the ‘father’ of the global warming scare campaign, James Hansen, was telling us that coal trains are ‘death trains’ and the oceans will boil off if man doesn’t stop his evilness. But now we are finding the whole thing was a massive case of group think, herd mentality, self interest, gullible young academics and loony left ideological agendas.

      • Peter Lang

        lolwot,

        So what ECS would you consider dangerous?

        It’s a meaningless question. Unless you also state the damage function, it’s about as meaningless as asking “how long is a kilogram?”

      • lolwot

        You ask what value of 2xCO2 ECS would I consider to be “dangerous”.

        This is a rhetorical question about a hypothetical situation.

        Just off the top of my head I’d say anything above 5C.

        But, of course, I do not mean someone’s WAG that it is 5C, or some model predictions based on hypothetical deliberations. I’d want to see some empirical evidence telling me that a doubling of CO2 in our atmosphere would lead to a 5C increase in the globally and annually averaged land and sea surface temperature. And not in several centuries, but in several decades.

        Hope this answers your rhetorical question.

        Max

      • “Just off the top of my head I’d say anything above 5C.”

        So how much global temperature rise above the current level would you think was dangerous? I assume it’s something more, but not much more, than 5C?

  37. We have new data.

    They show that 2xCO2 ECS is significantly lower than was previously estimated.

    But wait!

    It came out too late for the “AR5 deadline” (“rulz are rulz”, as Mosh wrote).

    So that just means that AR5 will be outdated before it even gets published, right?

    So it can be tossed on the trash heap of scientific history at publication.

    Good to know.

    Max

    • Peter Lang

      Manacker,

      “So that just means that AR5 will be outdated before it even gets published, right?”

      Well, not entirely. Anything that supports 'the cause' will be considered, and presumably a way will be found to include it, just as all previous IPCC reports found a way to include papers that were considered 'relevant', i.e. helped 'the cause'.

      • Pekka

        I posted this to the Chief yesterday but would be interested in your comments

        “Ben Bond-Lamberty and Allison Thomson, terrestrial carbon research scientists at the University of Maryland’s Joint Global Change Research Institute in College Park, conducted the study by stitching together almost 50 years of soil-emissions data — 1,434 data points — from 439 studies around the world. To compare measurements, the researchers accounted for differences between the studies, such as mean annual temperatures and techniques used to gauge carbon dioxide levels. They totalled the data for each year to create a global estimate of soil respiration — the flux of carbon dioxide from the ground into the atmosphere.

        The researchers found that soil respiration had increased by about 0.1% per year between 1989 and 2008, the span when soil measurement techniques had become standardized. In 2008, the global total reached roughly 98 billion tonnes, about 10 times more carbon than humans are now putting into the atmosphere each year. The change within soils “is a slow increase, but the absolute number is so large, even a small percentage increase is quite a bit,” says Bond-Lamberty.

        http://www.nature.com/news/2010/100324/full/news.2010.147.html

        —– —–
        The possible effects of CO2 emitted from soil on the scale envisaged are interesting enough. However, bearing in mind the vast scale of agriculture over the past 1000 years, and that much of the CO2 ‘exhaled’ will have stayed in the atmosphere before disappearing back into sinks (presumably including oceans and soils), it would be interesting to see how this all fits into the climate models and the overall CO2 ‘budgets’.

        tonyb

    • Steven Mosher

      Of course it’s outdated. Every summary of the science is outdated.
      Does this surprise you? It doesn’t surprise me, but then I never use a secondary source as proof of a matter. Do you?

        Of course it’s outdated. Every summary of the science is outdated. Does this surprise you? It doesn’t surprise me, but then I never use a secondary source as proof of a matter. Do you?

        The problem is that a lot of governments do. The whole multi-year cycle of “assessments” turns what should be science into a political circus.

      • Steve Mosher

        For a guy like me, I have to rely upon secondary sources for climate science as this ain’t my field. I listen to guys like Chief and Capt’nDallas as well as Tomas Milanovic for no better reason than their writings don’t set off my BS detector.

        Now when I was early on in this game, reading the RealClimate blog, I observed that the impact of Mt. Pinatubo on global temperatures caused a perceptible global temperature drop. After a couple of years, global temperatures returned to the same baseline as before the eruption. Same was true for the 1998/99 El Nino temperature elevation, where temperatures returned to the same baseline. To me, that temperature behavior appeared to represent: homeostasis. Now, in my line of work, I have some familiarity with homeostasis and this perturbation and return to baseline reminded me of a homeostatic system.

        The next area of climate scientists’ writings were on CO2. I read where in a glass house, CO2 had this or that radiation spectrum, and caused this or that impact on temperatures. All very interesting. Now, again in my line of work, CO2 has been interesting to me for three or more decades and it was evident that a lot of stars had to be aligned to have CO2 impact anything. When I heard of the radiative transfer model, I was a little startled to have such a trace gas and the radiative transfer model mentioned in the same breath. Water, on the other hand, and some blue water experiences, made me very very respectful for water and all its forms. Water as a greenhouse gas, the most important green house gas, kinda made sense as at times I was awash in it.

        So my education and experience with homeostatic systems, as well as with the environmentally trace gas, made me somewhat skeptical of claims such as that CO2 was moving heaven and earth.

        So yes, I rely upon secondary sources and rely upon my BS detector to alert me that something isn’t quite right. As I said, for the above three authors, I’ve found this strategy to help me navigate the climate science minefields.

        I do listen to others, including yourself. People who are regarded as being very smart can package ideas in ways that I haven’t thought of, which is valuable to me. So I listen to secondary sources, cognizant that my knowledge base is limited for the field I choose to learn more about.

        Regards

      • Steven Mosher

        “To me, that temperature behavior appeared to represent: homeostasis. Now, in my line of work, I have some familiarity with homeostasis and this perturbation and return to baseline reminded me of a homeostatic system.”

        Really? Glad you never worked for me as an engineer.

        CO2 is not a trace gas. It is a large portion of the gases in the atmosphere that are relatively opaque to IR.

      • Mosh

        Whether or not you call CO2 a “trace gas” in our atmosphere at 0.04% is a matter of choice.

        That it is a far less potent GHG than H2O at several orders of magnitude greater concentration is pretty obvious.

        Max

      • Steven Mosher

        Manacker, you are smarter than that.
        You know the trace gas argument is BS, just like if I said GCRs could have no effect because they are trace particles. At some point you have to stomp out the stupid arguments on your side if you expect to be welcomed at the debating table. Stop standing up for dopes on your team.

      • Steven Mosher

        Yes. Every summary report is, by definition, “outdated” when it appears.

        But if IPCC are stupid enough not to include this latest report (and the seven other studies, which have shown a much lower 2xCO2 ECS) in their AR5 report, it will be “hopelessly outdated” (and ready for the trash heap) when it appears.

        Max

      • It is interesting to note that at temperatures colder than -40 C, which characterize a significant volume of the atmosphere, CO2 is at least as abundant as H2O, molecule for molecule.
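
        A rough check of that claim (a sketch using the common Magnus approximation for saturation vapour pressure over liquid water; the 250 hPa ambient pressure and the assumption of saturation are illustrative, and saturation over ice would give a somewhat lower number):

        ```python
        import math

        def saturation_vapour_pressure_hpa(t_celsius):
            """Magnus approximation for saturation vapour pressure over liquid water (hPa)."""
            return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

        e_s = saturation_vapour_pressure_hpa(-40.0)
        ambient_hpa = 250.0                    # roughly upper-troposphere pressure (assumed)
        print(round(e_s, 2))                   # ~0.19 hPa
        print(round(e_s / ambient_hpa * 1e6))  # ~760 ppm by molecule count at saturation
        ```

        Since air at those altitudes is usually well below saturation, the actual H2O mole fraction is often a few hundred ppm or less, i.e. comparable to CO2’s ~400 ppm, which is the point being made.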

      • Mosh

        Improve your reading skills.

        Nowhere did I write that the fact that CO2 is a trace gas in our atmosphere means that it is, by definition, insignificant to our climate. Did I?

        It is a trace gas.

        It very likely has an influence on our climate due to its GH properties.

        We can estimate how much this could be, based on laboratory data on CO2 IR absorption characteristics.

        Based on these, we can estimate (Myhre et al.) that a doubling of CO2 could add a forcing of 3.7 Wm-2, and this could result in a calculated global warming of around 1C.

        All the rest is model assumptions on feedbacks and hype.

        OK?

        Max
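
        The two numbers in that comment come straight from the standard expressions; a minimal sketch (the Myhre et al. fit dF = 5.35 ln(C/C0) W/m2, and the usual ~3.2 W/m2 per K Planck-only response, which is a textbook value rather than anything from the Otto et al. paper):

        ```python
        import math

        def co2_forcing(c, c0):
            """Radiative forcing in W/m2 for a CO2 change from c0 to c (ppmv), Myhre et al. fit."""
            return 5.35 * math.log(c / c0)

        doubling = co2_forcing(2 * 280, 280)
        print(round(doubling, 2))         # ~3.71 W/m2 per doubling
        print(round(doubling / 3.2, 2))   # ~1.16 C no-feedback warming, i.e. "around 1C"
        ```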

      • If temperature increases by s degrees when a greenhouse gas doubles, why should it matter whether that gas is a trace gas or 1% of the atmosphere? If it goes from 0.001% to 0.002% the temperature will go up by s degrees. And if it goes from 1% to 2% it will again go up by s degrees.

        Moreover the volume of gas needed to increase from 0.001% to 0.002% is a thousand times less than to get from 1% to 2%. So ironically it is the trace gases that put the planet at risk more than gases like water vapor because it takes vastly more volume of gas to double the average water vapor of the atmosphere than to double a trace gas.

        Those who dismiss certain gases on the ground that they are trace gases have it backwards.
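
        A minimal sketch of that point: under a logarithmic forcing law the warming per doubling is the same whatever the starting share. One caveat not in the comment: the logarithmic form is an empirical fit for CO2 near present-day concentrations, so it should not be taken as exact for an arbitrary gas at arbitrary concentrations.

        ```python
        import math

        def warming_per_change(share_before, share_after, s_per_doubling=2.0):
            """Warming in C for a change in atmospheric share, assuming a log dependence."""
            return s_per_doubling * math.log2(share_after / share_before)

        print(warming_per_change(0.001, 0.002))   # 2.0 C
        print(warming_per_change(1.0, 2.0))       # 2.0 C: same warming per doubling
        ```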

      • Manacker, add another 1 C from the water vapor outgassing that will come along for the ride with the 1.2 C increase due to CO2. Then add another 0.8 C from other GHGs such as CH4 & N2O that always appear with CO2 and the albedo positive feedback, and you have the 3 C ECS. And that is just as strong as the CO2 theory. Pretty simple, and it matches observational evidence.

        Sold !

        WHT

        Naw, Webby, I’m not going to play your silly game and “add in 1C” from the postulated “climate-carbon cycle feedback”, because that is a purely hypothetical construct.

        At present, half of the CO2 emitted by humans is “missing” – presumably going at least in part to the oceans.

        So the net flow is from the atmosphere to the oceans.

        It could be, however, that a major portion of the “missing CO2” is being absorbed by increased plant growth, rather than the ocean.

        We frankly don’t know.

        But you can forget about the “climate-carbon cycle feedback” story. It’s a bit too airy-fairy for me.

        Max

      • Webby

        Albedo feedback?

        The most significant albedo comes from clouds.

        It appears that these decreased over the 1980s and 1990s and then started increasing again this century (Palle et al.).

        We (including you, Webby) don’t know what they will do in the future, because we do not know what makes them do what they do.

        Max

      • Alexej Buergin

        “Mosher: CO2 is not a trace gas.”

        You do not need to open a physics book, just look it up in Wikipedia.

      • Max,

        Half of the CO2 released to date is around 200 gigatons of carbon. We know enough about changes in vegetation and soil to exclude the possibility that a major fraction of that would have gone there. We also know that nothing can stop CO2 from going into the oceans when the atmospheric concentration goes significantly up.

        The fraction of roughly one half staying in the atmosphere is not a law that must remain in force. The value is very much dependent on the history of the releases. If the releases were to go down, less than half of the reduced releases would stay as an addition to atmospheric carbon; if the releases were to grow very rapidly, more than half would remain. The release history just happens to be such that the value has been near 50% for quite a while.
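
        Pekka’s point can be seen in a toy one-box model (a sketch with an illustrative 50-year uptake time constant and made-up emission paths; it is not a real carbon-cycle model): with steadily growing emissions the cumulative airborne fraction sits near one half, and it falls once emissions stop growing.

        ```python
        def airborne_fraction(emissions, tau=50.0):
            """Cumulative airborne fraction for a yearly emissions series (arbitrary units),
            using dC/dt = E - C/tau for the atmospheric excess C (simple 1-year Euler steps)."""
            excess, cumulative = 0.0, 0.0
            for e in emissions:
                excess += e - excess / tau
                cumulative += e
            return excess / cumulative

        years = range(150)
        growing = [1.02 ** t for t in years]                      # ~2 %/yr growth throughout
        declining = [1.02 ** t if t < 100 else                    # growth, then a 3 %/yr decline
                     1.02 ** 100 * 0.97 ** (t - 100) for t in years]

        print(round(airborne_fraction(growing), 2))    # ~0.5 with these illustrative numbers
        print(round(airborne_fraction(declining), 2))  # ~0.35: lower once emissions fall
        ```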

      • I have long pondered this one half factor as well. I can easily verify the factor mathematically as the outcome of a diffusional random walk.

        In a random walk in one direction, say along the depth of the ocean, at every step 1/2 of the walkers will go deeper and 1/2 will move in a shallower direction.

        The outcome is that 1/2 will keep moving back to where they originated from. This eventually turns into a fat-tail response as some fraction go deeper and deeper and will take longer and longer to come back to the surface. Mathematically, one takes a finite window to represent the fraction that can return and you obtain a function that slowly drifts less than 1/2 to represent permanent sequestration.

        This is nothing new and is very common in 1D diffusion problems such as in diffusion of particles into a substrate.

        Incidentally, this also explains the 1/2 of the excess thermal forcing energy that is being sequestered in the ocean as OHC. Look at the data and about 1/2 of the heat that enters the ocean is getting “sunk” or semi-permanently sequestered into the ocean without an ocean surface temperature rise.

        Bingo. We just explained two observational sets of data.

        I have all the details of this derived on several blog posts, but I think I will write a new post dedicated to the 1/2 factor, as it keeps coming up and there are not enough people familiar with the mathematical details of diffusion who appreciate this. We need a Dummy’s Guide to Diffusion pamphlet to walk people through this.

        And I am sure that Chef Hydro will complain of my preening and prattling. So it goes.
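
        For what it is worth, the fat-tail part of that description is easy to see in a bare-bones Monte Carlo (a sketch of a symmetric 1-D walk, not WHT’s dispersive-diffusion model): the fraction of walkers that have not yet stepped back above their starting level decays roughly like 1/sqrt(steps), i.e. very slowly compared with an exponential.

        ```python
        import random

        def still_at_depth(n_steps, n_walkers=20000, seed=1):
            """Fraction of +/-1 random walkers (starting at depth 0) that have not yet
            stepped back above their starting level within n_steps."""
            rng = random.Random(seed)
            count = 0
            for _ in range(n_walkers):
                depth = 0
                for _ in range(n_steps):
                    depth += rng.choice((-1, 1))
                    if depth < 0:       # back above the "surface": this walker has returned
                        break
                else:
                    count += 1
            return count / n_walkers

        for n in (100, 400, 1600):
            print(n, still_at_depth(n))
        # The surviving fraction roughly halves each time n quadruples (a ~n**-0.5 tail),
        # which is the "longer and longer to come back" behaviour described above.
        ```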

      • WHUT,

        You give too much weight to rules of thumb. Rules of thumb are nice as they are often valid over surprisingly wide ranges in practice, but they are still rules of thumb that hold only as long as nothing essential changes.

        In this case the fraction can be easily calculated from emissions and a sequestration model. Whatever the sequestration model, the outcome depends critically on the development of the emissions. Your rule of thumb has been successful only because the emissions have followed a specific path. No reasonable diffusion model can change that conclusion, as long as it’s accepted that the removal rate from the atmosphere is determined by the deviation of the atmospheric concentration from some effective level determined by the earlier sequestration.

        The situation is similar with many other models that you present. They describe some reasonable rules of thumb that are valid up to a point, but are not based on any permanently valid basic principles.

      • Pekka, Fortunately I have worked out the math and can justify my hand waving intuition with some rigor.

      • Steven Mosher,

        “really? glad you never worked for me as an engineer.”

        Like you, I am also glad I never worked for you as an engineer as that would have been a disastrous situation on many fronts.

        “CO2 is not a trace gas. It is a large portion of the gases in the atmosphere that are relatively opaque to IR.”

        At 0.04% of the atmosphere, I regard that as trace; not necessarily inconsequential, just trace. I would expect that for CO2, having such an expected and powerful IR influence, one could measure its signal. Now I know that some claim, through their assumptions, manipulations and calculations, that they can observe the CO2 signal. For now at least, I put those thoughts in the “pending” file.

      • WHUT,

        Math is a tool that transforms input to output. As I’m sure that your output is wrong, I assume that your input is also wrong. You have probably done the math in between correctly.

        In the previous case of the lapse rate you were talking about “virial” something, I don’t remember the exact phrase. My impression was that you didn’t know what you were talking about, but I didn’t consider it interesting enough to dig deeper.

      • Hehe, GIGO.

      • Pekka, no problem. I will add your comments to that post, and see how things pan out over time.

        If it doesn’t work out as a theory, I can use it as a heuristic in my semantic server for delivering environmental models. It is very simple as a heuristic. I can get standard lapse rates, Poisson’s P-V, P-T and V-T relationships, and barometric pressure with zero adjustable parameters for 3 planets. It is all a lookup of gravity, molecular weight, and degrees of freedom.
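
        The “gravity, molecular weight, degrees of freedom” lookup for the dry adiabatic lapse rate is just g / cp with cp = (f/2 + 1) R / M; a minimal sketch for Earth (an illustration, not WHT’s server code):

        ```python
        R = 8.314  # J/(mol K)

        def dry_lapse_rate(g, molar_mass, dof):
            """Dry adiabatic lapse rate in K/km for an ideal-gas atmosphere."""
            cp = (dof / 2 + 1) * R / molar_mass   # specific heat capacity, J/(kg K)
            return g / cp * 1000.0

        # Earth: mostly diatomic N2/O2 (5 active degrees of freedom), M ~ 0.029 kg/mol
        print(round(dry_lapse_rate(g=9.81, molar_mass=0.029, dof=5), 1))   # ~9.8 K/km
        ```

        The observed average (environmental) lapse rate of about 6.5 K/km is lower than this dry value because of moisture and mixing, which is what the rest of this sub-thread is arguing about.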

      • Pekka

        Since we cannot measure with any accuracy how much CO2 is ending up going into the ocean and how much is ending up going into the biosphere (principally plant growth), it is foolish to ASS-U-ME one thing or another.

        What we do know from observation is:

        – the percentage of the CO2 emitted by humans that “remains” in the atmosphere varies from 15% to 90% on a year-to-year basis
        – over a longer period it has been around half
        – this percentage has decreased slightly since Mauna Loa measurements started in 1959, by around 1 percentage point per decade

        IOW the “missing” CO2 has increased by around 5 percentage points from 1959 to today.

        Whether this added amount has gone into a slightly warmer ocean or into increased plant growth is still an open question AFAIK.

        Since studies show that plant growth is enhanced at higher CO2 concentrations and slightly warmer temperatures at the same time that net ocean absorption would theoretically diminish with a slightly warmer ocean, one might surmise that the added amount is going primarily into added plant growth.

        But I do not believe we know the answer to that question.

        Max
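
        For readers wondering where year-by-year percentages like those come from, a minimal sketch (with made-up yearly numbers, not Max’s actual figures): the annual airborne fraction is the atmospheric rise divided by that year’s emissions, using the standard conversion of roughly 2.13 GtC per ppm of CO2.

        ```python
        GTC_PER_PPM = 2.13   # approximate mass of carbon corresponding to 1 ppm of CO2

        def airborne_fraction(ppm_rise, emissions_gtc):
            """Fraction of a year's emissions that shows up as an atmospheric CO2 rise."""
            return ppm_rise * GTC_PER_PPM / emissions_gtc

        # Hypothetical years with identical emissions but different ppm rises
        # (e.g. El Nino vs La Nina years):
        for rise in (0.7, 1.9, 2.8):
            print(rise, round(airborne_fraction(rise, emissions_gtc=9.0), 2))
        # ~0.17, ~0.45, ~0.66: large year-to-year swings around a long-term value
        # near one half, as described in the comment above.
        ```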

      • Pekka, do you have something to say about the flat-out fabrication made by Manacker? My sequestration model works perfectly for explaining this slight decline.
        http://theoilconundrum.blogspot.com/2013/05/airborne-fraction-of-co2-explained-by.html

        So much for me applying a Green’s function approach and other physics as “rules of thumb”.

      • Max

        The chemistry of CO2 in the oceans is well understood. There are quantitative uncertainties due to the varying constitution of the ocean water, but the uncertainties are not too large for making relatively accurate estimates on CO2 sequestration by the oceans. The influence of the warming of the oceans is also well understood. From that understanding we know very reliably that the warming contributes well less than 10 ppm to the present atmospheric concentrations.

        The amount of carbon stored in the vegetation is small enough to make uncertainties from that very small in comparison to the full amounts. The least well known part is probably what has happened to the amount of carbon stored in soil.

        Taking into account what’s known about oceans and what’s known about soils, I don’t think that there’s much doubt about the dominant role of oceans in the long-term sequestration. On the other hand, it’s also known that the short-term variability is mostly due to variability in the net flows of carbon to vegetation and soils. In the variability they dominate and the oceans play a lesser role; in the long-term behavior the weighting is the opposite. There are significant uncertainties in the details of the above, but qualitatively the situation is clear enough.

      • Manacker,
        Here is some more background info and theory on the diffusional model of CO2 sequestration. The model of diffusion takes into account uncertainty and is also known as Warburg diffusion as I have recently learned http://theoilconundrum.blogspot.com/2011/09/derivation-of-maxent-diffusion-applied.html

      • Pekka

        Missed badly with my first shot so here goes again

        —– ——
        Ben Bond-Lamberty and Allison Thomson, terrestrial carbon research scientists at the University of Maryland’s Joint Global Change Research Institute in College Park, conducted the study by stitching together almost 50 years of soil-emissions data — 1,434 data points — from 439 studies around the world. To compare measurements, the researchers accounted for differences between the studies, such as mean annual temperatures and techniques used to gauge carbon dioxide levels. They totalled the data for each year to create a global estimate of soil respiration — the flux of carbon dioxide from the ground into the atmosphere.

        The researchers found that soil respiration had increased by about 0.1% per year between 1989 and 2008, the span when soil measurement techniques had become standardized. In 2008, the global total reached roughly 98 billion tonnes, about 10 times more carbon than humans are now putting into the atmosphere each year. The change within soils “is a slow increase, but the absolute number is so large, even a small percentage increase is quite a bit,” says Bond-Lamberty.

        http://www.nature.com/news/2010/100324/full/news.2010.147.html

        —- —— —-
        The possible effects of CO2 emitted from soil on the scale envisaged are interesting enough. However, bearing in mind the vast scale of agriculture over the past 1000 years, and that much of the CO2 ‘exhaled’ will have stayed in the atmosphere before disappearing back into sinks (presumably including oceans and soils), it would be interesting to see how this all fits into the climate models and the overall CO2 ‘budgets’.
        tonyb

      • WHUT,

        My comment above should tell you what I think about CO2 sequestration.

        When I have commented on rules of thumb, I don’t mean to say that they are worthless. They are actually often very good and useful. When a theory is incomplete, a rule of thumb is often quantitatively more accurate than actual model predictions.

        The case of the lapse rate is also interesting, as the same environmental lapse rate works more widely for the average lapse rate than one might expect. The lapse rate in the tropics is essentially the same as in subtropical areas and even in the Arctic summer; only the Arctic winter is really different, and has a temperature profile that cannot be described by a single lapse rate at all. In no wide area is the average lapse rate close to the dry adiabatic (I believe that it could be observed at a specific time in a specific location, but not in the averages).

        The unexpected extent of validity of a single environmental lapse rate raises the question of its dependence on warming. If the present tropics have almost the same lapse rate as extratropical areas, does warming change that state of affairs? The lack of a clear “hot spot” may, indeed, tell us that it does not.

        As I explain here, I don’t believe that the lack of the hot spot would be an essential blow to the standard understanding. The hot spot is not at all a specific fingerprint of the influence of CO2; it’s a fingerprint of some parts of the present models, but of parts that are known to be incomplete.

      • Tony,

        First of all, from all I have read, major uncertainties remain in the understanding of all components of the carbon cycle. Having major uncertainties does not, however, mean that everything is similarly uncertain. The strongest constraints apply to the long-term trends, i.e., to what happens over periods long enough to smooth out multiyear variability due to El Nino and other similar influences.

        In this trend the contribution of vegetation and soils is surely much smaller than that of oceans, but telling more precisely what it is, is another matter. Several references to research on the carbon cycle can be found in the IPCC reports. The reports do not claim much more than what I wrote in the first paragraph. There are also more recent papers on all the components. Much further research is needed to clarify the details further.

        From the point of view of AGW over the next few decades, what we know is enough. The additional knowledge is not likely to change much in that. What’s a more important question to me is the sequestration over longer periods. A fraction of carbon will remain in the active carbon cycle for long, but how much does that fraction affect the atmospheric concentration after 100, 200, or 500 years? That’s not known well enough, as far as I have been able to interpret the actual science on that. I have seen only papers that present too weakly justified estimates to make me comfortable with their conclusions. The papers tend to overlook some important points, or at least they fail to tell that they have considered all the very important factors.

      • Rare to find any lapse rates in the zone between 9.8 and 6.5 C/km. Many are less than 6.5 which pushes toward isothermal.

        Why does the forbidden zone exist? It has to do with the missing kinetic energy not included in the calculation. This is kinetic energy manifested as virial forces holding the gravitational state. I really have no other explanation and I haven’t seen one given elsewhere.

        Pekka, are you not curious about this aspect of atmospheric physics?

      • Webster, “Rare to find any lapse rates in the zone between 9.8 and 6.5 C/km. Many are less than 6.5 which pushes toward isothermal.”

        Yep. If they can’t find a stable range, they wouldn’t exist. The 50% entropy TOA is required for Ein + Eout. As pressure increases and composition changes, different time frames and internal heat transfer characteristics shift stability requirements to ~33% : 66% to allow for meridional and zonal energy flux. That is why for Earth you have to consider the water-phase greenhouse differently than the dry-gas greenhouse.

        BTW, that is one reason the Golden Ratio is so common in nature: it is a stable range. Call it the Irish Law of Nature: “If it ain’t broke, don’t fix it.”

      • WHUT,

        Having averages of 6.5 or 7.0 and sometimes lower values is hardly possible without higher values as well.

        The physics is unambiguous. When air not saturated by moisture rises smoothly, its temperature drops at a rate close to the adiabatic lapse rate. When air subsides smoothly without mixing with non-subsiding air, its temperature rises a little faster than the adiabatic lapse rate would predict. For subsiding air even the moisture plays no role unless liquid water droplets are present.

        The averages are lower than that, and that must be due to the influence of other phenomena like horizontal mixing.

      • @whut)…

        Rare to find any lapse rates in the zone between 9.8 and 6.5 C/km. Many are less than 6.5 which pushes toward isothermal.

        Why does the forbidden zone exist? It has to do with the missing kinetic energy not included in the calculation. This is kinetic energy manifested as virial forces holding the gravitational state. I really have no other explanation and I haven’t seen one given elsewhere.

        You’re still showing an abysmal ignorance of meteorology! Not climate, you’re not there yet. At lower altitudes the pseudo-adiabat is around 3-4 degrees/km. Once moisture starts to condense as droplets, convection will occur at any lapse rate above that. A lapse rate of around 7 C/km is about as much as humid air can get without convection setting in. You know, hot towers and cyclones in the tropics, frontal systems and thunderstorms in the temperate regions. And it’s actually fairly rare for air at lower altitudes to be all that dry. There’s usually enough mixing that the differences are relatively minor.

        The actual average lapse rate is determined by a complex non-linear process involving evaporation, transport both vertical and horizontal, and precipitation.

      • AK, “You’re still showing an abysmal ignorance of meteorology! Not climate, you’re not there yet. At lower altitudes the pseudo-adiabat is around 3-4 degrees/Km. Once moisture starts to condense as droplets, convection will occur at any lapse rate above that.”

        Yes and no. Webster is trying to derive a lapse rate based on MAXENT. It is actually an interesting approach, but he doesn’t realize that advection of energy also has a limited stable range and that the range changes with density and atmospheric/surface composition. This is basically the same mistake Hansen and Sagan made, leading to all the paradoxes and the overestimation of the GHE: the symmetry of the “surfaces” matters. It is funny that people can see the problems when he does it, but never did when it was done before.

        It is the old Ideal Black Body – Radiant Shell game.

      • Web and Pekka

        Thanks for your theoretical dissertations, but you have not explained why the amount of CO2 emitted by humans that “remains” in the atmosphere has diminished by around 1 percentage point per decade as atmospheric concentration has risen.

        Apparently neither of you know the answer to that question.

        That’s OK, I don’t know the answer either.

        Max

        This lapse rate business is no reason for people like AK to jump on my case and accuse me of “abysmal ignorance of meteorology”. All I am trying to do is find a statistical or mathematical derivation that results in an average lapse rate like the one observed.

        The average lapse rate is considered a standard and so there should be some reasonable explanation.

        Here is the global view of the average

        http://ars.els-cdn.com/content/image/1-s2.0-S003707380300232X-gr2.gif

  38. How does carbon dioxide trap heat?

    • IR radiation coming from the Earth, on its way to space, is intercepted by water and by CO2; it absorbs a photon, becomes energetic, and the energy is converted into velocity. The GHG molecules then collide with other molecules like O2 and N2, warming the local air. A warmer atmosphere sends more IR downward than a cool one.
      Fire a laser across a room and you can’t see the beam, but you can see dust particles twinkling in the beam. CO2 is like IR dust.

      • Carbon dioxide is a real gas, a real gas absorbing photons of heat does not get bounced into motion, it gets hotter and expands. The heat capacity of carbon dioxide is even less than that of oxygen and nitrogen, it can’t store or trap the heat, it releases it practically instantly.

        Where on earth did you get this idea that absorbing a photon of heat, aka thermal infrared aka longwave infrared, aka radiant heat, causes the molecule of carbon dioxide to move?

        Are you confusing this with a photon of visible light which gets bounced by the electrons of oxygen and nitrogen which is how we get our blue sky? (This is electronic transition reflection/scattering.*)

        This is because the electrons of the molecules of nitrogen and oxygen, practically 98% of our atmosphere, absorb the visible light and become more energised and move, still within the confines of the molecules, when the electron returns to ground state it releases the same energy it took in, so we get reflection/scattering. Blue visible light is more energetic than the longer visible and so gets bounced around more often in reflection/scattering.

        Visible light is non-ionising, its energy does not cause the electron to move out of its orbit, compare with ionising radiation which does; uv has both ionising and non-ionising.

        —————–

        *See here for electronic transitions: http://en.wikipedia.org/wiki/Transparency_and_translucency

        UV-Vis: Electronic transitions. In electronic absorption, the frequency of the incoming light wave is at or near the energy levels of the electrons within the atoms which compose the substance. In this case, the electrons will absorb the energy of the light wave and increase their energy state, often moving outward from the nucleus of the atom into an outer shell or orbital.

        The atoms that bind together to make the molecules of any particular substance contain a number of electrons (given by the atomic number Z in the periodic chart). Recall that all light waves are electromagnetic in origin. Thus they are affected strongly when coming into contact with negatively charged electrons in matter. When photons (individual packets of light energy) come in contact with the valence electrons of atom, one of several things can and will occur:

        1. An electron absorbs all of the energy of the photon and re-emits it with different color. This gives rise to luminescence, fluorescence and phosphorescence.
        2. An electron absorbs the energy of the photon and sends it back out the way it came in. This results in reflection or scattering.
        3. An electron cannot absorb the energy of the photon and the photon continues on its path. This results in transmission (provided no other absorption mechanisms are active).
        4. An electron selectively absorbs a portion of the photon, and the remaining frequencies are transmitted in the form of spectral color.

        ——————–

        2. is what happens to visible light in the atmosphere, gets bounced back out.

        3. is what happens to visible light in the ocean, water does not absorb it but transmits it through.

        Visible light is not capable of moving the whole molecule into vibrational mode which is what it takes to heat up land and water, it isn’t big enough or powerful enough, in water it can’t even get in to play with the electrons.

        4. is what happens in photosynthesis, the plant absorbs blue and red to convert to chemical energy, not heat energy, in the creation of sugars, and reflects back green.

        Visible light, the shortwaves of the AGWScienceFiction’s comic cartoon KT97 and ilk Greenhouse Effect Illusion energy budget.., is not physically capable of converting land and water to heat, that’s why in traditional science it is called Light and not Heat.

        Heat, longwave infrared, is bigger and more powerful, it moves the whole molecule into vibration which is kinetic energy which is heat.

        Real gases expand when heated and so become lighter than air and rise, when they cool off releasing their heat they condense and sink because heavier than air under gravity.

        Radiant heat energy does not cause carbon dioxide to move, but to expand where it is. It would move if that expansion continued and it became lighter than air (it is one and a half times heavier than air at the same temperature); however, it has a very low heat capacity, and hit by a photon of upwelling longwave waste heat it would absorb it and release it instantly.

        This is called convection. Expanded gases create areas of low pressure and condensed gases create areas of high pressure, which make them lighter or heavier than air under gravity, and so they move. That is how we get our winds, which are convection currents. Convection currents are created by differential heating of volumes of air, mainly the real gases nitrogen and oxygen, when volumes of high pressure (cold, dense, heavy) sink, flowing into volumes of low pressure (hot, expanded, light). Hot air rises, cold air sinks. Winds flow from high to low.

        AGWSF fisics does not have real gases which expand when heated and condense when cooled, because its molecules are ideal gas without volume and so have nothing to expand (nor any other real gas properties like mass and attraction), and so nothing for gravity to work on.

        AGW gases are imaginary gases in empty space. There is no sound in the AGWSF’s Greenhouse Effect Illusion world.

        Carbon dioxide physically cannot trap or store heat. Compare with water, which has a very high heat capacity and can trap and store heat; it takes huge amounts of heat before it changes temperature and then evaporates (phase change), so it takes longer to heat up and longer to release its heat and cool down.

      • David Springer

        DocMartyn is correct. When we look down on the atmosphere from space we see a deficit of photons at the absorption frequencies of CO2 and a surplus at all other frequencies. CO2 traps photons of a specific frequency and then thermalizes adjacent molecules (mostly nitrogen), which then emit a continuous spectrum. The greenhouse trick is that the absorption by CO2 is of upwelling radiation while the emittance of the thermalized neighbors goes in all directions. The end result is a restriction in the flow of radiation from surface to space, similar to what happens when you partially block the flow of water in a gate at the base of a dam. The restriction causes the water to rise higher behind the dam, which increases the pressure at the entrance to the gate, which forces more water through the gate until the water stops rising. Pressure in fluid flow is analogous to temperature in radiative flow.

      • “Carbon dioxide is a real gas, a real gas absorbing photons of heat does not get bounced into motion, it gets hotter and expands”

        The effect of a photon being absorbed by a single molecule is to increase its motion. You do this to a number of molecules and you have an increase in temperature. This is what temperature means.

        “The heat capacity of carbon dioxide is even less than that of oxygen and nitrogen, it can’t store or trap the heat, it releases it practically instantly.”

        Who is talking about ‘storing heat’, the photon is thermalized, giving the water or carbon dioxide molecule energy, causing it to move faster. It will also collide with other molecules. Overall, there will be a small rise in temperature and this manifests itself as a change in the IR emission of the gas mixture.

        “Where on earth did you get this idea that absorbing a photon of heat, aka thermal infrared aka longwave infrared, aka radiant heat, causes the molecule of carbon dioxide to move?”

        Physics lessons.

        “Are you confusing this with a photon of visible light which gets bounced by the electrons of oxygen and nitrogen which is how we get our blue sky?”

        No.

        “This is because the electrons of the molecules of nitrogen and oxygen, practically 98% of our atmosphere, absorb the visible light and become more energised and move, still within the confines of the molecules, when the electron returns to ground state it releases the same energy it took in, so we get reflection/scattering. Blue visible light is more energetic than the longer visible and so gets bounced around more often in reflection/scattering.”

        The atmosphere is mostly transparent to visible light, which is why it can get down to the surface and bounce off solid matter. Such is the amount of light coming down that damn near everything has evolved sensors to quantify light levels by quite complex optics. Our eyes work because light actually makes it down to the surface.
        Light scattering of sun light by the atmosphere is due to Rayleigh scattering, about 22% of blue light is scattered and about 8% of red light is scattered, thus we get more red light directly and more blue indirectly, giving us a ‘blue’ and not yellow sky. Rayleigh scattering is not due to absorption/emission of photons, but the more subtle interaction of photons with the electron field.
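
        A minimal sketch of the wavelength dependence behind those figures (an illustration only; the 22% and 8% numbers quoted above also depend on optical depth and geometry, which this does not model): Rayleigh scattering goes as 1/wavelength**4, so blue is scattered several times more strongly than red.

        ```python
        def rayleigh_ratio(lambda_blue_nm, lambda_red_nm):
            """Ratio of Rayleigh scattering cross-sections, blue relative to red."""
            return (lambda_red_nm / lambda_blue_nm) ** 4

        print(round(rayleigh_ratio(450, 650), 1))   # ~4.4: blue scattered roughly four
                                                    # times as readily as red, hence the
                                                    # blue sky and red sunsets
        ```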

      • What DocMartyn writes is good enough for understanding the role of CO2, but it’s perhaps worthwhile to be a little more accurate.

        When CO2 absorbs an IR photon it goes from vibrational ground state to vibrational excited state. The additional motion is not motion of the molecule but vibration of atoms within the molecule. Each molecule of air collides with other molecules roughly once in a nanosecond. Such a collision has a high likelihood of releasing the excitation of the molecule. The energy from deexcitation adds to the kinetic energy of the two molecules (CO2 and the colliding molecule).

        O2 and N2 have little influence on any radiation other than the most energetic UV from the sun. They neither absorb nor emit significantly otherwise. Oxygen is a more important absorber of solar radiation, and a more important GHG, as ozone than as O2.

      • DocMartyn | May 21, 2013 at 2:21 pm | “Carbon dioxide is a real gas, a real gas absorbing photons of heat does not get bounced into motion, it gets hotter and expands”

        The effect of a photon being absorbed by a single molecule is to increase its motion. You do this to a number of molecules and you have an increase in temperature. This is what temperature means.

        You said velocity:

        “..is intercepted by water and by CO2; it absorbs a photon, becomes energetic, and the energy is converted into velocity. The GHG molecules then collide with other molecules like O2 and N2, warming the local air.”

        This comes direct from the AGWScienceFiction claim that carbon dioxide, nitrogen and oxygen are ideal gas without mass, so they travel at vast speeds through empty space colliding with each other and bouncing off, so “thoroughly mixing”.

        As I posted to Dave Springer above, heat does not convert to velocity in a real gas, it converts to expansion which makes it less dense and so lighter under gravity. When it releases that heat, which carbon dioxide with its low heat capacity does practically instantly, it condenses again.

        Real gases expand and condense because they have real volume. They do not travel at great velocities through empty space under their own molecular momentum, because the real volume of the other real gases around constrains their movement. That’s how we get sound in the real world. There is no sound in the empty space atmosphere of AGWSF, because it doesn’t have real gases with volume which constrain the movement of gases.

        “The heat capacity of carbon dioxide is even less than that of oxygen and nitrogen, it can’t store or trap the heat, it releases it practically instantly.”

        Who is talking about ‘storing heat’, the photon is thermalized, giving the water or carbon dioxide molecule energy, causing it to move faster. It will also collide with other molecules. Overall, there will be a small rise in temperature and this manifests itself as a change in the IR emission of the gas mixture.

        Heat causes real gas molecules to vibrate where they are, not to move faster in space, velocity.

        “Where on earth did you get this idea that absorbing a photon of heat, aka thermal infrared aka longwave infrared, aka radiant heat, causes the molecule of carbon dioxide to move?”

        Physics lessons.

        You should ask for your money back.

        “This is because the electrons of the molecules of nitrogen and oxygen, practically 98% of our atmosphere, absorb the visible light and become more energised and move, still within the confines of the molecules, when the electron returns to ground state it releases the same energy it took in, so we get reflection/scattering. Blue visible light is more energetic than the longer visible and so gets bounced around more often in reflection/scattering.”

        The atmosphere is mostly transparent to visible light, which is why it can get down to the surface and bounce off solid matter. Such is the amount of light coming down that damn near everything has evolved sensors to quantify light levels by quite complex optics. Our eyes work because light actually makes it down to the surface.

        ? I didn’t say it didn’t get to us at the surface. AGWScienceFiction however says that the Sun’s radiant heat, longwave infrared, doesn’t get through TOA because there’s some, unknown to traditional science, “invisible barrier like the glass of a greenhouse at TOA preventing longwave infrared from the Sun from entering, but letting visible and shortwaves either side through, and, this invisible barrier like the glass of a greenhouse is what traps upwelling longwave infrared from the heated surface of the Earth and stops it escaping”.

        Light scattering of sun light by the atmosphere is due to Rayleigh scattering, about 22% of blue light is scattered and about 8% of red light is scattered, thus we get more red light directly and more blue indirectly, giving us a ‘blue’ and not yellow sky. Rayleigh scattering is not due to absorption/emission of photons, but the more subtle interaction of photons with the electron field.

        Please describe this.
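
        For reference, the wavelength dependence behind those scattering percentages is the Rayleigh 1/λ^4 law; a small numeric sketch in Python (the 450 nm and 650 nm wavelengths are illustrative choices, not figures from the comment):

```python
# Rayleigh scattering strength scales as 1/lambda^4; compare blue and red light.
BLUE_NM = 450.0   # illustrative "blue" wavelength, nm
RED_NM = 650.0    # illustrative "red" wavelength, nm

ratio = (RED_NM / BLUE_NM) ** 4
print(f"blue is scattered ~{ratio:.1f}x more strongly than red")
# ~4.4x, broadly consistent with a few tens of percent of blue being scattered
# out of the direct beam while only a few percent of red is.
```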

      • The Skeptical Warmist (aka R. Gates)

        Myrrh said:

        “Carbon dioxide is a real gas, a real gas absorbing photons of heat does not get bounced into motion, it gets hotter and expands.”
        ___

        Your knowledge of physics seems just enough to make you confused. You must think about the GH effect on a molecule by molecule basis in terms of the actual physical action. The individual molecule does not expand but actually vibrates when absorbing certain frequencies of energy. The LW absorption characteristics of CO2 are due to the triatomic nature of the molecule. Suggest you refer to this:

        http://www.phy.davidson.edu/stuhome/jimn/co2/pages/co2theory.htm

        The GH effect is about altering the thermal gradient of the atmosphere when looking at the bottom line effect– making the thermal gradient between surface and space less steep, and thus, energy flows less readily from surface to space.
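
        To put a rough number on the “less steep gradient” picture, here is a minimal single-layer grey-atmosphere energy-balance sketch in Python (the 240 W/m^2 absorbed solar flux and the emissivity values are illustrative assumptions, not figures from the comment):

```python
# Minimal single-layer "grey atmosphere" energy balance: the layer is transparent
# to sunlight but absorbs a fraction eps of surface infrared and re-emits half
# of it back downward.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 240.0         # assumed absorbed solar flux, W m^-2

def surface_temp(eps):
    # Balance at the top of the atmosphere gives Ts^4 = S / (sigma * (1 - eps/2)).
    return (S / (SIGMA * (1.0 - eps / 2.0))) ** 0.25

for eps in (0.0, 0.5, 0.8):
    print(f"IR emissivity {eps:.1f}: surface ~{surface_temp(eps):.0f} K")
# eps = 0 gives ~255 K; a more absorbing layer supports a warmer surface while
# the same flux still escapes to space.
```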

        Also, strongly suggest you read all 8 parts of “An Insignificant Trace Gas?” if you really want to dig down and understand what’s going on:

        http://scienceofdoom.com/roadmap/co2/

        Best general overview of CO2’s “greenhouse” physics on the web– period.

      • The Skeptical Warmist (aka R. Gates) | May 23, 2013 at 12:44 pm | Myrrh said:

        “Carbon dioxide is a real gas, a real gas absorbing photons of heat does not get bounced into motion, it gets hotter and expands.”
        ___

        Your knowledge of physics seems just enough to make you confused. You must think about the GH effect on a molecule by molecule basis in terms of the actual physical action. The individual molecule does not expand but actually vibrates when absorbing certain frequencies energy. The LW absorption characteristics of CO2 are due to the triatomic nature of the molecule. Suggest you refer to this:

        http://www.phy.davidson.edu/stuhome/jimn/co2/pages/co2theory.htm

        You don’t understand what I’m saying because you don’t have real gases, but the imaginary ideal; that link does not show a picture of your molecules because instead you have massless hard dots of nothing travelling through empty space by their own molecular momentum bouncing off other ideal gases without attraction or volume (and so the AGWScienceFiction meme explanation for “carbon dioxide rapidly diffuses into the atmosphere and mixes thoroughly and can’t be unmixed without a great deal of work having to be done, like separating ink from the water it was poured into”).

        The picture you linked to is of real gases, and note that the vibrational movement caused by heat, longwave infrared, is how molecules get heated up; rub your hands together, that is mechanical energy doing the same work in heating your skin. Visible light from the Sun cannot do this, it works on the tinier electronic transition level and is not powerful enough to move whole molecules into vibration, which is the internal kinetic energy of a molecule; the more heat applied the faster this movement.

        When a molecule of real gas gets heated it not only increases vibration, it expands, and when it gets cooled it slows down and also condenses, because it has volume, because it is a something – look at the pictures again: these are not massless dots of nothing, which is the imaginary ideal gas. In real gases expanding, their volumes increase, they become less dense so they weigh less under the pulling power of gravity, and becoming lighter than air they will rise. In condensing their volumes decrease so they weigh more under the pulling power of gravity, and becoming heavier than air they will sink.

        Air is a fluid real gas, mainly nitrogen and oxygen, weighing 14 lb per square inch; that is our atmosphere weighing down on us, exerting great pressure on us, a ton on our shoulders.
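
        The “ton on our shoulders” figure is just pressure times area; a quick check in Python (the ~150 square inch shoulder/upper-body area is an assumed round number):

```python
# Rough check of "a ton on our shoulders": sea-level pressure times an area.
PSI = 14.7             # sea-level air pressure, lb per square inch
SHOULDER_IN2 = 150.0   # assumed shoulder/upper-body area, square inches

load_lb = PSI * SHOULDER_IN2
print(f"~{load_lb:.0f} lb resting on that area")
# ~2,200 lb, i.e. roughly a ton, which is where the figure comes from.
```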

        It is the weight of volumes of heated air which creates areas of low pressure in this fluid: because the molecules have expanded, becoming less dense and taking up more space, they weigh less and exert less pressure on us, there are fewer of them in the space, and so their combined lighter volume forms an area of low pressure. Areas of high pressure are heavier, colder, denser volumes of individual molecules which combined create areas of high pressure in this ocean of real gas, because as gases cool they condense and, taking up less room in the same amount of space, there are more of them, so their combined denser volume is much heavier, exerting greater pressure on us. High heavier, low lighter under gravity.

        This is how we get our winds: when a volume of air, a parcel of air, is heated, the individual molecules expand, becoming lighter than air, and rise; each individual molecule in the parcel takes up more space, so rising as a combined volume it takes the heat away from the surface into the colder heights, where it releases its heat and condenses again as the individual molecules condense, becoming heavier than air, and so sink, displacing the rising volume of hot air.

        This is how we get: Hot air rises, Cold air sinks, and, Winds flow from High to Low.

        And how we get our major wind systems from the intense heating by the direct longwave infrared from the Sun at the equator heating the land and water which heats the volume of air above it and this rises and flows spontaneously to the intense cold of the poles where it releases its heat and condenses again and so becoming heavier it sinks and flows back to the low pressure of the hot equator.

        When combined volumes of hot molecules rise being lighter than air under gravity it is because their individual volumes expand, individually becoming lighter than air. The colder heavier combined volumes of molecules around this will sink displacing the lighter because the individual volumes of the molecules have condensed, becoming individually heavier than air.

        So, for example, water vapour and methane are lighter than air and always spontaneously rise in air, carbon dioxide which is one and a half times heavier will always spontaneously sink in air, and will not readily rise in air.

        AGWSF has by sleights of hand manipulating physics (by giving the properties of one thing to another, by taking out whole processes, by taking laws out of context, by changing meanings and playing on words and so on) created a fictional fisics to confuse you, generic, it was deliberately introduced into the general education system.

        AGWSF has in its basics taken out all the real gas properties and processes of the natural world around us and reduced our atmosphere to empty space populated by the imaginary no mass ideal gas miles apart from each other which has no individual volume, in other words, it has taken out the whole tangible volume of our real gas atmosphere – your AGW Earth goes straight from the surface to empty space – and has taken out that heat energy converts to expansion of this volume, because, your gases don’t have individual volume to expand or condense.

        Some differences between ideal gas and real gas here: http://wiki.answers.com/Q/What_is_the_difference_between_an_ideal_gas_and_a_real_gas

        All your basic physics is imaginary. It is not easy to spot these in the confusion of arguments …

        You cannot spot them if you do not know what the real world gases are and can do and not do.

        AGWSF has done the same with electromagnetic energy, its fake fisics meme there is “all electromagnetic energy is the same and all create heat on being absorbed”.

        There are three points here immediately. The first that it is not all the same, that’s why we give them different names and put them into different categories because in traditional science we know their properties and processes by their differences and similarities.

        The second is that they do not all create heat on being absorbed, for example visible light converts to chemical energy, sugars, in photosynthesis not heat, and, electrical energy not heat energy in stimulating nerve impulses in sight. This is to deliberately confuse that “visible light, shortwave, from the Sun heats the Earth’s surface and no longwave infrared [direct radiant heat] from the Sun gets through TOA”.

        The third is a play on the word “absorbed” – as used for example in the AGWSF claim that “visible light is absorbed by the water in the ocean and blue visible light goes further before it is absorbed so heats the water deeper” – there are different reasons for attenuation in the real world ocean, physical absorption of energy by a water molecule being one of them, but as water is a transparent medium to visible light and does not absorb it but transmits it through unchanged, so cannot be heating the water, the “AGWSF absorbed” is misdirection by play on word meaning.

        The GH effect is about altering the thermal gradient of the atmosphere when looking at the bottom line effect– making the thermal gradient between surface and space less steep, and thus, energy flows less readily from surface to space.

        What? Oh, this is the “invisible dam” meme? So this invisible barrier unknown to traditional science which ‘like the glass of a greenhouse magically stops the great thermal energy from the millions degree hot Sun at TOA’ is the same invisible dam unknown to traditional science which squashes your empty space atmosphere, except you have nothing to squash because your imaginary ideal gases are not subject to gravity and have diffused long ago into outer space…

        Also, strongly suggest you read all 8 parts of “An Insignificant Trace Gas?” if you really want to dig down and understand what’s going on:

        http://scienceofdoom.com/roadmap/co2/

        Best general overview of CO2’s “greenhouse” physics on the web– period.

        Sadly, for real science and so for all of us, Spencer doesn’t have a clue, assuming he’s genuinely ignorant of traditional physics. He keeps repeating the memes and does it with all the conviction of a high priest with his incantations of complex mathematical formulae – luckily for me I was only reading the third discussion on topics he’d introduced when someone, an engineer, i.e. an applied scientist, picked him up on his AGWSF meme “molecules travel at great speeds” – the engineer pointed out that molecules can’t in the real atmosphere; they move at great speeds but don’t travel at great speeds.

        So I realised then that Spencer didn’t know what he was talking about – it took me a while longer through other discussions to find that he was confusing ideal gas with real gases, because AGWSF had changed the physics and created its own version, a fantasy world that is impossible in the real world, this side of the looking glass.

        You have no sound in your imaginary AGWSF world because you have no volume of individual molecules.

        I haven’t found again the original engineer’s comment I recall reading, but I’ve found a similar one here, maybe from the same engineer..:

        http://www.drroyspencer.com/2010/06/faq-271-if-greenhouse-gases-are-such-a-small-part-of-the-atmosphere-how-do-they-change-its-temperature/

        Spencer says: “When the radiatively active molecules in the atmosphere — mainly water vapor, CO2, and methane — are heated by infrared radiation, even though they are a very small fraction of the total, they are moving very fast and do not have to travel very far before they collide with other molecules of air…that’s when they transfer part of their thermal energy to another molecule. That transfer is in the form of momentum from the molecule’s mass and its speed.

        “That molecule then bumps into others, those bump into still more, and on and on ad infinitum.”
        ..
        “2) at room temperature, each molecule is traveling at a very high speed, averaging 1,000 mph for heavier molecules like nitrogen, over 3,000 mph for the lightest molecule, hydrogen, etc.”

        Kevin says:
        June 21, 2010 at 6:43 PM

        “You wrote:

        ““2) at room temperature, each molecule is traveling at a very high speed, averaging 1,000 mph for heavier molecules like nitrogen, over 3,000 mph for the lightest molecule, hydrogen, etc.”

        “Respectfully, I believe a more accurate statement might be:

        “2) at room temperature, each molecule is vibrating in place at a very high speed, averaging 1,000 mph for heavier molecules like nitrogen, over 3,000 mph for the lightest molecule, hydrogen, etc.”

        “There is no linear motion imparted to any molecule when heated, for a simple example; does your teakettle zoom off of your stove at 1,000 mph when you make your tea in the morning ?”
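
        For reference, the speeds quoted in that exchange can be checked against the Maxwell–Boltzmann root-mean-square speed, v_rms = sqrt(3kT/m); a minimal sketch in Python (the 293 K room temperature is an assumption):

```python
# Check of the quoted molecular speeds using v_rms = sqrt(3*k*T/m).
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
AMU = 1.66053907e-27    # atomic mass unit, kg
T = 293.0               # assumed room temperature, K

for name, mass_amu in [("N2", 28.0), ("O2", 32.0), ("CO2", 44.0), ("H2", 2.0)]:
    v = math.sqrt(3 * K_B * T / (mass_amu * AMU))   # m/s
    print(f"{name}: {v:7.0f} m/s  (~{v * 2.23694:5.0f} mph)")
# N2 comes out near 510 m/s (~1,100 mph) and H2 near 1,900 m/s (~4,300 mph),
# the same order as the figures quoted; this is the speed between collisions,
# with a mean free path of only ~70 nm at sea level, not bulk travel.
```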

        And that really is how it looks to someone who has basic physics on this from traditional science; the extrapolations are funny, but I have found a more serious way to explain what Kevin the applied scientist means:

        Read the following to understand why without molecular volume your AGWSF world has no sound.

        http://www.mediacollege.com/audio/01/sound-waves.html

        “Before you learn how sound equipment works it’s very important to understand how sound waves work. This knowledge will form the foundation of everything you do in the field of audio.

        “Sound waves exist as variations of pressure in a medium such as air. They are created by the vibration of an object, which causes the air surrounding it to vibrate. The vibrating air then causes the human eardrum to vibrate, which the brain interprets as sound”

        “Sound waves travel through air in much the same way as water waves travel through water. In fact, since water waves are easy to see and understand, they are often used as an analogy to illustrate how sound waves behave.”

        “Note that air molecules do not actually travel from the loudspeaker to the ear (that would be wind). Each individual molecule only moves a small distance as it vibrates, but it causes the adjacent molecules to vibrate in a rippling effect all the way to the ear.

        “Now here’s the thing: All audio work is about manipulating sound waves. The end result of your work is this series of high and low pressure zones. That’s why it’s so important to understand how they work – they are the “material” of your art.”

        ——————————————————————————–

        The real gas air is a combined volume of molecules with real volume, a fluid medium, gases and liquids are fluids. That’s how sound travels through the heavy voluminous fluid medium of air, as does energy through water, because the medium doesn’t move, its volume is constrained by the pull of gravity.

        The water in the ocean doesn’t move, that would be currents, the energy moves through the water as waves as the combined volumes of water molecules are vibrated and pass that increased vibration on before returning to their ground state. Think Mexican wave.

        It is the combined volume of the real gases under gravity which constrain the individual molecules preventing them from linear motion, preventing the heat energy converting to velocity as below in the ocean so above in the air.

        Because the molecules do not move linearly in empty space but are constrained in a medium which is going nowhere very fast…

    • David Springer | May 21, 2013 at 1:40 pm | DocMartyn is correct.

      No he isn’t – he said –

      DocMartyn | May 20, 2013 at 8:43 pm | ” IR radiation coming from the Earth, on its way to space, is intercepted by water and by CO2, it absorbs a photon, becomes energetic, and the energy is converted into velocity.”

      Velocity. He said Velocity. And Pekka knows very well he said velocity.

      When we look down on the atmosphere from space we see a deficit of photons at the absorption frequencies of CO2 and a surplus at all other frequencies. CO2 is trapping photons of a specific frequency

      Carbon dioxide can’t trap heat. It has a very low heat capacity and so releases its heat practically instantly. It cannot trap heat.

      Carbon dioxide absorbs heat.

      and then thermalize adjacent molecules (mostly nitrogen) which then emits a continuous spectrum.

      DocMartyn said this was by the heat converting to velocity, was converted to velocity so the carbon dioxide was energised to move in space, from one place to another.

      It is not converted to velocity, it is converted to expansion. Heated gases expand. They do not go flying off at great speeds bouncing into other gases.

      Carbon dioxide re-emits the heat it imbibed, and as it emits that heat it condenses again. So how does it thermalize the adjacent molecules of nitrogen and oxygen? DocMartyn said the heat had converted to velocity and sent the molecule of carbon dioxide speeding into the molecules of oxygen and nitrogen, colliding and transferring that velocity to them, thus heating them up.

      What do you mean by “continuous spectrum”?

      The greenhouse trick is that the absorption by CO2 is of upwelling radiation and the emittance of thermalized neighbors goes in all directions. The end result is a restriction in the flow of radiation from surface to space similar to what happens when you partially block the flow of water in a gate at the base of a dam. The restriction causes the water to rise higher behind the dam which increases the pressure at the entrance to the gate which forces more water through the gate until the water stops rising. Pressure in fluid flow is analogous to temperature in radiative flow.

      What? Oh, is this the magic invisible container like the glass of a greenhouse around the AGWSF Earth? So it’s an invisible dam?

      When real gases are heated they expand. Which means they become less dense under gravity, which means they are lighter, because gravity is what gives gases weight. When heated gases expand and become lighter they rise in air. When a volume (packet) of gases is heated at the surface and becomes less dense and rises it forms an area of low pressure – because, now less heavy they are not exerting as much pressure onto the Earth as they did when they were heavier before they expanded. This is convection.

      Convection is the transfer of heat from one place to another in fluids.

      This sets up convection currents, called winds.

      As the volume of hot lighter expanded real gas rises, volumes of colder heavier condensed real gas, which form areas of high pressure, sink and flow into the areas of low pressure. Cold heavy air sinks and flows down into the volume of hot low pressure areas of lighter gases.

      Note “flow”. Gases and liquids are fluids. We have a heavy voluminous ocean of fluid around the Earth, weighing down 14 lbs/sq inch, a stone per square inch, a ton on your shoulders; that is how heat is transferred in a fluid.

      Hot air rises, cold air sinks. Winds flow from high to low. This is basic bog standard meteorology.

      That’s how we get our winds which are convection currents, as we get convection currents in the ocean when volumes of water are heated and so lighter rise and colder heavier volumes of water sink flowing beneath into the areas of rising lighter water.

      As above so below… This is what is happening at the equator where the Earth’s land and water are being heated intensely by the radiant heat, longwave infrared aka thermal infrared, direct from the Sun which is millions of degrees centigrade, it’s a blazing Star. This invisible radiant longwave infrared heat from the Sun is the same as we feel radiating out to us from a camp fire. Direct from the Sun it is the Sun’s thermal energy, heat energy, in transfer via radiation. Which is thermal infrared, aka longwave infrared, aka radiant heat. There is no invisible unknown to science barrier to it at TOA. This radiant heat, longwave infrared, reaches us in around 8 minutes direct from the Sun, travelling in straight lines.

      We feel this heat, we cannot feel shortwaves of uv, visible and infrared because they are not hot. They are not heat. Radiated heat is longwave infrared.

      This longwave radiation direct from the Sun travelling in straight lines heats up the land and water at the equator, which heat up the volumes of air above; this causes the real gas air, which is mainly nitrogen and oxygen, to expand and so, becoming lighter than air, to rise, and, since heat always flows spontaneously from hotter to colder, these hot volumes of air flow to the cold poles, where their heat is sucked out by the cold and they condense again, becoming heavy, and sink and flow back to the equator; the cold heavy high pressure volumes sink and flow back to the hot light low pressure areas of the equator.

      Throw in the Earth’s spin and we get the Coriolis effect, our great wind systems which do not cross hemispheres. If the Earth didn’t spin the heat would rise and flow straight to the poles and cold would sink and flow straight back to the equator.

      Carbon dioxide is a real gas as the real gases oxygen and nitrogen, it expands when heated becoming lighter and condenses when chilled becoming heavier.

      It is the pulling power of gravity keeping these gases around the Earth by their property of expanding, becoming less dense, when heated so lighter under gravity and condensing, becoming more dense, when cooled so heavier under gravity; they create the pressure by their weight. It is an open system, not an enclosed container like a tyre/tire.

      Thermal infrared, aka longwave infrared, aka radiant heat, does not convert to velocity in carbon dioxide, it converts to expansion. Carbon dioxide has a very low heat capacity, it heats up practically instantly and releases that heat practically instantly, it doesn’t go anywhere to do this.

      • Imagine we have a column of CO2 from the surface of the Earth all the way out to space. The Earth’s 300K SURFACE is pumping out IR through the column of CO2, and then out into the sink of space at a temperature of 4K.
        Note that the overall flux is unidirectional.
        Note that the Earth’s gravity well is unidirectional.

        The photons rising up from the Earth have a vector, upwards, and momentum.
        The passage and attenuation of radiative flux by absorption of photons by the gas results in radiative pressure; first postulated by Maxwell and independently by Bartoli. Radiation pressure was first demonstrated experimentally in 1901 by Nichols and Hull.

        The impact on the gas, where we have a unidirectional photon flux and a gravity well, is to raise the temperature of the column.

        The transfer of momentum to an electron’s transition on the absorption of a photon alters the overall momentum of the whole molecule, and is both unidirectional and against the gravity field. Whereas emission of photons from gas molecules in an excited state is multi-directional due to the ability of the gas molecules to tumble whilst in an excited state, i.e. tumbling occurs during the time between photon absorption and photon emission.
        The overall effect of having an overall flux of photons traveling in one direction, against a gravity well, is to cause heating, as individual gas molecules are raised against the gravity well by the radiative pressure.
        Thermodynamically, the presence of the gas is causing an increase in disorder in the system as discrete IR centered around 10 microns is absorbed and much lower frequencies are emitted, due to heating.

      • DocMartyn | May 23, 2013 at 9:58 am | Imagine we have a column of CO2 from the surface of the Earth all the way out to space. The Earth’s 300K SURFACE is pumping out IR through the column of CO2, and then out into the sink of space at a temperature of 4K.
        Note that the overall flux is unidirectional.
        Note that the Earth’s gravity well is unidirectional.

        Gosh, I don’t know where to start. Do you understand what you’ve said in this post well enough to put it into English so we can all join in?

        What do you mean by “gravity well”?

        The photons rising up from the Earth have a vector, upwards, and momentum.
        The passage and attenuation of radiative flux by absorption of photons by the gas results in radiative pressure; first postulated by Maxwell and independently by Bartoli. Radiation pressure was first demonstrated experimentally in 1901 by Nichols and Hull.

        And what is that pressure on the surface of one molecule of carbon dioxide compared with one molecule of nitrogen and one molecule of oxygen when “Radiation pressure is the pressure exerted upon any surface exposed to electromagnetic radiation” and which has been described as “feeble” and “negligible for everyday objects. ..one could lift a U. S. penny with laser pointers, but doing so would require about 30 billion 1-mW laser pointers.”?
        ( http://en.wikipedia.org/wiki/Radiation_pressure http://en.wikipedia.org/wiki/Light#Light_pressure )
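
        For scale, the radiation pressure exerted by the upwelling infrared flux itself can be estimated as flux divided by the speed of light; a rough order-of-magnitude sketch in Python (the 390 W/m^2 surface upwelling longwave figure is an assumed round number):

```python
# Order-of-magnitude estimate of the radiation pressure of upwelling infrared.
C = 2.998e8        # speed of light, m/s
FLUX = 390.0       # assumed upwelling longwave flux near the surface, W/m^2
P_ATM = 101325.0   # sea-level atmospheric pressure, Pa

p_rad = FLUX / C   # pressure for complete absorption, Pa
print(f"radiation pressure: {p_rad:.2e} Pa")
print(f"fraction of atmospheric pressure: {p_rad / P_ATM:.1e}")
# ~1.3e-6 Pa, about 1e-11 of atmospheric pressure: mechanically negligible.
```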

        The impact on the gas, where we have a unidirectional photon flux and a gravity well, is to raise the temperature of the column.

        Are you offering yet another version of this “invisible barrier” unknown to traditional science “container” like the “invisible barrier at TOA like the glass of a greenhouse which stops the powerful radiant heat which is the Sun’s great thermal energy which is the Sun’s heat energy in transfer by radiation aka thermal infrared aka longwave infrared from the millions of degree hot Sun from entering”? Like the unknown to traditional science “invisible dam around the Earth blocking upwelling heat building up heat by restricting its flow to space”, at the top of your “gravity well”?

        How does this “gravity well” “raise the temperature of the column”?

        The transfer of momentum to an electron’s transition on the absorption of a photon alters the overall momentum of the whole molecule, and is both unidirectional and against the gravity field.

        So the upwelling photons from Earth in this “gravity well” are not heat but light?

        Electronic transitions are not applicable to radiated heat.

        http://chemwiki.ucdavis.edu/Organic_Chemistry/Organic_Chemistry_With_a_Biological_Emphasis/Chapter__4%3A_Structure_Determination_I/Section_4.3%3A_Ultraviolet_and_visible_spectroscopy

        “While interaction with infrared light causes molecules to undergo vibrational transitions, the shorter wavelength, higher energy radiation in the UV (200-400 nm) and visible (400-700 nm) range of the electromagnetic spectrum causes many organic molecules to undergo electronic transitions. What this means is that when the energy from UV or visible light is absorbed by a molecule, one of its electrons jumps from a lower energy to a higher energy molecular orbital.”
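
        The quoted distinction can be made quantitative with the Planck relation E = hc/λ; a short sketch in Python (the example wavelengths are illustrative choices):

```python
# Photon energy at a few example wavelengths, E = h*c/lambda.
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

for label, wavelength_m in [("UV, 300 nm", 300e-9),
                            ("visible, 500 nm", 500e-9),
                            ("thermal IR, 10 um", 10e-6)]:
    e_ev = H * C / wavelength_m / EV
    print(f"{label}: {e_ev:.2f} eV")
# UV and visible photons carry a few eV, the scale of electronic transitions;
# a 10 micron photon carries ~0.12 eV, the scale of molecular vibrational
# (and rotational) transitions, which is why CO2 and H2O absorb there.
```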

        Light cannot move the whole molecule into vibration which is what it takes to heat up a molecule, which is increasing its internal kinetic energy.

        Visible light works on the much tinier electronic transition level because it is much tinier than the bigger heat energy of longwave infrared. Visible light is some 500 times bigger than the even titchier electron and knocks the electron into a higher energy state, not out of orbit, that would be ionising and visible light isn’t powerful enough to do that; the electron emits the same visible light energy as it settles back to ground state, which is how we get our blue sky, the electrons of nitrogen and oxygen bouncing it around like a pinball machine. The much bigger heat wave of longwave infrared bangs into the whole molecule and gets the lot into internal vibration as the molecule absorbs it. Which is kinetic energy, the energy of movement.

        Rub your hands together, that is mechanical energy, friction, moving the whole molecule into vibration, kinetic energy aka heat – the molecules of your skin don’t move linearly, or perhaps they do for you. Are you, generic, shape-shifters? Are you part of the evolution of the life on Earth or are you blow-ins?

        “Whereas, emission of photons from gas molecules in an excited state is multi-directional due to ability of the gas molecules to tumble whilst in an excited state, i.e. tumbling occurs during the time between photon absorption and photon emission.

        Because real gas is a fluid? Not an imaginary massless ideal gas in empty space with no volume to tumble? Or are you saying the stability of the whole molecule is affected by each encounter of an electron with light?

        The overall effect of having an overall flux of photons traveling in one direction, against a gravity well, is to cause heating, as individual gas molecules are raised against the gravity well by the radiative pressure.

        You’re still claiming linear kinetic movement when traditional physics says this results in either electronic transitions in reflection/scattering by absorption of light, so no change, or internal vibration of the whole molecule, which is kinetic movement i.e. heat, from absorbing the bigger more powerful radiant heat which is longwave infrared, which results in the expansion of the volume of the gas molecule, which means it becomes less dense and therefore lighter than air under the pulling power of gravity, which causes it to move linearly in space; that is, being lighter than air it will rise.

        For example, the same number of molecules in liquid water will take up around 1,000 times more space as water vapour. It’s the combined volumes of expanded less dense gases which create areas of low pressure; we can feel this because they do not weigh so heavily on us. Hot air rises, cold air sinks; when real gases get cold they condense, they become denser, there are more of them in the same space because they take up less room.

        Van der Waals has been excised from AGWScienceFiction fisics the way it has excised the Water Cycle and excised rain from the Carbon Cycle; real gas molecules have volume and attraction and weight relative to each other under gravity, and they will separate out, which is what evaporation is. Water vapour and methane are lighter than air under gravity so will always rise; carbon dioxide is heavier so will always sink. Always equals spontaneously: it takes work to change that, just as it takes work to change the direction of the movement of water, which always flows downhill.

        Real gas molecules of traditional physics are not the imaginary ideal gas dots of nothing bouncing off imaginary invisible containers or each other not subject to gravity of the fantasy AGW fisics. Yet you invoke gravity, in your “gravity well”..

        You, generic AGW/CAGWs, don’t have a real gas atmosphere and don’t have real gravity – you go straight from the surface to empty space – yet you bring in terms from traditional physics inapplicable to your scenario, like gravity, or have to postulate the existence of unknown to traditional science powerful strong “invisible barriers”, which not only prevent the great invisible thermal energy from the millions of degrees hot Sun from entering at TOA, but keep your not subject to gravity volumeless weightless dots of nothing, which you disingenuously or out of ignorance call carbon dioxide, from continuing their journey in empty space …

        Thermodynamically, the presence of the gas is causing an increase in disorder in the system as discrete IR centered around 10 microns is absorbed and much lower frequencies are emitted, due to heating.

        Gobbledegook.

  39. As I mentioned above, a global ECS of 2 C probably means a land ECS of 4 C, because without the ocean surface warming much at present, the main warming response has been from the land and Arctic. This rate of land heating is the only way left to balance the forcing change. We see that the land TCR is already in excess of 4 C per doubling, and the linear assumption of this paper (if you believe it) means the land will continue to warm at twice the rate of the ocean, and the Arctic even more rapidly. As I mentioned above, the Armour et al. (2012, J. Climate) paper is relevant to the issue of varying thermal inertias and a nonlinear response. The paper refers to Armour on this point which is why they say their ECS is a lower limit.
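
    One way to see the arithmetic behind a land value well above the global one: if land (roughly 29% of the surface) warms r times as fast as the ocean, the global mean pins down the land response. A sketch in Python (the land fraction and the ratios are assumptions for illustration, not values taken from the comment):

```python
# How a land/ocean warming ratio maps a global-mean sensitivity onto a land value.
GLOBAL_ECS = 2.0   # C per doubling (the paper's best estimate)
F_LAND = 0.29      # assumed land fraction of the Earth's surface

def land_value(ratio):
    # global = f*land + (1 - f)*ocean, with ocean = land / ratio
    return GLOBAL_ECS / (F_LAND + (1.0 - F_LAND) / ratio)

for r in (2.0, 3.0, 4.0):
    print(f"land/ocean warming ratio {r:.0f}: land ~{land_value(r):.1f} C per doubling")
# A 2:1 ratio gives ~3.1 C over land; the slower the ocean response, the larger
# the ratio and the closer the land value gets to 4 C and beyond.
```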

    • Chief Hydrologist

      Personally speaking – the distinction between land and ocean seems a bit of a furphy.

      http://s1114.photobucket.com/user/Chief_Hydrologist/media/best_hadcrut4_zpsda90f7a2.png.html

      • I don’t know about that, one is wet and the other not so much.

        Actually, to really see the difference you need the estimated absolute values. In the NH, the Ocean is about 19.9 C and the Land is about 11 C. yielding a basic “sensitivity” of ~0.196 for the oceans and ~ 0.175 for the land. Then the “average” land elevation is +680 meters resulting in an “effective” “sensitivity” about 10% higher, ~0.158 C/Wm-2.

        You can also compare the ratio of the standard deviations, but unfortunately, the “surface” temperature record has some issues with interpolation, so the satellite data appears to give a better estimate.
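
        For reference, those “basic sensitivity” figures look like the no-feedback Planck response dT/dF = 1/(4σT³); a quick sketch in Python (the 19.9 °C and 11 °C surface temperatures are taken from the comment above):

```python
# No-feedback (Planck) sensitivity dT/dF = 1 / (4 * sigma * T^3).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

for label, t_c in [("NH ocean, ~19.9 C", 19.9), ("NH land, ~11 C", 11.0)]:
    t_k = t_c + 273.15
    sens = 1.0 / (4.0 * SIGMA * t_k ** 3)
    print(f"{label}: ~{sens:.3f} C per W/m^2")
# The warmer surface gives ~0.175 and the cooler one ~0.19 C/(W/m^2), matching
# the corrected ocean/land values given a few comments further down.
```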

      • JimD, you know what is kinda funny about anomalies? They are baseline dependent, dangedest thing. If you use a satellite era baseline, since that is the cat’s ass of global telemetry, land is catching up instead of leading. Being able to get different results using the same data is called a lack of “Robustness”. Some, not Cappy of course, might use preferential “views” that the lack of robustness allows for their own gain.

        Of course, absolute values tends to reduce the “creativity” of presentation.

      • maksimovich

        Let’s adjust Jim’s graph to account for waterworld, i.e. the SH:

        http://www.woodfortrees.org/plot/hadsst2sh/mean:120/mean:12/from:1900/plot/hadcrut3vsh/mean:120/mean:12/from:1900

        The proposition does not hold.

      • captd, you can shift the zero where you like, the rate for land will still be double that for ocean over the last 30 or so years. I can see you are trying to pull your obfuscation thing here but it doesn’t work.

      • maksimovich, yes, the northern continents are larger and have by far the greatest warming. More people live on those too (90% of the world population). But OK, if Oz and South America stay comfortable, you may be getting new neighbors.

      • maksimovich

        The SH also has the greatest forcing of solar irradiance into the ocean, two orders of magnitude (the annular mode) greater than the forcing here.

      • You can compare the southern ocean with the northern CRUTEM4 continent. It is not even close.

      • Chief Hydrologist

        Sorry, using BEST and Hadcrut4 – it is what it is. There is a bit of a divergence over the last decade but that isn’t climate, is it? Otherwise we have what we expect – smaller variability over oceans but temperatures don’t diverge too much.

        Ultimately – connections between land and ocean are a couple of days at most.

        http://www.wisegeek.org/how-does-ocean-temperature-affect-air-temperature.htm

      • maksimovich

        Yes, the SO is dampened; we call these clouds, e.g. Korhonen et al. (2010).

      • JimD, the rate would remain the same based on the anomaly, not the energy that anomaly is supposed to represent.

        https://lh5.googleusercontent.com/-skpB1ejZpNE/UZq1Yx5fS1I/AAAAAAAAIKo/6XqqqIROdro/s783/nh%2520UAH.png

        Like that, using ~19.9C for the Ocean and ~11 C for the land that is a rough estimate of the surface energy flux. In yours, you are comparing an anomaly for a low density higher elevation gas average of Tmax Tmin with a higher density average of a liquid, sub-surface temperature. The liquid Cp=Cv, the gas, not so much.

        If one is leading the other, which do you imagine is wagging the tail?

      • captd, the land rate is equivalent to 4 C per doubling and is twice the ocean rate. You seem to have drifted away from the salient point. And CH, the divergence has been going on for 30 years now, possibly even faster recently with the pause only affecting the ocean.

      • oops, I flipped the land/ocean: ocean is ~0.175 and land ~0.195 C/Wm-2, with density adjustment ~0.21 C/Wm-2, or about 20% more “sensitive” than the NH ocean.

      • Chief Hydrologist

        http://s1114.photobucket.com/user/Chief_Hydrologist/media/best_hadcrut4_zpsda90f7a2.png.html?sort=3&o=0

        There is not enough to draw a conclusion at all – but that doesn’t stop you, does it Jim? There is a bit of difference in the last decade. I am wondering if it is not an artifact of using sea surface temperature for surface temperature.

      • Chief, “I am wondering if it is not an artifact of using sea surface temperature for surface temperature.” Looks like it to me. That skin layer that radiant energy “sees” is a tricky sucker.

        JimD, when there is a lack of “robustness”, you should look at other baselines to estimate a “norm” that may be the best “fit”. Since 1900, glaciers have been receding, agriculture expanding, population exploding; all have an influence, so land could easily have started at a lower “average” than the oceans. That is the whole problem, teasing out what “normal” should be so you can estimate the CO2 portion of the impact. You are assuming “all”, which is the absolute worst case, an upper limit likely unobtainable. Not a problem, limits or boundaries are good things.

        Then look for a low end boundary. It’s probably somewhere in between those two. 0.8 C should be the “design curve”, the no-feedback sensitivity. As more data comes in you can improve your fit.

        Part of that more data is the satellite telemetry which indicates, “Houston. We have a problem.” You are fighting the satellite data about as hard as Trenberth. That is why the new Earth Energy Budgets being cited in the new peer reviewed papers have different names.

        I think Trenberth may have “minor adjusted” himself into a lateral promotion.

    • Jim D,
      I agree wholeheartedly with your premise.
      http://theoilconundrum.blogspot.com/2013/05/proportional-landsea-global-warming.html

      Keep pushing this, and ignore the dudes with the fake officer titles.

  40. When I look at that para above from New Scientist, and consider similar “serious” publications which use language in such a trashy and manipulative way, I wonder if we aren’t seeing the birth of a new genre: the climate tabloids.

  41. We don’t know what the climate sensitivity to CO2 is.
    We don’t know what the global average temperature is.
    We don’t know what the GAT was 100 years ago.
    We don’t know what the GAT will be 100 years from now.
    We don’t know what all the forcings are that contribute to changing climate.
    We don’t know the net effect of water vapor as a forcing.
    We don’t know at what level increasing temperatures will have net negative effects.
    We don’t know the actual cost of the proposed solution, or the actual cost of failing to act.

    We know none of these things with the accuracy and certainty that would justify decarbonizing the global economy at enormous cost.

    We do know that enacting massive taxes on energy will impact the poorest people hardest, no matter what redistributive Rube Goldberg machine the governments come up with.

    We do know that the Chinese, Russians and Indians will not join in an economic seppuku pact with the west to decarbonize the economy.

    We do know that the most modern, comprehensive, expensive, detailed climate models make predictions all over the map, virtually all of which predicted much warmer temperatures than we have actually experienced the last 15+ years.

    We do know that virtually all consensus supporting papers in climate science have been funded by governments, and have made findings and recommendations that support the government goal of … more government, including more government funded research.

    We do know that there has never been a debate about CAGW, including the C, among the government funded climate science community, ever.

    Given what we don’t know, and what we do know, why precisely should I care about the “most likely value of equilibrium climate sensitivity” at the current state of our knowledge?

    • Gary,
      I made a similar comment a few months ago, only not as nicely done as yours. Everything you say is, to the best of my knowledge, correct. Next time you run into WHT ask him to defend his precious precautionary principle in the face of all that. And while you’re at it mention that in point of fact we don’t even know if any warming might actually be a net benefit…

    • Gary M

      +100

    • Gary M, you write “Given what we don’t know, and what we do know, why precisely should I care about the “most likely value of equilibrium climate sensitivity” at the current state of our knowledge?”

      Gary, I, and many others, agree completely with what you write, and in our own styles have been writing the same sort of thing for years. The trouble is that there are thousands of peer (pal) reviewed papers in prestigious publications that say you, and we, are wrong. And so it becomes very difficult to discuss the subject scientifically, since the proponents of CAGW trot out these various papers, and claim that they prove that what you say is wrong. They also claim that, since there is all this “science” to prove you are wrong, then what you are doing is spreading disinformation, and you must be stopped from doing this at all costs.

      Unfortunately, until we have one of the major scientific learned societies in our corner, the proponents of CAGW will not listen to what the proper science is telling us. We just have to wait for Mother Nature to prove, with empirical data, that we are right. Luckily, it is much more difficult for the proponents of CAGW to argue against empirical data. Not impossible; just more difficult.

      • Jim Cripwell,

        Oh, I understand completely. Much of the debate about CAGW takes the form of “even if what the consensus says about A is true, proper analysis leads to C, not B.” This is true with your comments, the whole UHI imbroglio, paleo proxy debates, etc. Arguing about the proper results of an analysis does not concede that the initial assumptions that go into it are true.

        I just think sometimes skeptics get lost making that kind of argument, and forget just how much we don’t know.

        pokerguy,

        I have seen you post similarly, and I have posted similar comments to the one above previously as well. I just think it bears repeating every once in a while.

    • I will just add that the list of what we don’t know is taken solely from what I have learned by reading consensus scientists’ explanation of their own and related research.

  42. Matthew R Marler

    from the SI of Otto et al: The atmospheric heat uptake amounts to about 1% of the total trend. We produce an estimate of atmospheric uptake as follows: we convert an estimate of annual global mean atmospheric temperature anomalies to energy changes owing to specific and latent heating assuming a total atmospheric mass of 5.14 × 10^18 kg, a mean total water vapor mass [11] of 12.7 × 10^15 kg, a heat capacity of 1 J g^-1 °C^-1, a latent heat of vaporization of 2,466 J g^-1, and a fractional increase of integrated water vapor content [12] of 0.075 °C^-1.
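
    Plugging the quoted constants together gives the atmospheric heat uptake per degree of warming; a small sketch in Python (only the numbers quoted from the SI are used; the comparison to ocean heat content is an added rough figure):

```python
# Atmospheric heat uptake per 1 C of warming, from the constants quoted above.
M_ATM = 5.14e18   # total atmospheric mass, kg
M_WV = 12.7e15    # mean total water vapour mass, kg
CP = 1.0e3        # heat capacity, J kg^-1 C^-1  (1 J g^-1 C^-1)
L_V = 2.466e6     # latent heat of vaporization, J kg^-1  (2,466 J g^-1)
F_WV = 0.075      # fractional increase of water vapour per C

sensible = M_ATM * CP        # J per C of atmospheric warming
latent = M_WV * L_V * F_WV   # J per C, from the extra water vapour
print(f"sensible {sensible:.2e} J/C + latent {latent:.2e} J/C = {sensible + latent:.2e} J/C")
# Roughly 7.5e21 J per degree, small next to decadal ocean heat content changes
# of order 1e23 J, consistent with the "about 1%" statement in the SI.
```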

    They assume that this increase in atmospheric heat does not cause any increase in the rate of transfer of heat from the lower troposphere to the upper troposphere via convection. It just sits there in the troposphere.

    However, if doubling the CO2 reduces the rate of radiative transfer from the lower troposphere to the upper troposphere, transiently (as proposed by the authors), then the temperature differential between the lower and upper tropospheres should increase, transiently, and during that transient the convection from lower to upper should increase. You have both a faster convection rate and higher heat density of air convected.

  43. > It will be very interesting indeed to see if the IPCC budges from the 2-4.5 C range that has remained unchanged since the 1979 Charney report.

    We estimate the most probable global warming for a doubling of CO2 to be near 3 C with a probable error of 1.5 C.

    http://www.atmos.ucla.edu/~brianpm/download/charney_report.pdf

    INTEGRITY ™ — No Need for Quotes

  44. The UN’s IPCC was never set up to do scientific research in climate and it shows in their work. Consequently it is not surprising that there is a breakaway group of scientists. One can only hope that it is not too late to repair the damage. However it will take more than fiddling with probability theory. The 20 or so models sponsored need a much closer look because in the end the only hope of predicting climate at the end of this century lies with the models.

    One thing current models appear to lack is the ability to forecast periods of no increase in average global temperature. Of course, models have to be able to predict both the average and standard deviation as well as the power spectrum of temperature, although not necessarily regionally.

  45. Just wondering what the Judith Curry line is on all this.

    Would it be that calculations and measurements of climate sensitivity (2x CO2) are just too uncertain to be at all meaningful? That is, except when they are acceptably low. Then the uncertainty drops away and the meaning is clear to all?

  46. At the time of the IPCC First Assessment Report the mainstream view was that getting relevant upper and lower bounds for ECS and even for TCR would take more than a decade, based on the rate of warming expected from other considerations and natural variability. The rapid warming of the next 10 years didn’t follow that expectation, raising the lower bound to the point that it was justified to conclude that the warming up to that point was difficult to understand based on natural variability alone. The rapid warming left the upper bound open, but gave a meaningful lower bound. The best estimate from warming went up to a level similar to rates derived from paleo studies and also seen in many climate models.

    Around the year 2000 it was not possible to tell whether the warming of the preceding decade had a strong natural component. It was certainly not possible to exclude that possibility, but it was also possible to think that it was all AGW. Now we know more. The latest 15 years tell us that there was surely a lot of natural variability in the warming. In the 1990s every new warm year added a little to the best estimate of TCR; over the last 15 years every year has taken a little off from the best estimate. By now we have even a meaningful upper limit.

    The paper discussed in this thread presents some new analysis, but basically it just confirms that the authors accept views like those I present above. They have probably done that all the time, now they are ready to make it public.

    The real mainstream seems now to be more willing to accept that the conclusions from the recent warming are very significant. We still have many scientists who don’t think so and present arguments to support the view that this kind of analysis overlooks important issues, and that the relatively low TCR is an illusion. Fred Moolten has brought up such thinking in his comments above.

    My own intuition tells me that there are certainly some significant factors like those discussed by Fred, but that there are equally important opposing factors. I’m happy with the conclusions of the paper of Otto et al. Perhaps I’m just an optimist, but that’s the way I really feel.

    What I have written here and on my own site over the last two years is all based on a personal estimate of the likely range of warming rate very similar to what this paper presents. Thus what I have written on policy is unchanged by this paper. I’m still puzzled by the difficulty of making wise decisions when the uncertainty and lack of knowledge have all the characteristics of the climate issue. I don’t like the Kyoto agreement or a continuation of it, but I’m not willing to say that nothing comparable should be done.

    What I’m most unhappy with are decisions that force the development onto fruitless but costly paths in the spirit of panic reactions. While the warming may, indeed, be a serious threat, there’s no reason to panic. It’s better to search vehemently for good solutions than to implement bad ones. This paper gives a little more assurance that we’ll have a chance of making what’s needed before damages grow too large.

    • Peter Lang

      Pekka Pirila,

      What I’m most unhappy with are decisions that force the development onto fruitless but costly paths in the spirit of panic reactions. While the warming may, indeed, be a serious threat, there’s no reason to panic.

      I strongly agree with this statement.

      Examples of “fruitless but costly paths” are carbon pricing and mandating renewable energy.

      Thus what I have written on policy is unchanged by this paper. I’m still puzzled by the difficulty of making wise decisions when the uncertainty and lack of knowledge have all the characteristics of the climate issue.

      I agree with you on this statement too. However, I fail to understand why you seem so opposed to even taking the first step towards “making wise decisions”.

      The first step must be, IMO, to define the requirements that the policy must achieve and define the requirements that must be achieved for the policy to succeed. I suggested two requirements that, IMO, must be achieved if the policy is to succeed. They are:

      1. The policy must have a high probability of achieving the requirements (the specified outcomes.)

      2. To be able to be implemented it will need to be able to demonstrate it provides benefits (e.g. economic or other benefits) almost immediately and throughout the duration of the policy for all countries and most people in each country.

      However, you avoided discussing them. Therefore, I cannot take your comment about policy seriously. You do not seem to want to even consider the most basic requirements essential for success of any policy.

      By the way, Kyoto Protocol does not meet these requirements and therefore is doomed to fail.

      • My view on developing renewable energy choices is that R&D should be funded, and that there are also many cases where some support of deployment is justified. Funding demonstration and experimental projects is often productive.

        Funding larger scale deployment should, however, be prudent. Very high subsidies that lead to very large volumes are a waste of money as the costs may be several times higher than all the benefits taking all indirect ones into account. A good case for subsidies for deployment has the characteristics:
        – The technology has the potential to turn profitable in foreseeable future.
        – The technology has reached the maturity where increasing the scale of production is the main additional step for reaching profitability.
        – The risks and delays are still too high for private companies taking into account also the fact that the first company to reach profitability cannot prevent competition from entering the market before it has recovered the costs of development.

        In the above case the appropriate level of subsidies is not very high, as the need for very high subsidies proves that the requirements are not met.

      • Peter Lang

        Pekka Pirila,

        I can’t disagree with your points, but would still like to focus, as a first step, on defining the requirements that must be met for policy to succeed. This is the essential first step IMO. Until the requirements are recognised, we will all keep picking winners and arguing about policies that have no chance of succeeding in the real world. Thus I refer you once again to these two as a starting point to focus discussion:

        1. The policy must have a high probability of achieving the requirements (the specified outcomes.)

        2. To be able to be implemented it will need to be able to demonstrate it provides benefits (e.g. economic or other benefits) almost immediately and throughout the duration of the policy for all countries and most people in each country.

        Do you agree with them and if not why not?

        On a separate issue from this main point I am still hoping we can resolve and get past (see above), I’d urge we acknowledge there is no point in advocating renewable energy. It is hopelessly uneconomic and unlikely to be economic in the foreseeable future for grid generation except in a few niche locations. It uses far more resources than other low emissions technologies. It cannot provide more than a small component of the world’s energy needs. It has no significant benefits. It is also a major distraction from progress. Therefore, spending time on it is a waste of time, money and resources that could be far better utilised.

        As an example of just how uneconomic it is, roof top PV in sunny Australian capital cities abates CO2 for about $600/tonne. Solar thermal and wind generation are also very high cost. All are mandated as ‘must take’ and heavily subsidised. There are many hidden costs resulting from the mandatory ‘must take’ distortion on the electricity market.

        If you are interested in knowing a bit more about this, this recent paper on the hidden costs of PV in Australian capital cities is excellent:
        http://www.mdpi.com/2071-1050/5/4/1406

      • Peter,

        I don’t agree that the probability of success must always be high, the required probability depends on the potential benefits. The combination of probability and potential benefits must exceed some threshold.

        Similarly the level of evidence that must be presented depends highly on other contributing factors.

        A simple rule that I believe in is that almost all simple rules are poor.

        Formal rules must be combined with less formal considerations. Objective facts don’t give full valid answers, but must be complemented by more subjective judgments. Subjectivity is always a potential source of error, but having a potential for error is better than being sure that there’s an error, as there is in a typical purely objective comparison.

        Reliance on measurable objective facts resembles the case of looking at keys under the streetlight just because the light provides a better visibility, while the keys are elsewhere.

      • Peter Lang

        Pekka Pirila,

        A simple rule that I believe in is that almost all simple rules are poor. …

        Statements like this and what follows it are unhelpful. The quoted bit and what follows to the end of the comment are just diversion and obfuscation to avoid dealing with the substantive issue. So how about giving up on the obfuscation and arm waving and try to focus on the requirements that must be met for policy to succeed?

        I don’t agree that the probability of success must always be high, the required probability depends on the potential benefits. The combination of probability and potential benefits must exceed some threshold.

        Similarly the level of evidence that must be presented depends highly on other contributing factors.

        Are you saying you disagree with one or both of the two requirements I’ve posted six times? I am not clear on what you are saying, nor why.

        Do you have suggested wording changes and an explanation for the reason for those changes?

        Do you recognise that the two proposed requirements are the requirements to get the broad public support necessary to get legislation passed and to maintain broad public support for as long as it takes to complete the job?

        For policies to pass legislatures in democratic countries they must gain broad public support. For global policies to succeed every nation state will have to believe it will gain from participating. I presume you recognise these basics.

        We are not talking about science, economics or engineering here. We are talking about policy that we can get passed by the legislatures in all democratic countries and get signed by all governments.

        I don’t agree that the probability of success must always be high, the required probability depends on the potential benefits.

        Actually, it doesn’t depend on the “potential benefits”. Getting the legislation passed depends on the public’s perception of the near term costs and benefits. But maintaining it depends on the public believing the policy is delivering the benefits. This applies in each democratic country and also globally. Globally, countries will not agree to participate in a policy unless they can see they will gain almost immediately. Free trade is an example where the participating countries gain, but there are some losers in each country in the short term so the gain for the rest in that country has to be sufficient to pay sufficient compensation to the losers so they accept they are better off.

        Please don’t be pedantic about me getting all possible caveats into every phrase and sentence so that I’ve covered every possible scenario. Try to understand the big picture.

        I remind you:

        Uncertainty about the problem is a given; uncertainty about the chosen solution is inexcusable. This is to say, we should be confident that our solutions are going to be effective, and the more expensive the solution the more confident we should be.

        Carbon pricing and renewable energy are enormously expensive solutions. However, the policies I’ve been advocating are not. In fact, I believe they are potentially net positive after a short time, irrespective of any presumed climate benefits in the distant future. That is, they are ‘No Regrets’ policies. This is what we should be striving for. The policies I am advocating meet the two requirements I’ve kept posting and you keep dodging.

        Because we haven’t made any progress on the policies I’ve been advocating (over the past 2 years or so), I decided to drop back a notch and try to define and agree the requirements for the policies if they are to succeed. So, I ask again: can you say whether you accept the two requirements, or if not, can you say why not and perhaps provide alternative wording?

        But, please, let’s focus on the requirements and stop the diversions, obfuscation and arm waving. It’s not helpful.

      • Peter,

        I believe that valid statements can be made at two levels:

        – on generic principles
        – on fully specified cases

        What you are asking is to make such statements on the intermediate level, more specific than the generic principles, but without the full specification of a case. Such statements are not what I want to make. Statements made on that level are somewhere true and elsewhere not. I don’t want to make such statements.

        While I find that renewable energy has been promoted wrongly, I see more potential for it under suitable conditions than you appear to see. Similarly, I have a basically positive attitude towards nuclear energy, but I’m not at all as enthusiastic about its real-world potential as you seem to be.

        Going to specific non-generalizable cases I do have stronger opinions.

      • Peter Lang

        Pekka Pirila,

        I believe that valid statements can be made at two levels:

        – on generic principles
        – on fully specified cases

        What you are asking is to make such statements on the intermediate level, more specific than the generic principles, but without the full specification of a case. Such statements are not what I want to make. Statements made on that level are somewhere true and elsewhere not. I don’t want to make such statements.

        This is the most frustrating arm waving waffle imaginable. Is this how you worked as a Chair of energy economics or whatever?

        What on Earth does all that waffle mean?

        If you disagree with the two requirements I’ve laid out, why don’t you have the balls to say what is wrong with them and suggest improvements?

        If you want to start at a higher or lower level, why don’t you suggest the requirements you are thinking of and justify them? Then at least there is the possibility of a rational debate about rational policy that could succeed.

        Your arm waving, avoidance and obfuscation provide an excellent example of why scientists would be totally unsuited to policy development. Which is what you said earlier; I disagreed, and that is what got this discussion started. You are proving how right I was.

        Going to specific non-generalizable cases I do have stronger opinions.

        What does this mean? Are you talking about requirements for policies that could cut global GHG emissions, or something else? Because this discussion is about policies that are likely to succeed in cutting global GHG emissions.

      • Peter Lang

        Pekka Pirila,

        While I find that renewable energy has been promoted wrongly, I see more potential for it under suitable conditions than you appear to see.

        So why don’t you be more specific? What proportion of global energy could renewable energy realistically provide in say 2060 (about 50 years from now)? What would be the cost compared with fossil fuels and nuclear? What would be the environmental impact? What would be the land area required, and the materials required?

        Have you seen these:

        ‘81,000 truckers for solar’: http://bravenewclimate.com/2013/03/14/81000-truckers-for-solar/

        ‘Energy demand equation for 2050’: http://bravenewclimate.com/2009/10/11/tcase3/

        ‘Energy system build rates and material units’: http://bravenewclimate.com/2009/10/18/tcase4/

        ‘A check list for renewable energy plans’: http://bravenewclimate.com/2010/07/12/tcase12/

        ‘Emission cuts realities for electricity generation – costs and CO2 emissions’: http://bravenewclimate.com/2010/01/09/emission-cuts-realities/

        ‘Renewable Electricity for Australia – the Cost’: http://bravenewclimate.com/2012/02/09/100-renewable-electricity-for-australia-the-cost/

        ‘Renewables or Nuclear Electricity for Australia – the Cost’: http://oznucforum.customer.netspace.net.au/TP4PLang.pdf

      • @Lang: As an example of just how uneconomic it is, roof top PV in sunny Australian capital cities abates CO2 for about $600/tonne.

        If CO2 abatement were the only economic benefit of solar PV no one in their right mind would install it.

        We did not install solar PV to abate CO2 but to decrease our $6000/yr electric bill, which is now down to $800/yr. Our meter claims we’ve abated 55 tons of CO2 since installation (on May 22, 2008, exactly 5 years ago today, so 11 tons/yr), but that wasn’t a factor in our purchase decision.

        Nonetheless the cost to us of abating a ton of CO2, amortized over the first decade, will be slightly negative, and once the system has paid for itself in this way we will in effect thereafter be paid $5200/11 ≈ $470 for each ton of CO2 abated.
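
        To make the arithmetic explicit, here is a minimal sketch of the homeowner-side view described above. The system cost is a hypothetical placeholder (it isn’t stated in the comment); the savings and abatement figures are the ones quoted.

```python
# Homeowner-side view of a rooftop PV purchase, using the figures quoted above.
# SYSTEM_COST is a hypothetical placeholder; it is not stated in the comment.

SYSTEM_COST = 50_000.0    # assumed up-front cost, $ (hypothetical)
ANNUAL_SAVINGS = 5_200.0  # $6000/yr bill reduced to $800/yr
ANNUAL_ABATEMENT = 11.0   # tonnes CO2/yr, per the meter

# Net cost per tonne over the first decade (negative means the owner came out ahead).
decade_net_cost = SYSTEM_COST - 10 * ANNUAL_SAVINGS
decade_cost_per_tonne = decade_net_cost / (10 * ANNUAL_ABATEMENT)

# After payback, the owner is in effect paid this much per tonne abated.
payment_per_tonne = ANNUAL_SAVINGS / ANNUAL_ABATEMENT

print(f"Cost per tonne over first decade: ${decade_cost_per_tonne:.0f}")
print(f"Effective payment per tonne after payback: ${payment_per_tonne:.0f}")
```

        Note that this is the private, after-subsidy view; the system-level accounting Peter Lang argues for below is a different calculation.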

        But I would agree with you that we couldn’t get remotely like $470 for a ton of CO2 if we could somehow extract it from the atmosphere and try to sell it.

      • Peter Lang

        Vaughan Pratt,

        The sole reason for subsidising and mandating uneconomic renewable energy, like solar PV, is the belief that it will cut GHG emissions. There is no other reason.

        The people who do not have solar PV are subsidising elites like you. The poor are subsidising the rich – by an enormous amount.

        I’ve explained to you previously the difference between what is economic to an individual customer after subsidies, compared with what is economic for the country. Solar is hugely expensive for the country, as is wind power.

      • @PL: the poor are subsidising the rich – by an enormous amount.

        Perhaps I misunderstood you. You said

        roof top PV in sunny Australian capital cities abates CO2 for about $600/tonne.

        Let’s consider an installation in Australia that is abating 10 tons of CO2 a year. (The one on my roof in Northern California has averaged 11 tons/yr over the past 5 years.) Over a 20-year lifetime for that installation it will have abated 200 tons. According to you those 200 tons will have cost someone $120,000.

        I don’t see how subsidies can account for that $120,000 that you’re saying abating those 200 tons of CO2 costs. I’d therefore be very interested in seeing the maths that supports your figure of $600/ton of abated CO2.

      • Peter Lang

        Vaughan Pratt,

        I’d therefore be very interested in seeing the maths that supports your figure of $600/ton of abated CO2.

        Did you read the Graham Palmer paper I linked? If not, you need to read it first. Then apply the method and realistic assumptions for the average operating life and average capacity factor through life for roof top PV. You also need to include the full cost of the installation, including the hidden cost to the network and other generators. After you’ve had a go using reasonable assumptions, tell me what you get and what numbers you used for those three assumptions. Then I’ll tell you what I used.

        By the way, a simpler way, but not as correct as Graham Palmer’s, is in the spreadsheet you can download here: http://bravenewclimate.com/2012/02/09/100-renewable-electricity-for-australia-the-cost/

      • @PL: After you’ve had a go using reasonable assumptions tell me what you get and what numbers you used for those three assumptions. Then I’ll tell you what I used.

        I’m sure a lot of people here would find it very useful to understand how a private homeowner’s 7.5 kW home solar PV installation could cost the public $120,000 over 20 years. That’s $16 a watt in public subsidies for that installation. If true that would be a terrific argument against solar PV for all those opposed to it.

        How could that possibly be true? Peter’s best answer seems to be “read the bravenewclimate literature and do the maths yourself,” a burden he proposes to place on everyone interested in this question.

        If someone else (Faustino?) has a straight answer that isn’t too much more complicated than the question, one that can be summarized reasonably convincingly in a paragraph or three in this thread, the readers of Climate Etc. should find its reasoning very useful in arguing against solar PV.

        The more paragraphs it takes, the less convincing the reasoning and hence the less useful to everyone here who wants to make the case against solar PV to others. Simply repeating Peter’s argument “read the bravenewclimate literature and do the maths yourself” won’t get people terribly far in making that case.

      • Peter Lang

        Vaughan Pratt,

        I’m sure a lot of people here would find it very useful to understand how a private homeowner’s 7.5 kW home solar PV installation could cost the public $120,000 over 20 years.

        You’ve made up a strawman which is totally false. A lie. I’ve had plenty of experience with your dishonest tactics before, so I am not prepared to play if you are going to be dishonest, conniving, devious and disingenuous – which I fully believe you will be.

        You asked a question. Therefore, if you want to learn, you are the student and I am the teacher. I’ll take you through it. But you have to be prepared to do your homework. The fact you haven’t even bothered to open the Graham Palmer paper shows you are just playing games. (And I know you didn’t even open it, because the link I gave is to the peer-reviewed paper, not to BraveNewClimate.) It’s in this comment (I am giving the link to the comment rather than the paper so you’ll re-read the first comment you responded to: http://judithcurry.com/2013/05/19/mainstreaming-ecs-2-c/#comment-323910 )

        The more paragraphs it takes, the less convincing the reasoning

        Your last paragraph, considered in the context of your five paragraphs of dishonest strawman tactics, avoidance of even reading the paper, and misrepresentation, seems rather hypocritical, don’t you think?

      • Peter Lang

        Vaughan Pratt,

        Further to my previous comment, the fact you didn’t even bother to read the Graham Palmer paper (let alone study it and really understand it, and perhaps ask questions if there are parts you don’t understand), nor download the simple spreadsheet, shows you are not in the slightest interested in understanding or getting an answer to the question you asked me. I assert you are far more interested in trying to justify your beliefs, and to justify why it is OK for a member of the elite, a professor at Stanford in fact, to bludge off the poor.

        You’re a disgrace.

        [That should embarrass you sufficiently so you now go and read the Graham Palmer paper and try to find a genuine error in the $600/tonne CO2 abatement cost with roof top PV. ]

        I’d be very impressed if you come back and say something like: “Wow, I didn’t realise any of that. I’ve learnt a great deal. Thank you Peter Lang. I bow to your great wisdom!! I feel so guilty I am going to give my PV set to a poor family. :)”

      • Peter Lang

        Vaughan Pratt,

        I’ve just reread the first comment you responded to where I gave the link to the Palmer paper and stated the $600/tonne CO2 abatement cost with roof top PV in Australia (Melbourne; Brisbane is similar). http://judithcurry.com/2013/05/19/mainstreaming-ecs-2-c/#comment-323910

        Glad I did re-read it, because I had made a point of making clear in other comments, but not this one, that Graham Palmer, being a cautious engineer and generally tending to understate figures where there is uncertainty, gave a lower figure than the $600/tonne CO2 abatement cost. The difference is that I used more realistic assumptions for the average operating life of installations and the average capacity factor for the fleet of roof top solar installations in a city. But we can get to arguing about the assumptions once you have done your homework – if you decide to do it, of course!

        What fun kicking the sh-t out of a pompous, arrogant, ignorant professor with an enormous ego.

      • Since you had avoided answering the simple question of how abating 200 tons of CO2 over 20 years could cost $600 a ton, I was hoping someone else could answer it since we didn’t seem to be making much progress here. But I must say how my question constitutes a “lie” is a mystery to me.

      • Vaughan Pratt,

        Your figures are made up by you, totally baseless and not worth discussing. If you are not prepared to read the paper and get some understanding of the subject, what’s the point of me getting into the stupid sorts of discussion you want to have – which are just about your beliefs? You make stuff up as you go along, avoid the subject, and continually twist and misrepresent what I’ve said. In short, your integrity is not befitting of an academic, let alone a professor at Stanford, so there is no way I’ll play your game. If you want to learn, I’ll teach you. Go do your homework and come back when ready.

      • I asked a simple question, “how can abating 200 tons of CO2 cost $600 a ton?” You responded first with “read this 30-page paper and get back to me,” and then when I protested that this was not a straight answer you began accusing me of lying.

        I can assure you of the veracity of the two numbers I cited, namely the 5-year age of my system and its meter reading of a tad over 110,500 lbs of CO2 abated to date. From that it follows that over 20 years my home system (7.5 kW at 37 N) will have abated 200 tons. There are no other relevant numbers I could be lying about.

        Your cost of $600 to abate a ton of CO2 implies a cost of 200 x $600 = $120,000 over 20 years. A straight answer would be to break down the $120,000 into its components: what are the various costs that add up to that figure? That is, how is that $120,000 pie sliced? No need to give a hundred components, just group them logically into say half a dozen or so main components so people can get a rough idea of where those costs are going. That would be six lines or so forming one table, equivalent in size to one paragraph. Later on one could break down each line into its own components if that seemed worthwhile.

        If you can’t do that then I’m sure I won’t be able to do it by slogging through a 30-page paper that I’m much less familiar with than you and wondering which pages are relevant to the cost breakdown. It would not be a productive expenditure of the time of either me or anyone else here who might be interested in how abating a ton of CO2 could possibly cost $600. If there were even one other such person, that would entail a pointless duplication of effort, and if there weren’t then perhaps the question isn’t interesting enough to the other readers to be worth pursuing in this forum.

      • Peter Lang

        Vaughan Pratt,

        You do not understand what you are doing. We’ve discussed this many times before and it was apparent on every occasion you have little understanding of the issues involved. You seem to not appreciate that if you are connected to the grid, the installation is part of the electricity system. Therefore, the full system costs of PV in the system have to be included. Your figures for your house are irrelevant because they are not the system costs and do not include the subsidies from governments and other electricity consumers. I’ve explained all this to you on numerous previous occasions. On each occasion you were devious and more interested in trying to prove you had made the right decision in buying a PV installation than in understanding the issues and the costs.

        I know there is no point whatsoever in providing the basis of the estimate until you have read and understood the methodology. If I did, then I’d have to spend the next twenty years arguing with you about the method. No thanks. If you want a simpler method, download the spreadsheet I pointed you to earlier.

        The Palmer paper explains what you need to know. It is a good paper. You are a professor. Surely you are not trying to tell me it is too much for you to read it and understand it, are you, Professor?

      • @PL: Your figures for your house are irrelevant because they are not the system costs and do not include the subsidies from governments and other electricity consumers.

        Ah, I think I see the problem. You seem to think that the figures from my house involve costs. They do not. The only relevant figure from my house is the meter’s claim that the system is abating 10 tons of CO2 per year.

        The only figure bearing on cost to anyone was provided by you, namely $600 per ton of abated CO2. According to you, over a 20-year period my system is costing me plus governments plus other electricity consumers etc. etc. $120,000.

        I am completely open to an analysis of this amount in terms of the costs to all entities. For a start I’d be interested in just three numbers $x, $y, and $z summing to $120,000, as follows:

        Cost to me: $x
        Cost to government(s): $y
        Cost to other electricity consumers: $z

        If you have a more detailed breakdown that would be interesting too.

      • Peter Lang

        Vaughan Pratt,

        If you are interested in learning about the cost of roof top PV and the CO2 abatement cost with PV (for the nation which is what we are talking about), just ask and I’ll assist you to learn. If not, no point wasting my time on a zealot.

      • Peter Lang

        Vaughan Pratt,

        If you want those numbers for your installation (which is connected to the grid and subsidised by other ratepayers and taxpayers), you can work it out for yourself. I haven’t attempted to split them up this way.

        As I said, read the Palmer Paper as a first step.

        I know your game, and not interested in playing it. Got that? But I will help you to understand if you are genuinely interested (which I am convinced you are not). Got that?

        Just to reinforce the point, the CO2 abatement cost with roof top PV in Melbourne is about $600/tonne CO2 (using reasonable, defensible assumptions for average capacity factor, average operating life of the installation, average inverter life and other assumptions). It is similar in Brisbane and Sydney.

      • @PL: If you want those numbers for your installation (which is connected to the grid and subsidised by other ratepayers and taxpayers), you can work it out for yourself. I haven’t attempted to split them up this way.

        Actually my question is not specific to my installation but to any home installation that is abating 10 tons of CO2 a year. I’d be very interested in a top-level cost breakdown for a Melbourne home abating CO2 at that rate.

        In fact I’m surprised a straightforward breakdown of that kind doesn’t already exist: how is the $600 apportioned between those shouldering that burden? Why hasn’t Palmer done it already? He’d presumably be a reliable source for such a breakdown—those interested in the question but lacking the requisite background for such calculations certainly would not be. A professionally done cost breakdown would be far more convincing than “go read the paper and work it out for yourself.”

        I did actually look at the paper, but was unable to find any “method.” Turning the many facts in the paper into a method looks like it would need someone with Palmer’s level of expertise to do it. I could fully understand the reluctance of others to attempt it since it would not be an effective use of their time and would be unlikely to produce a reliable result.

      • Peter Lang

        I believe in a ‘horses for courses’ energy policy, especially when it comes to renewables. In the UK, for instance, we should be concentrating far more on that inexhaustible source of power – the sea. As an island, nowhere is more than 70 miles from it.

        In that regard I wonder if you have any figures as to the cut-off point where solar strays from ‘marginal but reasonably effective, albeit at considerable cost’ to ‘don’t even think of trying to do that.’

        I am thinking of the UK experience whereby my own area receives some 1750 hours per year of sunlight and (pathetically) is one of the sunniest places in Britain. Sunshine and light levels are markedly lower of course in winter when solar power is most needed.

        Any thoughts? I’ve no idea how many hours of sunshine Vaughan’s house gets, but I suspect it is considerably more than us.

        tonyb

      • Peter Lang

        Vaughan Pratt,

        In fact I’m surprised a straightforward breakdown of that kind doesn’t already exist: how is the $600 apportioned between those shouldering that burden?

        Someone may have done it that way, but I’ve never seen it and I’ve never seen anyone ask for it before. It is of no interest to me. The answer would change from state to state because the feed-in tariffs vary from state to state and from time to time (they are frequently changed, as are the up-front subsidies). They’d also vary according to the penetration. My interest is in the total capital cost of renewable energy systems to the country compared with fossil fuels and nuclear, and also the cost of electricity from such systems and the CO2 abatement cost from such systems. I’ve summarised and compared these three costs in Figure 6 here: http://oznucforum.customer.netspace.net.au/TP4PLang.pdf . Those are the sorts of comparisons that are relevant to policy decisions and especially relevant as to whether or not we should continue to mandate and subsidise renewable energy.

        Your question about your particular PV installation is totally meaningless to me because you want to think only about what you’ve spent and ignore the other costs. You haven’t even given an indication yet that you realise the extent to which taxpayers and electricity consumers (especially the poor) are supporting your elitist hobby.

        I did actually look at the paper, but was unable to find any “method.” Turning the many facts in the paper into a method looks like it would need someone with Palmer’s level of expertise to do it. I could fully understand the reluctance of others to attempt it since it would not be an effective use of their time and would be unlikely to produce a reliable result.

        I accept that statement. You need to be able to reproduce the figures in Figure 6 and then change some of his rather cautious assumptions (cautious in that he wants PV enthusiasts to read and understand the paper, not just dismiss it because it doesn’t tell them what they want to hear).

        If you show the slightest inkling of wanting to learn rather than just wanting to push your barrow, I’ll help you. I’ve already done the sensitivity analyses to changes of important inputs.

      • Peter Lang

        Tonyb,

        I believe in a ‘horses for courses’ energy policy, especially when it comes to renewables, …

        In that regard I wonder if you have any figures as to the cut-off point where solar strays from ‘marginal but reasonably effective, albeit at considerable cost’ to ‘don’t even think of trying to do that.’

        Yes, I do have those figures. And the short answer is: don’t even think of it anywhere except for off-grid situations. Solar is not viable anywhere and unlikely to be viable in the foreseeable future, if ever. The same applies to wind power and most other non-hydro renewables (traditional geothermal in volcanic areas is one exception).

        That’s the short answer. This excellent 18-minute TED video by David MacKay, Chief Scientific Adviser to the UK Department of Energy and Climate Change, may answer some of your questions: http://www.ted.com/talks/david_mackay_a_reality_check_on_renewables.html .

        Happy to answer more questions on costs. Or you can look at the cost to generate eastern Australia’s electricity with a mix of renewables:
        http://bravenewclimate.com/2012/02/09/100-renewable-electricity-for-australia-the-cost/

      • @PL: Your question about your particular PV installation is totally meaningless to me because you want to think only about what you’ve spent and ignore the other costs.

        Not true. My question was, how could an installation (mine for example but it could be that of any other house) that abates 10 tons of CO2 a year cost $600 a ton? Obviously I’m not ignoring other costs than mine, though I would like to know how much of that $600 is paid by the owner of the installation. If for example half of it is paid by the owner then others are only paying $300. If $400 is paid by the owner then others pay only $200, and so on.

        Without an answer to my apportionment question it is meaningless to cite $600 as an “unfair burden on others.”

      • Vaughan Pratt,

        Your questions are meaningless to me, as I’ve explained many times. If you want to do the calculation, do it yourself. Or read Faustino’s reply to you at the bottom of this thread.

        You have demonstrated repeatedly you are not interested in anything but pushing your beliefs in solar PV, as you have revealed many times in the past, so I am not willing to play your games. You’ve got the answers you need. If you were interested, you’d go and research it instead of playing the games zealots play.

        All you need to know is that elites like you are bludging off the poor.

      • Hope you don’t mind me butting in PL and VP, but part of the issue seems to be that you’re focussing on different elements of the question.

        VP was looking at the individual installation perspective and PL at the overall system cost, which includes factoring in direct subsidies as well as large items such as transmission and distribution implications.

        And VP is right to ask for details on these, as these abatement cost figures are very dependent on the assumptions made. $600/ton is not written in stone. I’ve seen figures from as low as $90/ton to well over $600.

        But if you really want the full picture as PL is suggesting, it’s worth noting these abatement cost figures ignore any potential costs on the carbon fuel side stemming from carbon emissions’ impact on the climate system. Yes, this would involve many, many assumptions … but we’re already down that road.
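
        To see why the quoted figures swing from under $100 to over $600 per tonne, here is a minimal sketch of the usual levelised-cost comparison. All inputs are illustrative placeholders, not Palmer’s or Peter Lang’s numbers.

```python
# Rough CO2 abatement cost for rooftop PV: the extra cost of PV energy divided by
# the emissions avoided per unit of energy. All inputs are illustrative only.

def lcoe(capital_per_kw, capacity_factor, life_years, discount_rate, fixed_om_per_kw_yr=0.0):
    """Levelised cost of electricity, $/MWh, for a simple capital-plus-fixed-O&M plant."""
    crf = discount_rate * (1 + discount_rate) ** life_years / ((1 + discount_rate) ** life_years - 1)
    annual_cost = capital_per_kw * crf + fixed_om_per_kw_yr   # $/kW/yr
    annual_mwh_per_kw = capacity_factor * 8760 / 1000         # MWh/kW/yr
    return annual_cost / annual_mwh_per_kw

def abatement_cost(lcoe_pv, lcoe_displaced, tonnes_avoided_per_mwh):
    """Dollars per tonne of CO2 avoided."""
    return (lcoe_pv - lcoe_displaced) / tonnes_avoided_per_mwh

# Two sets of assumptions (both placeholders) bracket the kind of range seen in this thread:
optimistic = abatement_cost(lcoe(2500, 0.18, 25, 0.05), 60.0, 0.9)   # long life, high yield, coal-like avoided intensity
pessimistic = abatement_cost(lcoe(2500, 0.13, 15, 0.10), 60.0, 0.5)  # short life, low yield, gas-like avoided intensity

print(f"optimistic assumptions:  ~${optimistic:.0f}/t CO2")
print(f"pessimistic assumptions: ~${pessimistic:.0f}/t CO2")
```

        The point is only that operating life, capacity factor, discount rate and the assumed displaced generation each move the answer by large factors, which is exactly where the disputed assumptions in this thread come in.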

      • “You have demonstrated repeatedly you are not interested in anything but pushing your beliefs in solar PV,”

        ! ? !

        It’s really Lang’s “beliefs” that are in question here. I doubt that Vaughan is living in a dream state, given that he is the one that runs his homestead on solar power. The fact that Vaughan can produce power on his own is more a “reality” than a “belief”.

      • Peter Lang

        More loony left wingnuts chiming in with their ideologically motivated reasoning as well as their ignorance of the subject. Try some other sources. Here is an extract from an article by the ‘Environment Editor’ in today’s ‘Weekend Australian’:

        “We have no issue with solar; it’s just we have a situation where one of the most expensive forms of generation is forcing the cheapest form of generation offline,” Merritt says.

        Early solar rooftops are guaranteed 44c/kWh compared with retail electricity prices as low as 22c and the 4c/kWh paid to baseload generators.

        McArdle says by 2015-16 most Queenslanders will be paying $276 a year, or about 17 per cent of their annual power bill, to subsidise other people having solar power on domestic roofs.

        “But these are just the direct costs of the solar bonus scheme,” he says. “What is not included is the cost of upgrading the electricity network to cope with widespread power flowing back into the grid, which has resulted in voltage and other issues making the electricity grid in some areas highly unstable.

        “Solar power is being provided into the grid during the day when there is a surplus of power available. Queensland’s coal-fired baseload power stations cannot be switched on and off and need to run continuously.

        “There is no logic to this policy. It does little to reduce carbon, as its supporters claim, because the baseload power stations have to stay on … to meet demand at peak periods.”

        Government-owned power companies face revenue losses of several hundred million dollars from lower electricity sales and forced electricity purchases due to solar.

        The Electricity Supply Association of Australia says most solar households end up paying only a fraction of their fair share of the cost of maintaining the network.

        The cost of maintaining the electricity network is incorporated into electricity tariffs, which are mostly charged on the basis of how much electricity is consumed. Households with solar are paying less for the network because they generate some of their own electricity and import less from the grid.

        But the ESAA says solar households can be among the biggest users of the networks, because they import and export electricity at different times of the day.

        “The current arrangements are unfair and need to be changed,” an ESAA report on the issue says. “At the moment, low-income households who cannot afford solar, renters and people in apartments pay more to underwrite those customers who can install their own solar system.

        “We need to change the way we charge consumers for the cost of the networks to make sure everybody pays their fair share.

        … “

        http://www.theaustralian.com.au/national-affairs/rooftop-solar-panels-help-to-generate-problems-as-well-as-power/story-fn59niix-1226650182854

        Is this clear enough, yet?

      • Peter’s right – the big power generators don’t like solar.

        For all the reasons indicated … and as other recent research has shown, it’s been knocking a little off the top of their most profitable time – peak demand.

      • Chief Hydrologist

        ‘The lowest abatement costs can be found in Australia with both lots of sunshine and mostly coal fired power stations that are being replaced.’

        http://www.greenrhinoenergy.com/blog/?p=55

        The abatement cost is still $160/ton, Melbourne is not Australia, and coal plants cannot be shut down, merely run very inefficiently at 60% capacity.

      • Peter Lang

        Chief Hydrologist,

        Thanks for the link. Interesting find and would answer VP’s questions … if it was correct. But it is not. It is way off. It is the solar PV industry’s spin.

        Four wrong figures for Australia that I noticed after a very brief look at the first table are:

        1. Solar PV does not substitute for coal generation, or for the average emissions intensity of the grid. It substitutes for gas generation, i.e. the highly flexible generators that provide power during intermediate and peak demand. So the ‘avoided emissions factor’ is too high by 50% or more.

        2. The average energy yield is nowhere near 1800 kWh/kWp. This is a theoretical figure and not even close to what the roof top PV installations average. It’s also too high by probably 50% or more.

        3. The 10 year interest rate of 4.89% is not close to the discount rate used for analysing electricity generation technology options. The Australian government uses 10% in the most recent analyses.

        4. Average operating life of PV modules of 25 years is overstated, probably by about 100%.

        You can recognise the author’s bias from the first sentence in the article:

        By converting abundant sunlight directly into electricity without any fuels, photovoltaic modules (PV) are an ideal technology to reduce green house gas emissions. No wonder it has caught the attention of many governments that have made tackling climate change their declared priority.

      • Chief Hydrologist

        Peter – I deliberately used a solar industry site and pointed out a couple of problems with their assumptions that suggest a higher abatement cost for Melbourne. Even given the assumptions the cost of abatement is still very high – and pursuing this with public funds seems less than efficient.

    • > The rapid warming left the upper bound open, but gave a meaningful lower bound.

      What was the lower bound, again?

    • Peter Lang

      Pekka Pirila,

      You may have missed my reply here: http://judithcurry.com/2013/05/19/mainstreaming-ecs-2-c/#comment-324143

      • Peter,

        Do you think that your comment was written in a way that makes me likely to answer?

      • Peter Lang

        Pekka Pirila,

        Since you have avoided answering the question so many times, it is pretty obvious you won’t. But that has nothing to do with the way it was written, since it has been written perfectly clearly over six times. Clearly there are other reasons. We’ve come across those before, haven’t we?

    • David Young

      Pekka, I agree with your sentiments.

  47. The thing here is, we’re not seeing a failure to warm.

    We’re seeing a failure to count all the energy.

    Some of it goes into the oceans, so we decide we somehow magically get more time?

    More time for what?

    That energy shift into the deep oceans (if indeed that is what is happening, and is most of what is happening), isn’t a stable feature of the system, or at least mechanically it shouldn’t be.

    How would you build a conveyor that worked that way in a closed system, and make it reliable? You’d have emergency release valves, or shunts, or vents, to avoid explosive buildups or blockages or reversals, wouldn’t you?

    What would those structures mean in a naturally evolving global climate system? Would we get sudden deep sea bursts of gases released from solution? Harmonic twins of the ENSO equatorial phenomenon at mid-latitudes or in the Atlantic? A sudden halt to an ocean oscillation? A maelstrom?

    Warming is relatively benign, near the surface in an atmosphere. Gases expand and contract or change pressure or precipitate and almost always have a clear line to free up energy at night in the dark. Motion in contained fluid bodies is something entirely different.

    • The oceans give us time and they help even in the long run. They absorb a lot of energy and they take up a lot of CO2. This combination means that the maximum temperature reached will be more limited than simplistic extrapolations tell. It takes longer for the oceans to warm than it takes them to bring the atmospheric CO2 concentration down to levels low enough to be of little concern.

      While I’m writing that, I’m not claiming that there’s no reason to worry, only that it’s not as bad as simple extrapolation tells.

      • The Skeptical Warmist (aka R. Gates)

        Pekka said:

        “This combination means that the maximum temperature reached will be more limited than simplistic extrapolations tell.”
        ____

        Maybe or maybe not. We need to look at all the consequences of a warming ocean, including the heat that is being advected to the polar region, causing such a dramatic decrease in sea ice. There may be some big positive feedbacks being kicked in because of this rapid decline, and should we get an increasingly ice-free summer Arctic, these feedbacks could be amplified.

        I am a big advocate of looking at the best multi-source, multi-location collection of the paleoclimate data to get a real sense of what might happen in a world of 560 ppm of CO2. We can’t model all the feedbacks as the climate is too complex and chaotic, but the paleoclimate data contains all the feedbacks by default. The paleoclimate data strongly supports an ECS of at least 3C, and higher if you look at the ESS.

    • Bart
      – “CO2 warming in the ocean is confined mostly to the upper portion, especially in the surface layer near 60N and 50S. Through increases in precipitation, weakened westerly wind stress and reduced overturning, salinity amounts decrease at high latitudes of each hemisphere. Salinity also increases in the subtropics. The resultant warming and freshening of the high latitude ocean surface layer stabilizes the ocean and causes a weaker thermohaline circulation”
      From: Developments in Atmospheric Science 19, “Greenhouse Gas Induced Climate Change”, 1991. This was a result of a 1989 workshop in the US.

      The above quote from TonyB’s post elsewhere affirms the role of the far North Atlantic’s ocean-atmosphere interaction as the driver of the Northern Hemisphere’s climate change.
      In my research I have found that the CO2 component could not in any way measure up to or compare with the geological factors so prevalent in this area.
      http://www.vukcevic.talktalk.net/NaturalVariability.htm

      Many experts and non-experts alike may weep in lament of the doom brought by the CO2 ‘demon’, but as always nature will have the last word.

    • BartR, “That energy shift into the deep oceans (if indeed that is what is happening, and is most of what is happening), isn’t a stable feature of the system, or at least mechanically it shouldn’t be.”

      Of course it should be. A 1000:1 specific heat ratio with a 100+:1 liquid-to-gas heat transfer coefficient. The oceans are a highly efficient thermal reservoir.

      “How would you build a conveyor that worked that way in a closed system, and make it reliable? You’d have emergency release valves, or shunts, or vents, to avoid explosive buildups or blockages or reversals, wouldn’t you?”

      Yep, SSW events and deep convection are relief mechanisms. Glacial ice and snow are storage. Ocean surface area is a regulator. Tides, Coriolis effect and temperature differential are pumps. Salinity changes shunts flow to different depths.

      Add Toggweiler and this new guy Brian Rose to your reading list.

      http://www.atmos.washington.edu/~brose/

      Just as an example, the value of DWLR is equal to the “average” energy of the oceans. The “average” OLR is equal to the DWLR times the percentage of open ocean surface area. Based on a black body source and radiant “shell”, that is exactly what it should be. In other words, a water vapor greenhouse surrounded by a dry WMG greenhouse, where the area of the WVGH is less than that of the WMGH.

      Since the NH and SH have 180 degree out of phase seasonal oscillations, you will have a coupled oscillator with varying periods due to the layers of the system.

      http://en.wikipedia.org/wiki/File:Coupled_oscillators.gif

      • captdallas 0.8 or less | May 21, 2013 at 7:20 am |

        Wow.

        Three people in a row just not getting it.

        These changes you describe with handsweeping are not benign, not negligible, not long-term sustainable and not without massive negative global and regional consequence in short term and medium term.

        More precipitation? Just like that. Ho hum. It rains and snows more. There’s more storms, more frequent storms, more intense storms, or some unknowable combination? (Hard to imagine more precipitation and less, less frequent, less intense storms.) How could that possibly be bad for anyone or anything?

        Salinity changes happening at increasing rates, over larger areas, for longer durations, earlier or later into seasons? That affects thermohaline layers, and no doubt must accompany shifts in oxygen and nitrogen ion solution balance and pH (lower saline concentration meaning smaller acid-base changes cause greater pH shifts). How could that be bad for anyone or anything?

        If your energy isn’t just sitting around being heat, it’s doing some other mischief. More poorly studied mischief. More poorly documented mischief. Harder to categorize. Harder to anticipate. Harder to predict, or to predict the consequences of. More chaotic, in new states within the complex systems of the biome.

        It’s like the co-pilot telling the captain, “Great news! That fussy number four engine you were worried about because of the flashing lights on the display panel? Won’t trouble us any more. It just blew apart and fell off. I have no clue what’s happening with these flashing lights for the other three engines, but one less to worry about is a good thing.”

      • BartR, Yeah, you could look at it like that. All unknowns are bad. That is the basic premise behind linear no threshold modeling.

        To some the unknown is frightening to others exciting, but it is still just unknown. You stated, “That energy shift into the deep oceans (if indeed that is what is happening, and is most of what is happening), isn’t a stable feature of the system, or at least mechanically it shouldn’t be.”

        That is not an unknown, it is actually a stabilizing feature. When there is a larger control range the hunting is smoother and less violent. A tight control range is, ironically, more unstable. All of the complex feedbacks, amplifications, buffers and cycles make the system more stable.

        That is unknown to you; you assume it cannot exist, therefore you are frightened. That means you are not allowed to be the captain, yet.

        Until you can wrap your head around how naive your statement was, your input is useless. Go back with the cargo.

      • captdallas 0.8 or less | May 21, 2013 at 11:21 am |

        You seem to have ‘feature’ and ‘bug’, ‘stabilizing’ and ‘destabilizing’ confused. So what if the freaking temperature portion of one tiny part of the huge complex machinery is ‘stabilized’ by your ‘feature’?

        It’s at the cost of an unknowable number of unknowable types of destabilization.

        Screw frightening or exciting. I don’t do fear, and my excitement is not your business. Introducing new unknowns is costly.

        If you want to introduce new costly unknowns, that’s fine. We can negotiate that.

        But be prepared to pay me if you’re going to force it on me without consultation. And be prepared to pay through the nose.

        I want my money.

      • Bart, you seem to be ranting.

      • HR | May 21, 2013 at 1:12 pm |

        Seem?

  48. Alexej Buergin

    A new record at the Nenana Ice Classic, which is (now was?) supposed to be a “good proxy for climate change”.

  49. Well that is getting closer to Shaviv’s estimate of 1.3+/-0.4.
    http://www.sciencebits.com/OnClimateSensitivity

  50. David Springer

    Vaughan Pratt prattles:

    “So ironically it is the trace gases that put the planet at risk”

    Maybe we can agree that a trace gas causes hyperbolic statements to issue from people who should know better.

  51. David Springer

    Interesting factoids.

    The solar constant at earth’s average distance from the sun is 1366 Watts per square meter. Projecting this onto a spinning sphere divides the number by four, so each square meter of the top of the atmosphere gets an average of 342 Watts of power from the sun.

    The global ocean has an average temperature of about 4C (~3.9C in most estimates) top to bottom.

    The ocean is an approximate blackbody with an average albedo of 0.04, which is to say 96% of all incident light that falls upon it is absorbed and thermalized.

    Blackbodies emit what’s called a continuous spectrum where the peak frequency and power are determined by temperature.

    If we plug 4C into the blackbody calculator below we find that a blackbody at 4C emits 335 Watts per square meter.

    Is it coincidence that the radiant power of the ocean at its average temperature is almost precisely the power delivered by the sun to the top of the atmosphere? Doubtful.

    96% of top-of-atmosphere insolation is 0.96 × 342 ≈ 328 Watts per square meter.

    So the earth is illuminated by 341 Watts per square meter and the ocean’s average emittance power is 335 Watts per square meter.

    Here is the answer to why the ocean’s average temperature is a mere 4C. That’s as warm as it can possibly be on average simply because there is insufficient power delivered by the sun to make it any warmer than that.

    Everything else is simply due to stratification.

    http://www.spectralcalc.com/blackbody_calculator/blackbody.php
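
    For anyone who wants to check the arithmetic rather than take the calculator’s word for it, here is a minimal sketch of the numbers being compared (Stefan-Boltzmann emission at 4 C versus the spherically averaged solar constant). It only reproduces the figures in the comment; it takes no position on whether the near-coincidence is meaningful.

```python
# Compare blackbody emission at the ocean's mean temperature with the
# spherically averaged solar constant quoted above.

SIGMA = 5.670374e-8           # Stefan-Boltzmann constant, W m^-2 K^-4
T_OCEAN = 273.15 + 4.0        # mean ocean temperature, K
SOLAR_CONSTANT = 1366.0       # W m^-2 at 1 AU

emitted = SIGMA * T_OCEAN ** 4            # ~335 W m^-2
average_insolation = SOLAR_CONSTANT / 4   # ~342 W m^-2 (disc-to-sphere geometry)
absorbed = 0.96 * average_insolation      # ~328 W m^-2, using the 0.04 ocean albedo

print(f"Blackbody emission at 4 C:  {emitted:.0f} W/m^2")
print(f"Average TOA insolation:     {average_insolation:.0f} W/m^2")
print(f"96% of average insolation:  {absorbed:.0f} W/m^2")
```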

    • Very good David, but the oscillations? BartR, Webby and the rest of the resident geniuses can understand the oscillations.

      https://lh3.googleusercontent.com/-mN4qT42KMh4/UZtluJb7BXI/AAAAAAAAILg/vEflo0i6Cfk/s599/Bart%2527s%2520oscillation.png

      The NH and SH are seasonally 180 degrees out of phase. The SH has a mean of 17 C with a variance of 4 C. The NH has a mean of 20 C with a variance of 7 C. As long as there is no shift in the thermal equator, that is your signal in blue.

      What if the thermal equator shifts? Would that be real energy or an artifact of “seasonal” signal removal? Inquiring minds.

      • David Springer

        There are many oscillators with different frequencies, from diurnal to annual to centuries to millennia to a hundred thousand years, and those are just the ones we know about. These all feed into delay lines with uncountably many different delays, then they all beat against each other, creating harmonics off the fundamentals. We can pick out some of the biggest ones and make a little sense out of them. The phases of water moving albedo around between two high points (snow and clouds) through a low point (liquid surface water) serve to bound the whole mess with floor and ceiling temperatures, keeping the situation more or less suitable for life at all times in at least some if not most locations.

        Is there some specific question about oscillators in the climate system?

      • Springer, “is there a question about oscillations?”

        Yeah. Seasonal cycles change with the oscillations. What impact does removing a “seasonal” cycle based on an anomaly baseline period have on the average mean surface temperature calculations?

        The divergence between satellite and “surface” temperature data sets is increasing. One or the other has issues. It doesn’t look like the satellites.

  52. I keep checking in here to see if DocMartyn has a response to Myrrh’s May 21 3:44 am entry … especially the first 3 paragraphs.

  53. Pingback: Science in the dock: experts, climate change and evidence | Troy Media

  54. Is 2C the correct value for ECS?

    Much of the blogosphere, including this blog, has been dominated by assertions that the new Nature Geoscience paper estimates ECS to be 2C or very close. I will argue here that the paper shows the opposite – that ECS is very unlikely to be very close to 2C. (The same applies to claims that the actual figure should be 2.4C, 3C, or some other value I’ve suggested should be considered).

    To make my point, I’ll link to a comparable exercise in probability – the winner of the 2014 Super Bowl – 2014 Super Bowl Odds. As you can see, the article tells us that the New England Patriots are the “odds-on favorite”. Well, if they are favored to win the Super Bowl, how likely are they to win it? It turns out to be 1 chance in 8. The team most likely to win is quite unlikely to win.

    Back to the Nature Geoscience paper. It doesn’t show a standard pdf graph, but if it did, it would resemble the ones shown by Nic Lewis at WUWT –New Paper, with a peak at 2C, like Nic’s red curve. It should be clear that if one takes a narrow slice of the curve from slightly below to slightly above 2C, that will represent only a small probability value.

    I don’t want to overdo the point, but I would argue that the value of the curve peak, while important, is less critical than has been implied in these discussions. The 5%-95% confidence interval is at least as important, and perhaps more so, and it’s also worth noting the shape of the curve and the areas under it. To me, it looks like the probability ECS exceeds 2C is greater than the probability ECS is below 2C. In any case, I suspect many of the paper’s authors would tell us we should pay more attention to the range (1.2 – 3.9) than the peak value.
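
    The Super Bowl point can be made numerically. The sketch below uses an illustrative right-skewed (lognormal) pdf with its mode near 2C and a 5–95% range of roughly 1.3–4.0C; it is a stand-in, not the paper’s actual distribution, chosen only to show how little probability sits in a narrow slice around the peak.

```python
import numpy as np
from scipy.stats import lognorm

# Illustrative stand-in for an ECS distribution: a lognormal with its mode at 2.0 C.
# The shape parameter is assumed; this is not the Nature Geoscience paper's pdf.
s = 0.35
scale = 2.0 * np.exp(s ** 2)     # places the mode of the lognormal at 2.0 C
ecs = lognorm(s, scale=scale)

print("5-95% range (C):", ecs.ppf([0.05, 0.95]).round(2))
print("P(1.9 < ECS < 2.1):", round(ecs.cdf(2.1) - ecs.cdf(1.9), 2))
print("P(ECS > 2):", round(ecs.sf(2.0), 2))
print("P(ECS < 2):", round(ecs.cdf(2.0), 2))
```

    On this stand-in, only about a tenth of the probability lies within 0.1C of the peak, while well over half lies above 2C, which is the shape of the argument above.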

    • Fred,

      In the spirit of Bayesian inference, this kind of analysis produces the conditional probability that must be joined with the prior pdf to make a posterior pdf. The result of the analysis is not a pdf and it need not be normalizable to one. Some earlier analyses did, indeed, produce only a lower limit and had a finite conditional probability for arbitrarily large values of ECS.

      For this reason the analysis cannot tell where the expectation value or median is; the peak value corresponds to the point of highest conditional probability.

      • Pekka – I agree completely that these are not posterior pdfs. The choice of priors is complicated and contentious, as you know, and I thought it would dilute my main point to get into discussing it. My own personal bias is that reasonable priors would probably make the lower bound higher than 1.2C, which is a “no-feedback” sensitivity value, but that’s a discussion for a different time, and there’s plenty of room for disagreement about that choice.

      • Fred,

        I would think that the question of choosing the prior is less critical as long as we are discussing climate sensitivities less than 3, but even in this range the issue is significant enough to make estimating median values suspect. For higher climate sensitivities my preference is for priors that lead to a non-divergent distribution for the feedback parameter. That would mean that the prior distribution for sensitivity should fall as 1/S^2 or faster at high values. This would effectively cut off the high tail.

        As I stated, I don’t know of any arguments that would similarly affect the low end of the possible values, as long as we don’t go well below 1.0.
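
        Here is a minimal numerical sketch of that point: a made-up, skewed likelihood peaking near 2C is combined with a uniform-in-S prior and with a prior falling as 1/S^2. The curves and numbers are illustrative assumptions, not anyone’s published results.

```python
import numpy as np

# Grid of climate sensitivity values, C per doubling of CO2.
S = np.linspace(0.5, 10.0, 2000)
dS = S[1] - S[0]

# Made-up likelihood: skewed, peaking near 2 C, with a fat high tail
# (a stand-in for an energy-budget constraint, not the paper's curve).
likelihood = np.exp(-0.5 * ((np.log(S) - np.log(2.2)) / 0.45) ** 2)

def posterior(prior):
    """Combine likelihood and prior, then normalise to a proper pdf on the grid."""
    p = likelihood * prior
    return p / (p.sum() * dS)

post_uniform = posterior(np.ones_like(S))   # uniform-in-S prior
post_inv_sq = posterior(1.0 / S ** 2)       # prior falling as 1/S^2

for name, post in [("uniform prior", post_uniform), ("1/S^2 prior", post_inv_sq)]:
    cdf = np.cumsum(post) * dS
    median = S[np.searchsorted(cdf, 0.5)]
    p95 = S[np.searchsorted(cdf, 0.95)]
    print(f"{name}: median ~{median:.1f} C, 95th percentile ~{p95:.1f} C")
```

        The numbers are arbitrary; the only point is that the high tail, and with it the median and 95th percentile, depends strongly on whether the prior falls off like 1/S^2.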

    • David Springer

      Your point is pointless. Your argument that it’s extremely unlikely that ECS is 2C is like saying it’s extremely unlikely that anyone’s wristwatch has the correct time on it. Technically true, but most of them are very close to the correct time. Similarly, a stopped clock is correct more often than Fred Moolten.

  55. ECS estimates are built from temperature estimates.

    As such, we know from BEST that temperature estimates are fairly shoddy, so if we’re really rigorous we’ll want to speculate on correction factors from what we do know in the data.

    Taking BEST as the most nearly accurate land-only temperature record, simple graphical reasoning allows us to do the following:

    http://www.woodfortrees.org/plot/best/from:1850/trend/plot/crutem4vgl/trend/detrend:-0.086/offset:0.07/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:101/mean:103/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:95/mean:97/last:6/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:89/mean:91/last:6/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:83/mean:85/last:6/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:77/mean:79/last:6/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:71/mean:73/last:6/plot/none

    Fitting CRUTEM4 land-only temperature to BEST by detrending and offsetting (by eye), we see that an adjustment of 8.6% (or so) on the slope and 0.07C on the amplitude of the least squares fit produces a fair match.

    Applying the same correction to the land-and-ocean, more up-to-date (but less accurate) HadCRUT4, we arrive at something akin to a prediction of what BEST will produce, should they ever release a land-sea GMT.

    Base your ECS on that.

    As a side note, looking in detail at the time since 1998 — the hottest year on record, by far, up to that time:

    http://www.woodfortrees.org/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:101/mean:103/last:84/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:83/mean:85/last:18/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:65/mean:67/last:18/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:47/mean:49/last:18/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:29/mean:31/last:18/plot/hadcrut4gl/detrend:-0.086/offset:0.07/mean:11/mean:13/last:18/plot/hadcrut4gl/detrend:-0.086/offset:0.07/last:24/trend

    We see that the entire basis for claims of a flat or negative trend since the turn of the millennium is two years’ weather, in particular the weather of 2008 and 2012. Removing those two years from the trend experimentally leads to quite different outcomes. And 2008 and 2012? Two of the ten hottest years on record, aren’t they?
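
    Rather than detrending and offsetting ‘by eye’, the same adjustment can be estimated by least squares. The sketch below uses synthetic stand-ins for the two series (hypothetical names; no particular download is implied), so the printed numbers are not the 8.6% and 0.07C quoted above.

```python
import numpy as np

def fit_scale_offset(reference, target):
    """Least-squares scale and offset such that reference ~= scale * target + offset.

    Both inputs are 1-D anomaly series on a common time axis.
    """
    A = np.column_stack([target, np.ones_like(target)])
    (scale, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return scale, offset

# Synthetic monthly series standing in for a BEST-like and a CRUTEM4-like record.
rng = np.random.default_rng(0)
t = np.arange(1850, 2013, 1 / 12)
best_like = 0.008 * (t - 1850) + 0.05 * rng.standard_normal(t.size)
crutem_like = 0.92 * best_like + 0.07 + 0.05 * rng.standard_normal(t.size)

scale, offset = fit_scale_offset(best_like, crutem_like)
print(f"slope adjustment ~{(scale - 1) * 100:+.1f}%, offset ~{offset:+.2f} C")
```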

    • BartR, the difference between ECS and TCR is the change in heat capacity.

      As the change in heat capacity approaches zero, TCR and ECS become the same, so the measure of ECS is stored energy, not temperature. TCR estimates are made from temperatures.

      If you want to get a rough estimate of the change in stored energy you can look at the ARGO data (too short to be of much good) or compare rates of change in TCR by layers, regions, etc.

      If you compare the rate of change of the land-only surface temperature with the rate of change of the lower stratosphere temperatures, you should see some difference in the TCR estimates. Basically, different layers with different specific heat capacities charge at different rates. By the same token you could compare land to ocean rates.

      Since WMGHGs respond quickly in the atmosphere, they provide a “fast response” reference (TCR), and the higher the density, the more the response is related to ECS.

      https://lh4.googleusercontent.com/-FhXqwgJLtu4/UZuw5MyQtPI/AAAAAAAAIMo/pw9esu6QYtE/s979/BartR%2527s%2520Warming.png

      That compares UAH Land by Hemisphere with GISS Land only. The trends are close, but the difference changes. That indicates a change in the rate of heat uptake between the two “layers”. There is of course uncertainty in both data sets, but the hemisphere trends are pretty close to the same even though there is a large difference in the regional anomaly trends.

      That gives you a rough idea of the “sensitivity” related to changes in atmospheric factors. Remember, CO2 is a well mixed gas which should have a common impact. The hemispheres have all those internal energy shifts that create the noise.

      You can include more atmospheric layers and use absolute temperatures and get even sexier estimates. Like that the current rate of ocean heat uptake is slowing to less than 0.3 Wm-2, indicating an approach to a pseudo-equilibrium condition. That would be the “pause”, which has a greater impact on ECS than TCR.

      Now, had the older energy budget data not been totally FUBAR, most of the higher-end “sensitivity” estimates would never have existed. Climate science suffers from piss-poor record keeping and code sharing, both of which are needed so people can check for mistakes.

      • .. and this is addressed to me, why?

      • Because you are having a Girma moment.

        “Base your ECS on that.”

        ECS is about rates of change, the delta F minus delta Q, not linear trends, no matter how much they are massaged. Rates of change imply that the variance or standard deviation of the data is a fairly important factor.
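
        Since ‘delta F minus delta Q’ is doing a lot of work here, a minimal sketch of the standard energy-budget estimators (the same general form used in the Otto et al. paper) may help. The inputs are round illustrative numbers, not the paper’s values.

```python
# Standard energy-budget estimators:
#   TCR ~ F_2x * dT / dF
#   ECS ~ F_2x * dT / (dF - dQ)
# Round illustrative inputs only; these are not the paper's numbers.

F_2X = 3.44   # forcing from a doubling of CO2, W/m^2
dT = 0.75     # change in global mean surface temperature, K
dF = 1.95     # change in total forcing, W/m^2
dQ = 0.65     # change in the Earth's heat uptake (mostly ocean), W/m^2

tcr = F_2X * dT / dF          # transient response: heat uptake still ongoing
ecs = F_2X * dT / (dF - dQ)   # equilibrium response: uptake term removed

print(f"TCR ~ {tcr:.1f} C, ECS ~ {ecs:.1f} C")
```

        Note how ECS, unlike TCR, depends directly on the heat-uptake term dQ: the larger the fraction of the forcing still going into the ocean, the bigger the gap between the two estimates.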

        https://lh6.googleusercontent.com/-YT1RNGjeMlY/UZwY_runNJI/AAAAAAAAINU/VLJKMM6cM2M/s843/BartR%2527s%2520Warming%2520stdev.png

        That compares the 132 month standard deviation of the BEST hemisphere Tmax and Tmin with UAH hemispheres. That is a pretty significant difference in standard deviation.

        As I have said before, GISS, HADCRU and BEST did a fine job, but the accuracy is still not up to the task. They are likely not up to the task because there is a significant decrease in temperature with increasing latitude, with wicked seasonal cycles that are not in the same phase. In fact, the Antarctic is often out of phase with the water surrounding it.

        Satellites, despite their issues, are better designed for that “global” averaging.

      • captdallas 0.8 or less | May 21, 2013 at 9:17 pm |

        Ahhhhh.

        Artificially low sigma doesn’t make bad data better.

        And I’d hoped you might be talking about something that hadn’t been rehashed a hundred times before.

        Fix the broken satellite system. Heck, put up enough satellites that you can actually tell how invalid the data is. And get 300 more years of satellite data.

        Then by all means, be my guest and use satellite data.

      • BartR, “Fix the broken satellite system. Heck, put up enough satellites that you can actually tell how invalid the data is. And get 300 more years of satellite data. ”

        Spoken like a true believer.

        The problem appears to be in the surface temperature methodology. The data is baseline dependent due to the removal of the seasonal cycle. You can’t compare the surface to the satellites without using a common baseline, which is the satellite era. This is the same problem with much of the paleo reconstructions: the choice of baseline impacts the final results.
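
        Since the re-baselining step itself is trivial, here is a minimal Python sketch. The series are made-up stand-ins for a surface record and a satellite record published on different base periods; the only step that matters is subtracting each series’ own mean over the same reference window before comparing them.

        import numpy as np
        import pandas as pd

        # Made-up monthly anomaly series standing in for a surface record and a
        # satellite record that were published on different base periods.
        dates = pd.date_range("1979-01", periods=420, freq="MS")
        rng = np.random.default_rng(1)
        surface = pd.Series(rng.normal(0.4, 0.2, 420), index=dates)    # e.g. 1951-1980 base
        satellite = pd.Series(rng.normal(0.0, 0.2, 420), index=dates)  # e.g. 1981-2010 base

        def rebaseline(series, start="1981", end="2010"):
            """Express anomalies relative to a common reference period."""
            return series - series.loc[start:end].mean()

        surface_c = rebaseline(surface)
        satellite_c = rebaseline(satellite)

        # Only after this step is a direct surface-vs-satellite comparison meaningful.
        print(round((surface_c - satellite_c).abs().mean(), 3))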

        You can ignore the satellite data, like Trenberth elected to do, or chop the divergent paleo data like a few have elected to do, but you just end up fooling yourself.

        Normally, a tenth or two of a degree of error would not be that big of a deal, but with the apparent “sensitivity” dropping, it is becoming a big deal. That is the problem with chaotic “oscillations”: they can be real or the product of poor procedures.

        Since you can’t understand how minor miscalculations can “snowball”, perhaps you have a few in your revenue-neutral master planning.

      • captdallas 0.8 or less | May 23, 2013 at 9:06 pm |

        My apologies. I’d forgotten you were Dunning-Krugerized to the eyeballs where statistics and trendology are concerned.

        Your graphs attempt to compare apples and oranges, at least three different ways.

        You apply the concept of standard deviation incorrectly.

        You apply the concept of baseline imprecisely. While in a general way some of what you say about baseline has technical foundation, you’ve misapplied the technique.

        I really don’t have the time or energy to fix what’s wrong with your approach; I don’t care to repeat the Girma fiasco, and I’m loathe to sound like Brandon by saying, “you make no sense.”

        So let me leave it with this: please refer me to whatever textbook you think you learned statistics from, so I can obtain a copy and beat the authors about the head with it.

      • BartR, “You apply the concept of standard deviation incorrectly.”

        Yes, technically it is incorrect, but it is a shortcut. It is just part of “looking” at the data first.

        “You apply the concept of baseline imprecisely.” That would be incorrect. I try a number of baselines to see if the results are “robust”. Marcott et al. is just one of many examples where baseline choice and incomparable smoothing/binning produce less than robust results.

        The reason I am interested in the impacts of baseline selection, and use standard deviation as a shortcut, is that there are cumulative impacts and longer-term seasonal shifts.

        http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?ctlfile=monoiv2.ctl&varlist=on&new_window=on&ptype=ts&dir=

        Compare 60S to 50S with 50N to 60N. Then compare 15S to the equator with the equator to 15N. Toggweiler and others have stressed the impact of shifts in the thermal equator, or ITCZ. Since the two hemispheres have out-of-phase seasonal oscillations of different magnitudes that can shift, baseline dependence will produce errors at some point in the time series. Standard deviation changes, or changes in variance if you don’t want the error scaled, just give you a place to start looking.

        You can look at the first differences of GISS LOTI SH and HADCRU SH and easily see that the different methods produce different errors. BEST also has a shift in sd/variance at the same period, indicating there is a shift in the system, likely in the Antarctic.
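
        The first-difference check is easy to reproduce. A rough Python sketch, again with synthetic stand-ins for the GISS LOTI SH and HADCRU SH series: take the month-to-month differences of each record, then a rolling standard deviation of those differences, and any step change in the noise level shows up immediately.

        import numpy as np
        import pandas as pd

        # Synthetic stand-ins; swap in the real GISS LOTI SH and HADCRU SH series.
        dates = pd.date_range("1880-01", periods=1600, freq="MS")
        rng = np.random.default_rng(2)
        giss_sh = pd.Series(rng.normal(0, 0.12, 1600), index=dates)
        hadcru_sh = pd.Series(rng.normal(0, 0.15, 1600), index=dates)

        def diff_noise(series, window=132):
            """Rolling standard deviation of month-to-month first differences."""
            return series.diff().rolling(window).std()

        # A step or sustained offset between these two curves is the kind of
        # method-dependent shift in sd/variance described above.
        noise = pd.DataFrame({"giss": diff_noise(giss_sh), "hadcru": diff_noise(hadcru_sh)})
        print(noise.dropna().describe())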

        As for the satellite data you are convinced is wrong, I have compared it with steric sea level rise, solar variation, CO2 fluctuations, SST and just about anything else I can find, and it is really impressive. In fact, the correlation between solar, steric sea level and stratospheric temperatures is remarkable. Pity there are two volcanoes at the beginning of the series.

        https://lh3.googleusercontent.com/-bRNTLuksgKY/UaQdK91LXvI/AAAAAAAAIUg/dyxgbGEuk_g/s691/Solar%2520and%2520the%2520tropical%2520stratosphere.png

        At least it is remarkable to a statistical novice like myself. I am sure to you it is just noise.

      • Cappy shows absolutely no facility for deeper physics and botches any math or stats that he touches. Like Bart R says, he applies standard deviation incorrectly, something I have pointed out as well.

        Cappy is worse than Girma in that at least with Girma, you can see his equations. With Cappy, it is non-stop Gish Gallop with the scientific gibberish.

      • Webster, “Cappy shows absolutely no facility for deeper physics and botches any math or stats that he touches”

        You need to get a handle on the simple realities. 1) Land is not uniformly distributed across the Globe. 2) Ocean Heat transport varies between hemispheres because of 1).

        You are assuming 1) and 2) have no significant impact on climate “sensitivity”. That appears to be an incorrect assumption.

        You have to get over that problem before you can see why your methods are not compatible with the problem. You have to define a “surface” that is reasonably stable or include adjustments for the asymmetry.

        https://lh6.googleusercontent.com/-YT1RNGjeMlY/UZwY_runNJI/AAAAAAAAINU/VLJKMM6cM2M/s843/BartR%2527s%2520Warming%2520stdev.png

        That simple comparison of the possible “surfaces” you can use illustrates that the lower troposphere temperature anomaly is a more stable reference than the “surface” temperature anomaly. It would be more stable because energy is more uniformly distributed at that “surface”. You could use longer or shorter periods, but since the satellite data is short, I selected 132 months, 1/3 of the satellite series length and roughly a solar Pseudo-cycle.
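
        The 132-month calculation itself is nothing exotic. A Python sketch with synthetic stand-ins for the two candidate “surfaces”; swap in real surface and lower-troposphere anomaly series, on a common baseline, to reproduce the comparison.

        import numpy as np
        import pandas as pd

        # Synthetic stand-ins for a surface anomaly record and a lower-troposphere
        # anomaly record on a common baseline.
        dates = pd.date_range("1979-01", periods=400, freq="MS")
        rng = np.random.default_rng(3)
        sfc = pd.Series(rng.normal(0, 0.20, 400), index=dates)
        lt = pd.Series(rng.normal(0, 0.12, 400), index=dates)

        # 132 months is eleven years: roughly a third of the satellite era and
        # close to one solar cycle. The flatter rolling-sigma curve marks the
        # steadier reference "surface".
        window = 132
        sigma = pd.DataFrame({"surface": sfc.rolling(window).std(),
                              "lower_trop": lt.rolling(window).std()})
        print(sigma.dropna().mean())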

        Both you and BartR prefer to ignore the differences between the temperature responses of the different “surfaces” and select the ones that favor your intentions instead of exploring the differences. If the two of you believe that a simple comparison of sequential 132 month standard deviations is a fatal s