Ludecke et al. respond: Part II

by Horst-Joachim Lüdecke, Rainer Link, Friedrich-Karl Ewert

It is nearly impossible to answer the more than 1000 comments that the previous three threads [here, here and here] on the surface temperature data records have generated. Most technical comments and objections would settle themselves if the commentators understood the applied methods better. The large number of comments demonstrates the interest in the BEST and the LU, LL publications.

We respond here to comments and questions that we regard as relevant or interesting, using the same abbreviations as in  our original guest post.

What is meant by ‘natural’ in an autocorrelated (persistent) time series?

The following example using a die is intended for inexperienced readers, not for experts. Imagine an “autocorrelated” die. If you throw it and, say, the 5 comes up, a mechanism within the die moves a balance weight to the face opposite the 5 (the 2). Consequently, the probability of throwing the 5 again rises. Nevertheless, after repeated throws a new number will eventually come up, say the 1. The mechanism then takes the weight away from the 2 and adds it to the face opposite the 1, i.e. the 6. This procedure holds for all six numbers: as long as the same number comes up, the weight remains in place; it slides to the new position whenever a new number arises.

Remarkably, the autocorrelated die is fair, because in the very long run each number still arises with the same probability of 1/6. The only difference from an ordinary die is that “unusual looking” runs of the same number appear. Note that the autocorrelated die is not driven by an external trend. Therefore, one may designate the records generated by an ordinary die and by an autocorrelated one alike as purely stochastic, or ‘natural’. The persistence of the die increases with the size of the balance weight.
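For readers who like to experiment, the die can be simulated in a few lines. This is purely an illustration of the analogy; the weighting scheme (a fixed extra weight on the last face) is our own choice and appears nowhere in the papers:

```python
import random

def roll_autocorrelated_die(n_rolls, extra_weight=2.0, seed=0):
    """Simulate the 'autocorrelated' die: the face that last came up
    carries the balance weight and is therefore more likely to repeat."""
    rng = random.Random(seed)
    last = None
    rolls = []
    for _ in range(n_rolls):
        weights = [1.0] * 6
        if last is not None:
            weights[last] += extra_weight  # persistence from the balance weight
        face = rng.choices(range(6), weights=weights)[0]
        rolls.append(face + 1)
        last = face
    return rolls

rolls = roll_autocorrelated_die(600_000)
# Fairness: every face still appears ~1/6 of the time in the long run.
frequencies = [rolls.count(f) / len(rolls) for f in range(1, 7)]
# Persistence: repeats are more frequent than the 1/6 of an ordinary die.
repeat_rate = sum(a == b for a, b in zip(rolls, rolls[1:])) / (len(rolls) - 1)
```

With `extra_weight=2.0` the chance of repeating the previous face is 3/8 instead of 1/6; increasing the weight increases the persistence, exactly as described above, while the long-run face frequencies stay uniform.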

A game with an autocorrelated die is an extremely simplified analogy of an autocorrelated temperature series without external trends. Such series exhibit unusual-looking long ‘natural’ trends. Even purely autocorrelated synthetic records are denoted ‘natural’, which sounds a little strange without knowledge of the causes. In contrast, an external trend changes the character of a record into ‘unnatural’. If one person in a group of players is cheating, an external trend in that person’s record becomes apparent in the long run.

What is the strategy to distinguish between ‘naturalness’ and ‘external trend’ of a temperature record?

Assume that a real (observed) temperature record shows a warming trend. Whether this trend is caused by autocorrelation or by an external trend cannot be answered without major effort. The detrended fluctuation analysis (DFA) assumes that the real temperature record consists of a ‘natural’ component and, if one exists at all, an external trend. Importantly, the DFA yields the Hurst exponent of the natural part of the real record. The value of the Hurst exponent corresponds to the strength of the persistence.
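For concreteness, here is a minimal sketch of first-order DFA (our own illustrative implementation, not the code used in LU/LL): integrate the anomalies into a profile, detrend the profile linearly in windows of size s, and read the Hurst exponent off the power law F(s) ∝ s^H.

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Estimate the Hurst exponent by first-order detrended fluctuation
    analysis: integrate the anomalies into a profile, remove a linear fit
    from each non-overlapping window of size s, and fit the power law
    F(s) ~ s^H on a log-log scale."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        windows = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        mean_sq = []
        for w in windows:
            trend = np.polyval(np.polyfit(t, w, 1), t)
            mean_sq.append(np.mean((w - trend) ** 2))
        flucts.append(np.sqrt(np.mean(mean_sq)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# White noise has no persistence, so DFA should give H close to 0.5
h_white = dfa_hurst(np.random.default_rng(42).standard_normal(4096))
```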

In the next step, one generates a large quantity of synthetic records with the same fixed Hurst exponent as the real record and checks how many of them show the same or greater warming than the real record. If the number is very small, one concludes that the warming of the real record is too pronounced to still be ‘natural’; consequently, the warming of the real record is classified as ‘external’. Otherwise, one assumes that it is ‘natural’. In other words, the question to be answered is the following: how probable is it to see this amount of warming in the record’s ‘natural’ component alone?
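The counting step can be sketched as follows. The synthetic records here come from a simple spectral filter (power spectrum ∝ 1/f^β with β = 2H − 1), which is a cruder generator than the one used in LL, and all numbers are purely illustrative:

```python
import numpy as np

def synthetic_record(n, hurst, rng):
    """One synthetic 'natural' record with (approximately) the prescribed
    Hurst exponent, generated by 1/f^beta spectral filtering of white
    noise (beta = 2H - 1); normalized to zero mean and unit variance."""
    beta = 2.0 * hurst - 1.0
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                      # avoid dividing by zero at f = 0
    amp = freqs ** (-beta / 2.0)
    spectrum = amp * (rng.standard_normal(freqs.size)
                      + 1j * rng.standard_normal(freqs.size))
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()

def exceedance_probability(observed_trend, n, hurst, n_sim=1000, seed=1):
    """Fraction of synthetic records whose total linear rise (in standard
    deviations over the whole record) is at least the observed one; a very
    small value suggests the observed warming is not 'natural'."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    hits = 0
    for _ in range(n_sim):
        slope = np.polyfit(t, synthetic_record(n, hurst, rng), 1)[0]
        hits += slope * n >= observed_trend
    return hits / n_sim

# e.g.: how often does purely 'natural' H = 0.6 noise warm by at least
# one standard deviation over 1200 months?
p = exceedance_probability(observed_trend=1.0, n=1200, hurst=0.6)
```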

Could a ‘natural’ fluctuation become an external trend?

Nature does not distinguish between “external trends” and events that increase the persistence of a record. However, the two can be confused by considering a time scale that is too short. In instrumental records of roughly 200 years, the detection of persistence is restricted to a maximum of about 50 years by the numerics of the DFA. A natural fluctuation with a period that is long compared with 50 years (for example the variation of the Sun’s magnetic field) is therefore automatically removed by the DFA as if it were an external trend.

Does one get different Hurst exponents by using daily, monthly or annual means for the DFA?

In principle, the Hurst exponent of a record is the same whether one uses mean temperatures of days, months, or years. This follows from the nature of the power function and is basic maths. Monthly means are the most suitable for temperature analyses, because the extreme short-term persistence of daily means could disturb the numerical evaluation of the Hurst exponent; moreover, one is not interested in such extremely short persistence. Yearly means are not feasible for instrumental records because the DFA requires a minimum of about 600 data points. If one uses monthly means, the seasonal influences must be eliminated: seasonal oscillations are regular and, without removing them, would feign a spurious Hurst exponent.
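Removing the seasonal cycle from monthly means amounts to subtracting from every value the long-term mean of its calendar month. A minimal sketch (our illustration, not the exact preprocessing of LU/LL):

```python
import numpy as np

def seasonal_anomalies(monthly_means):
    """Return monthly anomalies: each value minus the long-term mean of
    its calendar month. This removes the regular seasonal oscillation
    that would otherwise distort the Hurst exponent found by DFA."""
    x = np.asarray(monthly_means, dtype=float)
    anomalies = x.copy()
    for month in range(12):        # stride 12 groups values of one calendar month
        anomalies[month::12] -= x[month::12].mean()
    return anomalies

# A purely seasonal series is reduced to zero anomalies:
cycle = np.tile([-1.0, 0.5, 4.2, 8.9, 13.4, 16.5,
                 18.2, 17.6, 13.9, 9.0, 3.7, 0.2], 60)
flat = seasonal_anomalies(cycle)
```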

Is energy conservation violated by DFA?

The Hurst exponent of monofractal time series ranges from 0.5 (completely random) to values > 1. For values > 1 a record becomes non-stationary and, therefore, could drift away and never come back. Consequently, for Hurst exponents > 1 energy conservation could be violated in the long run. However, for values between 0.5 and 1.0 no such problem arises. All of our observed temperature records have Hurst exponents below 1.

Do conflicts exist between the outcomes of LU and LL?

Yes. LU shows that, except for Vienna (1791-1890), all five long-term instrumental records (Hohenpeissenberg, Paris, Vienna, Prague and Munich) have 100-year fluctuations that reveal external trends. On the other hand, most records of the 20th century analyzed in LL show only natural 100-year fluctuations. Evidently, the five instrumental stations in LU belong to the group of records worldwide that are determined by external trends.

The results of LU and LL in a nutshell  

LU evaluates 100-year fluctuations only – over a period of 2000 years. LU finds that these fluctuations (∆/σ)i exceed the value of 2 very often and sometimes even 4. ∆ is the rise of the backward 100-year linear regression line through the temperatures Ti-99, …, Ti; σ is the standard deviation of the temperatures around the line; and i counts the years. See the left panel of Figure 6 in LU. In contrast, for the instrumental records the value of 2 is roughly the upper limit; see Figure 3 of LU. As a consequence, the 100-year fluctuation of the 20th century, compared with the climate history of the past 2000 years, has to be assessed as quite normal. Under the restrictions of LU – only 100-year temperature fluctuations, the use of proxies, and only five long-term instrumental records – the results contradict an unusual forcing in the 20th century, a rather modest conclusion. Unfortunately, no further instrumental records reaching back more than 230 years are available.
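The quantity (∆/σ)i can be computed directly from its definition; a short sketch of our reading of that definition (illustrative code, not the original analysis script):

```python
import numpy as np

def fluctuation_ratio(temps, window=100):
    """For each year i, fit a linear regression through the previous
    `window` annual values T[i-99..i]; Delta is the rise of the line over
    the window and sigma the standard deviation of the residuals around
    it. Returns the series of Delta/sigma ratios."""
    temps = np.asarray(temps, dtype=float)
    t = np.arange(window)
    ratios = []
    for i in range(window, len(temps) + 1):
        segment = temps[i - window:i]
        slope, intercept = np.polyfit(t, segment, 1)
        delta = slope * (window - 1)                    # rise of the line
        sigma = np.std(segment - (slope * t + intercept))
        ratios.append(delta / sigma)
    return np.array(ratios)

# Example: a 0.02 deg/yr trend plus noise yields ratios well above zero
rng = np.random.default_rng(0)
demo = 0.02 * np.arange(300) + rng.normal(0.0, 0.5, 300)
ratios = fluctuation_ratio(demo)
```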

LL follows a different strategy. Here a large quantity of 100-year records and two 50-year records are analyzed under the question “how natural are their courses?”. With the advanced method of [3] and [4], LL finds naturalness at a high percentage. The most important results of LL are that the UHI effect and a stronger warming with station elevation are verified. To our knowledge the latter phenomenon was never reported in the literature before, except for a short remark about the strong warming of three mountain stations (Zugspitze – Germany, Säntis – Switzerland and Sonnblick – Austria). LL evaluates the probabilities for the ‘naturalness’ of the 20th-century warming and does not at all exclude additional warming by anthropogenic CO2. However, the applied method cannot attribute this additional part to CO2 as opposed to many other possible forcings. The results of LL are roughly compatible with the known climate sensitivity of CO2 without feedback of about 1 °C. The same was concluded in an earlier paper of Link and Lüdecke, not discussed here. However, LL cannot find any indication of a feedback, i.e. of a climate sensitivity of CO2 higher than the assumed 1 °C.

What do we judge as the strongest shortcoming of LL and all the papers about “global” records?

Clearly the extremely unequal station density over the Earth. We did not make any attempt to correct for this and are quite aware of this deficit of the available local stations. Further, we argue that global records are not a feasible tool to examine global temperatures: the excessively high Hurst exponents of global records and their too-small standard deviations make them inapplicable for DFA. Until now we have no idea how to solve the problem of the extremely unequal station density over the Earth.

Why did we not apply general error estimates?

We applied error bars only in Figure 10 of LL, because there the usual frequency error estimation could be applied. Further, we noted an error of ± 0.2 for the evaluation of the Hurst exponents (LU, page 13, last section). However, we did not make error estimates for the further analyses, for the following reason: the errors – statistical, systematic, and those caused by the changing environmental conditions of the screens at the individual stations – are hopelessly unknown and scatter widely. As a consequence, a correct error evaluation would far exceed the scope of our papers. Looking at the literature on temperature analyses with DFA, one hardly finds papers that attempt appropriate error estimates.

JC comment:  This post is a response to the comments on the previous three threads.  I have edited the post slightly for clarity. Guest posts reflect the opinions only of the guest posters.  The presence of a guest post at Climate Etc. reflects my judgement that the topic is of interest to the participants here, and does not imply any endorsement by me of the analyses/ideas presented in the post.

Moderation note:  this is a technical thread and comments will be moderated for relevance.

239 responses to “Ludecke et al. respond: Part II”

  1. Since there are signs that the current burst of the global warming is either slowing down or even stalling, a lot of attention is directed to the existing natural variability factors.
    The Atlantic Multidecadal Oscillation, better known as the AMO, has frequently been presented as some ‘mystifying’ natural force driving the sea surface temperatures (SST) of the North Atlantic.
    New research shows that the AMO is simply a result of the thermo-haline circulation powered by the North Atlantic Subpolar Gyre driving the Atlantic – Arctic exchange.
    Put in the most simplistic terms: the AMO is a delayed response (with R^2 = 0.74 ) to the semi-permanent low atmospheric pressure system over Iceland (measured at Reykjavik / Stykkisholmur) as graphically shown here:
    http://www.vukcevic.talktalk.net/theAMO.htm
    including the link to the relevant pre-print paper (currently in ‘document technical moderation’ at the CCSd / HAL science archive, any constructive comments will be considered for the final paper version.)

    • Signs that “the current burst of the global warming is either slowing down or even stalling” are like empirical facts added to emotional certainty.

      Unwillingness to admit the emotional responses that all humans have to empirical facts (and their uncertainties) may be the key to understanding reluctance to accept our powerlessness over empirical evidence of the evolution of Life, Earth’s Climate, the Sun, and the Cosmos [1].

      1. “Origin and Evolution of Life Constraints on the Solar Model”,
      Journal of Modern Physics 2, 587-594 (2011)
      http://dl.dropbox.com/u/10640850/JMP20112600007_31445079.pdf

      • ” this is a technical thread and comments will be moderated for relevance.”

      • Oliver’s comments are very relevant to the technical matter he is personally talking about…

      • I found a great quote from Bertrand Russell criticizing Henri Bergson, who was a prototype scientific skeptic/wacko known for his alternate space-time theories challenging Einstein:

        One of the bad effects of an anti-intellectual philosophy such as that of Bergson, is that it thrives upon the errors and confusions of the intellect. Hence it is led to prefer bad thinking to good, to declare every momentary difficulty insoluble, and to regard every foolish mistake as revealing the bankruptcy of intellect and the triumph of intuition. There are in Bergson’s work many allusions to mathematics and science, and to a careless reader these allusions may seem to strengthen his philosophy greatly.

        from Russell’s History of Western Philosophy

        Fill in a different name for Bergson and I couldn’t phrase it better today.

      • “Fill in a different name for Bergson and I couldn’t phrase it better today.”

        Indeed. But, I think the people to whom you and I assign this behavior are different.

      • Leaving out the big yellow thingy up there would be like leaving out the Delta in a statistical set…

      • Oliver,

        Many people choose not to see.
        Then they are not accountable as they can claim ignorance.

        By the way, what VESTED INTERESTS do climate scientists have in keeping the AGW alive and kicking. Even leaving out ocean data to attain more warmth.
        Very suspicious activity.

  2. LLE continue to duck the real questions.
    They do introduce a new element: “The results of LL are roughly compatible with the known climate sensitivity of the CO2 without feedback of about 1 °C.” CO2 is in the abstract, but not in the paper; carbon dioxide is in neither. The paper does not mention climate sensitivity. Greenhouse gases are mentioned twice, as in “it could be GHGs, it could be something else”, without any formal test.
    The sentence “The results of LL are roughly compatible with the known climate sensitivity of the CO2 without feedback of about 1 °C.” is not supported by the paper.

    • Richard Tol –

      Perhaps to be pedantic – do you mean “…….” is not supported by the paper AT ALL, or not supported in a convincing manner?

      I ask because the essence of the sentence you quote – “The results of LL are roughly compatible with the known climate sensitivity of the CO2 without feedback of about 1 °C.” is that one thing is roughly compatible with another – I wonder what sort of support you are looking for, for an observation of compatibility?

      • Richard Tol –

        I should have mentioned that my question comes from noticing that they describe non-feedback sensitivity as ‘known’ – they don’t really ever need to mention it otherwise – it is background knowledge to us all. Thereafter, they are claiming [it's not much] a ‘rough compatibility’

      • Not supported at all. The paper does not even mention the word “climate sensitivity”, so it cannot draw a conclusion.

      • You did say it was a “new element”. The paper does not even mention the word “climate sensitivity”, so it cannot draw a conclusion.

        “It” did not draw a conclusion. Horst-Joachim Lüdecke, Rainer Link, Friedrich-Karl Ewert today have told us that the results “are compatible with”, which is not false.

        Your criticism seems to be that they have told us today something which was not in their published paper and which is not in disagreement with their published paper.

        You wrote on a previous thread that you published in 2008 a paper that had the macroeconomics all wrong, what I suggested might be a “serious flaw”. I recommend that you publish a corrigendum next, before continuing with these criticisms of the work and affiliation of Horst-Joachim Lüdecke, Rainer Link, and Friedrich-Karl Ewert. Since your support of the futile yet expensive new carbon tax in Ireland was based in part on a paper that had the macroeconomics all wrong, you might apologize to the people of Ireland before telling us all that EIKE is a corrupt organization.

        Everybody makes mistakes, but to make a mistake like that and then arrogate some sort of moral superiority is unseemly.

      • Matt,

        Dr. Tol must still be thinking about how he is going to reply to this one, or maybe he is busy writing the corrigendum. I wonder how he managed to get a paper published with the macroeconomics all wrong. Sounds like a case of a lot of people being negligent, or complicit, in spreading disinformation.

      • What will happen one day, when the masses learn the truth: that H2O controls the climate 100%, in many different ways, and CO2 has nothing, nothing to do with the climate?!?! A trick was used to blame CO2 for the phony GLOBAL warming – then it was pinned on the climate.

        Constant climatic change is a natural phenomenon – Global warming is a phenomenal lie. ALL PROVEN ALREADY. You can drive from Florida to LA in a week and encounter 50 different climates – did 25 GLOBAL warmings and as many GLOBAL coolings happen in that week? Ignoring all my proofs is not a skill – but fear of the truth – the admission of one who doesn’t believe in his own persuasion.

    • Richard

      In regards to an earlier comment around the idea of people staying within their field of expertise and not commenting about that which they are not an expert (which I partially agree with) I thought your field was economics and not atmospheric physics. Was I mistaken?

    • Richard, I can’t help feeling this is somewhat mendacious. I get what they are saying and no way am I as smart as you.
      All they are saying that what they have found is consistent with an ‘external’ effect that would be consistent with CO2 forcing without feedbacks. They are quite clearly not saying that it is and they clearly have not set out to establish any form of attribution for the effects they are seeing.
      I have not heard or seen of a statistical treatment such as this, and it seems a perfectly understandable and legitimate approach, even at the ‘informed layperson’ level. I find it interesting and informative, but if there are errors in the calculations or something genuinely incorrect, then you should point it out:

      – What’s actually wrong with the approach?
      – Have they made any errors of maths?

      It strikes me in this business not enough people take a step back and ask themselves if they have fully understood what the author was saying before jumping in with both boots.

      • I have not heard or seen of a statistical treatment such as this and it seems a perfectly understandable and legitimate approach, even at the ‘informed layperson’ level.

        These papers are not at the layperson level but are more akin to the way I would think of the problem, and were in fact discovered after I worked out a “layperson treatment” on my blog.

        A Stochastic Energy Budget Model Using Physically Based Red Noise 2011

        Climate tipping as a noisy bifurcation: a predictive technique 2010

        These are very far removed from Ludecke’s approach but in fact contain the elements needed to reason about stochastic time series, which must first and foremost consider fundamental statistical physics.

      • Thanks for these links. I’ve taken a quick look at your post, looks interesting. Interest in a new thread discussing this?

      • I would title the post “Red, White, and Pink(?)”

      • do you want to put together a guest post on this?

      • do you want to put together a guest post on this?

        I added another analysis angle here:
        http://theoilconundrum.blogspot.com/2011/11/multiscale-variance-analysis-and.html

      • Web,

        You mention you need a mechanism for the Greenland versus Volkec series.

        First, http://www.youtube.com/watch?v=O4WMdwIwrSw
        Note the SH.

        Second, the temperature relationship below -20C for both CO2’s radiant and conductive impacts.

        http://redneckphysics.blogspot.com/2011/11/thermodynamic-layer-convergence-and.html

        I am working on a proof that Arrhenius was Erroneous with a multilayer radiation model.

      • First, http://www.youtube.com/watch?v=O4WMdwIwrSw
        Note the SH.

        That does make a lot of circumstantial sense. Looking at the video and your blogpost, and from what I can infer, variances in geospatial CO2 are extremely strong and these reflect the large temperature swings in NH vs SH. (At least that part of the Arrhenius rate law is correct :)

        Consider that the Antarctic proxies go back several hundred thousand years but Greenland only covers a 50,000 year interval, so we don’t see any of the interglacial swings in Greenland that we do in Vostok and EPICA.

        With that as a caveat, is it that Greenland is always swinging wildly between extremes thus possibly obscuring the interglacial differences? And that the relative insensitivity of the Antarctic to these swings makes Vostok a truer representation of the actual climate change?

      • Web,

        Arrhenius’ law is very close for conditions near the arctic circle, where he lived :) I guess he didn’t get out much.

      • Web,
        “With that as a caveat, is it that Greenland is always swinging wildly between extremes thus possibly obscuring the interglacial differences? And that the relative insensitivity of the Antarctic to these swings makes Vostok a truer representation of the actual climate change?”

        I would say that yes, Vostok would be a truer indication of average temperature swings. CO2 forcing is dependent on both the surface available energy and the average temperature of the radiant layers (Arrhenius assumed 255K as the average radiant layer and 246K as the average secondary radiant layer).

        For the conductive part, that is an indication how weak the radiant impact is in the Antarctic region. In the Northern extent, albedo change is much greater reducing available surface energy to fuel the radiant impact, much greater swings.

        Can I prove that? Not yet.

      • The fundamental nature of the Arrhenius rate law can’t be challenged in that thermal activation barriers do exist.

      • The challenge is not to the fundamental nature. The challenge is to the adequacy of the formulation. His formula does not indicate the temperature dependence of the radiant layer, only the change in concentration related to one temperature. His law does not adequately indicate the impact of latent energy and Water vapor concentration below the effective radiant layer. That impact is indicated by the unexpected increase in convective velocity with increased CO2 concentration.

        His paper specifically states, “impact on surface temperature” not impact on temperature above the surface.

        So In my opinion, his “law” is really more of a guideline :)

      • Maybe we are trying to converge.

        The basic activation energy rate law is:
        $$ e^{-\Delta E / kT} $$
        This is bread&butter stuff for material scientists and process engineers who are trying to calibrate vapor phase reactors, for example. There are lots of refinements to this particular law, but it is what Arrhenius says specifically about CO2 that is up for more debate. That involves much more than the rate law, agreed.

      • Web,

        That looks about right. I am taking a totally different approach since I am really working more on theoretical physics, http://redneckphysics.blogspot.com/2011/11/tropopause-and-2nd-law-from-energy.html

        I am trying to work towards the basic unit of emissivity which I think may be fun for the Dark Energy theory. That is pretty rough, but you may see where I am going.

      • The problem with “red noise” is that there is no unique type of process which can create it. I can create it with a diffusion process, or with a suitably selected sum of exponentially correlated processes, or I can get its signature when my data has simply been polluted with extraneous noise, typically in the form of random transient events (e.g., popcorn noise).

        Once again, you give us a mathematical abstraction with little empirical grounding.

      • Once again, you give us a mathematical abstraction with little empirical grounding.

        You apparently didn’t look at my post where I placed the red noise in the context of a physical model describing a shallow potential energy well. I think we all understand that the historical empirical observations suggest that some barrier exists to prevent the excursions from getting too large.

        I can create it with a diffusion process, or with a suitably selected sum of exponentially correlated processes, or I can get its signature when my data has simply been polluted with extraneous noise, typically in the form of random transient events (e.g., popcorn noise).

        Popcorn noise, also known as random telegraph noise or burst noise, I have written up in an online book and this falls under a semi-Markov classification not too far removed from what I am talking about. I have a derivation in the book where I generate 1/f noise from a range of burst noise signals, so I understand the issue of ambiguity clearly. One of the most important aspects is separating out the very random white noise that obviously occurs, especially when interpreting proxy records.

        Incidentally, I also have a section on the modeling of popcorn popping in that same book, and what do you know, but that is also an activation energy driven stochastic process.

        What else do you want to know?

      • WebHubTelescope: I think we all understand that the historical empirical observations suggest that some barrier exists to prevent the excursions from getting too large.

        That is an interesting comment. Is it true that we all understand that? I thought that CAGW derived from a belief that Earth mean temperature could increase a great deal. Where would you put the barrier — 288K?

      • “I think we all understand that the historical empirical observations suggest that some barrier exists to prevent the excursions from getting too large.”

        But, the manner in which that bound is enforced could take on any number of forms. I have only seen the phenomenon modeled successfully on a case-by-case basis.

        Your popcorn model sounds interesting. I will have to look it up when I have a chance.

      • The potential well describes a sweetspot with negative feedbacks on each side preventing a large temperature fluctuation.

        The energy barrier is exactly as Fred and Pekka have recounted many times. The T^4 negative feedback due to the Stefan-Boltzmann law qualitatively describes the upper temperature barrier. On the low side, we have a variety of latent energy barriers such as glaciation and sea ice formation and the very constant solar radiation.
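        A minimal numerical sketch of that picture, linearizing both barriers into a single restoring force (an Ornstein-Uhlenbeck process; all parameter values are made up for illustration):

```python
import numpy as np

def bounded_fluctuation(n_steps, restoring=0.05, noise=0.3, seed=0):
    """Red-noise temperature anomaly in a shallow potential well: a
    linearized restoring force (standing in for the T^4 Stefan-Boltzmann
    damping on the warm side and the latent-heat barriers on the cold
    side) pulls each step back toward the equilibrium at zero."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    for i in range(1, n_steps):
        x[i] = x[i - 1] - restoring * x[i - 1] + noise * rng.standard_normal()
    return x

path = bounded_fluctuation(50_000)
# The restoring force keeps excursions bounded: the stationary standard
# deviation stays near noise / sqrt(2 * restoring) instead of growing.
```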

      • Web said, “The energy barrier is exactly as Fred and Pekka have recounted many times. The T^4 negative feedback due to the Stefan-Boltzmann law qualitatively describes the upper temperature barrier. On the low side, we have a variety of latent energy barriers such as glaciation and sea ice formation and the very constant solar radiation.”

        That’s true. My question would be if there is also a latent barrier on the upside and if the Tropopause/Antarctic are not also upside thermodynamic barriers. You have to admit, the Antarctic is awfully stable and that stability is a major factor in deep OHC.

      • That’s true. My question would be if there is also a latent barrier on the upside and if the Tropopause/Antarctic are not also upside thermodynamic barriers. You have to admit, the Antarctic is awfully stable and that stability is a major factor in deep OHC.

        Yes indeed, that would be interesting. The issue with latent barriers is that when you do break through the capacitive buffer, there ain’t no turning back because the inertia going forward is now huge.

      • These are very far removed from Ludecke’s approach but in fact contain the elements needed to reason about stochastic time series, which must first and foremost consider fundamental statistical physics.

        Why must a stochastic time series analysis of Earth surface temperature data “first and foremost consider fundamental statistical physics”? The roles of the clouds in potentially regulating feedbacks (for example, increased temperatures leading to increased water vapor leading to increased cloud cover leading to decreased insolation leading to temperature reduction) are not well known, and not derivable from fundamental statistical physics.

        Horst-Joachim Lüdecke, Rainer Link, Friedrich-Karl Ewert have shown that the possibility of long-term autocorrelation due to an unknown (or poorly characterized) mechanism (or mechanisms) can’t be ruled out. Theirs is not the last word, and I hope their work will be followed up by analogous analyses of other temperature time series. Analysis of a large sample of time series (recall that the BEST team reported that 1/3 of all series show cooling instead of warming over the recorded intervals) might very well provide evidence of the time scale of the relevant processes.

        I doubt it. I think that the answers to the questions we are all asking will require the collection and study of data into the future, but I see no reason to rule it out a priori.

      • To somebody looking at the earth from a distance like it was a pea, it would indeed seem to follow statistical physics laws.

        I come from the semiconductor world, where the addition of a minute dopant concentration is enough to change the properties of a device. So I agree that clouds could indeed have an effect just like the dopant of CO2 can have an effect. That, however, does not mean that we can’t use statistical mechanics to understand the behavior. In fact, that is the fundamental basis for all device physics modeling, and I consider the earth a big photo-thermal device.

        Analysis of a large sample of time series (recall that the BEST team reported that 1/3 of all series show cooling instead of warming over the recorded intervals) might very well provide evidence of the time scale of the relevant processes.

        That is a very good point; if the time series is not big enough, use multiple parallel time series to build up the counting statistics. A noise model on top of a drift term should be able to show this 1/3 cooling fraction.
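        That sketch is easy to try: put AR(1) red noise on top of a common linear drift and count the negative trends. All parameter values below are invented for illustration, but a drift of ~0.7 °C/century with station-level red noise of this size does leave a sizeable minority of ‘cooling’ records:

```python
import numpy as np

def cooling_fraction(n_stations=2000, n_years=100, drift=0.007,
                     phi=0.8, noise=1.0, seed=0):
    """Fraction of simulated station records with a negative OLS trend
    when AR(1) red noise (lag-1 coefficient phi, innovation std noise)
    is superposed on a common warming drift (deg C per year)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    cooling = 0
    for _ in range(n_stations):
        eps = noise * rng.standard_normal(n_years)
        red = np.empty(n_years)
        red[0] = eps[0]
        for i in range(1, n_years):
            red[i] = phi * red[i - 1] + eps[i]
        slope = np.polyfit(t, drift * t + red, 1)[0]
        cooling += slope < 0
    return cooling / n_stations

frac = cooling_fraction()
```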

      • WebHubTelescope, that was a good post.

      • In the first paper you quote, the solar forcings are low by ±2 W/m^2. I find this rather disturbing. When you negate 80% of the solar forcing, you allow other areas to become “suspect” for the warming or lack thereof. They show simply 0 to 0.5 W/m^2 as the functional range when it is actually a full range of 4 W/m^2.

    • Dr. Tol,
      Feedbacks and climate sensitivity are fictional climate parameters produced by a virtual earth science of our making that has nothing to do with the Earth we live in. There is no theoretical or observational evidence that confirms their existence. Surface and lower-tropospheric temperatures have increased, but the temperature of the upper atmosphere decreased at the same time. The average temperature of the atmosphere has not changed, and there can be no feedback resulting from no change in the average temperature of the atmosphere. As a consequence of the average temperature of the atmosphere being unchanged, the water vapor content of the atmosphere has not changed, and there can be no feedback due to water vapor either.

        The average temperature of the atmosphere has not changed, and there can be no feedback resulting from no change in the average temperature of the atmosphere. As a consequence of the average temperature of the atmosphere being unchanged, the water vapor content of the atmosphere has not changed, and there can be no feedback due to water vapor either.

        This illustrates the problems entailed by thinking in spatio-temporal averages. At each particular place on Earth, the surface temperature and the water vapor content of the near-surface atmosphere change every day. All of these changes produce local feedbacks, only poorly measured for the most part, and especially poorly measured over the 3/4 of the Earth's surface covered by water. The “climate” is a loose conception aggregating all these local processes operating at their local time scales.

      • Dear Matt,
        With all due respect, I disagree. Surface air belongs to the surface layer, which is a small mass of air; it does not belong to the atmosphere. Water vapor is reasonably mixed in the atmosphere.

      • Surface air belongs to the surface layer, which is a small mass of air; it does not belong to the atmosphere. Water vapor is reasonably mixed in the atmosphere.

        So what? The temperatures still change, and the water vapor content of the atmosphere changes. Everything in the climate happens at particular places and times, with particular temperatures, winds, humidities, and so forth. Working with spatio-temporal averages blunts most understanding of the kinetics and dynamics of heat transfer.

      • But that is not what we see, and we go by what we see. Based on the latest studies, Dessler (2010), to name one, the water vapor content of the atmosphere has not changed. It increased in the air near the surface but decreased in the upper troposphere.

        It seems that we are going in a circle on this subject.

      • Based on the latest studies, Dessler (2010), to name one, the water vapor content of the atmosphere has not changed. It increased in the air near the surface but decreased in the upper troposphere.

        You identified a change that you say is not a change?

        The changes that I identified occur every day in almost every place on Earth.

    • Do you need a formal test for “it could be GHGs, it could be something else”? What would it be?

      • Exactly. If “it’s natural” was an explanation, science would begin and end with it.

        The authors are not providing any hypothesis to explain the temperature record. Nor any falsification of the theory that GHGs are the primary cause of the recent warming.

      • I think you have the wrong end of the stick. The statement “A or Not A” is logically true.

      • “The authors are not providing any hypothesis to explain the temperature record.”

        That’s absolutely correct, they are not, and it doesn’t seem to me they are trying to. They include a small amount of speculation, but their paper simply tries a statistical approach to determine how much of the 20th-century warming is ‘natural’ in statistical terms and how much might be external. After that you can have a discussion about how to attribute it.

        They may well be wrong, they may have made a mistake somewhere, but the approach strikes me as interesting and worthwhile. I just don’t understand why Dr Tol has such a problem with it, unless it is merely sticking to a position he might have been overly quick to adopt.

      • “If “it’s natural” was an explanation, science would begin and end with it.”

        Where in the world do you get that idea? That is not true in the slightest!

        Gravity is natural, does that stop us from doing all we can experimentally to tease apart just what gravity actually is, how it works, and what governs its processes?

        Cancer is natural, does that stop us from trying to dissect how it begins, how it spreads, how it kills, and how we can stop it?

        Arsenic is natural, does that stop us from studying its effects on living cells, how to protect from poisoning, how heavy metals function in biological systems in general?

        All these things are natural. Their functions are natural. There is no external influence from Man (other than on certain cancer rates, but we can only know that by knowing how it works “naturally”). Nor can we know how they work unless we try to experiment and gain a grasp of their basic functional parameters, akin to what these papers are doing at the basic level.

        So again, where do you get such a preposterous statement?

  3. LU shows that all five long-term instrumental records (Hohenpeissenberg, Paris, Vienna, Prague and Munich) have 100 year fluctuations that reveal external trends.

    You haven’t dealt with several comments relating to data quality of these records, problems detailed in Winkler 2009.

    The paper only deals with Hohenpeissenberg but all the other stations were part of the same meteorological network so would likely be subject to some of the same changes, and therefore biases. When the prescribed corrections are applied to Hohenpeissenberg the 19th Century trend is flat.

    • I did an analysis of the Vienna data here:
      http://theoilconundrum.blogspot.com/2011/11/multiscale-variance-analysis-and.html

      The variance in the temperature record appears to be white noise, uncorrelated except perhaps a slight uptick at the longest time intervals.

      • Web, Does the profile of DE vs Tq have to be symmetrical?
        Instead of being a symmetrical ‘U’-shaped curve can one side have a steeper gradient than the other?

      • Web, Does the profile of DE vs Tq have to be symmetrical?
        Instead of being a symmetrical ‘U’-shaped curve can one side have a steeper gradient than the other?

        Quite possible. The classical potential well is symmetric and, due to the Taylor series expansion, is usually simplified as a parabola. This is common in quantum mechanics, for example.

        The V-shaped well comes straight out of the Ornstein-Uhlenbeck process, and it generates a simple analytic solution.

        An asymmetric well is possible but doesn’t lend itself to simple closed-form analytic solutions. In reality, diffusion is all about symmetry, about describing the unbiased motion of a particle or quantity. When that particle reaches a quasi-equilibrium, it can hop either way, and that is reflected in the diffusion coefficient. With something like the Ornstein-Uhlenbeck process, we can be a bit more imaginative in how we set up the asymmetry, especially if we don’t care about an analytic result and simply simulate the process. This gives us unbalanced high and low variance.
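        As an illustration of that last point (my own sketch, not WHT's model), one can simulate, by the Euler-Maruyama method, a mean-reverting walk whose restoring drag differs above and below the mean, i.e. an asymmetric well with no simple closed-form solution, and observe the unbalanced variance directly:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_up, theta_dn = 2.0, 0.5   # stronger pull-back above the mean than below
sigma, dt, n = 1.0, 0.01, 200_000

x = np.zeros(n)
for i in range(1, n):
    # drag coefficient depends on which side of the mean the walker is on
    theta = theta_up if x[i - 1] > 0 else theta_dn
    x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.normal()

above, below = x[x > 0], x[x < 0]
print(f"spread above mean: {above.std():.2f}, below mean: {below.std():.2f}")
```

        The weakly damped side wanders further, so the excursions below the mean come out wider than those above.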

    • @PaulS Thank you for (re)asking my question. I still am waiting for an answer. I would like to add a question also.

      Why do LU only show data from Hohenpeissenberg, Paris, Vienna, Prague and Munich, when there are other long-term records of similar quality? Is it perhaps because those do not show such dramatic dips and rises?

      Finally, why does LU not acknowledge that the 20th century rise increases in slope significantly, even in the records used in the paper?

  4. Like Richard Tol, I’m struck by the fact that the authors have failed to address the most serious criticisms of four previous threads, but have simply repeated their previous assertions as though participants here couldn’t understand what they were doing. Rather than repeat the deficiencies in the LL paper here (there are too many), I think it would be more efficient for readers to revisit the earlier threads to understand why the paper’s conclusions about the probability of external trends have no merit on either physical or statistical grounds.

    • Fred, what are you talking about? I read those threads very carefully, and this summation they have presented here seems perfectly reasonable. Why don’t you outline the exact points you feel they have missed? You don’t need to go into detail just the bullet points you feel have not been addressed and should have.

      • Fred, what are you talking about? I read those threads very carefully, and this summation they have presented here seems perfectly reasonable. Why don’t you outline the exact points you feel they have missed? You don’t need to go into detail just the bullet points you feel have not been addressed and should have.

        It’s kind of difficult to argue with something so ultimately misguided as the Ludecke papers. I would rather start from scratch and show how to do the analysis correctly, with statistics supporting a physical model.

        This is my take on how to model temperature time series correlations:
        http://theoilconundrum.blogspot.com/2011/11/multiscale-variance-analysis-and.html

        Note that I don’t reference Ludecke in that particular post because it makes no sense to argue against muddled thinking when a clean sheet working from first principles is the better path to take.

        The essential ideas are:
        1. Assume random walk as the basic mechanism of weakly perturbed temperature variation
        2. Come up with a good metric that has an intuitive feel and works at multiple time scales
        3. Find data with long time series to test against
        4. Model the random walk with some physical interpretations, such as an Ornstein-Uhlenbeck process
        And then see what unfolds.

        Probably the only good thing to say about Ludecke is that it provided some motivation to try the analysis from a fresher perspective. They threw the challenge out there and it boomeranged back at them.

      • “Assume random walk as the basic mechanism of weakly perturbed temperature variation”

        A random walk is ultimately unbounded in variance, so how is that a realistic “physical model”?

      • A random walk is ultimately unbounded in variance, so how is that a realistic “physical model”?

        Ornstein-Uhlenbeck is a mean-reverting random walk.
        In the limit the variance is related to the diffusion coefficient and drag terms by:
        var(x_t) = \frac {\sigma^2}{2\theta}
        The drag essentially pulls the unbounded random walk back to the mean.
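        A quick numerical check of that relation (a sketch with arbitrary parameter values, not taken from the thread), simulating the Ornstein-Uhlenbeck SDE dx = -theta*x dt + sigma dW by the Euler-Maruyama method:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, sigma, dt, n = 1.5, 0.8, 0.01, 500_000

x = np.zeros(n)
for i in range(1, n):
    # drag term -theta*x pulls the walk back; the noise term diffuses it
    x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.normal()

burn = n // 10                  # discard the transient before measuring
print(np.var(x[burn:]), sigma**2 / (2 * theta))   # sample vs. predicted variance
```

        The sample variance settles near sigma^2/(2*theta) = 0.21, while an unconstrained random walk driven by the same noise would grow without bound.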

      • A random walk is ultimately unbounded in variance, so how is that a realistic “physical model”?

        Empirically, random walks often fit the data quite well. Perrin’s experiments on Einstein’s analysis of Brownian motion being the classic example.

      • Ornstein-Uhlenbeck is a diffusion process which, in its simplest form, begets an exponential correlation. A random walk has variance increasing linearly with time.

        “Empirically, random walks often fit the data quite well.”

        Over a finite time interval, it can. Not for a highly variable but bounded process within the time span of the data record, though.

        Fundamentally, though, why “assume” anything? When you assume something, the result you get may or may not match reality. As you assume many things, the likelihood of matching reality decreases geometrically. Get some data and analyze it to see what is really happening, then build your model.
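        One simple data-first diagnostic along those lines (my sketch, run on synthetic series) is the variance of increments versus lag: for a random walk it grows linearly with lag, while for a bounded mean-reverting process it saturates.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
walk = np.cumsum(rng.normal(size=n))   # unbounded random walk
ou = np.zeros(n)
for i in range(1, n):                  # bounded, mean-reverting (AR(1)) series
    ou[i] = 0.99 * ou[i - 1] + rng.normal()

def lag_variance(x, lags):
    """Variance of x(t+lag) - x(t) at each lag."""
    return [np.var(x[lag:] - x[:-lag]) for lag in lags]

lags = [1, 10, 100, 1000]
print("random walk:", [round(v, 1) for v in lag_variance(walk, lags)])
print("bounded:    ", [round(v, 1) for v in lag_variance(ou, lags)])
```

        Applied to a real temperature record, the shape of this curve indicates which model class the data supports before anything is assumed.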

      • Where would we be if Newton had started by assuming that position was a first order lagged response to force? If m*dx/dt = -b*x + f instead of m*d^2x/dt^2 = f? The model has to fit the data, or you get nonsense.

      • Where would we be if Newton had started by assuming that position was a first order lagged response to force? If m*dx/dt = -b*x + f instead of m*d^2x/dt^2 = f? The model has to fit the data, or you get nonsense.

        ??? Actually, where would Newton be? Perhaps a failed mathematician and alchemist.

      • WebHubTelescope, I like your plan.

        About this: 4. Model the random walk with some physical interpretations, such as an Ornstein-Uhlenbeck process
        And then see what unfolds.

        Would you not also require, as Ludecke et al did, that the family of models have some parameters that are at least approximately equal to some statistics from real series? Ludecke et al focused on the Hurst exponent, computed from the autocorrelation function of the detrended series. Wouldn’t you at least want a population model that has a similar variance and autocorrelation function?

        it provided some motivation to try the analysis from a fresher perspective.

        That’s one reason that flawed papers are deemed worthy of publication. It is premature to say that their approach has boomeranged on them.
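        For readers wondering what that Hurst-type statistic looks like in practice, here is a minimal detrended fluctuation analysis (DFA) sketch; this is my own simplified illustration, not the code used by Ludecke et al.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Estimate the DFA fluctuation exponent alpha of series x."""
    y = np.cumsum(x - np.mean(x))       # integrated "profile" of the series
    fluct = []
    for s in scales:
        resid = []
        for k in range(len(y) // s):    # linearly detrend each window of size s
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            resid.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(resid)))
    # slope of log F(s) versus log s is the exponent alpha
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(3)
alpha = dfa_exponent(rng.normal(size=20_000), [16, 32, 64, 128, 256])
print(f"alpha for white noise: {alpha:.2f}")   # about 0.5 for uncorrelated data
```

        Uncorrelated data give alpha near 0.5; persistent, long-memory data give alpha above 0.5, which is what LU interpret on the long station records.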

      • It’s kind of difficult to argue with something so ultimately misguided as the Ludecke papers.

        Well actually WHT, I think this is a different argument. Not to get too bogged down in semantics, if you can describe why their approach is wrong, I’d actually be interested to hear it. My beef with Fred is that he has mischaracterized what that approach is, which makes me suspicious that he doesn’t like the implications. Most people have tossed back and forth specific issues of technicality without any real resolution, but no one has come out and said that the basic approach is wrong. Richard Tol, for example, seems to take exception to technical aspects that would normally be questionable within a more orthodox approach.

        Mattstat: That’s one reason that flawed papers are deemed worthy of publication. It is premature to say that their approach has boomeranged on them.

        This is the impression I have formed thanks to some extremely interesting posts of yours. I am bothered by the arrogance of presuming that because a method of analysis doesn’t follow a particular commentator’s expectation of how such an analysis should be done, it is therefore ‘fatally flawed’ and ‘disinformation’. That is not the same as saying that it is right, but it is worthy of examination. If it inspires WHT to try looking at it from a different perspective, then so much the better! I thought that was how science progressed.

      • Anteros – I hope you’ll forgive me for not repeating the essence of dozens (possibly hundreds) of comments demonstrating the invalidity of the paper’s conclusions – an enormous amount of time and space has already been spent on this, and it would not make sense to repeat it. However, if you are interested in my comments alone, start with the Mail on Best post at the end of October and search for my name there and in subsequent threads. I believe my comments by themselves should make it clear why the paper can’t be taken seriously, but there are many other comments by others whose additional perspectives add great weight to this conclusion, even if you don’t find my points convincing.

      • Anteros – My apologies for addressing my comment above to you. It was intended as a response to Agnostic.

      • Not a problem – it gave me pause to think about my comment!

      • Thanks Fred – I didn’t mean to give the impression that I thought the papers weren’t flawed, far from it. I simply thought Prof. Tol’s criticism – that they didn’t mention climate sensitivity in the paper – was pretty much beside the point; was there any necessity for them to do so? I accept he didn’t need to go over all the same points as the first time round, but this just seemed like a lazy dismissal. Perhaps it was indeed pedantic of me.

      • I could easily suggest you do the same thing. I read with great interest what Mattstat, David Young, Brandon Shallonberger and a few others had to say. I read the criticisms very carefully, and it appears to me that they are based on misattribution of certain terms, e.g. ‘natural’ and ‘external’ (which is understandable in the context of climate science, and you could argue that the authors should have been much clearer in defining their terms), and on other misconceptions about what was or was not attempted.

        It strikes me that before you jump in and determine that a paper is ‘fatally flawed’, you should examine carefully what is actually being said. If you think you have ‘demonstrated the invalidity of the conclusions’, it may be that you have missed some important part of the comprehension process, and having denounced it, you have to further justify that position by making further mischaracterisations, which you may or may not do consciously.

        I’ll repeat, the conclusions these guys have come to may ultimately be incorrect. There may be things they have overlooked, and certainly they themselves point to difficulties in the temperature record, for example. But the paper, on the weight of arguments that I have read in all of the threads, was worth examining, and the approach novel and interesting.

        The reason it makes no sense for you to repeat the criticisms is that they have been a) addressed or b) not coherent enough to be expressed. Perhaps give us bullet points – how about limiting yourself to just 3 of the most egregious problems? Don’t even bother with links to the detailed critiques; I am sure we’ll find them.

      • yes, Fred, 3 short and concise bullet points might well overcome the impression that you are hand-waving somewhat!

      • What is natural, external, internal, forced, unforced, feedback, response, artifact, signal, normal and Abby normal is pretty hard to keep up with in climate science.

      • Agnostic – Their terminology was confusing, and I think they ended up confusing themselves, but that is a minor quibble. In their earlier response, they said, “Our main task was to show that the 20th century warming was predominantly a natural fluctuation.” They concluded their evidence showed that it was – i.e., that the warming was due mainly to fluctuations within the climate system rather than an external trend.

        That conclusion has been shown in what are now five threads to be completely wrong. If you have further questions, and can’t find answers within these five threads, including the many comments I’ve already devoted to this paper, please email me, but I don’t want to waste everyone’s time redocumenting the conclusive evidence that the paper’s findings were wrong. I will be glad to respond to an email.

      • Well I’ve been over all of the previous posts and looked again at Fred’s objections to the papers. Despite his claims that the papers include numerous flaws I can only find one that he identifies in his comments (albeit one that he repeats several times).

        It is that the extent of ocean heating in the second half of the Twentieth Century precludes recent warming being caused by ‘natural fluctuations’

        So, one bullet point actually, but an important one nonetheless.

      • Rob – Thanks for mentioning the point about the ocean heat. For Ludecke et al, I think there are two separate issues – a physical one (related to the ocean heat change) and a statistical one.

        The physical question is whether most warming was due to natural fluctuations. The statistical question is whether the statistical treatment described by Ludecke et al showed that most warming was due to natural fluctuations.

        They failed the statistical test for a number of reasons, and I’ve reemphasized one below. However, their incorrect conclusion that they had shown natural fluctuations to dominate doesn’t exclude the possibility that this is indeed the case and that they simply erred in their attempt to prove it. The physical explanation based on ocean heat content and the negative constraints imposed by the Stefan-Boltzmann law (precluding runaway type climate feedbacks) is what shows the role of natural fluctuations to be no more than minor (at least for the 1956-2005 interval).

        Well, it looks like despite my intentions, I’ve been lured into posting two bullet points. But I’m determined not to give up a third one.

      • What I discovered a long time ago, on a thread related to global average temperature computation, is that Fred (who of course is sincere in his attempts to read a lot on this subject and opine on it) simply does not understand the basic mechanisms behind the most basic papers that are the foundation for a variety of climate claims. I confronted him several times in that thread. He would do the classic dodge of “oh, have you read that other paper, read this 1940s paper,” etc., without being able to put down the basic algorithm captured in the global average temperature calculation paper (by Hansen, I believe; I don’t remember now). He means well, I’m sure. But this is his tactic. He reads a lot and can quote a lot of those papers. But when push comes to shove, he wouldn’t know the details. I have seen it on at least a couple of topics where I could nail that down. But since I don’t have the time or inclination to skim through every climate paper and write comments on a blog, I don’t bother taking him seriously. It is one thing to sound like you know a lot in depth, and it is another thing to actually know a topic in depth. Since people on blogs have other things to do, you can count on them, for the most part, either not to know the topic in depth or not to go through your comments in depth and confront them. I guess this gives you a certain license.

        I’m not surprised by your comments.

  5. Guest posts reflect the opinions only of the guest posters. The presence of a guest post at Climate Etc. reflects my judgement that the topic is of interest to the participants here, and does not imply any endorsement by me of the analyses/ideas presented in the post.

    I like this. I don’t know if it is intended to be sarcastic, but I think it is a useful disclaimer.

    This large number of comments demonstrates the interest in the BEST and the LU, LL publications.

    People slow down for car accidents on the side of the road, too. Interesting, certainly, but significant?

    • If it’s not significant Robert, what are you doing here? Gawping? :)

      • It’s cheaper than the zoo. ;)

      • Anteros,

        I know you meant no ill-will, but you see Robert is a very sensitive individual and the subject of comment-counts as a measure of a post’s worth touches on one of Robert’s very raw and exposed nerves.

        You may not know it, Anteros, but Robert has a blog of his own. And don’t “kick” yourself that you might not know that, since not many people do. But, if you can afford the expense of the full bio-hazard suit-up, I recommend a one-time visit to Robert’s blog, “The Idiot Tracker”, where you’ll find quite an eager-beaver, breathlessly self-important, oh-so-urgent, attention-seeking, I’m-smart-and-you’re-dumb, “purty pitchure” intensive, no-wonder-this-guy-can’t-get-a-date, real-life-substitute loser-blog fighting the good fight against the “denier” evil-doers. In other words, Greenshirt Manichaeism at its most reliably and unimaginatively orthodox, tedious, motor-mouth, and dork-ball–with the bonus of all those “purty-pitchures.”

        But despite the many attractions of “The Idiot Tracker” blog and its demonstrated ability to reliably track Robert, the blog suffers in one small area. In particular, no one reads it. No, that’s not quite right, there are a few that seem to be attracted to Robert’s blogospheric flame, but the number is derisory; and, possibly due to an unfortunately high incidence of illiteracy among Robert’s narrow readership or his readers’ busyness with more important things, fewer still comment. And even those few comments are invariably I’m-feeling-sorry-for-Robert-and-his-pathetic-loser-blog-so-I’ll-throw-him-a-one-or-two-line-mercy-comment crumbs.

        For example, since Oct 26, there have been seven posts to “The Idiot Tracker” and these posts have attracted a grand total of six comments. But I’m sure Robert will insist that crude comment-counts provide no worthwhile measure of a blog/post’s worth (except for the trivial case of car-wreck blogs/posts). And he may just have a point, this time, because the latest comments on Robert’s blog, through some sort of prodigy of numerology, probably occultic in its origins, have produced a SIX-WAY TIE FOR TOP COMMENTER! A blogospheric first that’s unlikely to ever be broken! Let’s congratulate the lucky winners!

        Comment # / Winner
        1/ Anonymous
        1/Tracker (the “Idiot” himself)
        1/Lars
        1/Steve Bloom
        1/Louise
        1/Manuel ‘moe” g

      • 1/ Anonymous
        1/Tracker (the “Idiot” himself)
        1/Lars
        1/Steve Bloom
        1/Louise
        1/Manuel ‘moe” g

        They are all Robert aliases. So we do have a winner. Although that is obviously a misnomer.

      • I have to confess that I visited the malignancy, the so-called ‘place-of-the-idiot’. For a long while, though, I believed myself to be somewhere else – there was a sub-heading on the sidebar entitled ‘Popular Posts’, which I thought couldn’t possibly belong to a blog with only six commenters – especially if the ‘commenters’ are all Robert…

        However, the truth finally dawned…..

        Now, something else I noticed has led me to introduce this competition for which there is a prize of precisely nothing. I have reproduced two quotations below. One of them was made this week by Professor Richard Betts of the Met Office, and lead author of the IPCC AR5. The other is by Robert. Stretch your imagination if you will and see if it is possible for you to decide, in effect, ‘who said what’.

        “Scientists estimate a warming of 2C as the upper limit of what our civilization can adapt to, and not suffer disaster on a planetary scale”.

        “Most climate scientists do not subscribe to the 2 degrees “Dangerous Climate Change” meme (I know I don’t)”

      • Guys, you have achieved something that I thought impossible…

        you’ve made me feel sorry for Robert

  6. “The errors that are statistical, systematic, and caused by the changing environment conditions of the screens at the individual stations are desperately unknown and scatter widely.”

    So, can we conclude that, given the big possible errors in the data, statistical analysis cannot produce any physically meaningful knowledge?

    If the data have errors of ±2-5 deg (as more than 70% of stations do), how can you discern whether a trend of, say, 0.5 deg is “natural” or “forced”?
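    Part of the answer is whether the errors are random or systematic. A toy Monte Carlo (all numbers hypothetical, my own sketch) shows that large random station errors mostly average out across a network, so a small common trend can still be recovered; shared systematic drifts would not cancel this way.

```python
import numpy as np

rng = np.random.default_rng(5)
n_st, n_yr = 500, 100
t = np.arange(n_yr)
true_trend = 0.005                          # 0.5 deg C over the century

# each station: common trend + a static bias + large random yearly errors
data = (true_trend * t
        + rng.uniform(-3, 3, (n_st, 1))     # per-station static bias, +-3 deg
        + rng.normal(0, 2, (n_st, n_yr)))   # random yearly error, sd 2 deg

mean_series = data.mean(axis=0)             # averaging shrinks the random part
slope = np.polyfit(t, mean_series, 1)[0]
print(f"recovered trend: {slope:.4f} deg C per year")
```

    The recovered slope lands close to 0.005 despite per-station errors of several degrees; systematic, time-varying biases (screen changes, urbanization) are the part that genuinely corrupts the trend.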

  7. Here’s something I keep wondering about in this discussion. There are methods (developed by Donald W. K. Andrews and others) for testing the null “No trend change has occurred during the period [t1,t2]” against the alternative “A trend change has occurred during the period [t1,t2].” Here is a classic paper in the genre:

    http://www.jstor.org/pss/2951764

    The class of models allowed in this Andrews paper is very large, essentially any model estimable by GMM, which includes models with a persistent error process. I have used these techniques myself. So, would someone (maybe the authors) explain how the DFA/Hurst method is or is not related to this existing literature on testing for trend breaks in a time series?

    • That is from the world of econometrics, where no physical basis exists for anything, so trendology is all you have (halfway joking). The difference is that in climate science physics does play a role, and the models of drift, drag, and diffusion often have real physical interpretations. Those parameters make the difference in interpreting trends, and that is what distinguishes the role of statistics from that of statistical physics and stochastic modeling.

  8. It’s probably quite impossible to convince the unconvincable.

    Meanwhile, let me see if I got this right. If a former value of “yesterday” (last week/last month/last year) is lower than today’s, and the value “tomorrow” is higher than today’s, we have a persistent tendency attempting to hold on in one direction, which is typical for positive feedback, as it acts with a certain delay. But on the other hand, if the value “yesterday” is higher than “today” and the value “tomorrow” is higher than “today” again, we have an antipersistent tendency, obstructing the deviation and steering back toward the more moderate value, as is typical for negative feedback.

    So we simply have to count the number of persistent triplets of consecutive values (T0&lt;T1&lt;T2 or T0&gt;T1&gt;T2) against the antipersistent triplets (T0&lt;T1&gt;T2 or T0&gt;T1&lt;T2).

    So let's try that for instance with the MSU data of UAH here:
    http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.4
    Now check out this XLS sheet.
    http://dl.dropbox.com/u/22026080/tltglhmam_5-non-persistent.xls
    We check, for each set of three consecutive data points, whether it is persistent or antipersistent, in columns H-K for the global, NH, SH and tropical datasets.

    Note that of the 392 data triplets, in every column there are many more antipersistent than persistent tendencies, roughly in a 0.4 ratio. I believe that this is an approximation related to the Hurst exponent, if I understood it all.

    That would mean that on a time scale of one month, the lower troposphere demonstrates negative-feedback behavior. This is fortified by the notion that ENSO causes extra persistent behavior for several consecutive months.

    So that's pretty simple, and we can check any series immediately.
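    The counting scheme is easy to script. The sketch below (my code, applied to a synthetic series rather than the UAH file, which isn't fetched here) counts monotone triplets as persistent and turning points as antipersistent; for white noise the persistent-to-antipersistent ratio is 1:2.

```python
import numpy as np

def triplet_ratio(x):
    """Ratio of persistent (monotone) to antipersistent (turning) triplets."""
    up = np.diff(x) > 0                      # direction of each step
    persistent = np.sum(up[1:] == up[:-1])   # two consecutive steps same way
    antipersistent = np.sum(up[1:] != up[:-1])
    return persistent / antipersistent

rng = np.random.default_rng(9)
ratio = triplet_ratio(rng.normal(size=5_000))
print(f"white-noise ratio: {ratio:.2f}")     # about 0.5 for uncorrelated data
```

    A ratio below 0.5, like the roughly 0.4 reported above, indicates antipersistence at that time scale; relating it quantitatively to the Hurst exponent takes more work than this count alone.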

  9. If I may, I would like to share a visual example of how a very simple system can give rise to complexity. The music is nice too.
    Notice, that with Newtonian physics and Excel we could describe everything that happens, but, if you only had a limited visual reference and were only looking at ‘white ball density’, the curve generated in time would be extraordinary complex.

  10. This science has all of the ‘naturalness’ of a Master Chef cookoff based on digital pictures of their competing steak salads.

  11. Horst-Joachim Lüdecke, Rainer Link, Friedrich-Karl Ewert

    First, a huge thankyou!

    The results of LL are roughly compatible with the known climate sensitivity of CO2 without feedback of about 1 °C.

    This result agrees with a single global mean temperature pattern since 1880:
    http://bit.ly/sxEJpK

    This demonstrates that the increase in human emissions of CO2 has not changed the global mean temperature PATTERN. This single pattern shows a long-term global warming of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years.

    IPCC’s projection of continued warming of 0.16 deg C per decade for the period 1970-2000 into the future corresponds to its climate sensitivity of about 3 deg C.

    The last 8 years of data show a slightly negative trend, which tentatively indicates the long-term global warming rate is only 0.06 deg C per decade.
    http://bit.ly/szoJf8

    As a result, if the trend of the last 8 years continues until the 2030s, the correct climate sensitivity will be only 3*0.06/0.16 = 1.12 deg C, which is the “known climate sensitivity of CO2 without feedback”.

  12. @ Fred: http://judithcurry.com/2011/11/15/ludecke-et-al-respond-part-ii/#comment-138766

    That conclusion has been shown in what are now five threads to be completely wrong. If you have further questions, and can’t find answers within these five threads, including the many comments I’ve already devoted to this paper, please email me, but I don’t want to waste everyone’s time redocumenting the conclusive evidence that the paper’s findings were wrong. I will be glad to respond to an email.

    That is very kind offer Fred, however I don’t think it is necessary, and I am finding your attitude on this exasperating even for an ‘agnostic’.

    – The conclusion you say they are drawing, and the conclusion I think they are drawing are two different things.
    – I do not think the evidence that the paper’s findings are wrong is in any way ‘conclusive’.
    – Therefore, there have not been five threads concluding they are wrong.
    – We are not wasting everyone’s time because I want to draw attention to the fact that in my view you and Richard Tol have been disingenuous.

    I should add I have read every comment you have written regarding this paper over the course of the threads, and I keep getting the impression you are wilfully mischaracterising what they are trying to do.

    Please just outline 3 points of technicality in which you think they are incorrect that have been discussed in earlier threads in greater detail.

    I agree that their terminology was not sufficiently clear from the outset and led to confusion. But it is terribly arrogant for you to say ‘they confused themselves’. I do not think that you should default to ‘egregious error’ on reading something that appears to you to be completely incorrect. You should first consider that there might be something you have missed.

    In the case of Richard Tol, I strongly suspect he missed the strategy they were taking and objected to what they were doing from a more conventional viewpoint. In your case, I suspect you have jumped on the bandwagon because you do not like the paper’s implications.

    What I am strongly objecting to is the constant mischaracterization of what is, to me, clearly being said, and which has been pointed out several times in response to criticisms. It is easy to disagree with something if you re-fashion its meaning into something you can disagree with!

    So 3 points of technicality that have not been adequately addressed. Thanks.

    • Well summarized

    • Agnostic – I regret you’re exasperated, but you are still free to email me. Regarding my understanding of what the authors “are trying to do”, I quoted their exact words, and that was the basis of my understanding. I really don’t see any possibility that this paper is salvageable, but if you have truly read all of my many comments carefully (comments to which I devoted considerable effort and detail), and still have questions, the email route remains open. Summarizing my effort now in a few bullet points can’t adequately convey the weight of evidence, and strikes me as an invitation to continued arguing – if the details haven’t registered, the bullet points certainly won’t.

      • Agnostic

        If you have followed Fred Moolten on this site for any time, you will have seen that:

        – he is very eloquent
        – he gives the first impression of being quite knowledgeable
        – he exhibits a knee-jerk reaction to reject any message that might challenge his paradigm
        – he likes to refer to past posts (which are hard to find) as evidence
        – he prefers to speak in generalities
        – he avoids specific questions like the plague
        – he is more slippery than an eel

        Max

      • Max

        For some reason, I always agree with you!

        I thought I was very objective!

        Max, you are more objective than me!

        Thank you!

      • Girma – Don’t be silly. Max has accused me of being eloquent and knowledgeable but has not provided one shred of evidence to support his claims. It may be true that of the hundreds of comments I’ve made in this blog, most readers have considered almost all to be knowledgeable, but that is still just a matter of opinion and it may simply reflect the fact that everybody likes me.

        He has also refused to cite even a single example of my eloquence, and I challenge him to find one. I think he just made that up.

      • Fred

        Here is an example.

        The physical explanation based on ocean heat content and the negative constraints imposed by the Stefan-Boltzmann law (precluding runaway type climate feedbacks) is what shows the role of natural fluctuations to be no more than minor (at least for the 1956-2005 interval).

        You claimed “the role of natural fluctuations to be no more than minor”. This is just a claim. There is no evidence for it. It is just storytelling.

        Fred, is not the “natural fluctuation” described in the following paper major?

        http://bit.ly/nfQr92

      • Fred

        I agree that you are an excellent story teller.

        However, regarding AGW, the story may be wrong.

      • Ain’t that the truth, brother! But I prefer ‘verbose’ to ‘very eloquent’.

    • Well Agnostic, I’ve been over all of the previous posts and looked again at Fred’s objections to the papers. Despite his claims that the papers include numerous flaws, I can only find one that he identifies in his comments (albeit one that he repeats several times).

      It is that the extent of ocean heating in the second half of the twentieth century precludes recent warming being caused by ‘natural fluctuations’.

      So, one bullet point actually, but an important one nonetheless.

      Sorry for this repeat comment but it seems to fit best here.

      • Thank you, RobB. And it is not an unreasonable point. It is worthy of a response, IMO.

        @Fred:
        Summarizing my effort now in a few bullet points can’t adequately convey the weight of evidence, and strikes me as an invitation to continued arguing – if the details haven’t registered, the bullet points certainly won’t.

        Oh rubbish. Surely you can give them headings and I’ll look up the rest. The point is that the ‘evidence’, as you put it, is not conclusive and has been dealt with reasonably on those same threads! Just because you wrote it don’t make it so! It certainly doesn’t make it ‘evidence’. And it certainly doesn’t make the papers worthless and unworthy of examination. It’s the sort of hubris that is now archetypical in climate science, the sort that gives rise to the nonsense exemplified by a certain hockey-stick. It doesn’t convince – on the contrary, it deepens skepticism.

        And do not think that I am unable to follow the details. I don’t pretend to have enough background knowledge to discuss them intelligently, but I have enough to follow the conversations between those who do.

        I quoted their exact words, and that was the basis of my understanding.

        I can’t find where you quoted them, but I am pretty sure you are making a big effort not to understand them – or to mischaracterise them. Instead you say:
        I think it would be more efficient for readers to revisit the earlier threads to understand why the paper’s conclusions about the probability of external trends have no merit on either physical or statistical grounds.
        Yet if they do visit they would not necessarily come to the conclusion you would like them to. What the paper concludes is:
        LL demonstrates that the 20th century’s global warming was predominantly a natural 100-year fluctuation.
        Which I would have thought was entirely uncontroversial provided you understand what they mean by ‘natural’.

        I tell you Fred, it is precisely this kind of mendaciousness that has given this scientific field a really bad name, and you are partaking of the poisoned well and drinking deeply. I truly try to remain as open to new evidence as possible, but your and Dr Tol’s approach has set off all my BS sensors.

        Here, in my view is how you and Dr Tol should be critiquing the papers, in such a way as to show that you have properly considered it, requested clarification, and made a judgement. It might go something like this:

        Richard Tol: “Initially I thought there were grave errors in the statistical analysis, but it turned out to be a new approach, which might be worth further exploration. However, I think this analysis is too confident in its conclusions, and has not accounted for the uncertainties in the station data to my satisfaction. I am somewhat skeptical.”

        Fred Moolten: “I think there is some doubt over the validity of the statistical approach, and I think there is a problem with regards to ocean heat increase which has not been accounted for in the analysis. I do not have the statistical knowledge to comment authoritatively, but I too am skeptical and would like to see the study replicated and/or revisited and improved by others.”

        In the meantime, if you could kindly supply the other 2 bullet points – which should be easy if the paper is as error-ridden as you say.

      • Actually I did find your quote from them here: http://judithcurry.com/2011/11/15/ludecke-et-al-respond-part-ii/#comment-138766

        That’s the bit I think you have either not understood or mischaracterised. I understand what they mean, and it in itself ought to be pretty uncontroversial.

      • Also see here. Most warming was anthropogenic, but if that is what Ludecke et al were trying to say, then I clearly misunderstood them. I thought they were arguing the opposite, and apparently others did as well.

      • Note that LLE do not use a new approach. They abuse existing methods.

      • The approach was new in terms of examining the temperature record. I note that you used the word ‘abuse’ as if they were a bunch of kiddy-fiddlers exchanging illicit photos. I have read your critique, the responses, and the more detailed examinations by knowledgeable commentators here, and I cannot understand how you can maintain your position on this paper, and that Judith was disseminating ‘disinformation’. Can you really not see how that harms your credibility?
        Let me make this one point; I have my own reservations on the paper, and while I think it is interesting (especially as a first step in taking a new approach to putting bounds on climate sensitivity) I don’t think it is of earth-shattering importance. If I was not skeptical of CAGW it would not convince me to change my view. But owing to the spectacularly unreasonable gate-keeping, you have forced a defence of it out of all proportion to its importance.

        Surely you have to ask yourself why?

        @Fred: You clearly did misunderstand them since they did not address attribution of the warming they found at all. God’s sakes man, I understand what they are saying, why can’t you? You keep confusing ‘natural fluctuations’ in statistical terms with ‘natural variation’ as we understand it more generally in climate science. You can argue that this is too easy to misunderstand and I won’t disagree, but their paper was possible to understand provided you don’t reject it out of hand first.

        Go back and re-read some of the responses to your comments especially from Mattstat and Brandon Shallonberger.

  13. We can all appreciate the nuances of the various mathematical approaches to problem-solving. Let us not, however, deceive ourselves when it comes to applying these approaches to climate model-making.

    We obviously will never undertake such a task if we cannot first concede, by unanimous consensus, that we can only understand climate as a holistic process. People with such candor will also know that our ability to effectively model climate is limited.

    Divining reality from the shadows on the walls of Plato’s prison cave may be as close as we will ever come: the shifting crusts and volcanic eruptions, oscillations of solar activity on multi-decadal to centennial and millennial time scales with variations in gamma radiation and the role of the big planets, Saturn and Jupiter – and a changing North Pole and variations in the magnetosphere – all are part of a holistic process that is the Earth’s climate.

  14. The paper by Ludecke et al entailed a misapplication of the process described in a 2009 GRL paper by Lennartz and Bunde, and so the original might be worth visiting. I have some minor reservations about their method, but it strikes me as both ingenious and well within the realm of accepted statistical principles, including those violated by Ludecke et al. It’s basically a form of significance testing of a null hypothesis that “natural fluctuations” account for observed temperature changes over 50 or 100 year intervals of the twentieth century.

    The authors appear to be unfamiliar with climate concepts, and so they oppose “natural fluctuations” to “anthropogenic warming”, which is not an accurate dichotomy. More accurately, their method is designed to evaluate random walk type fluctuations with a high degree of autocorrelation as distinguished from externally forced trends. If the latter are present, the method won’t tell us whether they are anthropogenic or due “naturally” to, say, a persistent solar forcing, or a persistent forcing from long term declines in cooling volcanic aerosols.
    That said, the method itself can be misinterpreted – it misled Ludecke into calculating probabilities that twentieth century warming was “natural” (i.e., random walk generated). Here is a relevant quotation from Lennartz and Bunde:

    “For estimating the probability that a certain relative temperature increase may occur due to natural fluctuations, we have studied long synthetic records…”. This is standard null hypothesis testing in that the authors asked, “Given the null hypothesis, with what probability will it yield the observed results?” What Ludecke et al did, however, is invert this into, “Given the observed results, what is the probability that the null hypothesis is true?” The two are not interconvertible, and the latter question is unanswerable without further data. Lennartz and Bunde didn’t fall into this trap. They simply inferred that an “anthropogenic trend” (they really mean an externally forced trend, anthropogenic or natural) is “likely” if the null hypothesis rarely generates the results (e.g., less than 5% of samples), and a “natural fluctuation” explanation is more likely if the null hypothesis can easily generate a set of observations. By not attaching a probability value to these conclusions, they avoided the misstep that led Ludecke et al astray.
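    The inversion described above can be made concrete with a toy Monte Carlo. The sketch below uses a simple AR(1) process as a stand-in for the long-term correlated records of Lennartz and Bunde (their actual method uses detrended fluctuation analysis, not reproduced here); the point is only that the fraction of trend-free “natural” runs reaching an observed rise is P(data | natural) – a p-value – and by itself says nothing about P(natural | data):

    ```python
    import random

    def ar1_series(length, phi, rng):
        """A trend-free but persistent (autocorrelated) series:
        x[t] = phi * x[t-1] + white noise."""
        x, out = 0.0, []
        for _ in range(length):
            x = phi * x + rng.gauss(0.0, 1.0)
            out.append(x)
        return out

    # Null hypothesis H0: the record is a purely "natural" fluctuation.
    # Question answered below: given H0, how often does a rise at least as
    # large as the observed one appear?  That is P(data | H0), not P(H0 | data).
    rng = random.Random(42)
    N_SIM, LENGTH, PHI = 5000, 100, 0.8
    observed_rise = 4.0   # hypothetical observed end-to-start rise, in noise units

    exceed = sum(
        (s := ar1_series(LENGTH, PHI, rng))[-1] - s[0] >= observed_rise
        for _ in range(N_SIM)
    )
    p_value = exceed / N_SIM
    print(f"P(rise >= {observed_rise} | natural) ~ {p_value:.3f}")
    # Inverting this into P(natural | rise) requires a prior via Bayes'
    # theorem -- the misstep described in the comment above.
    ```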

    Elsewhere, I’ve described physical evidence based on ocean heat data that excludes a prominent role for natural fluctuations of the type discussed. My purpose here though is simply to provide a link for readers to compare the paper by Ludecke et al with the original work that they employed in their calculations.

    • Thanks, Fred, I had not noticed that.
      The difference in tone is also striking: the Lennartz and Bunde paper is cautious and full of caveats.

      • This is sort of technical. Dr. Tol, do you know how to tie a do-rag?

        Just kidding Doc. I am beginning to wish I had ignored this topic as you suggested. Another thread on this story and I am going to Robert’s blog. At least I wouldn’t be tempted to spend any significant time over there.

    • Well explained Fred!

      It’s on the list of common misconceptions concerning null hypothesis testing: P(data|H) is not equal to P(H|data). That pesky Bayes theorem needs to be dragged in to sort all that out, along with a host of other factors.

      They then go on to make the further mistake of counting up series that are ‘natural’ and ‘non-natural’ and seem (their English isn’t perfect so this may be me reading too much into it) to interpret the fraction that are natural as the probability that all the series are natural. This makes no sense at all and therefore one of the conclusions of their paper (conclusion (b) in How Natural is the Recent Centennial Warming? An Analysis of 2249 Surface Temperature Records) is entirely unsupported.
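      A toy numerical illustration of why P(data|H) and P(H|data) differ; the prior and the two likelihoods below are invented purely for illustration:

      ```python
      # Two hypotheses: H = "series is natural", not-H = "externally forced".
      # All three numbers are invented for illustration only.
      prior_h = 0.5                # P(H): an assumed prior
      p_data_given_h = 0.04        # P(data | H): e.g. a small significance level
      p_data_given_not_h = 0.60    # P(data | not H)

      # Bayes' theorem: P(H | data) = P(data | H) * P(H) / P(data)
      p_data = p_data_given_h * prior_h + p_data_given_not_h * (1 - prior_h)
      p_h_given_data = p_data_given_h * prior_h / p_data

      print(p_data_given_h)               # 0.04: what a significance test reports
      print(round(p_h_given_data, 4))     # 0.0625: obtainable only with a prior
      ```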

    • The authors appear to be unfamiliar with climate concepts, and so they oppose “natural fluctuations” to “anthropogenic warming”, which is not an accurate dichotomy.

      Really? How is that not an accurate dichotomy?

      • I’m not Fred, but I think what he’s driving at is that a deterministic external forcing, such as astronomical phenomena, isn’t in the same category as natural variation. It’s really kind of a semantic distinction as far as the big picture goes; there are things you know and things you don’t know, and things you don’t know go in the “natural variation” hopper until you know them.

      • The Lennartz/Bunde method is designed to evaluate distinctions between random walk type internal fluctuations on one hand and external trends on the other. If an external trend exists, the method doesn’t indicate whether it is anthropogenic or natural (e.g. from solar forcing), and so the distinction between “anthropogenic” and “natural” is a false dichotomy.

      • If an external trend exists, the method doesn’t indicate whether it is anthropogenic or natural (e.g. from solar forcing), and so the distinction between “anthropogenic” and “natural” is a false dichotomy.

        That is a bizarre usage of words. The distinction between processes that we can affect and processes that we cannot affect is fundamental, and is a fundamental part of the policy debates. What you have exhibited is a method that does not tell us whether a change was or was not effected by humans; that is not the same as saying the dichotomy is false.

        I suppose you could say that the mechanism of asphyxiation by CO is the same whether it occurred accidentally or intentionally, but the dichotomy is not false.

      • The dichotomy is quite false. The method doesn’t distinguish natural from anthropogenic temperature changes, but rather distinguishes natural unforced fluctuations on one hand from external trends that are either anthropogenic or natural, with no means of telling whether or not the trend is natural. Ludecke et al appeared quite confused on this, but that is a minor deficiency compared with the others that have been cited.

      • Mattstat,

        It is the attack of the climate speak!! Natural and external can be the same, i.e. solar is natural but external. Unless the system parameters are specifically defined, there is confusion. It is Spencer/Dessler all over again. So the paper can’t distinguish between internal variability, external natural variability or anthropogenic aerosol forcing (negative or positive), natural aerosol forcing (negative or positive) or anthropogenic land use change. The only thing it can say, with not a great deal of certainty, is that the major driver doesn’t seem to be CO2 forcing. Which is exactly what would be expected: inconclusive, because the time series available are not adequate for the purpose.

        The paper did stumble on a possible method of determining something more conclusive, but it was not pursued. That was the variation of temperature change with altitude. Had they used regional Tmax and Tmin with the data, they may have been able to increase the data to ~2400 points and had a more significant value that might have eked into the acceptable range.

        That’s why I prefer thermo and fluid dynamics.

        So what is the take away? Nearly every skeptical argument about questionable conclusions drawn from time series analysis is validated :)

      • Fred, you remind me of a saying from decades ago: you can’t distinguish the outlines of a building from background in the fog, but that does not make the distinction false.

        You are saying that an attribution they did not make would have been based on a false dichotomy; whereas the dichotomy is perfectly “true”, even though they did not make the attribution. Today I think that you tied yourself into a knot. You should accept the truth of Trenberth’s lament, which he called a “travesty” in private, but wrote papers about (“the missing heat”) in public: not enough is known about the heat flows.

      • Dallas: The paper did stumble on a possible method of determining something more conclusive, but it was not pursued. That was the variation of temperature change with altitude. Had they used regional Tmax and Tmin with the data, they may have been able to increase the data to ~2400 points

        That is a valuable suggestion, and I hope that someone follows up on it. Two problems remain, both related to the shortcomings of this paper: (1) there is still the problem of which temperature records to select — I doubt that there is a selection that will please a large majority of readers; (2) what you actually have is still just 1 temperature record of 100 years, with 1200 bivariate points.

      • Matt – I think if you read the paper by Ludecke et al, the false dichotomy will be clear. They claimed to distinguish between natural fluctuations and anthropogenic trends, but when you divide the pie into two pieces based on their method, the two pieces would be natural fluctuations and external trends (both natural and anthropogenic). By implying that the opposite of natural fluctuations were anthropogenic trends, they left out natural trends – hence the false dichotomy. “Natural” and “anthropogenic” are antithetical – a true dichotomy – but that’s not what they purported to distinguish.

        Unfortunately, Lennartz and Bunde made the same mistake, but their statistical treatment of the problem was at least legitimate, even if they got confused on what they were distinguishing from what.

      • Fred said: They claimed to distinguish between natural fluctuations and anthropogenic trends,

        They claimed nothing of the sort.

        Fred, I have to say it is you that needs to read the papers again. In all of your posts on this thread you are confusing ‘natural fluctuations’ and ‘natural variability’. In fairness you are not the only one – Girma does it as well up thread. I think this is the source of your confusion and the puzzled responses you are getting from Mattstat.

        You have even adopted the term ‘natural fluctuation’ as in the quote from you. But ‘natural fluctuation’ is the statistical term they are giving to the auto-correlated data in their attempt to account for persistence, and they are claiming they can separate the external trends from it. They merely speculate about what those external trends might be and don’t seriously discuss it, as that is outside the scope of the paper.

        To my knowledge, this is the first time this approach has been taken with the temp record and the implications are that if the approach can be validated then it might be a useful way to look at climate sensitivity, though, I stress, I don’t think they have attempted to do that here. It so happens that the amount of ‘external trend’ they found would be consistent with CO2 forcing alone with no feedbacks.

        It may be that your question regarding OHC is a valid problem. I don’t really think it would make much difference, because OHC and surface temperatures tend to correlate well, and for the purposes of the exercise the data they have used is as fine as it can be. But I strongly recommend you re-read the threads, and read other commentators’ comments – don’t just re-read your own and believe you have uncovered ‘evidence’.

      • Here’s an abstract of the Lennartz Bunde paper, to which Fred is referring, for clarification:

        Long-term memory is ubiquitous in nature and has important consequences for the occurrence of natural hazards, but its detection often is complicated by the short length of the considered records and additive white noise in the data. Here we study synthetic Gaussian distributed records x_i of length N that consist of a long-term correlated component (1−a)y_i, characterized by a correlation exponent γ, 0 < γ < 1, and a white-noise component aη_i. The autocorrelation function then has the general form C_N(s) = [C_∞(s) − E_a]/(1 − E_a), where C_∞(s > 0) = B_a s^(−γ) and E_a = {2B_a/[(2−γ)(1−γ)]} N^(−γ) + O(N^(−1)). The finite-size parameter E_a also occurs in related quantities, for example, in the variance Δ_N²(s) of the local mean in time windows of length s: Δ_N²(s) = [Δ_∞²(s) − E_a]/(1 − E_a). For purely long-term correlated data B_0 ≅ (2−γ)(1−γ)/2, yielding E_0 ≅ N^(−γ), and thus C_N(s) = [(2−γ)(1−γ)/2 · s^(−γ) − N^(−γ)]/[1 − N^(−γ)] and Δ_N²(s) = [s^(−γ) − N^(−γ)]/[1 − N^(−γ)]. We show how to estimate E_a and C_∞(s) from a given data set and thus how to obtain accurately the exponent γ and the amount of white noise a.
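        For the purely long-term correlated case (a = 0) the closed-form C_N(s) above can be evaluated directly. A minimal numerical sketch, where γ and N are arbitrary illustration values, not taken from the paper:

        ```python
        # Finite-size-corrected autocorrelation from the abstract, for purely
        # long-term correlated data (white-noise fraction a = 0):
        #   C_N(s) = [ (2-g)*(1-g)/2 * s**(-g) - N**(-g) ] / [ 1 - N**(-g) ]
        def c_n(s, gamma, n):
            b0 = (2 - gamma) * (1 - gamma) / 2   # B_0 in the abstract
            e0 = n ** (-gamma)                   # finite-size term E_0 = N**(-gamma)
            return (b0 * s ** (-gamma) - e0) / (1 - e0)

        gamma, n = 0.4, 500                      # illustration values only
        for s in (1, 10, 100):
            print(s, round(c_n(s, gamma, n), 4))
        # As N grows the correction vanishes and C_N(s) approaches
        # B_0 * s**(-gamma), the familiar power-law decay of long-term memory.
        ```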

        Fred states:

        If an external trend exists, the method doesn’t indicate whether it is anthropogenic or natural (e.g. from solar forcing), and so the distinction between “anthropogenic” and “natural” is a false dichotomy.

        As there were no known human forcings of any significance prior to around 1940 and yet there were significant climate fluctuations pointing to an external trend, one can assume a “null hypothesis” that climate fluctuations are caused by natural (not human) forcings, unless human forcings can be specifically identified and quantified. Since the latter is not the case prior to around 1940, we are back to an assumption of natural forcings as the cause for the observed climate fluctuations.

        Voila!

        Max

      • Max – the statement that there were no known significant anthropogenic forcings prior to 1940 is a common misconception on the Web, but is incorrect. Although anthropogenic ghgs dominated after about 1950, their effect was significant even before then, with total forcings representing the combination of ghgs, declining volcanic aerosols, and solar forcing. One can have reservations about the magnitude of the earlier ghg effects, but not their non-trivial nature, based on the known properties and concentration of CO2, methane, and other ghgs – see Transient Response to Greenhouse Gases.

    • “Elsewhere, I’ve described physical evidence based on ocean heat data that excludes a prominent role for natural fluctuations of the type discussed.”

      And, how well is ocean heat content known pre-ARGO? Not very, I’d say.

      • Bartemis, that is a huge point. Prior to ARGO it is a hodgepodge of less than stellar methods and coverage. If ARGO is wrong, Fred is right, but…

        Since the oceans do not seem to be too averse to dumping huge amounts of energy when they wish, ~2.5e22 Joules in 1999 following the super El Nino, ARGO doesn’t seem that unbelievable to me. So we are back to “beliefs” in the data if we only use time series.

      • There’s nothing wrong with ARGO although the methodology still needs to be refined. However, the large increase in ocean heat content since the 1950s doesn’t require ARGO data, but is well substantiated by temperature measurements via the earlier instrumentation reinforced by rises in sea level too great to be accounted for exclusively by melting of land ice. ARGO precision helps us even out the short term bumps and dips, but there is no serious possibility the ocean didn’t gain heat earlier. I think that is a dead end.

        However, if you are reluctant to accept this, you should contact the individuals most responsible for the ARGO data (e.g., Josh Willis) to get their views. Josh is available via email, I believe, and in the past, he was willing to respond to my email questions, so I assume he still is. Ask him if he has any doubts about very large increases in pre-ARGO ocean heat, and if he responds, please share his answer with us.

      • OHC increasing during a warming period is to be expected. OHC leveling during a temperature rise is a bit unusual. ARGO only covers a limited period, but the 2010 El Nino in the ARGO record is a touch different than 1998 in the pre-ARGO record. I am not as convinced as you are, and a few others, Eli Rabet for one, are a bit concerned with ARGO. Time will tell.

      • You mean that this figure is the smoking gun for CO2 driven heating of the Earth?

        http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/

        Is this the piece of evidence Fred?

      • Dallas – I’m not sure you understand the dynamics. OHC is expected to increase during a forced warming, but to decrease during warming from an internal fluctuation such as an El Nino (at least during certain phases). That’s one of the ways of distinguishing forced trends from internal fluctuations. If both types of warming influence are operating, the direction of OHC change depends on their balance. However, that’s true over the long term. We don’t yet have enough understanding of short term changes in measured OHC (not the same as actual OHC) to draw definitive conclusions.

      • “Well substantiated” is an incredible overstatement. Even Willis, who has discussed this on Pielke Senior’s blog – so no need to bother him – would not stand by that statement. The entire reason for the ARGO project in the first place was the unreliability of the bucket and air-intake measurements, which were never meant for this purpose. The ARGO measurements were fully expected to show an increase in temperature and they didn’t, catching everyone by surprise. If they had shown an increase it might have been proof. That they didn’t causes a major headache for the hypothesis: the “missing heat” problem. Of course you probably know all this, which makes me wonder if you are just propagandising: you fool nobody, though, except perhaps yourself!

        Also you are not keeping abreast of recent papers that discount thermal expansion in sea level rise and say it is only due to melting ice.

        Yes, there is no possibility the ocean didn’t gain heat earlier, but it has nothing to do with any increase in CO2, whose effects should be accelerating the ocean heat content rise, not stalling it! The ocean will continue to rise until we head back into an ice age. No reason to involve CO2 whatsoever.

      • Yes, there is no possibility the ocean didn’t gain heat earlier,

        James – I certainly agree with that statement, and Josh Willis and others will too, I expect, if you contact them. However, you may be confusing issues with your reference to bucket and air-intake measurements. Those involve sea surface temperature, whereas the studies of Levitus and others address the heat gain within the ocean itself.

        The ratio of thermal expansion to ice melting as a cause of sea level rise has been observed to vary during different intervals. Thermal expansion was an important factor in the past, with ice melting inadequate to explain more than a fraction. Recent years have seen the thermal expansion factor decline, with most of the rise due to ice melting. This is consistent with surface warming data.

        Your statement about CO2 really doesn’t seem to make any sense to me. We know from the long term record that the OHC rise resulted from a strong external forcing influence. The dominance of CO2 in this process can’t be deduced from the OHC data, but requires a comparison among the different forcings that were operating.

      • Fred, I am not sure anyone completely understands the dynamics. So I don’t feel offended. The part of the dynamics I think I understand is that the 1998 El Nino and the 2010 El Nino had different responses to whatever was the driving dynamic. It seems very unlikely to me that 1998 was forced and 2010 was unforced. It seems more likely to me that something changed. Whatever changed was not DWLR whatever value it may be assigned, not CO2 concentration, most likely an internal pseudo-oscillation. That internal pseudo-oscillation appears to have originated in the Southern Hemisphere.

        Since the 2010 El Nino did bump up OHC, just not as much and not as quickly, and the primarily equatorial warming from the 2010 event was felt longer in the NH than in the SH, I think my Antarctic theories are not just another rabbit hole. The fluid dynamic boundaries make more sense than most explanations.

        Internal pseudo-oscillations need a trigger. That is what would be fun to find.

      • Fred Moolten

        the large increase in ocean heat content since the 1950s doesn’t require ARGO data, but is well substantiated by temperature measurements via the earlier instrumentation reinforced by rises in sea level too great to be accounted for exclusively by melting of land ice.

        Let’s do a quick reality check on that, Fred.

        – prior to ARGO there was no real record of upper ocean temperature indicating a “large increase in ocean heat content”
        – even the expendable XBT devices used just before ARGO were spotty, sloppy and introduced a warming bias
        – since ARGO has been in service, most records have shown a net cooling
        – sea level has been rising at about the same rate since tide gauge records started in the 19th century; these show wild decadal variability, but on average the rate of rise was higher over the first half of the 20th century than over the second half

        Yes.

        It is most likely that the upper ocean has been warming, at least since 1850, possibly in the same multi-decadal fits and starts as the atmosphere, but the points you make are not a very robust substantiation for your claim.

        And upper ocean temperature trends do not show any acceleration of direct or indirect warming signals, which could be attributed to the increase of human CO2 emissions after around 1940.

        Max

    • Based on your wilful misrepresentation of ocean heat content, we can happily discount anything else you might come up with, since your spin doesn’t seem to accord with the facts. Why you do this I do not know. What is to be gained? You continually purport to be more certain than all but the most fervent real scientists.

  15. Just a few English edit suggestions:
    1) “slided” is not a word. Use “slid”.
    2) “the HURST exponent is feigned detrimentally without removing them.” makes no sense, as “feigned” means pretended with the intent to deceive.
    3) “Until now we have no idea “… Use “So far” or “To date”.
    4) “are desperately unknown and scatter widely” Desperately means with great urgency or intense despair. Meaningless in this context.

    • Brian H

      As someone whose native language is German, I saw the same “Germanese” flaws (plus a few others):

      “dringend” can be translated as “desperately”, “urgently” or “strongly” (I would have picked the latter or written “not known at all”)

      “behaupten” or “vorgeben” can be translated as “feign”, “pretend”, “allege” or “contend” (the latter two do not necessarily imply “intent to deceive”).

      Whatever their skills at statistical analysis, the authors are not experts at translating from German to English – but we should not hold that against them in this rather hasty response, don’t you agree?

      Max

      • Making edit suggestions is intended to help the authors write clear English, not to “fault” them. Except, perhaps, for relying too much on their own vocabulary and the first entry in dictionary translations.

        My actual #1 pet peeve with Euro-English is the vast over-use and abuse of the word “possibility”. ;)

  16. @Bart

    “Get some data and analyze it to see what is really happening, then build your model.”

    That is a really scary statement to me. The model comes first, as it is a philosophical framework based on economic or scientific theory. You then use the sample data to see if your model is a true reflection of reality, to prove or disprove your theoretical framework.
    Otherwise, you are violating the scientific method, which states that analysing the data comes after your hypothesis (and/or prediction) is made.

    • Complete rubbish. Faraday reported his observations but didn’t know why it worked until Maxwell produced the maths. Stefan and Boltzmann reached the same conclusions separately; one looking at the data, the other at theory. I could go on but maybe you get the point and can now stop being scared :)

    • Scott – the history of science is littered with examples of the practical preceding the theoretical. Often people couldn’t be bothered with the theory because they were interested in things working in the real world. You should have a quick read around and come up with a few hundred examples. A good place to start is steam engines :)

      • Ernest Rutherford used the mathematics that had been used to describe the surface tension of water droplets when he calculated the binding forces of the atomic nucleus; however, he didn’t think it was a droplet of liquid.
        The ‘lapse rate’ so beloved of climate scientists is a phenomenological description of observed data.
        The gas laws aren’t. They don’t work very well with actual gases, and worse with mixtures.

    • What Scott appears to be referring to is methodology. How to conduct a statistical analysis in such a way as to preclude bias.

      One of the major flaws in many scientific studies is to select a statistical method after you have looked at the data, which leads to the problem of selection bias – a form of cherry picking. If you don’t get the answer you want, choose another statistical approach until you do. I’m holding a course later on the subject, Hockey Stick 101.
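
      To put a number on that selection effect, here is a small stdlib-only sketch (purely illustrative, not from any of the papers discussed here). Under the null hypothesis each test’s p-value is uniform on [0, 1], so an analyst who tries several tests and reports the best one gets far more than the nominal 5% rate of “significant” results:

```python
# Sketch (illustrative): why choosing among several statistical tests
# after seeing the data inflates false positives.
# Under the null, each test's p-value is uniform on [0, 1]. If an analyst
# tries K tests and reports the best one, the chance of a "significant"
# result at alpha = 0.05 is 1 - (1 - 0.05)^K, not 0.05.
import random

random.seed(0)
ALPHA, K, TRIALS = 0.05, 5, 100_000

hits = 0
for _ in range(TRIALS):
    # K independent tests on pure-noise data: K uniform p-values
    p_values = [random.random() for _ in range(K)]
    if min(p_values) < ALPHA:  # report only the "best" test
        hits += 1

empirical = hits / TRIALS
theoretical = 1 - (1 - ALPHA) ** K
print(f"nominal alpha:    {ALPHA}")
print(f"theoretical rate: {theoretical:.3f}")  # ~0.226, not 0.05
print(f"empirical rate:   {empirical:.3f}")
```

      With five candidate tests the effective false-positive rate is roughly 23%, which is the statistical core of the “cherry picking” complaint above.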

      • ferd

        I’d go along with what you wrote, BUT…

        In a general discussion of what comes first, hypothesis or observation…

        Didn’t the apple first fall on Newton’s head?

        On the other hand we have nuclear fission…

        Looks like a chicken and egg debate.

        Max

    • “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”
      – Sherlock Holmes

  17. Fred

    Levitus used data as backup. The only data available is questionable, to say the least. Other criticisms can be found here:
    http://www.john-daly.com/mobydick/oceans.htm

    But it is really foolish to pretend that Levitus is any more than guesswork. Ask Willis again, why don’t you. I repeat, this is why ARGO exists: because prior to Argo the data cannot be relied upon. Argo is telling those who listen that it isn’t quite so simple as CO2 being the main driver of temperature.

    We don’t know anything about forcings from any long term record until and unless we know what causes ice ages and the end of ice ages. CO2 has been knocked out of that equation by Lowell Stott. Recent papers have finally shown that the real mechanism was orbital/solar influenced. Thus CO2 needn’t be a factor. Of course, if you assume it is the main factor, then you are also likely to conclude that. Such confirmation bias was the case with Levitus and too many others.

    • James – You suggest asking Josh Willis about the long term data. Here is what he said in his email exchange with Roger Pielke Sr.:

      “What we can say, without hesitation, is that over a time span of 2 to 5 decades, the oceans have warmed substantially. In other words, the long term rate of ocean warming has been reasonably well observed, but the year to year fluctuations are still a bit fuzzy. The cleanest part of the Argo record is still too short, and XBT errors are still too big to make definitive statements about short-term natural variations in ocean heat content.”

      I try not to be excessively blunt, but your statement that CO2 has been “knocked out of the equation” is too bizarre to require a detailed response.

      • Indeed. This is the sort of confusion you expect when your primary source material is “john-daly.com” rather than the actual scientific papers being discussed, as well as the discussions of those papers in meetings or in subsequent literature. OHC has generally been increasing, with a “flatline” since 2003 or so that is not especially significant (statistically) in the context of the long-term trend, due to interannual variability.

        It’s also been known for a long time that the Earth’s orbit matters for ice ages, but no one has wiped out the idea that CO2 is a primary factor as well (see e.g., Kohler et al 2010, QSR).

        James’ comment is a clear example of disinformation that results from not studying the subject.

      • “This is the sort of confusion you expect when your primary source material is “john-daly.com” rather than the actual scientific papers being discussed, as well as the discussions of those papers in meetings or in subsequent literature.”

        – pretentious hipster

      • “What we can say, without hesitation, is that over a time span of 30 decades, the oceans have warmed substantially.”

        “What we can say, without hesitation, is that over a time span of 800 decades, the oceans have cooled substantially.”

        “What we can say, without hesitation, is that over a time span of 2000 decades, the oceans have warmed substantially.”

        “What we can say, without hesitation, is that over a time span of 500000 decades, the oceans have cooled VERY SUBSTANTIALLY.”

        Natural Climate Change in action.

        http://deforestation.geologist-1011.net/PhanerozoicCO2-Temperatures.png

      • Fred Moolten

        Josh Willis was a co-author of the Hansen et al. “hidden in the pipeline” paper (where the “pipeline” was defined as the upper ocean, which Willis’ group happened to be measuring at the time).

        The evidence for the validity of Hansen’s model estimates of climate sensitivity was the upper ocean warming, where half of the “missing heat” was allegedly hiding.

        Time has passed since then.

        The ARGO data are now in, and they do NOT show the expected “pipeline” warming.

        Max

      • Max – The “pipeline” was not defined as the upper ocean – that is a common fallacy on the Web. Rather, Hansen et al defined it as energy not yet in the climate system (neither in the ocean nor anywhere else), which would arrive in the future if a persistent forcing remained. The same concept, without the use of the word “pipeline”, was described by other authors at the same time as the 2005 Science paper by Hansen et al – Earth’s Energy Imbalance.

        From that paper: “the forcing not yet responded to, averaged ~0.75 W/m2 in the past decade and was ~0.85 +/- 0.15 W/m2 in 2003 (Fig. 1C). This imbalance is consistent with the total forcing of ~1.8 W/m2 relative to that in 1880 and climate sensitivity of ~2/3°C per W/m2. The observed 1880 to 2003 global warming is 0.6° to 0.7°C (11, 22), which is the full response to nearly 1 W/m2 of forcing. Of the 1.8 W/m2 forcing, 0.85 W/m2 remains, i.e., additional global warming of 0.85 x 0.67 ~ 0.6°C is ‘‘in the pipeline’’ and will occur in the future even if atmospheric composition and other climate forcings remain fixed at today’s values”.

        Please see my comments elsewhere in this thread regarding the ocean heat data and the certainty that OHC increased substantially over the 1950-2005 interval despite short term ups and downs.
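
        For anyone who wants to check the arithmetic, here is a minimal sketch using only the numbers quoted above from Hansen et al (2005); nothing beyond the quoted passage is assumed:

```python
# Reproducing the arithmetic in the quoted Hansen et al. (2005) passage.
sensitivity = 2.0 / 3.0  # ~0.67 C per W/m^2, quoted climate sensitivity
total_forcing = 1.8      # W/m^2, relative to 1880
imbalance = 0.85         # W/m^2, forcing "not yet responded to" (2003)

equilibrium_warming = sensitivity * total_forcing  # full response: ~1.2 C
pipeline_warming = sensitivity * imbalance         # ~0.57 C, rounded to ~0.6 C
realized_warming = equilibrium_warming - pipeline_warming  # ~0.63 C

print(f"equilibrium warming: {equilibrium_warming:.2f} C")
print(f"'pipeline' warming:  {pipeline_warming:.2f} C")
print(f"realized so far:     {realized_warming:.2f} C")
```

        The realized figure of ~0.63 C is consistent with the quoted observed 1880-2003 warming of 0.6° to 0.7°C, which is the internal consistency check the paper relies on.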

      • Fred, pipes typically have openings at both ends. At one end you have energy going in from the imbalance at the TOA. At the other end you have this same energy going into the oceans. You can measure how much energy is going through the pipe by measuring what goes in or what comes out. Hope this clears it up for you because this argument is old.

      • Steven – There is no “argument” that Hansen’s “pipeline” refers to energy not yet anywhere in the climate system (or at least there shouldn’t be), nor is there argument that the oceans are a sink for heat flowing into the system, with a long time constant. What is false is the use of the word “pipeline” to imply that Hansen was referring to heat already in the system – see the quotation from his paper above.

        This could be considered a semantic quibble if it weren’t for the fact that the meaning of “pipeline” has been attributed to someone who clearly used it in a manner different from what is commonly stated. There is also an important physical principle involved, however. Because the “pipeline” refers to future heat, “pipeline” heat can be eliminated in theory simply by eliminating the forcing. This would happen if all anthropogenic CO2 emissions ceased. That’s unrealistic, of course, but it also tells us that “pipeline” heat will be less if emissions are reduced than it would be if they continue unabated.

      • The energy can’t be eliminated. It is either going into the oceans at the flow rate expected or it isn’t.

      • Steven – Of course it can be eliminated. If the forcing is eliminated (e.g., by ceasing CO2 emissions), “pipeline” heat will disappear, because it is heat that would result from a future radiative imbalance rather than any imbalance that has existed up to the present. In other words, the heat is not yet in the system and could be prevented from ever reaching Earth. Some of this is discussed in more detail by Matthews and Weaver in a Nature Geoscience article – for a non-paywalled discussion, see the RC post at Climate Change Commitments.

      • There is a flow of energy into the oceans from the energy imbalance. That flow rate is what created the missing heat problem. That energy should already be there, it is not some mysterious future energy. Can you stop the flow rate by reducing forcing, sure. But that doesn’t change the amount that should already be there.

      • Steven – I’ve tried to describe why the “pipeline” heat, as defined by Hansen (also by others – e.g. Wigley et al) isn’t anywhere in the system. It’s not in the ocean, the atmosphere, the land, or anywhere else, because it hasn’t yet arrived at the planet. If CO2 emissions ceased, it would never arrive, and if emissions are reduced, less would arrive. If you don’t want to accept that, all I can do is recommend that other readers visit the above link and the earlier quotation from Hansen et al.

        Like many on the Web, I believe you are confusing Hansen’s “pipeline” with the “missing heat” attributed to Trenberth’s analysis. Trenberth is referring to heat that he believes should be found somewhere in the system from past radiative imbalances. The “pipeline” heat refers exclusively to future imbalances and to heat that hasn’t arrived, and so the two concepts are different. Hansen, in fact, disagrees with Trenberth as to whether any heat is actually missing. The fact that the oceans absorb heat doesn’t change the fact that Trenberth and Hansen are referring to different entities.

      • “additional global warming of 0.85 x 0.67 ~ 0.6°C is ‘‘in the pipeline’’ and will occur in the future even if atmospheric composition and other climate forcings remain fixed at today’s values”

        The pipeline clearly refers to the current energy imbalance and the flow of energy into the oceans that would eventually cause warming after considering the inertia of the oceans. That is why he states “even if atmospheric composition and other climate forcings remain fixed at today’s values”. This is from the paper I referenced to you earlier and you just recently referenced. The meaning is clear.

      • Steven – I can’t force you to admit you are wrong, even though that is unequivocal, but I can ask other readers to review the Weaver and Matthews analysis, the Hansen quotes on forcing the climate “has not yet responded to”, and the reports by others who used the same concept without the word “pipeline” (e.g., Wigley et al). They will see that you are simply clinging to a false understanding that has permeated Web mythology despite what is in the actual scientific discussion.

        While I appear unable to get you to abandon your misconception, I think you might ask Hansen himself, explaining to him that what he meant by “pipeline” was heat already here rather than somewhere in the dynamics of energy generation in the sun and not destined to arrive here until later if ever. If he responds, I hope you’ll share his response with us.

      • Why would I bother someone to ask a question that is perfectly clear? You ask him and share his answer with the class instead. (eyeroll)

      • My take on this. Steven is not wrong at all.

        This statement from Fred, however, is completely backwards.

        Of course it can be eliminated. If the forcing is eliminated (e.g., by ceasing CO2 emissions, “pipeline” heat will disappear, because it is heat that would result from a future radiative imbalance rather than any imbalance that has existed up to the present.

        No, the pipeline refers to the CURRENT energy imbalance; not a prospective future imbalance. It means that the system is not yet in equilibrium; the Earth is absorbing more energy than it is emitting. If atmospheric CO2 levels could be somehow held fixed, so that there is no future change in forcings, then the current energy imbalance will mean the Earth continues to absorb energy until it is warm enough to restore the energy balance. That is what is meant by the pipeline.

        Can I suggest you all avoid using the word “heat” in this context? It’s a bit confusing, since heat usually refers to an energy flow arising from temperature differences. It’s not clear in context whether you are referring to internal energy by virtue of its temperature, or energy flow (power) by virtue of a disequilibrium.

      • Chris – You appear to have misread the comments here, as related to the 2005 article by Hansen et al as well as the Matthews and Weaver paper discussed on RC (see all the links in my comments above).

        I say that because you correctly surmise that the warming would occur if the current forcing were maintained, but then overlook that Steven (and many others – this is a common Web misconception) asserts that the heat (a correct term physically) is already somewhere in the climate system. It isn’t, and that’s the nature of the persistent Web fallacy. It would only enter the system if a forcing persisted, but if you look at the RC discussion, you’ll see that there would be no warming if CO2 emissions ceased – the “pipeline” heat would disappear.

        Your interpretation of the physics is correct, but it confirms the point I’ve been making that the energy is only a potential future addition to the system if a forcing is allowed to persist, and contradicts the beliefs by Steven and others that it is heat already somewhere in the system. If you reread his comments, you will see that they are incompatible with your interpretation or with the explicit statements by Hansen et al about forcing “not yet responded to”, all of which refer to energy that hasn’t yet arrived rather than residing in the ocean or elsewhere.

        This is well understood in the literature, but consistently misrepresented in many blogosphere discussions.

      • Chris – Just to correct a relatively small error in your interpretation of the physics, it is not true that the “pipeline” refers to the current imbalance rather than future imbalance. To understand this, visit the RC discussion of Matthews and Weaver.

      • Chris – Here is the link again.

      • Following what you mean is not always easy. I don’t think heat is the correct term to use for energy that is in the system, though it is often used that way. “Ocean heat data” would be more technically correct as “ocean thermal energy data”. In any case, it is confusing when the word heat is being used both for an energy flux and an energy content.

        Be that as it may, the following statement is certainly incorrect:

        It would only enter the system if a forcing persisted, but if you look at the RC discussion, you’ll see that there would be no warming if CO2 emissions ceased – the “pipeline” heat would disappear.

        The error here is saying “if CO2 emissions ceased”. The truth of the matter is that the heat (energy flux) in the pipeline is a consequence of CO2 already emitted. The existing atmospheric greenhouse, which is established right now, means that there is more energy coming in than going out, and as a result of that the internal thermal energy is increasing (the internal thermal energy is not the pipeline).

        If emissions cease, atmospheric carbon levels will start to fall, but not fast enough to restore equilibrium… there will STILL be more energy being absorbed, and increasing thermal energy and continuing warming.

        I use the word “heat” for an energy flow from a warm object to a cooler one. A pipe is likewise suggestive of flow, unlike a reservoir which is suggestive of energy store.

        When someone says the heat is already here, or not here, it’s unclear what they mean. As flux, yes it is. As content, no it isn’t.

        The energy flow certainly IS here, and will remain here even if emissions cease. The warming that will result is because of the future increase in energy content as the flow continues. So the thermal energy flux is here now and will be for some time; the pipeline does not refer to thermal energy content already in the ocean or anywhere else.

      • Postscript. Fred, you refer me to the Mathews and Weaver discussion at realclimate, but you misrepresent what that discussion is saying. You say:

        it is not true that the “pipeline” refers to the current imbalance rather than future imbalance. To understand this, visit the RC discussion of Matthews and Weaver.

        The RC discussion backs up the fact that the pipeline refers to the current imbalance. How you get anything else from that I have no idea.

      • Chris – To see why you have misinterpreted the “pipeline” concept, look at the “zero emissions” curve in the RC discussion. It shows that if emissions ceased, there would be no further warming despite the existence of a flux imbalance at the time of the cessation. It is only future imbalances that would lead to “pipeline” warming. That energy isn’t yet here and would not arrive at all if we followed the trajectory in the “zero emissions” curve.

        I don’t want to dwell on this too much longer because the concept is well understood in the literature even though the word “pipeline” is only Hansen’s way of describing it. Because a number of authors have made the same point, you could email any of them for clarification. If you go to the 2005 Science article by Hansen et al, some of the other papers with the same theme are mentioned there and you could review them to get a better idea of how this concept was developed.

      • First, imo, which doesn’t mean much, Fred is entirely correct.

        Current means today, at the end of which there is apparently some imbalance. Reaching equilibrium, eliminating the imbalance, happens at some future date. Maybe I’m an idiot, but doesn’t that require some sunlight that ain’t been here yet interacting with some CO2 that is already here?

        I think the use of ‘future imbalance’ is confusing.

        I found Gavin’s response here and raypierre’s response here useful.

      • I have not followed every comment by Fred, Chris Ho-Stuart, and steven, but some general remarks about “warming in the pipeline.”

        Net positive climate forcing causes the Earth to retain energy that is primarily stored in the oceans. The presence of a radiative imbalance at the top of the atmosphere (and, as a closely related proxy, the increase in energy content of the oceans) implies that the oceans have not yet warmed enough to be compatible with a ~390 ppm CO2 world. In other words, if atmospheric concentrations were to stabilize at 390 ppm, there would still be some extra warming during the timeframe over which the planetary imbalance decays to zero (which is accomplished by a warming of the oceans and, in turn, an increase in infrared emission to satisfy energy balance). Figure 1 in this paper is a qualitative cartoon that may help readers’ intuition about this concept:
        http://eprints.whiterose.ac.uk/43307/2/JGR_2009JD012105%5B1%5D.pdf

        So in a sense Fred is correct– if we were to track individual photons, the “warming in the pipeline” may be related to energy that is not yet in the Earth system. We are not talking about “lurking energy” that is here but has somehow not been realized as a temperature increase yet, only to haunt us later. An easier way to view this may be from the top of atmosphere energy budget perspective rather than trying to think about ocean heat storage. Frozen at 390 ppm, the current radiative imbalance needs to decay somehow, and that is accomplished through future warming. If for some reason the outgoing infrared emission did not increase with temperature (as occurs in the optically thick runaway limit), then for a given forcing, the warming in the pipeline would continue forever (at least until the oceans boiled away, stopping any further water vapor feedback).

        There is also the issue of whether or not the committed warming in the pipeline is necessarily committed. This relates to the point of distinguishing between the scenario I presented above (a constant CO2 concentration scenario), and a zero emission scenario (which implies a reduction in CO2 concentration).

        The constant CO2 concentration scenario is rather arbitrary. Sustaining it implies a ~70-80% reduction in emissions (see the recent NAS report). It follows (trivially) that an 80% reduction in emissions implies that there are still some future emissions (20% of the current value). Yet it is not fair to call the warming in the pipeline “committed” if we are still allowing for some future emissions, even if they are small. That point seems to have been forgotten by many authors discussing this. Thus, it is only fair to call warming “committed” if we allow for zero emissions in the future.

        In this case, it is possible to wipe out the ‘warming in the pipeline’ because CO2 will be declining at the same time there is a radiative imbalance. A reduction in CO2 implies an increase in the OLR, and so it is possible to stabilize the temperature anomaly at the current ~0.8 C above pre-industrial. The problem of course is that no one actually believes we can fall into a zero emission scenario, so this is mostly academic.
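
        The decay-of-the-imbalance picture can be illustrated with a toy one-box energy balance model. All parameter values below are illustrative assumptions, not numbers taken from any paper discussed here:

```python
# Toy one-box energy-balance model (illustrative parameters, assumed for
# this sketch): with forcing F held constant (concentration frozen),
# temperature T relaxes toward equilibrium F/lam while the
# top-of-atmosphere imbalance N = F - lam*T decays to zero -- the
# "decay of the imbalance through future warming" described above.
F = 1.8    # W/m^2, constant forcing (assumed)
lam = 1.5  # W/m^2 per C, climate feedback parameter (assumed)
C = 8.0    # W*yr/m^2 per C, effective ocean heat capacity (assumed)
T = 0.6    # C, warming already realized (assumed starting point)
dt = 0.1   # yr, time step

for _ in range(int(500 / dt)):  # integrate 500 years, forward Euler
    N = F - lam * T             # radiative imbalance
    T += dt * N / C             # ocean warms in proportion to imbalance

print(f"equilibrium warming F/lam = {F / lam:.2f} C")
print(f"T after 500 yr            = {T:.2f} C")
print(f"remaining imbalance       = {F - lam * T:.4f} W/m^2")
```

        With these assumed numbers the adjustment time constant is C/lam, a few years for the mixed layer alone; a deep-ocean heat capacity makes the approach to equilibrium far slower, which is exactly why the “fuzzy” decadal OHC record matters in this debate.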

  18. “Natural”? What the heck does the term mean anyway? It isn’t as though energy is created or destroyed spontaneously; there is always a reason for the change in the balance between incoming and outgoing. What is the reason for the observed increase in energy in the earth system over the last 100 years or so? Saying, “it’s natural” is about as scientific as saying “God did it”.

    We have a clear record of CO2 increasing over the same period of time as the energy level increase, and we have a clearly understood mechanism for why that would happen. What other mysterious, “natural” mechanism has changed over the same time period? Nothing. Seriously folks, there is no real debate over what is causing the warming. The debate is over how much and how fast, not if.

    Global warming has already had its Galileo; his name was Arrhenius.

    • Chris – Although I’m inclined to agree with your overall assessment, there are in fact other potential warming mechanisms to consider. These include the following:

      Anthropogenic greenhouse gases other than CO2 (mainly methane, but also nitrous oxide (N2O), ozone, and halogenated carbon compounds)

      Black carbon (mainly anthropogenic from combustion, but also naturally produced from forest fires)

      Solar irradiance changes, including total irradiance and energy in specific spectral regions.

      Declining cooling aerosols, due either to declining production of anthropogenic aerosols primarily of the sulfate type or declining volcanic aerosols.

      Internal climate fluctuations whereby the oceans give up heat to the surface, cooling in the process.

      These all must be assessed. When that is done, we have good evidence that total anthropogenic greenhouse gases contributed most of the warming during the interval from 1950 to 2005, but their relative contribution earlier, while significant, is less certain.

      • Just to clarify my earlier point on cooling aerosols. Their decline, had it occurred, would have constituted a warming mechanism, but in fact they appear to have exhibited a slight net increase over the 1950-2005 interval, and so whatever their magnitude, they would not have contributed to warming. The internal climate fluctuations also did not contribute, or contributed very little, because the oceans warmed over that interval rather than cooled.

      • They have all been assessed. Which one of those do you think is not sufficiently well assessed so as to leave the matter of the major source of recent warming unclear? That’s a serious question, by the way; I’m genuinely curious which of those you think might still reasonably be the basis of some alternative to the enhanced greenhouse effect.

      • Chris – I’m not sure who you are addressing your question to. After 1950, we have enough evidence to attribute most warming to anthropogenic ghgs. Before that, the ghgs undoubtedly contributed, but other forcings plus an unknown level of natural unforced internal climate fluctuations probably played a major role as well, possibly a predominant one.

      • I was asking you, Fred, as I was not quite sure what you were saying. The concluding paragraph was:

        These all must be assessed. When that is done, we have good evidence that total anthropogenic greenhouse gases contributed most of the warming during the interval from 1950 to 2005, but their relative contribution earlier, while significant, is less certain.

        I agree with all of this, except for the apparent implication that the assessment is still to be done. My take is that the additional factors you mentioned have all been assessed, and we do have good evidence right now that total anthropogenic greenhouse gases contributed most of the warming from 1950 to 2005.

        Evaluating the effect of declining aerosols is hard to work out, since it mixes in with the difficult physical problem of cloud modeling; but it’s still pretty clearly a secondary contributor.

        That’s what I was curious about; I took you as saying we are still waiting on a decent assessment and after that we’ll have the good evidence.

        Agree that contributions prior to 1950 are less certain, that’s always going to be the case since we are necessarily working with more limited data. What data is available suggests that anthropogenic greenhouse gases are still a major contributor prior to 1950, but not as dominant.

    • LOL, Arrhenius was no Galileo. Which global warming paper was his masterpiece? The one trashed by Angstrom or the one he was too embarrassed to publish 1.6 (2.1)?

      • He’s better known for his kinetic rate law, known in P-chem circles as the “erroneous law”.

      • On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground (Svante Arrhenius, 1896).
        More than 100 years old, but still memorable. That’s the difference from Lügecke’s paper.

    • Oh, Chris, since you are so well acquainted with the works of the Galileo of Global Warming: see page 273, the final table. The value K is CO2 concentration; assuming it was 280 ppm in 1896, 0.67K would be 188 ppm, his estimate of the glacial concentration, and 1.5K would be 420 ppm. Maximum warming at 420 ppm would be at latitude 25N and, by Galileo’s reckoning, 4.5 degrees warmer.

      http://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf

      Interesting read Chris.

    • Climate science has a very interesting and rich history, and certainly more than one person stood out as some of the early pioneers. Tyndall, Arrhenius (who is better known for his work in chemistry), Callendar, Plass, etc all stand out as some of the “pioneers” and much of the physics has developed from work not even intended specifically for climate application– Planck, Einstein, Boltzmann, etc and the quantum revolution helped tremendously in this respect. More recently, people like Manabe, James Hansen, Wally Broecker (who coined the term “global warming”) have positioned themselves into the future history books.

      Some of these early pioneers were by no means mainstream. The work by Angstrom certainly applied as something that criticized Arrhenius’ work, and the “rebuttal” became widely believed by the community, and variations of Angstrom’s arguments were routinely cited as arguments as to why the CO2 greenhouse effect was not really important (at least up until WW2 or so). However, it turned out that Angstrom’s arguments were invalid on a number of grounds. They were based on flawed experimental setups for tackling the problem at hand (supplemented by incomplete spectroscopic understanding at the time), and also a completely wrong idea of how the greenhouse effect actually worked. By the 1950’s and 1960’s, and certainly by the time Manabe and Wetherald published their radiative-equilibrium modeling study, these fallacious arguments would be corrected and a proper conceptual framework would have been formulated for why CO2 warms the troposphere and surface, while simultaneously cooling the upper atmosphere. It is disappointing that some people still live in the stone age and cite some of the fallacies that were corrected well before they were even born.

      Of course, the pioneers were themselves wrong on many grounds too. The original paper by Fourier, which is widely cited as introducing the notion of a greenhouse effect (without actually using that term), assumed that the “temperature of space” was comparable to Polar Winter and was a major heat source for the planet, when in fact it is now known to be ~2-3 Kelvin and climatically insignificant. Arrhenius, although in the correct ballpark with his climate sensitivity estimates (which was almost certainly lucky), thought that humans could never emit CO2 at an important enough rate to be climatically important. This fallacious reasoning was based on an overestimation of how strong ocean buffering was.

    • Chris, Arrhenius is much closer than I would have expected. If you use the actual radiation layer temperature of the CO2 concentration capable of emitting, not releasing due to collisional de-excitation, it is adequate for estimates. Of course, Arrhenius miscalculated that temperature a touch; 249 K would have been much better than 255 K, which is the reason his water vapor feedback was so high in the first paper. The greenhouse effect has an interesting history.

        By the by, what would you estimate the average Antarctic radiant layer temperature to be? Bit of a bugger there t’ain’t it? Nearly halving the temperature with a fourth power relationship really changes things. But as Gavin said to me a few years ago, “We know more than Arrhenius.”

      • More recently, people like Manabe, James Hansen, Wally Broecker (who coined the term “global warming”) have positioned themselves into the future history books.

        http://bit.ly/iyscaK

        Into the history books for the wrong predictions!

    • Chris, ‘natural’ in terms of these papers is what they term a statistical effect with auto-correlation. They don’t mean ‘natural variability’ in the sense we generally use it and in the way Fred is confusing it.

      What they were trying to work out is how much of the fluctuation is due to ‘persistence’ and so remove it from the data to uncover what they call an ‘external trend’ meaning actual trend that was forced by other things than persistence. Those things might be CO2, solar changes, UHI – whatever.

      It doesn’t tell us what causes the changes, but it might be useful combined with other kinds of studies to help work out what is going on by acting as a sort of ‘noise filter’.

      • Agnostic – That is not what they attempted. It would have been if they had not detrended the data before computing Hurst coefficients and standard deviations. What they did do was attempt to evaluate that part of the autocorrelation that was due to non-trend (“natural”) variability as opposed to external trends. However, they confused the latter with anthropogenic trends alone even though such trends can be natural, resulting in the “false dichotomy” I mentioned. That was a minor problem, though, compared with their statistical misinterpretations and the incompatibility of their conclusions with the physical evidence based on ocean heat content.

      • Hi Agnostic,

        I’ve read the paper and I’m still not sure what they meant by ‘natural’ and ‘non-natural’.

        From the abstract of Ludecke et al. “‘Natural’ means that we do not have within a defined confidence interval a definitively positive anthropogenic contribution and, therefore, only a marginal anthropogenic contribution can not be excluded.”

        “Persistence over many months, seasons, years, decades, and even longer periods is usually associated with anomaly patterns in sea surface temperatures, and even with the influence of long-term variations in the sun’s activity” (page 2 of Ludecke et al).

        “As a consequence, a conventional examination of a temperature record can not tell us whether an observed change in temperature is natural or an external trend. In particular, the question of how to distinguish a trend caused by anthropogenic greenhouse gases from a natural fluctuation remains unresolved and has been dubbed the ‘detection and attribution problem’. The detrended fluctuation analysis (DFA) as specified in the next paragraph has recently emerged as an effective tool for tackling this problem.” (pg 3)

        “‘Natural’ denotes that there is no definite anthropogenic trend in the record” (pg 7)

        “As a consequence, ‘natural’ applied for a<W<(1-a) means that we do not have within the (1-2a) confidence interval a definitely positive anthropogenic contribution in the record and, therefore, a vanishing anthropogenic contributions [sic] must not be excluded. For the sake of a better comprehensibility we prefer to use in the following 'natural' instead of this correct but somewhat clumsy explanation." (pg 9)

        The text around equations 12 and 13 almost immediately redefines 'natural'.

        On page 16 'natural' has lost its quotation marks.

        "'Natural' means that we do not have within a defined confidence interval a definitely positive anthropogenic contribution and, therefore, only a marginal anthropogenic contribution can not be excluded."

        The definitions and conclusions they draw in the opening post of this topic are somewhat more careful than in the paper.

        Some of these differences in the definition of 'natural' could be due to language difficulties, but I don't think all of them are. The identification of the external trend with anthropogenic contributions is (in the paper) fairly consistent. Quite what they mean by natural fluctuations is left more or less unstated. They note that persistence could be due to changes in the sun's output, which one could consider an external trend (not a linear one though).

        What they claim and what their analysis shows can be, to a certain extent, decoupled. Their analysis shows something. My take is that, for a bunch of different subsets of stations, they have calculated the fraction of those stations for which 95% of random series with the same Hurst coefficient have a trend less than the station trend.

        What this tells us about the real world, I don’t know. It certainly doesn’t answer the question “How Natural is the Recent Centennial Warming?” in any sense that a climatologist would understand the question.

      • My take on it is that they have showed that for a bunch of different subsets of stations they have calculated the fraction of those stations for which 95% of random series with the same Hurst coefficient have a trend less than the station trend.

        Nebuchadnezzar – Their descriptions are so confusing and often self-contradictory that it’s hard to know exactly what they mean. In this case, based on the equations, I think they are saying that unforced fluctuations with a given Hurst exponent and standard deviation will yield values of “relative temperature change” Δ/σ that will fall within a defined confidence interval 95% of the time, and so any observed record with a value within that interval can’t be claimed to be unnatural (“anthropogenic” in their terminology, although that narrow definition makes no sense). This would be true (if I understand their description) even if, say, the observed data exhibited a value greater than the majority of “natural” samples would be expected to yield. Also, Δ/σ is not a measure of temperature change, but of temperature change divided by standard deviation. A very large temperature change might therefore be interpreted by them as “natural” if it was “noisy”, and so failure to fall outside their confidence interval does not provide information about the magnitude of the warming.

        I posted some additional thoughts on the original method they adopted from Lennartz and Bunde below. The latter authors provide a clearer description of how DFA might legitimately be used, but some caveats are warranted.
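        The flavor of the underlying significance test can be sketched with a toy Monte Carlo. The sketch below uses AR(1) noise as a simple short-memory stand-in for the long-term-persistent (fractional Gaussian) model Lennartz and Bunde actually employ, so the record length, persistence parameter, and “observed” value are all illustrative, not taken from the papers:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_series(n, phi, rng):
    # AR(1) noise: a short-memory stand-in for the long-term-persistent
    # model used in the paper (illustrative only)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

def relative_trend(x):
    # Delta/sigma: the linear trend over the whole record divided by the
    # standard deviation of the detrended record (the test statistic)
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    delta = slope * (len(x) - 1)
    sigma = np.std(x - (slope * t + intercept))
    return delta / sigma

# Monte Carlo: what fraction of purely unforced ('natural') records show
# a relative trend at least as large as a hypothetical observed value?
n_years, phi, observed = 100, 0.6, 2.0
samples = [relative_trend(ar1_series(n_years, phi, rng)) for _ in range(2000)]
exceedance = float(np.mean(np.array(samples) >= observed))
# a large exceedance fraction would mean the observed trend cannot be
# called 'unnatural' at the corresponding confidence level
```

        Under the paper’s logic, an exceedance fraction above the chosen significance level means the observed trend cannot be declared ‘unnatural’, whatever its absolute magnitude – which is exactly the Δ/σ caveat above.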

  19. Chris,
    BTW, the Arrhenius paper is available online; great typesetting. There is a curious reference to one of Svante’s peers who estimated that without water vapor’s radiant properties, the surface of the Earth might be 200C colder. That would kinda kick up the “Greenhouse” effect a touch, wouldn’t it?

  20. What is natural?

    humans are natural (a part of nature), so is what we do natural? If so, then the debate is over, the warming can only be attributed to natural sources.

    • Indeed. Humans are as natural as any other organism. Sure, we are more powerful and can do a lot of damage to the environment, but it is still natural!

    • Nice to see that you share my own experiences with EIKE I mentioned in the first thread, I summarize:
      You are stupid, you don’t understand anything and so I refuse to discuss anything with you. So don’t waste your time any further, remember that you’ve done Judith’s job.

      • Limburg’s piece was badly informed: He had not even bothered to look up my CV. Luedeke’s response is bad-tempered, rude, and wrong. The onus is on the analyst to show that the covariance structure does not change by filtering the data, and that the crucial hypothesis, self-similarity, holds.

  21. Thank you, everyone. A good thread, and I’ve learned a lot.

  22. Mattstat,

    On the altitude thing. The authors have an excellent chance to turn their papers into something of significance. Since the Tmean series cannot adequately separate the origin of forcing/feedback, comparing it to Tmin could yield statistically significant indications of CO2 forcing or UHI effect. Then, by comparing Tmax to both Tmin and Tmean, there are physical properties that separate aerosols and solar variation. “Natural”, whatever that may be, should be more evenly distributed between all three series.

    The altitude impact is a CO2 forcing signature, à la the relationship between surface and mid-troposphere temperature projected by models.

    There is more than one way to skin a catfish.

    • The altitude impact, is a CO2 force signature ala the relationship between surface and mid troposphere temperature projected by models.

      As was discussed in the recent Troposphere thread, this relationship is a signature of any surface temperature change, not specific to CO2 or GHG forcing.

      • It would be, all things remaining equal. Water vapor change with altitude should allow a more CO2-specific impact to be picked out. No guarantees of course, but the change over time should be observable, especially in Tmin where solar noise is reduced. Worth a grant, in my opinion.

    • “The altitude impact, is a CO2 force signature ala the relationship between surface and mid troposphere temperature projected by models. ”

      I might need to add: the integral of CO2 absorption/emission from the surface to TOA is not equal to the integral of that ratio from the surface to the tropopause plus tropopause to TOA, which in turn is not equal to the integral from the surface to the latent shift boundary (LSB) plus LSB to tropopause plus tropopause to TOA.

      For some reason that is not obvious, but it is the impact of local emissivity variance and filtering of GHG spectral lines by the GHGs themselves, due to broadening at higher temperature and collisional interaction. Small things matter.

  23. Perhaps best put by Ralph Keeling in reference to another vanity publication

    “The Beck article provides an interesting test case for E&E’s recently advertised willingness to serve as a forum for “skeptical analyses of global warming” (E&E mission statement, Dec. 2006). The result was the publication of a paper with serious conceptual oversights that would have been spotted by any reasonably qualified reviewer. Is it really the intent of E&E to provide a forum for laundering pseudo-science? I suggest that some clarification or review of the practice is appropriate”

    • This should not go unremarked. The original MBH98 also had “serious conceptual oversights.” It is hard to find a paper in climate science that does not have serious conceptual oversights. If a publication is to review its practices, I would suggest that the review begin with Nature and Science. Nature published an execrable paper based on one flood in England. (not to be confused with a very good paper on extreme rainfalls in the Central U.S. published at the same time.) The Luedeke et al papers are not as bad as that article published in Nature.

  25. I have really enjoyed the scientific debate on these 2 papers, but you had to work hard to tease out the relevant arguments, mainly because the subject was split into so many threads but also because it got so clogged up with political comments etc. Thanks though to Richard Tol, David Young, Fred, Brandon, Agnostic, Mattstat, Webhubtelescope, Richard Samaures, Max and others for making the scientific points.

  25. Even though this thread is now growing old, I thought it might be interesting to comment on an additional limitation in the original Lennartz and Bunde DFA method that Ludecke et al later used (and misinterpreted) in drawing conclusions about twentieth century warming. The limitation doesn’t invalidate the method, which is informative if applied with an understanding of its limitations, but does affect the interpretation of Hurst exponents and persistence as evidence of unforced climate fluctuations.

    As I understand the rationale of the method, external trends (from CO2, solar changes, etc.) can include persistent effects due to the forcings. To evaluate persistence from natural internal fluctuations, therefore, detrending is used based on the argument that trend-induced persistence will be removed, and remaining persistence (found in deviations from the trend) will consist exclusively of persistent deviations from the effects of the CO2, solar changes or other forcing agents – these deviations thus represent persistence of non-trend related fluctuations.

    The limitation in this argument, if I have interpreted the rationale correctly, comes from the fact that what detrending removes are linear trend lines. However, real world forcings will rarely exhibit a straight line form but rather appear as curves of one sort or another. A constant forcing will create a temperature trend line that gradually diminishes in upward slope as it asymptotes toward equilibrium. A rapidly increasing forcing will curve in the opposite direction, and fluctuating or mixed trends will curve in a variety of ways. This means that over any small interval, a forcing will cause the temperature change to deviate persistently from the linear trend line representing its average behavior. This persistence, then, is forcing-induced rather than unforced. For these reasons, Hurst exponents calculated from the detrended data are likely to overestimate the extent of persistence that is exclusively unforced.

    I don’t know how much inaccuracy would result from this phenomenon, but it should probably be considered when interpreting DFA data.
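    The limitation is easy to illustrate numerically: subtracting a best-fit straight line from a smooth, curved forced response leaves autocorrelated residuals, so persistence estimated from the linearly detrended record mixes forced and unforced variability. A toy example (the forcing shape, amplitude, and noise level are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.arange(n)

# a forced response that asymptotes toward equilibrium: curved, not linear
forced = 2.0 * (1 - np.exp(-t / 30.0))
noise = 0.1 * rng.standard_normal(n)
record = forced + noise

# remove the best-fit straight line, as linear detrending does
residual = record - np.polyval(np.polyfit(t, record, 1), t)

def lag1(x):
    # lag-1 autocorrelation: a crude proxy for persistence
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

r_residual = lag1(residual)  # inflated by the curvature the line misses
r_noise = lag1(noise)        # the genuinely unforced part is ~uncorrelated
```

    The residuals inherit strong persistence purely from the curvature that a straight line cannot capture, even though the unforced noise itself is uncorrelated.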

    • as I said: detrending data (and removing the seasonal cycle) alters the covariance structure

    • I thought this was an excellent comment Fred, and an interesting point. I had a similar concern, though not as well thought out as this.

    • This should allay your concerns, somewhat:

      record becomes unsteady. The detrended fluctuation analysis DFA can be seen as an advanced FA, which automatically eliminates trends of the polynomial order ν from the profile Yj and – because Yj in Eq. (3) integrates the record {Ti} – of polynomial order (ν − 1) from the record itself. In DFA, the profile Yj of Eq. (3) is divided into Ns = int(N/s) non-overlapping intervals of equal length s, and in each interval a polynomial P of order ν is evaluated, which provides the best fit for the profile Yj. Generally, a short segment at the end of the profile remains. In order not to neglect this segment, one repeats the same procedure from the other end of the record, resulting in 2Ns segments.

      As before, the PDF’s ligatures (“fl”, “fi”) and the polynomial-order symbol did not copy; they are restored above. Nevertheless, it is clear that they used piecewise polynomials to estimate the trends. Computationally, polynomials are linear functions of the data, but the estimation of the breakpoints is non-linear.
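      For concreteness, the quoted procedure can be sketched in a few lines of code. This is a paraphrase of the standard DFA recipe, not the authors’ implementation; the window sizes and the white-noise test input are illustrative:

```python
import numpy as np

def dfa(record, scales, order=1):
    # build the cumulative profile Y_j (Eq. (3) in the quoted text)
    profile = np.cumsum(record - np.mean(record))
    fluctuations = []
    for s in scales:
        ns = len(profile) // s
        sq = []
        # fit a polynomial of the given order in each non-overlapping
        # segment; repeat from the other end of the record so the
        # leftover tail is not neglected (2*Ns segments in total)
        for chunk in (profile[: ns * s], profile[-ns * s :]):
            for seg in chunk.reshape(ns, s):
                x = np.arange(s)
                resid = seg - np.polyval(np.polyfit(x, seg, order), x)
                sq.append(np.mean(resid ** 2))
        fluctuations.append(np.sqrt(np.mean(sq)))
    # the fluctuation exponent (Hurst-like alpha) is the log-log slope
    alpha = np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]
    return alpha

# uncorrelated noise should give alpha near 0.5; persistent records
# give alpha above 0.5
rng = np.random.default_rng(0)
alpha_white = dfa(rng.standard_normal(4096), scales=[8, 16, 32, 64, 128])
```

      Fluctuation exponents near 0.5 indicate no persistence; values approaching 1 indicate the long-term persistence the papers build on.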

      • So, roughly parsing what you and Fred are saying, they broke the data into segments and worked out separate DFAs for each segment, rather than for the whole period as Fred suggests? Because Fred actually articulated my single greatest intuitive concern, which is, roughly speaking, that the degree of persistence in the real world may change due to changing external factors, which would be missed by a straight analysis – but I am not sure if it matters if they picked it up by testing it against the real-world data in the way they did. I lack the statistical knowledge to articulate what is bothering me, but Fred’s comment comes close. So is it accounted for? Or does it not matter?

      • Although I find some of the text in Ludecke et al hard to interpret, none of their figures depicts a straight line of less than several decades.

  26. Using statistical methods doing the right steps but disregarding all caveats is all too common. That’s more or less, what these papers do.

    They use a method and draw conclusions that cannot be supported by the limited amount of data. They make circular arguments by assuming the result and estimating the parameters based on that. They assume a model that’s really just one out of an infinity of possible models.

    They show that the assumptions do not lead to obvious contradiction with the data. They don’t perform all possible statistical tests that could indicate a contradiction, but make only the most obvious first tests.

    The result doesn’t tell essentially anything about the behavior of the climate, as they have disregarded known external influences that would have changed the outcome. They show that the combined effect of natural variability and forced variability satisfies one very flexible requirement, but so what?

    Climate scientists have studied that same variability for years. They have not been able to reach comprehensive understanding, but they have certainly learned enough to prove that the results of these papers are irrelevant. Making irrelevant statistical tests is useless and allows for misleading interpretation. Therefore it’s a common approach for people, whose purpose is not to learn, but to present misleading arguments.

    • They have not been able to reach comprehensive understanding, but they have certainly learned enough to prove that the results of these papers are irrelevant. Making irrelevant statistical tests is useless and allows for misleading interpretation. Therefore it’s a common approach for people, whose purpose is not to learn, but to present misleading arguments.

      I agree with this to a degree, yet if you replace the discipline of climate science with the study of statistical mechanics then relevant statistical tests are the premise and rationale for the entire approach.

      My point is if they were to use some more solid and comprehensive methods, they might get somewhere.

      • Yes, but Stat Mech constrains the system to an incredible degree, so that only tiny fluctuations remain. Indeed it is only by studying these fluctuations in detail that one can find the limitations of Statistical Mechanics.

      • Yes, but Stat Mech constrains the system to an incredible degree, so that only tiny fluctuations remain. Indeed it is only by studying these fluctuations in detail that one can find the limitations of Statistical Mechanics.

        I would consider +/- 6 degrees a small perturbation to 300 Kelvin. As far as natural variations are concerned, I would also consider this a well-traveled state space, with the major constraints well explored.

        What upsets the statistical mechanical apple cart are those unexplored anomalous conditions, such as a huge shock of CO2, and whether we can use it on that large perturbation is your concern I assume.

  27. JCH, I see no daylight between what Gavin said in your link and what I have been saying. The only difference is I have been stating some of the energy has to already be in the oceans. It doesn’t matter if you go by the date of the data: 2003, or if you go by the date of the paper: 2005, time has passed. Today as written in the paper is no longer today.

    • I think I must, alas, retract my comment that you are correct. Energy in the ocean now has nothing to do with the pipeline, or committed change, or what the paper by Matthews and Weaver is dealing with, or what gavin, fred and I are talking about.

      I also note with surprise that a zero emissions future does indeed suggest no increase in temperature, contrary to what I said earlier. To be honest, that still looks wrong to me. But I’ll have to check it out. I would have thought even with atmospheric CO2 dropping fast it would take some time to wipe out the existing imbalance and hence to reverse the warming trend. I’ll have to crunch some numbers for my own sanity check, and also obtain and read the paper to see how they got that.

      On the RC discussion, Pierrehumbert’s comments at 133 look at the concepts proposed by the paper, which is useful.

      Gavin’s comment at 174 underlines that the “pipeline” refers to incoming heat at the current levels of atmospheric carbon. If atmospheric carbon rises, or falls, then of course the flow through the pipeline is increased or reduced. The notion of “climate commitment” being defined as commitment for current emissions is a useful notion; but it is not the same as what is meant by the pipeline, which is a reference to current atmospheric carbon levels.

      Cheers — Chris

      • The flow started 6 or 8 years ago. I would say 8 because that’s the data that was used. You might be able to change the future but you can’t change the past. Our emissions have not dropped below the amount required to produce negative energy flow, so the energy flow should be occurring as we argue and has been occurring for some time now according to the hypothesis. Thus there should be energy in the ocean from the “pipeline”, and that is the commonly referred-to missing heat. You were right to agree with me in the first place.

      • The energy imbalance has been around for a while, and it has been growing.

        The sink for the energy flow is the ocean, or so we infer. We expect the energy from the pipeline to go into the ocean.

        The “missing heat” problem is the problem of actually measuring fluxes sufficiently accurately to see where energy is going directly; particularly as measurements of the upper ocean have not shown warming over a short period of time post 2003. The “pipeline”, on the other hand, is the energy flow due to imbalance at the top of the atmosphere.

        The pipeline does not refer to energy in the ocean. It refers to the energy FLOW from imbalance at the top of the atmosphere.

      • Well, if you expect the energy from the pipeline to go into the oceans, and the pipeline was the imbalance at the TOA using 2003 data, why are you surprised when I say it should have been going into the oceans? You are talking about the same thing as if they were two different things. Just so you get my point: if there were no imbalance at the TOA there would be no energy being added to the oceans, none, zip. So what is the meaning of the energy flux you refer to that isn’t the imbalance at the TOA that you say should be warming the oceans and is the source of the missing heat problem?

      • The energy from the pipeline does go into the oceans. However, then it is no longer in the pipeline.

        The pipeline is not energy that has arrived and is now in the ocean. It is the flux of energy now being absorbed; not the amount of energy already here.

        On the other hand, we have not been able to measure directly where the energy is going. We are confident that’s where it is going. But the measurements are not all there to show exactly where it ends up (at what depths, in which parts of the ocean, etc).

        The pipeline is not the imbalance specifically in 2003. It’s the imbalance, period; regardless of which year in particular people are measuring it.

      • If I were selling oil and I told my customer they can expect 5 barrels of oil through a pipeline per day starting tomorrow, I suspect that in 5 years they will have expected to receive some oil. I think telling them that no, I said tomorrow and it is today, wouldn’t go over very well.

      • The imbalance is interesting. Per K&T, it is approximately 0.8 Wm-2, which I would call a worst case, since they tend to be on the most-warming side. Since K&T seems to be using the real TOA, not the tropopause, solar variation could be 25% of that imbalance.

        If that 0.8 Wm-2 were at the surface, then the solar variation would only be about 15% of that value, or solar variation could change climate by about 0.1 degrees.

        So it would seem to me that IF solar DOES make a change greater than 0.1 degrees, the estimated amount of the imbalance would be too high.

        If that is the case, expect some very creative statistical methodology to explain that occurrence, since it is well known that the impact of 2xCO2 is well understood in atmospheric physics, since the days of Arrhenius :)

      • Careful with analogies. They may or may not help. You have to understand the meaning of the term in climate first, and then pick the analogy.

        Actually, the analogy is pretty good, however.

        When we say there is heating in the pipeline, then in five years time they can have expected to have received some heat. The point is… in five years time, you DO NOT describe the heat they have received as the heat in the pipeline. That is what you are doing when you speak of heat in the ocean as if that is what is meant by the pipeline. It isn’t. That is what WAS in the pipeline but has now been taken up as increased temperature in the ocean.

        The comment from RC, by gavin in response to comment number 174, which everyone seems to agree is on the money, reads as follows:

        [Response: No. This is a fundamental misunderstanding. The net imbalance at the TOA implies that the planet is absorbing heat. This is going into the ocean. This will stop once the ocean has warmed up sufficiently that the outgoing long wave again matches the incoming short wave. The current temperatures are not sufficient to do this (since there is a TOA imbalance) and so there is further warming 'in the pipeline' as long as the the forcings stay the same. The 'in the pipeline' statement is exactly equivalent to the current-concentration commitment discussed above. If you changed the concentrations, the imbalance would change also and the calculation would no longer work. - gavin]

        The heat in the pipeline leads to a FUTURE increase in temperature. Once the temperature increase has occurred — that is, once it is absorbed into the ocean — you no longer have heat in the pipeline. But as long as equilibrium is not reached and there is an imbalance at the TOA, then you have heat “in the pipeline” which is going into the ocean. You do not count what has ALREADY gone into the ocean as being in the pipeline.

        Note also for others: the heat in the pipeline statement is a reference to a current-concentrations commitment; not to the committed change if emissions stop.

      • It is a measure of the heat in the pipeline. The flux of energy into the ocean is a measure of the energy imbalance at the TOA. Also, if we wish to parse words, Hansen states warming in the pipeline. This warming requires that the oceans warm first. If there were no oceans equilibrium would be established rapidly. This makes the oceans part of the pipeline.

      • What is a measure of heat in the pipeline? Try to state it precisely.

        Then compare with the comment from gavin at RC. The measure of the pipeline is the imbalance at the top of the atmosphere.

        You seem to be suggesting that the measure of the pipeline is the thermal energy in the ocean. That’s flatly wrong. Heck, the UNITS are wrong. The pipeline refers to the FLUX of heat based on imbalance at the top of the atmosphere.

        Physically, we are positive that nearly all this energy flow is going into the ocean, but that is a consequence of where the Earth happens to have the largest heat sink. It is not the definition; not the measure. In fact, there is a small percentage of thermal energy heating up the land and atmosphere as well. The point is that Earth is in imbalance, and THAT is what is called the pipeline.

        The measure of the pipeline is the imbalance at the top of the atmosphere.
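        To make the units point concrete, here is a back-of-envelope sketch using the roughly 0.8 W/m² figure cited upthread (all numbers illustrative):

```python
# The 'pipeline' is a flux (W/m^2) at the top of the atmosphere; ocean
# heat content is an energy (J). Integrating the flux over area and time
# converts one into the other, which is why confusing them mixes units.
EARTH_AREA_M2 = 5.1e14       # approximate total surface area of the Earth
SECONDS_PER_YEAR = 3.156e7
imbalance_w_per_m2 = 0.8     # illustrative TOA imbalance figure

power_w = imbalance_w_per_m2 * EARTH_AREA_M2
joules_per_year = power_w * SECONDS_PER_YEAR
# roughly 1.3e22 J accumulates per year IF the flux stayed constant;
# once absorbed, that energy is ocean heat content, no longer 'in the
# pipeline'
```

        The flux is what the pipeline measures; the joules that pile up in the ocean are its consequence.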

      • “Ocean heat storage. Confirmation of the planetary energy imbalance can be obtained by measuring the heat content of the ocean, which must be the principal reservoir for excess energy”

        To me this seems quite clear. Perhaps you have another meaning for energy imbalance and heat content that I am unaware of.

        http://pubs.giss.nasa.gov/docs/2005/2005_Hansen_etal_1.pdf

        If I were selling oil and I told my customer they can expect 5 barrels of oil through a pipeline per day starting tomorrow, I suspect that in 5 years they will have expected to receive some oil. I think telling them that no, I said tomorrow and it is today, wouldn’t go over very well.

        If you use oil as an analogy, then you will have to agree with Chris’s explanation. The pipeline between global discovery of oil regions and production of those regions is about 40 years, well captured by stochastic latencies in the system. And this isn’t always a real pipeline but a virtual one, consisting of various delays and various flow rates (google The Oil Shock Model).

        This is what a compartment model — sometimes referred to as a box model with latent or lagged responses — is all about.

      • Web, I have no problem with the fact that the amount of heat in the pipeline can change. I stated this previously while arguing with Fred. The argument is whether the missing heat is part of the pipeline concept. I say it is, because it would indicate how much energy is going into the oceans and thus what the imbalance at the TOA is. Fred states it is a completely separate issue. The rest of the arguments are just tangents.

      • Don’t mix up the definition or meaning of the pipeline, with the consequence of the pipeline. Where I earlier said you were correct is where I read this:

        The pipeline clearly refers to the current energy imbalance and the flow of energy into the oceans that would eventually cause warming after considering the inertia of the oceans.

        Later you appeared to be speaking of the heat already in the ocean as heat in the pipeline. If we could measure the change in the thermal energy content of the ocean over a given period of time, then we’d have a physical confirmation of the pipeline; but the pipeline is not the heat in the ocean. It is the energy imbalance.

      • Where I am still correct is that the ocean warming since 2003 is some portion of the heat that was in the pipeline during 2003. If you know the rate of warming you know what the energy imbalance was. If there is missing heat there is a potential problem with the calculation of the heat in the pipeline. The issues are not separable. The rest is word parsing, of which I am rather tired.

      • steven, I’m good with that and agree this is no longer useful.

        The one quibble, which I expect you’ll agree with, is that where you say:

        ocean warming since 2003 is some portion of the heat that was in the pipeline during 2003

        it should be

        ocean warming since 2003 is some portion of the heat from the pipeline since 2003

        since we all agree that the pipeline refers to a flux of energy. Cheers, over and out — Chris

      • Chris Ho-Stuart – I do not know what a flux of energy means. To me the pipeline is sunlight coming into the system and doing its thing day to day within a perturbed system – absorption of a parcel of additional warming – until a new equilibrium is reached, at which time in and out are balanced.

  28. Chris,
    It would seem that since “in the pipeline” is based on an inadequate relationship, it would be a bit difficult to determine. CO2 does not create energy, it only restricts a portion of the energy leaving the surface. So “in the pipeline” would only have any meaning if the surface energy AND the atmospheric energy absorbed from the sun remained constant.

    Solar constant does not mean constant distribution of solar energy at locations with constant time constants of release in constant proportions. It is a little more complicated than ln(Cf/Ci), since that relationship is based on Northern Hemisphere response specifically near the Arctic circle where the Galileo of Global Warming lived.

    In other words, the ratio of atmospheric to surface absorption cannot be neglected. :)
