Ludecke et al. respond

by Horst-Joachim Lüdecke, Rainer Link, and Friedrich-Karl Ewert

In his comments R. Tol argues that our papers LU and LL should be ignored. We respond following the order of R. Tol’s objections, using the same abbreviations and references as in our guest post – LU and LL, and [1]–[5].

R. Tol claims “However, the question is not yet answered which forcings cause the actual global temperature change”. Actually, this was not at all what our work was aimed at. Everyone who is familiar with persistence, Hurst exponents, DFA etc. knows very well that these methods cannot give any information about the nature of trends or of persistence (if trends are found). See also the appropriate remark in LL on page 5, where we note “… neither Eq. (2) nor a power law of the fluctuation of the FA or DFA says anything about its origin”. We assume that this statement is clear enough, but it was not perceived by R. Tol. Consequently, we did not eliminate anything, as Tol argues with “They eliminated from their analysis the very thing in which they are interested.”

In principle, our papers are not primarily interested in the forcings and cannot give any answers about them. Regarding Figure 10 on page 17 of LL, only the UHI and elevation warming can be clearly identified. Without doubt, many further possible forcings remain that could have caused the 20th century warming – including the anthropogenic CO2 that is haunting so many people. However, our method cannot distinguish between these forcings. Our main task was to show that the 20th century warming was predominantly a natural fluctuation. This answer was given by an elaborate statistical method which, of course, like all statistical analyses has its limits.

R. Tol argues “Therefore, LL switch to monthly data. Although this increases the number of observations to 1200, it simultaneously reduces the time step of their analysis by a factor 12. 1200 months is still only 100 years. No information was added. They cannot estimate the annual Hurst exponent with 100 years of data. With 1200 months of data, they can estimate the monthly Hurst exponent – but they still do not know the annual Hurst exponent.” This statement is, to be polite, completely wrong. There is no annual Hurst exponent as opposed to a monthly or daily one. Persistence in temperature records causes the autocorrelation function to decay as a power law. It follows from elementary mathematics that this law, and consequently its exponent, is not affected by the time unit, be it day, month or year (see in LU the beginning of the last paragraph on page 5; see also in LL on page 2, Eq. 2 and on page 4, Eq. 5 and Eq. 6). R. Tol should first read [3], [4], then the papers cited therein, then our papers, and then criticize – not the other way around.
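
To illustrate the point about the time unit, here is a minimal sketch (not the authors’ code; the surrogate generator, the scales and the target exponent are arbitrary choices for the demonstration): a synthetic long-memory series analysed with a basic DFA1 gives essentially the same scaling exponent whether it is fed in at monthly resolution or as annual means.

```python
import numpy as np

def surrogate(n, H, rng):
    """Long-memory Gaussian surrogate via Fourier filtering (spectrum ~ f^-(2H-1))."""
    f = np.fft.rfftfreq(n)
    amp = np.concatenate(([0.0], f[1:] ** (-(2 * H - 1) / 2)))
    x = np.fft.irfft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size)), n)
    return (x - x.mean()) / x.std()

def dfa_alpha(x, scales):
    """DFA1: integrate, detrend linearly in windows of size s, fit log F(s) vs log s."""
    y = np.cumsum(x - x.mean())
    F = []
    for s in scales:
        k = len(y) // s
        t = np.arange(s)
        rms = []
        for g in y[:k * s].reshape(k, s):
            fit = np.polyval(np.polyfit(t, g, 1), t)   # linear detrending per window
            rms.append(np.sqrt(np.mean((g - fit) ** 2)))
        F.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope = alpha

rng = np.random.default_rng(0)
monthly = surrogate(1200, H=0.65, rng=rng)           # "100 years" of monthly values
annual = monthly.reshape(100, 12).mean(axis=1)       # the same record as annual means

print("alpha, monthly resolution:", round(dfa_alpha(monthly, [8, 16, 32, 64, 128, 256]), 2))
print("alpha, annual resolution: ", round(dfa_alpha(annual, [4, 8, 16, 25]), 2))
```
Both estimates target the same exponent; the annual one is merely noisier, which is a statement about precision, not about a different “annual Hurst exponent”.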

R. Tol notes “data manipulation” in the context of monthly means. However, LL reports the removal of the seasonal dependence of the monthly means on page 4, in the last sentence of the section above Eq. (3). Removing seasonal effects is indispensable for a correct application of the DFA and has nothing to do with “data manipulation”. It can be found in every paper that uses monthly means with DFA, for instance in [3], [4] and all appropriate papers cited therein.
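
For readers unfamiliar with this step, the adjustment meant here is simply the removal of the mean annual cycle from the monthly values. A minimal sketch, assuming the record covers whole calendar years; whether the seasonal variance is also normalized should follow whatever [3], [4] prescribe:

```python
import numpy as np

def deseasonalize(monthly, normalize_variance=True):
    """Subtract the mean annual cycle (and optionally divide by the monthly
    standard deviation) from a series of monthly values."""
    x = np.asarray(monthly, dtype=float).reshape(-1, 12)   # years x 12 calendar months
    anom = x - x.mean(axis=0)                              # remove the monthly climatology
    if normalize_variance:
        anom = anom / x.std(axis=0, ddof=1)                # remove the seasonal variance too
    return anom.ravel()
```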

R. Tol bemoans “Crucially, LL use the 20th century record to define natural variability. That is, they use the observations of the 20th century to assess whether or not the 20th century was natural or otherwise. This is tautological.” Unfortunately, this statement too is badly wrong. The definition of the naturalness of a temperature record is given in detail in Chapter 3 of LL. Nothing in that chapter can be associated with Tol’s absurd allegation. We did not use any 20th century record to define natural variability. The main point is connected with Figure 3. R. Tol should read this chapter more carefully to understand the procedure.

R. Tol says “LU and LL then estimate the Hurst exponent. The paper omits information on the adequacy of the statistical fit. No indication is given on the precision of the estimates.” In our papers LU and LL the Hurst exponents are calculated by an established method, not “estimated”. The uncertainty of the α values is given in LU in the last section of page 13. Incidentally, Tol’s objections once again put forward the nonsense of an annual Hurst exponent as opposed to a monthly one.

In the statement “LL then use the estimated model to create a large number of synthetic temperature records, and thus compute the probability of observing the actual temperature record. They find that the observed trend is actually not that rare, which is in line with their Hurst exponent of about 0.65. Again, confidence intervals are not provided, but as they cannot reliably estimate the annual Hurst exponent, confidence must be low.” R. Tol additionally conflates two quite different topics. The confidence interval (better: the confidence limit) for the assessment that a “trend” in a record is either external or a natural fluctuation caused by persistence has nothing to do with the error in the value of the Hurst exponents.
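
The distinction can be made concrete with a sketch in the spirit of such a Monte Carlo procedure (not the implementation of LL or of [3], [4]; the surrogate generator, the trend statistic and the record name in the usage comment are illustrative assumptions): the exceedance probability is computed for one fixed Hurst exponent, and the separate question of how uncertain that exponent is can be probed by simply repeating the calculation for neighbouring values of H.

```python
import numpy as np

def surrogate(n, H, rng):
    """Long-memory Gaussian surrogate via Fourier filtering (spectrum ~ f^-(2H-1))."""
    f = np.fft.rfftfreq(n)
    amp = np.concatenate(([0.0], f[1:] ** (-(2 * H - 1) / 2)))
    x = np.fft.irfft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size)), n)
    return (x - x.mean()) / x.std()

def exceedance_probability(observed, H, n_sim=2000, seed=0):
    """Fraction of synthetic persistent records whose linear trend is at least
    as large (in magnitude) as the trend of the observed record."""
    rng = np.random.default_rng(seed)
    n, t = len(observed), np.arange(len(observed))
    obs_trend = abs(np.polyfit(t, observed, 1)[0])
    sigma = np.std(observed, ddof=1)
    hits = sum(abs(np.polyfit(t, sigma * surrogate(n, H, rng), 1)[0]) >= obs_trend
               for _ in range(n_sim))
    return hits / n_sim

# Repeating the calculation for a few plausible exponents shows how strongly the
# answer depends on the uncertainty in H (my_deseasonalized_record is a placeholder):
# for H in (0.60, 0.65, 0.70):
#     print(H, exceedance_probability(my_deseasonalized_record, H))
```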

The only substantial and really interesting item that we could find in Tol’s critique refers to LU, where R. Tol objects “Lüdecke argues that sun has caused the difference in the Hurst exponent between the temperature observations and the temperature proxies. However, he does not compute the long-range cross correlation (the bivariate generalization of the Hurst exponent), so this is speculation.” The paper cited by R. Tol analyzes the cross-correlation between two monofractal time series with Hurst exponents H1 and H2. However, as Figure 8 (right panel) of LU clearly shows, the sunspot series is not monofractal, i.e. it has no unique Hurst exponent. The consequences of this fact and the method applied are dealt with in Chapter 6 of LU. R. Tol is completely right that the hypothesis about the sun’s influence on the Hurst exponents of the tree-ring and stalagmite proxies in LU is a speculation. The heading of Chapter 6 in LU, “A hypothesis on the sun’s influence”, expresses this clearly. It is common in scientific papers to add an occasional speculation.
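
One rough way to check whether a series has a unique Hurst exponent is a simplified multifractal DFA: compute a generalized fluctuation function for several moments q and compare the fitted exponents h(q); if they are roughly constant the series is approximately monofractal, and if they spread out it is not. The sketch below is a bare-bones DFA1 variant of that idea (forward segments only, q ≠ 0) and is not the method used in LU:

```python
import numpy as np

def mfdfa_h(x, scales, q_values):
    """Simplified multifractal DFA1: return the generalized exponent h(q)."""
    y = np.cumsum(x - np.mean(x))
    F2_by_scale = []
    for s in scales:
        k = len(y) // s
        t = np.arange(s)
        F2 = [np.mean((g - np.polyval(np.polyfit(t, g, 1), t)) ** 2)
              for g in y[:k * s].reshape(k, s)]        # squared fluctuation per segment
        F2_by_scale.append(np.array(F2))
    h = {}
    for q in q_values:                                 # q = 2 reproduces ordinary DFA
        Fq = [np.mean(F2 ** (q / 2.0)) ** (1.0 / q) for F2 in F2_by_scale]
        h[q] = np.polyfit(np.log(scales), np.log(Fq), 1)[0]
    return h

# e.g. h = mfdfa_h(sunspot_series, scales=[16, 32, 64, 128], q_values=[-4, -2, 2, 4])
# (sunspot_series is a placeholder for whatever record one wants to test.)
# A strongly q-dependent h(q) indicates the series is not monofractal.
```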

Conclusion:

We regret that R. Tol – as he bemoans – does not learn anything from our papers. We are not responsible for this deficit. R. Tol criticizes predominantly the state of the art of persistence analysis. We did not develop these methods, we used them. Further, we assume that R. Tol’s critique is caused by his inadequate knowledge of persistence and DFA methods. Therefore, we take the liberty of politely recommending that he read the book by Jens Feder, “Fractals”, to study the basics. In addition, a closer look at the many papers about FA, DFA and their applications might be helpful. We will willingly provide him with most of the relevant papers (Judy can give him Lüdecke’s e-mail address). It should be stressed that we used an elaborate new method [3], [4]. If R. Tol wishes to criticize this method – after being better informed about it – we recommend that he write directly to the authors of [3], [4].

Further remarks:

We would like to respond to the many comments on our guest post. However, more than 500 comments – so far – are too many. Most comments share the same shortcoming as Tol’s critique – a fundamental deficit of knowledge about persistence and DFA. However, we also found some important and very interesting objections and comments. For these we are preparing a follow-up to our guest post and will publish it on Judy’s blog. She has already agreed. No paper is free of failings. Therefore, we welcome – as good practice in science – every critique that is substantive.

JC note: I invited Ludecke et al. to respond to the comments made in the previous two posts. I would like to thank Horst-Joachim Lüdecke, Rainer Link, and Friedrich-Karl Ewert, Richard Tol, and those who have made thoughtful and constructive comments on these two threads. This has been an interesting, albeit controversial, experiment in “blog” science.

Moderation note: This is a technical thread; comments will be moderated for general relevance. The “debate” surrounding disinformation, etc. can be continued on the original thread, or at collide-a-scape.

305 responses to “Ludecke et al. respond”

  1. 1. The title of LLE is “How Natural is the Recent Centennial Warming?” It is therefore fair to assume that the paper is about the 20th century trend in temperature.
    2. Increasing the frequency of observation does not change the length of the record, and therefore does not add information about trends that are long relative to the period of observation. In other words, if 100 years of data is too short to tell you anything about centennial trends, then 1200 months of data is too short too.
    3. Data “manipulation” does not convey meaning; it is just a technical term. In this case, the seasonal cycle was removed from both mean and variance. This alters the covariance structure and thus affects the estimate of the Hurst exponent.
    4. In LLE, only data from the 20th century were used. The analysis is thus self-referential or tautological.
    5. DFA is a method to estimate the Hurst exponent.
    6. The estimated exceedance probabilities omit the uncertainty about the Hurst exponent.
    7. It is not appropriate to compare two univariate long-range correlation coefficients (while disregarding the standard deviation of the estimates). One should estimate the long-range cross-correlation.

    • “2. Increasing the frequency of observation does not change the length of the record, and therefore does not add information about trends that are long relative to the period of observation. In other words, if 100 years of data is too short to tell you anything about centennial trends, then 1200 months of data is too short too.”

      Indeed? Let us suppose that I have four data files.
      1) An average annual temperature.
      2) A series of 12 average monthly temperatures.
      3) A series of 365 average daily temperature.
      4) A series of 8760 hourly temperature averages.

      It is quite clear that in terms of information, 4>3>2>1. Now if one wishes to plot deltaT vs T, 1) has no information whereas 4) is stuffed to the gills.
      Now I am too dumb to understand information theory, but isn't the relationship between the level of information in a pair of series a log ratio of the numbers of observations in the two series?
      So the difference in information between 4) and 1), in information content, is on the order of log(8760/1), i.e. about 4?

      • Of course you have more information in 4. In 4, you have information about hour-to-hour variations (and day-to-day).
        However, you still have only one year in 4. There is no info about year-to-year variations.

      • True. I agree with this.

        A bit OT, but still technical – is it reasonable to say, however, that IF there were access to hourly information as in 4, you would now have information about the variability in the other dominant cycle – the diurnal cycle?

        Requiring a very different analysis.

        MattStat, Richard Saumarez and others here have definitely piqued my curiosity regarding the adequacy of representation of the diurnal cycle in the record, and changes in it that may not be completely captured by Tavg,day and Tmax,day and Tmin,day.

        I fear even for the satellites – sorry Pekka, please debunk me if you want – that 1-2 samples per day of coverage is missing important information, and could lead to “aliasing” of the daily cycle. As Pekka said, sure they put a lot of careful thought into this before throwing the satellites up there – but as I said before on another thread, what were the options for improvement? More satellites? Faster satellites?

        $$$$$$$$

      • Is a line of deltaTempave vs LogTime with 365 points, each spaced 1/365 of a year apart, more information-rich than a line with one point?

      • It would be if the trend were monotone — that is, if global warming would imply that Dec 12 2012 is warmer than Aug 12 2012, and that 6 am is warmer than 2 am.

      • DocMartyn,
        Why do we continue to use a shotgun approach, when you might be able to use a rifle to get Global T? If we were to use the Sun as our shooting position and the Earth as our target, setting our crosshairs on THE spot, that gets the best of the best, at say High Noon local time every day. Each day there will be a new target, as the world turns. One spot, one Global Temp for the day? Satellites can get the fix every day, probably cloud free. The end of UHI problems too, is my bet.

      • After thinking some more on this as a possible model that would have value: you would also need a relative anti-position Temp, from the evening side of the Earth. Sort of like GMT for Time.

        Go BTDC

      • I don’t agree. Adding more data points can add information, even if the time period remains the same.

        For example, I could trend the yearly averages. Or I could build 12 trends, comparing all the Januarys, all the Februarys, all the Marches, etc.

        These 12 monthly trends are likely (at a guess) to show more variance than the single averaged annual trend, which tells me there is more information to be gained about centennial variability using 1200 monthly samples than 100 yearly averages.

        In any case, in time series analysis it is a big no-no to use averaged data to compute a trend when the data that made up the averages are available. It makes no sense to me to argue for using the annual averages; they are more likely to give a misleading result than the monthly data.
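
        A small check of this idea (a sketch only; the column layout and the plain OLS trend are assumptions): the twelve per-month trends can be computed directly, and by the linearity of least squares their average equals the trend of the annual means exactly, so the extra scatter describes within-year behaviour rather than lengthening the record.

        ```python
        import numpy as np

        def monthly_and_annual_trends(monthly, n_years):
            """Per-year OLS slope for each calendar month, plus the slope of the annual means."""
            x = np.asarray(monthly, dtype=float).reshape(n_years, 12)
            years = np.arange(n_years)
            per_month = [np.polyfit(years, x[:, m], 1)[0] for m in range(12)]
            annual = np.polyfit(years, x.mean(axis=1), 1)[0]
            return per_month, annual      # note: mean(per_month) == annual (to rounding)
        ```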

      • Brandon Shollenberger

        ferd berple, Richard S.J. Tol is correct to say finer resolution does not increase the length of observation. Given this, he is correct to say switching from yearly data to monthly data means the authors

        still do not know the annual Hurst exponent.

        Of course, as the authors point out, there is no reason to require that one look at “the annual Hurst exponent.” There is nothing wrong with looking at a monthly (or daily, hourly, etc.) one instead. So yes, switching to monthly data does not allow the authors to calculate an annual Hurst coefficient, but so what? Being unable to do something you aren’t trying to do is hardly a criticism worth consideration.

      • Now I am too dumb to understand information theory, but isn't the relationship between the level of information in a pair of series a log ratio of the numbers of observations in the two series?
        So the difference in information between 4) and 1), in information content, is on the order of log(8760/1), i.e. about 4?

        No, you don't understand the critical aspect of information theory, which pertains to order and disorder. A periodic component can be modeled as a sinusoidal wave with one frequency (and a few additional harmonics if needed). Modeled this way, the information content is raised by only a few bits, and this is independent of the length of the time series.

        Information theory presented as a Shannon entropy metric measures disorder, and periodic or seasonal components are ordered elements and thus do not add much of anything to the entropy measure.

        I am afraid that bringing the information theory aspect boomeranged on you, Doc. This basically shows that very little additional information comes from adding the seasonal numbers.

      • “I am afraid that bringing the information theory aspect boomeranged on you, Doc. This basically shows that very little additional information comes from adding the seasonal numbers”

        ‘Very little’ as in log(length1/length2)?

      • So-called information theory has little to do with information, because it does not consider meaning.

      • So-called information theory has little to do with information, because it does not consider meaning.

        True, so if you flip the meaning and realize that it is used to measure the amount of disorder (dare I say disinformation?), then it makes a lot more sense intuitively. In that case it has equivalence to statistical mechanics and the entropy measure of thermodynamics.

      • Webhub: What is “it” referring to in your claim? I have not even a vague idea what you are talking about. Meaning has nothing to do with entropy.

      • Corporate Message

        HI David.

        Thank you for showing your work. Here is a very nice presentation titled
        “Meaning-Based Natural Intelligence vs. Information-Based Artificial Intelligence”.
        In it you might find they draw some interesting distinctions that could complement your work.

        http://www.google.ca/search?q=Meaning-Based+Natural+Intelligence+Vs.+Information-Based+Artificial+Intelligence&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

      • Dikran Marsupial

        Data is not the same thing as information. For example, if you were to compress a file of monthly data (say using zip), the file you would get would be less than 12 times as big as the file you would get if you compressed the annual data. This is because the monthly data file has twelve times as much data in it, but not twelve times as much information. Compressing a file gives a rough indication of how much information it contains (the minimum description length principle).
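
        A toy illustration of this minimum-description-length point (an artificial example, not real station data): a monthly series dominated by a perfectly repeating annual cycle compresses to far less than twelve copies of its annual means, because the cycle is predictable.

        ```python
        import zlib
        import numpy as np

        def compressed_size(x, decimals=1):
            """Crude MDL proxy: bytes of the zlib-compressed text form of the series."""
            return len(zlib.compress(",".join(f"{v:.{decimals}f}" for v in x).encode(), 9))

        months = np.arange(1200)                                   # 100 "years" of monthly values
        monthly = 10.0 * np.sin(2 * np.pi * (months + 0.5) / 12)   # pure, repeating seasonal cycle
        annual = monthly.reshape(100, 12).mean(axis=1)

        print("compressed monthly file:", compressed_size(monthly), "bytes")
        print("12 x compressed annual :", 12 * compressed_size(annual), "bytes")
        ```
        With unpredictable (e.g. white-noise) monthly values the two numbers come out much closer together, which is the data-versus-information distinction being made above.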

      • So-called information theory has little to do with information, because it does not consider meaning. Monthly data actually has more than 12 times as much information as annual data, because the information includes all the combinations, permutations, and potential analyses.

      • I actually have a theory of information content that relates to this issue. See http://www.craigellachie.us/powervision/Mathematics_Philosophy_Science/information.html

      • Did you get to my Trenberth question? I posted the quote as you asked.

      • Dikran Marsupial

        Yes, despite your childish behaviour on the other thread “Dikranaway”, I did respond to your post once I had noticed it. It is rather ironic that you made a fuss of me not replying to your question immediately (hint perhaps I live in a different time zone), and then missed my response and carried on pestering me without actually checking first to see if I had replied.

    • Just going back to your primary objection in the previous thread (that actually got my interest and forced me to read a bit about DFA):

      ….. Therefore, LLE use DETRENDED fluctuation analysis. That is, they first fit a polynomial of order two to the data, remove this trend, and study the deviations from the trend.

      Having removed the trend from their data, LLE cannot answer the question: What caused the warming? They eliminated from their analysis the very thing in which they are interested.

      There are other flaws …….

      Do I take it from the list posted above that you no longer take this view?

      • No, I did not change my opinion. LLE’s title, abstract, introduction and conclusion make it pretty clear that they are interested in the trend.

      • Are they “interested in the trend” or “interested in the likelihood that there is a trend (assuming the underlying process complies with a 1001 assumptions)”?

        I thought only the latter, although I also note that speculation is rife around the former (including in this blog).

        To the real point of the question. If I postulate that the last century’s monthly temps are such that they can be analysed in a well-behaved manner using DFA to partial out the trend etc., does LLE’s method – using a synthetic population derived from the Hurst coefficient and other parameters (putting aside the uncertainty in their estimation) to compare with the actuals – in principle provide a sound methodological basis for answering the question “What is the probability that the actuals come from that population of time series”? (To repeat: putting aside whether in practical terms assumptions are violated by the dataset, or there are other inadequacies in the dataset that cause problems, etc.)

        I’m aware that you have said DFA is fine, just trying to nail in my own mind if the next bit in the method is an in-principle objection or an in-practice one.

      • Bolding should of course end after “in principle”; must leave this fancy stuff to the professionals (sic).

      • Corporate Message

        Richard, understood.

        Now, my question is not regarding whether or not you’ve changed your opinion.
        My question is: are you going to support your claim with the exact words, and show how in English they mean what you say they mean?

        It’s not enough to wave these fellows’ response away – now that you’ve come out with quite strong allegations.
        Show it line by line, please. It’s your duty now.

      • Corporate Message

        Richard, this is what requires a better response from you than “it seems”

        “… R. Tol claims “However, the question is not yet answered which forcings cause the actual global temperature change”. Actually, this was not at all what our work was aimed for. Everyone who is familiar with persistence, HURST exponents, DFA etc. knows very well that these methods cannot give any information about….

        … page 5 where we denote “… neither Eq. (2) nor a power law of the fluctuation of the FA or DFA says anything about its origin”. We assume that this statement is clear enough, but it was not perceived by R. Tol. Consequently, we did not eliminate anything as Tol argues in “They eliminated from their analysis the very thing in which they are interested.”

        ..our applied method cannot distinguish between these forcings. Our main task was to show that the 20th century warming was predominantly a natural fluctuation. This answer was given by an elaborated statistical method, which of course like all statistical analyses have its limits.”

        Your response
        “1. The title of LLE is “How Natural is the Recent Centennial Warming?” It is therefore fair to assume that the paper is about the 20th century trend”

        You offer only this … along with a couple of non-specific references to other parts that lead you to make assumptions?

        This is a very insufficient response, considering that you made quite strong claims.

  2. Brandon Shollenberger

    Horst-Joachim Lüdecke, Rainer Link, and Friedrich-Karl Ewert, thank you for taking the time to make this post. Some of what you’ve said I wanted to say myself, but never got around to (as I was bogged down in one particular disagreement with Richard S.J. Tol). I do, however, have a question. You say:

    Removing seasonal effects is indispensable for a correct application of the DFA and has nothing to do with “data manipulation”. It can be found in every paper that uses monthly means with DFA, for instance in [3], [4] and all appropriate papers cited therein.

    Tol claims:

    In this case, the seasonal cycle was removed from both mean and variance. This alters the covariance structure and thus affects the estimate of the Hurst exponent.

    Do you know if this is true, and if so, if the impact of it has been analyzed? As I’ve said elsewhere, removing the seasonal cycle is common practice, so you don’t deserve criticism for it. However, if removing it does affect calculations, that would be good to know.

    • Not sure if LLE are going to delve into the comments, but I’ll give you my view on the topic.

      Yes, the seasonal cycle will significantly alter the estimate of the Hurst exponent. The presence of the seasonal cycle will degrade the estimate.

      The reason is simple. The Hurst exponent is attempting to model the stationary element of the time series. The seasonal cycle is a non-stationary component.

      Ideally, we would remove it using a physical model to estimate the variation and subtract it from the time series. Unfortunately, the physical models we have are not really up to the job and do not predict it accurately enough to remove it reliably (a residual component would still be left, still contaminating the estimate). So instead we tend to remove the seasonal cycle using an estimate based on the data, effectively applying a filter that (in spectral terms) places a null at the 1-year cycle.

      This too can have consequences for the analysis. Although it is probably the best option we have available, there is some merit to understanding the consequence of such a step (perhaps through simulation?). My guess is that the consequences are small, but this remains a guess and probably should be quantified for the sake of completeness.
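
      Along the lines of that suggestion, here is a toy simulation (every choice below – the surrogate generator, the cycle amplitude, the DFA variant and the scales – is an illustrative assumption, not the setup of LL or of [3], [4]): generate a long-memory series with a known exponent, add an artificial annual cycle, and compare the DFA estimate with the cycle left in against the estimate after subtracting a monthly climatology.

      ```python
      import numpy as np

      def surrogate(n, H, rng):
          """Long-memory Gaussian surrogate via Fourier filtering (spectrum ~ f^-(2H-1))."""
          f = np.fft.rfftfreq(n)
          amp = np.concatenate(([0.0], f[1:] ** (-(2 * H - 1) / 2)))
          x = np.fft.irfft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size)), n)
          return (x - x.mean()) / x.std()

      def dfa_alpha(x, scales):
          """DFA1 scaling exponent (slope of log F(s) against log s)."""
          y = np.cumsum(x - x.mean())
          F = []
          for s in scales:
              k = len(y) // s
              t = np.arange(s)
              rms = []
              for g in y[:k * s].reshape(k, s):
                  fit = np.polyval(np.polyfit(t, g, 1), t)
                  rms.append(np.sqrt(np.mean((g - fit) ** 2)))
              F.append(np.mean(rms))
          return np.polyfit(np.log(scales), np.log(F), 1)[0]

      rng = np.random.default_rng(2)
      n_years, H_true = 100, 0.65
      noise = surrogate(12 * n_years, H_true, rng)
      with_cycle = noise + 3.0 * np.sin(2 * np.pi * np.arange(12 * n_years) / 12)
      climatology = with_cycle.reshape(n_years, 12).mean(axis=0)    # monthly climatology
      deseasoned = (with_cycle.reshape(n_years, 12) - climatology).ravel()

      scales = [8, 16, 32, 64, 128]
      print("target exponent          :", H_true)
      print("cycle left in            :", round(dfa_alpha(with_cycle, scales), 2))
      print("after climatology removal:", round(dfa_alpha(deseasoned, scales), 2))
      ```
      Wrapping this in a loop over many realizations would give the distribution of the residual bias, which is the quantification asked for above.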

      • Brandon Shollenberger

        Spence_UK, thanks for your answer. Unfortunately though I was unclear. I know removing the seasonal cycle will affect any calculation of the Hurst coefficient. I meant to only ask about the part you discuss in your last paragraph, whether or not the filter used will have consequences. Like you, I’d guess any such consequences would be small, but I’m curious if there has been any work to study the subject. Before Richard S.J. Tol raised the issue, I had never considered it, and I don’t think I’ve ever seen it discussed before.

        In any event, I’m sorry I made you type up more than you needed to in order to respond to my question, but thanks for your input, especially for helping me make it clear what I’m curious about.

      • The seasonal cycle is a non-stationary component.

        As a matter of fact, the seasonal cycle is a stationary component. The trend is the element that is non-stationary, as that can change joint probabilities when shifted over time.

        In the most elementary terms, stationary means that something or some parameter is predictable no matter where you are located on the time series. Seasons most definitely are stationary aspects and I can tell you that it will be warmer in the summer, for every few months out of each year over 1000+ year spans.

      • I disagree. Only stochastic components can be stationary. Since the seasonal cycle is a deterministic function of time, it is by definition, non-stationary.

        (That is, the mean of the series is locally deviated by the deterministic effects of the seasonal cycle; therefore the statistical properties of the series are not invariant to a change in the origin wrt this cycle)

      • I disagree. Only stochastic components can be stationary. Since the seasonal cycle is a deterministic function of time, it is by definition, non-stationary.

        OK, this is the long-running controversy between definitions. All engineering textbooks will state that a sine wave is a stationary signal, i.e. it will repeat itself identically to infinity, and this allows one to do useful analysis via Fourier transforms, etc.
        The compromise term is cyclostationary. This has a wikipedia page which provides a good overview and an example in the context of seasons, which is what we are talking about here:
        http://en.wikipedia.org/wiki/Cyclostationary_process

      • So it’s stationary once you remove the cyclo bit…

        Sounds good to me ;-)

  3. Sorry Judith but,

    I did not learn a thing.
    Very confusing views is all with no forward knowledge.

    • Joe, the thing to learn is that this is complex and controversial, as the frontiers of science always are. If you are not an expert on this stuff, as I am not, then there is little to learn except that the experts disagree. On the other hand, I have learned a lot about what I do not understand.

      • I disagree David. I am no expert on this stuff, but IMO Ludecke et al are losing this argument. Those knowledgeable folks here who have taken the time – including sincere skeptics – have found their analysis wanting. Even if Tol’s critique is too sweeping.

        However, your point is broadly applicable to many of the other exchanges that have occurred on this blog.

      • BillC, I disagree back. You have been reading the case for the prosecution, which always looks good in isolation. Only now are we beginning to get the case for the defense. Lo and behold, there are counter arguments.

        More broadly, in inductive reasoning one often does not win or lose. One merely makes a case.

      • No, they lost the argument completely. They need to do the DFA analysis on something obvious like the Vostok Ice Core time series data set, if for no other reason than to check how well the DFA can actually work.

        Unfortunately they won’t because it will blow their entire argument out of the water. You see, they see what they want to see, and anything that deviates from that, they ignore. That is one of the tactics of disinformation that has been so well outlined in the recent Climate Etc posts.

      • “Unfortunately they won’t because it will blow their entire argument out of the water.”

        This is a grand statement. Do you have proof to back up such a proclamation? You could do the calculations yourself, replicate what they did and then do the Vostok Ice Core to check the DFA method; that would provide evidence for or refutation of your claims, or at least substance. Or link to references that used the same methods and did such a calculation.

      • Ok, the defense is starting to present their case and the counter arguments stink.

      • BillC: Excuse me but you said you were not an expert. How then are you judging the quality of the arguments? Perhaps the difference between us is that I do not try to be an expert when I know I am not.

      • This is a grand statement. Do you have proof to back up such a proclamation? You could do the calculations yourself, replicate what they did and then do the Vostok Ice Core to check the DFA method; that would provide evidence for or refutation of your claims, or at least substance. Or link to references that used the same methods and did such a calculation.

        That’s OK, you just haven’t been paying attention to the ongoing commentary. Check in the previous thread where I put together a synopsis of the other comments I made:
        http://judithcurry.com/2011/11/08/tols-critique-of-the-ludecke-et-al-papers/#comment-136094

        This describes a complete DFA analysis of the Vostok data and a refutation of Ludecke’s estimate of a 40% to 90% probability that “natural variability” caused the temperature increase.

        According to the Vostok data, for an arbitrary 100 year time span, a 0.75 degree or more temperature increase will occur with only a 3% probability. In other words, the one we are in right now is a low-chance occurrence. It is not implausible, but just that the odds are low. Alternatively, one could estimate the fraction of the increase due to potential variability, with some confidence interval attached to it.

        That is why Ludecke needs to be corrected. He can’t be parading around that 40% to 90% range like that, when clearly their analysis has fundamental problems.
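
        For what it is worth, the window count behind a number like that takes only a few lines (a sketch only: the file name and column layout are hypothetical, the proxy is assumed to be resampled to annual steps, and overlapping windows are strongly correlated, so this is a descriptive frequency rather than a significance test):

        ```python
        import numpy as np

        # Hypothetical two-column file: age (years before present) and temperature
        # anomaly (deg C), already resampled to 1-year steps.
        age, temp = np.loadtxt("proxy_temperature.txt", unpack=True)
        temp = temp[np.argsort(-age)]              # order from oldest to youngest

        window = 100                               # length of the span in years
        rises = temp[window:] - temp[:-window]     # change across every 100-year span
        print("fraction of spans warming by >= 0.75 C:", np.mean(rises >= 0.75))
        ```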

      • WebHubTelescope | November 10, 2011 at 1:31 pm |
        “In other words the one we are in right now is a low chance occurrence.”
        In that case please explain the earlier equally rising trend during the 20th Century.

      • “According to the Vostok data, for an arbitrary 100 year time span, a 0.75 degree or more temperature increase will occur with only a 3% probability.”

        How can you possibly say that with this new paper out:
        http://wattsupwiththat.com/2011/11/10/new-study-shows-temperature-in-greenland-significantly-warmer-than-present-several-times-in-the-last-4000-years/

      • Richard, that new paper is for Greenland. Vostok is in the Antarctic. It’s quite possible — in fact I would expect it — for temperature variations in Greenland to be greater than those seen in the Vostok cores.

      • How can you possibly say that with this new paper out:

        Then provide me with the raw data and I can check. My email is on my nickname.

      • David:

        To your 1:07 PM – I don’t think we all have to be experts to note flawed arguments. I will continue to think that their arguments are flawed until they present what I think are better ones. To elaborate on why, beyond noting that they seem to be dodging questions, would take more time than I have now but I would consider doing it.

        However, there have been discussions on this blog quite a few times where the arguments and counterarguments were sufficiently good, in my layman’s opinion, that my lack of expertise precluded me from making an assessment.

      • Not being a statistician myself (but knowing what some of the words mean), it looks to me like they’re mostly arguing past each other, and neither side is “winning” or “losing”, except that Tol lost rhetorical points in the beginning for demanding that the paper be blocked rather than debated.

        Having said that, when there is a debate over something I do know about, and one side is clearly crackpot, it often looks like this from a distance. The skydragons seem reasonable from a distance because their individual points may seem reasonable, even if there are enough subtle but fatal flaws when you put the whole argument together to shoot the whole thing down. Perpetual motion promoters (and their first cousins, the cold fusion crowd) are frequently “almost” right. But almost doesn’t count when a cardinal rule is being violated.

        So I’m just going to sit back and watch, and maybe learn something.

      • Well put.

        Is it not also true that even if it is not possible to follow the technical details of what is being disagreed about, there are crystal clear examples of all kinds of other things on display? I find it fascinating. Not only reasoning processes, but attitudes towards the reasoning of others and a whole host of obvious social, tribal, emotional and linguistic traits that would vanish if we were consumed by the substance of the details. Somehow, it makes many processes even clearer.
        And yes, there are things I didn’t know I didn’t know too.

      • Thanks Anteros. To me the logic of scientific debate is one of the most fascinating processes in the world. But then human reasoning in general is amazing. Please allow me to flog my free textbook on this:
        http://www.stemed.info/reports/Wojick_Issue_Analysis_txt.pdf

      • Thank you David. Interesting.
        In climate science, I see major issues about the following viz.
        data collection methods
        spatial and temporal adequacy of sampling
        data ‘adjustments’ for various biases
        data analyses, selection of statistical methods
        interpretation, extrapolation, exaggeration of impacts
        computer model validation
        relevant physical theories
        the uncertainty monster
        literature reviews tainted by selection bias
        political activism and ‘social engineering’
        economic implications and (in)efficacy of the ‘solutions’ proposed

        The technical merits and appropriateness of the statistical methods under discussion here would be beyond the understanding of most (including myself) but serve to confirm that the science is far from settled.

        PS Unfortunately I have personal, extremely expensive experience of a lemon of that particular brand involving failure of the turbo diesel fuel injection system ($7,000), differential ($800), transmission ($2500), fuel pump, water pump (x2), alternator (about $800 each), indicator switch module ($300), fuel tank cap lock ($200), fuel contamination sensor ($600) and the support bracket for the water pump (can’t remember the cost of this). The ‘tree’ amounts to a trunk – any ‘branches’ would be quite insignificant and redundant. All of which outweighs any hypothetical arguments to the contrary. Guess what advice I give my friends?

    • Personally I found that pretty illuminating. I was particularly impressed by their answer to the obvious observation that switching from 100y to 1200m is not going to add any information about centennial autocorrelation.

      Or should I say, lack of actual answer.

      • Personally I found that pretty illuminating. I was particularly impressed by their answer to the obvious observation that switching from 100y to 1200m is not going to add any information about centennial autocorrelation.

        Or should I say, lack of actual answer.

        But it will certainly improve on the counting statistics. I bet all their Hurst coefficients go towards 1/2 as they pick up more data.

  4. Will the authors be addressing the adequacy of the 200-year temperature records either here or in the follow-up paper? I am specifically interested in the use of data sets with known biases in the 1800s.

    Also of concern is the use of such a small, closely spaced sample set. While I surmise that this is done only because of the lack of other records having such consistency along with length, the small sample size makes any conclusions about climate only a local observation.

    Finally, the appearance, even in this small sample, of an increased slope over the last 30 years

    • … is interesting but not addressed (as I believe you were only looking at longer trends).

      Sorry about the split post. It’s hard to edit on a smartphone…

    • This is the citation of the paper addressing the temperature records at Hohenpeissenberg.

      “Revision and necessary correction of the long-term temperature series of Hohenpeissenberg, 1781-2006” by P. Winkler (Theor. Appl. Climatol. (2009) 98:259-268), doi:10.1007/s00704-009-0108-y.

      • The authors are playing tricks with the European cities data.

        Consider that Figure 2 of the “How natural …” paper only has runs out to about 20 years, and they label the curves with Hurst exponents mainly above 0.5.

        Compare that to Figure 4 of the other paper, where they extend the data to 100 years, and then don’t show the Hurst exponents. Well, I looked at them and they aren’t much different from 0.5, which is plain random-walk territory. You can see this clearly on the bottom-most Vienna curve, where it goes straight through the (X, Y) coordinates (10, 0.01) and (1000, 0.1): sqrt(X2/X1) = sqrt(1000/10) = 10 = Y2/Y1 = 0.1/0.01, which indicates classic random-walk scaling.

        So why don’t they just use the random walk behavior in their interpretation as it seems to fit the data adequately?

        BTW, the DFA analysis has the equivalent of a Nyquist criterion whereby the longest correlation length used for log-log fitting is only half of the actual time series. This is necessary to balance the sampling over all the L intervals. That is the reason the numbers only go up to about 100 years whereas the actual time series runs 200 years.
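
        Written out, the check described above is just the slope between two points read off the log-log plot (reading coordinates off a published figure is rough, so the numbers are only illustrative):

        ```python
        import math

        s1, F1 = 10, 0.01     # two (scale, fluctuation) points quoted from the figure
        s2, F2 = 1000, 0.1

        alpha = math.log(F2 / F1) / math.log(s2 / s1)
        print(f"alpha = {alpha:.2f}")  # 0.50 is the uncorrelated, random-walk-type value
        ```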

    • Jaytee @8.12am

      Are you aware of the various projects carried out by Phil Jones and D Camuffo on that very subject?
      Here is one of them, and they also carried out another under EU auspices.

      http://www.isac.cnr.it/~microcl/climatologia/improveb.php

      tonyb

  5. Prof L., for those who can read a German newspaper, your comments in the German media amount to being a press secretary for a transparently free market think tank. Do you think you have any significant biases in your presentation of the science?

    Let me answer the obvious: yes.

    Sadly, there is a lot of very stupid stuff on the EIKE site, as anyone can see – including you, since you at least attempt to argue on a scientific basis. However, your selection of e.g. Lindzen as a significant influence on your understanding deserves major criticism.

    I prefer to get my climate science information from the U of Heidelberg’s Institute of Environmental Physics and currently working scientists, thanks very much.

    • Interesting analysis from a German perspective Martha.
      I seem to recall that the German people underwent a large scale, decade long, social experiment whereby they removed the adversarial system from their criminal justice apparatus, so as to improve it.
      During the period where the German state removed advocates for the defense of the accused in courts they certainly increased the speed of the process and also increased the likelihood of a Guilty verdict.
      However, it is my understanding that eventually the experiment was concluded to be a failure, and that German society understood the free expression of views was an important safeguard in pursuit of the truth.
      I was unaware that many in Germany were prepared to have another crack at it.

      • Very dry, and funny

      • Yes. Nazi comparisons always rank high on my list of favorite topics for humor.

      • I find the idea that one can castigate someone for
        “being a press secretary for a transparently free market think tank”
        authoritarian.
        What is wrong with free market think tanks?

      • I don’t think there’s anything inherently wrong with free market think tanks. I find some of the analysis that comes from them to be interesting.

        But the point is that there is relevance to the origins of scientific analysis. It isn’t dispositive, but it is relevant. You will find arguments about such relevance in virtually each and every one of these threads from “skeptics.” It runs both ways.

      • Joshua,
        So ‘denier’ is going to get condemned by you?

    • It upsets me to see such associative smears of Lindzen, who is deservedly well respected by his peers whatever they might think of his objections to the consensus hypothesis. Lindzen’s criticisms focus on the magnitude of positive feedback; perfectly valid, since no positive feedback has actually been demonstrated in the observational data, only in unvalidated models. The difference between the enhanced greenhouse effect hypothesis with and without this feedback is, essentially, the difference between mild, beneficial warming and severe, potentially dangerous warming. Neither is he a free marketeer … though the relevance of that to the science escapes me.

      The big problem in this debate is the huge amount of smugness too many people exhibit just by proclaiming their belief in the ideology that man must somehow be destroying life on earth by our very existence, and to hell with what the data says. Nor have most of this smug brigade ever actually done anything in the least ecological to justify their smugness.

      Memo to the smug: Stop pretending that a belief in thermageddon somehow makes you automatically better than those who don’t believe it.

      I prefer to get my science from those who talk common sense rather than a mixture of platitudes, cheap smears and downright lies.

    • Poor Martha.

    • Martha: What are your scientific credentials that qualify you to judge Lindzen’s work?
      What are your scientific credentials qualifying you to judge Lüdecke’s papers?
      The climate-related research in Germany is heavily biased towards CAGW just because the Government, the Media, the Industry (including banking and insurance), etc. only support research that “proves” CAGW, no matter how silly this research might be.

      No statement supporting CAGW = no research funds. It is thus not surprising that most research that may lead to “inconvenient” outcomes is done by retired scientists, many of them respected emeritus professors with long academic careers, who live on their pension funds and don’t fear marginalization or cuts in money supply. (The authors of the papers of this blog are retired.) In climate research it requires considerable stamina to swim against the current, and even to stay firm, as Judy exemplifies.

      You may attend the forthcoming conference about climate and energy sponsored by EIKE (on November 25-26, 2011 in Munich); among the presenters are people you may know and like: Svensmark, Shaviv, Veizer, Montford, Horner and D. Laframboise (not yet included in the program below, which is outdated).
      http://www.eike-klima-energie.eu/fileadmin/user_upload/Bilder_Dateien/4th_climate_energy_conference_munich/Programm-Klimakonferenz-Muenchen_2011_EIKE_01.pdf

    • Martha, what was your point?
      I would consider your arguments as argumentum ad hominem and argumentum ad auctoritatem. A typical reaction from the AGW crowd when the results are not to their liking.

    • What part of ‘technical thread’ is lost on you?

      • This is Martha’s version of a technical comment: naming her favorite expert on the particular topic.

      • So much for this “technical” thread Dr. Curry. Different day same circle.

        Care to guess which political culture Robert, Martha and Joshua belong to?

      • “This is Martha’s version of a technical comment: naming her favorite expert on the particular topic”

        Oh, Judith, you’re so funny. You’re making things up again. :-)

        The main point is that Lindzen is wrong and this (as well as how and where this influences Professor L’s understanding) is already well documented in the science. The resulting analysis and argument defies both the physical data and observed reality. That is why it is deserving of serious criticism.

        I also commented on the obvious motivated bias of EIKE, and provided a link that facilitates some basic evaluation of the quality of EIKE information and EIKE sources, as just one example. Since you know how to use Google, my expectation was that you could see many other examples for yourself, Judith. Yet you say you know very little about EIKE, so I offered an example of what sorts of things are known and discussed on the blogosphere, just to fill you in a bit so that you don’t remain entirely ignorant (and can’t pretend to be).

        However, what I actually suggested is the Institute for Environmental Physics, University of Heidelberg, as an ‘expert’ source for climate science information and analysis, from German scientists. ;-)

      • Wolfhard Herzog

        Dear Martha,

        Trusting the scientific competence of the Institute for Environmental Physics, University of Heidelberg, is quite a good habit. Lüdecke does so too.
        If you had read his paper completely, you would have found the name A(ugusto) Mangini. Do you know where he is working?

      • Dang! You beat me to that comment.

  6. Many of the issues that have arisen in relation to the papers by Ludecke et al may well have been overstated by Richard Tol and were insufficiently backed up by appropriate citations in his eagerness to get his criticisms posted. That’s just a fact of life when a blog is used as a medium of academic debate.

    My preference for any academic paper is for its title and objectives to be clear; the data source and experimental design to be clearly set out and based on generally accepted methodologies; and the results of the experiment to then be stated, along with the confidence with which the reader is entitled to rely on them.

    If indeed the intention was to show that natural variation is mainly responsible for the 20th century temperature fluctuations, then Richard Tol is correct to criticise the papers for not having 19th century data included in the data set – or why not at least perform the same analysis on the 19th century data so as to obtain a reference point or control?

    I thank all contributors to the latest few threads because I found the discussions an absorbing learning experience. In particular, Dr Curry is to be commended for allowing these issues to be explored in full on her blog.

  7. Norm Kalmanovitch

    Forcing is stated as an energy flux in watts per square metre, while a change in global temperature is based on energy, which must be stated in energy units such as kWh that include both the energy flux and the time over which this flux operates.
    Because of this shortcoming the term “climate sensitivity” needs to be introduced to relate forcing to global temperature. The values for climate sensitivity are “all over the map”, ranging from well under 0.1°C for each W/m^2 to 0.75°C for each W/m^2, the latter giving 2.78°C of global temperature change for a doubling of CO2, according to 5.35 ln(2) = 3.71 W/m^2 and 3.71 × 0.75 = 2.78°C.
    There is no physical justification for this climate sensitivity term, and it appears that it is derived in a fashion purposefully designed to produce enough global warming from CO2 to make the case for AGW.
    The only possible physical basis for this climate sensitivity is the 0.6°C of measured global temperature change that occurred as the CO2 concentration increased from the preindustrial level of 280 ppmv to 380 ppmv.
    Since at least 0.5°C of this 0.6°C is natural warming as a recovery from the Little Ice Age, this leaves only 0.1°C attributable to CO2 increases.
    According to the forcing parameter of the climate models,
    5.35 ln(380/280) = 1.634 W/m^2, and 0.1°C/1.634 gives a climate sensitivity of just 0.061°C for each W/m^2.
    Even if we attributed the entire 0.6°C in full to CO2 forcing we would still only get 0.367°C for each W/m^2, and this leaves the climate-model forcing of 3.71 W/m^2 for a doubling of CO2 at just 1.362°C, which is well below the 2°C critical value for concern stated by the IPCC.
    R. Tol claims “However, the question is not yet answered which forcings cause the actual global temperature change”, but the real question not yet answered is which climate sensitivity properly depicts the actual global temperature change, or more importantly whose fabricated climate sensitivity value will provide the most alarmist prediction of global temperature change!
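
    Collecting the arithmetic of the comment above in one place (this merely reproduces the commenter’s own numbers and assumptions – the 5.35 ln(C/C0) forcing expression and the 0.1 °C / 0.6 °C attributions are taken from the comment, not endorsed here):

    ```python
    import math

    def forcing(c_ppmv, c0_ppmv=280.0):
        """Simplified CO2 forcing expression used in the comment: 5.35 * ln(C/C0) [W/m^2]."""
        return 5.35 * math.log(c_ppmv / c0_ppmv)

    f_now = forcing(380.0)    # ~1.63 W/m^2 for 280 -> 380 ppmv
    f_2x = forcing(560.0)     # ~3.71 W/m^2 for a doubling

    for warming_assigned_to_co2 in (0.1, 0.6):   # deg C, the two cases in the comment
        s = warming_assigned_to_co2 / f_now      # implied sensitivity, deg C per W/m^2
        print(f"{warming_assigned_to_co2} C -> {s:.3f} C/(W/m^2) -> {s * f_2x:.2f} C per doubling")
    ```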

    • “There is no physical justification for this Climate Sensitivity term.”

      That is exactly what is wrong with climate science. Climate sensitivity implies it is known what percentage of the atmospheric effect is due to CO2. It is not known. It is estimated to be between 5% and 30%. All sensitivity studies would have to be referenced to the same percentage impact, or the uncertainty clearly indicated in any analysis.

      Well said Norm!!

  8. Specific question for Richard:
    Do your criticisms also apply to Lennartz and Bunde?

    • In many ways, LLE copy L&B. L&B do not reach any strong conclusion (except that a moderate trend of 100 years is less natural than a strong trend of 50 years) which is in line with the weaknesses of the method.

  9. Dikran Marsupial

    “This has been an interesting, albeit controversial, experiment in “blog” science.”

    The idea of “blog science” is nonsense (it either is science or it isn’t). There is a good reason why science has long abandoned public debate as a means of establishing the truth and opted for peer-reviewed journals instead, which is that “debates” are won and lost on rhetoric, not science. Rhetoric seeks to win an argument, whether the argument happens to be true or not, unlike science, which seeks to determine the truth wherever it may lie. If you doubt this, just compare the number of ad hominems and insults that crop up in “blog science” and in the peer-reviewed literature. Blog science is merely a return to a method of pursuing science that has already been found to be faulty and been rejected. Those who are unable to learn from history are doomed to repeat it.

    However, there is nothing wrong with discussing science on blogs, that is a very good thing, just don’t think that what we are doing here actually is the science.

    • My point is diametrically opposite to yours. I know you mean well, but you’re misled.

    • Pal Review is not science. A clique of like minded gatekeepers rejecting papers that threaten their flow of grant money is disgusting. All communications (secret and non-secret) between reviewers must be made public along with the paper.

      • Dikran Marsupial

        again, like the previous comment (now apparently deleted), you couldn’t have made my point any more clearly (as your comment is essentially an unsubstantiated ad hominem against the scientists). I do have some sympathy with the idea of reviews being made public, but that isn’t blog science.

    • Dikran, I disagree. In addition to peer reviewed journals, science makes extensive use of conferences. Most presentations include a public debate, in the so-called question period. This is where the big fights occur. The scientific frontier is a land of argument. Happily, blogs are extending this medium.

      In contrast the journals are almost ridiculously circumspect. No one ever directly says that someone else is wrong. You have to be an expert to read it between the lines, but it is there.

      The present exchange is certainly science. It is in fact a case of what is called “post-publication peer review,” something that is being widely discussed in the scholarly publishing industry. Many journals and portals have added comment capability for just this reason.

      • Dikran Marsupial

        Conference presentations almost never involve debate. There are questions and answers at the end of talks, but almost never long enough to resolve any important issue. I’ve been going to conferences for the best part of 20 years and I have never seen a “big fight”. Where there is more lengthy discussion, it usually takes place over lunch/dinner etc., not during the conference sessions themselves.

        Journal articles do not personalise the issue, or stoop to rhetoric (at least in reputable journals), which is one reason why journals are better than “debate”. However, that does not mean they do not explicitly state that a theory or argument is wrong.

        It would only be “post-publication peer review” if we were actually “peers”, which we (generally) are not. Very few people conducting “blog science” are actually competent to provide a reliable review. Unqualified “interpreters of interpretations”, regardless of how well intentioned they may be, are mostly just adding to the noise.

      • Debate is not about resolving issues, it is about elaborating them. But perhaps we go to different conferences.

        As for bloggers not being peers, you are making an empirical claim which I think is false. For example, there are a lot of very competent people on this blog.

      • Dikran Marsupial

        No, discussion is about elaborating issues; debate requires the participants to hold opposing views, which is not a requirement of “elaboration”.

        “Very competent” is not the same as “peer”; the “peers” are those who excel (rather than being merely “very competent”) in the particular field of study involved in the paper. There are very few in that category here, and curiously enough, one of those (Prof. Curry herself) seems rather unwilling to comment on the science; the blog would be far more useful if she did.

      • Dikran, going back to your original claim, as articulated by our discussion, you are now claiming that science has abandoned debate in favor of review solely by those who “excel” in the relevant topic. I think not and I hope not. Science is about far more than what three people chosen by an editor think about one’s article. Blogs do not change this, they just make it more public.

      • Dikran Marsupial

        Peer review is only the first step in the acceptance of a scientific idea; the process doesn’t end there. It is basically a sanity check, little more than that. However, you need to be an experienced scientist actively working in that field to provide that sanity check; it is a bit Dunning-Kruger to think that we (myself included) can provide a reliable review of the science (although there are a few here who can). That doesn’t mean that science isn’t discussed outside the journals, just that peer-reviewed journals have become the primary means of progressing the science, and it works.

        I said science has dropped formal debate (i.e. get both parties together in a debating chamber and let them go for it), as it simply doesn’t work – such debates are won by the better orator, not by the person that is actually right.

    • Dikran
      You wrote:
      “There is a good reason why science has long abandoned public debate as a means of establishing the truth and opted for peer-reviewed journals instead, which is that “debates” are won and lost on rhetoric, not science.”

      Until very recently there was no means to rapidly exchange information with a large audience as there is today via the internet. Given that the traditional “peer review process” has been demonstrated to produce garbage results that are intended to advocate a position, this seems like an excellent exchange.

      • Dikran Marsupial

        Rapid exchange of information is one of the problems with blog review. Peer review takes time, which means that the reviews and the responses are well considered. Thus the signal-to-noise ratio is generally higher.

        BTW the ad-hominem against the reviewers kind of makes my point about the problem with “blog science” being a rhetorical exercise rather than being designed to seek the truth.

    • Did you get to that Trenberth question I left for you? You asked for the quote and I supplied it.

      • Dikran Marsupial

        Yes, despite your childish behaviour on the other thread, I did. Rather ironic that you complained that I didn’t respond immediately (perhaps I am in a different timezone) and yet you continue to pester me for an answer AFTER I had already given it.

    • There is a good reason why science has long abandoned public debate as a means of establishing the truth and opted for peer-reviewed journals instead, which is that “debates” are won and lost on rhetoric, not science. Rhetoric seeks to win an argument, whether the argument happens to be true or not, unlike science which seeks to determine the truth wherever it may lie. If you doubt this, just compare the number of ad-hominems and insults that crop up in “blog science” and in the peer-reviewed literature.

      Assertion. Without open review we do not know what level of ad hominem and insult is involved in a closed review process, particularly when a paper that debunks another is reviewed by its subject’s author, as in, say, O’Donnell 2010.

      In my view there is only one route for ‘science’ to recover its credibility: total transparency of process.

      • Dikran Marsupial

        I would be in favour of reviews and responses to reviews being published. When I review a paper, one of the things that really irritates me is when I take the time to give a detailed review and make constructive suggestions for repairing the defects in the paper, and the authors only give lip service to them and do the minimum they think will get the paper through.

        Peer review is not necessarily closed review, and openness doesn’t mean that blog review is any better.

  10. You write:

    Our main task was to show that the 20th century warming was predominantly a natural fluctuation.

    Here you claim that you can pinpoint the reason for the warming as a natural fluctuation. That’s really an unbelievable claim, one that can be justified only with a model that is known to have persistence to the extent that it can produce pretty accurate predictions. That is, it requires a model based entirely on information from earlier periods that makes a prediction with a small uncertainty and in good agreement with the later observations.

    I don’t think that you have anything like that. Thus that claim appears to be totally wrong.

    • Pekka: The claim is not necessarily wrong. The claim is not supported by the analysis. It is therefore wrong to make the claim.

      • Richard,

        That’s what I meant. My statement was about the evidence provided by their analysis, and about the evidence that statistical methods of the type they have applied can ever produce.

        Statistical analysis may show that a time series is consistent with a red-noise type of model, but it can never show that the time series has been produced by red noise: some non-noise mechanisms may be indistinguishable by statistical methods from a particular set of data, such as a single time series, yet still contain a lot of deterministic behavior that becomes visible when the time series is combined with additional information.

      • For those following along, red noise is the random-walk model of temperature excursions. I believe red noise fits the European cities data that Ludecke uses as well as the Vostok Ice Core data that I use (up to the interglacial periods, where it stops working).

        The problem is that my calibration of the strength of the random walk movements (i.e. the diffusion coefficient) is much smaller than what Ludecke et al. get, based on how much they claim that natural variability will account for the recent warming trends.

        Their evidence is very weak and obscured by their focus on Hurst exponents, which just muddies the waters.

      • Random-walk models fit quite well to almost any temperature record (that should be quite evident on the basis of the physical processes involved). However, the increments are not completely independent, and thus it seems quite evident that a bounded random walk would be a much better model.

        Unbounded random-walk models that are constructed using the distribution of the increments of real temperature series seem to produce much more “natural” variation than the real world shows.
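
        A minimal sketch of the unbounded-versus-bounded distinction being drawn here, with invented step-size and pull-back parameters (nothing below is taken from the data under discussion):

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 1000
sigma = 0.1   # illustrative year-to-year step size, degrees C
k = 0.02      # illustrative pull-back strength for the bounded walk

steps = rng.normal(0.0, sigma, n_years)

# Unbounded random walk: each year's anomaly is last year's plus a random increment.
unbounded = np.cumsum(steps)

# Bounded (mean-reverting) walk: the same increments, but with a drag term that
# pulls the anomaly back toward zero in proportion to its current excursion.
bounded = np.zeros(n_years)
for t in range(1, n_years):
    bounded[t] = bounded[t - 1] - k * bounded[t - 1] + steps[t]

# The bounded walk wanders, but stays much closer to its starting level.
print(round(unbounded.std(), 2), round(bounded.std(), 2))
```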

      • Pekka – This is exactly the point I try to make in my comment well below. A statistical analysis may be able to show that the 20th century rise in temperatures is consistent with a certain statistical model of persistence (i.e., we can’t rule out natural causes). However, because a statistical analysis doesn’t provide a specific physical reason(s) for the rise, you cannot reach conclusions as to whether the rise is due to natural or anthropogenic causes or (most likely) some combination of both.

      • Web,
        “That amount of energy is only impressive if it happened over a short period of time, which is the current situation.”

        The change in surface temperature from the 1998 El Niño to the 2000 recovery was about 0.26 degrees, or ~2.5×10^22 joules per year if you consider 1999 as the transition year. That was the excursion I mentioned after you estimated +/- 0.2 degrees.

      • Pekka,

        I think that you have – more or less – completely misunderstood the primary purpose of this kind of statistical analysis (which is, or should be, normal procedure in any field of science today). The point is that showing that some time series is consistent with, e.g., a Wiener process is sufficient to place an extra burden of proof on those who claim that there is something behind the process other than just random fluctuations – not the other way around.

        Especially good justification should have been required when simplistic statistical methods are used that are otherwise known to, e.g., produce spurious trends with such time series – not just the assertion that we “know” there must be AGW lurking behind the data.

      • Eis3Tudi – I think that “consistent” is too broad a concept to impose a burden of proof on anyone. Almost anything can be shown to be consistent with anything else if we allow extremely low probabilities to be included.

        However, you are right that correlative data alone that might also result from random fluctuations would be inadequate confirmation of anthropogenic influences if that were the only evidence. The evidence for those influences, including data inconsistent with the conclusions of the authors of these papers, has been discussed in hundreds of pages in this blog, in thousands of articles in the literature, and in many dozens of other blogs. It is far too comprehensive to summarize here, including the arguments for or against the validity of specific items. I believe that this is what Pekka had in mind in his comments, although he will clarify his views himself if he chooses to comment further. My purpose in commenting is to defend him against the charge that he “misunderstood” what the authors were up to. My concern is that the authors themselves may not have completely understood how to interpret the results of the process they used.

      • Eis3Tudi,

        Evidently I could not express clearly enough what I wanted to say. The message was short on purpose and written assuming that everyone who reads it understands what you wrote.

        What I’m protesting is the use of methods that are known from the outset to be powerless in the particular situation, and the interpretation of the results as if the method had really brought up something new.

        In this specific case one can see just by looking at the data that there’s no hope that one could learn something relevant by calculating the Hurst exponent. The number of years and the dominance of large fluctuations tell that it’s not a reasonable approach. A paper based on such an inappropriate approach is not worth publishing. Even more condemning for the paper is that it emphasizes conclusions that cannot be supported by this kind of analysis.

        Being sure that some particular analysis is good always requires careful attention to the details; knowing that a worthless analysis is worthless is often very easy, as it is here. Of course people who conclude that an analysis is worthless may sometimes err, but that’s not common when the case is as clear as it is here.

        Many people have reached the same conclusion based on related arguments, which may be expressed in many different ways, but which all come back to the same point: the method is not applicable to this task and its inapplicability is obvious.

      • Brandon Shollenberger

        I find it humorous how many people are criticizing these authors for their approach without mentioning the fact that the authors didn’t create it. Somehow, despite discussing how wrong the approach supposedly is, they don’t even bother to discuss the Lennartz and Bunde paper where the approach was created.

      • Eis3Tudi,
        Thanks – between you and Spence elsewhere in this thread, I am getting some great ideas for how to model the fluctuations.

        What I did this morning was to take the Vostok data and go through the DFA analysis as a start. This gives the characteristic power-law correlation at shorter times and then it flattens out at longer times starting at about 10,000 years.

        I think this behavior has a lot to do with the climate constraints of the maximum and minimum temperature rails, set by the negative feedback of S-B at the high end and various factors at the low end. So to model the historical variations over a place like Vostok, we start with a random walk and modify it with an Ornstein-Uhlenbeck drag factor so that as the temperature starts walking closer to the rails, the random walk starts to slow down. By the same token, you can give the diffusion coefficient a bit of a positive-feedback push when it starts to go away from the rail.

        This doesn’t take much Monte Carlo simulation code at all to cook up, and the results from 8 sequential non-cherry-picked runs of several hundred thousand years apiece are shown below.
        http://img266.imageshack.us/img266/1659/vostokgvaimages.gif
        (click on the image to resize if your browser supports it)

        Look especially at the lower left MC run. I am not suggesting that some external forcing factors aren’t triggering some of the movements, but it is interesting how much of the correlation structure shows up at the long time ranges just from random walk fluctuations.
        I have two parameters in these curves, a diffusion coefficient of 0.0008 degrees^2/year and a drag parameter of 0.005, where the drag causes a reversion to the mean for long times and the resulting hop rate is proportionally reduced according to how large the excursion is away from the mean.

        The question is whether the same random walk extends to the very short time periods ranging from years to a few centuries. Vostok doesn’t give this resolution but I bet some other data sets do. If we can somehow extend this set of data to a full temporal dynamic range, this would provide an interesting alternative fully stochastic hypothesis for climatic variation.

        I have the full code for the simulation which includes the DFA calculation at less than 40 lines of code available if anyone wants to look at it. I will eventually place this on my blog.
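
        For readers who want to experiment, here is a rough, self-contained sketch of the kind of scheme described above – not the actual simulation code. Only the two quoted parameter values come from the comment; the discretisation, the run length, and the use of a simple RMS-displacement curve in place of a full DFA are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000                # simulated years (arbitrary run length)
D = 0.0008                 # diffusion coefficient quoted above, degrees^2/year
drag = 0.005               # drag parameter quoted above, per year
sigma = np.sqrt(2.0 * D)   # per-year step size implied by the diffusion coefficient

T = np.zeros(n)
steps = rng.normal(0.0, sigma, n)
for t in range(1, n):
    # One reading of the scheme: a random hop plus a pull back toward the
    # quiescent point proportional to the current excursion.
    T[t] = T[t - 1] - drag * T[t - 1] + steps[t]

# Root-mean-square displacement versus lag, a crude stand-in for the DFA
# fluctuation function: a pure random walk gives a slope of 1/2 on a log-log
# plot, and the drag makes the curve flatten out at long lags.
lags = np.unique(np.logspace(0, 4, 25).astype(int))
for lag in lags:
    rms = np.sqrt(np.mean((T[lag:] - T[:-lag]) ** 2))
    print(f"lag {lag:6d} yr   RMS displacement {rms:6.3f} deg C")
```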


      • Fred,

        I am quite aware of how controversial this topic is. I admit that there may be some other very clear evidence about anthropogenic influences, but that does not justify “torturing” temperature statistics without proper statistical reasons. If you cannot definitely show that there is something other than red noise, then – that’s it – the only conclusion you can make is that there is not enough data and/or that there are some problems with data quality (if you are not ready to admit that it could be just random fluctuations). What you should not say is that we shouldn’t consider any nasty statistical nuisances because “we all know” that there is CO2 behind all this.

        (PS. Sorry about the awkward pseudonym – there was one not-so-nice incident on another blog that made me realize that it might be wiser to use random pseudonyms generated by Unix “pwgen”.)

      • Pekka,

        I am not defending the methods used in the papers that are discussed here. I agree that the length of the time series is quite likely too short to get reliable results. What I oppose are your comments about red noise and the possibility that all the observed warming trends are just caused by random fluctuations. There are other, simpler and more robust methods to show quite easily that temperature statistics are indeed similar to a (bounded) random walk.

        In any case, those who can show that temperature fluctuations are like a random walk do not have any additional burden of proof. It should be the case that those who claim that it cannot be just red noise, and that there is something more behind it, should prove their claims. Somehow you seem to be quite eager to turn this the other way around and impose a reverse burden of proof.

      • Eis3Tudi – These are interesting points – more interesting than the Ludecke et al paper in my view. If we oversimplify climate for the moment by neglecting all variables except CO2, we can make some confident inferences. In particular, we can say that the CO2 increases in the second half of the twentieth century caused warming that was non-trivial. This is a deterministic process, by which I mean that rather than saying the CO2 might have caused warming, we can say it must have caused warming. We know this from the physics describing the interactions of CO2 with infrared photons of appropriate frequencies, combined with atmospheric measurements, the radiative transfer differential equations, and other inputs that make it a “must” instead of a “might”. What we can’t say with certainty is exactly how much of the observed warming was due to CO2, although it could not have been very small.

        CO2, however, was not the only deterministic process operating. We can also say that other forcings almost certainly contributed (but probably not as much), and that unforced “natural fluctuations” of a stochastic nature might have contributed to the observed warming. The actual curves, therefore, do not exclude a contribution from the fluctuations, but they preclude the possibility that those fluctuations and/or other forcings eliminate an important role for CO2. This conclusion doesn’t come from trying to derive mechanisms from the curves, but from the basic physics.

        The above is, in my opinion, all deducible from the surface temperature data. Elsewhere, however, I have argued that ocean heat data showing an enormous increase in the total heat accumulated in the climate system during 50 years fairly conclusively excludes a major role for natural fluctuations – at least after 1950. Prior to that, we have insufficient data to do more than estimate that the same is probably true for the first half of the century, but it can’t be shown with any conclusiveness.

        I bring this up because we have seen commentary on the Hurst exponent and the possibility of spurious deterministic trends due instead to natural fluctuations operating over long timescales. If our only source of information were the observed surface temperature curves, with no knowledge of the physics, that possibility could not be excluded. From our knowledge of atmospheric physics, combined with the ocean heat data, we can in my opinion exclude more than a minor role for the natural fluctuations.

      • I offer to place this discussion in the perspective of the historical natural variability that occurs over hundreds of thousands of years.

        My premise is that the red noise / random walk statistics are a result of the earth’s climate inhabiting a very shallow energy well in its quiescent state. The well is broad, on the order of 10 degrees C, and relatively shallow. Within this potential well, the climate hovers in a metastable state whereby small positive feedbacks of CO2, water vapor, or albedo can push it in either direction. The small amounts of perturbation needed to move the global temperature reflect this shallowness.

        The model of a correlated Markovian random walk, whereby the perturbation can go in either a positive or negative direction each year, matches this shallow energy well model remarkably well. As Spence said elsewhere (apparently agreeing with Tol), no one has been able to reject this model for temperature variations. However, we can’t let the random walk grow without bound, as some temperature rails do exist, mainly from the negative feedback of the Stefan-Boltzmann law.

        So what we do is put a proportional drag on the random walk so the walk produces a noticeable asymmetry on large excursions from the quiescent point (I don’t call it equilibrium because it is really a quasi-equilibrium). This generates a reversion to the mean on the Markov random walk hopping direction, which is most commonly associated with the general Ornstein-Uhlenbeck process. That is essentially how I modeled the Vostok DFA fit elsewhere in this thread.

        This description in no way counters the GHG theory but is used to demonstrate how sensitive the earth’s climate is to small perturbations. The fact that the temperature in places such as Vostok follows this red noise O-U model so well, indicates that it should also be hypersensitive to artificial perturbations such as that caused by adding large amounts of relatively inert CO2 to the atmosphere. The red noise agreement is proof that the climate is sensitive to perturbations, and the CO2 we are adding is the added forcing function stimulus that the earth is responding to. For the moment it is going in the direction of the stimulus, and the red noise is along for the ride, unfortunately obscuring the observation at the short time scales we are operating under.

      • From the last comment here is an example of a simulated time series:
        http://img823.imageshack.us/img823/408/vostoktemperaturechange.gif

        The top curve is simulated but it has nearly the same statistical parameters as the Vostok curve below it.

      • Web,

        Is there enough good data to do the same with the GRIP? I think that had a 12 to 18 C range of temperatures.

      • WHT – I think I agree with your main points, although I wouldn’t refer to the boundaries of temperature excursion as “rails”. Rather, the Stefan-Boltzmann response (often called the Planck Response) can be thought of as an elastic leash that pulls back toward the equilibrium state at any level of perturbation but with increasing strength as the perturbation increases. So-called “positive feedbacks” such as water vapor reduce the pull at any given temperature but don’t change its ultimate stabilizing direction. A feedback that could destabilize would threaten us with “runaway climates” associated with extremely high values of climate sensitivity, and that is not how the components of our climate system appear to behave.

        A critical element of this, however, involves the cause of the perturbation. A forcing in the warming direction (e.g., from CO2) will induce a flux imbalance at the top of the atmosphere, causing the system to take in more energy than it gives out, thereby warming the surface and storing heat in the ocean. Conversely, an internal fluctuation will warm the surface by transferring heat to it from another compartment, with the ocean being the only compartment with sufficient heat storage to do this. In each case, the Planck Response will stabilize the system by reducing any flux imbalance toward zero – including any imbalance induced by feedback responses to an internal fluctuation. However, in the case of the internal fluctuation, the surface warming will be accompanied by heat loss and cooling by the ocean, in accordance with the First Law of thermodynamics. The water vapor or other feedbacks may permit the leash to stretch further before an equilibrium (or “quasi-equilibrium”) is restored, but won’t change the direction, which will continue to be toward a restoration of equilibrium rather than a further departure. For this reason, I would suggest that while surface temperature might randomly “walk” fairly far from either type of perturbation during an observed interval, there is no known climate behavior that can walk the climate into accumulating 10^23 extra joules of ocean heat at the same time that the surface is warming – that would require extremely destabilizing climate dynamics.

        Over the hundreds of thousands of years that you mention, initial perturbations have typically involved small forcings from changes in the amount of solar energy the Earth receives, mainly through the “Milankovitch” type changes in orbital relationships, although changes in volcanism and in the positions of the continents have also operated. Feedbacks have amplified the temperature responses (but again have not changed its direction or destabilized our planet into a “runaway” toward a Venus-like “hothouse”).

      • For this reason, I would suggest that while surface temperature might randomly “walk” fairly far from either type of perturbation during an observed interval, there is no known climate behavior that can walk the climate into accumulating 10^23 extra joules of ocean heat at the same time that the surface is warming – that would require extremely destabilizing climate dynamics.

        That amount of energy is only impressive if it happened over a short period of time, which is the current situation.

        In historical terms, similar amounts of change must have occurred as well, because I am sure that interglacials turned over that amount of energy. So I am not impressed by the size of the change: we are dealing with a large world, and given enough time, albeit at a slow pace, that could easily happen. This part is meant only to explain the Vostok data and natural variability.

      • WHT – 10^23 joules is a lot of heat, but what is more important is not the time it takes, but the fact that it is concurrent with surface warming. No matter how fast or slow that heat accumulates in the ocean, concurrent surface warming requires an external forcing, since an internal fluctuation that adds 10^23 joules to the ocean would have to subtract a similar quantity from the energy available to warm the surface and be radiated to space from the surface and atmosphere. This would require the surface to cool rather than to warm. The First Law tells us that for both the surface and ocean to gain heat requires an external forcing. The feedbacks I mentioned above would be capable of modifying this relationship only slightly and temporarily, because of the stabilizing effect of the Planck Response.

      • Eis3Tudi

        I am not defending the methods used in the papers that are discussed here. I agree that the length of the time series is quite likely too short to get reliable results. What I oppose are your comments about red noise and the possibility that all the observed warming trends are just caused by random fluctuations. There are other, simpler and more robust methods to show quite easily that temperature statistics are indeed similar to a (bounded) random walk.

        My purpose is not to make any claims about the sufficiency of red noise for the actual warming. What I do claim is that similar enough short time series can be produced by red noise when no strong limits are set on the strength of the noise and the length of the autocorrelations. Up to that point I do agree with the Lüdecke et al. conclusions. By similar enough I mean visual appearance and linear trend over the period. Such a noise model may be of the Hurst type without much restriction on its ability to create such time series.

        The fact that suitable red noise can produce such time series is not a proof that the model is reasonable. There are other ways of studying what kind of red noise is within likely limits. The Lüdecke et al. paper was just a complex way of showing an obvious thing of no scientific novelty and little other value.

        I have found that the literature on technical analysis of financial time series is quite revealing about how easy it is to produce quite surprising time series from red noise. In reading such literature, one learns how little weight one can put on a single noisy time series in reaching conclusions about any property of the actual process.

      • Fred Moolten November 11, 2011 at 11:56 pm

        10^23 joules is a lot of heat, but what is more important is not the time it takes, but the fact that it is concurrent with surface warming.

        We’d be much more worried if the SST wasn’t going up in synch with the ocean heat content, since the oceans are strongly coupled to the surface atmosphere.

        Anyway, putting that aside, the oceans are pretty big even if we only go down to 700 m. Also, this isn’t a closed system. Perhaps your focus on stocks is getting things out of proportion – think about the flux required to create the recent increase in ocean heat content and its size in comparison with the total incoming flux to the system.

      • We’d be much more worried if the SST wasn’t going up in synch with the ocean heat content, since the oceans are strongly coupled to the surface atmosphere.

        HAS – SST and ocean heat content moving in opposite directions is exactly what happens during some phases of ENSO. It is a hallmark of internal climate fluctuations in the absence of a forced trend that changes the total amount of energy in the system. With constant energy, the First Law of thermodynamics dictates that a warming in one component of climate will be associated with a cooling in another.

        The observation that the surface and ocean heat content moved in synch over a long interval precludes a major contribution to the warming from internal fluctuations.

      • Fred Moolten November 12, 2011 at 10:51 am

        That was rather a throwaway comment to make the point that over time the heat content of the ocean and SST are likely to move together.

        One of course does need to be careful when talking about “concurrent warming” because of the differing detail masked by global average stocks – in timescales (e.g. daily, seasonal) and spatial variations (North Atlantic vs the rest), not to mention combinations of the two (ENSO, as you mention). One of the results of stochastic modeling of the climate is that it may help to understand the way in which short-term white noise can translate into longer-term red noise, and hence how some of this detail can be incorporated in longer-timescale models in a systematic and orderly way.

        I had however thought your argument was that the increase in the ocean heat content was so large it could only come from CO2.

        The point I was making is that this is a dynamic open system. It is the fluxes that are important (time series analysis reinforces this). So if you are worrying about the amount of heat appearing at one point or another in the system, just saying that there is a lot of energy over here that wasn’t there before isn’t conclusive proof that it came from a particular source within a particular subsystem. Seeing me climb to the top of a hill doesn’t tell you what I had for lunch.

        In the context of this thread, doing time series analysis on temperature ignores where it all might come from. It just gives probabilities that the time series is drawn from the output of a process with certain characteristics. Periodicity (is that the word?) or apparent linear relationships in the signal may arise from anywhere.

        But just to repeat: this is a dynamic open system, and it needs to be analysed in those terms.

      • Web,

        Is there enough good data to do the same with the GRIP? I think that had a 12 to 18 C range of temperatures.

        I looked at Greenland data from this site
        http://www.ncdc.noaa.gov/paleo/metadata/noaa-icecore-2490.html
        ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/greenland/summit/ngrip/ngrip2006d15n-ch4-t.txt

        This data set is problematic because they present it in 1-year intervals, but it is clearly reconstructed, because little noise is observed at scales up to around 100 to 300 years. The data is highly smoothed and it looks like a spline has been applied to fill in all the missing points.

        The fluctuations are quite a bit bigger than the Vostok data, with a random walk diffusion coefficient correspondingly higher.

        I also looked at one of the European cities that they looked at, specifically Vienna from here:
        http://www.zamg.ac.at/histalp/downloads/SM/HISTALP-T01-WIE.zip
        http://www.zamg.ac.at/histalp/Statmod_AT_T01.html
        That is just white noise as far as I can tell. I don’t know how Ludecke et al. can get any coefficients off of that data.

  11. To keep this brief, I basically agree with Peter Davies above. Not being well-versed in statistics, but generally scientifically literate (I like to think), I would say:

    1) The generalizations in the paper are too sweeping.
    2) Tol’s criticism is too sweeping.
    3) The above response is rather weak, mostly reading somewhat like “didn’t you read what I wrote before”?

    Everyone should read Pekka’s comment:

    http://judithcurry.com/2011/11/08/tols-critique-of-the-ludecke-et-al-papers/#comment-136183

    I fear it will get lost because of this new thread.

    The best of all of this is the extended discussion between folks like Brandon Shollenberger, Richard Saumarez, MattStat, Steve Mosher and others whom I may have forgotten – particularly the notion that the statisticians and the electrical engineers are talking about a lot of the same things, but in different languages. I hope Ludecke et al.’s forthcoming post addresses some of these points.

    Finally, I both agree and disagree with Dikran Marsupial above. I agree that this is not blog science – this is a blog discussion of science. However, blog science is certainly possible and occurs on climate blogs, particularly on skeptic blogs, as (generally skeptical) and knowledgeable folks come up with hypotheses and test them in the blogosphere.

    I think the example of open computing shows us that open science is definitely possible.

    • Bill –

      I have a question but don’t want to put it in this thread. If you’re interested in answering, find my question here:

      http://judithcurry.com/2011/11/07/disinformation-and-pseudo-critical-thinking/#comment-136211

    • BillC,

      I could add to my comment in the other thread that looking at the time series also tells one immediately that there is absolutely no hope that they could say anything about the Hurst exponent. The temperature history since 1900 is so strongly influenced by a few major movements that it’s ridiculous to even think that they could tell the value of the Hurst exponent.

      The Hurst exponent tells how autocorrelations die off with increasing time interval, for long intervals. The actual temperature history is most definitely such that it doesn’t allow such estimation. There are time series for which the Hurst exponent may be a valuable characteristic, but there are others for which it certainly is not. The temperature history since 1900 belongs to the latter.

      In general one should always look critically at the data and think about which methods of analysis may be useful. It’s far too common to apply statistical methods to data for which they are totally inapplicable. This is a prime example of that.

      • The Hurst exponent tells how autocorrelations die off with increasing time interval, for long intervals.

        No. The Hurst exponent characterises the autocorrelation function across ALL time intervals, not just long ones.

        If you use the right estimator (and I’m not suggesting that DFA is the best estimator to use), you can get an adequate estimation of the Hurst exponent from a series as short as the satellite record. Of course, due to the nature of the Hurst exponent, it is better to get a long record, but this does not rule out getting a reasonable value from a shorter record.
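
        For the curious, here is a bare-bones first-order DFA estimator of the general kind referred to above – one possible estimator among several, and (as noted) not necessarily the best; the scales and the synthetic test series are invented for illustration:

```python
import numpy as np

def dfa_exponent(x, scales):
    """Estimate the scaling exponent of a series with first-order DFA.

    x is treated as the (stationary) fluctuation series; for fractional
    Gaussian noise the returned exponent approximates the Hurst exponent.
    """
    y = np.cumsum(x - np.mean(x))            # the profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        resid = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            resid.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    # The slope of log F(s) against log s is the estimated exponent.
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

# Sanity check on ~30 "years" of monthly white noise: the exponent should be near 0.5.
rng = np.random.default_rng(0)
print(round(dfa_exponent(rng.normal(size=30 * 12), [8, 16, 32, 64, 128]), 2))
```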

      • Assuming that there’s a Hurst exponent valid over a wide range of intervals, you may determine it from part of that range, but that involves a crucial assumption whose validity is never obvious.

        In this case the existence is known only when it has been verified, and only for the range checked.

        Nothing in that changes the conclusion that the whole idea is without basis for the temperature record since 1900. That can be immediately concluded from the nature of the variations.

      • Of course, it is virtually impossible to differentiate certain models. I can create an ARMA model with a very long time constant (20 kyr+) and a series based on Hurst-Kolmogorov dynamics, and show that very long records are needed to differentiate between the two models. This is largely academic though, since those models would (unsurprisingly) yield similar error intervals.

        Given this extraordinary difficulty in distinguishing different time series, you make me laugh when you assert that you can reject one just by looking at the time series. Who needs science and statistics when you can just squint at the data from a distance, and just know what is right and wrong without a single calculation.

        That may work for you Pekka, but for my part, I prefer to stick with the scientific method.

      • Given this extraordinary difficulty in distinguishing different time series, you make me laugh when you assert that you can reject one just by looking at the time series. Who needs science and statistics when you can just squint at the data from a distance, and just know what is right and wrong without a single calculation.

        Strong words. Care to back them up with some examples of your own data analysis applied to this particular problem, considering lots of data is publicly available?

      • Spence_UK,

        That may work for you Pekka, but for my part, I prefer to stick with the scientific method.

        I was very brief and that may easily lead to misunderstanding. What I really claim is that certain features in the data are known to influence the results of particular analyses so strongly that minor changes in them change the outcome of the analysis very much. At the same time it may be clear that those changes cannot be considered really significant for the conclusions. Combining these two observations, which can indeed be made by just looking at the data, one can conclude that the particular method of analysis cannot give significant results.

        Such considerations may be misleading, but at the minimum they tell us that the results of the suspect analysis cannot be given much weight without careful study of the points that make them suspect and specific proof of the claim that the method is valid. In this particular case, one should study how sensitive the results are to such changes in the empirical data as are thought to be consistent with the same general picture.

        When the dominant features in the data include oscillatory behavior with an apparent period of some 60 years, it’s clear that estimating Hurst exponents from about 100 years of data is meaningless. This statement is fully consistent with using the scientific method, as the scientific method requires that inappropriate forms of analysis not be used.

      • Pekka, there is a certain irony in your post that I think you are doomed never to understand.

        The power spectral density of a Hurst time series follows a 1/f-type power law.

        Which means large oscillations near the length of the time series are expected to be present in a series exhibiting the Hurst phenomenon.

        So the very thing you insist is something that shows it can’t be Hurst is exactly the thing you would expect from a Hurst series.

        If you wish to show something is wrong, you really need to understand it first.

      • FAO WebHubTelescope:

        In answer to your question a couple of comments back up. I was referring here to the problem of not knowing the “correct” model for natural variability and the difficulty in determining what the correct model might be. I think it is useful to demonstrate this from a theoretical perspective rather than a “real world” example.

        An autoregressive model can be set up such that the power spectral density will be (approximately) a flat line to the left of the “knee” of the time constant (lower frequencies), and a declining line at 3dB per octave to the right of the time constant (higher frequencies).

        You can also set up a Hurst time series with a decay rate of 3dB per octave. Now if we take a realisation of those models but select a time period sufficiently short that all measurable frequencies are to the right of the time constant (i.e. time series shorter than the time constant, excluding DC because we are dealing with anomalies), then the two spectra are indistinguishable. And given there is a one-to-one mapping from spectrum to time series, the time series will be indistinguishable as well.

        One thing you can do is falsify a short time constant quite easily, but it is impossible to separate the possibility of a longer time constant vs. Hurst. This then feeds into your (correct) observation elsewhere in these threads that longer time series need to be analysed to determine the correct model. (A rough numerical sketch of this indistinguishability follows this comment.)

        Where Pekka is wrong is that if we know a priori that Hurst is an appropriate model for the time series, then we can estimate the Hurst exponent from a relatively short stretch of data. If we know a priori that the time series is autoregressive, we need data much longer than the time constant in order to determine the time constant.

        The other problem is that if we don’t know which is correct*, and apply autoregressive estimators to a Hurst time series, we can be pretty much sure that we will get wrong answers out.

        * The case with real data – and of course this is not just restricted to Hurst or autoregressive; the “real” model could be something else again
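
        A rough, self-contained numerical sketch of that indistinguishability point. It is not the construction described above: the record length, the AR(1) time constant, and the use of a 1/f^2-type synthetic power-law series (chosen so that its slope roughly matches an AR(1) well above its knee) are all illustrative assumptions, and a crude periodogram slope stands in for a proper spectral estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, tau):
    """AR(1) series with a time constant of tau samples."""
    phi = np.exp(-1.0 / tau)
    x = np.zeros(n)
    e = rng.normal(size=n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def power_law_noise(n, beta):
    """Gaussian noise with power spectral density ~ 1/f**beta (spectral synthesis)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    phases = np.exp(2j * np.pi * rng.random(len(freqs)))
    return np.fft.irfft(amp * phases, n)

def periodogram_slope(x):
    """Least-squares slope of the log-log periodogram: a crude spectral exponent."""
    f = np.fft.rfftfreq(len(x))[1:]
    p = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
    return np.polyfit(np.log(f), np.log(p), 1)[0]

n = 128                           # a record far shorter than the AR time constant
a = ar1(n, tau=5000)              # autoregressive, knee far below the observable frequencies
b = power_law_noise(n, beta=2.0)  # power-law noise with a comparable slope over this band
print("AR(1) periodogram slope:    ", round(periodogram_slope(a), 2))
print("power-law periodogram slope:", round(periodogram_slope(b), 2))
```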

      • I understand your point, but it’s still not valid.

        It’s an interesting observation that many phenomena follow power laws over a wide range. At the same time it’s true that this is much more an empirical observation than something based on a general theory. Power laws are surprisingly flexible, and that helps make them so ubiquitous. In almost all cases, however, the power law is valid only over some limited range and fails on both sides. The Hurst exponent is just the exponent of such a power law, and it’s never known without empirical verification whether some time series corresponds to a Hurst exponent over a wide enough range.

        There’s just really too little evidence for a power law in the behavior of the temperature time series. As many people have said, the short-term variations are to a very large extent periodic, with periods of 24 hours and one year. No reasonable study of the variability can be done without elimination of these periodic phenomena, but the elimination is not a unique process and the way it’s done will influence the outcome essentially (one common convention is sketched after this comment). Therefore the shortest-term data of any significance are the annual averages. Having about 100 data points is hopelessly little for determining whether some power law is present. That’s even more so when we notice that there are strong oscillations present, which sometimes last less than 20 years and sometimes around 60 years.

        Separating phenomena of different periods from such limited data is very dependent on the method used. That’s immediately obvious when one tries to do it directly without any specific statistical methods. When the reason lies partly in the very limited amount of data, it’s also true that using any less direct, more sophisticated method is futile. The results may be almost anything and they have no significance. Statistical methods may be very valuable when there’s a lot of relevant data, but they are only misleading when used with too little data. This was the original point of Richard Tol, and it has been expressed in different ways in many other comments by people who have real-life experience with statistical methods. My claim is that the conclusion was evident from a very cursory look at the temperature time series.

        The sophisticated methods are valuable when used with care and when their applicability is checked and confirmed, but common sense is often much more valuable than the indiscriminate use of the most powerful statistical tools.
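
        As a concrete illustration of the deseasonalisation step mentioned above, here is one common convention – subtracting each calendar month’s long-term mean – in a short, self-contained sketch on synthetic data. It is only one of several possible conventions (others remove a smoothed or time-varying cycle), which is part of the point being made.

```python
import numpy as np

def monthly_anomalies(monthly_temps):
    """Remove the mean annual cycle by subtracting each calendar month's long-term mean.

    Assumes the series starts in January and covers a whole number of years.
    """
    x = np.asarray(monthly_temps, dtype=float).reshape(-1, 12)   # rows = years, columns = months
    return (x - x.mean(axis=0)).ravel()

# Tiny synthetic example: a sinusoidal annual cycle plus noise.
rng = np.random.default_rng(4)
t = np.arange(30 * 12)
series = 10.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0.0, 1.0, len(t))
anom = monthly_anomalies(series)
# Before removal the seasonal cycle dominates the variance; afterwards only the noise remains.
print(round(series.std(), 2), round(anom.std(), 2))
```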

      • One should, of course, test whether the temperature series really is fractionally integrated. LLE do not do that. However, there are 10 or so published papers that do test this hypothesis and cannot reject it.

      • Sorry, but this simply isn’t the case. The Huybers and Curry 2006 sediment reconstruction, dating back circa three million years, shows that the 1/f Hurst behaviour extends back five orders of magnitude further than you give it credit for.

        See slides 28-29 here for a climacogram extending over millions of years.

        Analysis of modern data (even down to the satellite measurements) yields Hurst exponents exactly in line with what we see over millions of years. This model is so good that just a couple of parameters achieve that.

        But as I said last time – you need to understand the topic before this makes any sense. I’m afraid your statements about 60-year cycles somehow showing that the Hurst phenomenon cannot be present show that your understanding of the topic needs further work before you can bring credible criticism to the table. It is like explaining multiplication to someone who has only ever seen addition and then having them turn around and say multiplication is impossible since there is no way you can get a number as high as 144 from 12 times 12.

      • Dr Tol – I absolutely agree. That is part of the problem I was alluding to in my reply at 4:36pm (although perhaps I am not always clear enough). It can be extremely difficult to tell the difference between (as an example) an autoregressive series with a long time constant and a Hurst time series.

        We cannot reject at this time a Hurst time series, but this doesn’t mean it is necessarily the right model.

      • I’m not referring to the 60-year period as evidence against the presence of a power law and the validity of some Hurst exponent. I’m only saying that it’s totally impossible to verify the presence of a power law from the data starting in 1900. As its presence cannot be verified, there’s also no possibility of knowing whether the value of the Hurst exponent obtained from that data would be correct, even if a power law is valid over some range.

        The paper is trying to do something that is very obviously impossible. It doesn’t give any justification for attempting to do such a thing. Finally, it draws strong conclusions that are not in the least supported by the analysis, and are actually such that they could not be supported even by much better data.

      • Spence_UK: We cannot reject at this time a Hurst time series, but this doesn’t mean it is necessarily the right model.

        I think that is true about a lot of climate modeling. Based on hypotheses developed and refined up til now, and ongoing data collection, I expect more definitive statements to be supported 2 decades from now.

        To pick just two examples familiar to all of us here, there is a clear difference in the predictions from Girma’s model and the GCM ensembles for the next 20 years. Right now Girma’s model is looking better than the GCMs, just barely.

        If Luedeke et al are correct, the covariation of temperature change and CO2 change will end, as it may have already. Two decades ought to be about enough time to find out, because the forcing from CO2 will continue to rise.

      • Pekka, your last reply makes little or no sense.

        Serially correlated natural variability exists. No one model has (so far) been shown to be conclusively correct. If we take your philosophy to its logical conclusion, we should use no model, since none have been shown to be correct. But that is a ludicrous proposition. So it is a question of rejecting inappropriate models, and as Dr Tol correctly notes, many people have attempted to quantitatively reject long-term persistence and failed to do so. (Please remember science is based on a failure to reject; e.g. we use relativity not because we know absolutely that it is a perfect law, but because numerous experiments have failed to reject it.)

        The Hurst model can be no more “verified” than any other model (e.g. autoregressive), so there is no reason on the basis you describe to choose one over the other.

        Furthermore, the climacogram linked above gives quite strong support to either (a) an autoregressive model with a time constant >> 10s of millions of years or (b) a Hurst model, both of which would yield near identical results on a 100-year scale anyway. On this basis alone, Hurst seems a pragmatic choice of model for serially correlated natural variability.

      • I’ll try to explain in somewhat different terms why I think that this analysis doesn’t tell us anything about the evidence for an AGW signal in the time series that cannot be seen just by looking at the data.

        On one side, an opposite outcome – i.e. a finding that the observed trend exceeds the limits of the likely trend based on the model – would exclude only a statistical model that is known to be nothing more than that. The result could not exclude some other type of natural variability. A cursory look at the data shows that it’s not difficult to invent models that would produce similar warming from variability when no constraints are set on the model.

        On the other side, one may claim that the level and type of variability that the model represents contradicts some valid constraints. This is really what mainstream climate scientists claim, if not in these words.

        Furthermore, we may look at the analysis and consider in better detail what kind of AGW trend mainstream climate science considers important. It is not linear over 50 years: the mainstream interpretation is that the first part of the 50-year period was flat due to the influence of aerosols and that the influence of CO2 was still rather weak around 1960. Thus the trend that should be considered is not linear but flat at the beginning and steepest at the end. That kind of trend appears as a fluctuation in the linearly detrended data. The time scale of this erroneously interpreted trend is similar to some of the natural fluctuations that are likely present in the data, and this falsely interpreted fluctuation influences the estimate of the Hurst exponent (a toy illustration of this effect follows this comment).

        The analysis tells us that the observed overall linear trend is within the limits easily produced by the fluctuations if the Hurst exponent is correct, but the result does not test the need for AGW, because the AGW signal itself would have influenced the estimated Hurst exponent.

        But all this comes back to my starting point: there is just too little data to support this kind of analysis, and using a method that’s not applicable is likely to produce spurious results. Because it’s likely to produce spurious results, every result that is obtained must be considered spurious until strong contrary evidence is given. Using methods that are inapplicable in such a way is not scientific without a comprehensive analysis of the significance of the results.
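
        As a toy illustration of the detrending point above (all numbers invented): a forced signal that is flat early and steepest late, removed with only a straight line, leaves a smooth residual that inflates the apparent long-range correlation of the series.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(100)                      # roughly a century of annual values
noise = rng.normal(0.0, 0.1, len(years))    # illustrative year-to-year scatter, deg C
signal = 0.8 * (years / years[-1]) ** 3     # illustrative flat-then-steep forced signal, deg C

def detrend_linear(x):
    t = np.arange(len(x))
    return x - np.polyval(np.polyfit(t, x, 1), t)

def autocorr(x, lag):
    x = x - x.mean()
    return np.sum(x[:-lag] * x[lag:]) / np.sum(x * x)

# Autocorrelation after linear detrending: noise alone vs. noise plus the curved signal.
for lag in (5, 10, 20):
    print(lag,
          round(autocorr(detrend_linear(noise), lag), 2),
          round(autocorr(detrend_linear(noise + signal), lag), 2))
```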

      • A cursory look at the data shows that it’s not difficult to invent models that would produce similar warming from variability when no constraints are set on the model.

        What a ridiculous comment – “when no constraints are set on the model”. Of course, if there are no constraints you can get anything. The Hurst phenomenon is not a “no-constraints” model; in fact, it is a by-product of very real constraints on the environment: entropy maximisation, non-negativity, and the lack of a single time constant in the climate system. Furthermore, it is additionally constrained by the Hurst exponent and the standard deviation appropriately estimated from the data. Furthermore, as Dr Tol explained to you earlier, many scientists have attempted to reject the presence of the Hurst phenomenon using statistical tests and have failed to do so. That’s about as good as it gets in science.

        However, I agree with you on one point: using a linear trend as a test is inappropriate, and the alternative tests developed by Rybski et al 2006 and Koutsoyiannis 2006 are more appropriate than the linear trend tests. So this point has been addressed in the wider literature, just not by this paper.

        Also, your suggestion that we cannot interpret natural variability from the data remains ludicrous. Natural variability exists, and we have to incorporate it into our statistical models; you can’t just ignore it, otherwise you find the last ten years disprove global warming (etc.). What you seem to fail to realise is that any model you put in place has absolutely no more justification or support than the Hurst model. Arguably, autoregressive models with short time constants are actively evidenced against by the ice core data.

        This isn’t a “Hurst or not” debate. It’s “Hurst or something else”. You have to show why “something else” is better than Hurst, and so far you have singularly failed to do so.

      • The real Earth system is constrained by all the laws of physics. The conservation of energy sets some constraints, which are certainly very important in determining what kind of warming is possible. How this constraint enters is not known in detail, but it’s real and important.

        I agree fully that there are important natural fluctuations, but finding out that the data are consistent with a stochastic model that sets very weak requirements for consistency over the period considered is not really useful, when the model doesn’t tell us much and when few other models can be excluded by this analysis. It doesn’t provide any physical understanding. Nor is it a counterargument against any other interpretation of the observations that’s based on a more extensive set of empirical data and theoretical understanding.

        We don’t know how much of the variability is natural. The fact that the variability is in agreement with a fixed value of the Hurst exponent over some range is no proof of the share of natural variability in the data. It might very well be that the natural variability alone would not follow the power law as well, or that it would have a different Hurst exponent. The limitations of the data make it totally impossible to say anything on these issues.

        The question of the strength of natural variability and autocorrelations over time scales of a few hundred years is interesting, but this analysis cannot tell us about that. It cannot really tell us much either about natural variability at time scales of several decades (e.g. 60 years). It gives a number when the variability is assumed natural, but it cannot tell us at all whether that assumption is justified. The outcome is largely controlled by the assumptions made.

        The way forward is in learning to understand more about the physical processes. Without that, every statistical analysis remains empty play with numbers.

      • Pekka, any alternative you choose to Hurst in the absence of better information is no better supported than Hurst; again, it is not a choice of “Hurst or nothing”, it is “Hurst or something else” and that something else needs to be justified.

        Also: as noted elsewhere, there is already a fully formed physical argument as to why we should prefer Hurst as a basis as described in this paper:

        http://itia.ntua.gr/en/docinfo/1102/

        Koutsoyiannis, D., “Hurst-Kolmogorov dynamics as a result of extremal entropy production”, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011

        It explains the differences in constraints that lead to different forms of serial correlation, and notes that the constraints governing climate lead to Hurst, not autoregressive correlation.

      • Also: as noted elsewhere, there is already a fully formed physical argument as to why we should prefer Hurst as a basis as described in this paper:
        http://itia.ntua.gr/en/docinfo/1102/
        Koutsoyiannis, D., “Hurst-Kolmogorov dynamics as a result of extremal entropy production”,

        Thanks, interesting stuff. It might not be that apparent to the casual reader, but I think Koutsoyiannis has a very pragmatic approach. That is an excellent idea to use the Vienna data. From the Ludecke paper, the data fits very well to a random walk, as the power law goes as 1/2, which you can see from the log-log scale.
        Unfortunately, they left the data on an arbitrary unit scale so I can’t tell the diffusion coefficient. I will have to get the Vienna data myself to calibrate.
        I am looking on http://climexp.knmi.nl/ but can’t find the exact time series to use.

      • Incidentally, I thought I would note that Cohn and Lins 2005 (a paper I quote often!) gives a p-value of around 7% for the trend in HadCRUT up to when their paper was published based on DFA, Hurst+ARMA – pretty impressively close to your 3%, considering the different approach they used (they estimate the variability from the instrumental record).

        Good to see two very different approaches give similar answers.

      • Incidentally, I thought I would note that Cohn and Lins 2005 (a paper I quote often!) gives a p-value of around 7% for the trend in HadCRUT up to when their paper was published based on DFA, Hurst+ARMA – pretty impressively close to your 3%, considering the different approach they used (they estimate the variability from the instrumental record).

        Eyeballing the Greenland data that came out, I get a sense that my 3% number is on the low side for a localized projection.

      • Pekka Pirilä November 11, 2011 at 12:25 pm (also for Fred)

        I fear that it is the nature of the thing that multiscale modeling incorporating stochastic modeling will be the future of climate research. Just as modeling reactions at molecular scales requires bottom-up and top-down analysis, some of it stochastic, so does modeling the climate.

        You can see this already with the diminishing returns coming from the grand (largely deterministic) enterprise of GCMs.

      • I fear that it is the nature of the thing that multiscale modeling incorporating stochastic modeling will be the future of climate research. Just as modeling reactions at molecular scales requires bottom-up and top-down analysis, some of it stochastic, so does modeling the climate.

        You can see this already with the diminishing returns coming from the grand (largely deterministic) enterprise of GCMs.

        Should we fear this change or embrace it?
        Many physicists would suggest that this is a coarse graining approach, which is analogous to the premise of applying statistical mechanics to modeling macro behaviors.

        I purchased the book by Mumford called “Pattern Theory: The Stochastic Analysis of Real-World Signals” a few weeks ago and am applying lots of his ideas to problem solving and modeling. It’s amazing how simple many of them are; I have been embracing these kinds of ideas for a few years now.

      • Don’t forget that a plain old random walk follows a power law, and that appears in many different domains. This will also give the huge excursions that get larger with elapsed time. The fundamental question we are trying to address is whether the excursions can get large in the shorter time intervals as well.

        Instead of a random walk/Brownian motion, can we model this better as an Ornstein-Uhlenbeck process? This can give the random walk/red noise characteristics like we are seeing from the DFA log-log plots, but it also has the behavior of reverting to the mean.

        Thanks for the Koutsoyiannis slide presentation. This is starting to get interesting.

        The latest Greenland data that someone else mentioned in this thread is very interesting to extract power-law behavior from. It has the appropriate time scales of several thousand years.

      • Web, that sounds more like the plan to me. One thing though, if you compare a southern temperature based random walk or process to a northern hemisphere random process you should find possible larger excursions with pseudo-synchronization. Spell check says that’s a word :). Global series tend to mask response IMHO.

      • Web, that sounds more like the plan to me.

        I am in the middle of setting up the Monte Carlo-based Ornstein-Uhlenbeck process that I talked about. Very simple, in that the random walker develops a “drag” as it approaches what could be considered a reflecting boundary, either at high temperatures or at low temperatures during glaciations (I keep it symmetric for now).

        All I can say is, wow, to the preliminary plots I have for qualitatively matching the simulated DFA to the Vostok data. You get the random walk sqrt power-law at the short times, and it bends over as it approaches the upper or lower limit. Watch this space, as I am off work today and I can make some progress.
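
        For anyone who wants to play along, a minimal sketch of that kind of Ornstein-Uhlenbeck walker (the drag, noise and length below are arbitrary choices, not calibrated to Vostok):

        ```python
        # Sketch only: a random walker with a linear "drag" back toward the mean
        # (an Ornstein-Uhlenbeck process). Excursions grow roughly as sqrt(lag)
        # at short times and flatten out at long times; parameters are arbitrary.
        import numpy as np

        rng = np.random.default_rng(0)
        theta, sigma, n = 0.01, 1.0, 100_000       # made-up drag, noise, length
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = x[t - 1] - theta * x[t - 1] + sigma * rng.standard_normal()

        # root-mean-square excursion versus lag (a crude stand-in for a DFA curve)
        for lag in (1, 10, 100, 1000, 10000):
            rms = np.sqrt(np.mean((x[lag:] - x[:-lag]) ** 2))
            print(lag, rms)   # sqrt growth that bends over near sigma/sqrt(theta)
        ```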

    • Dikran Marsupial

      I certainly agree that blog science IS possible; however, it is rather less likely to happen on an issue as contentious as climate change – the rhetoric/politics gets in the way. If the discussion can be kept to the science then it can be done; it is just not necessarily a good way to do it.

  12. The big problem is that too many people are putting faith in fancy statistical methods to solve the twin obstacles of inadequate data and inadequate theory. Well, they just can’t do that!

    Hence Tol’s criticism is valid in general, if not in the specific. But…similar criticisms can be made about all papers that attempt to demonstrate that the temperature rise since 1950 is predominantly manmade. I assume that Dr Tol has not previously criticised such papers, which hang on even shakier pegs than the ones he prefers to criticise here, because if he had done that he would have been quite simply disinvited from the IPCC process. He knows it and so does Judith.

    I remain a supporter of Dr Tol for stating the obvious about Stern’s bunkum when other economists remained silent and for doggedly pointing out the obvious but apparently politically incorrect fact that moderate warming, whether manmade or not, would actually be beneficial. However, he, like many others, would have remained in academic obscurity without this manmade global warming prop.

    Just as with the economic theory failures of recent times it is easy to see a rising trend, fool yourself that you know why it is rising and predict it will keep rising. What we’d like to see is some self-reflection when it is obvious these predictions have failed and perhaps also a less simplistic approach to complex, non-linear systems than the extrapolation of a 100 year rise of 0.5 degrees to 3 degrees the next 100 years based on no more than hubris, tuned, unvalidated models and fake temperature reconstructions. An understanding of natural cycles would be a good first step. Without that understanding, everything is speculation.

    • There are too few hours in the day to debunk all the nonsense out there. My career is not advanced by criticizing others.
      In this case, Barry Woods forced me to back up a loose statement. If I had known about the 1900 comments, I would have kept shut.

      • Sorry to disagree slightly but in the current state of affairs the following is more true:

        Your career is perfectly safe while you criticise those who say global warming is not a problem.

        Your career is enhanced when you criticise those who say global warming is a problem.

        Your career will fall off the rails should you violently disagree with the consensus.

        Hence this is a safe criticism for you! We all know this! You won’t lose funding, nor be smeared like Lindzen above, nor be disinvited to conferences. Keeping silent is ok if it is done in a bipartisan manner and with regard for truth rather than career. Unfortunately there is currently too much silence of the lambs – as Steve McI describes it.

        As I said, I remain a supporter of your attempts to chip away at some of the nonsense economics from the inside. It is a shame it has to be done so subtly. Good luck with AR5, as judging by the first draft you are standing alone with mere facts to counter numerous contradictory model projections :)

      • Perhaps the takeaway is that there is no statistical method of time series analysis that guarantees confidence levels adequate to predict the future.

        If there were, all economists would be multibillionaires and climate would be easily predictable.

      • My own takeaways are:
        1. The few economists who dared oppose the consensus were proven correct come the crash.
        2. The crash came about by a widespread reliance on computer models with poor assumptions and not enough common sense being used.
        3. All scares of the recent past that were rooted in our modern lifestyles proved to be false – and there were very many.
        4. Witch hunts are still common. Though they no longer burn anyone, they can still do fatal damage to a career.
        5. As Bertrand Russell said, the real problem is that clever people are so full of doubts while fools are always so certain.

      • There’s a reason why #1 has to be true. Investing is a zero-sum game. Running with the crowd can’t get you rich in the long run (though it can in the short run). On average, you can only earn in the long run what productivity increases are.

        Does climate have zero-sum characteristics? Conservation of energy says so, but as with economics, there are other things going on that rock the boat, making the conservation hard to see and not as simple or as dominant as one would think.

      • There’s a reason why #1 has to be true. Investing is a zero-sum game.

        Absolutely and a zero-sum game falls under the category of game theory. Game theory involves a human psychological component that makes shrink-wrapped economic predictions virtually useless.

        So don’t lump climate science in with economic predictions, as nature does not have a brain and is much more predictable when this is taken into account.

      • The “zero sum” part is the non-negotiable part. The “game” part is an insignificant detail.

      • The zero-sum is a constraint when phrased as conservation of energy or conservation of momentum, or when dealing with probabilities that have to sum to unity. These constraints actually make certain problems easier to solve.

        Modeling the “game” is the challenging part because every economic game is quickly taken advantage of and modified by the participants, which then has an effect on the outcome.

      • “if I had known about the 1900 comments, I would have kept shut.”

        Dear Dr Tol,
        The laugh I just got from reading this would make me forgive your first faux pas…

        Not as funny but truer…”If I had been less passionate and sweeping and more precise in my statement I would have avoided the concomitant comments”. Even better would have been “Sorry”.

        Let me indulge in a little psychology: I hope that Dr Curry and others would also recognise that the same thing that makes you call a spade a spade despite pressure to toe the line, the same thing that drives you to criticise the second rate or dishonest, is the same impulse that makes you occasionally cross the line.

        For what it’s worth, you have one more fan in your club.

      • Finally we have an admission on what you euphemistically call a “loose statement”. It was loose, and also dumb and insulting. You have failed to back it up. Why don’t you apologize and move on?

      • I continue to think that Judith deliberately spread information that she should have known is false.
        I regret writing that — I should have kept it to myself.

      • For some reason that seems to be a common thought, though I highly doubt there is any truth to it. She seems to be more into dispelling misconceptions and promoting honest debate on the real nature of uncertainty.

        These papers are very enlightening for many on the statistical pitfalls one can encounter in analyzing time series applications for a very complex system.

        I will admit, her Italian flag waving can be a bit distracting, but dealing with uncertainty seems to be much too much a personal choice, not a scientific standard, in climate science.

      • You did not attempt to provide some reason to believe the claim. You did not attempt to explicate the claim. You did not respond to criticisms of the claim.

        So far, the one thing we have learned about your beliefs about Dr. Curry is that you have them. Anyone who is at least a somewhat regular reader of this blog, excluding the most self absorbed trolls, will testify that what you said is false. I would like to offer once again my defense of Dr. Curry. It is followed by my criticism of your remarks about her. You managed to ignore them over the last two days. Please try to respond to them.

        Defense of Dr. Curry:

        “Many people do not understand Dr. Curry’s purposes in creating and managing this blog. I do not understand all of them and I am quite happy to have them revealed in Dr. Curry’s good time. I do understand some of her purposes and some that she considered but rejected.

        Among the purposes that Dr. Curry rejected is the purpose of serving as an advocate for a particular point of view. Dr. Curry scrupulously avoids being an advocate for the IPCC and CAGW. She scrupulously avoids being an advocate for scepticism about climate science. She is quite willing to tell either side about its errors, usually of excess, and does so often on this blog. Dr. Curry has always made it clear that she does not serve as the arbiter of truth about climate science.

        Dr. Curry’s main purposes in creating this blog include creating an exchange for ideas about climate science and ideas about communication among climate scientists and other interested parties including the general public. In addition, this blog serves as a laboratory in which Dr. Curry can observe various communication processes taking place. She has posted many articles whose theses she does not endorse but whose subject matter challenge us to think more clearly and more completely about a particular topic.

        Dr. Curry has always pursued a lively commitment to critical thinking. She is a pioneer in showing how science communication can be vastly improved through an online community dedicated to rational discourse about a topic such as climate science. In pursuing this goal, she occasionally posts articles not because their content is necessarily true but because they nudge the discussion toward a clearer and more complete understanding of a topic. Some of the articles that she posts are offered more as class exercises than as steps forward in science.

        Anyone who claims that Dr. Curry engages in disinformation has failed to understand Dr. Curry’s purposes in this blog. They have substituted their own purposes for hers. No doubt many people wish that Dr. Curry would be an advocate for the IPCC and CAGW. She will not. Many wish that Dr. Curry would use this blog to create a definitive statement of the truths of CAGW. She will not. Dr. Curry is a student of communications about climate science who upholds high standards of rationality and critical thinking as she leads her online community through a valuable education about today’s debates on climate science. Her example is inspiring. Unfortunately for the rest of us, her combination of talents seems to be unique.”

        Criticism of Tol’s remarks about Dr. Curry:

        “Tol’s most outrageous claims, in my opinion, are that Dr. Curry speaks authoritatively through her blog and that she made some kind of moral error by “lending” her authority to Lüdecke, Link, and Ewert when she posted their work. In response to my criticisms of the first claim, Tol wrote that “I indeed think that Judith Curry is an authority. I also think that others think the same.” Clearly, Tol does not believe that there is a need to explain why he believes that Dr. Curry acts as an authority through her blog. Apparently, he believes that the fact that he says she is makes it true. We do a disservice to scientists and to science if we permit scientists to remain wholly inarticulate on moral questions. We do a double disservice if we permit them to insist that discussion of science must be the only focus of blogs such as Dr. Curry’s and that any means may be employed by scientists seeking to post so long as they believe that they further the discussion of science.

        One purpose of Dr. Curry’s blog is to learn about communications between climate scientists and the general public. I believe that Tol’s example establishes that some scientists believe that they are permitted to dismiss any questions from the public that do not bear in the most direct way on their scientific claims, that they are morally justified in this act of dismissal, and that they have no duty to explain why they are morally justified. I cannot imagine a better definition of unbridled arrogance. I believe that we have learned that some scientists cannot avoid arrogance in their communications with the public. Needless to say, such arrogance does not bode well for climate scientists who believe that their main problem has been a failure to get their message to the public.”

      • Very few human actions result from malice. What we often forget is that almost everyone acts from noble intentions. This includes saints and sinners.

      • Brandon Shollenberger

        ferd berple, you say:

        Very few human actions result from malice. What we often forget is that almost everyone acts from noble intentions. This includes saints and sinners.

        I’m not sure about “noble” intentions being that common (I think they’re usually more apathetic than noble), but I agree with you wholeheartedly. This is an important point, especially since Richard S.J. Tol is not currently claiming ill intent. In this sense, what he says is in line with your comment.

        Unfortunately, his initial remarks said Judith Curry was spreading disinformation, a claim which necessarily requires ill intent. This is not what he is saying now even though he attempts to portray it as such. Perhaps he doesn’t realize the distinction between misinformation and disinformation, but people would not have been so bothered by his comment if he hadn’t specifically (and perhaps unintentionally) claimed Curry was being deceptive.

      • The Road to Hell is paved with good intentions, and so is the Road to Heaven.
        ===========

      • Would you have time to critique the Santer et al article that settled on the 17-year period as minimally necessary to assess a hypothesized change in the climate trend? Maybe you did and I missed it.

      • MattStat

        I’d love to see the Santer method applied to BEST on its first and last half centuries and the century between, by someone more competent than I.

        I suspect — if valid — the method would produce distinct and progressively shorter periods.

  13. I think that this set of statements sums up what people should take from this ‘analysis’.

    ‘Principally, our papers are neither predominantly interested in nor can give any answers about the forcings. Regarding Figure 10 on page 17 of LL, only the UHI and elevation warming can be clearly identified. Without doubt, there is a leftover of many possible further forcings that could have caused the 20th century warming – even so the anthropogenic CO2 that is haunting so many people. However, our applied method cannot distinguish between these forcings. Our main task was to show that the 20th century warming was predominantly a natural fluctuation. This answer was given by an elaborated statistical method, which of course like all statistical analyses have its limits.’

    So there you have it. They claim they are not trying to diagnose the forcing, but then claim that the warming was not externally forced by human emitted greenhouse gases. That or they are using a very funny definition of ‘natural’.

    More than that, again, they propose no physical model to explain the supposed ‘natural fluctuation’ that caused the warming parsed out from the statistical methods they used.

    So with no physical mechanism to explain the warming in any meaningful way and a seeming contradiction in the interpretation of their results, I think there is more than enough reason not to delve any further into this work.

    • Brandon Shollenberger

      maxwell, you seem to have misunderstood the authors. You say:

      More than that, again, they propose no physical model to explain the supposed ‘natural fluctuation’ that caused the warming parsed out from the statistical methods they used.

      There is no reason one would need to propose a physical model to explain the sort of fluctuation the authors discuss. Nobody disputes the fact temperatures are autocorrelated. The authors simply claim this autocorrelation, along with the natural variability one would expect in a noisy data series, is enough to give rise to the sort of trend we see in temperatures. If they are right about this claim, the reason for the autocorrelation isn’t important for their position.

      As an aside, the idea that temperatures are autocorrelated is not controversial, and we do understand (at least partially) why they are. You are basically criticizing the authors for not giving an explanation for what is commonly known in climate science.

      • And if nobody can define what are the bounds of natural variability, who can prove that they are wrong? Maybe telescope with the vostok ice core data, a couple of trees, upside down sediments?

      • as I’ve said repeatedly, it’s the responsibility of those who claim “natural variation” as an explanation to define their null with numbers.

      • Why?
        They don’t want you or me to spend any money.

      • AC, we are discussing logic and inference and hypothesis testing.

        when you master it, join us.

        As for your argument one could easily say that continuing to put GHGs into the atmosphere is a likely danger, so prove it safe.

        Neither that argument nor yours is on point, but you and others continue to say stupid things. I can’t stop your stupidity but I can point it out.

      • No, it’s the responsibility of those who are ruling out natural variation to at least define the bounds of natural variation. I will play your game though and claim that the global mean temperature has varied by as much as 2C per century at least 4,566 times in the last 8 million years. Prove me wrong and I promise that I will believe that the recent warming, whatever it may be, is unprecedented and is caused by man-made CO2 emissions.

      • Null with numbers? Solar change 0.1%, conductivity change with temperature 0.1%, cloud cover change 0.1%, ocean-air thermal boundary change with acidification 0.05%, doubling of CO2 change ~1%. Small changes compared to small changes are no longer small changes. That should be the definition of non-ergodic systems.

      • as I’ve said repeatedly, it’s the responsibility of those who claim “natural variation” as an explanation to define their null with numbers.

        Ludecke et al did that. Don’t you agree?

        But why only those who claim natural variation? Why not those who promote AGW?

      • AC, we are discussing logic and inference and hypothesis testing.
        when you master it, join us.
        I do not need to master it to call on you to define the “logic” of your biased statement.
        So my question is as “on topic” as your statement was.

      • Brandon – While we can use statistical models to describe a temperature time series and even show that it could be part of a longer term natural pattern, temperatures and temperature changes are the result of the interaction of various physical processes. To be able to attribute a temperature trend to either natural fluctuations or anthropogenic causes, you need to be able to identify the physical causes for the temperature change. It really is that simple. Even if Ludecke et al.’s statistical analysis is 100% correct, the strongest conclusion they can make is that you can rule out natural causes which is different than concluding that it is part of a natural fluctuation.

        Relative to the autocorrelation of temperatures, I agree that at a single specific location or area, there are good reasons for concluding that temperatures are autocorrelated. I am not sure that there are as good reasons for assuming that estimates of some global-averaged temperature should necessarily be autocorrelated.

      • Brandon Shollenberger

        Even if Ludecke et al.’s statistical analysis is 100% correct, the strongest conclusion they can make is that you can rule out natural causes which is different than concluding that it is part of a natural fluctuation.

        I’m assuming the word I made bold should have been “cannot.” Typos are easy to make, and this sentence doesn’t seem to make sense as it is. If I’m missing something though, feel free to correct me.

        As for what you say, I agree with your position, but not with your depiction of the authors’ work. Assuming, for the sake of argument, the authors’ calculations are absolutely correct (and meaningful), all we know is there is a high probability we could “see” the observed warming without anthropogenic influence. As you say, this doesn’t rule out the possibility of a meaningful anthropogenic influence. The authors actually agree with this point. The problem is they use a definition of “natural” which can lead to confusion. This is how they define it:

        ‘Natural’ means that we do not have within a defined confidence interval a definitely positive anthropogenic contribution and, therefore, only a marginal anthropogenic contribution can not be excluded.

        By this definition, when they say the observed warming is a “natural fluctuation,” they’re actually saying the same thing you are. This definition is just not one people normally use. If you know their intended meaning, their comments are clear, but if not, you will most likely be misled. I think it’s fine for them to use this definition within their papers where they make it clear, but I think they are (unintentionally) misleading people when they say things in their guest post like:

        Our main task was to show that the 20th century warming was predominantly a natural fluctuation.

        Most people who read this post will not have read their papers, and consequently, they will not be aware of the specific meaning of “natural” the authors are using. Because of this, they will be misled. Of course, the authors are discussing their work, so they (somewhat reasonably) assume people will be familiar with the definitions they use within that work.

        In the end, I think this issue mostly comes down to one of definitions. The authors used a somewhat peculiar definition, and this has created confusion. Nobody is “wrong.” They’re just misunderstanding each other.

      • Brandon Shollenberger

        I wish I knew why I have so much trouble with blockquotes. Sorry about that. I’d repost it, but I think it’s still fairly legible.

      • Brandon,

        I’m not a statistician. I’m a physicist. That means I care about physics, not statistics.

        So it’s great that they get autocorrelated temperatures, I guess. But they are using statistics to make a physical argument without a physical model. To me, a physicist, that makes no sense.

        It’s really that simple.

        More than that, when simple physics is applied to their conclusions, natural fluctuations would have to raise the level of heat in the oceans and the surface by enormous amounts. Amounts for which there is no known physical mechanism.

        So again, as BobN has already pointed out, correct mathematics does not equal correct physics.

        It is just that simple

      • Dikran Marsupial

        I’m (essentially) a statistician; I am also more impressed by a physical model than by a purely statistical one.

      • To BobN and maxwell

        A physical basis for serially correlated time series in climate trends, as well as a method for determining whether autoregressive models or Hurst models are more appropriate, can be found here:

        http://itia.ntua.gr/en/docinfo/1102/ (click through to preprint)

        Reference: Koutsoyiannis, D., “Hurst-Kolmogorov dynamics as a result of extremal entropy production”, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011.

      • D’oh, typo, delete “climate trends”, replace with “climate data” or somesuch :)

  14. It seems to me that Tol cannot abide a deterministic mathematical method that leads to but a single outcome, and all the while wishes to entertain the notion that CO2 has mystical properties not observed in nature that inexorably lead to runaway global warming. And of course we are left to imagine for ourselves that rivers will run red unless more power is channeled to the liberal fascist model of an ever bigger and controlling secular, socialist state comprised of government bureaucrats and toady supplicants.

    • You do cheer me up on a wet and windy day in Wales.

      I especially loved this – “rivers will run red unless more power is channeled to the liberal fascist model of an ever bigger and controlling secular, socialist state comprised of government bureaucrats and toady supplicants” made me smile

      No doubt both your and my comment will shortly be removed – not technical – but I just wanted to say that you’ve brightened my day. Monty Python at its best (Life of Brian and the Peoples Liberation Front or was it the Liberation of Front People?)

  15. Autocorrelation of temperature records provokes that the autocorrelation function fades with a power law. It is self evident by basic maths that this law, and as a consequence, the exponent of the power law is not effected by the time unit, be it day, month or year

    These guys don’t think like scientists at all. Monthly data corresponds to seasonal variations, especially at the local scale where they seem to get some of their data from (i.e. the various European cities data sets). Obviously seasonal variations show a periodic component which is just a nuisance for estimating longer term trends. What is so hard to understand about that — put a box filter at 12 month intervals, the seasonal fluctuation is removed, and you can go on with an analysis.

    Consider an alternate argument: They are assuming that the underlying stochastic process is some sort of random walk, be it ordinary or fractional. Why else would they bring the Hurst exponent into the discussion? The seasonal variation is highly deterministic in its periodic component. Remove the periodic and the non-stationary trends if you want to see the stochastic behavior. What am I missing that goes against first order principles needed to understand the underlying stochastic process?
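
    For concreteness, a minimal sketch of what the deseasonalizing amounts to (Python; `temps` stands for a hypothetical array of monthly means, and this is not the authors’ code), either by subtracting each calendar month’s long-term mean or by a 12-month box filter:

    ```python
    # Sketch only: two standard ways to strip the seasonal cycle from a
    # monthly series before any DFA-type analysis.
    import numpy as np

    def remove_monthly_climatology(temps):
        """Subtract the long-term mean of each calendar month (Jan, Feb, ...)."""
        t = np.asarray(temps, dtype=float)
        out = t.copy()
        for m in range(12):
            out[m::12] -= t[m::12].mean()
        return out

    def boxcar_12(temps):
        """12-month running mean; the periodic seasonal component averages out."""
        return np.convolve(np.asarray(temps, dtype=float), np.ones(12) / 12.0, mode="valid")
    ```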

    • –>What am I missing that goes against first order principles needed to understand the underlying stochastic process?

      That the mathematics of reductionism can be used to adequately quantify an intensive variable.

  16. My take is that the Ludecke papers are more about detection than attribution, in the parlance of the IPCC. For the distinction, see this previous post:
    http://judithcurry.com/2010/10/17/overconfidence-in-ipccs-detection-and-attribution-part-i/

    • The D in DFA presumes that there is a trend. So it’s not detection.

      • Just how much of reality should we be willing to sacrifice to find a trend? Indulging in the mathematics of reductionism by using ever smaller boxes to represent ever larger areas and then using ever longer time intervals—e.g., annual instead of monthly data—GCM model-makers systematically eliminate the quality of entropy and impose cardinality to take curiosity and coincidence out of the chaotic features of the physical world.

    • Brandon Shollenberger

      Judith Curry, I think this is a pretty obvious point. The papers have parts which deal with attribution, but by and large, they are about detection. This is made clear by what the authors say in this guest post:

      Principally, our papers are neither predominantly interested in nor can give any answers about the forcings.

      • And made very much less clear at the end of that same paragraph, when the authors say:

        “Our main task was to show that the 20th century warming was predominantly a natural fluctuation.”

      • Perhaps Brandon, like me, is still trying to figure that one out.

        It sounds like the authors were saying we didn’t do it, but our main task was to do it. That can’t be right. What do the authors mean?

      • No, “predominantly” still leaves up to 49%. All that statement does is shave the extreme scenarios off.

      • Brandon Shollenberger

        MMM, it is only less clear if you do not pay close attention to what the authors define as natural. Their definition is somewhat peculiar, and overlooking it can cause confusion. From their abstract:

        ‘Natural’ means that we do not have within a defined confidence interval a definitely positive anthropogenic contribution and, therefore, only a marginal anthropogenic contribution can not be excluded.

        As you can see from this, when they say something is “a natural fluctuation,” they don’t mean to say they know it wasn’t caused by anthropogenic influence, merely that they cannot confirm it was caused by an anthropogenic influence.

        This isn’t a definition which normally gets used, but within the context of their papers, it does make sense. Unfortunately, this guest post will be read by many people who have not read the papers, and those people are likely to be misled. Of course, the authors likely wrote this post with the thought they were discussing their papers, so people reading it would have read their papers. This would make them expect people to understand their intended meaning. It’s a reasonable expectation, even if it isn’t right.

        As far as I can tell, this is just a case of people misunderstanding each other for understandable reasons.

      • Agreed. This is my take on the situation as well. A lot of comments are being generated by people superimposing THEIR expectations of analysis goals onto the paper.

        Now I’m still (personally) unclear as to whether the particular DFA is appropriate, but that’s due to specific gaps in my knowledge which I’m looking into as we speak. However, I think, from my current understanding, that their analysis is interesting and, if nothing else, that it shows that the determination of a natural ‘trend’ is possible (though note that this DOES include the anthropogenic side too).

        Finally, I think those suggesting confusion over the aims of the paper really need to have a re-think. It is perfectly acceptable to state the aim of a paper as a hypothesis test. Doubly so if the paper then supports said hypothesis.

        I just don’t understand the hoo-ha over this point.

  17. With all due respect for Tol, my job is to ignore and “refudiate” his insistence that certain papers not be published/shared/discussed, and to think for myself. Dr. Tol – when did it become fashionable to think for someone else? Stop it NOW!

    I appreciate your posting all sides of this, Dr. Curry.

  18. “So there you have it. They claim they are not trying to diagnose the forcing, but then claim that the warming was not externally forced by human emitted greenhouse gases. That or they are using a very funny definition of ‘natural’.”

    I too was puzzled by what seems to be such an obvious contradiction. Perhaps Ludecke et al. didn’t mean it the way it sounds.

  19. Adding more data points will add information, even if the time period remains the same.

    For example, I could trend the yearly averages. Or I could build 12 trends, comparing all Januaries, all Februaries, all Marches, etc.

    These 12 monthly trends are likely (at a guess) to show more variance than the single averaged annual trend, which tells me there is more information to be gained about centennial variability using 1200 monthly samples than 100 yearly averages.

    In any case, in time series analysis, it is a big no-no to use averaged data to compute a trend when the data that made up the averages is available. It makes no sense to me to argue for using the annual averages; they are more likely to give a misleading result as compared to the monthly data.
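
    As a sketch of this point (Python; `monthly_temps.txt` stands for a hypothetical file of monthly means, not any particular record), the same linear trend can be fitted to the monthly values directly rather than to annual averages:

    ```python
    # Sketch only: fit the same OLS trend to monthly values and to annual means.
    # The monthly fit uses twelve times as many points; how much *independent*
    # information that adds is exactly what is disputed above.
    import numpy as np

    def trend_per_year(values, samples_per_year):
        t = np.arange(len(values)) / samples_per_year
        slope, _ = np.polyfit(t, values, 1)
        return slope

    monthly = np.loadtxt("monthly_temps.txt")                     # hypothetical input
    annual = monthly[: len(monthly) // 12 * 12].reshape(-1, 12).mean(axis=1)
    print(trend_per_year(monthly, 12), trend_per_year(annual, 1))
    ```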

  20. Discussing these papers (particularly LL, which addressed twentieth century warming) has been an interesting experience. I have learned almost nothing from the latter paper itself, whose conclusions can be shown to be untenable on the basis of basic physical principles regardless of statistics, but the statistical discussions have been informative. Nevertheless, based on what I interpret the authors to be claiming, I have the impression that they would have failed a basic high school course in statistics.

    Their main conclusion is that most of the observed warming is probably “natural”, as opposed to a phenomenon caused by an “external trend”. (Later, they narrow the external trend definition to anthropogenic warming, but there is nothing in the paper to distinguish between long term anthropogenic forcing and a similar forcing from solar changes.) In any case, for them, such a trend is “not natural” (“notnat”), and their probability P that a trend is “natural” is Pnat = (1 – Pnotnat ). They state for the groups of series they evaluate that “the probabilities that the observed temperature series are natural have values roughly between 40% and 90%”.

    How did they arrive at that conclusion? Based on the calculated Hurst exponents and the observed temperature change, they computed the probabilities that a series with only “natural” fluctuations would yield warming during 50 or 100 years consistent with the observed warming. They then said, in essence, “Well, if a ‘natural’ series has a 60% probability (for example) of giving that warming, then it is 60% probable that the warming was natural.” (See for example their Table 2). If that was not their intended meaning, it’s certainly not clear what they did intend.

    It doesn’t matter, though, because to my knowledge, there is no way we can translate the probability that a “natural” phenomenon will give a result into a probability that the result was natural. This is one of the most basic principles of statistical tests of significance involving a null hypothesis. We might show, for example, that in the absence of a particular effect we are evaluating, we could exceed a specified criterion less than 1% of the time (P less than 0.01), but that does NOT mean that there is less than a 1% probability that there was no real effect (i.e., more than 99% probability the effect was real).

    In this case, the null hypothesis is “no external trend” or “no external anthropogenic trend”. The authors have concluded that the null hypothesis can generate their data with substantial probability. What is the probability, therefore, the observed warming was “natural”? We simply don’t know from those data. To compute that probability, we would need to know the prior probabilities for both the null hypothesis (“natural”) and the external trend hypothesis, as well as the likelihood that the latter could also yield the observed results.

    This seems like such a fundamental error on the part of the authors that it becomes hard to take the more sophisticated statistical treatments seriously, but perhaps I’ve overlooked something that contradicts that conclusion. If so, I would like to know how the probabilities that natural fluctuations can yield the observed warming can tell us the probability that the warming was natural.

    • I must completely disagree with you. I think you are grossly oversimplifying how statistics works, if not outright misunderstanding some aspects of it (at least in your writing in this one post, not necessarily you yourself). For instance, if I say a result has a certain P value of 0.01, what that means is that the likelihood that my result is related to the null result is only 1%. Or put another way, that if I had sampled all values from the distribution of the null, I only have a 1% chance of pulling out the results I got for the number of samples.

      Think of it more like overlapping bell shaped curves (yes, that’s normal distribution, and many things in statistics are not normally distributed, such as these temperature trends, but bear with me for sake of example) of population values, and probabilities are the chance any one value comes from any one curve rather than another.

      You are right when you say we don’t necessarily have the prior distribution (not probabilities, as you wrote. Again: probability is the chance of being in a certain distribution given the value) for natural trends versus external trends, since we have just one Earth. But, that’s what their analysis was attempting to get at, from what I understand. Trying to build a “population range of trends” based on previous data points which we can consider most “natural”, to construct a testable distribution from which to get a probability of being in said distribution for the current trends in temperature we are experiencing. Again, that’s my understanding of their DFA/HURST stuff.

      From that, you can indeed calculate the probability of being within the “natural” trend population. There is nothing impossible about that in theory, at all. And when you say “If so, I would like to know how the probabilities that natural fluctuations can yield the observed warming can tell us the probability that the warming was natural,” I think you didn’t realize that the probabilities that natural fluctuations yield the observed trends is indeed the exact same thing as saying that the warming was natural by this probable amount–exact same thing, just reworded.

      Now, in -technical- terms, the actual execution of such an analysis based on the data, and reliability, of what we do have, is a -completely- different matter from the theory of being able to do it in the first place. And that is what’s under discussion: if their methods of analysis are valid, or if they create spurious ‘distribution’ results (in a crude sense) from which spurious probabilities are calculated.

      In all, these objections of yours are completely off base from what I see, as it is more than possible to find the probabilities that a trend we see is within the population of natural trends previously. Notice, I’m ignoring all caveats such as the trend range changing based on what range the temperature is within, and so much more. Actual physics may not apply when doing sheer statistical analysis, but are necessary to understand the ‘why’ later down the road, and these papers can’t address that either.

      • think you didn’t realize that the probabilities that natural fluctuations yield the observed trends is indeed the exact same thing as saying that the warming was natural by this probable amount–exact same thing, just reworded.

        Ged – I don’t want to be too blunt, but if you believe that, your understanding of probability and statistics is terribly inaccurate. That is simply not true. To translate the probability that a “natural” series will give a particular outcome into the probability that the outcome was “natural” is impossible without knowing the prior probabilities. This is elementary stuff, and by itself, seriously undermines any other statistical inferences the authors drew.

        However, if you don’t believe me, try it with a range of hypothetical prior probabilities for both the null hypothesis and the external trend hypothesis, along with a test set of likelihoods that an external trend would yield the observed results. I expect that with choices that are not too unrealistic (based on climate data outside of this paper), you could achieve a probability that the result was natural to be less than one percent, and the probability that it was a forced trend (more likely than not mainly anthropogenic) exceeding 99 percent. The authors’ conclusions simply have no basis.

      • To illustrate further, you stated:

        For instance, if I say a result has a certain P value of 0.01, what that means is that the likelihood that my result is related to the null result is only 1%. Or put another way, that if I had sampled all values from the distribution of the null, I only have a 1% chance of pulling out the results I got for the number of samples..

        Your second sentence appears to be correct. Your first sentence appears to contradict the second sentence and is wrong, if you meant by it only a 1 percent probability the null hypothesis was true.

      • italics off, I hope.

      • Just to elaborate a bit more, I’ve offered evidence in more than three previous threads that I believe shows almost all the post-1950 or post-1956 warming to be due to forced trends, with less than, I would estimate, about a 0.0001 probability it was due mainly to “natural” fluctuations of the type described by the authors. This is a prior probability based on the physics of the atmosphere/ocean dynamics and is estimated without the data provided by the authors of this paper. I also estimate as a prior probability that the physics would predict a warming at least as much as cited with something like a 99% likelihood. Then, if we accept the authors’ claim that the “natural” fluctuations would do the same with a 60% likelihood, what is the probability that the warming was “natural”? I calculate a P value for “natural” of about 0.00006, which is quite a bit lower than 60%. My priors might be wrong, of course (they were probably slightly exaggerated but not much), but that doesn’t invalidate the principle that priors are necessary to determine the probability the outcome was natural as opposed to a forced trend.
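
        For anyone who wants to check the arithmetic, here is the update with the numbers stated above (they are the commenter’s subjective priors and likelihoods, not established values):

        ```python
        # Bayes' rule with the numbers stated above: prior P(natural) ~ 0.0001,
        # P(observed warming | natural) = 0.6, P(observed warming | forced) = 0.99.
        p_nat = 1e-4
        p_forced = 1.0 - p_nat
        like_nat, like_forced = 0.6, 0.99

        posterior_nat = p_nat * like_nat / (p_nat * like_nat + p_forced * like_forced)
        print(f"{posterior_nat:.5f}")   # ~0.00006, nothing like the 60% likelihood itself
        ```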

      • A small point – I agree that pdfs rather than single prior probability numbers would be a more precise way to address the issue, but that changes the underlying concepts very little. The failure of the authors to understand the basic statistical principles lies elsewhere – in an understanding of the need for prior probability estimates for any meaningful estimate of posterior probabilities.

      • I calculate a P value for “natural” of about 0.00006, which is quite a bit lower than 60%

        Fred, is that first number 0.00006 or 0.00006% ?

        If it is the latter I agree with it because that is the probability that we would see a 1.2 degree C temperature shift in 100 years due to natural variability.

        If it is the former, my approach differs from whatever you are doing.

      • A small point – I agree that pdfs rather than single prior probability numbers would be a more precise way to address the issue, but that changes the underlying concepts very little.

        I am using PDF’s and CDF’s as well, tuned to specific intervals of interest. This is the one I am using:
        http://4.bp.blogspot.com/-k1vYjQ8Qthk/TqdulOhrSEI/AAAAAAAAAko/0H9kUnzIcZs/s1600/vostok_temperature_changes.gif

        That is the only way to go. If Ludecke is not doing something as obvious as a cumulative for calculating exceedance probabilities, then that is another strike against their paper.

      • Better I think to let people provide their own priors, but I basically agree with Fred. Better though to just give the diagnosticity ratio. If P(D|H1)=1 and P(D|H2)=.6, then this is Diagnosticity = P(D|H1)/P(D|H2) = 10/6. If you started with 50/50 priors, your posterior odds favoring H1 would be 10/6. If you really have confidence that P(D|H1)=1 of course. Fred is right that it’s a common error to mistake P(H|D) for P(D|H).

    • I agree that the probability of the actual data, given the hypothesis, is not the same as the probability of the hypothesis, given the data. The conclusion that “there is no way we can translate the probability that a ‘natural’ phenomenon will give a result into a probability that the result was natural” is wrong in principle, but is almost certainly right in practice if one is trying to make objective judgements about the other relevant probabilities.

      There are many competing hypotheses about the causes of the temperature record. For many scientists and commentators, the probability of their own preferred hypothesis, given the data, is dominated by their own assessment of the prior probability of that hypothesis. For many, there appears to be a strong tendency to place a high prior probability on a hypothesis which incorporates various accepted physical laws.

      For me, I think that too many people are far too confident about what they know. I also think that any statistical research which shows high probability of the data, given some very simple hypothesis, should cause the scientists to think a lot harder about our state of knowledge about the climate.

      • Hi Patrick – Like you, I am made somewhat uncomfortable by subjective priors, even though some subjectivity is often inevitable in the absence of comprehensive data. I gave an example of priors above based on physical data on ocean heat storage (described in a previous thread). The number I arrived at was certainly in part a subjective estimate, but I believe the objective ocean data make a compelling case that my estimate in favor of a forced trend was in the right “ballpark” range. Even if I was wrong, it would still illustrate the principle that available evidence prior to a probability calculation from a set of data strongly affects the eventual probability value that emerges.

  21. Brandon Shollenberger | November 10, 2011 at 10:33 am | Reply
    There is no reason one would need to propose a physical model to explain the sort of fluctuation the authors discuss.

    Agreed, the authors show that the fluctuation is explained as a statistical property of the numbers. The underlying model is that fluctuation is inherent in all things.

    This type of testing goes on all the time in industry. For example: No two cars off the assembly line are identical. Does this mean one of the machines/people on the assembly line is malfunctioning? Or is the variability simply normal for this assembly line?

    • Brandon Shollenberger

      I agree, as long as the fluctuations being observed aren’t beyond reasonable expectations. For example, no two cars will be identical, but if a car winds up with three doors instead of four, we can’t just shrug our shoulders and say, “Variation.”

      That’s why I made sure to point out autocorrelation in temperature is not a surprising result, but rather something we already knew to be true. If nobody had ever heard of that idea before, then yes, the authors would have to explain why it would be true.

  22. Fred Moolten | November 10, 2011 at 11:24 am | Reply
    I would like to know how the probabilities that natural fluctuations can yield the observed warming can tell us the probability that the warming was natural.

    The IPCC concludes that late 20th century warming is outside of natural fluctuations. On this basis it concludes that the cause is very likely CO2.

    However, if the observed warming is within the range of natural fluctuations, there is no reason to conclude CO2 is “very likely” the cause. Natural fluctuations could be the cause, or CO2 could be the cause. Neither is then “very likely” the cause, because neither has been ruled out.

    • xactly ferd,

      I would like to see someone offer convincing evidence that there have not been at least 4,556 thirty-year periods in the last million years that have had temperature excursions just like that from 1979 to 2009.

    • The IPCC concludes that late 20th century warming is outside of natural fluctuations. On this basis it concludes that the cause is very likely CO2.

      I’m continually confused as to what comprises a “technical” comment in these kinds of threads. But seeing as how this kind of comment is all over the thread……

      To be accurate, the IPCC says that it is 90% likely that more than 50% of the warming is caused by CO2 emissions.

      Natural fluctuations could be the cause, or CO2 could be the cause. Neither is then “very likely” the cause, because neither has been ruled out.

      You are arguing that natural variations and CO2 emissions are mutually exclusive as causal explanations – the IPCC does not make such an argument.

      • I realize that I should clarify, as my comment above wasn’t really accurate – the IPCC does rule out the likelihood that some 50+% of the recent warming is due to natural variations. The IPCC does argue that CO2 and natural variations are likely mutually exclusive with respect to that 50+% – but not to “late 20th century warming,” speaking more generally.

      • That’s one of the problems with the Ludecke paper, in that they say there is a 40 to 90% chance that it is all due to natural variation, instead of suggesting correctly as IPCC did that a proportion of the rise is due to natural variations with a given confidence interval.

        This either demonstrates that they are sloppy or take extremist views to get their point across. Just another reason why the papers are so weak.

      • actually, Ludecke is addressing detection, while the IPCC statement you refer to addresses attribution.

      • Judy – I really don’t know how to resolve the idea that they are discussing detection with their conclusion that the warming trend was part of a natural fluctuation, which sure sounds like attribution to me. Could you please expand on your thinking on that?

      • Web, all of what, and what proportion of what? 40 to 90 percent of the variation in the time series could be due to “natural” variation. Before Man took over, pretty much all of the variation had to be natural. Since man got busy, there is a trend and there is variation in that trend. Is the variation in the modern trend man-made? Could CO2 have non-linear impacts?

        For the IPCC, “some portion” of the trend is natural; what proportion? CO2 may cause 5% of the atmospheric effect. CO2 may cause 30% of the atmospheric effect. CO2 may cause 5% sometimes and 30% sometimes, which is much more likely in my opinion. What is the real number?

      • BobN – sorry if I’m wrong on this, being only a Germanist – what I think Judith is saying is that the Luedecke paper addresses the chances of a variation such as the one we have seen being down to natural causes, but not the chances of the *actual* change we have seen. In other words, if you have a change in the climate trend N, the attribution of that change might be x% natural plus y% anthropogenic; on the other hand, you might have a change in the climate trend of character N which is 100% natural (i.e. only x). So the Luedecke paper says: what are the chances that said change is of a magnitude and character *consistent with* only natural change (only x), without necessarily saying that it isn’t for sure some combination of x and y.

        Clear as mud? Sorry for my late-night explanation.

      • To be a bit more accurate, the IPCC says that it is 90% likely that more than 50% of the warming is caused by anthropogenic greenhouse gas emissions, with CO2 accounting for ~50-60% of GHG forcing

      • Interesting. I didn’t know that. Thanks.

  23. I think most would agree this conversation has (on many levels) been worth having. I feel privileged, even a little humbled by what can be described as an object lesson in how valuable the blogosphere can be under certain conditions.

    I wonder if Professor Tol might be rethinking his charge that Dr. C is spreading “disinformation.” Sir, do you still as you put it, “mean every letter” of that harsh word?

    • “To be accurate, the IPCC says that it is 90% likely that more than 50% of the warming is caused by CO2 emissions.”

      Uh huh. I’d say it’s 90 percent likely that there’s a better than even chance the IPCC is simply making this up as they go along.

      • True, true. AGW theory is more social than science and ‘very likely’ a mass social delusion.

  24. Whether deliberately or by accident, it does seem that while some parts of the paper have provoked reasoned debate, others (as somebody said earlier) have been at cross-purposes. In particular Tol’s point about the 100 yrs vs. 1200 months seems to have been misunderstood in the Ludecke et al. rebuttal. Do I sense some linguistic difficulties from scientists arguing in their 2nd language?

  25. In our papers LU and LL the HURST exponents are calculated by an established method, not “estimated”.

    I’ll make a minor terminological note of the usual sort made by statisticians. For inferential purposes, the value computed from the sample is an estimate of the relevant corresponding value in the population.

    It is self evident by basic maths that this law, and as a consequence, the exponent of the power law is not effected by the time unit, be it day, month or year (see in LU the beginning of the last segment on page 5. See also in LL on page 2, Eq. 2 and on page 4, Eq. 5 and Eq. 6)

    This is true. We readers, writers and speakers sometimes get into an unconscious habit of thinking of the decay of autocorrelation with the index of the time series, not the lag time. Because the relevant dimension is time, you do not need equally spaced (or interpolated) values to estimate the autocorrelation function; you just need a more complicated algorithm than is supplied in most packages.

    Pedantry Note! it should be “affected”, not “effected” in the quoted paragraph.

    It’s a pity that Dr. Tol did not address his critical acumen to the Santer et al study (17 years and all that.) On the whole the two featured papers by Lüdecke et al are better than that paper. It is good that they were published.

    • oops, this is slightly off: Because the relevant dimension is time, you do not need equally spaced (or interpolated) values to estimate the autocorrelation function, you just need a more complicated algorithm than is supplied in most packages.

      Interpolated values can be useful for estimating the autocorrelation at shorter lags. I was thinking of the case with few missing data, missing completely at random.
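
      A minimal sketch of the kind of “more complicated algorithm” meant here (a binned, or slot, estimator; illustrative only, not taken from any package): pair up the observations, bin the pairs by lag time rather than by index, and average within each bin.

      ```python
      # Sketch only: autocorrelation estimated against lag *time* so that
      # unequally spaced (or gappy) observations can be used directly.
      import numpy as np

      def binned_autocorrelation(times, values, bin_edges):
          t = np.asarray(times, dtype=float)
          x = np.asarray(values, dtype=float)
          x = x - x.mean()
          var = x.var()
          i, j = np.triu_indices(len(t), k=1)       # every pair of observations
          lag = np.abs(t[j] - t[i])
          prod = x[i] * x[j]
          acf = []
          for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
              sel = (lag >= lo) & (lag < hi)
              acf.append(prod[sel].mean() / var if sel.any() else np.nan)
          return np.array(acf)
      ```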

  26. I would like the authors to identify and make available the actual data that they used. Code as well.

    Same rules for them as for Mann and Jones.

    • Except Mann & Jones didn’t of course.
      But have you asked them if their data & code are already available somewhere?

      • From my entry into this matter I have had a consistent approach. Papers are not science. They are advertisements for the science. The real science is the data and the methods. I am under no rational obligation to consider or evaluate any advertisement for science that does not explicitly include the data as used and the method (code) as run. This is a standard I have held Mann and Jones to. No skeptic stepped forward to attack this principle. In fact, they encouraged this. Now, when skeptics publish, people seem to forget the principles they endorsed when the shoe was on the other foot.

        No cookie for you. I’ll suggest you advise them to post their science.

      • Steven

        I am with you on this.

      • Girma and Steven

        I agree.

      • Michael Larkin

        Mosh,

        I agree with the principle, but give them a chance – have you asked for the data? If you do and they refuse, then your tone is justified, otherwise not.

      • This is WHY it is necessary to provide the data WITH the paper.
        and why it is not enough to merely point at a pile (as Jones did)
        and exclaim.. “I got my data there”

        I agree.

        I think the message is getting out slowly.

      • “They go back for somewhat more than 220 years, with the earliest being 1757 for Paris and the latest starting in 1781 in Hohenpeissenberg and Munich and can be accessed at [11], [22], [43], [57].”

        Looking at the first two citations

        11. http://data.giss.nasa.gov/gistemp/station data/
        22 http://xmarinx.sweb.cz//KLEMENTINUM.xls

        citation 11 does not specify WHICH data is taken from GISS and does not specify which version of the data they used.

        citation 22 is a dead link

        This is WHY it is necessary to provide the data WITH the paper.
        and why it is not enough to merely point at a pile (as Jones did)
        and exclaim.. “I got my data there”

  27. I don’t pretend to understand the statistics, and the arguments between the two sides, but my impression of what is happening is that it is similar to the Spencer/Dessler debate. There is a dearth of observed data with which to support or refute the hypothesis of CAGW. The proponents of CAGW say that this is irrelevant, since (sarc on) the output of their non-validated models proves beyond all doubt that CAGW is real (sarc off).

    However, there are people around who believe in the “scientific method”. Spencer is one and Ludecke is another. They are trying to find ways to test the hypothesis of CAGW against observed data; Spencer with the ERBE and CERES satellites, and Ludecke with global temperatures and advanced statistical techniques. Both have concluded that the claims of CAGW are wrong. In the case of Spencer, the “Team” is represented by Dessler, and in the case of Ludecke, the “Team” representative is Tol. It matters not to the Team what is right; all that matters is that the heresies of Spencer and Ludecke are rebutted. Then they don’t need to be considered by the IPCC for the AR5. I suspect that it is really that simple.

  28. Judith

    I trust you asked the authors to answer comments. For the benefit of a guest post you would be within your rights to ask the authors to come and answer questions. We all bitch and moan about (say) gavin or mann’s reluctance to answer in venues where they don’t control everything (gavin is, to his credit, improving), so how about some fundamental fairness. Richard Tol came into a hostile environment (ok, maybe he peed in the tent) and took questions,
    will the authors stand and deliver here?

    • Some people are uncomfortable with the rapid-fire blog response environment (especially with almost 1000 comments), and they prefer to write a well thought out response as a synthesis of the main concerns. Ludecke et al. are planning a response to some of the questions that they found most interesting, including some critiques. So this is a different kind of blogospheric engagement, which I still think is worthwhile. I’m experimenting here. But a more measured and careful response from them, given the controversy, is not a bad strategy.

      • Well, anyone can claim they are uncomfortable with answering direct questions from the public. I see no reason to believe them and no reason to encourage this type of engagement. But let’s see how it pans out.

      • Some people have a hard time with bumper-sticker sized questions and answers with a subject that needs more than that. Hockey isn’t a game for ballet dancers.

      • which giss dataset was used.

        pretty effin simple?

        the link to other datasets you reference is dead.

        pretty effin simple.

        Of course some people go off discussing a method before even checking the data.

      • They’re probably more worried about dumb and leading questions, but yeah. The data should be boilerplate. Anybody tagged with “denier” can expect the Spanish Inquisition, though.

      • Michael Larkin

        “Well, anyone can claim they are uncomfortable with answering direct questions from the public. I see no reason to believe them and no reason to encourage this type of engagement. But let’s see how it pans out.”

        For God’s sake, man. Just look at what you are saying and the way you are saying it. It’s totally unwarranted and you ought to be ashamed of yourself. For all you know, this may be their first experience with the blogosphere. Have some patience and wait to see what transpires. There’s more than one way to skin a cat.

      • Brandon Shollenberger

        To be honest, if I were an author of a paper, and steven mosher made comments like this, I wouldn’t send him anything. I might publish the data somewhere where everyone could get it, and I would certainly respond to reasonable requests other people sent to me, but I wouldn’t do a single thing for the sake of a person acting like him.

        Maybe that makes me a “bad person.”

      • Michael – Steven can defend himself, but he’s probably aware that these authors are not neophytes when it comes to promoting their positions or engaging in blog combat. See Capo’s comment below for some information on this. Ludecke also emailed his LL paper to Dr. Curry unsolicited, which is how this entire process started. I don’t have strong feelings about whether they deserve some special treatment if they post here, but Steven’s view seems to be not unreasonable.

      • steven mosher

        Back off.

        This blog is in English, and it is apparent that they are no more comfortable in rapid-fire debating in the English language than you would be on a German language blog.

        But you are right, of course, that they should “explicitly include the data as used and the method (code) as run”.

        Max.

    • Steven,

      the authors didn’t answer questions on their own blog EIKE when they introduced their papers. In fact, most questions were censored, and the few that passed moderation numbered nowhere near 1000 – just about ten.

      I suspect the true reason is that the authors are neither climate scientists nor statisticians, so I doubt they are able to respond quickly. Remember that they have just replicated a method used by Lennartz and Bunde on their data. The only part going further is their conclusions, and this part is wrong according to R. Tol. The few “answers” given on their blog consisted of telling the questioner that he was stupid, that he was a layman, that he should read some literature and that he should read the paper properly – all full of arrogance.

      So the most interesting part for me was whether Ludecke would choose the same strategy in his answer to R. Tol, now that he is the amateur and Tol the expert. I would say he did, but in a more subtle manner.

      (Excuse me for writing that in a technical thread, I will not repeat)

  29. WebHubTelescope | November 10, 2011 at 1:31 pm |

    Unfortunately there is no reply link below this comment of yours, so I hope you see it further down.

    You say:
    “According to the Vostok data, for an arbitrary 100 year time span, a 0.75 degree or more temperature increase will occur with only a 3% probability.”

    My statistics aren’t very strong, so perhaps you can tell me how to pick out an arbitrary 100 year period of temperature from data whose spacing between points keeps widening. It goes from 20 years at present to more than 600 years at 400,000 years ago. Wouldn’t much of the temperature information have been smoothed away due to compression?

    • My statistics aren’t very strong, so perhaps you can tell me how to pick out an arbitrary 100 year period of temperature from data whose spacing between points keeps widening. It goes from 20 years at present to more than 600 years at 400,000 years ago. Wouldn’t much of the temperature information have been smoothed away due to compression?

      The way a random walk works is that the excursion grows in proportion to the square root of time, so you can often interpolate how far the excursion is over a fraction of that time.
      \Delta T = \sqrt{Dt}
      Not having all the short time periods available in Vostok is OK as the longer periods make up for it, until we get to the really long interglacial periods where the long-range correlations are observed.
      http://img713.imageshack.us/img713/3023/vostokdfa.gif
      Those are arbitrary units on the y-axis but they do correspond to expected excursions both + and – from an arbitrary starting point.

      As for the weird compression/decompression, I did an interpolation that forced points on 100 year intervals just to keep the DFA algorithm simple. If someone has problems with that approach, come up with a better strategy, perhaps segment the data into different ranges and do it that way. I really doubt that you will see much different behavior. The power law line has to go through the origin or something is very weird.

      I could try it again and run on 20 year intervals and only go back 20,000 years. This would give me 1000 DFA points, and I could see whether the random walk curve changes much near the origin.
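
      For readers who want to try the interpolation step described above, here is a minimal sketch (not WebHubTelescope’s actual code) of resampling an irregular ice-core record onto a uniform grid with numpy; the 100-year and 20-year step sizes are simply the choices mentioned above.

      ```python
      import numpy as np

      def to_uniform_grid(age_yr, temp, step_yr=100.0):
          """Resample an irregularly spaced (age, temperature) record onto a
          uniform grid by linear interpolation (e.g. 100-yr or 20-yr steps).
          age_yr is assumed to be increasing (years before present)."""
          age_yr = np.asarray(age_yr, dtype=float)
          temp = np.asarray(temp, dtype=float)
          grid = np.arange(age_yr.min(), age_yr.max(), step_yr)
          return grid, np.interp(grid, age_yr, temp)

      # A 20-yr grid reaching back 20,000 yr gives about 1000 points, as noted above.
      ```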

      • Thanks for the reply.
        Looking at the Vostok data I see 5 periods of thirty years or less with a positive change greater than 0.75 C between 190 YBP and 695 YBP, with another 2 periods showing a negative change of at least that size. That time frame covers 21 data points. I can’t help but think your 3% figure is due to a lot of lost information in the larger data point gaps. It also indicates that no 30-year period in the 20th century can be considered unusual.

      • I see that too now that you mention it, but what I did was a large statistical analysis, and I specifically avoided cherry-picking data. When I said there is a 3% probability that you will find a temperature change of 0.75 degrees in 100 years, that says it should happen about 100 times in a data set of 3300 points at 100 year intervals (a sketch of that window count appears after this comment).

        If the fluctuations are larger during more recent times, or larger when the climate is warmer as it is now, that may be an influence. If that is truly the case, it is interesting on its own, as it may indicate that cool periods fluctuate less.

        The random walk fit to the data should scale over all ranges. Because the sampling is interpolated to 100 years, I can’t say anything other than that the fit works best in the 1,000 to 100,000 year range and that it scales backward. I discuss this elsewhere in this thread.

        In any case, if someone wants to do the DFA on this data set, they are free to see if they can come up with a different statistical result across the entire range.
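
        As a concrete version of the “about 100 times in 3300 points” arithmetic above, here is a minimal sketch (not the original calculation) that counts how often the change over a 100-year window exceeds a threshold on a uniformly gridded series; the function name and defaults are illustrative only.

        ```python
        import numpy as np

        def exceedance_fraction(temp, step_yr=100.0, window_yr=100.0, threshold=0.75):
            """Fraction of windows of length window_yr over which the temperature
            rises by at least `threshold`; the series is assumed to sit on a
            uniform grid with spacing step_yr."""
            temp = np.asarray(temp, dtype=float)
            lag = int(round(window_yr / step_yr))   # samples per window
            rises = temp[lag:] - temp[:-lag]        # change over every window
            return float(np.mean(rises >= threshold))

        # 3% of ~3300 hundred-year spans is roughly 100 exceedances, which is the
        # back-of-the-envelope count quoted above.
        ```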

  30. Repeating a point I made in the Tol thread, statistical analyses that purport to be able to determine “natural” versus “anthropogenic” without providing a physical causative basis are essentially useless.
    And this works both ways.

    Weather and climate are due to physical interactions between the sun, the solar system, the galaxy(?), the oceans, land masses, atmosphere, and biosphere. Statistical analyses may be useful to discern patterns of cyclicity or oscillatory behaviour and may be able to show that it is possible that the centennial warming was “natural”, but ultimately such methods have no explanatory power as to the cause(s) of recent warming.

    • True, true… nominally there is but a single independent variable: it is the sun, stupid. While it is also true (although amazing) that there are those who will argue against such an obvious fact of nature, 100 million monarch butterflies nevertheless headed south this winter to Southern California and Mexico, as always, and the monarch can’t NOT follow that pattern. It is a pattern humans can easily discern.

      There is another pattern that humans would do well to pick up on. The historical record of the Earth shows us there are these periods of global warming separated by long periods of global cooling.

      “Solar scientists predict that, by 2020, the sun will be starting into its weakest Schwabe solar cycle of the past two centuries, likely leading to unusually cool conditions on Earth. Beginning to plan for adaptation to such a cool period, one which may continue well beyond one 11-year cycle, as did the Little Ice Age, should be a priority for governments. It is global cooling, not warming, that is the major climate threat to the world, especially Canada. As a country at the northern limit to agriculture in the world, it would take very little cooling to destroy much of our food crops, while a warming would only require that we adopt farming techniques practiced to the south of us.” ~R. Timothy Patterson .

  31. Webhub telescope

    The Vostok temperature record cannot be used to give any prediction of the likelihood of a 0.75 degC change in temp over any 100 yr period. The resolution in the dataset is not sufficient for this purpose.

    • Good point. The publicly released datasets have 1 meter resolution, which as you say is not enough. There are studies that have been done at higher resolution on parts of the core, but I think Webhub telescope was using the generally available data, and if that is so then it indeed cannot say anything useful about change on the 100 year scale.

      • Rob Rose said:

        The Vostok temperature record cannot be used to give any prediction of the likelihood of a 0.75 degC change in temp over any 100 yr period. The resolution in the dataset is not sufficient for this purpose.

        That is a lie, and you probably know it. Look at the Vostok DFA analysis for a time range from 1,000 years to 10,000 years. It fits the random walk trend of sqrt(D*t) perfectly and, importantly, it passes through the t=0 origin. Unless there is something very pathological going on, this behavior should extend to the 100 year and lower range.

        This is the Vostok data
        http://img526.imageshack.us/img526/4814/vostokdfa2.gif

        This is a Monte Carlo sim over the entire range
        http://img809.imageshack.us/img809/2428/mcdfa.gif

        The pathological case would occur if the curve developed a double kink at short times as it would have to increase in slope first, then decrease in slope until it met up with the sqrt(D*t) line (and then continued with that slope until it hit the interglacial long-range correlations). That would be some very strange diffusional behavior that would likely not have a physical explanation, especially the decrease in slope, which would be a reverting-to-the-mean pull (the only reverting to the mean I see is the interglacial limits).

        The point is that scientists are always doing these kinds of inferences, interpolations, and extrapolations: coarse-graining at one scale and transferring the result to a scale that cannot be directly resolved.

        What someone can do is take the European cities data from the Ludecke paper and verify that it also follows a sqrt trend up to 100 years. That way we would have a piecewise stochastic reconstruction that differs only in the diffusion coefficients.
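
        For anyone who wants to reproduce this kind of scaling check, here is a minimal DFA-1 sketch in Python with numpy, together with a Monte Carlo random-walk comparison. It is standard textbook DFA and may differ in detail from the fluctuation plots linked above; the scale choices are illustrative.

        ```python
        import numpy as np

        def dfa(x, scales, order=1):
            """Plain DFA: fluctuation function F(s) of the integrated, locally
            detrended series. The slope of log F(s) vs log s is the scaling
            exponent: ~0.5 for white noise, 0.5-1 for persistent noise, and
            ~1.5 if the input series is itself a random walk."""
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())                    # the 'profile'
            F = []
            for s in scales:
                n_win = len(y) // s                        # non-overlapping windows
                segs = y[:n_win * s].reshape(n_win, s)
                t = np.arange(s)
                msq = [np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                       for seg in segs]
                F.append(np.sqrt(np.mean(msq)))
            return np.array(F)

        # Monte Carlo comparison: a pure random walk (~3300 points, i.e. 100-yr steps)
        # should give a DFA slope near 1.5 across these scales.
        rng = np.random.default_rng(1)
        walk = np.cumsum(rng.normal(0.0, 1.0, 3300))
        scales = np.unique(np.logspace(0.7, 3.0, 15).astype(int))
        alpha = np.polyfit(np.log(scales), np.log(dfa(walk, scales)), 1)[0]
        ```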

      • Brandon Shollenberger

        That is a lie, and you probably know it.

        This is a fascinating sentence. Doesn’t it make you wonder how WebHubTelescope knows the statement is a lie if he isn’t even sure Rob Rose knows it to be false?

  32. Edim | November 10, 2011 at 9:03 am | Reply

    My point is diametrically opposite to yours. I know you mean well, but you’re misled.

    You are aware that the Vostok record does not match up to the modern thermometer record, aren’t you?

  33. BlueIce2HotSea

    Edim –

    Not to pile on here, because I don’t even have a point. However, I have been led to understand that your point is diametrically opposite to mine and not the other way around. What do you say?

    :)

  34. Chris Ho-Stuart | November 10, 2011 at 4:20 pm | Richard, that new paper is for Greenland. Vostok is in the Antarctic. It’s quite possible — in fact I would expect it — for temperature variations in Greenland to be greater than those seen in the Vostok cores.

    ———

    Yes I know. If the Arctic was warmer back then, what are the chances the Antarctic would be too? High. The point is, the current warm period is NOT unusual and cannot have only a 3% probability of happening, given this new analysis.

    • I don’t follow. I wasn’t talking about “warmer back then” but about century scale variations up or down. I would expect significantly more century scale variation in Greenland than in the Antarctic. At the least, the idea is not immediately absurd, which is what you were apparently suggesting.

      The better criticism of WebHub’s quoted probability of century scale excursions seems to me to be the time resolution of the publicly available ice core data sets from Vostok.

      • That would have to be tested. The point is that his claim that the current warming has only a 3% probability of being natural is bogus.

        The problem of resolution also means they are still bookended by a short time frame of modern temperature readings, with nothing to compare to. Hence the AGW claim is fantasy.

  35. WebHubTelescope | November 10, 2011 at 4:43 pm | How can you possibly say that with this new paper out:

    Then provide me with the raw data and I can check. My email is on my nickname.

    ———

    I don’t have that data, but I’m sure the authors of that paper will provide it. Point is, your 3% claim is refuted by this paper.

    • I don’t have that data, but I’m sure the authors of that paper will provide it. Point is, your 3% claim is refuted by this paper.

      Without the data available, just glancing at it, it does show higher variability than the Vostok data; yet it also shows a higher propensity for reverting to the mean. What that means is that if I put a larger Markov random walk coefficient into a generator, it does a better job of matching the scale of the fluctuations, but it will also start wandering off the chart with large excursions. Check this at about 5x the diffusion coefficient of Vostok; notice how it matches the minor fluctuations, but the major ones are much bigger in the simulation, as it won’t revert to the mean (a toy sketch of this with-and-without-reversion comparison follows at the end of this comment).
      http://img546.imageshack.us/img546/5263/greenland2.gif
      It is possible that there is more uncorrelated white noise in the Greenland data due to the spikiness. I think that is why the authors draw the green line through the data, as it may be measurement noise that they are filtering out.

      Overall this Greenland data would be appropriate for a log-log fluctuation plot. The trending is very minimal (it looks to revert to the mean), so the DFA would be very easy to do. But not having the data, I can only guess that, yes, Greenland does show more variability, but I can’t say how much.

      Tough crowd, get nailed by hot off-the-press data. I am glad I only do this as a hobby. And remember, oil depletion will bite us well before climate change does :)
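
      To illustrate the point above about a larger diffusion coefficient wandering off the chart unless something pulls it back, here is a toy sketch (not WebHubTelescope’s actual generator) of a plain Markov walk next to a mean-reverting one; D, tau and the seeds are arbitrary.

      ```python
      import numpy as np

      def random_walk(n, D, dt=1.0, seed=0):
          """Pure Markov random walk: the excursion variance grows like D*t."""
          rng = np.random.default_rng(seed)
          return np.cumsum(rng.normal(0.0, np.sqrt(D * dt), n))

      def reverting_walk(n, D, tau, dt=1.0, seed=0):
          """Same step size, but each step is pulled back toward zero with a
          relaxation time tau, so large excursions are damped (mean reversion)."""
          rng = np.random.default_rng(seed)
          x = np.zeros(n)
          for i in range(1, n):
              x[i] = x[i - 1] * (1.0 - dt / tau) + rng.normal(0.0, np.sqrt(D * dt))
          return x

      # With D increased ~5x, random_walk matches the small wiggles but drifts far
      # from the record; reverting_walk keeps the same wiggles within bounds.
      ```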

      • WebHubTelescope | November 10, 2011 at 7:43 pm | Reply
        Point is, your 3% claim is refuted by this paper.

        As you ignore most of my questions or answer with ad hominem, I ask you again: if it is 3%, please explain the earlier, equally large rising trend in the same century.

      • As you ignore most of my questions or answer with ad hominem, I ask you again: if it is 3%, please explain the earlier, equally large rising trend in the same century.

        You have to clearly label the starting dates and ending dates and the starting temperature and ending temperature, so I can figure out exactly what you are referring to. It also might help if you graph this and label the chart with the region of interest.

      • Are you seriously trying to suggest that you are so familiar with all the temperature records and yet unaware of the 1910-1940 warming period?
        Have you ever looked at the temperature record data prior to the 2000 versions, i.e. before all the “Adjustments” were made?

  36. Dear Bloggers

    Why don’t we accept what the data says?

    http://bit.ly/sxEJpK

    http://bit.ly/szoJf8

    What pattern does the data show regarding the global mean temperature since the record began 160 years ago?

    Does not the data show a single pattern?

    Does not the data show a single pattern that has a long-term global warming rate of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years?

    Why don’t we accept what the data says?

  37. The LL paper is uninformative. They only *fail to reject* the null hypothesis that local linear temperature trends are caused *entirely* by fractional noise. This can tell us nothing about *how much* of local linear trends is caused by fractional noise versus other processes, nor anything about non-linear trends. It provides *no evidence* for causation by fractional noise, *nor evidence against* other processes. It does not even support an assertion that the fractional noise contribution is *non-zero*.

    Temperature series are modeled as second order polynomials plus fractional noise. Unfortunately the authors present no evidence that this is a particularly good model for temperature time series. There is no testing of model assumptions, no goodness of fit tests, no examination of fitted parameters, and no comparison with plausible alternative models (a sketch of such a comparison follows after this comment). The selection of a second order polynomial to represent deterministic trends is entirely arbitrary without knowledge about physical processes. Representing temperature variation over century and half-century periods by a linear trend is even worse, since we *know* that the forced contribution is non-linear in the global average, and even more so at the local level, which is affected by forced changes in the general circulation. Tests are normally done on the global average because these have the most power to reject the null of unforced variation. The authors’ reasons for choosing to test at the local level, which has the weakest power, are rather weak.

    Finally, the authors attempt to turn the non-informative failure to reject the null into a positive attribution, by defining ‘natural’ as failing to reject the null!

    “‘Natural’ means that we do not have within a defined confidence interval a definitely positive anthropogenic contribution”

    They then claim that failing to reject the null is evidence for ‘natural’ causation. Which is effectively saying that failing to reject the null is evidence of failing to reject the null. Which is a rhetorical tautology.

    Unfortunately, this is followed by a non sequitur: a false interpretation of the meaning of hypothesis tests…

    “therefore, only a marginal anthropogenic contribution can not be excluded”

    In my view the largest failing of the paper, common as far as I’m aware to every other claim about fractional noise in association with temperature variation, is their failure to test the method with GCM output where the forced component is a priori *known*.
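
    (Editorial aside: as one concrete form of the model comparison the comment above says is missing, here is a minimal sketch, not taken from LL, that compares polynomial trend orders by a simple AIC. It assumes Gaussian, uncorrelated residuals, which the fractional-noise setting violates, so treat it only as an illustration of the kind of check meant.)

    ```python
    import numpy as np

    def aic_poly_trend(t, temp, max_order=3):
        """Fit polynomial trends of increasing order and report a crude AIC
        (Gaussian, independent residuals assumed; the residual autocorrelation
        that LL is about is ignored here, which makes the AIC optimistic)."""
        t = np.asarray(t, dtype=float)
        temp = np.asarray(temp, dtype=float)
        n = len(temp)
        scores = {}
        for order in range(max_order + 1):
            resid = temp - np.polyval(np.polyfit(t, temp, order), t)
            rss = np.sum(resid ** 2)
            k = order + 2                      # polynomial coefficients + variance
            scores[order] = n * np.log(rss / n) + 2 * k
        return scores                          # lower AIC = preferred trend order
    ```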

  38. Dear JC,

    Just thought you would appreciate the latest opinion today from a poster to Bolt’s blog in Oz who goes by the moniker Polyaulax:

    ‘Judith Curry is no longer a credible voice on climate change..’

    Nice bit of roley poley eh what?

  39. Umm Nobel Peace Prize?

  40. Dear Lüdecke

    I have read your work with great interest.
    You are introducing a set of good, sound sceptic approaches (elevation, UHI etc.), but in reality you still underplay the sceptic argument.

    But you are approaching something useful, obviously.

    My main objection is that you can’t get real sense out of a pile of data using ANY mathematical approach if the pile of data itself is not sound (or is cherry-picked).

    Yes, elevated stations have a severe (WARM) problem in representing the bulk of lower lands; this is correct.
    But for some reason, for example, over half the long rural stations of the W European mainland that are publicly available are located in the Alps… And thus, by eliminating cities with more than 1000 inhabitants you get a better result, but you are still working with a pile of data with severe geographical (WARM) problems, and you still end up in fact underplaying the sceptic argument – even though people in the comments may think otherwise.

    But nonetheless, what you did is a good, truth-seeking job. Thank you.

    I recommend that you check out my work on RUTI:
    http://hidethedecline.eu/pages/ruti.php

    K.R. Frank