
Ludecke et al. respond: Part II

by Horst-Joachim Lüdecke, Rainer Link, Friedrich-Karl Ewert

It is nearly impossible to answer the more than 1000 comments that the previous three threads [here, here and here] on the surface temperature data records have generated. Many of the technical comments and objections would resolve themselves if the commentators better understood the methods applied. The sheer number of comments demonstrates the interest in the BEST and in the LU and LL publications.

We respond here to comments and questions that we regard as relevant or interesting, using the same abbreviations as in our original guest post.

What is meant by ‘natural’ in autocorrelation (persistent) time series?

The following example using a die is intended for inexperienced readers, not for experts. Imagine an “autocorrelated” die. If you throw it and, say, a 5 comes up, a mechanism inside the die moves a balance weight to the side opposite the 5 (the 2). Consequently, the probability of throwing a 5 again rises. Nevertheless, after repeated throws a new number will eventually come up, say a 1. The mechanism then takes the weight away from the 2 and adds it to the side opposite the 1, i.e. the 6. This procedure holds for all six numbers. As long as the same number keeps coming up the weight stays in place; it is slid to its new position when a new number arises.

Remarkably, the autocorrelated die is fair: in the very long run each number still arises with the same probability of 1/6. The only difference from an ordinary die is that “unusual looking” runs of the same number appear. Note that the autocorrelated die is not governed by an external trend. One would therefore designate the records generated both by an ordinary die and by an autocorrelated one as purely stochastic, or ‘natural’. The persistence of the die increases with the size of the balance weight.
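For readers who like to experiment, here is a minimal sketch of such a die in Python. It only illustrates the analogy above: we simply assume that the balance weight makes repeating the previous face (1 + w) times as likely as any other single face, which is not meant to model real dice mechanics.

```python
import random
from collections import Counter
from itertools import groupby

def autocorrelated_die(n_throws, w=3.0, seed=0):
    """Repeating the previous face is (1 + w) times as likely as any other single face."""
    rng = random.Random(seed)
    faces = [1, 2, 3, 4, 5, 6]
    last = rng.choice(faces)                  # the first throw behaves like an ordinary die
    throws = [last]
    for _ in range(n_throws - 1):
        weights = [1.0 + w if f == last else 1.0 for f in faces]
        last = rng.choices(faces, weights=weights)[0]
        throws.append(last)
    return throws

throws = autocorrelated_die(600_000, w=3.0)
print(Counter(throws))        # each face still appears roughly 1/6 of the time
print("longest run:", max(len(list(g)) for _, g in groupby(throws)))  # much longer than for a memoryless die
```

Increasing w makes the runs of identical faces longer without changing the long-run frequencies, which is exactly the persistence effect described above.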

A game with an autocorrelated die is an extremely simplified analogy for autocorrelated temperature series without external trends. Such series exhibit unusual-looking long ‘natural’ trends. Even purely autocorrelated synthetic records are therefore called ‘natural’, which may sound a little strange if the causes are not known. In contrast, an external trend changes the character of a record into ‘unnatural’. If one person in a group of players is cheating, an external trend in that person’s record becomes apparent in the long run.

What is the strategy for distinguishing between ‘naturalness’ and an ‘external trend’ in a temperature record?

Assume that a real (observed) temperature record shows a warming trend. Whether this trend is caused by autocorrelation or by an external trend cannot be answered without major effort. Detrended fluctuation analysis (DFA) assumes that the real temperature record consists of a ‘natural’ component and – if one exists at all – an external trend. Importantly, DFA yields the HURST exponent of the natural part of the real record. The value of the HURST exponent corresponds to the strength of the persistence.
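For interested readers, the following is a minimal, textbook sketch of DFA in Python (here with second-order polynomial detrending). It is not the exact implementation used in LU and LL, and the window sizes are only illustrative.

```python
import numpy as np

def dfa(record, window_sizes, order=2):
    """Textbook DFA: returns the slope of log F(n) versus log n (the HURST exponent)."""
    x = np.asarray(record, dtype=float)
    profile = np.cumsum(x - x.mean())                # profile (cumulative sum of anomalies)
    fluctuations = []
    for n in window_sizes:
        t = np.arange(n)
        rms = []
        for k in range(len(profile) // n):           # non-overlapping windows of length n
            seg = profile[k * n:(k + 1) * n]
            coeffs = np.polyfit(t, seg, order)       # local polynomial detrending
            rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(rms)))   # fluctuation function F(n)
    hurst, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return hurst

# sanity check: uncorrelated white noise should give an exponent of roughly 0.5
rng = np.random.default_rng(0)
print(round(dfa(rng.standard_normal(2400), [16, 32, 64, 128, 240]), 2))
```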

In the next step, one generates a large number of synthetic records with the same fixed HURST exponent as the real record and checks how many of them show the same or greater warming than the real record. If this number is very small, one assumes that the warming of the real record is too pronounced to still be ‘natural’. Consequently, the warming of the real record is classified as ‘external’. Otherwise, one assumes that it is ‘natural’. In other words, the question that has to be answered is the following: how probable is it to see the observed amount of warming in the record’s ‘natural’ component alone?
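The following is a much simplified Monte Carlo illustration of this exceedance test, not the refined method of [3] and [4]. As assumptions, the synthetic ‘natural’ records are approximated by Fourier-filtered noise with a prescribed HURST exponent, and ‘warming’ is measured by the rise of the linear regression line divided by the scatter around it (essentially the ∆/σ measure discussed further below).

```python
import numpy as np

def synthetic_record(n, hurst, rng):
    """Approximate fractional Gaussian noise: white noise is filtered so that
    its power spectrum follows f^-(2H - 1)."""
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                              # avoid dividing by zero at f = 0
    spectrum = np.fft.rfft(rng.standard_normal(n)) * freqs ** (-(2 * hurst - 1) / 2)
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()

def relative_trend(x):
    """Rise of the linear regression line over the whole record, divided by the
    standard deviation of the residuals around that line (Delta / sigma)."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    sigma = np.std(x - (slope * t + intercept))
    return slope * (len(x) - 1) / sigma

def exceedance_probability(observed, hurst, n_runs=2000, seed=1):
    """Fraction of synthetic 'natural' records whose warming is at least as large
    as that of the observed record; a very small value argues against naturalness."""
    rng = np.random.default_rng(seed)
    target = relative_trend(np.asarray(observed, dtype=float))
    hits = sum(relative_trend(synthetic_record(len(observed), hurst, rng)) >= target
               for _ in range(n_runs))
    return hits / n_runs
```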

Could a ‘natural’ fluctuation become an external trend?

Nature does not distinguish between “external trends” and events that increase the persistence of a record. However, such confusion can arise if the time scale considered is too short. In instrumental records of roughly 200 years, the detection of persistence is restricted to a maximum of about 50 years by the numerics of the DFA. A natural fluctuation with a period that is long compared to 50 years (for example the variation of the Sun’s magnetic field) is therefore automatically removed by DFA as an external trend.

Does one get different HURST exponents by using daily, monthly or annual means for the DFA?

In principle, the HURST exponent of a record is the same whether one uses daily, monthly, or yearly mean temperatures. This follows from the nature of the power law and is basic mathematics. Monthly means are most adequate for temperature analyses because the extreme short-term persistence of daily means could disturb the numerical evaluation of the HURST exponent; further, one is not interested in such extremely short persistence. Yearly means are not feasible for instrumental records because DFA requires a minimum of about 600 data points. If one uses monthly means, the seasonal influences must be eliminated: seasonal oscillations are regular and, if not removed, would feign a spurious HURST exponent.
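A minimal sketch of this seasonal adjustment for monthly means, assuming the record starts in January and spans whole years: the long-term mean of each calendar month is subtracted, and the resulting anomalies are what enter the DFA.

```python
import numpy as np

def monthly_anomalies(monthly_means):
    """Remove the regular seasonal cycle from a record of monthly mean temperatures."""
    x = np.asarray(monthly_means, dtype=float).reshape(-1, 12)  # rows = years, columns = calendar months
    climatology = x.mean(axis=0)                                # long-term mean of each calendar month
    return (x - climatology).ravel()                            # deseasonalized record for the DFA
```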

Is energy conservation violated by DFA?

The HURST exponent of monofractal time series ranges from 0.5 (completely random) up to values > 1. For values > 1 a record becomes non-stationary and, therefore, need never return to its mean. Consequently, for HURST exponents > 1 energy conservation could be violated in the long run. However, for values between 0.5 and 1.0 no such problem arises. All of our observed temperature records have HURST exponents below 1.

Do conflicts exist between the outcomes of LU and LL?

Yes. Except for Vienna (1791-1890), LU shows that all five long-term instrumental records (Hohenpeissenberg, Paris, Vienna, Prague and Munich) have 100-year fluctuations that reveal external trends. On the other hand, most of the 20th-century records analyzed in LL show only natural 100-year fluctuations. Obviously, the five instrumental stations in LU belong to the group of records worldwide that are determined by external trends.

The results of LU and LL in a nutshell  

LU evaluates 100-year fluctuations only – over a period of 2000 years. LU finds that these fluctuations (∆/σ)_i very often exceed the value of 2 and sometimes even 4. Here ∆ is the rise of the backward 100-year linear regression line through the temperatures T_{i-99}, …, T_i, σ is the standard deviation around that line, and i counts the years. See the left panel of Figure 6 in LU. In contrast, the instrumental records roughly keep to the value of 2, see Figure 3 of LU. As a consequence, the 100-year fluctuation of the 20th century has to be assessed as quite normal compared with the climate history of the last 2000 years. Under the restrictions of LU – only 100-year temperature fluctuations, the use of proxies, and only five long-term instrumental records – the results contradict, as a rather modest conclusion, an unusual forcing in the 20th century. Unfortunately, no additional instrumental records reaching back more than about 230 years are available.
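For illustration, here is a minimal sketch of the running fluctuation measure (∆/σ)_i described above; the window length and the handling of the record ends are kept as simple as possible and may differ in detail from LU.

```python
import numpy as np

def running_delta_over_sigma(annual_temps, window=100):
    """For every year i, fit a linear regression line to the preceding `window`
    annual values; Delta is the rise of that line, sigma the standard deviation
    of the residuals around it."""
    temps = np.asarray(annual_temps, dtype=float)
    t = np.arange(window)
    ratios = []
    for i in range(window - 1, len(temps)):
        seg = temps[i - window + 1:i + 1]
        slope, intercept = np.polyfit(t, seg, 1)
        delta = slope * (window - 1)                    # rise of the backward regression line
        sigma = np.std(seg - (slope * t + intercept))   # scatter around the line
        ratios.append(delta / sigma)
    return np.array(ratios)                             # one value of (Delta/sigma)_i per year i
```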

LL follows a different strategy. Here a large number of 100-year and two 50-year long records are analyzed under the question of how natural their courses are. With the advanced method of [3] and [4], LL finds naturalness in a high percentage of cases. The most important results of LL are that the UHI effect and a stronger warming with station elevation are verified. To our knowledge the latter phenomenon was never reported in the literature before, except for a short remark about the strong warming of three mountain stations (Zugspitze – Germany, Säntis – Switzerland and Sonnblick – Austria). LL evaluates the probabilities for the ‘naturalness’ of the 20th-century warming and does not at all exclude additional warming by anthropogenic CO2. However, the applied method cannot, for this additional part, distinguish between CO2 and many other possible forcings. The results of LL are roughly compatible with the known climate sensitivity of CO2 without feedback of about 1 °C. The same was concluded in an earlier paper of Link and Lüdecke, not discussed here. However, LL cannot find any indication of a feedback, i.e. of a climate sensitivity of CO2 higher than the assumed 1 °C.

What do we judge to be the strongest shortcoming of LL and of all papers about “global” records?

Clearly the extremely unequal station density over the Earth. We did not make any attempt to correct for this and are quite aware of this deficit of the available local stations. Further, we argue that global records are not a feasible tool for examining global temperatures: their excessively high HURST exponents and their too small standard deviations make them inapplicable for DFA. Until now we have had no idea how to solve the problem of the extremely unequal station density over the Earth.

Why did we not apply general error estimates?

We applied error bars only in Figure 10 of LL, because there the usual frequency error estimation could be applied. Further, we noted an error of ± 0.2 for the evaluation of the HURST exponents in LU (page 13, last section). However, we did not make error estimates for the further analyses, for the following reason: the errors – statistical, systematic, and those caused by the changing environmental conditions of the screens at the individual stations – are unfortunately unknown and scatter widely. As a consequence, a correct error evaluation would by far exceed the scope of our papers. Looking at the literature on temperature analyses with DFA, one finds hardly any papers that attempt appropriate error estimates.

JC comment:  This post is a response to the comments on the previous three threads.  I have edited the post slightly for clarity. Guest posts reflect the opinions only of the guest posters.  The presence of a guest post at Climate Etc. reflects my judgement that the topic is of interest to the participants here, and does not imply any endorsement by me of the analyses/ideas presented in the post.

Moderation note:  this is a technical thread and comments will be moderated for relevance.
