
Two new papers vs. BEST

Guest Post by Lüdecke, Link, and Ewert

Our two papers [1], hereafter LU, and [2], hereafter LL, were published almost simultaneously with the release of the BEST papers. The basic objective of all of these papers is the same: to document reliably the surface temperature of the Earth from the beginning of the 19th century until the present.

LU analyzes the last 2000 years before present (BP), whereas LL examines the 20th century only. The BEST analysis covers the period 1800-2010. Our work uses quite different methods from BEST. The BEST global temperature curve is a patchwork of more than 35,000 mostly short temperature series. LU analyzes five of the longest available instrumental series and two proxies (a stalagmite and a tree-ring stack). LL examines 2249 unadjusted local surface temperature records. Further, both LU and LL use a new method [3], [4] that combines detrended fluctuation analysis (DFA), synthetic records, and Monte-Carlo simulation. With this method, the exceedance probability that an observed temperature change is natural is evaluated. Finally, LL derives the overall probability that the global warming of the 20th century was a natural 100-year fluctuation. The instrumental records used by LU and LL are monthly means, because the DFA requires a minimum of about 600 data points.

The 19th century

The database of instrumental temperatures used by LU consists of the following long-term series, each going back at least to the year 1791 AD: Hohenpeissenberg, Paris, Vienna, Munich, and Prague. During the 100-year period 1791-1890, all of these locations show an overall temperature decline of roughly the same magnitude and natural probability as their corresponding 20th century rise. Additional long-term instrumental records not analyzed by LU, such as Innsbruck, Kremsmünster, Stockholm, and Copenhagen, show consistently similar temperature declines. Finally, the 19th century cooling discussed by LU is confirmed by reconstructions [5]. However, generalizing the NH temperature decline to a global phenomenon is limited by the absence of appropriate instrumental temperature series for the SH. Note that the 19th century cooling is not present in the BEST curve.

The 20th century

LL takes the data from the GISS temperature pool, which contains about 7500 series in total. From this number, 2249 reliable continuous records of monthly means were selected: 1129 stations over the 100-year period 1906-2005, 427 stations over the 50-year period 1906-1955, and 693 stations over the 50-year period 1956-2005. As the selection criterion, no more than 10.5% voids are allowed in any record. This restriction ensures that the DFA analysis remains reliable.

Every record of the GISS pool satisfying this condition was selected. The GISS pool can be assumed to contain the largest number of long-term records of monthly means worldwide. LL used only unadjusted raw data. In addition, no homogenisation, smoothing, or gridding procedures are applied, except for linear interpolation to fill voids in the records. Figure 1 depicts the frequencies of the station latitudes and of the temperature changes for the 1129 records of 100 years length.

Fig. 1: Frequencies of station latitudes (left panel) and of the temperature changes ∆ of the linear regression lines (right panel).
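To illustrate the selection and gap-filling step described above, here is a minimal sketch. The pandas-based implementation and the variable names are our own assumptions for illustration, not the code used in LL:

```python
import numpy as np
import pandas as pd

MAX_VOID_FRACTION = 0.105   # no more than 10.5% missing monthly values allowed

def select_and_fill(record):
    """Keep a monthly record only if its fraction of voids is small enough,
    then fill the remaining voids by linear interpolation."""
    record = pd.Series(record, dtype=float)
    if record.isna().mean() > MAX_VOID_FRACTION:
        return None                                  # record rejected
    return record.interpolate(method="linear", limit_direction="both")

# Example: a 100-year monthly record (1200 values) with roughly 4% voids
rng = np.random.default_rng(0)
raw = rng.normal(10.0, 2.0, 1200)
raw[rng.choice(1200, size=50, replace=False)] = np.nan
filled = select_and_fill(raw)
print(filled.isna().sum())    # 0: record accepted and all voids filled
```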

The results from the first step of LL’s analysis are as follows:

  1. During the period 1906-2005, the 1129 stations of 100-year duration show a mean warming of 0.58 °C. About one quarter of all these stations show cooling. The mean reduces to 0.52 °C if only stations with fewer than 1000 inhabitants are included, which documents the urban heat island (UHI) effect. Further evidence of the UHI is given in Figure 2. Moreover, the mean warming reduces further to 0.41 °C if only stations below 800 m above sea level are included. Figure 3 depicts this effect, whose cause is not known. (A computational sketch of how these group means are formed follows this list.)
  2. As the left panel of Figure 1 demonstrates, the available stations are concentrated between 20° and 70° latitude. In particular, the station density is sparse in the SH. However, the warming is weaker in the SH than in the NH.
  3. In the period 1906-1955 the mean of all 125 SH stations is actually negative (see Table 1 and Table 2 in LL for group B5). As a consequence, the global warming over the first 50 years of the 20th century, taken as the mean of all local stations worldwide, would presumably be somewhat weaker if it were established from surface stations distributed with equal density over the Earth.
  4. A total of 1386 stations with no voids within the appropriate periods show a mean temperature change of -0.34 °C over 1998-2010 and of -0.15 °C over 2000-2010.
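The temperature change ∆ of each station is read off a linear regression line through its record, and a group mean is simply the average of ∆ over the stations in the chosen subset. The following sketch shows that step; the metadata fields `population` and `elevation_m` are hypothetical names for illustration, not fields of the GISS data:

```python
import numpy as np

def trend_delta(record):
    """Temperature change Delta of the record, read off the linear
    regression line: slope times (record length - 1)."""
    t = np.arange(record.size)
    slope, _ = np.polyfit(t, record, 1)
    return slope * (record.size - 1)

def group_mean_warming(records, metadata, keep):
    """Mean Delta over the stations whose metadata pass the filter `keep`."""
    deltas = [trend_delta(rec) for rec, meta in zip(records, metadata) if keep(meta)]
    return float(np.mean(deltas))

# Hypothetical usage with metadata fields 'population' and 'elevation_m':
# all_stations = group_mean_warming(records, metadata, lambda m: True)
# rural_only   = group_mean_warming(records, metadata, lambda m: m["population"] < 1000)
# low_only     = group_mean_warming(records, metadata, lambda m: m["elevation_m"] < 800)
```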

The results of items 1. and 4. are not in accordance with BEST.

Fig. 2: UHI in 1129 records of the 100-year period 1906-2005

Fig. 3: Warming due to increasing station elevation in records of the 100-year period 1906-2005

Probability analysis for the 20th century

Temperature records are persistent (long-term correlated). This is well known: a warm day is more likely to be followed by another warm day than by a cold day, and vice versa. Short-term persistence of weather states, on a time scale of days up to several weeks, is caused by general weather patterns and meteorological blocking situations. However, the causes of long-term persistence over many years and even several decades are largely unknown. Persistence, a purely natural phenomenon, is measured by the Hurst exponent α and must be strictly distinguished from external trends such as the UHI or warming by anthropogenic CO2.

Both autocorrelated real temperature records without external trends and autocorrelated synthetic temperature records, which can be generated by special algorithms, are denoted as ‘natural’. The main feature of autocorrelated natural records is that extremes arise which look like external trends. This poses a fundamental problem, because without further effort an external trend and an apparent ‘trend’ caused by persistence cannot be distinguished. Figure 4 depicts this effect.

Fig. 4: A synthetic purely autocorrelated (natural) record that nevertheless seems to be determined by external trends.
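The ‘special algorithms’ for generating natural synthetic records are not specified here; one common choice is Fourier filtering, in which Gaussian white noise is given a power-law spectrum S(f) ~ f^(-β) with β = 2α − 1. The following sketch uses that assumption and is not necessarily the algorithm used in LU/LL:

```python
import numpy as np

def synthetic_record(n, alpha, rng=None):
    """Generate a long-term correlated Gaussian record of length n whose
    fluctuation (Hurst) exponent is approximately alpha, by Fourier
    filtering: white noise is shaped to a spectrum S(f) ~ f**(-beta)
    with beta = 2*alpha - 1."""
    if rng is None:
        rng = np.random.default_rng()
    beta = 2.0 * alpha - 1.0
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                       # avoid division by zero at f = 0
    amplitudes = freqs ** (-beta / 2.0)
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    spectrum = amplitudes * np.exp(1j * phases)
    spectrum[0] = 0.0                         # zero-mean record
    record = np.fft.irfft(spectrum, n)
    return (record - record.mean()) / record.std()

# Example: a synthetic 'natural' record of 1200 monthly values with alpha = 0.65
x = synthetic_record(1200, 0.65, np.random.default_rng(1))
```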

The method of [3], [4] that tackles this problem is based on the assumption that an observed real record has two constituents: a natural part, which is governed by autocorrelation, and (possibly) an external trend. Next, the probability that an observed real record is ‘natural’ has to be determined. To this end, only two authoritative parameters are needed: its relative temperature change ∆/σ and its Hurst exponent α, where the DFA ensures that α is determined from the ‘natural’ part only. Here ∆ is the temperature difference of a linear regression line through the record, and σ is the standard deviation of the data around this line.
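Both parameters can be obtained from a single record as sketched below: ∆ and σ from an ordinary linear regression, and α from a textbook DFA implementation. The choice of window sizes and of detrending order 2 (DFA2) are our assumptions for illustration:

```python
import numpy as np

def delta_and_sigma(record):
    """Delta: temperature change of the linear regression line through the
    record; sigma: standard deviation of the data around that line."""
    t = np.arange(record.size)
    slope, intercept = np.polyfit(t, record, 1)
    residuals = record - (slope * t + intercept)
    return slope * (record.size - 1), residuals.std()

def dfa_alpha(record, min_win=10, n_windows=20, order=2):
    """Estimate the fluctuation (Hurst) exponent alpha by detrended
    fluctuation analysis; order=2 (DFA2) removes linear trends in the
    record (quadratic trends in the profile), so alpha reflects the
    'natural' part only."""
    x = np.asarray(record, dtype=float)
    profile = np.cumsum(x - x.mean())
    max_win = x.size // 4
    windows = np.unique(np.geomspace(min_win, max_win, n_windows).astype(int))
    flucts = []
    for s in windows:
        n_seg = profile.size // s
        segments = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        residual_var = [np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                        for seg in segments]
        flucts.append(np.sqrt(np.mean(residual_var)))
    alpha, _ = np.polyfit(np.log(windows), np.log(flucts), 1)
    return alpha
```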

The analysis yields the exceedance probability W for the occurrence of the value ∆/σ, or any stronger value, in a natural record with the same α as that of the observed real record evaluated by DFA. Next (for warming), one has to check whether the value of W lies below a defined confidence limit. If this is the case, the observed real record is judged to be determined by an external trend; otherwise it is assessed as ‘natural’. The method provides no information about the nature of the trend. In a final step, the overall natural probability of the stations in a group is evaluated from all W values. As a result, the probabilities of naturalness lie between 40% and 90%, depending on the station characteristics and the periods considered (1906-2005, 1906-1955, or 1956-2005).
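A minimal Monte-Carlo sketch of the exceedance probability W follows, reusing the `synthetic_record`, `dfa_alpha`, and `delta_and_sigma` helpers sketched above. The number of trials and the confidence limit of 2.5% are illustrative assumptions; note that [3], [4] also derive the distribution of natural trends directly:

```python
import numpy as np

def exceedance_probability(record, n_trials=2000, rng=None):
    """Monte-Carlo estimate of W: the probability that a purely natural
    record of the same length and the same alpha shows a relative trend
    Delta/sigma at least as strong as the observed (warming) one."""
    if rng is None:
        rng = np.random.default_rng()
    alpha = dfa_alpha(record)                 # alpha of the observed record
    delta, sigma = delta_and_sigma(record)
    observed = delta / sigma
    count = 0
    for _ in range(n_trials):
        synth = synthetic_record(record.size, alpha, rng)
        d, s = delta_and_sigma(synth)
        if d / s >= observed:
            count += 1
    return count / n_trials

# Hypothetical decision rule with a one-sided confidence limit of 2.5%:
# W = exceedance_probability(record)
# external_trend = W < 0.025      # otherwise the record is judged 'natural'
```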

It is stressed that, in general, the procedures applied to establish global records from local ones result in unrealistically small values of the standard deviation σ. This is particularly obvious from visual inspection of the BEST curve. Therefore, we assume that globally averaged records are in general not suitable for an autocorrelation analysis.
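The effect can be illustrated numerically: averaging many stations largely cancels their independent fluctuations, so σ of the mean curve shrinks roughly as 1/√N while a common trend survives, which inflates ∆/σ. The following toy example uses uncorrelated noise rather than real station data and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, n_months = 1000, 1200

# Each station: the same weak common trend plus its own independent noise
t = np.linspace(0.0, 1.0, n_months)
stations = 0.5 * t + rng.normal(0.0, 2.0, (n_stations, n_months))

def relative_trend(x):
    """Delta/sigma of a single record (trend from a linear regression line)."""
    idx = np.arange(x.size)
    slope, intercept = np.polyfit(idx, x, 1)
    residuals = x - (slope * idx + intercept)
    return slope * (x.size - 1) / residuals.std()

typical = np.mean([abs(relative_trend(s)) for s in stations[:50]])
averaged = abs(relative_trend(stations.mean(axis=0)))
print(f"typical single-station |Delta|/sigma: {typical:.2f}")   # small
print(f"'global' averaged curve |Delta|/sigma: {averaged:.2f}") # much larger, since sigma shrinks
```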

Conclusion

LL demonstrates that the 20th century’s global warming was predominantly a natural 100-year fluctuation. The remainder is caused by the UHI, the warming effect of increasing station elevation, changes to the screens and their environments in the 1970s, variations in the sun’s magnetic field that could influence the amount of clouds, warming caused by increasing anthropogenic CO2, and further unknown effects. However, the station density over the Earth is strongly irregular, which makes any global record, and also the results given by LL, disputable. The SH stations of the GISS data pool show less warming (or stronger cooling) than the NH ones. Since the available stations worldwide are concentrated in the NH, the real mean of the 20th century warming could be even somewhat smaller than LL have evaluated. Compared with BEST, LU and LL reveal differences in the following items: the 19th century cooling, which is absent in the BEST curve; the magnitude of the mean 20th century warming together with the UHI and elevation effects (item 1); and the cooling since 1998 (item 4).

[1] H.-J. Lüdecke, Long-Term Instrumental and Reconstructed Temperature Records Contradict Anthropogenic Global Warming, Energy & Environment, Vol. 22, No. 6 (2011).

[2] H.-J. Lüdecke, R. Link, and F.-K. Ewert, How Natural is the Recent Centennial Warming? An Analysis of 2249 Surface Temperature Records, International Journal of Modern Physics C, Vol. 22, No. 10 (2011).

[3] S. Lennartz and A. Bunde, Trend evaluation in records with long-term memory: Application to global warming, Geophys. Res. Lett. 36, L16706, doi: 10.1029/2009GL039516 (2009).

[4] S. Lennartz and A. Bunde, Distribution of natural trends in long-term correlated records: A scaling approach, Phys. Rev. E 84, 021129 (2011).

[5] T.J. Crowley et al., Causes of Climate Change Over the Past 1000 Years, Science 289, 270 (2000), doi: 10.1126/science.289.5477.270

Prof. Dr. H.-J. Lüdecke

Dr. R. Link

Prof. Dr. F.-K. Ewert
