by Richard Saumarez
A recent debate at Climate Etc. has involved autocorrelation and trends. Essentially, the argument is that the temperature signal is perturbed by a random signal, and the influence of this random fluctuation persists for an unknown length of time and so the temperature at any particular instant is influenced by its past history.
This is a presentation of some of the issues that were raised by the papers recently presented by Ludecke, Link and Ewert [see here, here, here and here].
Why does persistence produce trends?
It may not be intuitively obvious why trends should occur in persistence models. Figure 1 shows a resting point (stage 0) at zero. A random, normally distributed value is added to it and it moves to point 1.
Figure 1 An illustration of how persistence leads to trends. The upper trace shows how persistence alters the probability of the temperature returning to baseline, creating a trend. The bottom trace shows how a lack of persistence does not bias the probability of the next step.
The value at point 1 decays slowly (it has persistence), so that after a time interval it reaches point 1a. A second random value is then added to it. Notice that the probability of it returning to zero or below is the area of the distribution curve that lies below zero, i.e. very small. This illustrates why trends occur with persistence.
As the value increases, note that the rate of decay increases – compare the decay rate between points 1 and 1a with that between 5 and 5a. Therefore the observed value will remain above zero until there has been a run of negative disturbances that returns it there.
The lower trace shows a non-persistence model. In this case, the value has returned to zero before the next disturbance arrives and the observed values will have a mean of zero.
This is the basis of the argument that “persistence” will cause trends in temperature. If an impulse of radiation is stored as heat, it carries over so that at the next impulse the temperature will be slightly raised, and become raised still further, resulting in a temperature trend. The controversy is over how significant this effect is, and whether observed temperature trends can be explained on the basis of a simple heat reservoir and random changes in radiative fluxes.
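The mechanism can be illustrated with a short simulation. This is a sketch, not the model used in the post: a first-order autoregressive process stands in for “persistence” (each value decays slowly towards zero between shocks), compared against the same kind of shocks with no carry-over.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10000
shocks = rng.normal(0.0, 1.0, n)

# Persistent case: the value decays slowly (phi close to 1), so each
# new shock is added to what remains of the previous excursions.
phi = 0.95
persistent = np.zeros(n)
for i in range(1, n):
    persistent[i] = phi * persistent[i - 1] + shocks[i]

# Non-persistent case: the value has returned to zero before the next
# disturbance, so each sample is just the current shock.
non_persistent = shocks

# The persistent series wanders far from zero in long runs ("trends"),
# even though the driving noise is statistically identical.
print(np.std(persistent), np.std(non_persistent))
```

The persistent series has a much larger spread than the noise that drives it, which is exactly why runs that look like trends appear.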
What models of heat storage or persistence can we use?

There are two basic models. One is a power law model, in which the temperature decays by a power of time; this is often called the Hurst law, after Hurst, who defined it in terms of hydrology.
T = (Ht)^{g}
(JC formatting note: the g superscript should be gamma).
(t is time, H is a scaling parameter and g, the exponent, describes how the process varies with time – g is negative).
Using this idea, Figure 2 shows a model, consisting of a leaky reservoir, which is surrounded by a dam that is more porous at the top than at the bottom. If it is filled suddenly with water at a high temperature, water will escape rapidly from the top and as the level subsides, there will be a slower leak through the bottom levels. Thus the level will decay at a slower rate as the reservoir empties, not only because the pressure head is smaller, but also because the rate of transfer is reduced by passing through a smaller area of less porous material.
Figure 2 A model of a power law system with the response shown (black). The responses for g = 0.6-0.8 are shown in blue and the responses to the reservoir being filled to different levels are shown in red.
I have solved this model to give the decay curve, which corresponds to a gamma of 0.7. It is important to note that the response to an impulse of water entering the reservoir will decay differently depending on how full the reservoir is (red curves); this system is therefore nonlinear. When computing a response, the system can be treated as quasi-linear by assuming that the actual level of the water does not vary much from its mean level. This allows linear methods of processing to be applied. If, however, there is a wide range of levels, because the range of inputs is large compared to the volume of the system, the behaviour of the system must be solved explicitly through its governing differential equations. Whether one can really treat the temperature system as quasi-linear depends on measuring the inputs (if we knew what they were) and the responses of a series of temperature records, and determining the coherence between them.
The alternative is a multicompartment model with exponentially decaying compartments, as shown in figure 3. It is well known that temperature persistence does not conform to a simple, single exponential decay, but a system with a fast component and a large-volume slow component can produce much longer persistence. As shown in figure 3, this model produces a decay that is very similar to a power law decay and, if communication between the compartments can occur, the decay curve becomes almost indistinguishable from that of a power law model.
Figure 3 The response of a multicompartment model. The power law responses for g=0.7 and 0.8 are shown in blue. Note that a 3 compartment model has a very similar response to a power law model.
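The similarity is not accidental: a power law decay can be written exactly as a continuous mixture of exponentials (a standard Gamma-function identity), so a multicompartment model with a spread of decay rates can approximate it arbitrarily well. A sketch of this (my illustration, not the post’s actual model):

```python
import math
import numpy as np

g = 0.7  # power-law exponent (the post's gamma)

# Identity: t**-g = (1/Gamma(g)) * integral of lam**(g-1)*exp(-lam*t) dlam.
# Discretising the integral on a log grid of decay rates expresses the
# power law as a weighted sum of exponential "compartments".
lam = np.logspace(-6, 3, 300)            # compartment decay rates
dlog = np.log(lam[1]) - np.log(lam[0])   # log-spacing of the grid
w = lam**g * dlog / math.gamma(g)        # weight of each compartment

t = np.linspace(1.0, 100.0, 50)
multi_compartment = (w[None, :] * np.exp(-np.outer(t, lam))).sum(axis=1)
power_law = t**-g

rel_err = np.abs(multi_compartment - power_law) / power_law
print(rel_err.max())  # small over the whole range 1 <= t <= 100
```

With only three well-chosen compartments the match is rougher, as in figure 3, but still close enough to be hard to distinguish in noisy data.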
There is a very important assumption in the use of persistence models to calculate global temperature trends.

This assumption allows one to assume an “average model” for persistence and to drive it in order to produce a simulated temperature record.
Determination of the model and its relationship to trends
This was done in a number of steps:
1) Temperature records were extracted from the HADCRUT III, UTAH and GHCNM databases and all those at least 100 years long with less than 5% of the data missing were selected. The missing data was interpolated linearly across the gaps.
2) The yearly (“seasonal”) fluctuations were removed.
3) This data was used to generate an autocorrelation and signal power model.
4) A system response was created using either a biexponential or a Hurst model.
5) Gaussian noise was used to create sequences 1364 years long that have the same variance as the residual signals. A one-hundred-year record was randomly selected from each sequence, so that, as with measured temperature records, the record starts at a random time with respect to any trend in temperature. The amplitudes and durations of the trends were extracted from 100,000 sequences.
Figure 4
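The simulation loop in steps 4–5 can be sketched as follows. This is a simplified stand-in: a first-order autoregressive process replaces the fitted biexponential/Hurst response, the sequence count is reduced, and the trend is measured as a plain least-squares slope.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SEQ, SEQ_LEN, REC_LEN = 1000, 1364, 100  # far fewer sequences than the post's 100000

phi = 0.9  # stand-in persistence parameter for the AR(1) surrogate
trends = []
for _ in range(N_SEQ):
    # Step 5: drive the persistence model with Gaussian noise.
    noise = rng.normal(0.0, 1.0, SEQ_LEN)
    sim = np.zeros(SEQ_LEN)
    for i in range(1, SEQ_LEN):
        sim[i] = phi * sim[i - 1] + noise[i]
    # Select a 100-year record starting at a random time, so the window
    # begins at an arbitrary phase relative to any trend.
    start = rng.integers(0, SEQ_LEN - REC_LEN)
    record = sim[start:start + REC_LEN]
    # Measure the trend as the least-squares slope times the record length.
    slope = np.polyfit(np.arange(REC_LEN), record, 1)[0]
    trends.append(slope * REC_LEN)

trends = np.array(trends)
# With a purely random drive, positive and negative trends are equally
# likely, so the mean excursion should sit near zero.
print(trends.mean(), trends.std())
```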
Autocorrelation functions
The ACF is a measure of how a sample in the record is correlated to its neighbouring samples and gives a measure of the persistence as it demonstrates the effects of past samples in determining the current sample. In principle, a copy of the signal is made and is shifted in time by one sample. The correlation coefficient between the shifted copy and the original version determines the relationship between the signal and that at one sample apart. This is repeated for all possible shifts of the copy of signal and the correlation coefficient is plotted against the value of the shift (Figure 5). In practice this can be done efficiently through the discrete Fourier Transform of the signal.
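The shift-and-multiply definition and the FFT route give the same answer; a sketch, with synthetic noise standing in for a temperature record (variable names are my own):

```python
import numpy as np

def acf_fft(x, max_lag):
    """Autocorrelation via the FFT (Wiener-Khinchin theorem),
    zero-padded to twice the length to avoid circular wrap-around."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                      # zero-pad to 2n
    acov = np.fft.irfft(f * np.conj(f))[:max_lag + 1] / n
    return acov / acov[0]                          # lag 0 normalised to 1

rng = np.random.default_rng(1)
x = rng.normal(size=500)
xm = x - x.mean()

# Direct shift-and-multiply definition, for comparison.
direct = np.array([np.dot(xm[:500 - k], xm[k:]) for k in range(101)]) / 500
direct = direct / direct[0]

fast = acf_fft(x, 100)   # only ~1/5 of the record length, as in the text
print(np.allclose(fast, direct))
```

The FFT version scales as n log n rather than n², which matters once many long records are being processed.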

There are a number of problems in calculating the ACF. The first is that, as the signal copy is shifted and each sample is multiplied by the sample at the same position in the original signal, the samples at the end of the shifted signal “drop off” the end and so do not contribute to the correlation at that shift, making the estimate less reliable. Therefore only shifts of up to about 1/5th of the total signal length should be used, in our case about 20 years.
The method used by L and LLE is slightly different from the autocorrelation function, in that they calculate a cumulative sum of the temperature excursion and manipulate this sum as a function of time, fitting a second order polynomial as a smoothing and detrending step. They then derive a persistence factor a = 1 - g/2. Hence an alpha, which they quote, of, say, 0.6 corresponds to a g of 0.8.
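This cumulative-sum-plus-polynomial-detrending procedure is essentially detrended fluctuation analysis (DFA). A minimal sketch of my reading of the method, with second-order detrending as described and arbitrary window sizes:

```python
import numpy as np

def dfa(x, windows):
    """Detrended fluctuation analysis: cumulative-sum the series,
    split into windows, remove a polynomial fit from each window,
    and measure how the residual fluctuation grows with window size."""
    y = np.cumsum(x - np.mean(x))      # the "profile" (cumulative sum)
    flucts = []
    for w in windows:
        n_win = len(y) // w
        segs = y[:n_win * w].reshape(n_win, w)
        t = np.arange(w)
        res = []
        for seg in segs:
            # second-order polynomial detrending, as described above
            coeffs = np.polyfit(t, seg, 2)
            res.append(np.mean((seg - np.polyval(coeffs, t))**2))
        flucts.append(np.sqrt(np.mean(res)))
    # slope of log F(w) vs log w is the persistence exponent alpha
    return np.polyfit(np.log(windows), np.log(flucts), 1)[0]

rng = np.random.default_rng(2)
white = rng.normal(size=2**14)
alpha = dfa(white, [16, 32, 64, 128, 256, 512])
print(alpha)  # ~0.5 for uncorrelated noise; persistence pushes it higher
```

An alpha near 0.5 indicates no long-term persistence; values approaching 1 indicate strong persistence, consistent with the a = 1 - g/2 relation quoted above.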
Extracting the ACF from temperature records is quite difficult and may lead to artefacts.

The underlying temperature signal is dominated by the yearly variation, as can be seen from the amplitude spectrum shown in figure 6, and this has to be removed to estimate the long-term persistence. The method by which this is done can lead to quite different results for the ACF. One approach might be simply to low-pass filter the signal with a cut-off below 1 year⁻¹, as shown by the purple filter response in figure 6. This has a huge effect on the ACF, as shown by the purple broken line in figure 5. A selective band-pass filter can be used, but this appears to produce “ringing” in the short-term ACF and to alter its persistence in the tail. The method used here, which causes least disturbance to the ACF, is to interpolate across the spectral bands in question; the effect of this filter on random noise is shown as the green line in figure 5.
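A sketch of the interpolation approach on a synthetic monthly record — an illustration of the idea, not the filter actually used in the post, and the band widths are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n_months = 1200  # 100 years of monthly data
t = np.arange(n_months)
# synthetic record: a strong annual cycle plus broadband noise
x = 3.0 * np.sin(2 * np.pi * t / 12) + rng.normal(size=n_months)

f = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n_months, d=1.0)  # cycles per month

# Interpolate the amplitude spectrum across narrow bands around the
# seasonal peaks (1, 2, 3 cycles/year = k/12 cycles/month) instead of
# low-pass filtering, which wrecks the tail of the ACF.
for k in (1, 2, 3):
    band = np.where(np.abs(freqs - k / 12.0) < 0.005)[0]
    lo, hi = band[0] - 1, band[-1] + 1   # nearest bins outside the band
    for idx in band:
        frac = (idx - lo) / (hi - lo)
        mag = (1 - frac) * np.abs(f[lo]) + frac * np.abs(f[hi])
        f[idx] = mag * np.exp(1j * np.angle(f[idx]))  # keep the phase

deseason = np.fft.irfft(f, n_months)

# The annual line is gone but the broadband noise is untouched.
print(np.std(x), np.std(deseason))
```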

Figure 5 The mean ACF of the temperature records shown as blue squares, with a 2 compartment model (black) and a power law model (red) fitted to the data. The dashed purple line shows the effect of using a low pass filter to eliminate seasonal trends (i.e. disastrous!) and the green line shows that the seasonal detrending method has very little effect on the ACF.
Once the data has the yearly variation removed, it is normally distributed.
The yearly ACF values are plotted as blue squares in figure 5 and the best-fit power law (red, g = 0.875) and biexponential (black) models are shown. In this data set, the biexponential model appears to fit slightly better; however, when the log ACF is plotted against the log delay, it is clear that there are considerable errors involved and it is not possible to say how closely the data conforms to a power law model.
Figure 6 Mean amplitude spectrum of the temperature records with peaks at 1, 2 and 3 year⁻¹. The red line shows a selective band pass filter and the purple line a low pass filter designed to remove seasonal trends. Interpolation across these peaks gives the least artefact in the ACF.
Figure 7 Mean ACF plotted on a loglog plot. The vertical blue lines show the 95% limits of the distribution and the red dashed lines show a bicompartment model.
It is not possible with this data set to be certain of the model to describe persistence, but one can use a power law or multicompartment model with different assumed values and determine their abilities to produce trends.
Figure 8 Simulated record (blue) with low pass filtered version (black) and trends superimposed in red.
Figure 8 shows the residual temperature (light blue) using a power law model with g = 0.875. [This has been calculated using the ODE, not through the assumption of linearity and convolution.] The magnitude of the disturbance has been drawn from a distribution such that the power in the model signal matches that of the observed, seasonally detrended signals. This is repeated 100,000 times to give a distribution of the expected residual signals, and the calculation has also been repeated for a gamma of 0.7.
The global temperature signal is considered to be the sum of a number of temperature signals drawn from stations that are perturbed by a highly correlated disturbance. Since we have only one long-term temperature record, we cannot say anything about the statistics of this signal. We can, however, determine the ACF of the simulated signals, and these show considerable variability, as shown in figure 9. The mean ACF and its 95% limits are shown for a power law model with g = 0.7 (red) and for a 2 compartment model which approximates g = 0.9. Irrespective of how persistence is extracted from these signals, there is a considerable range of g fitted to the signal, and the power law and 2 compartment models are indistinguishable given a single temperature record. The 95% limits for a true g = 0.7 are 0.58-0.87.

Figure 9 Mean ACFs and their 95% error limits for a power law model (red, g = 0.7) and a 2 compartment model (black) with an approximate g of 0.9.
Estimation of trends in the simulated signal.

Figure 10
The black line in Figure 8 shows a low-pass filtered version of the signal with trend lines plotted in red. There are clearly shorter-term trends within the signal and, if one decomposes a signal into very short-term trends, it is not possible to distinguish a long-term trend. The approach used here is to low-pass filter the signal heavily, so as to eliminate the rapid fluctuations, and to use the maxima and minima of this signal to determine the ends of trends. The trend line is then fitted to the data between these intervals.
This is shown more clearly in figure 10, where the low pass filtered signal is shown, and the trends extracted, using a filter with cut-off points at 1/30 years (solid red line) and at 1/100 years (dotted red line). In general, the magnitude of the trend is smaller as longer trends are identified, and there will be more variability around the trend.
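The extraction step might look like the following sketch, in which a moving average stands in for the post’s low-pass filter and a random walk stands in for a simulated record:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
record = np.cumsum(rng.normal(size=n)) * 0.05  # stand-in simulated record

# Heavy low-pass filtering (here a long moving average) to remove
# the rapid fluctuations.
win = 30
smooth = np.convolve(record, np.ones(win) / win, mode="same")

# Maxima and minima of the smoothed signal mark the ends of trends.
mid = smooth[1:-1]
is_max = (mid > smooth[:-2]) & (mid > smooth[2:])
is_min = (mid < smooth[:-2]) & (mid < smooth[2:])
turning = np.where(is_max | is_min)[0] + 1

# Fit a trend line to the raw record between successive turning points.
bounds = np.concatenate(([0], turning, [n - 1]))
trends = []
for a, b in zip(bounds[:-1], bounds[1:]):
    if b - a < 2:
        continue
    slope = np.polyfit(np.arange(a, b + 1), record[a:b + 1], 1)[0]
    trends.append((a, b, slope))

print(len(trends))  # number of trend segments identified
```

A heavier filter (longer `win`) yields fewer, longer trend segments, which is the trade-off described above.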
Figure 11 Histograms of trend lengths and excursions for power law processes with (A) g = 0.9 (upper) and (B) g = 0.7 (lower). The trends > 100 years are shaded red and the region between 60 and 100 years is also shaded.
Using this approach, the distribution of trends for power law gammas of 0.9 and 0.7 (i.e. a = 0.55 and 0.65) can be calculated; these are shown in figure 11. According to these calculations, the probability of a 100-year trend in either direction with g = 0.9 is 0.02, with a mean value of 0.27 °C. When g = 0.7, the probability of a 100-year trend is 0.09 with a mean value of 0.4 °C, while the probability of a >100-year trend of >0.7 °C is 0.02.
Conclusions
The purpose of the analysis presented by Ludecke, Link and Ewert is to characterise the Earth’s temperature as a system, in particular the lag in temperature with energy changes.
Any system that involves lag can produce trends; whether the lag is imposed via a Hurst process, a multicompartment system or anything else is a secondary consideration. However, characterising the process sufficiently accurately to make predictions about the magnitudes of trends is another matter. The data available are strictly limited and one cannot easily distinguish between the different physical models that may underlie the trends. Satellite temperature and radiation flux data will eventually lead to much better temperature series, but these data would be better incorporated into physical models that include the mechanisms of energy storage, which would render the approach taken here unnecessary.
One of the major problems in this analysis is that temperature signals at different points on the Earth are not independent. If heat is stored in the ocean and transferred by currents to another point, or by some other mechanism, the temperature rise at this point is not an “external trend” as stated, which has to be eliminated, but is an important part of the mechanism of the creation of regional trends within the system as a whole.
Figure 12 A model of energy transfer between different regions with persistence
The addition of energy into a region, as shown in figure 12, will give a false value for the persistence. However, a system similar to that in figure 12 is analysable, in principle, provided the system is linear [1]. If one is using a power law representation of the persistence, the problem is far more difficult. Essentially, one writes the energy flux equation for each component in the system as a set of coupled differential equations, or in the complex frequency domain, and solves them to obtain the parameters of the persistence in each element. This can be done via the ACF of the temperature at each point. In practice, this is practically impossible with the data available: the solution is highly unstable and very sensitive to small errors in the data, even for a restricted subset of the problem. LL&E have attempted to address this problem by detrending the data, which is an arbitrary approach that smoothes the data and may not necessarily give a better value for the calculated persistence.
A major problem is the quality of the data on which a model is constructed. As I have pointed out, using correlation methods to determine persistence requires long-term records, and the number of “clean” long-term records, especially in the tropics, is strictly limited; to my mind, global conclusions based on these data should be treated sceptically.

[1] See Bendat JS & Piersol AG. “Random Data: Measurement and Analysis Techniques”. Chapter 7 – Multiple Input – Output Models, Conditional Spectral Density Functions.
Bio note: As Professor Curry asked me to give some biographical detail, I should explain that after medical school, I did a PhD in biomedical engineering, which before BME became an academic heavy industry, was in an electrical engineering department.
Richard Saumarez’ previous posts at Climate Etc.:
 Climate, control theory, feedback: does it make sense?
 Does the aliasing beast feed the uncertainty monster?
JC note: I received this post via email from RS. I posted this at Climate Etc. based on the interest generated by his previous posts and also by the Ludecke papers, and the fact that this is a topic I want to learn more about. This is a guest post, which I am very appreciative of, but the content reflects the perspective of RS and not myself.
Persistence in a complex system, yes!
Persistence in a system where the measured parameter is (Tmax+Tmin)/2. NO!
The damned energy in your model has radiated into space.
Look for persistence of driftwood on a tidal beach on the (High Tide + Low Tide)/2
Absolute garbage.
Not absolute garbage: The ratio of garbage to gems is high in computer model predictions based on statistical analysis of multivariable systems.
Those doing the analysis tend to lose contact with reality: E.g., Large numbers of solar physicists used the same helioseismology data from the Global Oscillation Network Group (GONG) and almost unanimously concluded: Helioseismology data support the standard solar model of a giant ball of hydrogen.
One solar physicist, Dr. Carl Rouse*, did his own analysis of the same data and reaffirmed his earlier report that the Sun has a “small, highZ, ironlike core” [Astronomy and Astrophysics, 149, 6572 (1985)].
http://articles.adsabs.harvard.edu/full/1985A%26A…149…65R
Although hundreds of precise analysis on material in the solar wind, solar flares, solar photosphere, solar emissions, meteorites, the Moon and the planets [http://www.omatumr.com/abstracts/gong2002.pdf] suggested Dr. Rouse was correct, his work was mostly ignored in the physics community.
*http://www.omatumr.com/Photographs/Carl_Rouse_desc.htm
Link correction:
http://articles.adsabs.harvard.edu/full/1985A%26A…149…65R
Or that link works on reference #95 here:
Is it so so difficult to be polite?
I assume your last sentence refers to yourself?
Some clarifications—Are you assuming a linear system?
What about time and signal amplitude switching points and memory effects?
No, the Power law is a nonlinear system and is solved through its ordinary differential equations. For small fluctuations, it may be linearisable. A multicompartment model is of course linear and is solved via a convolution. “Memory effects”, I assume, means lags which are inherent in both models. Changes in inputs are drivers of the model and are derived from real data.
“One of the major problems in this analysis is that temperature signals at different points on the Earth are not independent. If heat is stored in the ocean and transferred by currents to another point, or by some other mechanism, the temperature rise at this point is not an “external trend” as stated, which has to be eliminated, but is an important part of the mechanism of the creation of regional trends within the system as a whole.”
The use of a global temperature statistic for determination of trends has never been very convincing to me because climate IMO is essentially a regional phenomenon.
Richard
Thanks for your explorations.
For a thorough test, compare the statistics of your method with the Hurst-Kolmogorov dynamics quantified by Demetris Koutsoyiannis et al. at ITIA (and their work on climate and persistence).
Markonis, Y., D. Koutsoyiannis, and N. Mamassis, Orbital climate theory and Hurst-Kolmogorov dynamics, 11th International Meeting on Statistical Climatology, Edinburgh, International Meetings on Statistical Climatology, University of Edinburgh, 2010.
Koutsoyiannis, D., Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011.
Koutsoyiannis, D., and A. Montanari, Long term persistence and uncertainty on the long term, European Geosciences Union General Assembly 2007, Geophysical Research Abstracts, Vol. 9, Vienna, 05619, European Geosciences Union, 2007.
I certainly agree with the last quotations. I am not denying the presence of lags in the temperature record, but I would question whether a) to simply say that it follows a power law dynamics is a sensible (or resolvable) model, and, b) can one draw robust conclusions from the available data?
RC
Can a comparison of temperature lags from solar input provide useful means to validate models?
See David Stockwell’s Solar Accumulation theory with links at my previous post.
David, thanks for all these references
Yes, David, your comments and the links you provide are excellent.
I do not claim to understand computer model predictions based on statistical analysis of multivariable systems.
However, there is considerable evidence of unreliable computer model predictions based on statistical analysis of multivariable systems that undermines confidence in the climatologists, economists, politicians and solar physicists who ignore experimental observations that falsified their predictions.
Autocorrelation is an amazingly robust process that does not require missing datapoints to be filled in.
It is best to run the autocorrelation on the finest sampling available which would be the raw monthly values and then filter out the periodicity of the annual seasonal variation in the autocorrelation itself.
This way the autocorrelation for 100 years will consist of 1201 points instead of 101 points, preserving subtleties that would otherwise be lost in the averaging process used to remove the seasonal variation.
Fourier analysis of the autocorrelation will show just one fundamental periodicity of about 65 years corresponding to the warming and cooling cycles represented by the warming from 1910 to 1942 and the cooling from 1942 to 1975.
If you use the full HadCRUT3 dataset back to 1856 the warming to 1880 and the cooling to 1910 will provide a second cycle to this 65 year periodicity.
This should raise red flags about the latest period of warming from 1975 to 1998, which only lasted 23 years instead of the 32 of the previous warming cycle from 1910 to 1942. This break in the periodicity of the cycle is in all likelihood related to solar cycle 24, which is mimicking the start of the Dalton Minimum that brought an extension of the Little Ice Age in the early 1800s.
Fourier analysis is the perfect tool to expose the failure of the climate models projections of global temperature.
Hansen produced temperature projections for three scenarios back in 1988 giving us two decades of data to work with.
A cross correlation of each of Hansen’s three scenarios with Hansen’s GISS global temperature data for the past two decades should tell us which of the scenarios most closely matches GISS temperatures in a quantitative fashion with the correlation coefficient “R”
Since we are emitting about 10% more CO2 than Hansen used for Scenario “A”, if the models are valid there should be a higher correlation coefficient for scenario “A” than for “C”, which assumed an immediate reduction to the 22 Gt/year level of 1988, considering that the current level is now in excess of 33 Gt/year (2010 value).
If there is a higher correlation coefficient for scenario “C”, it is incontrovertible proof that the premise for the climate model global temperature projection is wrong.
The question is why hasn’t this been done by Hansen considering that it his own GISS temperature data that would be used? Is it because of the obvious outcome considering how close the global temperature matches scenario “C”!!
Thanks, Richard. Your posting increases my conviction that the human brain is too susceptible to selfdeception to use the output from modern computers discriminately (wisely).
Very interesting. I would like to see this notion of persistence applied to some region of Earth’s climate system with a bit more thermal inertia, and therefore less prone to the shortterm noise of natural variability– maybe like ocean heat content.
I would like to do that as well. I found the ARGO data a while back and started to do some autocorrelations and temporal crosscorrelations between the surface and subsurface layers to estimate the effective thermal diffusion coefficient and perhaps the average optical depth for sunlight.
That figure was as far as I got when I realized this would become a massive datacrunching exercise. If someone wants to help out with this, maybe we can join forces. I am convinced there is enough data to get some meaningful results. If someone has done this already, that would also be good to know.
WebHubTelescope That figure was as far as I got when I realized this would become a massive datacrunching exercise. If someone wants to help out with this, maybe we can join forces.
What software are you using and what is your operating system? How much computing power do you have, how much RAM and how big are your disks?
This is a massive data crunching exercise. I probably cannot help, but may be able to. I have 2 quad-core AMD processors, 32 MB of RAM and 3 large hard drives. I have a dual-boot computer: Windows XP Pro X64 and Suse Linux. Odds are that I can’t help, but it is worth a discussion.
Web,
I don’t get much from the figure. I might be willing to help. Can you give me an idea of the dimensions of the data and how you would approach it?
The problem right now is more about a strategy for parsing the files and eliminating the missing data.
Tropical Atmosphere Ocean Project
Maybe the actual CPU processing shouldn’t be too bad, more of systematically going through a sufficient number of data sets.
http://www.pmel.noaa.gov/tao/data_deliv/deliv.html
There are 8 depths plus the sea surface temperature, delivered with a temporal resolution of your choosing. The idea is to do crosscorrelations of the temperatures between the various depths and try to deconstruct the delay lags of temperature with depth. Each depth difference will have a computed time constant from the crosscorrelation and then from this can estimate the effective diffusion coefficient. Tricky because the solar will penetrate so that the initial interface is smeared out, perhaps to the optical depth. I have a way of compensating for this I believe. May be able to detect differences between day and night.
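The lag-extraction step described above can be sketched like this; the series here are synthetic (smoothed noise with an imposed delay standing in for SST and a subsurface record), and the names and parameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n, true_lag = 2000, 15  # hypothetical lag between depths, in samples

# A slowly varying driver: boxcar-smoothed noise standing in for SST.
drive = np.convolve(rng.normal(size=n + true_lag),
                    np.ones(20) / 20, mode="same")
surface = drive[true_lag:true_lag + n]
# The deeper series is an attenuated, delayed copy plus sensor noise.
deep = 0.5 * drive[:n] + 0.05 * rng.normal(size=n)

# Cross-correlate and locate the peak: its offset from the centre of
# the full correlation is the estimated lag of 'deep' behind 'surface'.
c = np.correlate(deep - deep.mean(), surface - surface.mean(), mode="full")
est_lag = int(np.argmax(c)) - (n - 1)

print(est_lag)  # close to true_lag
```

Each pair of depths would yield one such lag, from which an effective diffusion coefficient could then be estimated.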
This is a sample data set:
seriously this doesn’t look too bad. those ascii files are not long. did you download them already? what is the missing data you’re looking to eliminate?
OK I looked at it a little more. Different sites have different amounts of data and it looks hard to tell what the maximum time resolution is without going through a bunch of iterations. for instance it only looked like maybe 2 or 3 pacific sites had 1 minute data.
how about this – what data did you specifically download and what if any additional data do you think it would be worthwhile to get? (#sites, type of sites, time increment)
I am looking at one site in the middle. If you don’t go back too far, then there aren’t as many missing data points. I am going after 10 minute resolution and about an 8 year interval. I will experiment on this one for evaluation purposes.
Did some crosscorrelations of SST with 10 different subsurface levels. The correlation falls off as expected after 20 meters down and no correlations below 180 meters with a lag of 2 months.
Couple of interesting books.
The nearsurface layer of the ocean http://books.google.com/books?id=4p6gNsSsvIC
Atmosphereocean interactions http://books.google.com/books?id=ylOAAAAMAAJ
ok if you think there is something worthwhile here and you want help crunching numbers or whatever, post it on a technical thread somewhere in the future. your last two posts sound like you are moving on for now, so I am going to stop checking here. periodically i get interested in crunching climate numbers, but i usually stop when i figure out it’s been done or being done.
The data set has idiosyncrasies in that the number of depth levels varies over the years, but that is OK. Ten-minute intervals are adequate resolution and correlation windows of 2 months should be enough to extract the lags. I will have a more comprehensive analysis on my blog.
I tend to agree with Doc and Peter. The Northern Hemisphere tends to be driving the GMT. Regional reconstructions there seem to be showing LTP from volcanic activity. Mann et al.’s recent paper mentions that volcanic impact was underestimated in tree ring proxies http://www.meteo.psu.edu/~mann/shared/articles/MFRNatureGeosciAdvance12.pdf
If I am not mistaken, that appears to place the little ice age approximately 0.4 degrees cooler than estimated in 18141816, which puts a little more emphasis on the rise from that point. So there should be about a 200 year trend in the temperature data, if the temperature data is representative of the true global mean. Since the measured warming attributed to increased CO2 is predominately northern hemisphere, the volcanic cooling was predominately northern hemisphere, I would think the regional impacts seem to not be properly weighted in the GMT average of (Tmax + Tmin)/2.
RE Figure 2. The time it takes is a mixture of distances (L) and rates (R). Then the distribution of times is essentially T = L / R. This is what is referred to as a ratio distribution. Ratio distributions are important with respect to propagation of uncertainty and yes it can lead to a power law. If the variants are each Gaussian, then this is a cumulative distribution function of power 1, and it will lead to a similar curve that you show in Fig. 2.
The behavior is well known in terms of breakthrough curves for monitoring tracer dispersion in a watershed. The tracer material will take different paths and travel at different speeds leading to a similar curve. This stuff is much easier to explain in terms of disorder in the diffusion, convection, and geometrical parameters than by invoking some hokey HurstKolmogorov model.
For instance, if it is a diffusive model, it will be closer to a 1/sqrt(time) or 1/t^0.5 falloff, and if it is an advection model, say driven by a gravity head, then the falloff will be closer to 1/t. Since your exponent is between 0.5 and 1, it is likely that both diffusion and advection are involved and we can say it is a dispersed flow.
I am going through this because I am not sure what you are getting at. Are you trying to describe propagation of uncertainty?
The model in figure2 is an example, as is that in figure 3.
Where I am going is as follows:
1) What is a reasonable model to describe long term serial correlation observed in temperature? Can you distinguish between a multicompartment model and a power law model on the basis of available data? I doubt it.
2) Are there methodological problems in parameter extraction from the available temperature data? Yes, particularly in view of the available data.
3) Can “persistence” per se, account for the change in temperature over the last century? I doubt it. Can it account for relatively short term changes in temperature? Possibly.
4) Is this a sensible way to look at temperature trends? I doubt it.
Well of course if you don’t know much about the system there could be degenerate solutions or ambiguity in the problem description, but the way the problems were described the solutions seem very straightforward. These are setup like college physics lab problems, so the strawman element escaped me. Like I said, it has the elements of uncertainty propagation and that’s the first thing one learns in a physics lab course, such as in a gravity timeofflight experiment.
Everyone knows when the surf is up… so what’s the big deal? The tides & temps are always right on man.
http://www.surfline.com/surfreport/steamerlanecentralcalifornia_4188/
Hodads too?
One of the issues with using autocorrelation versus the multiscale variance that Tsonis et al have used for temperature time-series synchronization is that instead of showing the long-scale correlations as a damped tail, it places them in a rising trend. In this way it becomes much easier to see (1) where any significant synchronizations exist and (2) the maximum variance allowed in the system.
As an alternative to multiscale variance, Ludecke used something called Detrended Fluctuation Analysis (DFA), which is not that much different from the basic variance approach other than it removes trends (but why remove trends?).
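For readers unfamiliar with DFA, here is a minimal sketch of the first-order algorithm (integrate the series, linearly detrend within windows of each size, fit the log-log slope of the fluctuation function); the white-noise input and window sizes are illustrative only:

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order Detrended Fluctuation Analysis. White noise gives
    an exponent alpha ~ 0.5; persistent series give alpha > 0.5."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fluct = []
    for n in scales:
        n_win = len(y) // n
        f2 = 0.0
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        fluct.append(np.sqrt(f2 / n_win))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(0)
white = rng.standard_normal(20000)
scales = np.array([16, 32, 64, 128, 256])
alpha = dfa_exponent(white, scales)        # ~0.5 for white noise
```

The detrending step is exactly the point at issue in the thread: it removes any trend, "artificial" or physical, within each window before the fluctuation is measured.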
I have experimented with the variance technique and do find that it works better than autocorrelation in quickly picking out the features; for example, it finds the long term variance limits for the Vostok data as well as the interglacial periods:
http://theoilconundrum.blogspot.com/2011/11/multiscalevarianceanalysisand.html
But then again for something like wind speed data, autocorrelation works very well in discovering interesting synchronizations. This plot is an autocorrelation from almost two years of wind speed data at one MET site, with data collected at 5 minute intervals:
One can easily pick out the daily correlations as well as some odd harmonics. However, if one does a Fourier analysis on this data, it is very difficult to isolate the periodicities at the daily level. On the power spectrum from the raw data it looks like a blip riding on a 1/f^2 falloff.
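The same effect can be sketched on synthetic data (not the MET record): a daily sinusoid buried in smoothed noise, with the sample autocorrelation recovering the one-day lag cleanly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 288 * 30                  # 30 days of 5-minute samples (288 per day)
t = np.arange(n)
daily = np.sin(2 * np.pi * t / 288.0)                        # daily cycle
smooth = np.convolve(rng.standard_normal(n), np.ones(12) / 12, mode="same")
x = daily + 3.0 * smooth                                     # cycle buried in noise

def acf(x, max_lag):
    """Sample autocorrelation out to max_lag."""
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c[:max_lag + 1] / c[0]

r = acf(x, 400)
# the autocorrelation peaks again near one day (lag 288)
peak_lag = 200 + int(np.argmax(r[200:401]))
```

The periodic component survives at lag 288 while the noise correlation has long since died away, which is why the daily structure is so much easier to see here than in a raw power spectrum.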
The best bet is to use all the techniques at your disposal and hope that one of them will reveal the salient features.
A problem is that people want to apply familiar, ready-made techniques (as though shopping from a textbook table of contents) rather than work from fundamental concepts to develop superior metrics for any highly specific context. There are infinitely many possible techniques. Some of them need to be developed from scratch.
Why remove trends? or What do the different trends mean?
If you detrend the new BEST 1800-start data you find a break point around 1900, which appears to be reduced volcanic suppression. If you detrend starting in 1816, the break point changes, and it all looks like recovery from a major volcanic suppression. If you use the longer term proxy reconstructions prior to Mann 2012, the 1910 break point appears again. But using the new Mann proxy corrected for volcanic forcing, 1816 appears as the break point, indicating recovery from volcanic suppression.
Then if you detrend both Best and the proxies, the shorter term trends tend to have an entry slope greater than the exit slope, which is where I think the information really is. If the entry and exit are the same, the system is pseudo stable, if the entry is greater than the exit, that is long term recovery, if the exit is greater than the entry, that is long term decline.
So the best I can tell, the anthropogenic impact is the change in the ratio of the entry versus exit slopes. It is there BTW, just smaller than estimated and more predominant in the northern hemisphere.
So what would be a good method to quantify the change in the rate of change regionally?
The essential issue is that long-term trends are possible low-frequency components. When these get filtered out, or “detrended”, they impact the analysis of the power spectrum. In other words, no one can guarantee that the detrended parts were not an important part of the mechanism behind the time series, since power laws impact all frequencies in the spectrum.
Bottom line is that if detrending is used to compensate for calibration drift, it is OK; otherwise it may throw away important information.
I quite agree with WHT. I have made this point in the conclusion because the trend is either a property of persistence or of energy flowing in from another region. If it is the latter, detrending is not analytically correct but needs to be accounted for in a spatially distributed model. (A pig of a problem.) Incidentally, in the simulations, there is no need to detrend the data because we know there are no “artificial” trends. The variability of the ACF or the fitted Hurst exponent is still very large.
The point is that LL&E started a priori by stating that temperature records showed persistence and therefore power law dynamics applied. All I have done is show two very simple examples that are indistinguishable using real data. However, if you start with the assumption of power law dynamics, then you are assuming physically that there is a simple underlying model as I have shown.
I do not agree that you should use all the techniques at your disposal. I think that you should start with a defined model and then attempt to see if the data fits that model using appropriate techniques.
First of all I consider Hurst-Kolmogorov as not a real model but a set of heuristics corresponding to a notion of fractal dimension.
When I looked at LL&E’s Vienna time series, all I saw was uncorrelated white noise. Yet LL&E were able to pick out a Hurst exponent of about 0.65. I would call them on this because they obviously found what they were intentionally looking for. This is confirmation bias brought on by not checking the data with enough different analysis techniques.
Somebody else can contradict me but that would actually require some effort to download the data and run through the analysis as a second opinion. That is why I am skeptical of the skeptics. They can go all Heartland on us at any time.
@WHT
I don’t want to sound argumentative, but isn’t that what I have done?
I have used the series graphed in LL&E and others, and used several methods to estimate the Hurst exponent (I haven’t included it in this post, but I also fitted the exponent directly to the time series itself as a check). In doing so I have paid some attention to details that can derail the calculation of “persistence”.
I have then calculated a series of temperature profiles using the observed correlation structure and observed variances and measured the distribution of trends in these profiles.
I was simply wondering about the exponent, and you say you have calculated it for Ludecke’s data but didn’t post the numbers. I would say the correlation is flat at all scales, indicative of white noise, and the only uptick I see corresponds to recent temperature trends. I have that graph for historical Vienna on the link I gave earlier.
I think we are square on this now.
I think that LL&E have underestimated the Hurst exponent. I can detect some persistence in the temperature records but the gamma is around 0.9. For this value of persistence, the probability of substantial 100-year trends is small.
The temperature fluctuations are white noise in the data that Ludecke used, except for a detectable uptick for current warming.
What again is the Hurst exponent of white noise? It is 0.5, the baseline for no persistence, so an estimate only slightly above that is thin evidence. Ludecke and his buddies have been chasing phantoms.
You got me with hydrology but does anyone else talk about autocorrelation and climate? I use it in the sense of a generic property of complex dynamical systems – and a potential key to predicting the onset of criticality.
‘There is an extremely cool paper in this week’s Nature by Scheffer and colleagues. I’m too busy right now to write much about it, but I wanted to mention it, even if only briefly. The thing that I find so remarkable about this paper is that it’s really not the sort of thing that I usually like. The paper essentially argues that there are certain generic features of many systems as they move toward catastrophic change. The paper discusses epileptic seizures, asthma attacks, market collapses, abrupt shifts in oceanic circulation and climate, and ecological catastrophes such as sudden shifts in rangelands, or crashes of fish or wildlife populations. At first, it sounds like the vaguely mystical ideas about transcendent complexity, financial physics, etc. But really, there are a number of very sensible observations about dynamical systems and a convincing argument that these features will be commonly seen in real complex systems.
The basic idea is that there are a number of harbingers of catastrophic change in time series of certain complex systems. The model the authors use is the fold catastrophe model, where there is an attractor that folds back on itself like a sideways “N”. As one gets close to a catastrophic bifurcation, a very straightforward analysis shows that the rate of return to the attractor decreases. As the authors note, one rarely has the luxury of measuring rates of return to equilibria in real systems but, fortunately, there are relatively easily measured consequences of this slowdown of rates of return to the attractor. They show in what I think is an especially lucid manner how the correlations between consecutive observations in a time series will increase as one approaches one of these catastrophic bifurcation points. This increased correlation has the effect of increasing the variance.’
http://monkeysuncle.stanford.edu/?p=505
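The slowing-down idea in the quoted passage can be sketched with the simplest possible surrogate, an AR(1) process whose coefficient moves toward 1 as a bifurcation approaches; both early-warning indicators the quote mentions (lag-1 correlation and variance) rise together. Parameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1_series(phi, n=5000):
    """AR(1): x[t] = phi * x[t-1] + noise. A phi near 1 mimics the
    slow return to the attractor near a catastrophic bifurcation."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

far = ar1_series(0.3)     # far from the bifurcation: fast return, low memory
near = ar1_series(0.95)   # near it: slow return, high memory

# Both harbingers from the Scheffer argument rise together
rising_corr = lag1_autocorr(near) > lag1_autocorr(far)
rising_var = near.var() > far.var()
```

This is only the caricature of the mechanism, but it shows why increased consecutive-sample correlation and increased variance arrive as a pair when the return rate slows.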
The leaky well model is an idealised groundwater aquifer. The flow out of the aquifer and into rivers between rain events is a function of the length of time since it last rained. There is an exponential decrease over time, as you have shown. In real world hydrology it is called a baseflow recession. Baseflow is the flow in rivers between rain events – and hence, I suppose, Hurst and the Nile River.
To model climate you would have to assume that ‘porosity’ varies as both the radiative and albedo terms change chaotically in the simple global energy budget. The situation is very different to a baseflow regression where the outflow is dependent only on the storage in the aquifer – albeit in a relationship that is aquifer specific.
Having said that – there must be persistence in climate. Looking at the simple energy budget: Energy in – Energy out = dS/dt, where S is global energy storage.
People seem to think in terms of the influence of events such as volcanoes disappearing when the energy flux (at TOA) effects dissipate as atmospheric sulphates return to pre-volcano levels. But in the meantime there is energy lost through reflected sunlight that is accounted for in the total heat content of the planet. This takes time to replenish, just as it takes time to dissipate excess heat. So the state of the planet at any time reflects the net of inputs and outputs over time, but with thermal inertia.
Scheffer et al 2009 – https://groups.nceas.ucsb.edu/sustainabilityscience/weeklysessions/session10201311.01.2010emergentpropertiesofcoupledhumanenvironmentsystems/supplementalreadingsfrommoderatordiscussantjimheffernan/Scheffer%20et%20al%202009%20early%20warnings%20of%20critical%20transitions.pdf/view
Chief, thanks much for this reference
The Scheffer work is extremely interesting. Let me know if you ever have time to do a guest post on this
As a brilliant statistician once warned me, abstract math is an infinitely large world in which one can get lost on any isolated, stimulating branch that has little or nothing to do with anything one might study in real world work.
One of his colleagues with good intuition about human nature emphasized that an abstract idea need not have anything to do with reality in order to gain popular traction; it need only be cute. This insight, gained from experience editing technical journals, was offered with a grin. It was both a marketing tip and a narrativecrafting tip.
=
What leaped out at me when I read the article to which “chief” linked?
I counted over 50 uses of the word “may”.
=
That’s not a good sign of feet on solid ground; quite the contrary, that’s a good indicator of dreamy abstract cuteness.
Sober suggestion for sensible folks to consider: We should take a very hard & serious look at cutting some of the streams funding such hopeless confusion.
It is a review article – in a field at the boundary of the known that has tremendous importance to humanity. Your nonsense is again noted, Paul.
Very well “chief” – since your bad manners persist (without moderation) I’ll have to call you for naivety &/or deception.
I come to the table with hard data:
1. ftp://ftp.iers.org/products/eop/longterm/c04_08/iau2000/eopc04_08_IAU2000.62now
2. ftp://ftp.iers.org/products/geofluids/atmosphere/aam/GGFC2010/AER/
I point out a seminal paper:
Le Mouël, J.-L.; Blanter, E.; Shnirman, M.; & Courtillot, V. (2010). Solar forcing of the semiannual variation of length-of-day. Geophysical Research Letters 37, L15307. doi:10.1029/2010GL043185.
I demonstrate that the results are robust across methods & extensible to AAM:
Despite this, I see the Svalgaard/Lean “uniform 0.1K” solar-terrestrial-climate narrative being pushed (or at least not being recognized as objectively rejectable) despite its strict inadmissibility under the data:
https://judithcurry.com/2012/02/09/aq/
Then I see a stabilizing pattern of commenters (& host too unfortunately – innocently I believe) continually lapsing back into misconceptions patiently cautioned against by Tomas Milanovic.
Temperature is just one variable.
EOP are core indicators of climate.
I have an open mind.
If you want to convince me of the utility of temporal chaos modeling in climate exploration & research, I’ll need to see it addressed convincingly with hard data in the context of EOP.
If core people here want to (a) keep hopelessly forgetting the fundamental difference between temporal & spatiotemporal chaos patiently explained by Tomas Milanovic on many occasions and (b) continue operating with blinders towards robust insights from hard EOP data, then trust will most assuredly continue to be eroded.
So according to “chief”:
A. Insight from hard EOP data = “Nonsense”.
B. A dreamy abstract paper that uses the term “may” over 50 times is a better place to look for sensible insights into nature.
Paul,
1 – yes
2 – maybe
Robert, that’s some rather deep ignorance you’re clarifying. You’re missing core background and discarding all hope of trust if you claim EOP are unrelated to climate.
I haven’t actually proposed any model. LL&E proposed that the distribution of trends could be calculated from a random input and a power-law response that has a specific autocorrelation structure.
I am certainly not saying that the “power law model” is realistic, but I spent some time trying to think of a simple example of the physical processes underlying a power law that could be applied to temperature. The point being that one cannot simply attribute the dynamics of temperature to a power law unless you have a credible physical model and you can show that the results of that model are distinguishable from other models.
The correlation structure of the temperature record shows that there is persistence, but this does not necessarily imply a power law mechanism. A number of different mechanisms could be involved and I am not sure that these can be distinguished using available data. Nevertheless, the problem is simply whether the entire 20th century warming is attributable to a random trend created by persistence. My view is that this is unlikely unless one assumes a degree of persistence that is not apparent in the temperature records that I have examined, but shorter trends of reduced magnitude are quite likely.
I am not convinced that a black-box model of this type is particularly helpful.
The amount of energy in the system can have persistence, but the average temperature cannot.
We know that the Earth is bathed in the same energy flux at the spring and autumn equinoxes; we also know that global temperature is different at these two points. This is a true ‘lag’, but not all changes have ‘lags’.
We know that we can have approximately the same amount of heat in the planet’s system and yet have a massive difference in surface temperature. In 1998 the global temperature anomaly was 0.6 and the next year it was 0.33. Now in one year there was not a change in the Earth’s heat content corresponding to this change in the steady state temperature. Instead, heat was moved within the Earth system. Warm water from one location moved to an area where the surface temperatures were typically cool.
The amount of kinetic energy contained in ocean currents is huge; what the hell causes changes in circulation patterns I don’t know. I do however know that eddy currents and persistent patterns are common in water movements. I have no problem with episodic oceanic currents giving rise to temperature changes that can be observed. I have no problem with episodic iceberg calving giving rise to local changes in salinity/temperature and strangling off one leg of a bifurcated pathway of a nascent circulation pattern.
However, temperature is not in any way persistent.
I reread my post this morning Richard, and note I come across as a real jerk. Sorry for being so nasty; the point I should have made was that unless you have internal positive and negative controls, this type of analysis is quite without statistical merit. It is very easy to get very pretty rubbish by data crunching.
Doc,
If you look at a point in space, inject an impulse of thermal energy, i.e. heat, and watch the temperature evolution via an autocorrelation, it can give you an idea of the thermal diffusion coefficient. This is measuring persistence in some fashion. The autocorrelation is not as useful in this case as it is when the impulses are not well characterized or defined. It will take more work, but the thermal properties can still be extracted.
In this way, autocorrelations are related to convolutions, and the process of decoding an underlying model is related to the idea of deconvoluting an output from the signal to understand the impulse response.
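The deconvolution idea can be sketched in a few lines, using a synthetic exponential impulse response and a known white-noise input; the regularizer eps below is an assumption added purely to keep the spectral division stable:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
h = np.exp(-t / 20.0)          # assumed "true" impulse response: exponential decay
u = rng.standard_normal(n)     # known white-noise input

# Output of the system, formed by circular convolution so the
# frequency-domain algebra below is exact
y = np.fft.irfft(np.fft.rfft(u) * np.fft.rfft(h), n=n)

# Regularized deconvolution: recover the impulse response from the
# input and output, damping frequencies where the input has little power
U = np.fft.rfft(u)
Y = np.fft.rfft(y)
eps = 1e-6 * np.max(np.abs(U) ** 2)     # small stabilizer (assumption)
h_est = np.fft.irfft(Y * np.conj(U) / (np.abs(U) ** 2 + eps), n=n)
```

With a well-characterized input the impulse response falls out almost exactly; with poorly characterized inputs, as noted above, one is reduced to working from correlations, and the problem gets much harder.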
I am very unconvinced that a black-box model is appropriate. However, LL&E claim on this basis that the trends in temperature can be accounted for by persistence. Persistence can of course produce negative trends.
Any black-box model contains a physical assumption and I had to think about what the power law assumption represents in its simplest form. I do not think that one must necessarily assume that a power law model is correct, simply because one cannot distinguish it from other models that have long term persistence using the available data (see fig 9). If one assumes the presence of persistence, one can calculate the response of a model to a random fluctuation that results in the observed temperature variance. On this basis, I think that long term major trends are unlikely, but possible, and that shorter (30-40 year) excursions could be accounted for by this mechanism.
I quite agree that energy is likely to be transferred from one region to another in long term processes. Unfortunately the “detrending” does not seem to model this well and creates further uncertainty in the calculation of persistence, and a more formal approach is required. Regrettably, the available long term data renders this rather questionable.
Chief Hydrologist
Thanks.
On catastrophic change, Robert Essenhigh finds there is no potential for that in his analysis of radiative forcing of greenhouse gases involved in the atmospheric lapse rate.
Prediction of the Standard Atmosphere Profiles of Temperature, Pressure, and Density with Height for the Lower Atmosphere by Solution of the (S−S) Integral Equations of Transfer and Evaluation of the Potential for Profile Perturbation by Combustion Emissions, Robert H. Essenhigh
Energy Fuels, 2006, 20 (3), 1057-1067 • DOI: 10.1021/ef050276y
Chief,
You might recall I mentioned that there are some interesting things going on with limnology and carbon cycles. Have you ever had a chance to take a look at that topic?
Hi,
I did reply I think – but it is so long ago.
The carbon cycle is one of those things that are long on rhetoric and short on data. The freshwater aspect was interesting – but I had trouble fitting it into a concept of a global context – so gave up.
Cheers
Not that freshwater thing of Hunter’s again. Google how many times he has mentioned it. Yes, freshwater lakes can contain seaweed and algae. The northern climate lakes “turnover” and mix their thermocline twice a year, which both oxygenates and adds CO2 to the lower levels so they don’t eutrophicate as easily. That pattern has been going on for ages.
The only geology class I took in college was on limnology.
Webby has a habit of dropping simplistic, irrelevant and bombastic comment. It’s a talent. The adjective is eutrophic and the noun eutrophication. Trophic refers of course to nutrients. Turnover results from cooling and changing densities of the surface water, which becomes dense enough to sink on an annual basis. It is by no means universal. There is another much rarer process called a limnic eruption – where CO2 rises rapidly from the bottom and is released from the surface.
It has nothing to do with carbon sequestration in fresh water that Hunter was researching.
Chief –
“After examination of longer timescales, it has been suggested that the increasing Pleistocene climate variability may be interpreted as a signal that the near geological future might bring a transition from glacial–interglacial oscillations to a stable state characterized by permanent midlatitude Northern Hemisphere glaciation.”
The paper is fascinating, lucid and an example of the type of work that should be pursued. However, since they claim that, for example, a sudden stock market change could be preceded by either a high level of volatility or a low level, I won’t panic about being buried under the ice just yet!
cb (51.2N).
Judith,
The slippery slope of balance and tipping points…
Our planet’s systems were NEVER designed for balance.
Just constant change.
Balance and tipping points are man made concerns of NOT understanding this planet.
An autocorrelation of a time series is a rather blunt tool and incapable of determining any periodicity longer than the overall length of the time series used, but any periodicities within this length are robustly displayed.
Figure 8 shows an overall increasing temperature trend but we have no way of knowing the periodicity of the cycle that this is part of unless we use a long enough time series to depict this period.
If we use the time series of the graphic from the 1991 IPCC First Assessment Report depicting the Medieval warm period and the Little Ice Age we have a pretty clear indication that this warming is part of the overall warming and cooling cycle that included both of these events.
This makes a lot more sense than trying to rationalize this increase with an abstract concept such as “persistence”.
On the other hand, if the autocorrelation is filtered to remove the annual seasonal periodicity, there is a clear indication of a dominant periodicity of around 65 years, demonstrated by the red trend, which would be much more clearly shown if a high-cut filter were used instead of the low-cut filter used in this presentation.
“It is not possible with this data set to be certain of the model to describe persistence, but one can use a power law or multicompartment model with different assumed values and determine their abilities to produce trends.”
“Figure 8 Simulated record (blue) with low pass filtered version (black) and trends superimposed in red.”
Ah, remember when scientists were helping mankind to be more productive? AGW science produces cost for all. Here we have the results from the old model.
http://www.sfgate.com/cgibin/article.cgi?f=/g/a/2012/02/20/bloomberg_articlesLZNZ0D07SXKX01LZO0R.DTL
How many watts will this save us all down the road? I like science.
“…and so the temperature at any particular instant is influenced by its past history.”
In this context, ‘the temperature’ means the actual temperature, not the ‘official’ temperature. The ‘official’ temperature is arrived at by locating ‘official’ thermometers at airports, exposed to chemicals and jet exhaust and runways that are regularly cleared of snow.
To compare, the actual temperature is ‘the temperature’ that you dress for in the surrounding countryside. The ‘official’ temperature is what you meet in Cancun to discuss over margaritas provided courtesy of the productive who are at home actually providing value to society.
Richard – Thanks for an interesting essay on this topic. I don’t want to revisit Ludecke et al because I think they misapplied and misinterpreted the DFA method they used, but the concept remains useful. Persistence is a description, not a mechanism, and as you point out, modeling an apparent persistence in terms of mechanisms is a challenge. In the case of global temperature, we can speculate that a rising temperature can carry over from one interval to the next from a combination of random fluctuations and stored heat, but it can also be perpetuated by an external mechanism (a forcing) that continues to exert a warming effect over multiple intervals.
In theory, a persistent but constant forcing should be characterized by a warming effect that diminishes over time as the TOA flux imbalance tends toward zero, while a forcing increasing at a modest rate might be characterized by a constant flux imbalance and a stable warming effect, and a rapidly rising forcing by an increase in warming rate due to an increasing flux imbalance.
In practice, I expect this would be extremely difficult to extricate from the climate noise we observe due to multiple changes in forcings and chaotic fluctuations, but I would be interested in your opinion on this. Two future scenarios might be worth considering. One is a world with an exponentially rising atmospheric CO2, where we might anticipate a long term increasing flux imbalance (all other things being equal). The other (not exclusive with the first) would be a world with a long term decline in solar irradiance, as some are predicting. The forcing magnitudes are predicted to be greater in the CO2 scenario than the solar scenario, but the principles are similar. In each case, a very long term persistence would be expected if the forcing and its changes were very long term.
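The first two forcing cases above can be illustrated with a one-box energy balance model, C dT/dt = F(t) − λT; the heat capacity and feedback values below are illustrative choices, not fitted numbers:

```python
import numpy as np

# One-box energy balance: C dT/dt = F(t) - lam * T
C = 8.0       # effective heat capacity (W yr m^-2 K^-1), illustrative
lam = 1.2     # feedback parameter (W m^-2 K^-1), illustrative
dt = 0.01     # time step in years
t = np.arange(0.0, 200.0, dt)

def integrate(F):
    """Forward-Euler integration of the one-box model."""
    T = np.zeros_like(t)
    for i in range(1, len(t)):
        T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C
    return T

# Constant (step) forcing: the TOA imbalance N = F - lam*T decays to zero
F_step = np.full_like(t, 3.7)
N_step = F_step - lam * integrate(F_step)

# Steadily rising (ramp) forcing: N settles at a constant nonzero value
F_ramp = 0.04 * t
N_ramp = F_ramp - lam * integrate(F_ramp)
```

In the step case the warming effect diminishes as the imbalance tends to zero; in the ramp case the imbalance settles near a constant (analytically, the ramp rate times C/λ), matching the distinction drawn in the comment.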
Also, would a mechanism involving both a temporary heat storage term and a longer forcing term conform better to a multicompartment model than a power law model, or would it make no difference?
“Coupling is a property of an individual mode’s phase relative to the phases of other modes. When two modes’ phases lock, i.e., retain a fixed relationship for a sufficiently long period of time, then regardless of the phase lag between them those modes are considered coupled…
“There are several important details regarding the definition of coupling in terms of trends in mode evolution with respect to time. First, even if the modes are strongly coupled, trend phases among the different modes in general will not occur simultaneously, as those modes could have physically based phase lags relative to each other. Hence, in the definition of coupling it is necessary to define a window over which to search for trend phases. For the situation here, we are interested in interannual to decadal changes in the coupling, so a window of 5-7 years in length is appropriate…
“It is hypothesized that persistent and consistent trends among several climate modes act to kick the climate state, altering the pattern and magnitude of airsea interaction between the atmosphere and the underlying ocean…”
[Swanson and Tsonis: Has The Climate Recently Shifted? January 26, 2009 (Draft)]
…Insofar as this sequence of events without fail led to a shift in the climate state during the 20th century as well as in climate model simulations, this strongly suggests that the climate state has recently shifted. If the 20th century past is indeed prologue, such a shift should mark another break in the global mean temperature trend… Over this period, there have been 6 cooling episodes. Three of these are associated with tropical volcanic eruptions (Agung 1963; El Chichon 1982; Pinatubo 1991), while the 1955 and 1973 events coincide with large amplitude La Nina events…” (Ibid.)
If modelmakers wish to support AGW alarmism they must ignore the issue of autocorrelation and maintain the charade that the data provides even the slightest evidence for predictability. McShane and Wyner demonstrated that there is absolutely no ‘signal’ in the proxy data that underlies MBH98/99/08 (aka, the ‘hockey stick’ graph).
I have the same problem with a lot of the hunts for regularities which have occurred on various threads. A century ago Cantor proved that there are uncountably infinitely many functions which fit any finite set of data. So there must be infinitely many regularities out there; and if a few hundred are found, how do we know which if any are real?
All of these analyses need to be constrained in the first instance by real physics, not the kind that says “what function do we get if we model the climate like this system or like that system”? In the first instance, the physics of a body which is slightly overheated is that its temperature decays back nearly exponentially to its equilibrium, because the rate of heat loss is nearly proportional to the excess temperature. Other functional forms only start to kick in when the excess temperature becomes very large so that proportionality breaks down. Since we are considering temperature shifts of less than 1C in a planet whose temperature is about 300K, the excess temperature is very small and we should expect nearly exponential decay. (This approximation that the rate of adjustment is proportional to the extra stress is the basis of most perturbation theory, for example. It ultimately derives from Taylor’s theorem, that almost any nonlinear function has a linear approximation close to the starting point.) Specific suggestions for physics that produces an instantly non-proportional response to small stresses are welcome, but I know of none except where systems are unstable to begin with; and planetary temperature is certainly not like that.
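The proportionality claim is easy to check numerically: linearize the Stefan-Boltzmann emission about a 300 K reference and compare the first-order Taylor term with the exact change for a 1 K perturbation.

```python
# Linearization of blackbody emission about a 300 K reference:
# a 1 K perturbation sits well inside the linear regime, which is why
# near-exponential relaxation is the expected response.
sigma = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T0 = 300.0          # reference temperature, K
dT = 1.0            # small perturbation, K

exact = sigma * ((T0 + dT) ** 4 - T0 ** 4)   # true change in emitted flux
linear = 4.0 * sigma * T0 ** 3 * dT          # first-order Taylor term

rel_err = abs(exact - linear) / exact        # fractional nonlinearity, ~0.5%
```

The relative error of the linear term is about half a percent, so the restoring flux is proportional to the excess temperature to very good accuracy, and exponential decay follows.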
I think that an autocorrelation analysis constrained by good physics is likely to yield important insights, but it also needs to follow techniques that statisticians and econometricians already know to work well, and here Richard seems to be reinventing the wheel. He focuses on the autocorrelation function, but this is not particularly informative by itself, because several different types of autocorrelations cause it to decay away in a very similar manner. The old Box-Jenkins procedure for identifying the right correlation model makes use of the partial autocorrelations as well, which Richard does not mention. But in any case there are now better procedures for estimating the most likely correlation model, which are implemented in standard statistical software.
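To illustrate the Box-Jenkins point: for an AR(1) process the autocorrelation decays geometrically while the partial autocorrelation cuts off after lag 1. A sketch using the Durbin-Levinson recursion on synthetic data (the coefficient 0.7 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_acf(x, nlags):
    """Sample autocorrelation r[0..nlags]."""
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c[:nlags + 1] / c[0]

def pacf_from_acf(r):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    nlags = len(r) - 1
    phi = np.zeros((nlags + 1, nlags + 1))
    phi[1, 1] = r[1]
    for k in range(2, nlags + 1):
        num = r[k] - np.dot(phi[k - 1, 1:k], r[k - 1:0:-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], r[1:k])
        phi[k, k] = num / den
        for j in range(1, k):
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
    return np.array([phi[k, k] for k in range(1, nlags + 1)])

# AR(1) with coefficient 0.7: the ACF decays geometrically while the
# PACF cuts off after lag 1 -- the Box-Jenkins identification signature
n = 5000
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + e[t]

r = sample_acf(x, 10)
p = pacf_from_acf(r)      # p[0] ~ 0.7, later entries near zero
```

The ACF alone could be mistaken for several model families; the PACF cutoff is what pins the order down, which is the extra information the comment says is missing from an ACF-only analysis.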
(For the avoidance of doubt, I do not suggest that anyone should run off to push a raw temperature series through one of these analyses. To learn anything, we first need to have a physically plausible description of what is happening, and then examine the autocorrelation structure of the residuals that are not explained by that description. That structure may then suggest that the system stays close to whatever the physical model predicts, with fluctuations rapidly dying away; or it may suggest that the fluctuations tend to be persistent and the temperature may wander far away from equilibrium for long periods. But until we have a believable physical prediction of the equilibrium path, we have no set of residuals to examine.)
I agree with your comments. However LL&E claimed that Hurst law dynamics was inherent in the temperature record and that by extracting the parameters of this model, long term trends were probable.
My points are simply that one cannot even distinguish between a linear and nonlinear model with substantially different dynamics using the available data and that, by using simulation of the temperature record, the distribution of the fitted parameters is very large. The range of parameters is such that any statement about the likelihood of trends is almost meaningless. Given a relatively crude fitting approach, I believe that LL&E have overestimated the persistence, and this stems from the excessive smoothing involved in DFA, which is of course a correlational method.
My view is that one can either go forwards, starting with a physical model and identifying it by applying whatever technique to the available data, or go backwards, applying a technique without a physical model and then wondering what the results mean.
In this case, there is a physical model (which I regard as completely naive) and the object is system characterisation. Since we don’t have an input function, we cannot do a formal characterisation and we are then left fitting a model to the correlation. I agree that there are many ways of doing this, but it has to be thought about in the context of the data, which is frankly awful and too short to look for second-order effects. The residuals following first-order correlation are practically random, and there seems little point in going further, especially as we wouldn’t understand what the results meant and they wouldn’t materially affect the conclusion.
I would point out that there is a real problem in writing a post for a blog. This is not a scientific paper and, while there are many knowledgeable people reading it, there are those who are not intimately involved with the finer points of signal processing or whatever. Therefore one has to take a simplified, general approach.
Temperature doesn’t fall off as a damped exponential if the only dissipation is diffusion. That consideration is huge because many geothermal paths are diffusional, such as those beneath the ground and a significant proportion of oceanic paths.
Those are the models I am applying for thermal response.
Sorry, it does. Look at the Wikipedia article on heat diffusion, which gives Fourier’s solution to the one-dimensional case; this decays exponentially with time from the starting temperature. Be careful of the Green’s function solutions: they solve the problem of how the temperature changes at some point when a heat source is placed nearby; the temperature starts at zero, then rises as the heat arrives. But that is not the case of an atmosphere which is first heated and then returns to equilibrium as the heat diffuses away; it models how temperatures underground respond when the atmosphere is heated above ground.
The solution is exponential only in special cases, which are very unlikely to occur in practice; it is not exponential more generally. For the kinds of situations relevant to this discussion, the solution is with certainty far from exponential in time.
Could you elaborate on that? There is Newton’s law of cooling, which is applicable when the rate of heat conduction is much greater than the external heat flow. This means that the system is heading toward a steady state with the environment at all times.
I guess the difficult cases are the well-insulated situations, such as a planet sitting in a vacuum, the earth shielded from its core, or the deep oceans. I don’t think any of those are exponential, and they are the most critical for climate change considerations.
Pekka and WHT, could you clarify please? The situation under discussion is that “the temperature signal is perturbed by a random signal”; put a little more carefully, some one-off event (a widespread unusual snowfall pattern one winter, a disturbance from the sun, whatever) causes the earth’s lower atmosphere to be, say, 1C warmer than normal on a particular date. After that, this particular shock goes away and the forcings return to normal. The extra heat in the atmosphere then radiates to space, diffuses into the ground, and convects into the ocean. Under all the conditions that I know of, the atmosphere’s temperature then decays back exponentially to normal (until modified by further shocks, as in Richard’s diagram), because all of these transport mechanisms remove heat from the atmosphere at a rate that is proportional to the (small) excess temperature; and the heat capacities of outer space, the oceans and the earth are effectively infinite compared to the atmosphere’s, so that their temperatures do not change appreciably. Pekka’s comment about relevant situations and WHT’s mention of external heat flow suggest that you are thinking of some other setting, but I can’t see what it might be that would be relevant. Or is one of my assumptions about the physics of this situation off base?
Dunmore,
Sure, I can try to clarify. Any 3-D thermal problem will need to be solved with a finite mesh and all the numerical complexities that entails. The mesh will be structured around the heat equation to establish continuity of flow between the nodes. If there is convection and advection involved, then a fluid-dynamics mesh is needed, and perhaps some additional compartmental modelling to keep the situation under control. After all this is said and done, we don’t know what will come out of the mix.
That said, I would like a simpler solution, and that is why I am not using meshes.
This is a very good laboratory paper that shows how the damped exponential and diffusive response solutions intersect.
“An experiment on the dynamics of thermal diffusion”
When strong heat losses occur through conductive paths, they multiply the response by an exponentially damped factor. If the experiment is operated in a vacuum, that path is missing and the thicker tails from 1/sqrt(t) remain (see Figure 5). So the question is how strong the conductive paths are versus the diffusive paths.
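A tiny numerical sketch of that distinction (the time constant is invented for illustration): a purely diffusive response has the fat 1/sqrt(t) tail, while a strong conductive loss path multiplies it by a damped exponential that eventually dominates.

```python
import numpy as np

# Purely diffusive response: fat 1/sqrt(t) tail.
# With a strong conductive loss path, the same response is multiplied
# by an exponentially damped factor exp(-t/tau); tau is illustrative.
t = np.linspace(0.1, 50.0, 500)
tau = 5.0
diffusive = 1.0 / np.sqrt(t)
damped = np.exp(-t / tau) / np.sqrt(t)

# At long times the conductively damped response sits far below the
# pure diffusive tail (by a factor exp(t/tau)).
print(diffusive[-1] / damped[-1])
```

Which regime applies depends on how strong the conductive paths are relative to the diffusive ones, which is exactly the question posed above.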
I like these American Journal of Physics papers because they aren’t always trying to find some novel bit of physics, but are instead trying to help clarify our understanding so that students can learn the fundamental principles better.
Another good set of papers to look at is James Hansen’s work from the 1980s.
Thank you WHT, that’s very helpful. The particular model in the paper you referred me to does not apply, because the temperature at the point where the heat pulse is applied falls off as the inverse square root of time; thus the temperature is infinite at the moment the pulse is applied. That does not happen to the atmosphere. The author’s focus is on how temperature changes at various places along the rod, which does not have infinities but is not relevant to our situation. A better model would involve a perfectly conducting cap (a well-mixed atmospheric column) on the end of the rod (the ocean or ground beneath it), with the cap suddenly heated a little and then warming the rod. The question would be how the temperature of the cap falls as heat is conducted into the rod. I can’t see a solution, but it does appear to be neither exponential nor power-law decay.
Anyway, thanks for the references and for setting me straight on this: there really are cases where the rate of cooling is not proportional to the temperature excess.
Paul,
You can check from the Wikipedia article the spatial structure of the exponentially damped solution: it’s a sinusoidal oscillation in the spatial coordinate. That’s the only spatial structure that disappears exponentially.
There are many alternatives for the initial and boundary conditions to use when studying the warming or cooling of practical interest. No single choice is definitely the most natural and best. All reasonable choices have, however, the common property that they are very far from the Fourier solution. They are infinite sums of Fourier solutions, each of which has a different time constant in its exponential. Such infinite sums have a time dependence that can never be exponential, and in practice it is very far from exponential.
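A short numpy sketch of that statement (domain, diffusivity and pulse width are all invented): for the heat equation on a finite interval, a single sinusoidal mode decays exactly exponentially, but a localized initial pulse is a sum of modes with different time constants and its decay is visibly non-exponential.

```python
import numpy as np

# For u_t = D*u_xx on [0,1] with u(0)=u(1)=0, each mode sin(n*pi*x)
# decays as exp(-D*(n*pi)^2*t); a general initial profile is a sum of
# modes with different time constants, so its decay is not a single
# exponential. All values below are illustrative.
D = 1.0
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
n = np.arange(1, 101)                  # mode numbers
lam = D * (n * np.pi) ** 2             # per-mode decay rates

def temp_at_centre(coeffs, t):
    """Temperature at x = 0.5 after time t, from the mode expansion."""
    return float(np.sum(coeffs * np.exp(-lam * t) * np.sin(n * np.pi * 0.5)))

# Case 1: a single pure mode (n = 1)
c_mode = np.zeros(len(n))
c_mode[0] = 1.0

# Case 2: a narrow pulse at x = 0.5, which excites many modes
u0 = np.exp(-((x - 0.5) / 0.02) ** 2)
c_pulse = np.array([2.0 * np.sum(u0 * np.sin(k * np.pi * x)) * dx for k in n])

ts = (0.01, 0.02, 0.03)
mode = [temp_at_centre(c_mode, t) for t in ts]
pulse = [temp_at_centre(c_pulse, t) for t in ts]

# Over equal time steps a pure exponential has equal successive ratios
print(mode[1] / mode[0], mode[2] / mode[1])      # equal: exponential
print(pulse[1] / pulse[0], pulse[2] / pulse[1])  # unequal: non-exponential
```

The constant ratio over equal time steps is the signature of a pure exponential; the pulse fails that test because its fast modes die first and the effective time constant keeps lengthening.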
@Richard Saumarez
Do you use complex wavelets?
One could, but I don’t for this problem. I use a variant in my real work which is the prediction of sudden cardiac death.
RC Saumarez, thank you for an interesting discussion of the topic.
I initially cringed when I saw LL&E being mentioned. Not because of their use of DFA, which is not incorrect as many have claimed, but because of the second half of the paper and its overreaching conclusions.
But your discussion is really nicely presented, and thoughtful. There is a lot of useful information here.
David Hagen has already linked a number of Dr Koutsoyiannis’ articles here, so perhaps my contribution is not necessary :).
I agree with many of your points, Richard: there exists a dependency within the time series, and statistical analysis of the time series alone precludes determining which model of autocorrelation is correct. But there is another approach: we can apply well-understood physical principles to argue for a preferred autocorrelation structure. This is what Dr Koutsoyiannis does in his Physica A paper (linked by David Hagen upthread).
There are a few comments here that caught my eye. WebHubTelescope complains that HK dynamics is not a “real model”. To me, this is a bit of an oxymoron: models by definition are not real. They are a mathematical representation of something physical, or statistical. But Dr Koutsoyiannis does have a physical explanation, rooted in statistical thermodynamics.
I think I see similar arguments in the debate about quantum mechanics vs. classical mechanics*. Many people (both at the time, and even still today) complain that classical mechanics links back to intuitive physical reality, but quantum mechanics is not “real”. Quantum mechanics is based on probabilities and uncertainties, non-intuitive things which do not seem physical to many. Yet quantum mechanics has far greater predictive power than classical mechanics. So which one is “real”?
Well, I’ve talked about the fact that we cannot verify the predictions of HK dynamics within the time series itself. That is a problem, because useful predictions must be falsifiable. However, there may be an answer to this too :)
Dr Koutsoyiannis took the approach used to demonstrate HK dynamics (maximisation of entropy production) and applied it to another well-known problem: the disagreement between the classical climate-textbook application of the Clausius-Clapeyron relation and the empirical Magnus equation. For a little background, the climate-textbook application of Clausius-Clapeyron assumes a constant latent heat of vaporisation, which is known to be incorrect. This to me seems a classic climate-science fudge: we don’t know what this is, so we’ll represent it with a mean. It yields a theoretical term which is not unreasonable, but is in disagreement with empirical measurements.
Dr Koutsoyiannis applies his methods to this problem, maximising the uncertainty in the unknown parameter within the constraints of statistical thermodynamics, and comes up with a new relationship, one which accurately matches the empirical equations. Much like quantum mechanics leading to a description of the spectrum of the hydrogen atom that classical mechanics could not achieve, these new methods lead to a correct description of the Clausius-Clapeyron relation that classical methods could not achieve. (ref below)
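For readers following this, the contrast is easy to state. With the constant-latent-heat simplification criticised above, Clausius-Clapeyron integrates to an exponential in 1/T, whereas Magnus-type empirical fits take a different functional form (the numerical constants below come from one common published fit and are shown purely for illustration):

```latex
% Clausius-Clapeyron, de_s/dT = L e_s / (R_v T^2), with L held constant:
e_s(T) = e_s(T_0)\,\exp\!\left[\frac{L}{R_v}\left(\frac{1}{T_0}-\frac{1}{T}\right)\right]
% One common Magnus-type empirical fit (t in degrees Celsius):
e_s(t) \approx 6.1094\,\exp\!\left(\frac{17.625\,t}{t+243.04}\right)\ \text{hPa}
```

The two expressions agree over narrow temperature ranges but diverge when extrapolated, which is the disagreement Koutsoyiannis’ derivation resolves.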
So, we have established the predictive power of these new methods. But does that show without doubt that HK dynamics governs climate? I would say it is not yet beyond doubt, but given the available evidence, it is the most credible… perhaps, most “real” model we have to date. But no model should ever be confused for reality.
Ref: Koutsoyiannis, D., Clausius-Clapeyron equation and saturation vapour pressure: simple theory reconciled with practice, European Journal of Physics, 33 (2), 295–305, 2012
*Note: I am not comparing HK dynamics and quantum mechanics as theories here! I am just comparing the reaction to these theories by people and scientists, and noting certain similarities in the reactions, not necessarily in the theories themselves.
Thanks for your nice comments.
I have read some of Koutsoyiannis’ work when thinking about the LL&E problem and was impressed.
My basic problem is this: should we assume a priori that temperature follows a power law and, if so, what assumptions are we making about the physics of the system? As WebHubTelescope has pointed out, and as arises from very simple modelling as in figure 2, a mixture of diffusion and advection will result in something close to a power law, depending on how complex one makes the system. The difficulty is supporting a power-law hypothesis with real data, particularly historical temperature records.
The problem is compounded by not knowing the input (forcing) of the system, and also by the question of whether one can linearise a power-law response adequately.
As I’m sure you know, parameter fitting to data can be highly ill-conditioned, for example fitting two exponentials to decay data. When thinking about the problem, I tried fitting the Hurst parameters to the actual time-domain signal using Gauss-Newton. The coefficient matrix had a spectral radius of 50-100. This implies that small changes in the data will give different fits. Turning to the ACF, the difficulty is that the record (100 years) is too short to resolve the tail and get a good measure of the persistence, as shown by the simulations in figure 9.
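The two-exponential example can be made concrete with a few lines of numpy (all parameter values are invented): compare the conditioning of the Gauss-Newton Jacobian when the two decay rates are close versus well separated.

```python
import numpy as np

# Gauss-Newton Jacobian for fitting r(t) = a1*exp(-b1*t) + a2*exp(-b2*t).
# All parameter values here are invented for illustration.
t = np.linspace(0.0, 10.0, 100)

def jacobian(a1, b1, a2, b2):
    return np.column_stack([
        np.exp(-b1 * t),            # dr/da1
        -a1 * t * np.exp(-b1 * t),  # dr/db1
        np.exp(-b2 * t),            # dr/da2
        -a2 * t * np.exp(-b2 * t),  # dr/db2
    ])

def collinearity(J):
    """Condition number with columns scaled to unit norm, so it
    measures near-collinearity rather than mere column scale."""
    return np.linalg.cond(J / np.linalg.norm(J, axis=0))

close = collinearity(jacobian(1.0, 0.5, 1.0, 0.7))  # similar decay rates
apart = collinearity(jacobian(1.0, 0.1, 1.0, 2.0))  # well-separated rates

# With similar rates the columns are nearly collinear, so small changes
# in the data move the fitted parameters a long way.
print(close, apart)
```

The same mechanism makes a Hurst-parameter fit sensitive: nearly interchangeable parameter directions inflate the spectral radius of the normal equations.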
Therefore, I don’t think it really matters what model we use in this instance, and an arbitrary series describing the serial correlation would be no less or more valid than a power-law or a multi-compartment model. But I don’t think we can resolve the parameters of any such model, or even the serial correlation itself, with sufficient accuracy to make a statement about its real-world ability to produce trends. Essentially one is dividing a toadstool by a moonbeam and expressing the result to six decimal places.
My feeling is that LL&E have overestimated the probability of 100-year trends. I make it about 1%; it could be 10%, but I would be surprised if it were 50%.
What I want to show is the difficulty of applying the Hurst-Kolmogorov approach to come up with something that you can just as easily derive from straightforward uncertainty analysis.
What I did was reanalyze one of Koutsoyiannis’s very recent papers on variability in rainfall and do my own uncertainty analysis. This is the paper I looked at today:
“Can a simple stochastic model generate rich patterns of rainfall events?” by S.M. Papalexiou, D. Koutsoyiannis, and A. Montanari [2011]
Most of the details are in this linked blog post I just completed:
http://theoilconundrum.blogspot.com/2012/02/rainfallvariabilitysolved.html
The analysis uses a set of rainfall data from U of Iowa, and I apply a doubly stochastic MaxEnt uncertainty analysis, once for the variability within a rainstorm, and again for the variability across rainstorms. The fit to the actual data is astounding because it only assumes a single mean rainfall rate.
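A Monte Carlo sketch of what “doubly stochastic” means here (the mean rate and sample sizes are invented): MaxEnt with only a known mean gives an exponential distribution, applied once across storms and once within them, and the compound result has a far fatter tail than a single exponential with the same mean.

```python
import numpy as np

# Doubly stochastic (superstatistical) draw: each storm's mean rate is
# exponential about a single overall mean, and rates within a storm are
# exponential about that storm's mean. All numbers are illustrative.
rng = np.random.default_rng(3)
overall_mean = 1.0                                         # say, mm/hr
storm_means = rng.exponential(overall_mean, size=100_000)  # across storms
rates = rng.exponential(storm_means)                       # within storms

# Compare tail weight against a single exponential with the same mean
single = rng.exponential(overall_mean, size=100_000)
print(rates.mean())                             # still ~ the overall mean
print((rates > 8).mean(), (single > 8).mean())  # compound tail is fatter
```

Analytically the compound density works out to a Bessel-K form: fat-tailed, but still not a power law.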
This is nowhere near a power-law distribution. Now I think I understand why Koutsoyiannis has a hard time getting published with his research work. I have a feeling he is a tad sloppy in applying immature methods such as HK where propagation of uncertainty from basic statistical-physics first principles works much better. As Pekka said, the “Hurst type statistical properties cannot in most cases be derived rigorously”. I have stayed away from that stuff because I don’t understand it the way I understand basic statistical mechanics.
It looks as if Koutsoyiannis reads this blog’s comments, and I wonder what he will say about my analysis. I see my stuff as essentially bulletproof (after all, I spent a couple of hours on it) and I used the same rank-histogram approach that he used (he calls it a Weibull plotting position). What I didn’t do was any kind of autocorrelation analysis, because I don’t have direct access to the Iowa data, just the graphs.
WHT,
My apologies for missing this. Pekka reminded me of this comment and I went to look at it. Demetris has responded to your comment (link below) and his responses will be far clearer and better expressed than any attempt I would make :)
https://judithcurry.com/2012/02/20/godandthearrogantspecies/#comment173746
Figure 10 would make me tear out my hair, if I had any. It’s a sinusoid, not a bunch of piecewise alternating trends.
You do not have to have a cyclical driver to get a sinusoidal response. PDE solutions for bounded continuum systems can almost always be expanded as a series of 2nd-order systems with more or less lightly damped sinusoidal responses. A random excitation applied to such a lightly damped system will produce a phase- and amplitude-modulated sinusoid, in accordance with the excitation power and the damping ratio.
The oceans are bounded. The atmosphere is bounded. Such behavior is usual, unremarkable, commonplace, even expected. I just cannot fathom why climate science is so behind on the learning curve. These concepts have been used in industry for decades.
I absolutely agree. Without a trend in the forcings, what goes up goes down. Obviously, as you say, a low-pass system excited by a random input will produce cycles whose shape depends on the phase spectrum of the inputs. If the phases of the 1st, 3rd and 5th low-frequency harmonics of the forcings are aligned, then you will get long cycles that might approximate a triangular wave. However, from a statistical point of view, the temperature could be viewed as a constrained random walk, which, if unbiased, will meander around zero.
The issue is whether the 20th-century warming is an actual trend or simply a random cycle with a long period. The analysis is compounded by the paucity of data, because we don’t have 200 years of record to determine whether there really are cycles. The scope of this post was to question whether, given the lags that are discernible from temperature records, large temperature excursions are probable.
It’s more than that. It isn’t a low-pass system, or at least not a run-of-the-mill low-pass system with critical or higher damping.
It is a system whose response is the sum of lightly damped 2nd-order systems, as in the plot on chart #6 here as the damping ratio “zeta” tends to zero. Such a response is what you typically get by solving the partial differential equations governing a bounded system with weak energy dissipation (which is what provides the damping).
It’s like a bell – you strike it, and it vibrates for an extended period. You keep striking it randomly, and it vibrates forever at the same natural frequency, with long term randomly varying amplitude and phase. If you hit it really hard, you excite many natural frequencies, and the output is the superposition of all the excited “modes.” Exciting the higher frequency modes requires more energy, so the fundamental harmonic is most often what you mainly hear.
The input forcing does not itself have to be cyclical to induce a pseudocyclical response from such a system. And, by pseudocyclical, I mean cyclical with long term random variation in phase and amplitude.
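A small simulation of this claim (the natural period, damping ratio and step size are all invented): a lightly damped second-order system driven by white noise develops a spectral peak at its natural frequency even though the input spectrum is flat.

```python
import numpy as np

# Lightly damped 2nd-order system x'' + 2*zeta*w0*x' + w0^2*x = f(t),
# driven by white noise and integrated with semi-implicit Euler steps.
# The natural period (60 units), damping and step size are illustrative.
w0 = 2.0 * np.pi / 60.0   # natural frequency
zeta = 0.05               # light damping
dt = 0.1
nsteps = 200_000

rng = np.random.default_rng(1)
noise = rng.standard_normal(nsteps)
x = v = 0.0
out = np.empty(nsteps)
for i in range(nsteps):
    a = noise[i] - 2.0 * zeta * w0 * v - w0 ** 2 * x
    v += a * dt
    x += v * dt
    out[i] = x

# The output spectrum peaks near w0 although the forcing is broadband
spec = np.abs(np.fft.rfft(out - out.mean())) ** 2
freqs = np.fft.rfftfreq(nsteps, dt) * 2.0 * np.pi
peak = freqs[1:][np.argmax(spec[1:])]
print(peak, w0)   # spectral peak close to the natural frequency
```

Striking the system randomly is enough; no cyclical forcing is needed to produce a pseudo-cyclical output.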
As an example, I created such a system model, truncated to two harmonics, to model the major harmonic components in the sunspot data here. (I did not have time to delve too far into it, as this is not my day job, but sunspot activity could be statistically predicted by incorporating this model into a Kalman Filter, running it backwards and forwards over the data to initialize the states, then running it forward in time. The Kalman Filter formalism would also produce error bars for the prediction which would, naturally, grow with time.)
I did make some runs driving the system model with white noise, and got results which are qualitatively similar to actual sunspot data here and here.
This is the way the natural world works. Such modeling is used extensively in designing structures, fluid conduits, and any other system which can be described by partial differential equations over a bounded domain.
The dynamics of the oceans and atmosphere could, in theory, all be described by partial differential equations on a bounded domain; likewise the heat flow through these media. There are going to be natural frequencies of the system. I believe the ~60-year component we see so clearly in the average global temperature metric is a manifestation of the fundamental (lowest-energy) natural mode of that system.
This is similar to the concept of stochastic resonance. The broadband noise tickles the natural frequencies of the system at which point they resonate (or at least escape from being filtered).
Think of something as simple as ocean waves. A single wave crest has a certain amount of potential energy depending on how high it is. The amount of time it takes to flatten is determined by gravity. The wind provides the broadband noise to get the waves going, and the waves then lock into a disordered periodicity based on the time constant.
A washboard road is another example. These do not happen spontaneously but are the result of random external disturbances that resonate with modes of the system. And these often have the characteristic that the peaks equal the valleys in magnitude.
‘The concept of stochastic resonance was invented in 1981-82 in the rather exotic context of the evolution of the earth’s climate. It has long been known that the climatic system possesses a very pronounced internal variability. A striking illustration is provided by the last glaciation which reached its peak some 18,000 years ago, leading to mean global temperatures of some degrees lower than the present ones and a total ice volume more than twice its present value. Going further back in the past it is realized that glaciations have covered, in an intermittent fashion, much of the Quaternary era. Statistical data analysis shows that the glacial-interglacial transitions that have marked the last 1,000,000 years display an average periodicity of 100,000 years, to which is superimposed a considerable, random looking variability (see Figure 1). This is intriguing, since the only known time scale in this range is that of the changes in time of the eccentricity of the earth’s orbit around the sun, as a result of the perturbing action of the other bodies of the solar system. This perturbation modifies the total amount of solar energy received by the earth but the magnitude of this astronomical effect is exceedingly small, about 0.1%. The question therefore arises, whether one can identify in the earth-atmosphere-cryosphere system mechanisms capable of enhancing its sensitivity to such small external time-dependent forcings. The search of a response to this question led to the concept of stochastic resonance. Specifically, glaciation cycles are viewed as transitions between glacial and interglacial states that are somehow managing to capture the periodicity of the astronomical signal, even though they are actually made possible by the environmental noise rather than by the signal itself. 
Starting in the late 1980’s the ideas underlying stochastic resonance were taken up, elaborated and applied in a wide range of problems in physical and life sciences.’ http://www.scholarpedia.org/article/Stochastic_resonance
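The mechanism in the quoted passage can be sketched in a few lines (the potential, forcing amplitude, noise level and time step are all invented): an overdamped particle in a double well, with a periodic forcing too weak to cross the barrier on its own, hops between wells only once noise is added.

```python
import numpy as np

# Stochastic resonance sketch: overdamped motion in a double-well
# potential V(x) = -x^2/2 + x^4/4, i.e. drift x - x^3, plus a weak
# periodic forcing and optional noise. All parameters are illustrative.
def simulate(amp, sigma, nsteps=300_000, dt=0.01, w=2.0 * np.pi / 200.0, seed=2):
    rng = np.random.default_rng(seed)
    kicks = sigma * np.sqrt(dt) * rng.standard_normal(nsteps)
    x = 1.0                   # start in the right-hand well
    xs = np.empty(nsteps)
    for i in range(nsteps):
        drift = x - x ** 3 + amp * np.sin(w * i * dt)
        x += drift * dt + kicks[i]
        xs[i] = x
    return xs

quiet = simulate(amp=0.2, sigma=0.0)   # sub-threshold forcing: no hops
noisy = simulate(amp=0.2, sigma=0.35)  # forcing plus noise: hopping occurs

print(quiet.min() > 0.0)                        # stays in one well
print((noisy > 0).any() and (noisy < 0).any())  # visits both wells
```

The noise-enabled hops tend to synchronise with the moments when the forcing lowers the barrier, which is how a tiny periodic signal leaves its imprint on the transition record.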
In climate it is the concept of feedback of ice and snow in a nonlinear dynamically complex system. It helps to have concrete examples.
The energy of ocean waves derives from wind shear – but the pattern of peaks and troughs emerges from interference. Waves reinforce and cancel – much the same as in the double slit experiment.
Washboard roads are a bit mysterious – but yes the corrugations do have amplitude.
And somewhat off topic, why have google honoured Hertz with a series of semicircles rather than sinusoids?
Do they realise the bandwidth wasted to create those semicircles? :)
http://notrickszone.com/2013/09/24/hansvonstorchonwarmingpausefellowscientistsareveryhardpressedforanexplanation/
“warmist Ministry of Environment Dr. Harry Lehmann is asked if all the uncertainty is a problem for him. He responds with “yes and no“,…”
That’s some kind of surrealist, philosophical joke, right ?
It reminds me of the OED definition of the word recursion: “see recursion “.
A very fascinating subject