by Vaughan Pratt
Paul Clark, the developer of the immensely useful WoodForTrees website that plots climate data, has kindly joined the discussion at Climate Etc., to clarify the meanings of ‘From:’, ‘To:’, and ‘Isolate’ which had been giving some people trouble. In this post I’d like to focus on the third of these, ‘Isolate’, whose utility may not have been fully appreciated.
Those of you who’ve played around with the site Wood For Trees will have
found the moving-average function “Mean n” very useful, where n is the number of months. What “Mean n” accomplishes is to filter the signal with a so-called “box filter” or moving average of width n months. The effect is to completely kill any sinusoidal components of the signal in question whose periods are integer
submultiples of n months, namely periods of the form n/i months where i is an integer; such a component is called the i-th harmonic of the fundamental period n.
(I’m assuming we’re all nerds here, and can follow this.)
For example “mean 60” will ensure that sine waves of period 60 months,
60/2 = 30 months, 60/3 = 20 months, and so on, will be completely
removed, not a single trace remaining. These removed components of your
signal are the harmonics of the fundamental frequency, about 6.3 nanohertz, a
very slow oscillation compared to your 3 gigahertz CPU, or the million
gigahertz of ultraviolet light.
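To see that numerically, here is a minimal stand-alone sketch (Python with numpy, my own illustration rather than anything from WoodForTrees itself):

```python
import numpy as np

def box_mean(x, n):
    """Width-n moving average ("mean n"), keeping only the fully-covered part."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

t = np.arange(1200)  # 100 years of monthly samples
for period in (60, 30, 20, 120):
    x = np.sin(2 * np.pi * t / period)
    y = box_mean(x, 60)  # "mean 60"
    print(f"period {period:3d} months -> max |output| = {np.abs(y).max():.2e}")
```

The 60-, 30- and 20-month waves come out at floating-point-noise level; the 120-month wave survives, attenuated to about 64% of its input amplitude.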
If you change “mean 60” to “mean 120” then the harmonics of about 3.2 nanohertz
are removed. These include all the harmonics that “mean 60” removed,
but they also include all the frequencies exactly midway between the
harmonics of “mean 60.” So exactly twice as many harmonics are removed
in this way.
In the digital filter business, “Mean n” is what is called a low-pass filter. The WoodForTrees function “Isolate n” is the corresponding high-pass filter that passes through exactly what “Mean n” takes out. The original signal (with ceiling(n/2) months deleted from each end) is equal to the sum of the Mean n and Isolate n signals.
If you want to be honest when filtering, exhibit both the mean and isolate signals together, along with their sum; that is, show all three signals. These are all shorter than the original, having had a total of 2*ceiling(n/2) months removed. State that the third signal is the sum of the other two (and hence is the original signal shortened). People can then judge for themselves whether they like what you’ve done. If you only show the low-passed portion they can’t judge whether they believe what you’ve removed is merely the high-passed stuff.
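If it helps to see the bookkeeping, here is the same decomposition in a few lines of numpy (my own stand-in; WFT's exact end-point trimming may differ by a sample):

```python
import numpy as np

def box_mean(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

def mean_and_isolate(x, n):
    """Split x into its "mean n" and "isolate n" parts, plus the trimmed original."""
    lo = box_mean(x, n)
    k = (n - 1) // 2               # samples lost at the front by "valid" mode
    trimmed = x[k:k + len(lo)]     # original, shortened by about 2*ceil(n/2)
    return lo, trimmed - lo, trimmed

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=600))            # a random-walk stand-in series
lo, hi, trimmed = mean_and_isolate(x, 60)
print(len(x) - len(trimmed), "months trimmed")
print(np.allclose(lo + hi, trimmed))           # True
```

The identity holds by construction: "isolate" is defined as the trimmed original minus its mean.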
The downside of “Isolate” used in isolation is that the amplitude of the high frequency information (seasonal cycles in particular) tends to drown everything else out, including any low frequencies that Mean n failed to pass fully and which therefore leak into the isolate. For this reason it is often worthwhile to apply one or two further Mean n’s with smaller n as well.
A single use of Mean n convolves (technical term, google it, but basically it means filtering) with a box. Using it twice is equivalent to convolving with a triangle instead of a box, which takes out most of the high frequencies that one box leaves in. That plus an Isolate 2n then gives the highest frequencies retained by the Mean n’s.
Means and isolates all commute so you can give them in any order.
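Both facts, that two boxes make a triangle and that the order doesn't matter, are quick to check (same numpy conventions as above):

```python
import numpy as np

n = 5
box = np.ones(n) / n
tri = np.convolve(box, box)                  # two boxes make a triangle
print(np.round(tri * n, 2))                  # [0.2 0.4 0.6 0.8 1. 0.8 0.6 0.4 0.2]

x = np.random.default_rng(1).normal(size=200)
twice = np.convolve(np.convolve(x, box, mode="valid"), box, mode="valid")
once = np.convolve(x, tri, mode="valid")     # one pass with the triangle
print(np.allclose(twice, once))              # True: order and grouping don't matter
```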
If instead of using Mean n twice you use Mean n and Mean 2n/3, the
latter flattens the lobe between 3n and 3n/2 (the 2nd harmonic of 3n)
even better but without taking out as much of the frequencies below
period 3n. For example you might use Mean 30 in conjunction with Mean
20, or Mean 54 with Mean 36. This kind of pairing makes for quite a
nice filter. More elaborate filters can be constructed along the same
lines with as many Means as you want. The fact that they all commute
simplifies the space to be explored. It is highly unlikely that you’ll
ever see much benefit from using more than three Means; I’ve never had
occasion to explore beyond two.
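If you want to compare candidate pairs without plotting, the moving average's amplitude response has a closed form (the Dirichlet kernel); here is a sketch, with the test periods chosen arbitrarily:

```python
import numpy as np

def box_response(n, period):
    """Amplitude response of a width-n moving average at a given period (samples)."""
    f = 1.0 / period                         # cycles per sample
    return abs(np.sin(np.pi * f * n) / (n * np.sin(np.pi * f)))

for p in (120, 90, 60, 45, 36, 30, 24, 20, 15):
    pair = box_response(30, p) * box_response(20, p)    # Mean 30 then Mean 20
    twice = box_response(30, p) ** 2                    # Mean 30 used twice
    print(f"period {p:3d} months: 30+20 pair {pair:5.3f}, 30 twice {twice:5.3f}")
```

Each combination's nulls and leakage are then visible at a glance, which makes the trial-and-error over two or three Means go much faster.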
With careful choices you can see all sorts of interesting things in the
various signals Paul has provided. For example the Joint Institute for
the Study of the Atmosphere and Ocean (JISAO)’s temperature record
of the Pacific Decadal Oscillation since 1900 can be smoothed with
this filter to remove essentially all seasonal variation. Note the
4.1 °C (7.4 °F) plunge in the 15 years between 1941 and 1956!
There is more to ocean variation than just the 0.1 °C amplitude
Atlantic Multidecadal Oscillation.
But why is the PDO called “decadal”? Is there a 10-year (120-month)
cycle? We could try to answer this using isolate 120 which passes
exactly 100% of any 120-month period, in combination say with the 2-3
punch of mean 120/2 mean 120/3 (i.e., mean 60 mean 40). However this doesn’t give as good a
result as when you isolate 90 months (7.5 years) (with mean 30 mean 20),
which gives this beautiful plot. This alternates half-degree oscillations
(1910-1930, 1960-1985) with one-degree oscillations (1930-1960, 1985-2010).
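For anyone who wants to reproduce this style of band-pass off-site, the whole pipeline (two means, then an isolate) is only a few lines; here is a sketch on a synthetic series, not the PDO itself:

```python
import numpy as np

def box_mean(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

def band_pass(x, means=(30, 20), iso=90):
    """Mimic a pipeline like mean:30/mean:20/isolate:90."""
    for n in means:
        x = box_mean(x, n)                 # low-pass away the fast stuff
    lo = box_mean(x, iso)
    k = (iso - 1) // 2
    return x[k:k + len(lo)] - lo           # keep what mean:90 would remove

t = np.arange(1320)                        # 110 years, monthly
x = (np.sin(2 * np.pi * t / 90)            # the 90-month cycle of interest
     + np.sin(2 * np.pi * t / 12)          # seasonal noise
     + 0.5 * np.sin(2 * np.pi * t / 600))  # slow background swing
y = band_pass(x)
print(f"output rms {y.std():.2f} vs 90-month input rms {1/np.sqrt(2):.2f}")
```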
What else is in the PDO data? Let’s try to isolate 240, which gives this
interesting plot. We see a 20-year-period oscillation roughly matching the Hale cycle. We’ll see (and explain) the Hale cycle again in the HADCRUT3 data below.
Anything else? Easy, just change “isolate” to “mean” as in this graph and we get a 60-year-period cycle with an impressive swing (2x amplitude) of more than a degree!
We can play around like this with other datasets than the PDO, for example HADCRUT3 (to get away from ocean oscillations for a bit). This plot picks out a cycle from the global land-sea temperature that correlates well with the solar (sunspot) cycle, with the exception of a curious speed-up during 1920-1930 that is too fast to be sunspots. Put that on the stack of things to look into later.
In 1903 astronomer George Hale discovered that at every other sunspot
cycle the magnetic field of the Sun reversed, as though it contained a
magnet spinning with a period of 256 months (64/3 = 21.33 Earth years).
Two sunspot cycles are now called a Hale cycle in his honor. Pyramids and magnets are good for your health on Earth, but could a magnet spinning nearly a hundred million miles away have any perceptible impact on global climate? Here’s a graph designed to test this hypothesis using isolate 250. You be the judge.
Incidentally I picked the first mean, mean 100, at random, with mean 67
following the above 2/3 rule to eliminate all high frequencies. Had the
second mean been 50 we’d have got this plot. Note the curious oscillation that suddenly pops up during 1920-1930. This consists of three cycles of a mysterious
67-month-period oscillation that snuck through the gap between mean 100
and mean 50.
Oh, but that was what we put on the stack four paragraphs ago. How very
interesting.
Obviously one can play this game forever, finding more and more spectral
curiosities in the various datasets at woodfortrees. Here for example is a single
60-year cycle in HADCRUT3. But you get the idea. Anyone can play.
Which raises the question, what exactly is the game here? Is it physics, metaphysics, philosophy, computer science, statistics, the latest weapons for skeptics and warmists to wage war, or what? My answer would be statistics. While I trust theory up to a point, I trust even more the evidence of my senses augmented by the powerful sensing instruments we’ve developed to date.
When there is a discrepancy between observation and theory, my first
thought is that the observational instrument is at fault. But when I
fail to find any fault, my attention turns to the theory. And sometimes
the theory is shown to be wrong, or at least incomplete. And this is
how science, our understanding of nature, has been progressing for as
long as anyone knows.
My high school French teacher, a white Russian who’d emigrated from Paris to Sydney, was in the habit of saying in class that if he’d taught even one of us to speak and write French properly he would have fulfilled his role here on
Earth. In that same spirit, if I’ve changed the mind of even one reader
of Judith’s blog, even by a little, I will feel that this post was not
entirely in vain.
This is very interesting Vaughan, and indeed finding statistical patterns in climate data is a well-established genre. Consider Fred Singer’s book on 1500-year cycles, for example. However, these are not signals and I think signal is a misleading metaphor. The CO2 is not in there trying to get a message out. It is just statistics in our search for explanations.
It would also be interesting to do statistical pattern analysis on the 130,000+ comments here to date.
David Wojick –
Are you close to volunteering to read through all 130,000 comments? Do you feel the need to perform a penance for something? :)
One benefit of a statistical analysis is you don’t have to look at everything being analyzed. It’d be interesting to read everything posted on this site, but I doubt anyone would want to sit down and do it.
That said, I’ve read at least 80% of all the comments on this site, and it isn’t that bad. If I had unlimited time to spend on anything I wanted, I’m sure I’d go back through all the blog posts and read (or at least skim) everything.
“For example “mean 60″ will ensure that sine waves of period 60 months,
60/2 = 30 months, 60/3 = 20 months, and so on, will be completely
removed, not a single trace remaining. These removed components of your
signal are the harmonics of the fundamental frequency, about 6.3 nanohertz, a
very slow oscillation compared to your 3 gigahertz CPU, or the million
gigahertz of ultraviolet light.”
What?
Look at it this way: day/night frequency = 11.574 microhertz (I think)
(I’m assuming we’re all nerds here, and can follow this.)
I guess that doesn’t apply to you. For the rest of us, this is real basic (fundamental?) stuff.
Really? It is surprising to see the number of errors and misunderstandings that arise from confirmation bias in many comments on both sides of the AGW debate.
The North Atlantic Oscillations, according to the BEST team, appear to be the most influential. I think I have isolated the fundamental one:
http://www.vukcevic.talktalk.net/CET-NV.htm
This bastard thinks you used my suggestion http://judithcurry.com/2011/11/04/pause/#comment-133377 to generate it randomly and showed you what you should have known before. You cannot statistically prove everything; for one thing you need initial assumptions. Most of your assumptions are incorrect. You cannot say statistics trumps philosophy or common sense or any other field.
Even most French people don’t speak French “properly,” whatever that means.
Statistics are like alienists – they will testify for either side
-Fiorello La Guardia
Calling
a) M A Vuckuvic (?spelling)
b) Dr Svensmark (Cue comments about cyclomania?)
Apologies for the spelling & Vukcevic’s comments weren’t visible when I composed my initial post!
Not to worry, I am a bit of a pain to the solar physicists who dismally failed to predict the current solar cycle SC24, so as a bit of reprisal they declared a certain m.a.vukcevic ‘cyclomaniac in supreme’ and a ‘danger to society’ on account of this prediction, which after 7 years the other day became reality:
http://www.vukcevic.talktalk.net/NFC7a.htm
Vaughan Pratt (and Judith Curry)
Thank you for this presentation. It helps clear away ancient cobwebs, and inspire new questions.
And.. http://www.woodfortrees.org/plot/best/mean:85.33/mean:128/isolate:256 is innnnteresting.
Yes it is.
Well, it may be – IF the axes were labelled. Ho hum
Okay Bart R, some questions from far on the other side of the bell curve. I can’t figure out how you arrived at 85.33. I thought it might have something to do with 30 years, but that can’t be it.
Wait, 256/3 = 85.33. But why?
I got how you did the rest: 128 – 85 = 43, 43 + 85 + 128 = 256.
To me it’s about as interesting as shopping with my wife.
JCH
Hale Cycle estimated at 21.3 years, or 256 months.
Not a terribly useful number, as the solar cycle is so variable, but it’s fun to pretend.
The rest you were correct about.
I recommend you get your wife flowers, and never ever let her read you comparing me to her. ;)
Looks like something funky going on with the data set. Reminds me of string being plucked.
reminds me of some work I did long ago. after finding the golden cycle I built the model.. really good. sweet.
then much to my dismay I found out I had grabbed the wrong column of data and instead of an actual measurement I had referenced garbage bits. dang I found cycles in random bits.. go figure..
listen to PI
http://www.youtube.com/watch?v=YOQb_mtkEEE&feature=related
As EDA it’s fun. use half of your data, put the other half aside to test the crazy things you are destined to find.
Steve… That vid is AWESOME!!!!
yes you can find harmony in randomness.
without a physical model to explain the cycles they remain music of the spheres.
It’s interesting, but good grief get a musician to do it:
Wow, that was incredible. The guy is truly amazing.
Jake’s one of my musical heroes …
w.
That’s nice, I guess. Anyway, this is for Josh and the rest of them, who expect the earth to burn up. I can get you tickets (for a steep price) on the next Mothership, departing February 7, 3:15 AM, from Houston, as usual:
http://www.youtube.com/watch?v=uQFGkGk4PSc
Mothership doesn’t appear until about 8:30, but it is an entertaining wait.
Steven,
I was part of the entourage for the earth tour when it started in New Orleans and on to Houston in Oct ’76. The last time I saw p funk was in Oakland in Jan. 1977. George Clinton had screwed up the group shortly thereafter, and Glen Goins died. I don’t think I want to see the geriatric version. I might cry. Diaperman, Gary Shider, died last year. We are getting old Steven.
He’s good, but not this good:
At least on his own…
It’s bassless. Call Bootsy Collins. He will lead them. We need the funk.
Hard to believe that Don actually wants to get funked up. Make my funk the P-funk.
Oh no, Josh! I thought that you would be more into the two white men playing one-and-half guitars genre, like Larkin posted. I had my Superfly period, when I was an undercover brother in Oakland. I was a sight. Big pimp hat, long blond wavy locks, blue eyes, and about 6’6″ in my platforms. I am from Detroit, so I knew the moves. Used to sing in black churches when I was a kid. Those were the days. The ladies didn’t expect much out of a man, and the music actually involved the playing of musical instruments. And I maintain to this day that I never ingested any illegal substances, when I was working.
haha. P funk.. Don next time they come to SF (last time they played at Yoshi’s on Fillmore) you should join us.
Steven,
My reply is above. Late night and creeping age. My eight year old son is running circles around me.
But Pi is an irrational number, so the sequence should go on indefinitely without repeating!
like house music
oh wait.. that only repeats
OT
Thanks Vaughan for the post.
Using the “isolate” feature, here are the turning point years=>[1880,1910,1940,1970]
http://bit.ly/v8QQ3U
The same result I had before:
http://bit.ly/uXy8jw
Girma
Indeed.
http://www.woodfortrees.org/plot/best/mean:42.67/mean:64/isolate:128/plot/best/mean:12/mean:12/plot/best/from:1850/to:1910/trend/plot/best/from:1910/to:1970/trend/plot/best/to:1850/trend/plot/best/from:1970/trend/plot/esrl-co2/offset:-322/scale:0.014/mean:7/mean:5
Turning points?
Yes.
Amplitude high enough to explain post 1970 trend?
No.
But look what does.
that is cute!
MattStat
The power of Isolates:
http://www.woodfortrees.org/plot/best/mean:120/mean:180/isolate:360/plot/best/mean:12/mean:12/plot/best/from:1850/to:1910/trend/plot/best/from:1910/to:1970/trend/plot/best/to:1850/trend/plot/best/from:1970/trend/plot/esrl-co2/offset:-322/scale:0.014/mean:7/mean:5
I’ve hit this curve with every variation I could plausibly construct over the day, trying to prove myself wrong and Girma right.. and no luck.
Doesn’t mean it can’t be done, but if there’s a way Girma could be correct, it’s beyond my skill.
While I’m not nerdy (your word) enough to follow this very well, as a scientist/artist I would like to contribute the fact that pareidolia, the perception of patterns in vague stimuli, seems to be hard-wired in our brains. While this hard-wiring may have allowed our species to perceive the camouflaged predator hiding in the shrubbery, it also allows us to perceive the Man in the Moon. When I read your, hopefully, tongue-in-cheek statement that “Pyramids and magnets are good for your health on Earth,” I wasn’t entirely sure which lens to view this posting through. Could you clarify what you meant by this?
I guess I need to play with this stuff some more.
http://www.woodfortrees.org/plot/hadcrut3vsh/mean:40/mean:60/isolate:240/plot/hadcrut3vnh/mean:40/mean:60/isolate:240
This just don’t make no sense.
dallas
As others are pointing out, apparent patterns absolutely will fool the eye.
These spurious fictions appear as convincing and valid and real as oscillations supported by observed mechanical causes.
Manipulating curves is extraordinarily likely to lead one astray if one does so without a highly compelling logical basis for believing the signal ought be there.
This need for a prime mover is part of why people will look to the planets and sunspots, cosmic rays and libration, the plane of the galaxy and the plane of the solar system for regular impulses to bind oscillations in Earthbound systems.
Mostly, we see too much noise in the system to either disconfirm such causes or to confirm them as anything but the tiniest (on sub-century scales) influences.
Perhaps it is the interference of multiple cyclic causes.
More likely, truly ergodic oscillations (that is, ones which may have every wavelength or frequency from as little as less than daily to as much as infinite) dominate the roots of what we see, and thus we cannot rely on any regular period to emerge as anything but low-amplitude waves superimposed on more major trends.
Bart: Manipulating curves is extraordinarily likely to lead one astray if one does so without a highly compelling logical basis for believing the signal ought be there.
Sadly, it isn’t that simple. Reliable signals have been found when there was no compelling logical basis for believing that the signal ought to be there. And “compelling” lacks an agreed definition applicable to all cases. Even logical bases for belief that everyone found compelling have sometimes turned out to be false.
MattStat
Agreed.
Sort of.
Reliable-looking signals that have no physical correlate do sometimes get derived, too, yes.
“Compelling” is one of those questions of “what is proof” best addressed by experts like Dr. Wojick.
As for the last point, logic that is proven false: that’s the hope of the Scientific Method. I can’t therefore count it against what I’ve said.
Very true Bart, with the solar system being ergodic and the Earth climate system that’s somewhat ergodic, or is it non-ergodic? I always get that wrong. It would be nearly impossible to find a squiggly line in circa 1994 that somewhat matches a squiggly line in the 1920s that means anything. Of course the phantom 1994 squiggly line is offset for some reason.
Never mind, its nothing I am sure.
Given enough time, it will become ergodic as the system can navigate through all its possible states. The problem is that we can’t observe all the possible states because we haven’t been monitoring it for a long enough time. Unless of course we use any insight we can get from something like the Vostok ice core data and other proxies. Those give us some level of expectation on the natural stationary aspects of the climate system.
Unfortunately, we are off the scale WRT Vostok, as we are nearing 100 PPM higher values of CO2 concentration than was ever observed over several hundred thousand years of core data. This means that we could be entering another part of the phase space, but one that has never been traversed before in what we thought was an ergodic process.
That is the scientific quandary confronting us. It’s kind of like asking what would happen if it took 25 hours and not 24 for the earth to rotate on its axis. What kind of side-effects would this cause? Don’t know because we have no data.
WHT
“This means that we could be entering another part of the phase space, but one that has never been traversed before in what we thought was an ergodic process.”
Well, not traversed in 20 million years or so.
And more like if it took 17 hours. (For those who fall back today.)
Bart, take a minute to think about spatial aggregation criteria. You can create any temporal pattern you want just by changing sampling unit boundaries & weightings. The level of ignorance of spatial pattern fundamentals in online solar & climate discussions is tragic. This is a catastrophic obstacle to sensible conception.
If you filter a random signal, the output will be oscillatory. The more you filter it, the more oscillatory it will become.
If you want to filter a signal, you are making a hypothesis about some component of the signal; you wish either to enhance it or remove it. It is not sufficient to say that the filtered signal has oscillations; the question is whether the signal is bandlimited noise or the oscillations are deterministic. This is simply because a random signal has a uniform mean power spectrum, but a random phase spectrum. As the bandwidth of the signal is reduced, so the spectral components that are left will become oscillations in the time domain and their exact shape will depend on their relative (random) phase.
When considering the statistical properties of a filtered signal, there is a mass of signal processing theory relating the degrees of freedom, correlation properties and distributions of the signal.
The overwhelming problem with filtering a signal, such as the temperature record, is the difficulty in telling whether the output signal is a random component or not. What is important are the statistics of the filtered signal. If one takes multiple records of a signal, and there is a deterministic component at a particular frequency band, the distribution of the phase spectra will be deterministic, which is equivalent to determining the correlation between the signals at a particular frequency. The difficulty with the temperature record is that its length is only a few times the period of putative oscillations within it. If one has a model of why the oscillations are caused, and assuming that the relationship is linear, one can determine the correlation between the “driving” signal and the observed signal. (Yes, the effect of the filtering on the statistics of correlation can be handled quite easily.) Some non-linearities may be analysable using homomorphic methods.
The difficulty is that, with the temperature record, one may not have enough data to obtain a reliable estimate of whether a particular component is in fact deterministic. Nevertheless, it is likely that more formal signal processing methods will become important in analysing climate signals and their use will increase.
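The first point, that filtered randomness looks oscillatory, takes a few lines to reproduce; a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=4096)              # pure white noise, no signal at all

# Narrow band-pass via FFT: keep only periods between 50 and 70 samples.
X = np.fft.rfft(x)
f = np.fft.rfftfreq(x.size)            # cycles per sample
keep = (f > 1/70) & (f < 1/50)
y = np.fft.irfft(np.where(keep, X, 0), n=x.size)

# The output looks like a clean ~60-sample 'cycle', made entirely of noise.
zero_crossings = np.sum(np.diff(np.sign(y)) != 0)
print(f"apparent period = {2 * x.size / zero_crossings:.1f} samples")
```

Nothing deterministic went in, yet the output would look at home in any cycle-hunting plot.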
Thank you.
That’s good except for one detail: The more you filter it, the more oscillatory it will become.
With appropriate filtering you can turn it into a straight line, either flat or with a non-zero slope.
I’ve just discovered a 4 year cycle in the data and OMG! it’s getting worse!
http://woodfortrees.org/plot/wti/mean:10/fourier/low-pass:10/high-pass:4/inverse-fourier/scale:4
I realize that this graph was meant as a jab, but it does reveal the possibility of natural stochastic resonances that can occur. The way that these resonances work is that the underlying process is noisy but a specific phenomenon can amplify one of the noise frequencies and this turns it into a full blown resonance. Neither of the effects alone can cause the effect but the combination of the forcing function and the internal response can amplify both. Using WFT as a band-pass filter like you are doing is the artificial complement of the natural effect that may occur.
And of course this has been investigated as a possibility for explaining long term climate change:
“Stochastic resonance in climatic change”
http://onlinelibrary.wiley.com/doi/10.1111/j.2153-3490.1982.tb01787.x/pdf
The question as always is whether a large forcing function such as increased CO2 concentration will trigger some not-before-seen stochastic resonance. It is a perfectly valid scientific question.
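The mechanism is easy to caricature numerically: an overdamped double-well system whose weak periodic forcing cannot, on its own, push the state over the barrier, but which hops almost on schedule at an intermediate noise level. A toy sketch (parameters invented for illustration, not fitted to anything climatic):

```python
import numpy as np

def double_well(sigma, A=0.25, w=2*np.pi/500, dt=0.05, steps=200_000, seed=0):
    """Euler-Maruyama for dx = (x - x^3 + A sin(wt)) dt + sigma dW.
    Crossing the barrier needs |force| > ~0.38, so A = 0.25 alone never hops."""
    rng = np.random.default_rng(seed)
    kicks = sigma * np.sqrt(dt) * rng.normal(size=steps)
    x, out = 1.0, np.empty(steps)
    for i in range(steps):
        x += (x - x**3 + A * np.sin(w * i * dt)) * dt + kicks[i]
        out[i] = x
    return out

for sigma in (0.1, 0.35, 0.8):        # too little / about right / too much noise
    s = np.sign(double_well(sigma))   # which well the state occupies
    t = np.arange(s.size) * 0.05
    amp = 2 * abs(np.exp(-2j * np.pi / 500 * t) @ s) / s.size
    print(f"sigma = {sigma}: response at the forcing period = {amp:.2f}")
```

Too little noise and nothing hops; too much and the coherence washes out; in between, the noise and the weak forcing cooperate.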
I remember sitting in on a session where a statistical consultant was advising an ecologist. The advice given was hopeless, as none of the inference assumptions held. The statistician had not a clue about spatial heterogeneity, patchiness, edge effect, etc. And the biologist hadn’t a clue about pseudoreplication. Neither understood each other’s points. Both ended up angry at me (I have a lot of landscape ecology field experience along with stats teaching experience) after I translated both ways.
Shoot the messenger?
Or respect spatial gradients?
Formalizing time-only analysis doesn’t compensate for ignorance of spatial dimensions. Time travels in only one direction but spatial gradients can turn in time. Nonstationary spatial aggregates become conflated with alleged temporal evolution.
The input of aggregation experts from advanced physical geography is needed. The puzzle’s big enough to enlarge the pie. More careful data exploration (mindful of lurking phase-conditioning) is prerequisite to meaningful inference.
Google “modifiable areal unit problem” to learn about a major type of paradox that is commonly encountered in spatially-referenced data exploration. The online climate discussion is stuck at a bottleneck as long as such spatial pattern fundamentals are completely ignored by all but a handful of contributors.
great post.. answered quite a few questions on this subject that have been rattling about in the (large) empty spaces of my mind.
Loved the “Pyramid Power” reference: :-)
“Pyramids and magnets are good for your health on Earth, but could a magnet spinning nearly a hundred million miles away have any perceptible impact on global climate”
If it works, use it. It is certainly better than making predictions based on linear projections. Curve fitting the data with a Fourier-type series has the advantage that it filters out the noise without significantly decreasing the signal. For example, the FFT as used in IR spectroscopy.
Vaughan
Compliments on helping us learn how to use Wood for Trees.
You note:
and calculate “a graph designed to test this hypothesis using isolate 250.”
Edward Fix modeled the sun in damped oscillation moving about the barycenter (center of mass of the solar system). Ed Fix’s model “predicts two consecutive, weak solar cycles, each eight years long”. That is easily testable over the next one to two decades.
You can read Fix’s paper “The Relationship of Sunspot Cycles to Gravitational Stresses on the Sun: Results of a Proof-of-Concept Simulation,” in Evidence-Based Climate Science 2011 Don Easterbrook ISBN 9780123859563 Ch. 14, pp 335-349. (Search for “barycenter” or “quail” or “beavercreek” or go to page 335.) See especially Fig. 6 and Fig. 10.
I recommend comparing your 250 month filtered graph (flipped top to bottom) with Ed Fix’s solar cycle model based on damped movement about the center of the solar system. i.e. compare the 250 month filtered figure overlapping Fix’s model with the Hale solar cycles.
Note David Stockwell in his solar accumulation theory shows that the ocean temperature should lag the solar cycle by Pi/2 or 90 degrees, e.g. by 3 months in the annual cycle, or 2.75 years in the 11 year Schwabe cycle, or 15 years in the 60 year PDO cycle. e.g. see Phase Lag of Global Temperature.
David R.B. Stockwell, Key Evidence for the Accumulative Model of High Solar Influence on Global Temperature, 22 Aug 2011, viXra:1108.0032
Your 60 year PDO cycle can be compared with D’Aleo and Easterbrook Fig. 41, or with Easterbrook’s PDO-based global temperature predictions. See Fig. 41 in “Where are we headed during the coming century?”
David: Scroll down to Figure 42 in your link:
http://myweb.wwu.edu/dbunny/research/global/coming-century-predictions.pdf
Note the surface temperature graph. It’s bogus. Looks like they spliced TLT anomaly data onto a surface temperature dataset at the peak of the 1997/98 El Nino. That looks like no dataset that I’m aware of.
Regarding Figure 41, the PDO does not represent the SST anomalies of the North Pacific in any way shape or form. On decadal timeframes, the PDO is actually inversely related to the detrended SST anomalies of the North Pacific. In other words, any relationship implied by Figure 41 is coincidental and likely due to the fact that both are responding to ENSO.
Regards
Thanks for clarification.
Messié (2011) appears to have analyzed those issues:
Messié, Monique, Francisco Chavez, 2011: Global Modes of Sea Surface Temperature Variability in Relation to Regional Climate Indices. J. Climate, 24, 4314–4331. doi: 10.1175/2011JCLI3941.1
Prof. Don Easterbrook writes: Global Cooling is Here: Evidence for Predicting Global Cooling for the Next Three Decades
Thanks Dr Pratt for the tutorial.
Using the (very useful, so many thanks also to P. Clark) WfT data sets and tools one can easily see how the variations of T° & [CO2] are doing relative to those of the PDO & AMO (many new data available, such as the AMO index!).
o Here for [1960 – 1985] period
o Here for [1985 – 2010] period
Note : I’ve divided [1960 – 2010] period in 2 periods of 25 years each to get a better zoom.
One can easily see how variations of PDO are preceding those of T°, which are themselves preceding those of [CO2]. This formally proves that [CO2] is definitely not the main climate driver, thus falsifying AGW theory.
Earth’s Climate is changing due to natural processes among which PDO and AMO obviously play a leading role.
Vaughan Pratt, you wrote in the post, “Note the 4.1 °C (7.4 °F) plunge in the 15 years between 1941 and 1956! There is more to ocean variation than just the 0.1 °C amplitude Atlantic Multidecadal Oscillation.”
Vaughan, the PDO data you’re discussing from JISAO is not presented in deg C. The data has been standardized. There are no units.
And your attempted comparison of it to the “0.1 deg C amplitude Atlantic Multidecadal Oscillation”, is wrong for that reason and for another. The AMO data is detrended North Atlantic Sea Surface Temperature anomalies. Very simple. All one has to do is download the complete long-term SST anomaly data for the North Atlantic (0-70N, 80W-0). ERSST.v3b, HADISST, HADSST2, or the dataset that ESRL uses, Kaplan, can be used. Then detrend the data and you’ve got the AMO.
But that’s not how the PDO is prepared. To create the PDO data from JISAO, the SST anomaly data from each 5 deg X 5 deg grid of the North Pacific north of 20N is detrended. Then a principal component analysis is performed on all of that data. The PDO is the leading Principal Component. Then the data is standardized by dividing it by its standard deviation. And for the PDO data, JISAO uses two obsolete SST datasets (UKMO and Reynolds OI.v1) through 2001 and the current Reynolds OI.v2 SST data from 2002 to present. The end product of all that, the PDO, does not represent the Sea Surface anomalies of the North Pacific, north of 20N. It is actually inversely related to the detrended SST anomalies of the North Pacific north of 20N. I’ve written a couple of posts about the PDO to discuss what it represents and, more importantly, what it does not represent. Here’s a link to one:
http://bobtisdale.wordpress.com/2010/09/03/an-introduction-to-enso-amo-and-pdo-part-3/
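To make that recipe concrete, here is a minimal numpy sketch of the three steps on synthetic data (my own illustration, not JISAO's code; real inputs would be the gridded North Pacific SST anomalies, and area weighting and the dataset splicing are omitted):

```python
import numpy as np

# Synthetic stand-in: real inputs would be monthly SST anomalies for each
# 5x5 deg North Pacific grid cell north of 20N.
rng = np.random.default_rng(3)
months, cells = 1200, 150
sst = rng.normal(size=(months, cells)).cumsum(axis=0) * 0.01

# 1. Detrend each grid cell individually (removes mean and linear trend).
t = np.arange(months)
coef = np.polynomial.polynomial.polyfit(t, sst, 1)
detrended = sst - np.polynomial.polynomial.polyval(t, coef).T

# 2. Leading principal component of the detrended field, via SVD.
u, s, vt = np.linalg.svd(detrended, full_matrices=False)
pc1 = u[:, 0] * s[0]

# 3. Standardize: divide by the standard deviation -> a unitless index.
pdo_like = pc1 / pc1.std()
print(round(pdo_like.std(), 2))   # 1.0: the index carries no deg C units
```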
If you were to run through all of the PDO-preparation process and not standardize the data, you’d find that its variations are comparable in magnitude to those of the AMO. Or if you compare detrended North Pacific SST anomalies (north of 20N) to detrended North Atlantic SST anomalies (the AMO), you find their variations are comparable in magnitude. (They can run in and out of synch.)
Of the PDO and ENSO, the dominant ocean oscillation is ENSO. The variations of the PDO, if the data has not been standardized, are dwarfed by those of NINO3.4 SST anomalies. For example, I prepared the following graph during an extended discussion of the PDO versus NINO3.4 SST anomalies at another blog. It compares the 1st principal components of the detrended SST anomalies for the North Pacific north of 20N (the PDO), for the Eastern Tropical Pacific (as a reference), and for the NINO3.4 region (a common ENSO index):
http://i43.tinypic.com/bijuvt.jpg
The PDO is an also-ran.
Regards
And a regards to you too, Judith.
Just playing, but this seems interesting:
http://www.woodfortrees.org/plot/hadcrut3vgl/isolate:720/mean:240/mean:160
If you change the isolate to a mean then what is left is a very smooth curve of about 0.5º per century.
http://www.woodfortrees.org/plot/hadcrut3vgl/mean:720/mean:240/mean:160
f=1/t
f – frequency (Hz)
t – time (secs)
You have to be careful with filtering, be it active/analogue or digital/numerical, because filtering in one domain (say frequency) will almost certainly induce change in the other domain (say time) that may lead to phase lags/leads which affects interpretation of the output.
Glad you’re enjoying it! If ‘isolate’ floats your boat, just wait til you find the Fourier stuff :-)
http://www.woodfortrees.org/plot/jisao-pdo/fourier/high-pass:2/low-pass:25/inverse-fourier
One big warning to everyone though: Remember the graphs are self-scaling. Applying a band-pass can generate some lovely looking curves, but if you compare them with the original signal you’ll find their amplitude is tiny. Best plot them with a slightly smoothed original for comparison…
http://www.woodfortrees.org/plot/jisao-pdo/fourier/high-pass:2/low-pass:25/inverse-fourier/plot/jisao-pdo/mean:12
and since we’re talking about harmony, try this:
http://www.woodfortrees.org/audio/jisao-pdo/fourier/high-pass:2/low-pass:25/inverse-fourier
Glad to see someone is using WFT for more than 10-year trend lines ;-)
Paul
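Paul's self-scaling warning is easy to quantify; here is a rough stand-alone analogue of his Fourier band-pass (the 2- and 25-year period cutoffs are my reading of the WFT parameters, which may differ):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(size=1200))      # stand-in monthly series, not the PDO

X = np.fft.rfft(x - x.mean())
f = np.fft.rfftfreq(x.size, d=1/12)       # cycles per year at monthly sampling
keep = (f > 1/25) & (f < 1/2)             # pass periods between 2 and 25 years
y = np.fft.irfft(np.where(keep, X, 0), n=x.size)

print(f"band-passed rms is {y.std()/x.std():.0%} of the original's")
```

Plotted alone on self-scaled axes the band-passed curve looks dramatic; the printed ratio is the honest context.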
I’m going to rain on everyone’s parade. Aside from the obvious point (as demonstrated) that you can find what you’re looking for with enough filtering, who cares?
It’s my understanding that the problems have more to do with the original sampling. Daily temps taken once a day, or Tmin/Tmax effects. Satellites sampling 2x a day in some locations, 1x a day in other locations.
Pekka noted the most obvious cycles we know are the diurnal and annual (seasonal) cycles. Are we sampling the diurnal cycle accurately enough?
“John Christy is a big fan of Tmax” – as quoted elsewhere on this blog or perhaps CA. Shouldn’t all skeptics be? After all, Tmin is supposed to be more sensitive to AGW in some places. In fact, after reading JC’s textbook chapter on planetary atmospheres and GHE, I sort of get why global warming is supposed to affect lower temperatures (in both time and space) more.
However doesn’t that throw a monkeywrench into the attribution of severe storms?
Enough rambling…
On 1920-1940:
http://wattsupwiththat.files.wordpress.com/2011/10/vaughn-sun-earth-moon-harmonies-beats-biases.pdf
[I left out the section on geomagnetic aa index – another time – (it shows the 1920-1940 pattern too, in a patently nonrandom twist of anomalies…)]
Paul, in the link you ask what mechanisms. One I think you left out is the impact of UV and near-infrared on the ratio of atmospheric to surface solar absorption. During a solar minimum, surface absorption shifts slightly from deeper ocean depths, which have a longer time constant, to shallower depths, while the near infrared, which changes less than UV, keeps atmospheric absorption closer to average, amplifying cloud negative feedback. Depending on cloud cover percentage, this can be significant. This tends to vary with natural climate variations, so it is not evident every cycle.
The distribution of wavelengths changes over a cycle. Does it change in the same way over every cycle? No, but changes lead to change in the shape & motion of the atmosphere. (It’s not the simple, linear, spatially-uniform, temporal-only effect on a global average that some central mainstreamers naively suggest!)
We have enough info to solve the ENSO puzzle. The solution could be close at hand if we can attract the best geometric minds and empower them with sufficient resources.
The various metrics we have are catching shadows of systematic shape & flow changes. The central mainstream focus has been on averages, but it is GRADIENTS that drive flow. For anyone looking for a quick primer, I gave an overview here: http://wattsupwiththat.com/2011/10/15/shifting-sun-earth-moon-harmonies-beats-biases/#comment-769231 (start at the 5th paragraph).
Whenever you see a mainstreamer denying the significance of this result [ http://wattsupwiththat.files.wordpress.com/2011/10/vaughn4.png ], you’re witnessing severe ignorance of the fundamental building blocks of natural variations.
Since the result is so fundamentally trivial, the central mainstreamers might as well argue that 1 + 1 does not equal 2 in their conceptual framework.
There’s a parallel story for geomagnetic aa index. The aliasing turns out a little different since the asymmetries (in the magnetic field) have a different spatial pattern (than land-ocean distribution). I caution everyone to be very careful with the confounding, keeping in mind shared aliasing across different physical pathways with some common parts.
This is a nice feature in principle but running mean sucks big time as any kind of filter.
This graphic shows some El Nino data filtered with a gaussian and a running mean.
http://tinypic.com/r/10cw9sl/5
It’s hard to believe that these were exactly the same numbers going in. In particular note the complete reversal of the 1941 peak. Other key features are peaks bending to one side when the base line differs (1960) and the significant amount of HF noise that gets through this supposed low-pass RM filter (around 1950-55).
Running mean is a very poor choice of filter. Please help it die with dignity (and fast).
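What Greg is seeing is the box filter's negative side lobes: between the nulls at n and n/2 the running mean's response dips below zero, so any component in that band comes out upside down. A minimal check (numpy):

```python
import numpy as np

n = 12                                    # 12-sample running mean
t = np.arange(240)
x = np.sin(2 * np.pi * t / 8)             # period 8: between n and n/2
y = np.convolve(x, np.ones(n) / n, mode="valid")

centres = t[: y.size] + (n - 1) / 2       # window-centre times (group delay)
print(np.corrcoef(np.sin(2 * np.pi * centres / 8), y)[0, 1])  # about -1.0
```

The theoretical response at period 8 with n = 12 is about -0.22: attenuated but sign-flipped, which is exactly how a peak becomes a trough. A Gaussian's response, by contrast, is positive everywhere.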
Can you plot that with the raw data also? I’m not sure I like the gaussian filter that much better. It seems to be over enhancing the peaks and troughs.
Repeat narrow-kernel integration does something very similar to the gaussian of which you’re so fond. It actually has superior qualities for many, if not most, exploratory purposes. For example, it eases interpretation & diagnostics if the kernel width is set to dominant modes such as the day or year. It seems we agree that wide kernels introduce phase reversals that can unnecessarily complicate interpretation & diagnostics for most (not all) exploratory purposes.
One thing we should keep in mind is that temporally-local & -global dominant modes can be pinpointed via critical points in the boxcar integral scale derivative. The forefathers of spectral analysis knew this, but my impression is that many online climate discussion participants are of the school that throws data at black-box statistical software without ever finding time to independently build conceptual foundations from the ground up.
The timescale of statistical summary phase-reversal is critically informative. The same is true in space (not just time) and we need to remain cognizant at all times that marginal spatial & temporal distributions are not the same thing as a joint spatiotemporal distribution. The climate scientists & physicists involved in the online climate discussion show zero awareness in this area of pattern fundamentals.
Summary scale discontinuities are critically informative about constraints, for example constraints on alleged “spatiotemporal chaos”. You can contain chaos in a box. And how will you recognize an invisible container that changes shape with the seasons and with interannual & multidecadal physical aliasing? It can be done with multiscale summaries in 16 dimensions (integral, 0th, 1st, & 2nd derivatives for x, y, z, & t). Both the magnification & focal length of the microscope are adjustable.
Oh, the other RM “feature” I forgot: total reversal of peaks. Look at the period around 1970 in that last plot. The running mean has actually _inverted_ the peaks.
How useful is that if you are exploring for correlations in climate?
Good to see someone brought up this issue. Repeat narrow-kernel integration is an alternative that preserves dominant peak & valley locations.
The bigger issue here, though, is ignorance of spatial aggregation criteria. I could count on 1 hand the number of online climate discussion participants who ever even mention such fundamentals. Hopefully people aren’t naive enough to think this is something they can just ignore. (That’s the impression one gets, even from ‘experts’, who keep the issue neatly swept under the rug …if they have any awareness of it at all.)
“…total reversal of peaks…”
Perhaps you did not account for the group delay, which is 1/2 of the width of the running average? To align things in time, you should assign the time tick at the midpoint of the filter.
But, looking at your graph, I think you simply are comparing apples and oranges, a fast roll-off gaussian weighting versus a mere -20 dB/decade roll-off average filter. You can get better suppression of higher frequency noise if you “average the average”, which is equivalent to a triangular weighting, or an average of the average of the average, and so on. Why go to that trouble? Well, if you intend to downsample the data, you need to prevent aliasing to zero frequency, and an average filter with full-width decimation does that.
A running mean is not a great filter, but it does have one property which makes it very desirable for processing data which is to be downsampled: if you line up the weighting functions end-to-end (throwing away N-1 points of every N in a running average, where N is the length of the average), you will not get aliasing to zero frequency. This is terrifically important to avoid, because if you subsequently numerically integrate the data, signals aliased to zero frequency will cause the integration to diverge – a cyclic signal which aliases to zero frequency is a bias. A noisy signal which aliases to dc, when integrated, begets a spurious random walk. A weighting profile which looks like a Gaussian curve generally will not prevent aliasing to zero frequency when you downsample the data.
If you combine averages one on top of another, you should not decimate by a greater amount than the shortest average to prevent aliasing to zero frequency. So, for example, two equal length averages on top of one another is equivalent to a triangular weighting. A triangular weighting should not be more than half-width decimated to avoid aliasing to zero frequency.
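Bart's aliasing-to-zero-frequency point in one toy example (weights invented for illustration): a cycle at exactly the decimation period vanishes under a full-width running average but survives a same-width Gaussian-like weighting as a pure bias:

```python
import numpy as np

n = 12
t = np.arange(12_000)
x = np.sin(2 * np.pi * t / n + 1.0)       # a cycle right at the decimation period

box = np.ones(n) / n
g = np.exp(-0.5 * ((np.arange(n) - (n - 1) / 2) / 2.0) ** 2)
gauss = g / g.sum()                       # Gaussian-like weights, same footprint

for name, w in (("boxcar", box), ("gaussian", gauss)):
    y = np.convolve(x, w, mode="valid")[::n]    # filter, then keep 1 point in n
    print(f"{name:8s}: mean of decimated output = {y.mean():+.3f}")
```

Integrate the Gaussian-decimated series and that constant bias becomes a steady drift; with noise instead of a pure tone it begets the spurious random walk Bart describes.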