Two contrasting views of multidecadal climate variability in the 20th century

by Judith Curry

Our new stadium wave paper is now published.

Context

Almost a year ago, Wyatt and Curry published a paper discussed at CE in a post The Stadium Wave.  Further discussion/explanation of the stadium wave is found at Marcia Wyatt’s web site wyattonearth.

Our new paper was motivated by Mann’s recent paper characterizing the AMO, which was critiqued by Nic Lewis:  Critique of Mann’s new paper characterizing the AMO.  In the Mann et al. paper, they wrote:

“Claims of multidecadal ‘stadium wave’ patterns across multiple climate indices are also shown to be likely an artifact of this flawed procedure for isolating putative climate oscillations.”

So, do Mann et al. have a valid point re the stadium wave?  We immediately started discussing a response, although Geophysical Research Letters does not allow replies, notes or correspondence regarding its published papers, a policy that I personally don’t like.  So, led by Sergey Kravtsov, we decided to write a stand-alone paper that takes a broader perspective on the issues raised by Mann et al. and also conducts a deeper statistical analysis of the stadium wave.

Two contrasting views of multidecadal climate variability in the 20th century

Sergey Kravtsov, Marcia Wyatt, Judith Curry, and Anastasios Tsonis

Abstract.  The bulk of our knowledge about causes of 20th century climate change comes  from simulations using numerical models. In particular, these models seemingly reproduce the observed nonuniform global warming, with periods of faster warming in 1910–1940 and 1970–2000, and a pause in between. However, closer inspection reveals some differences between the observations and model simulations. Here we show that observed multidecadal variations of surface climate exhibited a coherent global-scale  signal characterized by a pair of patterns, one of which evolved in sync with multidecadal swings of the global temperature, and the other in quadrature with them. In contrast, model simulations are dominated by the stationary — single pattern — forced signal somewhat reminiscent of the observed “in-sync” pattern most pronounced in the Pacific. While simulating well the amplitude of the largest-scale — Pacific and hemispheric — multidecadal variability in surface temperature, the model underestimates variability in the North Atlantic and atmospheric indices.

Key points:  

  • An objective filtering method identifies a global mode of climate variability
  • Space–time structure of climate in historic simulations reflects forced signal
  • Modeled multidecadal climate variations are too weak, especially in atmosphere

Published online by Geophysical Research Letters, [link] to abstract.  The full manuscript is found here: Kravtsov et al.  Previous versions of the manuscript plus reviews and responses are posted [here].

Marcia Wyatt’s post

Marcia Wyatt has an extensive post on her website describing the paper [link].  Here are some excerpts:

Mann et al. claim that flawed methodology has generated an apparent, or false, propagation in the [stadium wave] signal. The contended flawed methodology is linear detrending, a statistical step once innocent in its ability to highlight lower frequency behavior within a time series – a signal often associated with the AMO – but today, a source of controversy, a portion of which has been aimed at the stadium wave.

Kravtsov et al. (2014) consider the challenge, adopting a strategy that evaluates phase uncertainties of the propagation, as well as spatio-temporal patterns of the signal in modeled and observed databases. Findings show: i) the propagation of the “stadium wave” is highly unlikely to be due to random occurrence or flawed methodology; and ii) pronounced and fundamental differences occur between analyses using observation-based data and analyses using model-generated data. Differences involve spatial patterns of the signals: ocean indices of the Atlantic and Pacific and atmospheric indices across the hemisphere play significant roles in the observed stadium wave, while only the Pacific appears significant in model-generated data. Differences also involve temporal patterns: the model-based signals are in-phase, stationary ones, requiring only one mode of variability to explain their profile, while observation-based signals are not in-phase, requiring two modes of variability to explain their alignment.

 To better “see” these undulations [e.g. AMO], researchers traditionally have removed the long-term linear trend of a time series so as to highlight the higher-frequency fluctuations, the multidecadal ones among them. Yet, with the attribution issue (i.e. forced versus intrinsic) unresolved, appropriateness of this technique has come into question. The argument against it contends that linear detrending assumes removal of the forced signal. Thus, it is argued that if a linear trend is removed, a vestige of the forced wiggle is imprinted upon the remaining signal. In that case, if one interprets that detrended product to be of intrinsic character, the role assigned to it will be overestimated. 
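The mechanics of linear detrending are simple enough to sketch. The toy example below (all amplitudes and the 60-year period are invented for illustration, not taken from any of the papers) builds a synthetic index from a linear trend, a multidecadal oscillation, and interannual noise, then removes the least-squares line:

```python
import numpy as np

# Toy illustration (invented numbers): a synthetic climate index made of
# a secular linear trend, a ~60-year oscillation, and interannual noise.
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

trend = 0.007 * (years - 1900)                                # secular warming
oscillation = 0.2 * np.sin(2 * np.pi * (years - 1900) / 60)   # multidecadal cycle
noise = 0.05 * rng.standard_normal(years.size)                # interannual noise
index = trend + oscillation + noise

# Fit and remove a straight line, the traditional step described above.
slope, intercept = np.polyfit(years, index, deg=1)
detrended = index - (slope * years + intercept)

# The residual tracks the underlying oscillation closely.
r = np.corrcoef(detrended, oscillation)[0, 1]
print(r)
```

The caveat raised in the text is exactly what this sketch hides: if the forced signal is itself curved rather than straight, its curved part survives the detrending and can masquerade as intrinsic variability.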

Linear detrending is implicated as a fatal flaw in a relatively new hypothesis regarding multidecadal-scale climate variability – the stadium-wave hypothesis. The stadium-wave hypothesis of multidecadal-scale climate variability assumes that synchronized network behavior governs the low-frequency quasi-periodic oscillatory component shared among a collection of interacting ocean, ice, and atmosphere indices. Phasing-offset among the synchronized network members reflects hemispheric propagation of the signal, the pace of which appears to be governed by variability in the AMO.

If AMO is linearly detrended, is there, or is there not, a vestige of the forced signature imprinted upon the residual, thereby exaggerating the perceived role of internal processes? We arrive back at the impasse. Yet this impasse is not to be conflated with fundamentals of the stadium-wave signal. Stadium-wave propagation is hypothesized to have an intimate connection with AMO – the latter being its pace setter. Perhaps not immediately intuited, this association says nothing about the driver of AMO. Propagation of the stadium wave proceeds, so the hypothesis goes, irrespective of the source of AMO oscillatory energy, be it external forcing, internally generated variability, or a combination of both.

Mann et al. (2014) contend that the propagation – the distinguishing signature of the stadium-wave hypothesis – is no more than a statistical artifact of flawed methodology – i.e. of linear detrending. Linear detrending is a step in the analysis used to document the stadium wave, its intended purpose being to remove the centennial-scale trend so as to highlight multidecadal variability. But, regardless of the intended use of the method, it is worth taking into account the findings of Mann et al.

Kravtsov et al. (2014) considered Mann et al.’s contention that the stadium-wave propagation is no more than an artifact of methodology. Mann et al. illustrated that a random realization of interannual variability (white noise), superimposed upon their artificial climate indices – an in-phase forced signal common to each – would, once linearly detrended and smoothed, produce a false appearance of propagation. Choice of noise realization would dictate propagation sequence and phase offsets. Thus, one could generate a variety of different “stadium waves” according to the nature of the white noise imprint, an outcome implying that the propagating stadium-wave signal identified by Wyatt and collaborators was illusory, and any apparent stadium-wave lags were statistically insignificant.
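This synthetic-network argument can be paraphrased in a few lines of code (a rough sketch under assumed parameters, not Mann et al.’s actual procedure or numbers): several indices share one in-phase signal, each receives independent white noise, and after linear detrending and smoothing, the lag of maximum cross-correlation between index pairs scatters away from zero.

```python
import numpy as np

t = np.arange(1900, 2001, dtype=float)

def detrend(x):
    # remove the least-squares linear trend
    return x - np.polyval(np.polyfit(t, x, 1), t)

def smooth(x, w=15):
    # simple running mean; "valid" mode drops the edges
    return np.convolve(x, np.ones(w) / w, mode="valid")

def best_lag(a, b, max_lag=20):
    # lag k (years) at which correlation of a against b offset by k peaks
    best_k, best_r = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            x, y = a[k:], b[:b.size - k]
        else:
            x, y = a[:a.size + k], b[-k:]
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_k, best_r = k, r
    return best_k

def apparent_lags(seed, n_indices=4, noise_amp=0.3):
    # indices = one shared in-phase signal + independent white noise
    rng = np.random.default_rng(seed)
    forced = 0.006 * (t - 1900) + 0.12 * np.sin(2 * np.pi * (t - 1900) / 70.0)
    idx = [smooth(detrend(forced + noise_amp * rng.standard_normal(t.size)))
           for _ in range(n_indices)]
    return [best_lag(idx[0], p) for p in idx[1:]]

# apparent inter-index lags over 10 noise realizations, despite an
# in-phase construction with no real propagation at all
all_lags = [lag for s in range(10) for lag in apparent_lags(s)]
print(all_lags)
```

Which “stadium wave” sequence appears depends only on the noise seed, which is the core of the Mann et al. objection.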

Kravtsov et al. concede that a collection of indices constructed from a commonly shared, in-phase, forced signal, whose only differences are those imposed by regional noise processes, does generate false “stadium waves” once linearly detrended and smoothed – as was done in Mann et al. In methodological contrast, Wyatt and collaborators, in their stadium-wave analyses, have sought to identify timescales of co-variability among network indices. Their use of Multi-channel Singular Spectrum Analysis (M-SSA) — a generalized application of the more commonly known Empirical Orthogonal Function (EOF) analysis, adept at identifying propagating signals and shared variability among indices — has documented multidecadal-scale stadium-wave propagation (a structure of M-SSA-generated phase-shifted signals) in a variety of geophysical index collections. The phase shifts between the “real” stadium-wave indices are, of course, subject to uncertainty, just as are the indices in the synthetic example of Mann et al. (2014). However, the real question is whether these uncertainties are so large as to render the stadium-wave propagation statistically insignificant. That is the point Kravtsov et al. first investigate.
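M-SSA itself reduces to an eigendecomposition of a lag-covariance matrix. The bare-bones sketch below (a single-frequency, two-channel toy with illustrative names and parameters, not anything from the paper) shows the characteristic M-SSA signature of an oscillation: a pair of near-degenerate leading eigenvalues capturing a signal that here propagates a quarter-cycle between channels.

```python
import numpy as np

# Two synthetic channels carrying the same oscillation, phase-shifted by
# a quarter cycle, plus independent noise (all parameters illustrative).
rng = np.random.default_rng(2)
n, M = 200, 40                          # series length, embedding window
t = np.arange(n)
ch1 = np.sin(2 * np.pi * t / 60) + 0.2 * rng.standard_normal(n)
ch2 = np.sin(2 * np.pi * t / 60 - np.pi / 2) + 0.2 * rng.standard_normal(n)

def embed(x, M):
    # lag-embedding: each row is an M-long sliding window of the series
    return np.array([x[i:i + M] for i in range(x.size - M + 1)])

# Stack the lag-embedded channels into one trajectory matrix.
X = np.hstack([embed(ch1, M), embed(ch2, M)])   # shape (n-M+1, 2M)
X = X - X.mean(axis=0)

# Eigendecomposition of the lag-covariance matrix.
C = X.T @ X / X.shape[0]
eigvals = np.linalg.eigh(C)[0][::-1]            # descending order

# An oscillation shows up as a near-degenerate leading eigenvalue pair
# capturing most of the variance across both channels.
var_frac = eigvals[:2].sum() / eigvals.sum()
print(var_frac)
```

A full M-SSA would go on to build reconstructed components (RCs) from this leading eigenpair; the sketch stops at the variance it captures.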

Merging this view of M-SSA generated phase-shifted signals plus noise with the strategy of Mann et al. in constructing surrogate networks, Kravtsov et al. show that the phase uncertainties of each index are significantly smaller than the actual phase lags (lag time in years between propagating indices) among those indices in the “real” stadium wave. This finding supports the Kravtsov et al. counterargument to Mann et al.’s contention that artificial propagation is a product of sampling associated with climate noise. According to Kravtsov et al., such sampling variations are unlikely to explain the propagation observed in the “real” stadium wave, thus weakening Mann et al.’s challenge.

JC reflections

The reviews on this paper were thorough (see here for previous versions of the paper, the reviews and our responses) – three rounds of reviews involving four reviewers.  The reviewer comments weren’t particularly substantive with regard to the actual analysis, but did help clarify the paper.

If you check the reference list, you will see that Mann’s papers figure prominently – a natural editorial decision would be to invite Mann to review the paper.  We specifically requested that Mann not be involved in the review process, owing to public statements that he has made about me.  I frequently make such a request when submitting a paper (related to public statements made by people that I regard as too conflicted to review a paper of mine).  As far as I can tell, the editors have always honored this request.  In the absence of blind peer reviewing, I regard this to be a very useful strategy to help support fair peer review.

The insights from the stadium wave relate to the propagation of the multidecadal signal; per se, the stadium wave does not directly provide insights into the issues of 20th century attribution or transient climate sensitivity.  The point raised by Mann and the alternative perspective provided by our paper do raise the broader issues of whether you can separate forced from intrinsic variability, and if so, how to do this.  The big unresolved question remains as to the effect of multidecadal internal variability on our estimates of climate sensitivity and 20th century attribution of warming.

 

345 responses to “Two contrasting views of multidecadal climate variability in the 20th century”

  1. Congratulations, Professor Curry!

    AGW promoters are more desperate as The House of Cards Collapses

    http://centinel2012.com/2014/09/28/a-doj-original-series/

  2. Nice to see science done the way it is done best. I look forward to reading discussions of this work.

    • judith never engages critics.

      • moshpit plays dumb.

      • Mosh doesn’t need to “play”

      • It’s over, moshpit.

      • OT, but WRT engaging critics (or not), when you leave a comment at ATTP why don’t you take a Wayback snapshot of it? Or don’t they even make it out of moderation before being “removed by the moderator”?

      • Im done with ATTP. Same mod games as rc.

      • Mosh

        Have you ever commented at the Guardian Blog? That has the mother and father of attack dogs who will remove any dissenting comments they think might lead somewhere they don’t like-questioning the Monbiot view of the climate world

        tonyb

      • Oh noes, censorship!

        You weren’t being a jerk by any chance?

        You have form.

      • If that were the first comment zapped it would be one thing. But it’s not. There is nothing wrong with moderation. However the mod has a history of changing rules rhapsodically. And commenting on moderation or asking for clarification is not allowed. Perfect climateball approach. Change the rules arbitrarily, apply them willy-nilly, and then preclude any discussion that seeks to clarify.
        Climateball 101.

      • Michael, do you ever wonder why the more skeptical sites ban so few people, and the “realist” sites ban just about anybody who asks uncomfortable questions? Which side is more likely to be trying to enforce a groupthink?

      • With a snapshot, couldn’t you make an issue of it? And if they put in a robots.txt file to block Wayback, couldn’t you make an issue of THAT?

      • TJA,

        Who is banned?

        Mosher had a comment removed.

        The funniest thing at ATTP was when the mod snipped one of Anders comments!

    • OMG! Little joshie has gone with intended irony, this time. He’s learning, slowly.

    • His comment had something to do with Willard’s past bullying of BS.

      • ==> “His comment had something to do with Willard’s past bullying of BS.”

        ??

      • A better alternative as I have explained for years
        Is to leave offending comments up. Point out the rule violated and issue a snip warning
        Violate a snip warning and you get a ban warning.
        Violate a ban warning and get banned.
        Willard is a bully
        Mod. This is adhom. Your next violation will be snipped
        Willard is a (snip)
        Mod
        Your adhom has been snipped
        Stop or you will be banned
        Willard is a dunce
        Mod. You are banned

        That way people can see.
        The behavior
        The rule
        The judgement

      • Looks like little willie is drunk on power. And he used to be respectable, in real life.

      • As much as I would like to pay some deference to the definition that willard gives for climateball, I also take notice of the fact that language changes.

        Team ClimateBall.
        willard, joshua, michael, stokes, there are others.

        Instead of being a general term to describe “changing the rules” I think the term will change ( via specialization) to mean whatever those guys do.
        Nicknames are hard to shrug off. The dwarves and stooges was too limiting. ClimateBallers will stick

      • How about climateballer runts?

      • runts is too pejorative

      • Where I come from, “ballers” implies proficiency at a desirable skill set. Runts is more appropriate for these little varmints.

      • It’s not important, but I do ask people not to refer to me as “BS.” The abbreviation has unfortunate connotations.

      • if I can live with SM you can live with BS

      • I think you demean the site and the owner when you call people “runts.”

      • That was my initial thought as well ;-)

      • Joseph –

        ==> “I think you demean the site and the owner when you call people “runts.””

        I disagree. Don’s (and mosher’s) use of insults is evidence about nothing other than how Don (and mosher) approach debate, think, construct valid arguments, etc. – and we knew that anyway.

        Don’t sweat it. It’s information. It’s marginally instructive about the debate. Sticks and stones and all that… It is of no real consequence to anyone’s life.

      • I did not moderated that thread, and labeling is not nicknaming.

        Auditors ought to get their facts straight.

        Black hat marketers do whatever the invisible hands pay them for.

      • I love how you guys demonize the “invisible hand” concept in economics, yet have zero problems with the “invisible hand” which controls survival of the fittest in biology. You guys are nothing but believers in “Intelligent design” when it comes to economics because you want it both ways.

      • Willard climateball champ.
        I didn’t say you moderated and wouldn’t take your word that you didn’t.
        You called bs a name. Bullyball

      • I’d rather just call them the Hockey Pucks.

      • Steven Mosher | September 29, 2014 at 11:40 am |
        “A better alternative as I have explained for years
        Is to leave offending comments up. Point out the rule violated and issue a snip warning….”

        The world will be a better place….

      • I did not moderated that thread, […]

        Are you prepared to respond to his comment if he posts it here?

      • I don’t know Joshua. It does lower my opinion of the site. My opinion may not be representative, but I am confident that there are people who are turned off by it as well. It is quite frequently found at places like WUWT or Yahoo. Not necessarily always name calling, but other forms of insults intended to demonize or demean your perceived opponent.

      • From the article:

        1. THE EARLIEST HOCKEY GAMES WERE PLAYED WITH CHUNKS OF FROZEN COW DUNG.

        http://mentalfloss.com/article/32285/11-fun-facts-about-hockey-pucks

      • > I didn’t say you moderated.

        Black hat marketers can try to dodge with plausible deniability after whistling “the mod has a history of changing rules rhapsodically”, which has been heard by Don Don to imply me.

        Auditors are known to confuse saying and implying:

        http://andthentheresphysics.wordpress.com/2014/09/29/the-ghost-of-present-climateball-tm/

        Two common ClimateBall ™ moves, with varying efficacy.

      • big babies

      • In the original klingon ClimateBall is muD-gho

      • Nerd!

        It’s also an anagram of Meatball Lic

  3. The point raised by Mann and the alternative perspective provided by our paper does raise the broader issues of whether you can separate forced from intrinsic variability, and if so, how to do this.

    I would say intuitively no. The effects of “forcing” on each of the mechanisms involved in the “stadium wave” would likely be different, and until you can actually account for them in theoretical terms, how could you predict the effect of increased pCO2 on each?

    I certainly don’t see how it would be warranted to assume that the primary effect of “greenhouse forcing” would be linearly additive.

    Oh, and congratulations!

    • Here’s a thought:

      The residual variability in the GFDL runs is interannual; therefore, the lowest-frequency variability in the 20th century GFDL runs can indeed be represented as the sum of linear trend and the in-phase “stadium-wave” signal, the run-to-run differences in this variability coming about through slight differences in phasing of the stadium-wave index components. Finally, because both the trend and the stadium-wave component are essentially the same in all GFDL runs, to display this forced variability it is natural to use the ensemble mean, since it provides a clearer view of this variability by smoothing the sampling variations.

      I’m not sure if this approach would make sense, but could you take the various GFDL outputs, and distort their time-scales until they were in phase, with the observations and one another, then look at how they varied in detail?

      Or would that be too much like angels and pins?

    • The point raised by Mann and the alternative perspective provided by our paper does raise the broader issues of whether you can separate forced from intrinsic variability, and if so, how to do this.

      More direct Mann: “The problem with this is that there is no way to use it to advance my agenda!”

  4. “On top of a uniform linear trend, they identified an oscillatory looking wiggle….”

    What is that?

    • there is a world of difference between playing dumb and being stupid.
      read the text.

      “On top of a uniform linear trend, they identified an
      oscillatory-looking wiggle with a common multidecadal time scale, but with different phases across the different indices of the climate network, thus manifesting a signal that propagates in the space of climate indices. The authors termed this propagating signal the
      “stadium wave,”

      “The secular multidecadal signal for the time series considered here is well represented by the sum of two leading RCs, which we will hereafter refer to as the stadium-wave signal.”

      The RCs or reconstructed components come out of M-SSA ( a variant of EOFs)

      easy peasy

      • Why not just say;

        ““On top of a uniform linear trend, they identified another trend with a common multidecadal time scale, but with different phases across the different indices of the climate network, thus manifesting a signal that propagates in the space of climate indices. The authors termed this propagating signal the “stadium wave,”

        “Oscillatory-looking wiggle” ? WTF.

      • Michael said:

        Why not just say;

        ““On top of a uniform linear trend, they identified another trend with a common multidecadal time scale, but with different phases across the different indices of the climate network…

        Because oscillatory behavior is not a trend. The stadium-wave analysis looks for propagating waves superimposed on trends.

        Do you really know so little math? I mean, we’re talking high-school level stuff here. How can you hope to intelligently comment on the field when you don’t even know things any high-school graduate ought to?

    • So what is the difference between a “linear trend” and a tiny piece of a long term trend? The ground sure looks flat from where I sit, yet the Earth is not. Doesn’t calculus depend on this simplification working?

      • So what is the difference between a cup and a mug

      • about 100ml?

      • Matthew R Marler

        Steven Mosher: So what is the difference between a cup and a mug
        Are you saying that a cup is indistinguishable from a tiny piece of a mug? How does your comment relate to the comment above it?

        It is hard in this case for a reader to distinguish whether you are stupid or playing dumb.

        TJA: The ground sure looks flat from where I sit, yet the Earth is not. Doesn’t calculus depend on this simplification working?

        In answer to your question: Yes.

      • A mug is cylindrical and a cup is more cone shaped with a flat bottom.

        Now all we need is a mechanism to predict what is being propagated.

      • I guess I didn’t finish my point that a sine wave looks like a linear trend when examined closely enough relative to its frequency.
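The point is easy to check numerically; a minimal sketch with arbitrary numbers:

```python
import numpy as np

# Sample 5% of one sine period and fit a straight line to it: the
# residual is tiny compared with the wave's amplitude of 1, so over a
# short enough window the oscillation is indistinguishable from a trend.
t = np.linspace(0.0, 0.05, 50)          # 5% of one period
x = np.sin(2 * np.pi * t)
line = np.polyval(np.polyfit(t, x, 1), t)
resid = x - line
print(np.abs(resid).max())
```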

      • In the US a cup is a half pint and a mug closer to a pint. But then in the UK a pint is almost three cups. So I guess the difference between a mug and a cup is subjective.

      • michael hart

        Didn’t we go here before? They are both donuts.

      • The Zen philosopher Bashar once wrote… “A flute with no holes is not a flute, and a donut with no hole is a danish”.

  5. Figure 1 in the paper makes me think the system is resonant, and that what happened in 1975 might be mostly a result of what happened in 1910, so that it might not take much unusual forcing to keep the AMO oscillating.

  6. The leading “wave” for the last rise starting in 1970 was CRUTEM4 NH (NH land) which is not included among the stadium wave indices for some reason. With land leading, this implies external forcing. 45 years after the turn, the land is still showing no signs of slowing, which as the leading index portends that the other indices won’t turn any time soon either. All the stadium wave modes will be stuck in this end warming mode for the foreseeable future.

    http://www.woodfortrees.org/plot/crutem4vnh/mean:240/detrend:0.8/offset:0.5/mean:120/plot/esrl-amo/mean:240/mean:120/plot/hadcrut4nh/mean:240/mean:120/detrend:0.8/offset:0.5

  7. [W]ith the advent of climate models and “new era” of CMIP experiments, providing “physical understanding” of an observed phenomenon has somehow become synonymous with the ability of the models to reproduce this phenomenon in the first place. Yet, it is not a matter of argument that these models, while very useful and skillful in simulating a wide range of climatic processes, are imperfect and may easily “miss something.” The latter statement, while obvious, is often overlooked and the model experiments are in a sense given disproportionately large weight as the primary method for climate analysis and prediction. In contrast, we think that a symbiosis between climate modeling and observational analysis is absolutely essential in trying to reconcile models and data.

    Throwing down the gauntlet?

  8. “The point raised by Mann and the alternative perspective provided by our paper does raise the broader issues of whether you can separate forced from intrinsic variability, and if so, how to do this.”

    And if you can’t separate forced from intrinsic variability, doesn’t that mean you can’t determine attribution and therefore can’t determine climate sensitivity?

  9. I note the paper states “There are two possible explanations for this documented inconsistency between climate model simulations and observations.”

    I don’t wish to give offense, so please take none, but is it possible there are more than two possible explanations? If so, what might they be, and if not, why not?

    When there is a discrepancy between observation and theory, a raft of things need to be considered, which I will not expand on here.

    It just seems a bit limiting, and I think I understand why. I am not sure, however, hence my query.

    Live well and prosper,

    Mike Flynn.

    • But essentially, when there’s a discrepancy between observation and theory, it’s the theory that needs to be modified, is it not? Facts, after all, are chiels that winna ding.

    • Michael290352,

      Absolutely – with a caveat or two. How accurate is your observation, and are you sure there are no unknown influences, for example.

      Now if a physicist’s theory has a particular measurement calculated to be 1, with a small unknown – but minor – adjustment for the effect of light on an electron, then an experimental result of 1 plus or minus a little bit would be acceptable to many. The theory was sound, and obviously verified by experiment.

      Except that it wasn’t. Another theory calculated a value of 1.00115965246 give or take, whilst experiment gave a value of 1.00115965221 with a slightly smaller give or take.

      Now the theory and the observation are closer, but is the theory correct? I believe so, as I haven’t heard any real arguments to the contrary. So for the moment, the theory is useful, and possibly even correct.

      But if it is correct, why the difference between theory and observation? Which is correct?

      The problem with continued re-examination of historical temperature records is that it is impossible to verify their accuracy. Given rises in population, demographic changes, patterns of industrialisation and so on, it seems fairly brave to assert that rising levels of CO2 cause rises in global temperatures. There is no reproducible scientific experiment to show that the temperature of an externally heated body can be raised by surrounding it with CO2 at any concentration. Not one.

      Therefore, if someone tells me that observations of recorded past temperatures prove or disprove a particular theory, I would say depending on dodgy data is dodgy practice. Proxy temperature reconstructions are even dodgier.

      If you believe the observations to be relevant, accurate, and complete, and that all possible external influences have been accounted for, you may well decide a particular theory was incorrect, at least where the past was concerned.

      I merely say it is impossible to demonstrate the Warmist greenhouse effect – mainly because it’s physically impossible. And so far, so good. In spite of billions of dollars spent on research, no experimental verification – precisely no observations to back up the theory.

      To my mind, all the climatic models are a waste of time, effort, and money. They can predict the future no better than you or I, and in many cases far worse.

      Live well and prosper,

      Mike Flynn.

      • Mike

        You mention your doubt about the value of proxy temperature reconstructions which of course are the reason why we are in this mess at the moment. According to them temperatures had been slightly falling but otherwise reasonably stable for 1000 years until Man came along. This directly contradicted the former prevailing view backed up by mountains of evidence which was that our climate has been hugely variable throughout the Holocene.

        This seems to me to come back to a discussion on boreholes and tree rings I have been having with various people which intensified when Fan posted borehole data which showed the opposite to what he had been claiming. I have just posted something on this in response to Rob on the other thread but repost it here as being relevant.

        The question really is can we trust any of the data that we rely on to prove one climate state or the other. In which case which ones?

        ——- ——- —-
        Rob

        Last night I was pointing out that boreholes seemed to accurately reflect the upwards rise in temperatures that started around 1700 AD. I was asking how accurate they are, as in truth all paleo proxies are all over the place, with the Hockey stick giving exactly the opposite answer.

        JimD seems to be backtracking a bit as to their value and Fan gave a very long explanation as to why the data HE had originally posted needed to be treated with caution.

        Tree rings are useless. I can not think why scientists gave them house room other than for dating and possibly precipitation.

        Boreholes I would like to believe as they closely mirror CET as described in my article ‘The long slow Thaw.’

        However, wishful thinking on my part is hardly the basis for scepticism which is why I have tried to determine why boreholes should be better than any other proxies-if they are. Having looked at all the paleo proxies in MBH98 it seems to me that by due selection you can get any result you wanted.

        If Judith sees this, an article about how paleo proxies and the resultant reconstructions are gathered, and why we should give credence to any of them, would be useful.
        —— ——- ——
        tonyb

      • Climatereason,

        From personal knowledge, I can assure you that at least some of the official meteorological service temperature readings are objectively inaccurate, to put it politely.

        Proxy records, apart from those such as tolerably believable records of fairs held on the frozen Thames, fossil remains in Antarctica prior to the ice buildup, and such like, appear, at the very least, less reliable than official records which I personally know to be unreliable.

        You’re right. It’s easy to see what you want to see. Waves, cycles, repetitive patterns, whatever supports your point of view. Even if such things are real, the past doesn’t predict the future. That’s my story anyway, and I’m sticking to it. LOL.

        I’m only a dilettante, after all.

        Live well and prosper,

        Mike Flynn.

      • tonyb,

        I’m interested in your opinion as to whether rising temperatures since the 1700s or the LIA could be consistent with anthropogenic land use change and GHG emissions.

        Not a trick question.

      • Bill c

        Good question. Man has cut down forests and tilled the earth for many thousands of years. Both contain substantial amounts of carbon, which presumably are released and then trapped again. We also burn a great deal of carbon-based fuels in increasing quantities.

        We must have had some effect locally, but it is evident that there have been periods as warm as, and warmer than, today prior to 1700. I would cite the early 1500s, when the northern sea route was possibly being navigated, the mid 1300s, and much, but not all, of the period from around 850 to 1180 AD.

        It was also warm in Roman times, and to this day the web site for upland Dartmoor, close to me, confirms the climate was warmer in Bronze Age times. The physical evidence for this can still be seen.

        So CO2 seems to me to have a limited effect above a certain concentration, with that concentration being around 280 ppm. Any anthro effect since 1700 is therefore likely to be limited, which is no excuse, of course, to needlessly damage the environment.

        Tonyb

      • I would think that intense annual crop farming would absorb more CO2 for photosynthesis (provided that the residue is ploughed in rather than burned) than a conifer forest of the same or even larger size.

      • tony & vuk – thanks. With GHG forcings being logarithmic, I also wonder what effect small increases over preindustrial conditions would have had. I don’t know enough paleo to know whether land use change could have impacted temps over large regions during, say, Roman times, but clearly a lot of other things (forcings/variability) were going on throughout…

    • Bill c
      An interesting book that addresses this issue is Earth’s Climate: Past and Future by William Ruddiman, of UVa at the time. He wrote that agriculture, and especially the cultivation of rice, had big impacts on the climate starting 8,000 years ago. The last edition was 2007 or so.
      Scott

  10. I enjoyed reading the review process. You have to laugh (or cry) when you find your work rejected based on a review of what the paper doesn’t contain rather than what’s actually in it.

  11. Matthew R Marler

    It was interesting to see that data and code were made available online; and it was interesting to see the interchanges between reviewers and authors. I hope that more authors do the latter.

    Congratulations on publishing another good paper.

  12. ‘There are two possible explanations for this documented inconsistency between climate model simulations and observations. The first one is that the propagation detected by Wyatt et al. [2012] is an artifact of their statistical analysis and the lagged phasing of various climate indices is entirely due to sampling [Mann et al., 2014]. An alternative explanation is that a coherent propagating multidecadal climate signal in the Northern Hemisphere is real, in which case one needs to look further into the inability of climate models to simulate this signal.’

    Don’t take offence Flynn the Dingbat – but there are two explanations. The models are right or wrong.

    ‘The abrupt changes of the past are not fully explained yet, and climate models typically underestimate the size, speed, and extent of those changes. Hence, future abrupt changes cannot be predicted with confidence, and climate surprises are to be expected.’ http://www.nap.edu/openbook.php?record_id=10136&page=1

    ‘A vigorous spectrum of interdecadal internal variability presents numerous challenges to our current understanding of the climate. First, it suggests that climate models in general still have difficulty reproducing the magnitude and spatiotemporal patterns of internal variability necessary to capture the observed character of the 20th century climate trajectory. Presumably, this is due primarily to deficiencies in ocean dynamics.’
    http://www.pnas.org/content/106/38/16120.full

    Both the deficiencies of model ocean dynamics and the inevitability of a globally connected system seem quite obvious.

    • Rob Ellison,

      I never take offense. I hope this doesn’t offend you.

      Which models are you referring to? I believe there are a variety.

      Which observations are you talking about, and in respect to which models? Are you seriously proposing that you cannot think of more than two reasons why even one particular model may disagree with one particular set of observations purporting to be appropriate?

      There are only two possibilities – either you are attempting to have a joke at my expense, or attempting to be gratuitously offensive. Is it not so?

      Keep up the good fight,

      Live well and prosper,

      Mike Flynn.

    • Don’t take offence Flynn the Dingbat – but there are two explanations. The models are right or wrong.

      It is clear that all the models are wrong. They don’t match real data.

      • popesclimatetheory,

        I just asked a question. That’s all. It had occurred to me that for any particular model, and any particular set of observations supposedly representative of the model’s outputs, in the case of an observed inconsistency, there were many factors which might well need to be taken into account.

        Obviously, if there is discrepancy, at least the following outcomes are possible –

        1. Model correct, measurements incorrect.

        2. Model incorrect, measurements correct.

        3. Model incorrect, measurements incorrect.

        This is just the start. Many times, scientists have assumed that measurements accord with their theory, only to discover later that it wasn’t actually so. Sometimes, but not frequently, the theory turned out to be correct, the measurements turned out to be correct, but the relationship between the theory and the measurements was affected by an influence unknown at the time.

        The whole thing can become complicated, and sometimes an apparent minor and seemingly irrelevant discrepancy turns out to be the starting point for a scientific advance.

        I agree, that if 100 models have differing outputs, then at least 99% are wrong. I have no way of knowing if, perchance, one or more of these models may turn out to provide useful answers in the future.

        That’s all. Dogmatism of the form “there are only two possible answers” may not advance science as much as asking if there are any other possible answers. After all, there may be unknown unknowns, as they say. I leave it to you to decide.

        Live well and prosper,

        Mike Flynn.

    • Gratuitously pointing out that you are a dingbat I find personally satisfying regardless. Your response is not something I concern myself with. I don’t especially regard it as being honest at any rate. It is a foil against your dissembling, quibbling and accusations of my being a ‘warmist’. Which are obviously ludicrous in the extreme.

      There are two possibilities – the models are right or wrong. At last count there were 55 – they all get ocean dynamics wrong. This is pretty much guaranteed because the theories are inadequate. Observations, obviously, are how you know they don’t get it right.

      • It is easy to know which of the models is right. It is the one that fits one’s agenda best. Same as the way one chooses among the various temperature series, or which historical temperature proxies to trust or disregard.

      • No, there is one fact: all models are wrong. The question is how useful they are to people using them to make decisions. If you choose to make a policy decision with a time horizon of 100 years, then you have two options:
        a) use a statistical model approach
        b) use a physics model approach.

        Both will be wrong. Policy makers, who choose to decide that a 100 year time horizon is a good one, will get to choose which type of model they believe in.

      • Matthew R Marler

        Steven Mosher: all models are wrong. The question is how useful they are to people using them to make decisions.

        There is never just one question. Other obvious questions in this context: (1) what accuracy, over what time span, has been demonstrated for each model? (2) how useful in fact was their previous use to date? (3) for what goals have they been shown to be useful?

      • A platitude is not all that useful. The models either capture vigorous decadal variability – or they don’t.

      • Mosher

        To write “all models are wrong” is a highly misleading comment at best.

        Effective models perform within quantified margins of error over time.

        The truth is that the performance of GCMs and other climate models has been greatly oversold, and they are currently unsuitable for government policy decisions because nobody has any reasonable idea how badly these models will perform over different time periods.

        That does not mean that government policy should be based upon physics models either. There are currently too many unknowns for those to be reliable for much either.

      • rob starkey, there’s also the issue of some models being perfect. There are many things with discrete values which can be modeled with no uncertainty. I’ve created several (relatively simple) models when discussing strategies in video games that had perfect precision and accuracy.

      • Brandon– I am not aware of any model that is perfect and in regards to climate modeling there certainly are not any and there is no need for perfection. In the development of GCMs, they were put into use without any/sufficient characterization of what margin of error they should perform within over different periods of time.

        Perfect models are a minority, but they can exist under the right circumstances. For instance, suppose you were designing a strategy in a game which involved an AI opponent. Knowing what the AI would do under different circumstances could be quite valuable. In some games, it is possible to perfectly predict, or model, the AI’s behavior.

        There obviously aren’t perfect models for atmospheric issues and the like, but the point is still relevant. Any model can have perfect accuracy if it is imprecise enough. The question shouldn’t be how accurate models are. The question should be how precise an answer can the models give with a certain level of accuracy.

        When modeling an AI for a video game, we may be able to achieve perfect accuracy and precision. When modeling an engine part, we may be able to model the amount of movement to one millimeter with 99.99% accuracy. When modeling global temperatures, we may be able to model them to 0.5 C with 80% accuracy.

        When judging models, we ought to specify a level of accuracy/precision we need from them to consider them useful. We might specify many different levels for different degrees of usefulness. It’s not a matter of, “All models are wrong.” It’s a matter of, “All models have different levels of precision/accuracy, and what those levels are determine how we should interpret their results.”

      • Matthew R Marler

        Brandon Shollenberger: The question shouldn’t be how accurate models are. The question should be how precise an answer can the models give with a certain level of accuracy.

        Why?

        I would say that sufficient accuracy to achieve a goal requires sufficient precision to achieve that level of accuracy. mean squared error (one measure of accuracy) is the sum of the variance (one measure of precision) and the bias^2. Increasing accuracy requires reducing both variance and bias.
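
        The decomposition invoked here is easy to check numerically. A minimal Python sketch (the +1 bias and the spread of 2 are made-up illustrative numbers, not taken from any climate model):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0

# Synthetic "model output": systematically biased by +1, with random spread
preds = true_value + 1.0 + rng.normal(0.0, 2.0, size=100_000)

errors = preds - true_value
mse = np.mean(errors**2)                      # accuracy: mean squared error
variance = np.var(preds)                      # precision component
bias_sq = (np.mean(preds) - true_value)**2    # systematic-error component

# The identity MSE = variance + bias^2 holds up to floating point
assert abs(mse - (variance + bias_sq)) < 1e-8
```

        In this decomposition, lowering either the variance or the squared bias lowers the MSE, which is the sense in which precision feeds into accuracy.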

      • Steven Mosher: all models are wrong. The question is how useful they are to people using them to make decisions.

        There is never just one question.

        Sure there is. You don’t ask the questions.

        Other obvious questions in this context: (1) what accuracy, over what time span, has been demonstrated for each model?
        depends on the metric

        (2) how useful in fact was their previous use to date?
        ask the people who used them to make decisions. Since the time scale is
        typically 100 years, it’s too early to tell even if anyone made a decision

        (3) for what goals have they been shown to be useful?

        For making decisions. And yes, policy makers get to have political and social goals.

        useful does not mean correct.

      • “A platitude is not all that useful. The models either capture vigorous decadal variability – or they don’t.”

        Of course it’s useful. And yes the models capture some aspects of variability within certain error limits. Not all, not perfectly.

      • Mosher

        To write “all models are wrong” is a highly misleading comment at best.

        No, it’s not.

        Effective models perform within quantified margins of error over time.

        All models perform within quantified margins of error; sometimes the error is large, sometimes it is small. Look at GCMs: they get the absolute temperature correct within ±3 K over all time spans. There, quantified error. It happens to be big. That makes it less useful.

        “The truth is that the performance of GCMs and other climate models has been greatly oversold, and they are currently unsuitable for government policy decisions because nobody has any reasonable idea how badly these models will perform over different time periods.”

        You don’t get to decide what’s useful or suitable. Policy makers do.
        You might like better models. Tough. You don’t get to decide what is useful.

        That does not mean that government policy should be based upon a physics models either. There are currently to many unknowns for those to be reliable for much either.

        You don’t need a reliable model to base a decision on it.

        Your mistake is thinking that you get to decide what people, what policy makers, should do. You don’t.

      • Matthew R Marler:

        Why?

        I would say that sufficient accuracy to achieve a goal requires sufficient precision to achieve that level of accuracy. mean squared error (one measure of accuracy) is the sum of the variance (one measure of precision) and the bias^2. Increasing accuracy requires reducing both variance and bias.

        Typically, precision is taken as a measure of consistency, while accuracy is taken as a measure of the proximity of results to the true value. MSE combines information from both of these. That means it combines the concepts of precision and accuracy I refer to in my comment.

        We can, of course, define MSE as measuring “accuracy,” but the fact we can doesn’t change the plain meaning of my comment.

      • To clarify my previous comment, I should point out one can decrease the precision of a model while increasing its accuracy. Conversely, one can increase the precision of a model while decreasing its accuracy. Setting only a single target value, such as with MSE, is often insufficient to ensure you get results suitable for your purposes. It is perfectly possible to have two different uses for a model which would require similar MSE scores yet different underlying levels of precision.

        A model with very low precision that converges to the true value when run enough times can be more or less valuable than a model with potential biases but extreme precision (perhaps depending on the amount of computation time available).

      • All model comments are wrong.

      • Matthew R Marler

        Brandon Shollenberger: Typically, precision is taken as a measure of consistency while accuracy is taken as a measure the proximity of results to the true value. MSE combines information from both of these. That means it combines the concepts of precision and accuracy I refer to in my comment.

        Yes, I already wrote that. How does accuracy become irrelevant?

      • Matthew R Marler:

        Yes, I already wrote that. How does accuracy become irrelevant?

        Huh? I said:

        The question should be how precise an answer can the models give with a certain level of accuracy.

        I’m not sure how you’re interpreting that as suggesting accuracy becomes irrelevant. I was trying to say we should ask both how accurate and how precise the models are.

      • Matthew R Marler

        Brandon Shollenberger: The question shouldn’t be how accurate models are.

        I think that you have backed away from that statement. To me, that (“how accurate models are”) is the most important question; and the point I made about precision is that precision is required for accuracy, if accuracy is measured by mean squared error.

      • ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

        Did I accidentally start this off topic meta-irrelevance? Precision is not the right question – irreducible imprecision is.

      • Someone said: “all models are wrong” — for the Nth time, this is just plain wrong, even if you intended to say no model is a 100% accurate depiction of the entity it is modeling.

      • Rob Ellison wrote:

        1.) “Did I accidentally start this off topic meta-irrelevance??”

        Apparently. Now we once again have model parsing chaos. That’s what you get for leaving meta-tinder lying around.

      • Steve Mosher

        Your initial comment referenced choosing to make policy decisions over 100-year timeframes and gave a choice of two options: statistical models or physics models.

        You then write: “You don’t need a reliable model to base a decision on it. Your mistake is thinking that you get to decide what people, what policy makers, should do. You don’t.”

        Your initial point could be considered broadly true if the use of “statistical models” meant the use of historical weather trends combined with an extrapolation of expected changes in the population. This data is adequate for planning to construct and maintain robust infrastructure in an area. Over 100-year timescales this information is not highly reliable regarding changes in population trends in many regions.

        Imo, you are wrong in your comment about the quality of information upon which policy makers base decisions in the modern information age. If policy makers can be shown to be basing their positions on faulty or highly suspect information, they are less likely to maintain their policy-making position. If they are strong advocates for enactment of policies based on models that can be demonstrated to be highly suspect, then they are less likely to stay in their position.

        Isn’t a representative democracy a wonderful thing! (in theory at least)

      • Matthew R Marler:

        Brandon Shollenberger: The question shouldn’t be how accurate models are.

        I think that you have backed away from that statement. To me, that (“hoow accurate models are”) is the most important question; and the point I made about precision is that precision is required for accuracy, if accuracy is measured by mean squared error.

        What? How do you think I “backed away from that statement”? I said the question shouldn’t be how accurate models are. I then immediately said the question should be how accurate and precise they are. In other words, we shouldn’t use only one measure of models’ skill, we should use two.

        If you feel we should define accuracy as being measured by MSE, which uses information from both accuracy and precision, then you’re doing what I said we should do. You’re just doing it in a less than ideal way. When MSE combines precision and accuracy into a single measure, it loses information about both. MSE gives a single value for a combination of two variables, necessarily meaning there is an infinite number of possible permutations for any given value. I believe that lack of specificity can be undesirable when judging models.

        I could also point out MSE is a bad measure of “accuracy” in a number of situations. Its quadratic utility function does not accurately represent many of the loss functions we’d encounter when judging models, and its susceptibility to outliers could be similarly problematic. I could go on about why MSE shouldn’t be adopted as our measure of “accuracy” in this discussion, but it doesn’t matter.

        The meaning of my comment was quite simple. I have not backed away from anything. What I said is we should not consider accuracy as the sole measure of a model’s skill. I said we should consider both accuracy and precision, specifically in relation to one another.

        On a pedantic note, including information about precision in our measure of accuracy does not preclude us from examining precision on its own.

      • Oh, I forgot to point out something before:

        I would say that sufficient accuracy to achieve a goal requires sufficient precision to achieve that level of accuracy. mean squared error (one measure of accuracy) is the sum of the variance (one measure of precision) and the bias^2. Increasing accuracy requires reducing both variance and bias.

        The final sentence of this paragraph is just wrong. As you yourself point out, MSE combines two variables into a single value. You do not need to change both variables to change the final value. You can increase the accuracy measured by MSE by reducing variance, you can increase it by reducing bias, or you can increase it by reducing both. In fact, you can even increase accuracy by reducing bias but increasing variance, depending upon how much each changes by.

        Which is the reason I hold we should examine the individual components separately. Why require combining two components in a way which loses useful information? MSE has its uses, but it is not as informative as examining the individual components it combines.
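
        A toy numeric illustration of this (the bias/variance figures are purely made up, not from any real model):

```python
# Two hypothetical estimators with the same MSE = variance + bias^2,
# but opposite bias/variance splits (illustrative numbers only):
est_a = {"bias": 2.0, "var": 1.0}   # precise but strongly biased
est_b = {"bias": 1.0, "var": 4.0}   # less biased but noisy

def mse(e):
    # MSE decomposition: variance plus squared bias
    return e["var"] + e["bias"] ** 2

assert mse(est_a) == mse(est_b) == 5.0   # identical single-number "accuracy"

# Reducing bias while *increasing* variance can still lower MSE:
est_c = {"bias": 0.5, "var": 2.0}        # less biased than est_a, but noisier
assert mse(est_c) < mse(est_a)           # 2.25 < 5.0
```

        A single MSE score cannot distinguish est_a from est_b, which is exactly the information loss described above.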

      • Matthew R Marler

        Brandon Shollenberger: I could also point out MSE is a bad measure of “accuracy” in a number of situations. Its quadratic utility function does not accurately represent many of loss functions we’d encounter when judging models, and its susceptibility to outliers could be similarly problematic

        Sure. And you can measure precision by some other measure than variance. I used MSE, variance and squared bias as examples. With those, the discussion is unambiguous. You can make the discussion unambiguous by specifying other measures of accuracy and precision. For whatever measures you choose, achievement of a sufficient level of accuracy requires achievement of a sufficient level of precision.

        Given two models of equal accuracy, you have a choice between the one with the smaller bias and the one with the greater precision. I think that accuracy is the first consideration, and precision is the second, when evaluating both in the choice of model.

      • Matthew R Marler, I can’t help but notice your latest response doesn’t even attempt to address my recent comments as a whole. You apparently misunderstood my intended meaning, which is fine, but that we’ve now had this many comments go by without you acknowledging it is a shame. My meaning was quite simple, and there is no reason we should be unable to agree what it was. I suspect the problem is you continue to say things like:

        With those, the discussion is unambiguous. You can make the discussion unambiguous by specifying other measures of accuracy and precision. For whatever measures you choose, achievement of a sufficient level of accuracy requires achievement of a sufficient level of precision.

        Which are nothing more than hand-waving. There is no inherent reason a “sufficient level of accuracy” as measured by MSE would require “a sufficient level of precision.” You’ve done nothing to support the claim there is, save to falsely claim that improving accuracy requires reducing both variance and bias.

        There are times when significant variance is acceptable but bias is not. In these cases, a large degree of imprecision might be tolerated. In other cases, bias may be acceptable while only a little variance could be tolerated. In these cases, only a small degree of imprecision might be tolerated. Despite having very different requirements, both of these cases could be represented by the same MSE value. So when you say:

        Given two models of equal accuracy, you have a choice between the one with the smaller bias and the one with the greater precision. I think that accuracy is the first consideration, and precision is the second, when evaluating both in the choice of model.

        You’re merely stating an opinion which does not match all real-life requirements. It may match many, and perhaps it even matches the GCMs used in climate science, but it is not some inherent truth everyone must accept simply because you tell them to.

        I honestly don’t know why you take issue with my position. I’ve scarcely ever had someone suggest looking at more measures of a model’s skill is inappropriate.

      • Matthew R Marler

        Brandon Shollenberger: You apparently misunderstood my intended meaning,

        I think that’s a fair statement. Did you write your “intended” meaning clearly?

        The question shouldn’t be how accurate models are. The question should be how precise an answer can the models give with a certain level of accuracy.

        I asked why. Given two models with equal accuracy, you can choose the one with the smaller bias or the one with the smaller variance. If you don’t like MSE, variance and bias, which are well-defined, you can choose other measures of accuracy, precision and systematic error. Accuracy should be the first consideration: given two models with unequal accuracy, you should prefer the model with the greater accuracy, I should think. If you have a model with high precision (low variance, or low median absolute deviation from the median) but high bias, you can be pretty sure of getting the wrong answer — as with the GCMs whose variance is not low, but is low compared to the bias so that they consistently predict too high a temperature.

      • Matthew R Marler

        Brandon Shollenberger, and others, here is a model and some of its predictions: http://wattsupwiththat.com/2014/10/01/test-driving-the-solar-notch-delay-model/

        I claim that our primary interest over the next 10 years is in the accuracy of its predictions.

      • Matthew R Marler, sorry for the delayed response. I forgot to check for one. To answer your question:

        I think that’s a fair statement. Did you write your “intended” meaning clearly?

        Yes, I did. Perhaps one couldn’t figure out what I meant with that sentence taken out of context, but given the entirety of my comment, it was quite clear. I don’t know why you’ve had a hard time understanding it, but I’ll note your behavior may be part of it.

        I don’t think a single measure of accuracy is an ideal measurement to use as our standard. I’ve explained why multiple times. I’ve pointed out you’ve said things about accuracy as a single measure which are wrong, which you’ve simply ignored (i.e. you falsely claimed improving accuracy requires reducing both bias and variance).

        Given you won’t even address your blatantly false claim which goes directly to the reason I disagree with using accuracy as a single measurement standard, I take your inability to understand my original remark with a grain of salt. I find people tend to have difficulty understanding points of discussion when they adamantly refuse to address points of discussion.


        Given the length of this exchange I’ll sum up my view to make things clear. Accuracy is merely a combination of measurements of bias and precision. We are better off using actual measurements of bias and precision than a value which combines both in a way which discards useful information. I believe this because there is no benefit to throwing away useful information.

        Moreover, the way one combines information for bias and precision is a source of unnecessary uncertainty. There are many possible ways to combine information for bias and precision. Which one is chosen is generally arbitrary. This gives a source of uncertainty which is often unquantified, and it can even be a source of bias where one (even subconsciously) favors desired results.

        Accuracy is a good measure of skill only as a shortcut or simplified tool. When one wishes to truly measure the skill of a model, it is better to focus on the full body of information available.

  13. Thanks. I have downloaded all the papers provided by Marcia Wyatt over on her site: http://www.wyattonearth.net/publicationsstadiumwave.html

    I am impressed by how far this group of papers takes us in understanding natural climate variability.

    I do have difficulty with the term “secular” as used in Wyatt and Peters 2012. In a footnote, the authors define secular as “occurring one or fewer times per century”. However, “secular” is usually used in opposition to “periodic” or “cyclical”. “Secular” means “trending monotonically upwards or downwards over the long term” and captures the Latin “saecula-saeculorum”, in English “for ever and ever”. In the context of climate, “secular” does not seem appropriate to describe a century-scale phenomenon.

    The various components of our climate system have evolved during the Holocene epoch as one of several interglacial periods of the Quaternary Period, marked by a series of quasi-periodic glacial and interglacial events.
    Within the Holocene we can distinguish quasi-periodic climatic events, such as the mid-Holocene Climatic Optimum and a later series of warming and cooling events, including the Little Ice Age and the Modern Warm Period. “Secular” does not seem to be an appropriate term to describe any of the fluctuations in climate during the entire Quaternary Period.

    “Secular” is more appropriate for the claim that the climate system has been altered by man-made global warming (AGW) during the 20th century and will continue to be altered so long as man changes the composition of the atmosphere to make it more opaque to outgoing long-wave radiation.

    Is the “stadium wave” itself quasi-periodic or is it secular? Is the stadium wave secular in the sense that AGW is secular? And if so, is the stadium wave driven by AGW?

    I conclude that using the term “secular” to describe century-scale climate variability related to quasi-periodic ocean oscillations introduces a spurious concept not supported by the evidence.

    I think that the authors have inadvertently marred their work. Nevertheless, this is a guess on my part.

    Can someone please clarify?

    • Matthew R Marler

      Frederick Colbourne: I do have difficulty with the term “secular” as used in Wyatt and Peters 2012. In a footnote, the authors define secular as “occurring one or fewer times per century”. However, “secular” is usually used in opposition to “periodic” or “cyclical”. “Secular” means “trending monotonically upwards or downwards over the long term” and captures the Latin “saecula-saeculorum”,

      You raise an interesting point, but “secular” is widely used, and the authors’ note reflects the fact that given the limits of the data, what has been proposed as a part of a rising phase of an oscillatory process with a long period appears in a short time series as a secular trend. Do you have another short word that can be used in place of “secular”? If what they have is indistinguishable from a secular trend, then it might as well be called “secular”. As to the implication that it will continue in the future, all presentations of modeling/analytical techniques observe that the fit of a model to a time series does not by itself imply that the model will apply accurately in the future.

  14. Judith, what do you make of this paper mentioned at the Schtick?
    http://hockeyschtick.blogspot.com.au/2014/09/new-paper-finds-global-temperature-data.html

    Seems it causes real problems for anyone using pre-1950 (and I suspect anything before the satellite era) temperature series, such as HadCRUT, GISS, and any that are related to them.

    Even WFT before 1950 (before 1979, imo) becomes a real problem.

    Someone said the other day, “its all we have”, but does that mean we should still be using it?

  15. Am I the only one who finds the idea of Mann complaining about “statistical artifacts” hilariously funny?

    • fizzymagic,

      Not at all, although I will possibly have to dig my way out from under an abusive tirade, for daring to laugh – even snigger – at that scion of non existent science, that doughty champion of the Warmist cause, that inestimable self appointed Nobel laureate, complaining about other people allegedly making stuff up.

      The Michael Mann of climate science complaining about a statistical artifact. Indeed. Almost too delicious for words, what?

      Live well and prosper,

      Mike Flynn.

    • No, considering that the “hockey stick” can be described as being no more than a “statistical artefact”

  16. No. I chuckled at “…is no more than a statistical artifact of flawed methodology..” as well. Does that make us terrible people?

  17. Abstract. The bulk of our knowledge about causes of 20th century climate change comes from simulations using numerical models.

    Shouldn’t that be the bulk of our “thinking”, rather than “knowledge”?

    And again, the Wyatt quote posits a comparison between “observation-based data” and “model-generated data”. But isn’t the latter simply an oxymoron? Data means the givens, not the generated.

  18. Just finished reading the
    https://curryja.files.wordpress.com/2014/09/kravtsov-et-al.pdf
    not that I can claim complete understanding (reading it once or twice more, which I will do, may not change that). I occasionally comment (here and elsewhere) on the North Atlantic’s natural variability, usually supported by simple graphic illustrations.
    The relationship between the SST and the SLP does appear to support the ‘stadium wave’ hypothesis. The relationship is often described as non-stationary during the last 100 or so years, but I would describe it as ‘regressive’ from the North Atlantic SST standpoint.
    The SST–SLP ‘elastic band’ relationship looks as if it is approaching a snapping point, most likely sometime during the forthcoming solar ‘grand minimum’, then going back to square one.

  19. I cringe at linear detrending. That would be a third reason to reject climate science as being a science.

    However the apparent controversy comes out, both sides affirm that climate science is a science.

    Rather than a cargo cult thing.

  20. Judy, you may have missed this one in your review

    http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0081591

    Understanding the Causes of Recent Warming of Mediterranean Waters. How Much Could Be Attributed to Climate Change?
    Diego Macias, Elisa Garcia-Gorriz & Adolf Stips
    November 27, 2013DOI: 10.1371/journal.pone.0081591

    It shows a very nice deconvolution of the Mediterranean SST and AMO.
    If your regional model gets a match to their quite different deconvolution, you may have external support for the analysis.

    • It’s obvious that the mode of variability known as AMO can be observed globally and in most (all?) regional sea and land surface temperature indices. North Atlantic SST is just a manifestation of this global mode of variability. Here is the AMO, which is detrended North Atlantic SST, compared with the detrended HADCRUT4 SH:
      http://www.woodfortrees.org/plot/esrl-amo/plot/hadcrut4sh/detrend:0.745/plot/hadcrut4sh/detrend:0.745/trend

      Furthermore, the removed ‘secular’ linear trend is very likely just another longer-term quasi-oscillation. There is no evidence that it’s anthropogenic.

      • Edim, it is more a question of phase. How heat is transferred from the Atlantic to the Med is interesting, given that it is unlikely to be via the strait.
        Many are under the impression that ECS/TCR is 1.5, based on the axiom that land responds to atmospheric ‘forcings’ much more rapidly than oceans. Heat transfers from the Atlantic to the Northern Hemisphere land, and also to the Med, suggest that this isn’t so.

      • Edim has linked to his graph several times now. How can it be at all surprising that removing the trend of the SAT results in a close match to an area of the surface that has had its trend removed?

        The AMO is a follower. It doesn’t drive anything. In 2014 the SAT has been going up, and the AMO is going right with it: like a puppy on a leash. The PDO was the real deal, but now not so much.

      • Doc, I agree it’s a question of phase, but are we certain that the North Atlantic SST leads the change? I’m not so sure. Why detrend the temperature index? The mode of variability known as AMO is neither (North) Atlantic nor multidecadal only. It should be called global climate variability or, in short, climate change.

      • Matthew R Marler

        Edim: Furthermore, the removed ‘secular’ linear trend is very likely just another longer-term quasi-oscillation.

        Certainly that is a possibility, but why do you say “very likely”?

      • Matthew, because both observational and proxy records of climate change (during the Holocene) show quasi periodic variations over a wide range of time scales.

      • Take a look at the Atlantic Ocean salinity anomaly. Salinity has been proposed as the driver of the AMO. So, there’s a potential mechanism. So, the AMO MIGHT be the driver.

        From the article:
        “There are recurrent cycles that are salinity-driven that can store heat deep in the Atlantic and Southern oceans,” Tung said. “After 30 years of rapid warming in the warm phase, now it’s time for the cool phase.”

        Rapid warming in the last three decades of the 20th century, they found, was roughly half due to global warming and half to the natural Atlantic Ocean cycle that kept more heat near the surface. When observations show the ocean cycle flipped, the current began to draw heat deeper into the ocean, working to counteract human-driven warming.

        https://judithcurry.com/2014/08/21/cause-of-hiatus-found-deep-in-the-atlantic-ocean/

    • Doc, thx for this ref

    • Matthew R Marler

      DocMartyn: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0081591

      Thank you for the link. It is interesting.

  21. No idea if all this is right, of course – let the scientists work it out – but this is very exciting to me. This kind of thing suggests that someone could use it to learn how to make a climate model that includes another level of macroscopic understanding and emergent phenomena. Could the result be a climate model that can accurately predict a larger number of macroscopic variables, for a considerably longer time? Currently, to validate GCMs you have to wait a couple of decades, because they’re basically not claiming to be able to model anything but global surface temperature (and so far not even that). If you can provide a larger number of modelable features, GCMs could be validated in much less time.
    Dr. Curry, could you make a GCM including this?

  22. So at least we know that CO2 is not the major climate control knob that politics portrays!

    • I don’t think we really know that at all. Definitely not from this paper.

    • Put the champagne away. If I’m not mistaken, this doesn’t tell us anything about attributing the rise in temperature. It doesn’t even tell us whether natural variability has contributed any net positive effect; overall it could still be zero or negative. What it MIGHT throw light on is how energy moves through the system, which then MIGHT help us with attribution in the future. Well, that’s my take on it, anyway.

    • … In fact it’s possible it may still clarify the role of CO2 in the system by better accounting for all the lumps and bumps in the temperature record.

  23. All semi-reliable computer-generated climate models have to have feedback from real data input as they go along.
    None of them can accurately predict the future, due to the “Foundation” effect of Asimov.
    The fact that all the models mimic the past is because the input data was the past.
    Whether you run a model forwards or backwards from a set time, it will be inaccurate as soon as it starts moving, and will become more inaccurate as time passes.
    The climate and weather models are very similar to tree rings: when you pick one out which fulfils your fantasies, it will turn on you as soon as it is given free rein, i.e. as soon as you use it for forecasting.
    Even worse, when you input assumptions, like high climate sensitivity, which turn out to be wrong, the deviation from reality intensifies dramatically, as is happening now.

  24. Sorry to pick on your stadium wave, Judith.
    A concern is that yes, you have identified patterns, but not causation.
    Hence the pattern is there, but the reality is that it will deviate with time from your description: in intensity, in frequency and, to a lesser degree, in direction, naturally.
    Because it is multidecadal, it will take a long time for this to become apparent.
    It is still very useful in showing how natural variability develops.
    Trying to apply it to the simulated climate models, with limited inputs and fixed false assumptions, is, as you say, not very helpful.
    Sorting out the patterns from the real data, such as it is, shows the waves exist, with their implications, so thank you for that.

    • Thermodynamics gives direction and kinetics gives velocity; the order of procession is informative.
      Think about the ridges you find amongst wind driven sand dunes; now think sand dunes encircling a globe, they are not fixed, but follow a procession.

  25. So, what will the weather be in, say, 50 years?

    • Variable.

      • Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. ~Donald Rumsfeld

        There are at least two things about which we all can be certain. First, given that climate change is inevitable, the past is prologue and nothing we can do will ever stop the climate from changing, our best adaptation strategy is to have the courage to do nothing! Finally, society must face the fact that the Golden Goose is on the mat, the economy will continue to shrink, futures will be dashed and our liberties will be trampled if we cannot get the weasels out of the chicken coop.

        Probably there is no other field of applied science in which so much money has been spent to effect so little progress as in weather forecasting. ~H. C. Willett, The Forecast Problem, Compendium of Meteorology (1951)

      • Here’s a golden oldie from renowned map colourist and leading light of Australia’s BoM:

        “David Jones, the head of the bureau’s National Climate Centre, said there was some risk of a worsening El Nino event this year, but it was more likely to arrive in 2010 or 2011.” (2009)

        “The 2010–12 La Niña event consisted of two peaks over successive summers; the 2010–11 peak was one of the strongest on record” (BoM website now)

      • There always seems to be a lot of uncertainty related to the possible development, timing and strength of the next El Niño and I guess we should add to that, the duration, which given current circumstances, should consider prospects of cooling oceans over the next 30 years.

  26. Congratulations on the paper. Quite the gauntlet it had to run.
    So it turns out that the master of statistical artifacts is likely wrong, again. The stadium wave exists in nature, while the hockey stick is just an artifact of short-centered PCA.

  27. I wonder how much uncertainty there is around the nature of this “stadium wave?”

  28. If one takes into account the background warming trend since the Little Ice Age (estimated from Law Dome CO2 in this example), the rate of modern warming that can potentially be attributed to humans is reduced by half.

    Uncertainty in the proxy/surrogate measure of temperature prior to 1850 is plausibly offset by the increase in variance, which is roughly double in the modern record. For example, what I now call the Marcott fallacy (TM) of variance:

    Taking into account natural climate change denial and the Marcott fallacy, Judy and Nic’s mean estimate of 1.6 deg C per doubling of CO2 is halved, and so is commensurate with the Lindzen and Choi value of 0.7 deg C/2xCO2.

    Lindzen wins.

  29. We construct a network of observed climate indices in the period 1900–2000 and investigate their collective behavior. The results indicate that this network synchronized several times in this period. We find that in those cases where the synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability. The latest such event is known as the great climate shift of the 1970s. We also find the evidence for such type of behavior in two climate simulations using a state-of-the-art model. This is the first time that this mechanism, which appears consistent with the theory of synchronized chaos, is discovered in a physical system of the size and complexity of the climate system.
    http://heartland.org/policy-documents/new-dynamical-mechanism-major-climate-shifts

    This paper is a skirmish in a paradigm war. The idea was given some quantifiable substance in a 2007 paper by Anastasios Tsonis, Kyle Swanson and Sergey Kravtsov – although the substance had been seen in data and in theory long before that.

    In this approach, a complex system is presented as a set of connected nodes. The collective behavior of all the nodes and links (the topology of the network) describes the dynamics of the system and offers new ways to investigate its properties.

    The stadium wave addresses the question of how the signal passes through the connected nodes. It paints an entirely different picture of how climate works – and the collective of papers on this theme marks a fundamental paradigm shift in climate science. It is the most important and powerful idea in climate since Arrhenius.

    Climate is ultimately complex. Complexity begs for reductionism. With reductionism, a puzzle is studied by way of its pieces. While this approach illuminates the climate system’s components, climate’s full picture remains elusive. Understanding the pieces does not ensure understanding the collection of pieces. This conundrum motivates our study. Marcia Wyatt

    The two views are of climate as a collection of pieces acting independently and the components linearly summing to a climate forcing. Alternatively – Earth climate is a complex and dynamic system with all that implies for abrupt change and emergent behavior in climate. There are many unanswered questions – but the significance of the paper is that it builds momentum – slowly – for the new paradigm.

    Michael Mann in this context seems proof yet again that science progresses one funeral at a time.

  30. I’m just a software engineer. We come up with ideas too, about how to make things that interact. One thing we are often asked to do is to help our testers to evaluate our products. This involves describing what the product does, so test can determine whether it is behaving as it is meant to.

    Regarding Stadium Wave (or models, etc.), it seems to me these too would benefit from a series of tests that prove or disprove the theory, or guidelines for what would prove or disprove it. Any such thoughts? I’m supposing that, within the paper, stadium wave oscillations ought to be predictable to some degree, as should their effect on climate or other metrics.

    I suppose this is probably naive, for some reason or another, but without objective measures that improve or degrade the theory, it feels lacking.

    • ‘Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small…

      Modern climate records include abrupt changes that are smaller and briefer than in paleoclimate records but show that abrupt climate change is not restricted to the distant past.’ (NAS, 2002)

      It is part of the reason for the patterns of variability seen in climate data.

      http://watertechbyrie.com/2014/06/23/the-unstable-math-of-michael-ghils-climate-sensitivity/

      • Not sure what you are arguing here: it’s hopeless to get a sense of whether Stadium wave has good explanatory power? There ought to be a hundred year test, or oceanic paleo that could help discover the truth.

      • Data is used to identify the progress of the signal through the Earth system. This demonstrates a mechanism at the heart of climate – and the mechanism of dynamical complexity suggests the system should evince certain behaviours. The stadium wave is part of a bigger idea.

        There are two questions. How would you show a signal moving through the system as represented by nodes on a network? The answer is this sort of analysis. It is not an idea in need of proof but something that emerges from both analysis and theory. It is a matter of mapping the system.

        The second question asks whether climate behaves as expected based on the broader theory. Does it change abruptly?

        You might see my earlier comment for the origins of this new theory of climate.

      • On the more interesting side of this website, I have often experienced a deja vu, in the sense of seeming to observe yet again the particle/wave dichotomy debates on the nature of matter – in this case of course, the linearity/chaos dichotomy on climate changes

        At the risk of being simplistic, Judith C has consistently tacked towards linearity with you on the chaotic tack. A lot of the more interesting commenters splay across the divide, as it were

        I’m an empiricist; I let the theory fall where it may. After about 40 years of hard worldwide observation on modifying geological models with observed facts (often collected after the initial models suggested where and how to look for them), I am firmly in the chaos camp. The interaction of various geological forces over time (all of which can be individually reduced to basic theory, such as gravity) produces results which are extraordinarily different according to initial conditions, despite the various forces and initial materials not being a consistently new array of inputs which have never been seen before

        I find the analogy compelling, if obviously imperfect

      • The stadium wave is part of a bigger idea.

        Do you mean it’s like one of many dynamic systems that interact? I could believe that. I could also believe it is a major component to climate. Do you disagree?

        If you did, how could you tell?

      • There is particle/wave duality but no doubt that photons behave as a particle and as a wave. One of the fundamental mysteries.

        We need to think in terms of control variables. Perhaps CO2? It pushes the system past a threshold and the system establishes a new and emergent state.

        If the control variable is still changing, it is additive to the new state and pushes the system, via the stadium wave, past a new threshold.

        There is no substantive means of determining a priori what these new states will be – which is the essence of the problem in a chaotic climate.

        I am inclined to think Judy understands this idea.

      • Ed Barbar | September 29, 2014 at 9:21 pm |

        The stadium wave is part of a bigger idea.

        Do you mean it’s like one of many dynamic systems that interact? I could believe that. I could also believe it is a major component to climate. Do you disagree?

        Yes and no I don’t disagree.

        Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.
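As a rough illustration of that ‘distance’ idea, here is a sketch using synthetic stand-in series and an assumed distance formula; it is not the actual Tsonis et al. code, just the flavor of the calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four synthetic stand-ins for climate indices (hypothetical, not the
# real ENSO/PDO/NAO/NPO): independent noise plus a shared
# multidecadal component.
years = np.arange(1900, 2000)
shared = np.sin(2 * np.pi * (years - 1900) / 60.0)
indices = [0.8 * shared + 0.6 * rng.standard_normal(years.size) for _ in range(4)]

def mean_distance(series_list, start, window=11):
    """Mean pairwise 'distance' d = sqrt(2 * (1 - |r|)) over a sliding
    window, roughly in the spirit of the network measure: small d means
    the indices are moving together (synchronized)."""
    dists = []
    n = len(series_list)
    for i in range(n):
        for j in range(i + 1, n):
            a = series_list[i][start:start + window]
            b = series_list[j][start:start + window]
            r = np.corrcoef(a, b)[0, 1]
            dists.append(np.sqrt(2.0 * (1.0 - abs(r))))
    return float(np.mean(dists))

# Slide the window along the record; dips in the mean distance mark
# candidate synchronization episodes.
d_series = [mean_distance(indices, s) for s in range(0, years.size - 11)]
```

The shift into a new state in the actual analysis is then diagnosed by what happens to the coupling strength after such a synchronization episode.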

      • It was found that they would synchronise at certain times and then shift into a new state.

        So in this theory, and it makes sense to me, the question is how does one determine the “Stable states,” and perhaps also what conditions will pull the system out of the stable state into a transitionary period. You may not know what “stable state” you will end up in, as it’s too chaotic.

        I’m wondering whether you would agree with this analogy. In the Gore v. Bush election, Florida was essentially “in the noise.” The well named Butterfly ballot of a well intentioned Democrat election official ended up throwing several thousand votes to Bush (and the press calling FL for Gore cost Bush votes). The result was a fairly significant change across the world, as Bush was president, instead of Gore. The new state was very hard to determine.

        However, it was clear in the run-up to the election that the system was highly unstable.

        Anyway, this makes a lot of sense to me. The problem is it becomes much harder to say much at all about what’s going to happen.

  31. D o u g   C o t t o n  

    As I pointed out on my earth-climate website back in 2011 there is a high correlation between the 1000 year (maybe 934 year) cycle and its superimposed 60 year cycle and the inverted plot of the scalar sum of the angular momentum of the Sun and nine planets. The latter, calculated from planetary orbit data, indicates 500 years of long term cooling starting in about 100 years from now. The current slight cooling will continue for the 30 years beginning around 1998-2000.

    The greenhouse conjecture is totally wrong because they overlooked the now proven gravito-thermal effect and they did not understand that thermodynamic equilibrium has unbalanced energy potentials, so that the sum (PE+KE) is homogeneous, meaning KE (temperature) is not.

    Of course water vapour is what is absorbing about 20% of the incident solar radiation and warming the troposphere with this energy from the Sun on the sunlit side. But water vapour also radiates and thus makes the temperature gradient less steep and the surface cooler than it would have been without that radiation.

    It is not the radiation from water vapour that warms the ocean surface on a cloudy day: it is convective heat transfers from the clouds and above. (This downward non-radiative diffusion and advection is restoring thermodynamic equilibrium.) Much of that energy entering the non-polar ocean surfaces makes its way down to the bottom of the thermocline and thence to the polar surfaces. So there must be a huge energy input to the oceans from the atmosphere in such circumstances. How else could the ocean surface get warmed to its observed temperature? It can’t be warmed by a mere 161W/m^2 of solar radiation that mostly passes through it. That would only raise the temperature of an asphalt covered Earth to 235K – not even 255K found higher up in the atmosphere before the solar radiation was further attenuated. And the solar radiation being absorbed by the atmosphere (20%) is also nowhere near enough to explain the mean temperature of the troposphere. GH conjectures have so many glaring errors that it’s no longer funny.

  32. D o u g   C o t t o n  

    Sorry that should read “thermodynamic equilibrium has no unbalanced energy potentials.”

    • The Earth is not, and has never been, in thermodynamic equilibrium.

      • And is unlikely to ever be (Ghil, 2008):

        “As the relatively new science of climate dynamics evolved through the 1980s and 1990s, it became quite clear – from observational data, both instrumental and paleoclimatic, as well as model studies – that Earth’s climate never was and is unlikely to ever be in equilibrium.”

      • Which makes it more similar to a living cell (or other organism) than a piece of rock.

      • And it gets further from equilibrium the faster the forcing changes. It is always trying to catch up, but the current time is particularly difficult as the forcing is now changing at a rate of 4 W/m2 per century, which is rather unprecedented.

      • JimD, do try to keep up; your 4 W/m2 spiel is as bogus as your fear of it.

      • DocM, yes, that is quite fast, isn’t it. If you reduce the aerosol effect as much as Lewis and Curry have that is what you are left with, possibly even more.

      • Doesn’t matter.

      • It is the extensive properties of the non-equilibrium climate system that lead to warming and cooling. Would have thought this was central.

      • And annual to decadal changes seem relatively large.

      • Steven Mosher: ”Doesn’t matter.”

        It does matter. Right now the ocean imbalance is in the ballpark of 0.5 Wm-2, which is based on 0 being “normal”, or near equilibrium. Given where we are in the precessional cycle, and the uncertainty over what TSI is “normal”, including spectral intensities, less “normal” volcanic aerosols and cloud composition, a 0.25 Wm-2 imbalance could easily be the real “normal” imbalance, since there can be a 1700-year ocean lag even if every forcing were suddenly fixed at a near perfect amount. With models having +/- 3 C “accuracy” on absolute “surface” temperature, they are just about a waste unless reset for current conditions, similar to an “equilibrium”, or at least an expected steady state, for a bunch of not-all-that-well-known conditions.

        So no matter what you call it, it kinda matters. You can’t determine abnormal until you know normal. Now if we didn’t have a bunch of geniuses claiming they know what normal is supposed to be, and that everything is abnormal, meaning it’s time to panic, you’d be right, it wouldn’t matter. Last I checked, though, that wasn’t the case.

      • Jim D, you say: “And it gets further from equilibrium the faster the forcing changes.”

        So you believe that there is (at least hypothetically) an equilibrium condition for the Earth’s climate? How do you reconcile your belief with the statement “that Earth’s climate never was and is unlikely to ever be in equilibrium”? Or do you disagree with that statement? What do you believe is the equilibrium surface temperature of the Earth?

      • The forcing is never steady enough for long enough to reach equilibrium. It can get close, like during the pre-industrial period, the forcing was only changing up and down by less than 0.5 W/m2, so the temperature only changed by a few tenths of a degree. To most that would be a steady climate. Now it is going to change by ten times that, so we would call that an unstable climate. It’s a matter of degrees.

      • We don’t know what pre-1979 changes in forcing were – and they can’t cognitively process the post-1979 data.

      • Change since pre-industrial seems about 1.5 degrees C.

        http://www.earth.lsa.umich.edu/climate/core.html

        Most of that natural.

      • Jim D, the statement “that Earth’s climate never was and is unlikely to ever be in equilibrium” means that a somewhat steady climate lasting a few hundred years is not an indicator of equilibrium. As I’m sure you know there are huge inertias in Earth’s climate system. In geological time a few hundred years is the blink of an eye.

      • Yes, a slow decline from the Holocene Optimum is seen as a response to a slow change in forcing from the current Milankovitch cycle. Temperature follows forcing in that sense, even when it is quite a weak forcing. We also see solar and volcanic forcings changing the temperatures on other time scales. There’s a strong correlation on a variety of scales, and the forcing is never steady.

      • doesn’t matter. you don’t need to know normal from abnormal. you just need to know that CO2 will warm the planet, not cool it.

        That is all you need to know.

      • Still not necessarily true.

        ‘A fifth very serious impact of climate change may be on ocean circulation. Palaeo-analogues and model simulations show that the Meridional Overturning Circulation (MOC) can react abruptly and with a hysteresis response, once a certain forcing threshold is crossed. Discussion on the probability of the forcing thresholds being crossed during this century lead to different conclusions depending on the kind of model or analysis (Atmosphere-Ocean General Circulation Models, Earth system models of intermediate complexity or expert elicitations) being used. Potential impacts associated with MOC changes within the marine environment include changes in marine ecosystem productivity, oceanic CO2 uptake, oceanic oxygen concentrations and shifts in fisheries. Adaptation to MOC-related impacts is very likely to be difficult if the impacts occur abruptly (e.g., on a decadal time-scale). Overall, there is high confidence in predictions of a MOC slowdown during the 21st century, but low confidence in the scale of climate change that would cause an abrupt transition or the associated impacts. However, there is high confidence that the likelihood of large-scale and persistent MOC responses increases with the extent and rate of anthropogenic forcing.’

        With the current low 65 degree north summer insolation – things could get very tricky in as little as a decade.

      • Mosher: “doesn’t matter. you don’t need to know normal from abnormal. you just need to know that CO2 will warm the planet, not cool it.”

      • CO2 warms the planet up to a point, and it doesn’t warm it uniformly. This is the reason that Dougs exist: because of trying to oversimplify a basic equilibrium (Ein = Eout at the TOA) without allowing for millennial-scale settling times, and having over-confident confidence intervals. That is just as whacked out as the Sky Dragons.

        You have the Dougs creating “effects” versus the Desslers creating data. Until they, as in both, don’t matter, it matters.

      • Name a time in the last 4.5 billion years when we have not been going up on a down escalator or down on an up escalator. The only question has been how fast each way and when will it switch for the next direction.

      • Pleeze stop, dougie. You are just pulling stuff out of Urananus, again.

    • Change is episodic – internally generated as a result of small changes. We don’t know what the change in albedo was – but this is the largest factor in climate variability by a very large margin.

  33. This more a sea ice comment than a stadium wave comment:

    “During winter, the Arctic’s atmosphere is very cold. In comparison, the ocean is much warmer. The sea ice cover separates the two, preventing heat in the ocean from warming the overlying atmosphere. This insulating effect is another way that sea ice helps to keep the Arctic cold.”
    “With more leads and polynyas, or thinner ice, the sea ice cannot efficiently insulate the ocean from the atmosphere. The Arctic atmosphere then warms, which, in turn influences the global circulation of the atmosphere.”
    http://nsidc.org/cryosphere/seaice/environment/global_climate.html

    They seem to be saying the sea ice goes away and that warms the atmosphere, though of course it’s more complicated than that. I think the biggest impact on Arctic atmospheric warming would be sea ice or the lack of it. But the atmospheric warming is really a sign of the Arctic ocean attempting to cool itself. While this would in the short term raise the Arctic SAT, if the Arctic ocean is trying to cool itself, will it at some point succeed with that and then build sea ice and then cool its SAT?

Arctic sea ice retreat is, as we’ve been warned, the heat coming out of the oceans once the insulation is removed. While we add insulation to the atmosphere with more CO2, in the Arctic we remove it from the ocean by reducing sea ice. In the long run, these two forms of insulation might work together. In a glacial we’d have low amounts of CO2 but plenty of sea ice insulation. Now, we have high amounts of CO2 and low amounts of ice insulation in the Arctic. If these two work together, having both effects, or neither, might be interesting.

34. This may be a little off thread, but it demonstrates the difficulties of practical predictions of supposedly well known things.

    One might assume that the Earth would have constant rotational speed, or at least that its rate of slowing down due to tidal friction would be constant, to a usable degree. The assumption would be wrong.

    The following is from an article written by the Director of the U.S. Naval Observatory, and a past head of the USNO’s Time Services Dept.

    “During the mid-1930s, astronomers concluded the Earth did not rotate uniformly, basing their findings on measurements of the most precise clocks then available. We now know that a variety of physical phenomena affect the Earth’s rotational speed. This caused the second to be redefined in 1960 in terms of the Earth’s orbital motion around the Sun. The new second was called the “Ephemeris” second and the time scale derived from this definition was called Ephemeris Time (ET).”

    Another quote –

    “Although we have accurate estimates of the deceleration of the Earth’s rotation, significant variations prevent the prediction of leap seconds beyond a few months in advance. This inability to predict leap seconds, coupled with the growing urgency for a uniform time scale without discontinuities, makes it appropriate to re-examine the leap second’s role.”

    Two points arise. The first is that observations show the Earth does not rotate uniformly. As an aside, I believe it slows down and speeds up erratically.

    The second is that even with astronomical observations going back to the 1600s, and accumulated scientific knowledge to date, it is impossible to predict leap seconds more than a few months in advance. And we are talking about a fairly massive body here – slowing down and speeding up enough to be of concern. A GPS system depends on accurate timekeeping, and microseconds are important, let alone a million of the little beggars!

    So, if accurately predicting the rotation of the Earth more than a few months in advance, at most, is impossible, how much more difficult must it be to predict weather and climate, if the atmosphere behaves chaotically?

If a butterfly flapping its wings in the Brazilian rainforest can cause a tornado in Kansas, imagine the effect the whole Earth slowing down or speeding up might have. Throw in a few factors such as continents slowing down and speeding up, changing direction at random, bobbing up and down without rhyme or reason, and it is easy to see why highly trained and qualified experts, backed up by billions in research and with access to previously unimaginable computing power, cannot forecast weather any better than a reasonably intelligent 12 year old, given an hour’s coaching.

So yes, analysis of a data series, whether it be corn futures, stock market movements, temperatures, or any number of other things, can often lead to the conclusion that it is composed, in part, of various regular waves. Unfortunately, the predictive power of such extracted waves is generally nil, as economists, hedge fund operators and others have discovered to their sorrow.

    You may well find patterns, waves, synchronicities and correlations wherever you look. It is in our nature, I believe.

    Just an opinion on the usefulness of waves extracted from the data produced by a probably chaotic system – the Earth on which we live.

    Live well and prosper,

    Mike Flynn.

    • No one assumes that the rotation of the planet is constant – indeed the LOD was included as a factor in the progenitor of this paper. It is caused by changes in ocean and atmospheric circulation – changes in regime that are seemingly random but completely deterministic.

      If the accuracy of prediction of these events is no better than tossing a coin – http://www.geomar.de/en/news/article/klimavorhersagen-ueber-mehrere-jahre-moeglich/ – it leads at least to looking for your keys where they are lost – not under the lamppost. Still less to remaining at the bar pontificating incoherently.

      • Rob Ellison,

        You wrote –

        “No one assumes that the rotation of the planet is constant – indeed the LOD was included as a factor in the progenitor of this paper. It is caused by changes in ocean and atmospheric circulation – changes in regime that are seemingly random but completely deterministic.

        If the accuracy of prediction of these events is no better than tossing a coin – http://www.geomar.de/en/news/article/klimavorhersagen-ueber-mehrere-jahre-moeglich/ – it leads at least to looking for your keys where they are lost – not under the lamppost. Still less to remaining at the bar pontificating incoherently.”

Oh well, in light of your unsubstantiated assertion that changes in the Earth’s rotation rate are caused by changes in ocean and atmospheric circulation, then the following must be just incoherent pontification –

        “The average length of the mean solar day since the introduction of the leap second in 1972 has been about 0 to 2 ms longer than 86,400 SI seconds. Random fluctuations due to core-mantle coupling have an amplitude of about 5 ms.”
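The arithmetic behind that quote is worth sketching. A toy calculation (assuming, purely for illustration, a steady 1.5 ms/day excess; the quote itself says the real excess wanders between 0 and 2 ms with ~5 ms random swings, which is exactly why the next leap second can’t be scheduled far ahead):

```python
# Back-of-envelope sketch of why leap seconds accumulate.
# ASSUMPTION: a constant 1.5 ms/day excess over 86,400 SI seconds;
# in reality the excess fluctuates, so this only gives a rough cadence.

EXCESS_MS_PER_DAY = 1.5   # assumed constant excess length of day
THRESHOLD_MS = 900.0      # leap seconds keep |UT1 - UTC| under ~0.9 s

days_to_leap = THRESHOLD_MS / EXCESS_MS_PER_DAY
print(f"~{days_to_leap:.0f} days (~{days_to_leap / 365.25:.1f} years) between leap seconds")
```

With those assumed numbers the drift reaches the 0.9 s threshold in well under two years, which is roughly the historical cadence of leap seconds; the point is that the timing, not the rough rate, is what defies prediction.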

        I apologise for not being clearer in my previous post. I was foolish enough to assume that you knew that there are more factors causing random fluctuations than only the ones you dogmatically assert, but I was wrong, yet again.

        As to your assertion that prediction of some unspecified events is better than tossing a coin, I don’t really understand you. Your link appears to follow the AFOMD principle – being at best tangentially related. I’m not sure what a climate model claiming to accurately predict the past is good for. Maybe to you it has some utility, to me maybe not so much.

        Keep fighting the good fight!

        Live well and prosper,

        Mike Flynn

      • The dynamic interaction of the solid Earth with its fluid envelope (atmosphere, oceans, and hydrology) provides one of the major sources for the manifold spatio-temporal variations in Earth rotation. The observed fluctuations in the motion of our planet are classically divided into three components: variations of the rotation speed, reckoned in changes in length of day (LOD), the motion of the spin axis in a reference frame tied to the Earth is known as polar motion, and changes in the orientation of this spin axis in a space-fixed reference frame are referred to as precession-nutation. Specifically, subdecadal and non-tidal changes in LOD are almost entirely related to atmospheric dynamics, while polar motion variability at periods from a few days to several years is mainly driven by the atmosphere, the oceans, and hydrology to a lesser extent. Earth’s nutational motion relates mostly to the gravitational interaction with other celestial bodies, but can be affected by quasi-diurnal atmospheric and oceanic excitation at the level of 0.1 mas (milliarcseconds). http://ggosatm.hg.tuwien.ac.at/rotation.html

The climatically important changes in ocean and atmospheric circulation which are the subject of both the stadium wave and my link are the dominant cause of change in LOD as it relates to changes in atmospheric angular momentum.

• I am pretty sure that Prof. Mojib Latif concluded that prediction of these regime changes – abrupt changes in 1976/1977 and 1998/2001 – is presently no better than tossing a coin.

        The problem in a chaotic climate is not one of quantifying climate sensitivity in a smoothly evolving climate but of predicting the onset of abrupt climate shifts and their implications for climate and society. The problem of abrupt climate change on multi-decadal scales is of the most immediate significance. A difficult problem – but one with a somewhat pressing need to solve.

This contrasts strongly with Flynn’s triple plus unscience cobbled together from snippets of ideas less than half comprehended, names that he imagines lend gravitas and richly embroidered with fanciful nonsense in a stilted formalism. Along with the inevitable accusations, protestations, disparagement and dissembling of course.


• Doug Cotton

If we were back in Roman times, but we had the technology and knowledge of today, we could have predicted the Medieval Warm Period, the Little Ice Age and the current warming period, which is to be followed by 500 years of long term cooling starting in about 100 years from now. Yes, we could have predicted even the peaks in the 60 year cycle around 1940, 2000, 2060, etc., and for all these predictions we would have used the inverted plot of the scalar sum of the angular momentum of the Sun and nine planets. See what I wrote about this back in 2011, and my predictions archived in August that year near the foot of my home page on my earth-climate dot com site.

      • The future like the past is chaotic – although in principle deterministic it is fundamentally unpredictable at present.

    • ‘hot polished silver lamp cover can be transferred to the adjacent air molecules by the collision process called diffusion (similar to conduction). The emissivity of silver is very low so we do not expect much radiation and, indeed, the fingers on my left hand do not feel any warmth. But the fingers above the lamp do feel considerable warmth. This means that the heat is being transferred to air molecules by diffusion and then the warm air molecules are rising straight upwards by convection’

Doug, I have an incandescent bulb: a tungsten filament at 3000 °C, a vacuum, a thin glass shell and a black-painted thermometer.
      How does the glass of the bulb warm?
      Why is my black thermometer warmer than the glass?

    • Mike Flynn – “You may well find patterns, waves, synchronicities and correlations wherever you look. It is in our nature, I believe.”

      What a joy to read! Yes, humans are excellent at pattern recognition and are fooled by randomness.

  35. Why don’t the people in Kansas just hire some Butterfly collectors and send them to Brazil?

  36. Judith, has anyone tried to look for a stadium wave signal in CO2 data? I bet it’s there. But like I said, I think something likely mutes and damps the CO2 in ice cores.
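One crude way to go looking (a sketch only, and very much not the paper’s M-SSA procedure: just a toy moving-average band-pass applied to synthetic data with a known multidecadal cycle planted in it; every parameter here is an illustrative assumption):

```python
import numpy as np

# Toy band-pass for a multidecadal component in an annual series.
# Synthetic data: a known 60-yr cycle buried in white noise, so we can
# check whether the crude filter recovers anything resembling it.

rng = np.random.default_rng(0)
years = np.arange(1900, 2015)
signal = 0.2 * np.sin(2 * np.pi * (years - 1900) / 60)   # planted 60-yr cycle
series = signal + 0.3 * rng.standard_normal(years.size)  # plus noise

def running_mean(x, w):
    """Centered running mean of odd width w (same length, edge-padded)."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")[:x.size]

# Rough ~20-80 yr band-pass: smooth at 21 yr, subtract an 81-yr smooth.
band = running_mean(series, 21) - running_mean(series, 81)

# How well does the recovered component track the planted cycle?
r = np.corrcoef(band, signal)[0, 1]
print(f"correlation of band-passed series with planted cycle: r = {r:.2f}")
```

On a century of data the filter does pull the planted cycle back out, but the same machinery applied to a series with no real cycle will also produce smooth wiggles, which is the caveat that hangs over any such search in CO2 or ice-core data.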

    • The main points are settled
      1. That it is warming
      2. That the effects of rising GHGs are easily large enough to account for the warming
      3. That manmade emissions are easily large enough to account for the GHG rise

      • 2020 Academy Awards

        Most important main character: the sun

        Most important supporting character: the oceans

        Most influential cameo appearance: CO2

      • Lewis and Curry showed that even a small sensitivity can allow GHGs to account for all the warming we have seen. Lindzen has done something similar by ignoring aerosols.

Can doesn’t have the definition of did. Reconstructions of both the N Atlantic and the Indian Oceans indicate the modern warm period is not unusual and is easily explained by changes in ocean heat transport. Aerosols should be ignored. There is no regional evidence that industrial aerosols change regional temperatures on an annual basis and in those regions where it shows a seasonal change it is during the dry season, read that as dust. Since reconstructions show the AMO influences monsoons in Asia that is no great surprise. Wrap it up. Cameo appearances are only good for laughs, not Oscars.

      • ‘In particular, the energy balance approach does not account for factors that do not directly relate to the energy balance, e.g. solar indirect effects and natural internal variability that affects forcing (although an attempt has been made in the Lewis and Curry paper to make some allowance for uncertainty associated with these factors) . Further, there was ‘something else’ going on in the latter 19th and early-mid 20th century that was causing warming, that does not seem to relate directly to external forcing. The paper does attempt to factor out the impact of the Atlantic Multidecadal Oscillation through the selection of base and final periods, but this is by no means a complete account for the effects of multi-decadal and century scale internal variability, and how this confounds the energy balance estimate of climate sensitivity.’

What they did was use the AR5 forcing to recalculate an outmoded concept.

      • Wow, they actually found one significant climate scientist who said it.

        LMAO. One. Next up, two.

‘The science is settled,’ Gore told the lawmakers. http://www.npr.org/templates/story/story.php?storyId=9047642

        There are versions of the science is settled everywhere you look.

      • Al Gore is not a climate scientist. There are not versions of it everywhere you look.

        And what people think it means is that all aspects of climate science are claimed to be settled, which is why people like you like to trumpet it.

      • steven

        most interesting distributor of weather -the jet streams

        tonyb

“The shift to a cleaner energy economy won’t happen overnight, and it will require tough choices along the way,” Obama said. “But the debate is settled. Climate change is a fact.”

It isn’t warming and likely won’t be for decades. One science paradigm is not merely not settled but palpably wrong. The policy is also wrong. Tough choices will not work in the real world – especially if the world is not warming.

      • Obama is not a scientist.

      • We have heard it – the central precepts are rock solid – the planet is warming and we are the cause.

None of that is beyond dispute – and the planet is likely not warming for decades. Learn the lesson or be on the wrong side of science. Not my problem.

Thus: “manmade emissions might be responsible for what (little) warming we’re sure of.”

        There might be an asteroid headed our way. That “science is settled” too.

        But probabilities! What policy changes are justified by what probabilities? That’s a political question. Which is why people with leftist political ideologies rally round the “world regulatory bureaucracy” memes, while people with more libertarian political ideologies focus on the uncertainties.

        And the uncertainties are huge. One °C/century, with, say, a meter (40in) of sea level rise by 2100, is effectively harmless. Considering that most of the actual temperature rise would be in northern hemisphere high-latitude mid-continental winter, even 2-3 °C/century would probably be fine.

        So, what is the probability of anthropogenic change large enough to be really harmful, and to justify the sort of political/bureaucratic intervention the left wants so much for other reasons? There’s no settled science about that!

        Oh, yeah, and what is the probability that even major problems can be solved without the regulatory nightmare? There’s no settled science about that either!

      • Jim D

        “2. That the effects of rising GHGs are easily large enough to account for the warming
        3. That manmade emissions are easily large enough to account for the GHG rise”

        I really really like your formulation here.

      • Steven Mosher | September 30, 2014 at 11:52 am |
        Jim D

        “2. That the effects of rising GHGs are easily large enough to account for the warming
        3. That manmade emissions are easily large enough to account for the GHG rise”

        How about, 2. That the effects of rising GHGs are easily large enough to account for the warming that is less than half of what most models project.

        3. That manmade emissions are easily large enough to account for the GHG rise which can account for the small rise in temperatures. ?

Remember, Jim D has made it clear in the past he has no qualms with overstating potential damage in order to inspire his vision of change. If you ask Jim D how much warming there might be, he is liable to use RCP8.5 and 4+ C just for inspiration, which isn’t “easily explained” by rising GHGs.

      • Jim D “That the effects of rising GHGs are easily large enough to account for the warming”.
        What warming would that be Jim D?

      • Matthew R Marler

        Jim D: The main points are settled
        1. That it is warming
        2. That the effects of rising GHGs are easily large enough to account for the warming
        3. That manmade emissions are easily large enough to account for the GHG rise

        those are your talking points. for me, what has been settled are:

        1′. The Earth’s global mean temperature has increased since the end of the Little Ice Age.

        2′. Diverse models have been fit to the data, some of which show the increase accounted for by CO2, some of which show half of the increase accounted for by CO2, and some of which show the increase totally independent of CO2 increase.

3′. Human emissions probably account for up to 90% of emissions.

        4′. Both CO2 increase and temperature increase have had effects that in the aggregate have been beneficial.

5′. The CO2 concentration may never double from its present value of 400 ppm.

6′. Any future warming from increased CO2 will occur more slowly than has been predicted by IPCC, Hansen, and other “alarmists”.

      • I prefer not to use models in this formulation. It is observation based. Try disputing the statements I made instead of what you might think about models.

      • Matthew R Marler

        Jim D: I prefer not to use models in this formulation. It is observation based.

        All of your statements are based on models.

      • Not sure what you mean. You don’t need a climate model to show any of these. People like Lewis and Curry, who don’t necessarily believe in models, have implicitly endorsed all of these statements based just on what they see.

      • The pause messes up your little formulation, jimmy.

        1. the warming stopped
        2. it ain’t warming
        3. manmade emissions have run amok

      • Don M, what did you choose as your base period and why? Think. Why do you think Lewis and Curry were wrong to conclude that even a small greenhouse effect accounts for the warming? If there are 3 things the actual scientists can agree on, it is the 3 I listed in those words.

      • I don’t need to choose no stinking base period, jimmy. You want the distraction of quibbling interminably about a base period, take it up with someone else.

        I am talking about the pause that is killing the cause. Everybody knows about the pause. Stop playing dumb.

      • Matthew R Marler

        Jim D: You don’t need a climate model to show any of these.

There is no model-free estimate of the warming expected from increases in CO2. Your point 2: “That the effects of rising GHGs are easily large enough to account for the warming.” Are you claiming a model-free estimate of the warming effects of rising GHGs?

        You have changed your language from “models” to “climate models”: I prefer not to use models in this formulation. It is observation based. Try disputing the statements I made instead of what you might think about models.

        I disputed the statements that you made.

• 1. The stadium wave implies the potential for non-warming – or even cooling – for decades.
  2. The rate of warming is 0.07 degrees C/decade – and is not all GHG.
  3. A further cooling influence is quite likely as the Sun cools from a 1000 year high.
  4. Could a cooling world halt increases in CO2 in the atmosphere?
  5. But hey – climate is wild – utterly unpredictable.

    • X Anonymous > “http://michaelcrichton.com/pdfs/GlobalWarmingDebate.pdf”

Interesting that the sceptic team won the “Global Warming is not a Crisis” debate (Brian Lehrer on the Intelligence Squared show), but there was no MSM coverage. If the sceptics had lost, it would be all over the news. I searched the title with the New York Times, Chicago Tribune, LA Times, Boston Globe, and all I found was one blog post in the NYT.

• I thought no mainstream climate scientists ever claimed that “the science is settled”?

        RICHARD C.J. SOMERVILLE: ” …The science community today has impeccable settled science, despite what you have just heard, that demonstrates the reality of global warming and its primary origin in human activities….”

        Not just settled science, but impeccable settled science.

        And this quote from Somerville is just precious:

        “There‘s never been as thorough and vetted a process for summarizing science precisely for the point of making input to policy makers.”

        Topped only by this about the IPCC:

        “We‘re not a clique defending what we said six years ago.”

        There is something endearing about seeing such blind, unquestioning, childlike faith. Something scary too.

      • GaryM – “… unquestioning…faith…scary too.”

        For some, it is religion, for others a vested financial or political interest.

        There is a process of associative activation and associative coherence among ideas such as environmental pollution, activism, left-wing politics, social justice, and many other trigger words, phrases, narratives, and images. For example, a person that is concerned about the environment, votes democratic, cares about poor people, etc, sees an image in Al Gore’s documentary and that triggers an associative cascade. The idea is integrated into their existing belief system and to try to dislodge it creates massive cognitive dissonance. It’s now firmware.

It’s fascinating, this look into the past of the climate debate, and how little has changed.

I get why Gavin Schmidt decided to avoid actual debates like the plague though.

    • JCH – “RC covered it.”

RC? You mean the Real Climate website? That doesn’t count. Most people get their news from television, radio, newspapers, and websites like The Huffington Post (gag) or The Daily Kose (dose). Those are the same people that get frightened into voting for candidates that will implement CAGW-inspired economic policies, much to their own detriment, by MSM CAGW scare stories. It is passive aggressive fraud, IMO.

NPR covered it. CA covered it. As far as I can find, FOX did not.

There are not versions of “the science is settled” from climate scientists everywhere you look. If you see them, that would be confirmation bias.

        And we are hovering just under the warmest temperature in the instrument record. Warmer now than 1998.

    • Rob Ellison – “There are versions of the science is settled everywhere you look.”

      Especially if you are looking for it! Ye old confirmation bias.

• You have to look for it? Obama is a poster boy – and the best they can do is say that climate scientists deny that climate science is settled. Yet they are 95-99% certain that most warming since 1950 is anthropogenic. It’s a joke, right?

37. I am not too sure how useful the ‘stadium wave’ is because it is linked to the AMO. But the AMO, like ENSO, does not have periodicity, so is unpredictable. But ENSO is real enough – ask any Australian farmer. We know it is there but we can’t say why. Yes, the temperature record of the 20th century is unique. Nothing like it has been seen, before or since. Because of its uniqueness we know it is not a part of a stationary series, so we can go further and say that it has an on/off character. The only on/off function that could fit the data would be a change of the vibration mode of the CO2 molecule, by the loss of a neutron at the lowest IR temperature mode (about 14 to 15 microns). When this happens, some of earth’s heat could escape into space, reducing atmospheric heating which had been climbing steadily at 0.15C/decade since 1910. Taking into account the decades it takes for the atmosphere to heat the oceans, the rest of 20th century heating is readily explained. See my web site underlined above.

• The stadium wave is linked to 20 odd indices, but the ones that stand out in the temperature record are ENSO+PDO. A cool PDO is associated with more frequent and intense La Nina and vice versa. It is an abruptly changing system with exact correspondence with changes in the trajectory of global temperatures.

      We get La Nina dominance to 1976, El Nino to 1998 and La Nina again since.

      The change in surface temperature is both changes in heat uptake to the oceans and changes in cloud and water vapour associated with these changes in ocean and atmospheric circulation.

There are actually very few heavy carbon isotopes in the atmosphere. They are formed from nitrogen as a cosmic ray spallation product. When combined with oxygen to form CO2 – it makes no difference to the shape of the molecule and therefore emission and absorption properties – which is the greenhouse effect. The heavy carbon isotope will beta decay back to nitrogen with a half life of 5730 years.

      Changes in surface temperature trajectories are the result of changes in ocean and atmospheric circulation – which have a period of 20 to 40 years.

• Alexander: “But the AMO, like ENSO, does not have periodicity, so is unpredictable.”
  I think Wyatt and Curry agree with you. Wyatt 2014: “While regularly repeating, it is not strictly periodic.” Also, Wyatt and Curry 2013: “As a result, climate regimes – multiple-decade intervals of warming or cooling – evolve in a spatially and temporally ordered manner. While not strictly periodic in occurrence, their repetition is regular – the order of quasi-oscillatory events remains consistent.”

      • It’s in our DNA to observe patterns.
        But we can also squint and see them where they are not.

        Here is the chart from the ‘Physics of Climate’. I think it extrapolates a lot, but worth considering:

        The spikes, diurnal and annual, are periodic with known forcing of earth’s orbit.
        The variations are periodic because the forcings are periodic.

The 3-7 day variance shows up as small, from synoptic waves (cold fronts and the like) passing. We know that these events -can- be much more intense, but on the average perhaps not.

The PDO and ENSO and other variations are real, but if they are the result of chaotic fluctuations, they are not periodic, not predictable, and as apt to persist as to oscillate or revert to the mean.

  38. Tomas Milanovic

    I am unfortunately rather busy these last months but one more time, congratulations Judith (& all) to a new paper .
    I am impressed – Kravtsov and Tsonis ….. that is some heavy lifting :)
    I will have to read that.
As Mann learned some PCA from Wegman & McIntyre, it is not surprising that he would “attack” a paper based on PCA too (actually M-SSA, but that is a detail).

So kudos, and I am printing the paper as we speak.

39. Mosher-gibberish-counter: The only way it would not matter what is normal and what is abnormal is if it is cost-free and painless to switch from fossil fuels. Otherwise, a little extra warming might easily be good for us, as has been the case for all of recorded history.

  40. In the paper the distinctions between the results of processing the observations and simulations are clearly laid out. Hooks for further work by the authors and others are provided in context. … nice paper…first time I’ve looked at the stadium wave (other interests) and this was an easy entry.

  41. What came first, the wave or the motive? And, is a wave merely phenomenal, like water running downhill, whereas motive is the passion of causes to which reason is a slave? –e.g.,

“Strong equatorial oceanic Kelvin wave activity has been present since August 2013. Upwelling Kelvin waves produce cooling… and downwelling waves produce warming… An upwelling wave that was triggered in May is currently propagating across the eastern Pacific. This wave has produced a sharp drop in oceanic heat content across the central and east-central equatorial Pacific.” ~NOAA

  42. The modelling approach is inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4
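The initial-condition point above can be illustrated with a toy sketch (the classic Lorenz-63 system, not any climate model; the parameters, start point, perturbation size, and Euler step are all arbitrary choices for demonstration):

```python
# Sensitivity to initial conditions: two Lorenz-63 trajectories started
# 1e-9 apart end up macroscopically separated. Simple Euler stepping with
# the classic chaotic parameter values.

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-9, 1.0, 20.0)            # perturbed by one part in a billion
for _ in range(8000):                  # integrate 40 time units
    a, b = lorenz_step(a), lorenz_step(b)

gap = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(f"separation after 40 time units: {gap:.3g}")
```

The billionth-part perturbation grows until the two trajectories are as far apart as any two random states on the attractor, which is the precise sense in which fine-grained initial conditions limit iterated prediction.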
The IPCC climate models are further incorrectly structured because they are based on three irrational and false assumptions. First, that CO2 is the main climate driver. Second, that in calculating climate sensitivity, the GHE due to water vapour should be added to that of CO2 as a positive feedback effect. Third, that the GHE of water vapour is always positive. As to the last point, the feedbacks cannot be always positive otherwise we wouldn’t be here to talk about it. For example, an important negative feedback related to tropical cyclones has been investigated by Trenberth, see:
    http://www.cpc.ncep.noaa.gov/products/outreach/proceedings/cdw31_proceedings/S6_05_Kevin_Trenberth_NCAR.ppt
    (See Fig2)
    Models are often tuned by running them backwards against several decades of observation. This is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially in the key millennial range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure.
    In summary, the temperature projections of the IPCC and Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted.
    Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths combined with endogenous secular earth processes such as, for example, plate tectonics. It is not possible to forecast the future unless we have a good understanding of the relation of the climate of the present time to the current phases of these different interacting natural quasi-periodicities which fall into two main categories.
    a) The orbital long wave Milankovitch eccentricity, obliquity and precession cycles which are modulated by
    b) Solar “activity” cycles with possibly multi-millennial, millennial, centennial and decadal time scales.
    The convolution of the a and b drivers is mediated through the great oceanic current and atmospheric pressure systems to produce the earth’s climate and weather.
    After establishing where we are relative to the long wave periodicities, we can then look at where earth is in time relative to the periodicities of the PDO, AMO and NAO and ENSO indices and based on past patterns make reasonable forecasts for future decadal periods.
    For forecasts of the coming cooling based on the 60-year and especially the main 1000-year periodicities obvious in the temperature record, and using the neutron count and 10Be data as the best proxy for solar “activity”, see
    http://climatesense-norpag.blogspot.com

  43. Tomas Milanovic

    Ok squeezed out a 4 hours window. Did the homework :)

    The paper belongs to what is called an iceberg paper, because 9/10 of the content is invisible.
    In this comment I will focus on the invisible 9/10, i.e. on M-SSA.

    M-SSA is the general 3-dimensional case of spectral analysis, where the 3 dimensions are time, space (when discretised, space is called a “channel”, hence “multichannel” SSA) and delay.
    SSA is a particular 2D case of M-SSA with only 1 channel (1 point).
    EOF, PCA etc. are all particular cases of M-SSA.
    I am writing these general considerations for interested readers who would like to have a better idea of what all these invisible 9/10 are about.

    The important thing to know is that SSA and M-SSA are duals of temporal and spatio-temporal chaos, respectively.
    In a seminal paper, Vautard and Ghil 1989, with 782 citations ( http://adsabs.harvard.edu/abs/1989PhyD…35..395V ), recognized the equivalence between classical lagged-covariance analysis of statistical origin and the delay-embedding techniques (see Takens’ embedding theorem) used to analyse attractors in temporally chaotic systems.

    This is very important, because it means that SSA is not just your average statistical tool: it goes deeply into the dynamics of the underlying system.
    For instance, it allows one to determine the dimension of the attractor, the number of independent degrees of freedom and the Lyapunov spectrum, and to reduce the possibly infinite-dimensional dynamics to a finite-dimensional manifold (attractor).

    It is easy to understand why this duality works from the mathematical point of view.
    Temporal chaos is defined by N (N > 2) first-order nonlinear ODEs in N variables.
    It is always possible to transform this system into a single Nth-order ODE in 1 variable.
    And that means that the whole (N-dimensional) dynamics of the system is encoded in a single variable, and it is possible to extract it by the delay method.
    To get a flavour of these approaches one can look at Albano and Muench 88 ( http://repository.brynmawr.edu/cgi/viewcontent.cgi?article=1001&context=physics_pubs )
    There has been much work done on this issue, starting in the late ’80s.
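
    Tomas’s sketch of the delay method can be made concrete in a few lines. This is an illustrative toy (plain Python; the window size m and lag tau are arbitrary choices of mine, not from any of the papers discussed):

```python
import math

def delay_embed(series, m, tau):
    """Build m-dimensional delay vectors (x(t), x(t+tau), ..., x(t+(m-1)tau))."""
    n = len(series) - (m - 1) * tau
    return [[series[t + k * tau] for k in range(m)] for t in range(n)]

# A single observable of a 2-D oscillator: x(t) = sin(0.1 t).
x = [math.sin(0.1 * t) for t in range(500)]

# With tau near a quarter period, the pairs (x(t), x(t+tau)) trace out a
# closed loop: the hidden second coordinate (the "velocity") is recovered
# from the delays alone.
vectors = delay_embed(x, m=2, tau=16)
```

    In Takens’ setting the same trick, with a large enough embedding dimension m, reconstructs the attractor of an N-dimensional chaotic system from a single scalar record.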

    M-SSA is a natural extension to spatially extended systems, obtained by adding a 3rd, spatial dimension.
    Unfortunately, spatiotemporal chaos is much more difficult than temporal chaos, if only because one has to replace a finite-dimensional Euclidean phase space by an infinite-dimensional Hilbert space (one replaces an ODE system by a PDE system).
    So while the duality between spatiotemporal chaos and M-SSA is much less well understood, it probably holds too.
    For a must read, consult Groth and Ghil 2011 ( http://web.atmos.ucla.edu/tcd/PREPRINTS/MSSA+PS-PRE_vf.pdf ).
    As this paper sits squarely inside the domain of the Kravtsov et al. paper, as well as of the stadium wave theory, it should have been a mandatory reference.
    As a bonus this 2011 paper refuted Mann before his paper was even written.
    Indeed as the M-SSA results can be interpreted dynamically, it is obvious that the eigenvectors with the largest eigenvalues are dominating the system’s dynamics and therefore can’t be “artifacts” of “sampling”.

    The 2 points that I would challenge are:

    – the choice of indexes.
    It is neither obvious nor necessary that the 7 chosen indexes are proxies of the full spatio-temporal dynamics. If M-SSA says that there are cycles, then there are cycles.
    But it is important to understand that these index cycles describe a system of 7 interacting, largely artificial oscillators. How one goes from there (or even whether it is possible to go from there) to an ordinary space of a spherical surface needs to be explained and confronted with experience. I am not sure that it can be easily done, or even that it can be done at all.

    – “Why detrend?” Especially, why detrend when one is using M-SSA?
    It seems to me that it defeats the very purpose of using M-SSA.
    M-SSA finds a data-adaptive basis set of eigenvectors (here eigenvector = function). See e.g. the Groth & Ghil paper quoted above, which explicitly gives the first eigenvectors in a case study.
    And now we suddenly force on the data a special basis function (x = at + b) which is justified by nothing. If anything can create artifacts, then this is it.
    Why not use M-SSA as intended, i.e. without any prior data manipulation?
    If one is interested, one can always compare afterwards the eigenfunctions with and without detrending, even if I don’t see what relevance that could have.
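
    The worry about the imposed basis can be illustrated with a toy example (plain Python; the series and numbers are hypothetical): when a record covers only a fraction of a slow oscillation’s period, the least-squares line absorbs much of the oscillation itself, so the “detrended” residual no longer contains what a data-adaptive method is supposed to find.

```python
import math

def detrend_linear(y):
    """Subtract the least-squares line a*t + b from a series."""
    n = len(y)
    t_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    a = (sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y))
         / sum((t - t_mean) ** 2 for t in range(n)))
    b = y_mean - a * t_mean
    return [v - (a * t + b) for t, v in enumerate(y)]

# A slow oscillation of amplitude 1, sampled over ~1/3 of its period,
# looks mostly like a trend; detrending then strips most of it away.
y = [math.sin(0.01 * t) for t in range(200)]   # period ~628 samples
residual = detrend_linear(y)
```

    In this toy the residual’s peak amplitude comes out well below the true amplitude of 1: one concrete way the fixed x = at + b basis can distort the very oscillation under study.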

    P.S
    Judith, I would suggest that you try to get M. Ghil on the team. The “stadium wave” theory is moving more and more into territory where I consider M. Ghil the best living expert.

    • Tomas, “Why not use M-SSA as intended, i.e. without any prior data manipulation?”

      How about why is so much data manipulated to create “oscillation” indexes to begin with?

      That appears to be due to the blend of weather forecasting and climate “projecting”. The PDO is also an extremely noisy “oscillation”. It looks to me that by not using the existing “indexes” you are venturing into a no man’s land, probably a better place, but with not many around to hear you.

      • Tomas Milanovic

        Well, Captn, M-SSA doesn’t “create” oscillations.
        Detrending has a bigger potential to “create” oscillations, though it is not certain to do so in every case.
        But when M-SSA tells you that there are cycles, then you can be pretty sure that there are cycles (this is like Fourier spectral analysis).
        Now don’t imagine that the oscillations we are talking about here are some well behaved, friendly sines and cosines.
        Of course there is noise, and of course they are not periodic.
        Using this word just means that some parameter is going up and down in an irregular way inside a bounded domain.
        These indexes obviously oscillate, and they certainly oscillate when they are not “manipulated” (detrended) too.

        I really suggest you have a look at the Groth & Ghil 2011 paper. It explains exactly what M-SSA is doing, and what the eigenfunctions are (and no, they detrend nothing).

      • Matthew R Marler

        captdallas 0.8 +/- 0.2: How about why is so much data manipulated to create “oscillation” indexes to begin with?

        M-SSA does not “create” oscillations any more than Fourier analysis “creates” oscillations; it merely decomposes them into orthogonal functions computed from the data, if the oscillations are there. Detrending would be useful if you knew for a fact a priori that the trend was there; otherwise, it imposes a structure on the solution that may not be justified.
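
        The point that the eigenfunctions are computed from the data rather than imposed on it can be made concrete. Below is a minimal sketch (plain Python; the window length is an arbitrary choice of mine, and power iteration stands in for a full eigendecomposition) of the basic SSA step: the leading eigenvector of the lag-covariance matrix. For a pure oscillation, that eigenvector is itself oscillatory at the data’s own frequency.

```python
import math

def ssa_leading_mode(series, window, iters=300):
    """Leading eigenvector of the lag-covariance matrix (basic SSA step)."""
    n = len(series) - window + 1
    rows = [series[i:i + window] for i in range(n)]          # trajectory matrix
    cov = [[sum(r[i] * r[j] for r in rows) / n               # C = X^T X / n
            for j in range(window)] for i in range(window)]
    v = [1.0] * window
    for _ in range(iters):                                   # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(window)) for i in range(window)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

x = [math.sin(0.2 * t) for t in range(400)]   # period ~31 samples
mode = ssa_leading_mode(x, window=60)
# The data-adaptive eigenvector oscillates at the series' own frequency;
# nothing in the computation injects a cycle that was not in the data.
```

        Detrending first, by contrast, changes the series that the decomposition sees before any eigenvector is computed.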

      • Tomas, “Well Captn M-SSA doesn’t “create” oscillations.
        Detrending has a bigger potential to “create” oscillations but it is not sure in every case.”

        The “oscillations” used were created prior to application of M-SSA. For Wyatt and Curry to use the AMO and PDO they had to use the accepted form. I agree that detrending should be avoided, but when the “data” is already detrended, they are kind of stuck.

    • Matthew R Marler

      Tomas Milanovic: In a seminal paper Vautard and Ghil 1989 with 782 citations ( http://adsabs.harvard.edu/abs/1989PhyD…35..395V ) realize the equivalence between classical lagged covariance analysis of statistical origin and embedding delayed techniques (see Takens embedding theorem) used to analyse attractors in temporally chaotic systems.

      I could not get that link to work.

    • Matthew R Marler

      Tomas Milanovic, thank you for the Groth and Ghil link. Here is the abstract, for those thinking it might be worthwhile to get it (I second your recommendation.)

      Multivariate singular spectrum analysis and the road to phase synchronization

      We show that multivariate singular spectrum analysis (M-SSA) greatly helps study phase synchronization in a large system of coupled oscillators and in the presence of high observational noise levels. With no need for detailed knowledge of individual subsystems nor any a priori phase definition for each of them, we demonstrate that M-SSA can automatically identify multiple oscillatory modes and detect whether these modes are shared by clusters of phase-and-frequency locked oscillators. As an essential modification of M-SSA, we introduce here variance-maximization (varimax) rotation of the M-SSA eigenvectors to optimally identify synchronized-oscillator clustering.

      Notice that the result of the analysis does not imply the existence of coupled oscillators; it just describes the statistical behavior of the coupled oscillators if they are the mechanism generating the phenomenon measured by the data.

      “Varimax” rotation, by that name, was invented by psychologists who developed the method of factor analysis. It isn’t the only method of rotation, and it isn’t guaranteed to get the best result in every case.

    • I apologize if this is a trivial question, but is your point 1 related to figure 3? It’s quite a technical paper and I don’t fully understand how they get these detailed maps; I thought they used temps that were averaged across a large area. I assume that to get a good grasp of the dynamics you need at least a decent spatial grid. This M-SSA appears very useful, although I don’t understand the fuss about a stadium wave. Is it really surprising that the (oscillating) temperatures in different regions are coupled and somewhat out of sync?

  44. Tomas Milanovic

    The “oscillations” used were created prior to application of M-SSA. For Wyatt and Curry to use AMO and PDO they had to use the accepted form. I agree that detrended should be avoided, but when the “data” is already detrended, they are kind of stuck.

    Captn, I am not sure I understand what you want to get across.
    The AMO and PDO are just indexes, i.e. time series of some scalar variable constructed in a certain way (averages, residuals, whatnot).
    They are what they are.
    Of course, being bright, you constructed them in a way that you hope captures the whole (or most of the) dynamics. You took an infinity of points and selected a single function of them, hoping it is representative of the infinity, and generally you have an argument for why it should/could be so (that’s what EOF may be for).

    Now if you analyse the indexes, you observe oscillations – you didn’t “create” any property of the time series, you just discovered what was intrinsically inside, like some oscillating eigenfunctions etc.
    Well yes, if you detrend, you analyse detrended series, which is potentially very bad if you used M-SSA.

    But you are not “stuck”; you just go back and use the data without detrending and see what it gives.
    Hint: it will again be oscillations, but potentially very different from the detrended ones.

    • There would be a difference, but the “Stadium Wave” appears to be more about sequencing than magnitude, so there shouldn’t be much difference.

      The problem with sequencing is without understanding cause you can’t determine how reliable the predictions might be. The sequencing can fall apart unless there is another similar perturbation to start it over.

      That compares the 40-year (480-month) correlation between the tropical SST and the raw monthly AMO with the seasonal cycle removed. If you don’t remove the seasonal cycle you get a strong inverse correlation. That indicates that the first part of the AMO was a “global” issue, likely volcanic- and/or solar-forced, and the second half is a lagged settling response, which would be more of a natural “oscillation”, or response to a perturbation.

      If you were using M-SSA to compare the North Atlantic SST (roughly the AMO) to the Tropical Ocean SST, what type of oscillation do you think it would find?

      All I was saying is the pre-defined “oscillations” limit the analysis the W&C can do without redefining indexes.

  45. The major quasi-periodicities are available by simple inspection of the temperature and driver data, without any fancy mathematical analysis – which generally obscures that which is clearly visible in the basic temperature and driver data.
    see figs 4,5,9,10,12,14,15 and 16 at
    http://climatesense-norpag.blogspot.com

  46. Tomas Milanovic

    All I was saying is the pre-defined “oscillations” limit the analysis the W&C can do without redefining indexes.

    Well of course.
    That’s what I was saying right in my first post where one of the 2 points I was challenging was the index choice.

    Now there are solutions to this kind of problem. The only thing to show is that one can reconstruct the whole spatial field with a single function (index).
    As I already wrote, EOF typically answers this kind of problem.

    Again, this is like reconstructing N-dimensional dynamics with a single one-dimensional time series, where I explained how and why it works for temporal chaos.
    The fact that spatiotemporal fields are more difficult to handle does not mean it is not feasible.

  47. Now we’re headed into Solar Cycle 25, after the sun has just recently undergone one of its oddest magnetic reversals on record… the sun’s magnetic poles are out of sync… Several solar scientists speculated that the sun may be returning to a more relaxed state after an era of unusually high activity that started in the 1940s (see Robt. Lee Hotz, WSJ: ‘Strange Doings on the Sun’). Perhaps we will come to understand what solar quiescence means: Solar Cycle 25, peaking around 2022, could be one of the weakest in centuries (Science@NASA: ‘Long Range Solar Forecast’). Compared to the past, what if solar activity will for a while be as still as a butterfly’s wing? What might the repercussions be for the future of a civilization if we fear global warming more than cooling when possibly a rapid cooling is on the way?

    Global warming (i.e., the warming since 1977) is over. The minute increase of anthropogenic CO2 in the atmosphere (0.008%) was not the cause of the warming—it was a continuation of natural cycles that occurred over the past 500 years.

  48. “If AMO is linearly detrended, is there, or is there not, a vestige forced signature imprinted upon the residual, thereby exaggerating the perceived role of internal processes? ”

    Detrending the AMO signal exaggerates the cooling rates and reduces the warming rates. The critical issue though is the sign of climate forcing in respect to the AMO. Increased forcing of the climate is directly associated with positive NAO/AO conditions, and positive NAO/AO conditions are associated with a cold AMO mode. This naturally separates a warm AMO mode from theoretical GHG forcing, but begs for a solar explanation for the increase in negative NAO/AO since 1995.

  49. No matter how sophisticated the mathematical technique of data analysis may be, its results can be no better than the quality of the data allows. And it always behooves the analyst to demonstrate that the data satisfy the underlying (often tacit) analytic assumptions.

    With global historical data in many regions being more the product of numerical guesswork than of reliable measurements prior to the satellite era, the indications of multi-decadal oscillations provided by various SST indices, in particular, are tenuous at best. Reliable records an order of magnitude longer would be required to establish the physical claims made under the “stadium wave” rubric. Just because MSSA may be useful in analyzing synchronization of temporal chaos doesn’t mean that the indexed variables truly manifest such behavior. And there’s no credible indication that a physical wave of any kind propagates coherently around the globe at some fixed phase speed.

    No doubt, there are real multi-decadal oscillations in various climate variables that show various levels of cross-spectral coherence and different phases in different frequency ranges. But, with many regions showing little variance in the multi-decadal range, that’s a far cry from the ambitious claim of a truly global “stadium wave.”

    • Your point about historical data is well taken. OTOH, Dr. C. makes predictions, so those can lend credence or detract from the hypothesis.

    • John S. | September 30, 2014 at 8:39 pm

      But, with many regions showing little variance in the multi-decadal range,

      Most of the regions around the planet show these kinds of variances in min temps, but not max temps.

      Mid-latitude Northern Hemisphere stations (24.950–49.410 Lat, 180.000 to −8.000 Lon; and 24.950–49.410 Lat, −67.000 to −124.800 Lon), divided into Eurasia (32M samples) and the US (24M samples).

      Mid-latitude Southern Hemisphere stations (−23.433 to −66.562 Lat, −30.000 to 180.000 Lon; −23.433 to −66.562 Lat, −30.000 to −100.000 Lon; and −23.433 to −66.562 Lat, −100.000 to 180.000 Lon), divided into South America (4.6M), Africa (1.8M) and Australia (62K).

      • Mi Cro’s got a stash of high grade ore and is finding nugget after nugget after nugget.
        =======

      • Where do you see noteworthy components in the MULTI-DECADAL range in the results that you present?

      • Where do you see noteworthy components in the MULTI-DECADAL range in the results that you present.

        Huh? There are multi-decadal changes in the day-over-day change in minimum temp at different times the world over. They are significant, and these changes represent the basis of the entire change in average temps: while day-over-day max temps change by hundredths to thousandths of a degree, some of these min-temp changes are multiple degrees (though the amplitude is strongly influenced by sample size).

        http://wattsupwiththat.files.wordpress.com/2013/05/clip_image020_thumb1.jpg?w=797&h=546
        As an example, here is the day over day difference in max temp for the US. It’s the best sampled location on the planet. Eurasia isn’t bad, southern hemisphere is pretty sparse.

        Follow the url in my name to sourceforge, I have temps and day over day changes for the globe cut into various shapes and sizes.

      • In the present context, “multi-decadal” refers to the period of spectral components–not just the mere appearance of some disturbance in multiple decades of the record. One has to do power spectrum analysis to establish noteworthy presence of such oscillations.
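
        That last step can be sketched minimally (plain Python; a naive O(n²) transform on a toy series of my own choosing, not an FFT on real station data): a periodogram pins multidecadal power to a definite spectral bin, which visual inspection of the record cannot do.

```python
import cmath, math

def periodogram(x):
    """Power spectrum |DFT|^2 / n of a mean-removed series (naive O(n^2))."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    return [abs(sum(v * cmath.exp(-2j * math.pi * k * t / n)
                    for t, v in enumerate(x))) ** 2 / n
            for k in range(n // 2)]

# 120 "years" of a 60-year oscillation: the power concentrates in
# bin k = 2, i.e. 2 cycles per record, a genuinely multidecadal peak.
years = [math.sin(2 * math.pi * t / 60.0) for t in range(120)]
power = periodogram(years)
```

        A spectral peak like this, not the mere appearance of bumps in several decades of data, is what establishes a multidecadal component.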

  50. The principal components of the drivers of climate change are different at different time scales. Check Fig 4 at
    http://climatesense-norpag.blogspot.com
    This shows clearly where we are relative to the Milankovitch cycles – i.e. we are on the way down to the next ice age.
    The Milankovitch cycles are then modulated by solar activity cycles of varying lengths – Most important for climate forecasting are the 60 year and especially the 1000 year cycle. For the latter see Figs 5 ,9 , and for the former Figs 15 and 16 at
    http://climatesense-norpag.blogspot.com
    The catastrophic schoolboy error of the modelers is to try to forecast future trends based on 70 years or so of data when the most pertinent time scale is millennial. This is exactly like taking a temperature trend from, say, February to June and then projecting it ahead in a straight line for 20 years or so – basically bonkers!!
    Judith is still chiefly occupied with these shorter term data sets when the real action is at the millennial level.

  51. “it’s a pretty good bet that we are going to go nearly a quarter of a century without warming.” ~Patrick Michaels (2013)

  52. Why viewers stilll maake usee off too read neqs papers when iin this technological globe all is accessibl
    on web?

    • …because, climatology has been likened to the ancient science of astrology?

    • The smell of the ink, the feel of the paper. Naw, the ads don’t jiggle.
      ================

    • Online material is no match for a broadsheet newspaper, the mode of use and experience are quite different. And it’s not easy to fold the screen as you want to.

    • Thurman – Why read newspapers when everything is on the web? (translated).

      The ergonomics, the sports page, the funnies, and the bird cage. Also, you can use newspapers for mulch, fish and chips, and dog training. You can hide behind the newspaper when you need to avoid someone at the transit station. You can make a paper hat; it doesn’t need batteries or recharging. You can read it in the sun, and if it is too sunny you can use it as a parasol. You can share a section with a new friend. It doesn’t fall out of your pocket into the toilet when you lean over to flush. You can make paper airplanes and toss them at the boring physics teacher when he turns his back – a drone for a drone (it’s in the Bible). You can lie on it on the cold ground – it is a good insulator (better than CO2, in a pinch) – and take a nap in the park. You can tear it into strips and put them in a box to keep a baby bird warm after it falls out of the nest. You can do all these things with a newspaper, but you can’t do them with a smart phone.

    • We need less news and more science – e.g.,

  53. ‘Using the synthetic temperature histories, we also show that certain procedures used in past studies to estimate internal variability, and in particular, an internal multidecadal oscillation termed the “Atlantic Multidecadal Oscillation” or “AMO”, fail to isolate the true internal variability when it is a priori known. Such procedures yield an AMO signal with an inflated amplitude and biased phase, attributing some of the recent NH mean temperature rise to the AMO. The true AMO signal, instead, appears likely to have been in a cooling phase in recent decades, offsetting some of the anthropogenic warming. Claims of multidecadal “stadium wave” patterns of variation across multiple climate indices are also shown to likely be an artifact of this flawed procedure for isolating putative climate oscillations.’
    http://www.meteo.psu.edu/holocene/public_html/Mann/articles/articles/MannEtAlGRLPreprint.pdf

    This paper is of course the answer to the wrong question. The problem was the signal itself and not the patterns of connection – and not detrending per se. Specifically, the problem concerns the putative removal of a forced component from the multi-decadal oscillation. Mann argues that the changing sulphate load in the North Atlantic is an anthropogenic forcing that masks a downturn in the temperature contribution from a warm AMO. The problem with the Mann analysis is that the data on North Atlantic sulphates are largely supposition. Neither addresses causality of regime variability in any fundamental way.

    The consequences of these changes manifest in changes in AMOC – for which there is limited information. It seems related to both the AMO and NAO – as well as in spinning up or down the sub-polar gyres in all the oceans.

    The observing system shows a decline in AMOC since 2004 – along with a negative-trending cumulative NAO and a modestly cooling North Atlantic since the late 1990s.

    ‘During the time of the 26 N array observations there has been a predominantly negative NAO (Fig. 7). Associated with the negative NAO is a tripole SST pattern with cooler mid-latitudes and warm subtropics. Cunningham et al. (2013) suggested that the AMOC has a role in setting sub-surface temperature anomalies, which have been linked to re-emerging SST patterns and subsequent anomalies in the NAO (Taws et al., 2011). The results presented here are consistent with AMOC driven changes to the SST tripole pattern but they are not sufficient to conclude a causal relationship. Li et al. (2012) found an anti-correlation between sea-surface height (SSH) between 30 and 50 N in the Atlantic and accumulated (i.e. time integrated) NAO. If SSH changes primarily reflect variations in heat content, then this also supports the association of reducing AMOC and negative NAO.’ http://sunburn.aoml.noaa.gov/phod/docs/Smeed_2013.pdf

    The NAO has in turn been linked to solar UV variability interacting with stratospheric ozone – e.g. http://iopscience.iop.org/1748-9326/5/3/034008 – and this is related to NH temperature – e.g. http://iopscience.iop.org/1748-9326/5/2/024001 – changing polar and sub-polar pressure fields. The resultant atmospheric see-saw drives the now familiar patterns of global atmospheric and oceanic circulation – including sub-polar ocean gyres and thus the rate of upwelling in the north and south Pacific Ocean.

    The connectedness of the Earth system doesn’t seem in doubt – thus the stadium wave. Mann’s analysis doesn’t in fact relate to this – it merely would – if reliable – change the phase of the relationship marginally. It reduces the amplitude of natural variability. We will see.

    • Anthropogenic warming in theory increases positive NAO conditions, which should make for a colder AMO. So the anthropogenic component would be limiting a warm AMO mode, rather than the anthropogenic component being reduced by a natural cold mode of the AMO as Mann suggests.

    • David L. Hagen

      Solar Cycle-UV-AMO Link
      The Met Office now comes out with solar UV influencing the AMO:
      Solar forcing of winter climate variability in the Northern Hemisphere
      Sarah Ineson et al. Nature Geoscience 4, 753–757 (2011) doi:10.1038/ngeo1282

      An influence of solar irradiance variations on Earth’s surface climate has been repeatedly suggested, based on correlations between solar variability and meteorological variables. Specifically, weaker westerly winds have been observed in winters with a less active sun, for example at the minimum phase of the 11-year sunspot cycle. With some possible exceptions, it has proved difficult for climate models to consistently reproduce this signal. Spectral Irradiance Monitor satellite measurements indicate that variations in solar ultraviolet irradiance may be larger than previously thought. Here we drive an ocean–atmosphere climate model with ultraviolet irradiance variations based on these observations. We find that the model responds to the solar minimum with patterns in surface pressure and temperature that resemble the negative phase of the North Atlantic or Arctic Oscillation, of similar magnitude to observations. In our model, the anomalies descend through the depth of the extratropical winter atmosphere. If the updated measurements of solar ultraviolet irradiance are correct, low solar activity, as observed during recent years, drives cold winters in northern Europe and the United States, and mild winters over southern Europe and Canada, with little direct change in globally averaged temperature. Given the quasiregularity of the 11-year solar cycle, our findings may help improve decadal climate predictions for highly populated extratropical regions.

      Note: Supplementary Info

      • Paradigm paralysis is weakening…

      • Stronger negative NAO episodes through the solar cycle typically happen at the local minima in the solar wind strength, which is just after the sunspot minima, e.g. 1997/98 and 2009/10, and often at sunspot maxima too.

    • It is the difference between the ‘detrended’ AMO – e.g. Kaplan et al. [1998] SST data set – and the differenced AMO of Mann. Hence the answer to the wrong question.

    • “The connectedness of the Earth system doesn’t seem in doubt – thus the stadium wave.”

      The characteristics of imputed “stadium waves”–synchronization of chaotic oscillators, physical wave propagation–are by no means necessary to account for the observed coherence of some climate variables in some regions of the globe. That could easily be the result of entirely different mechanisms.

      • Odd. The mechanisms are global in scope – THC, polar annular modes, changes in atmospheric angular momentum, shifts in upwelling. The indices can be seen as chaotic oscillating nodes on a network through which the propagating signal is tracked. These nodes appear to synchronise at critical points in the modern surface temperature record in the model of Tsonis et al.

        I am not sure what you mean by a physical wave. The signal may be propagated through various mechanisms – the wave is a metaphor.

      • Sound science is done explicitly–not by appealing to metaphors. Even more than the authors, you rely on hand-waving references to putative mechanisms, without a shred of empirical evidence indicating strong cross-spectral coherence on a truly global scale. And neither one of you present any cogent physical basis for treating surface temperature as the product of “chaotic oscillating nodes on a network.”

        BTW, in bona fide network theory, nodes are merely connecting junctures; only some networks MAY exhibit oscillatory modes [sic!].

      • These are very real changes in the state of ocean and atmospheric circulation. Ocean SST and THC for instance.

        Defining them as handwaving without data is impossible and insufferable nonsense. There is a limit to how much drilling down into detail is possible with every comment. Some things require some background – an ability to review data – and a willingness to reflect and review. You show none of this.

        ‘The above formula uses the absolute value of the correlation coefficient because the choice of sign of indices is arbitrary. The distance can be thought of as the average correlation between all possible pairs of nodes and is interpreted as a measure of the synchronization of the network’s components. Synchronization between nonlinear (chaotic) oscillators occurs when their corresponding signals converge to a common, albeit irregular, signal. In this case, the signals are identical and their cross-correlation is maximized. Thus, a distance of zero corresponds to a complete synchronization and a distance of √ 2 signifies a set of uncorrelated nodes.’ https://pantherfile.uwm.edu/kswanson/www/publications/tsonis_GRL07.pdf

        The indices can be seen as chaotic oscillating nodes on a network through which the propagating signal is tracked. This is the sense in which Tsonis and colleagues show the system to be analogous to the theory of synchronized chaos – the system behaved like irregularly oscillating nodes on a network. This is a specific type of network – with specific behaviours – and not merely links in some other type of network. You are utterly hopeless.
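For readers following the Tsonis et al. (2007) distance metric quoted above, here is a minimal sketch of how it behaves, using synthetic series rather than the actual climate indices (the function and variable names are mine, invented for illustration):

```python
import numpy as np

def sync_distance(indices):
    """Mean pairwise distance d_ij = sqrt(2 * (1 - |rho_ij|)) over all
    pairs of index series: 0 means complete synchronization, sqrt(2)
    means a set of uncorrelated nodes (the convention quoted above)."""
    n = len(indices)
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            rho = np.corrcoef(indices[i], indices[j])[0, 1]
            dists.append(np.sqrt(2.0 * (1.0 - abs(rho))))
    return float(np.mean(dists))

rng = np.random.default_rng(0)
common = rng.standard_normal(500)  # shared irregular "synchronized" signal
synced = [common + 0.05 * rng.standard_normal(500) for _ in range(4)]
unrelated = [rng.standard_normal(500) for _ in range(4)]

print(sync_distance(synced))     # near 0: nodes converge to a common signal
print(sync_distance(unrelated))  # near sqrt(2) ~ 1.414: uncorrelated nodes
```

The point of the toy example is only that the metric separates a network whose nodes share one irregular signal from a network of independent nodes; it says nothing about which mechanism produces the shared signal.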

      • Showing that the accumulated NAO and AMO are highly coherent scarcely addresses my point about lack of coherence of temperatures on a GLOBAL basis. (That is amply evident in thousands of cross-spectral analyses between various stations and indices across the globe that I have done.) And the presentation of scanty AMOC data, which the authors point out as a decadal-scale phenomenon, is quite irrelevant to questions of MULTI-decadal behavior.

        What is truly hopeless is the prospect of a fruitful discussion with a polemicist who has never done the science, but pretends to have superior comprehension.

  54. Pingback: Things | …and Then There's Physics

  55. Tomas Milanovic

    John S.

    Reliable records an order of magnitude longer would be required to establish the physical claims made under the “stadium wave” rubric. Just because MSSA may be useful in analyzing synchronization of temporal chaos doesn’t mean that the indexed variables truly manifest such behavior.

    The first observation is true. The delayed-embedding techniques work because, as the system is (necessarily) on the attractor, it (necessarily) revisits a neighbourhood of any arbitrary point N times after a certain time.
    A fast qualitative argument (though rigorous mathematical demonstrations exist, e.g. Takens’ theorem) shows that in that case constructing a delayed vector (x(t0-µ), x(t0-2µ), etc.) makes it possible to extract from the single time series the attractor’s properties, such as its dimension, Lyapunov spectrum, etc.

    However, for this to be significant, it is necessary to observe the system long enough that it really visits every neighbourhood of the attractor’s points. The minimum time is called the “return time”, and it is generally known for temporally chaotic systems (e.g. those studied with the SSA method).
    It is not really known for spatio-temporal systems like weather/climate, but I have read several conjectures that place it at around a century.
    If that is true, then a minimum of one century of data starts to give reliable information about the system’s dynamics with the M-SSA method. More than a century is obviously better.
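The delayed-vector construction described above can be sketched in a few lines. This is a generic Takens-style embedding with synthetic data, not anything from the papers under discussion; the function name and parameters are illustrative:

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Stack lagged copies of a scalar series into vectors.
    Each row is (x(t), x(t + lag), ..., x(t + (dim - 1) * lag)) –
    equivalent, up to indexing convention, to the delayed vectors
    (x(t0 - u), x(t0 - 2u), ...) described above."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[k * lag : k * lag + n] for k in range(dim)])

# toy example: a ramp makes the lag structure easy to see
emb = delay_embed(np.arange(10.0), dim=3, lag=2)
print(emb.shape)  # (6, 3)
print(emb[0])     # [0. 2. 4.]
```

From such an embedded matrix one can, given a long enough record, estimate attractor properties (dimension, Lyapunov spectrum), which is exactly where the return-time caveat above bites.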

    The second observation is not true.
    If the data length is long enough and the spatial “indexes” (channels) are correctly chosen, then the eigenvectors are truly a basis for the system’s dynamics, and the largest eigenvalues define the modes/oscillating patterns that dominate the dynamics (because they explain most of the variance). Nothing untrue about that, and the system truly behaves as the largest eigenvalues say.

    For a quantitative treatment and understanding, I again suggest reading carefully the Groth and Ghil 2011 paper I linked above. It also contains many useful references to further papers that allow interested readers to go into more depth.
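A minimal single-channel SSA sketch may help make the eigenvalue argument concrete. With synthetic data (a noisy sine standing in for a multidecadal oscillation), a dominant oscillatory mode shows up as a near-degenerate pair of leading eigenvalues of the lag-covariance matrix, followed by a sharp drop. All names and parameters below are illustrative, not from the papers discussed:

```python
import numpy as np

def ssa_eigenvalues(x, window):
    """Single-channel SSA: eigendecomposition of the lag-covariance
    matrix built from the delay-embedded series. A dominant oscillation
    appears as a near-equal pair of leading eigenvalues."""
    x = np.asarray(x)
    n = len(x) - window + 1
    traj = np.column_stack([x[k : k + n] for k in range(window)])
    cov = traj.T @ traj / n
    evals = np.linalg.eigvalsh(cov)  # eigvalsh returns ascending order
    return evals[::-1]               # flip to descending

rng = np.random.default_rng(1)
t = np.arange(600.0)
x = np.sin(2 * np.pi * t / 60) + 0.1 * rng.standard_normal(600)
evals = ssa_eigenvalues(x, window=120)
print(evals[:3])  # two nearly equal values, then a sharp drop
```

This is also where the record-length caveat enters: with only a cycle or two of data, the leading pair is no longer cleanly separated from the noise floor, which is the substance of the disagreement in this thread.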

    • I fear you miss the point of my comment, which is: the mere employment of MSSA analysis techniques is not sufficient to establish synchronized chaotic behavior in the system. Decades of experience with EOF decompositions of geophysical data shows that the obtained eigenvectors are physically meaningful only rarely: when the system has strong oscillatory modes AND the record-length is entirely adequate to delineate them over more than a few cycles. Otherwise, one gets artifact-ridden results of mathematical interest only. Contrary to the authors’ claims, those vectors are not “filterings” of the data in any rigorous signal-analysis sense of the phrase.

      The problem with the ambitious “stadium wave” conjecture is the general lack of strong cross-spectral coherence evidenced by key indices. Thus the squared coherence of the AMO and PDO is only marginal (<0.5) at multi-decadal frequencies and, despite the claim that the former is the "source" of the "wave," the latter LEADS the former by roughly a quarter-cycle. And there are many regions of the globe wherein the variance of temperature variations is negligible at those frequencies. This is totally at odds with any notion of a PHYSICAL wave, whatever the mechanism of its generation, actually "propagating" around the globe.
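The squared coherence invoked here can be estimated with Welch cross-spectra. The sketch below uses invented synthetic series (not the actual AMO/PDO data) to show the two situations under debate: two series sharing a cycle, even in quadrature, have high squared coherence at that frequency, while an unrelated series does not:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
n = 4096
cycle = np.sin(2 * np.pi * np.arange(n) / 64)  # shared "multidecadal" cycle
a = cycle + 0.5 * rng.standard_normal(n)
b = np.roll(cycle, 16) + 0.5 * rng.standard_normal(n)  # quarter-cycle lag
c = rng.standard_normal(n)                             # unrelated noise

f, coh_ab = coherence(a, b, nperseg=512)  # Welch-method squared coherence
_, coh_ac = coherence(a, c, nperseg=512)
k = np.argmin(np.abs(f - 1.0 / 64.0))     # frequency bin of the cycle

print(coh_ab[k])  # high squared coherence despite the phase lag
print(coh_ac[k])  # near zero: no shared signal
```

Note that the quarter-cycle lag does not reduce the coherence estimate, since coherence involves both the in-phase and quadrature parts of the cross-spectrum; the disagreement in this thread is over what the observed coherence values actually are, not over the definition.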

      • ‘Wyatt (2012) further explored the ‘stadium-wave’ signal, both spatially and temporally expanding the network. A 300-year proxy record revealed a hemispherically propagating signal, albeit with modifications of amplitude and tempo prior to the late 1700s.’

        So you keep saying. I believe the Arctic Eurasian ice shelf was posited as the source of the propagating signal. The signal itself is ubiquitous – found in biology, hydrology and temperatures across the globe. The 20 or so indices used in the 2013 study show that.

      • Prior to the present paper, the Siberian Arctic was claimed to be the source, which is now attributed to the AMO. In either case, we are left with cross-spectral coherences with temperatures in other regions that are insignificant much too often to be termed “ubiquitous.” It requires doing the appropriate analyses, not just reading ambitious academic claims, to recognize their unbearable flimsiness.

      • ‘Here we show that observed multidecadal variations of surface climate exhibited a coherent global-scale signal characterized by a pair of patterns, one of which evolved in sync with multidecadal swings of the global temperature, and the other in quadrature with them.’

        You are full of obvious nonsense on both counts. There is no change simply because the AMO is the focus of someone else’s attention. You were obviously wrong and now claim that Wyatt and Curry changed the focus. There is no ‘coherence’ with the surface temperature by the nature of the propagating signal. There is no suggestion that all of these patterns have a direct influence on temperature.

      • Claiming global-scale coherence and actually demonstrating it are two different things. FYI, cross-spectral coherence, which involves both the in-phase and quadrature relationships, is independent of the phase relationship. Perfect coherence, however, doesn’t necessarily imply “synchronized chaos.”

        The claim that “there is no ‘coherence’ with the surface temperature by the nature of the propagating signal” is patently baseless. And if “there is no suggestion that all of these patterns have a direct influence on temperature,” then what is the point of trying to discern them?

  56. New post at ATTP yapping about moderation policies, and praising Nick Stokes for his “patience”.

    He reminds me of DC, Mike Flynn or FOMBS here, constantly and politely spouting total BS, while others rage at him. But I agree with Mosher: total BS, no matter how politely spouted, is worse than calling somebody out on ignorance, st00pidity, or deliberate deception. Even impolitely.

    But another, more important point:

    FWIW, I went to an interesting talk recently about Dansgaard-Oeschger events, that I may write about in due course. They’re interesting as they appear to be internally forced, and so there is much they can tell us about internal variability.

    Perhaps somebody more familiar with the mythology of “forcing” could correct me if I’m wrong, but isn’t this a complete perversion of the meaning of the supposed word?

  57. Stephen Segrest

    It would be interesting for Mosher (and others) to give us some insight into the use of statistics by Climate Scientists. Does one’s approach to statistics heavily influence research and findings?

    When Mr. Ellison vents about how “Liberal Ideology” has taken over the scientific method, is this below article an example of what he is really talking about?

    http://www.nytimes.com/2014/09/30/science/the-odds-continually-updated.html?smid=nytcore-iphone-share&smprod=nytcore-iphone&_r=0

    • Good timing. A series of primers on the use of statistics by Climate Scientists is underway right now. Feel free to read and participate.
      http://climateaudit.org/

    • Stephen, this might be a good blog for you to visit.

      http://wmbriggs.com/

      • John Smith (it's my real name)

        captd
        thank you for putting up the wmb link
        his definition of the “hiatus” brings a lot of clarity for me
        the Feynman clip is just the best
        makes my day to see real science still lives
        enjoyed your site as well

    • ‘Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.’ http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/

      The Borg collective cult of AGW groupthink space cadets is something else entirely.

    • Besides which – I would call it fringe progressive extremism rather than liberal ideology. Classic liberal ideology emerges from the scientific enlightenment – it is far from that.

  58. Maybe more kids should start reading the newspaper. Here’s an example of the sort of science offered by CK-12 to teach the kids about global warming:

    5 Things

    You hear a lot about climate change. Maybe you hear that it’s controversial. But it’s not among scientists. Expert scientists agree that climate change is happening and it’s caused by human activity. This map shows how much warmer the decade 2000 to 2009 was compared with the 1951 to 1980 average.

    News You Can Use
    • The Intergovernmental Panel on Climate Change (IPCC) releases a report every several years.
    • More than 200 expert scientists contributed to the 2,200-page 2013 report.
    • SciShow makes five major points about climate change based on the report:
    1. Warming is unequivocal.
    2. The media reported that there had been a pause in global warming from 1998 to 2011, but there was no pause.
    3. The levels of greenhouse gases in the atmosphere have not been this high for 800,000 years.
    4. Scientists are now 95% to 100% certain that humans are causing global warming.
    5. The future will see more frequent temperature extremes, mostly of heat, less of cold.

    • Physical evidence of climate change has been measured by looking at Arctic temperatures over the years. Notice the increase in temperature in the graph below…

  59. Note to TonyB
    Big G swapped the August and September CET daily max
    http://www.vukcevic.talktalk.net/CET-dMm.htm
    this time I will not forecast His next move.

  60. Early days, but two current indicators point to a warmer than average DJF.
    a) most of the year, daily max & min were above average, with the exception of August.
    b) it looks as if the Iceland Low may move further south-west, pulling in some warm air from the Azores.
    Volcanic eruptions are well above average in the last few months; if Kamchatka blows one of its 3 or 4 regulars at December’s end or in early January, it will override all of the above; add a few (solar) CMEs at the time, strongly ionising the Arctic’s stratosphere (eventually tearing the polar vortex apart 2–3 weeks later), and winter will get much worse.
    ergo, don’t rush to your bookie.

  61. The first link is the US, the second link is a global graph. I can only see the second, and it looks like the text labeling it as the US is for the global graph, but it is actually for the first (which appears just as a link in my view).