Spatio-temporal chaos

by Tomas Milanovic

There are scientists who equate chaos with randomness. I’d put that category at 90%.

There are scientists who equate chaos with Lorenz. They have seen the butterfly attractor picture at one time or another. They know that chaos is not randomness, but not much more. I’d put that category at 9%.

There are then scientists who know what chaos is and really understand it. I’d put that category at 1%, and much less among climate scientists.

The chaos one could and should be talking about, as far as climate is concerned, is spatio-temporal chaos.

What is known as chaos theory and often associated with Lorenz was actually discovered by Poincaré 100 years ago, and it is TEMPORAL chaos. It is a paradox, but chaos was first discovered by Poincaré in a Hamiltonian system which had been considered for centuries as the perfect deterministic clockwork: celestial mechanics. Poincaré proved that a gravitational 3-body system is chaotic and unpredictable. Actually it is not even predictable statistically (e.g. you cannot put a probability on the event “Mars will be ejected from the solar system in N years”).

Scientists were busy discovering relativity and QM (Poincaré too), and these results were ignored for 60 years. Then Lorenz found chaos in fluid dynamics, and temporal chaos theory started slowly developing.

The most important point that everybody who wants to understand something about temporal chaos theory should grasp is that it is all about geometry in a finite dimensional phase space. In other words, it deals mathematically with systems of non linear ODEs where all unknowns are coordinates of the phase space, and the state of the system is perfectly defined by a point P(t) in the phase space, given by its coordinates (degrees of freedom). If this rings a bell about Hamiltonian mechanics, good, because it should.

All the “advanced” concepts (bifurcations, shifts, attractors, fractals) are children of temporal chaos theory. The simple rule of thumb is that if there is only time dependence, then the chaos can be explained by temporal chaos theory. Temporal chaos theory doesn’t apply at all to the problems that bring us here, and here is why.

There is something much more complicated and qualitatively, radically different from temporal (Lorenzian) chaos: spatio-temporal chaos. There is no established spatio-temporal chaos theory. It is cutting edge, and only a few people have worked on it, for only a few decades. Spatio-temporal chaos deals with the dynamics of SPATIAL PATTERNS. Mathematically we deal with fields described by non linear PDEs; the Navier-Stokes equations are an example.
Spatio-temporal chaos is as far from temporal chaos theory as QM is from classical mechanics.

The biggest difficulty comes from the fact that we have lost this convenient finite dimensional phase space. That’s why almost nothing transports from temporal chaos to spatio-temporal chaos. There are no attractors, bifurcations and such. The whole mathematical apparatus has to be invented from scratch, and it will take decades. To know the state of the system, we must know all the fields at all points: an uncountable infinity of dimensions. As the fields are coupled, the system produces quasi standing waves all the time. A quasi standing wave is a spatial pattern that oscillates in the same place, repeating the same spatial structures in time. However, in spatio-temporal chaos these quasi standing waves are not invariants of the system, unlike the attractors, which are the invariants of temporal chaos. They live for a certain time and then change or disappear altogether.
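The simplest caricature of chaos with spatial extent is a coupled map lattice: a chaotic map at each site, diffusively coupled to its neighbours. The sketch below (the site count, coupling strength and map parameter are arbitrary illustrative values, not a model of any physical system) shows a field that stays bounded yet never settles into a fixed spatial pattern:

```python
import numpy as np

# A coupled map lattice: the state is a whole field x_i(t), one value
# per "point of space", rather than a single point in a
# low-dimensional phase space.
rng = np.random.default_rng(0)
N, eps, r = 256, 0.3, 3.9      # sites, coupling strength, map parameter
x = rng.random(N)              # random initial field in [0, 1]

def step(x):
    f = r * x * (1.0 - x)      # local chaotic logistic dynamics
    # diffusive coupling to nearest neighbours, periodic boundaries
    return (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

for _ in range(1000):
    x = step(x)
# The field remains bounded in [0, 1] but keeps a non-trivial,
# ever-changing spatial structure.
print(x.min(), x.max())
```

Transient spatial structures in such lattices come and go, a toy analogue of the quasi standing waves described above.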

You can see spatio-temporal chaos if you look at a fast mountain river. There will be vortices of different sizes at different places at different times. But if you observe patiently, you will notice that there are places where there almost always are vortices, and that they almost always have similar sizes: these are the quasi standing waves of the spatio-temporal chaos governing the river. If you perturb the flow, many quasi standing waves may disappear. Or very few. It depends.

Weather and climate are manifestations of spatio-temporal chaos of staggering complexity, because there are not only the Navier-Stokes equations but many more coupled fields. ENSO is an example of a quasi standing wave of the system.
Of course I hope that the reader now knows that ENSO cannot be explained by something depending on time only (like indexes, time series and such), because if it could, we would have classical temporal chaos where space doesn’t matter, and we would have solved the problem long ago. But as ENSO is a pattern resulting from the interaction of ALL fields in the system, it vitally depends on how these fields interact in space. That’s why all interpretations of ENSO (and other multidecadal quasi standing waves) are failing: people are using functions (series) that depend on time only, which clearly cannot encode all the spatial interactions.

There are a few exceptions, like Tsonis. I have written a long post in the Tsonis thread, so I won’t repeat it. But Tsonis makes a step towards spatio-temporal chaos by considering that there are several interacting waves, which is equivalent to introducing some dose of spatial interaction. Of course, as Tsonis considers only 5 waves, it is a rather rough way to discretize space over the whole planet, but it is a beginning.

The best way to imagine a full spatio-temporal chaos theory is to imagine that there is a different chaotic oscillator (like the Lorenz butterfly) at every point of space (so there is an infinity of them) and that they are all coupled strongly with each other in a non linear and time dependent way. I am not saying that there can’t be some simplifications, but nobody knows today. The only thing I am reasonably sure of is that there will be no progress in understanding, be it via chaos or not, as long as people insist on the crutches of functions/series that are only time dependent.

That’s why it is completely incorrect to say that climate is a boundary value problem.

To illustrate what the REAL problem of climate dynamics is, I have posted in the Tsonis thread a link to this paper: http://amath.colorado.edu/faculty/juanga/Papers/PhysicaD.pdf

Despite the fact that this paper finds a MAJOR result and is the right paradigm for a study of spatio-temporal chaotic systems at all time scales, so also for climate, I suspect that nobody has read it.
And probably only a few would understand the importance of both the result and the paradigm. Of course the climate is more difficult than even a network of chaotic oscillators because, among other things, the coupling constants vary with time and the uncoupled dynamics of the individual oscillators are not known.
Also the quasi-ergodic assumption made in the paper is not granted for the climate.

Yet even in the general case it appears quite clearly that the system doesn’t follow any dynamics of the kind “trend + noise” but on the contrary presents sharp breaks, pseudoperiodic oscillations and shifts at all time scales. Of course the behaviours in the case when the coupling constants vary will be much more complicated, and they are not studied in the paper.

Unfortunately people working on these problems are not interested in climate science, and those working in climate science are not even aware that such questions exist, let alone have adequate training and tools to deal with them.
Concerning these paradigm issues, they obviously belong to the unresolved questions and, as far as I am aware, it is only on blogs, among others on your blog, that they are discussed.

TM’s Summary 2/16

Main points for the summary:

1)

I commend Jstults for excellent and relevant contributions. He has a good knowledge of the literature, but most importantly he is able to manipulate chaotic ODEs. SpenceUK also added good contributions. Dan Hughes’ blog on numerical solutions of the Lorenz equations is a good read.

2)
This is NOT about numerical models. This cannot be about numerical models.
I hope that by now most have understood that from the mathematical point of view temporal chaos theory is about solutions of non linear ODEs and spatio-temporal chaos about solutions of non linear PDEs.
The former, which is much older than the latter (Poincaré 100 years ago on Hamiltonian conservative systems and Lorenz 50 years ago on 2D fluid dynamics), is a good introduction to important concepts and mathematical tools, but of little to no help in climate matters.
As numerical models cannot find solutions of any system of non linear ODEs or PDEs because the system is simply spatially too huge and all the equations are not known anyway, they have no relevance to what I discuss here.
If I attempt to characterise what they are in my eyes, I would say that they are simulators of the evolution of the system under approximate constraint of conservation laws.
But as R.Hilborn has rightly written “The dynamically allowed space is much smaller than the space that is allowed by the conservation laws”.
Btw I recommend R.Hilborn’s excellent textbook (http://www.amazon.com/Chaos-Nonlinear-Dynamics-Introduction-Scientists/dp/0198507232/ref=cm_cr_pr_product_top) for anybody who would like to go a bit farther than the basics of the non linear dynamics.
From that it follows that whatever states the numerical simulation computes, we cannot be sure that they are dynamically allowed. Many of them may very well be just plausible states of the fields, but the system will never visit them because they are dynamically forbidden.
This poses the question of the metrics of the states (how do we define a state of the system so that this definition leads to a meaningful metric), which is another debate.

3)
There is a fundamental difference, both mathematically and physically, between temporal chaos and spatio-temporal chaos. Judith rightly notes that few climate scientists have knowledge about temporal chaos, let alone spatio-temporal chaos. Even Tsonis and Swanson are not really experts in chaos theory, but their paradigm (coupled oscillators) is identical to the spatio-temporal chaos paradigm. That is why their work is qualitatively different from the “orthodox” school.
My personal opinion is that numerical models (GCMs) cannot give meaningful support or development to their work, but I do not know if they believe it themselves. For that we’d need their opinion.

4)
There is still the old school that continues to equate chaos with randomness. I am not sure that they are willing to learn modern physics so it is certainly not blog discussions that would convince them.
Characteristic of this school is the following quote:
“But as soon as you add any sort of noise, your perfect chaotic system becomes a mere stochastic one over long time periods, and probabilities really do apply.
A nice review of the relationships between chaos, probability and statistics is this article from 1992:
“Statistics, Probability and Chaos” by L. Mark Berliner, Statist. Sci. Volume 7, Number 1 (1992), 69-90.
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011444

I suspect that these people didn’t really read the link.
The part relating to stochasticity admits that it is merely a qualitative overview and references the fundamental papers among which Ruelle and Eckmann.
It has apparently escaped the author of the quote that I linked the R&E paper in the very first post, and he certainly didn’t read it.
What Berliner’s summary says is this:
IF we have a temporal chaotic system and IF this system is ergodic, THEN a stochastic interpretation is possible.
Unfortunately neither of the ifs is valid for weather/climate.
Despite this rather obvious point, these people still talk about “perturbations”.
Actually the chaos doesn’t exist for them because there are “perturbations”.
This is a complete misunderstanding of chaos theory.
There are no “perturbations” inside a chaotic system – a solution of the dynamical equations is what it is and all the “perturbations” are already accounted for.
The system cannot be decomposed in a linear way into a sum: nice, smooth, if possible deterministic solution + noise or “perturbation”.
Of course the external energy supply which is necessary to produce chaos is not necessarily constant. It may even be considered random. This doesn’t imply in any way that the system suddenly becomes random too, and none of the quoted papers says anything approaching that.
A kind of randomness, or more precisely the existence of an invariant (with respect to initial conditions and to time!) probability distribution of the states, exists only for ergodic systems.
But the ergodic property is NOT a given.
Even in temporal chaos some systems are ergodic and some are not.
In spatio-temporal chaos the question is fully open especially as a complete ergodic theory of spatio-temporal systems doesn’t exist yet.
In any case the ergodicity has nothing to do with “perturbations” or variations of the external energy fluxes.
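The “IF ergodic THEN a stochastic interpretation is possible” point can be seen in one of the rare cases where everything is known exactly, the fully chaotic logistic map: its invariant density is 1/(π√(x(1−x))), whose mean is exactly 1/2, and ergodicity means the time average along almost any single orbit reproduces that ensemble mean regardless of the initial condition. A toy demonstration (not, of course, a claim about weather/climate, where neither “if” is established):

```python
# The fully chaotic logistic map x -> 4x(1-x) is ergodic with a known
# invariant density rho(x) = 1 / (pi * sqrt(x * (1 - x))), mean 1/2.
# Ergodicity: the TIME average along one orbit equals the ensemble
# average over rho, independently of where the orbit starts.
def time_average(x0, n=200_000, burn=1000):
    x = x0
    for _ in range(burn):              # discard the transient
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

for x0 in (0.1234, 0.6789):
    print(time_average(x0))   # both close to the ensemble mean 0.5
```

Two very different initial conditions give essentially the same time average; that initial-condition independence is precisely what the ergodic hypothesis buys, and precisely what is NOT a given for spatio-temporal systems.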



436 responses to “Spatio-temporal chaos”

  1. I felt privileged to have read this on the earlier thread Tomas. Very helpful to know of the distinction between temporal chaos theory and spatio-temporal. And striking that the connection of the latter with climate science is only being made on blogs. Thank you.

  2. the system doesn’t follow any dynamics of the kind “trend + noise” but on the contrary presents sharp breaks , pseudoperiodic oscillations and shifts at all time scales.

    Well said! I’ve thought of climate as a driven oscillator, with multiple drivers operating at different frequencies. Obviously, any portion of the atmosphere/ocean is influenced by its neighbors, and they by theirs, etc. I hadn’t considered that “coupling constants vary with time”.

    If Earth had no moon, and rotated more slowly(or not at all), predicting climate would be a duck shoot. These forces keep stirring the ocean and atmosphere.

  3. Thanks, Tomas, for your effort to educate.

    I wish you well, and will be interested in seeing how readers respond.

    • Is there something to be gleaned from a link between spatio-temporal chaos theory and entropy? I wonder if climate might actually be a good premise for such a study?
      Could entropy be the missing link?

  4. Tsonis’ teleconnection paper gave me a very rough idea of the spatio-temporal relationships. Your post and referenced paper (I seem to remember a couple of attempts at trying to make my way through it) prove to me that I should stick with fishing. I will continue to try to grasp the math, but I think I have the concept.

  5. I am not going to pretend to understand this, but I believe this is about tipping points. Anyway, I think if you change the forcing of the climate system, you are more likely to encounter such a tipping point, than if you did not change the forcing.

  6. intrepid_wanders

    It would be nice to think in PDEs and matrices at the same time. I get the concept, but my maths with the two always “DIV/0?” ;)

  7. For those of us not soaked in the jargon, what are ODEs and PDEs?
    Anyhow, the take-away is that the GCMs are even stupider and more useless than we thought.

    • A PDE is a partial differential equation, such as the Navier-Stokes equations of fluid motion. They are differential equations in the x, y, and z dimensions used in GCMs. They were used in Edward Lorenz’s early convection model – to re-discover chaos theory. So the climate models are themselves temporal chaotic dynamical systems.

      ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ http://www.pnas.org/content/104/21/8709.long

      An ODE is a type of lyrical verse. A classic ODE is structured in three major parts: the strophe, the antistrophe, and the epode.

    • Not stupider, but useless, certainly.

      The problem is in the argument, used quite extensively to lend credibility to projections in front of a public not used to numerical modeling of chaotic PDEs, that GCMs are computer implementations of first physical principles, which supposedly makes the models inherently “solid”.

      This is false, as engineers working in Computational Fluid Dynamics quickly learn (the fact that many engineers working with numerical models are skeptics is quite telling in itself).

      The “chaotic” nature of the equations derived from first principles means that they CANNOT be solved. What can be solved are approximations (either deliberate, or implicit in the numerical technique used) of those equations. And those approximations are OK in some cases and regimes, but not in others.

      The type of simulation the GCMs attempt is really cutting edge: large lengthscales, 3D, large timescales, coupling of a lot of different physics (radiation, phase change, convection, two fluids, rotating frame…). This ensures that the validity of the approximations has not been extensively tested. So the modelers are in the dark.

      At this point, such models are inherently suspect and cannot be assumed to produce more than garbage, regardless of the first principles they are based on.

      But they could be proven useful by extended experimental validation. I have not seen the kind of validation I expect in these cases, because (my modeler’s view on Tomas’ post) fitting 1 indicator on a short time scale (T in the 20th century) means zilch for this class of models.

      One cannot even say what class of indicator over what time and lengthscale could be predicted!

      And there is the validation problem: proxies for a longer T history are not really accurate; they can provide a kind of smoothed average… maybe, and even that is in dispute. So there is no way to check the dynamics of the models on long timescales at the moment.

      There is an even more severe problem, which makes me think that GCMs are not in the “maybe useful” class but quite certainly in the “no more useful than dice rolling” class: the dynamics at short timescales and lengthscales are not reproduced, even when the model is in theory able to catch them (time and grid resolution). For example, the semi-periodic ocean circulations are not there, noise in the model is not similar in amplitude or nature to noise in the experiment, and basically, besides global T (which has been more or less deliberately fitted), the fit with other measurable indicators or qualitative climate features is very poor.

      Given those observations, the capacity of such models to give useful T predictions must be considered extremely poor, almost regardless of the fit to T we had over the 20th century.

      I am, in fact, much more confident in trend-based phenomenological models, or trend-adjusted simple 1D models. They are not so accurate, but they do not pretend to be something they are not…

      • For example, the semi-periodic ocean circulations are not there, noise in the model is not similar in amplitude or nature to noise in the experiment, and basically, besides global T (which has been more or less deliberately fitted), the fit with other measurable indicators or qualitative climate features is very poor.

        You’re saying, after all that fuss about Pakistan and Queensland floods having been caused by global warming, that the GCMs don’t get a hundred years of spatio-temporal precipitation absolutely right? What a shock that must be to everyone.

  8. GLEE!

    There are of course many questions, and not everything is as.. cut and dried, say as set out.

    But this is where the action is.

    Thank you sir!

    So, questions..

    Does scale affect problems in spatio-temporal chaos the same way as in temporal chaos?

    (i.e. Doesn’t selection of scale determine whether the subject can be treated as chaotic or not? Can some scales of space and time predictably be treated as non-chaotic, or temporally chaotic only, while other scales containing the same regions must be treated otherwise? Of course, if a larger spatial or shorter temporal scale is the chaotic one, then it hardly matters about the smaller space or longer time scale, when it might mean the whole shebang is wiped out by a – non-cataclysmic, non-alarming, but mathematically fascinating – event.)

    I mean, using this description, one can prove that actual coin tosses do not follow the classical coin toss model of probability as follows.

    Consider the multiple possibilities of unfair coins, unexpected externalities that could affect outcomes, skilled coin-toss experts and their chaotic human whims, social engineering of observers by dishonest actors and their chaotic whims, observational errors and the whims of observers: one would have to call actual coin tosses a spatio-temporal chaos model, in particular if one decides beforehand to do what one can to generate that outcome.

    Not that it’s inaccurate to come to this conclusion..

    But try using that argument to dissuade anyone from using the classic coin toss model in most situations.

    • My feeling is that there MIGHT be scales at which certain properties are well-behaved, but this must be experimentally and numerically investigated. What is currently done is that it is ASSUMED that it all averages out at large space/time scales. There is no proof but hand-waving that this is true. If oscillations like the PDO are part of the spatio-temporal chaos (and/or driven by small perturbations from the sun or something), then the warming after 1980 could be largely natural. This has never been disproven but is simply assumed away by the climate modelers.

      • Craig Loehle

        Thank you sir.

        As you may be able to tell, my own feelings are more optimistic on the strength of the Chaos Theory tools, the validity of using scale to distinguish chaotic ranges from turbulent or orderly, and that what holds in Mathematics holds in Physics, where interpretation is correct.

        This is no less true for Chaos Theory than for Differential Calculus.

        Rejecting Mathematical proofs that also happen to well match observations simply because one has faith in a non-mathematically demonstrable hypothesis doesn’t ring of science to me. A Physicist calling Mathematics hand-waving.. also has a funny ring to me, too.

        Still, I’m quite agreeable to the assertion that interpretation remains difficult and contentious at this time, and for decent enough reasons.

  9. Why is this important and relevant? Why are silly little climate wombats (muddle headed) not helpful?

    This is not an academic exercise on the leading edge of discovery – it could well stay in ivory towers, and I suspect Tomas would be happy to remain there (as I would be in my little hydrological world), if not for the critical message that it has for the world. Abrupt and violent climate change is almost the norm for the planet. Abrupt shifts in Earth systems were observed many times in the last century. Many variables – rainfall, sea surface temperature, fisheries, cyclone frequency – all are observed in empirical evidence – hard data – to change abruptly and for lesser or longer periods.

    The Tsonis et al paper in 2007 used a network model to show chaotic interactions of a few modes of climate action. It is very humbling indeed that a model with all the complexity of a toy is the best numerical approach to this we have seen. The predictability (in the understood sense) of a spatial-temporal dynamical chaotic climate is zero.

    This I repeat emphatically is not academic – we are for instance in a globally cooler mode for perhaps another decade or 3. The reasons for this are blatantly obvious in the Pacific Ocean – but they stem from an abrupt climate shift that occurred in 1998/2001. The implications for the politics of (please God sensible) carbon reduction are blatantly obvious as well. So let’s show you that this is real world and then scare you a little.

    ‘Researchers first became intrigued by abrupt climate change when they discovered striking evidence of large, abrupt, and widespread changes preserved in paleoclimatic archives…Modern climate records include abrupt changes that are smaller and briefer than in paleoclimate records but show that abrupt climate change is not restricted to the distant past.’ http://www.nap.edu/openbook.php?record_id=10136&page=19

    ‘Most of the studies and debates on potential climate change have focused on the ongoing buildup of industrial greenhouse gases in the atmosphere and a gradual increase in global temperatures. But recent and rapidly advancing evidence demonstrates that Earth’s climate repeatedly has shifted dramatically and in time spans as short as a decade.’ http://www.whoi.edu/page.do?pid=12455

    ‘The research suggests that once temperature rises above some threshold, adverse weather conditions could develop relatively abruptly, with persistent changes in the atmospheric circulation causing drops in some regions of 5-10 degrees Fahrenheit in a single decade. Paleoclimatic evidence suggests that altered climatic patterns could last for as much as a century, as they did when the ocean conveyor collapsed 8,200 years ago, or, at the extreme, could last as long as 1,000 years as they did during the Younger Dryas, which began about 12,700 years ago.

    In this report, as an alternative to the scenarios of gradual climatic warming that are so common, we outline an abrupt climate change scenario patterned after the 100-year event that occurred about 8,200 years ago. This abrupt change scenario is characterized by the following conditions:

    Annual average temperatures drop by up to 5 degrees Fahrenheit over Asia and North America and 6 degrees Fahrenheit in northern Europe
    Annual average temperatures increase by up to 4 degrees Fahrenheit in key areas throughout Australia, South America, and southern Africa.

    Drought persists for most of the decade in critical agricultural regions and in the water resource regions for major population centers in Europe and eastern North America. Winter storms and winds intensify, amplifying the impacts of the changes. Western Europe and the North Pacific experience enhanced winds.’
    http://www.mindfully.org/Air/2003/Pentagon-Climate-Change1oct03.htm

    Will this latter scenario happen? Probably not. Probably we will continue with the milder and less durable shifts seen in the last century. But if we cannot predict that abrupt and violent change won’t happen – it is a quandary.

    • “Why is this important and relevant? Why are silly little climate wombats (muddle headed) not helpful? This is not an academic exercise on the leading edge of discovery – it could well stay in ivory towers”

      These “academic exercises” are very important and relevant because they show that whoever runs the climate science show hasn’t the foggiest clue what kind of system complexity they are dealing with. These academic exercises give clues about what kind of instrumentation needs to be deployed in order to correctly and meaningfully characterize and understand the global behavior of the climate system.

      • A metatheory of climate will not have practical application any time soon – that’s not a reason not to pursue it.

      • I agree completely.

        Whilst an attempt to apply chaos theory to climate may not yield anything useful in the short term, it seems to me to be by far the most likely course of inquiry to eventually make sense of the vast array of interactive elements within such an apparently chaotic system.

        It also has the huge advantage of being non-partisan.

        Surely the reams of discussion on this blog are ample proof that a lateral response to the intricacies of climate volatility would actually be a breath of fresh air.

    • You call this change that you date from the late 1990’s abrupt.
      If that is the standard of abrupt change, then is this abruptness historically significant at all to how people and eco-systems interact in a historical sense?
      Is anything unusual occurring on a scale of significant time?

      • There is an interesting and potentially important abruptness in the satellite temperature record. The satellite temperature profile does not show the steady sort of warming that the area averaged surface statistical models show (the kind of steady warming that supports AGW). There was basically no warming in the satellite record prior to the big 1998-2001 ENSO cycle. Nor has there been any warming afterward. However, the flat trend line afterward is at a higher level than the flat trend line before. This is basically an abrupt step function, where the step is obscured by the ENSO.

        The temperature profile is flat-ENSO-then flat again but warmer. It is all quite mysterious, this abrupt change, which seems to be related to the big ENSO in ways we do not understand. It also, in my view, falsifies AGW because this one-time step up in temperature is completely inconsistent with gradual GHG warming, and it is the only warming we see.

      • The biological changes are interesting – chinook salmon in North American streams, sardines in Monterey Bay and phytoplankton in the central Pacific not seen in such abundance since the 1970’s. It is a result of strong upwelling of nutrient rich water in the eastern Pacific.

    • Curb poetry? I don’t think so. It was actually a quote from Wally Broecker.

      We have first principles. In chaotic systems small changes can accumulate until they precipitate abrupt change that is wildly out of proportion to the initial impetus. After that – the estimation problem emerges. If you see the Tim Palmer quote somewhere here – only estimates in terms of probability density functions are available from the Lorenzian Meteorological Office.

      In the ordinary course of risk management – the low probability high risk events are one end of the spectrum that we need to watch. For instance – a big water storage is designed for a 10,000 year storm. If it was not – you could find yourself up to the neck in a whole world of hurt.

      We have just established that prediction is futile. Now you and tallbloke are still making predictions that it won’t happen, based on linear considerations. As a Chief Hydrologist I demand logical consistency.

  10. Tomas, I think the picture you have painted is too depressing. Yes, high theoreticians love PDEs and associated infinite-dimensional spaces. Some pure mathematicians love to consider infinite translationally-symmetrical domains (or begin an analysis from zero viscosity) because they think it gives them simpler equations and possibly simpler solutions. It is a delusion.

    Yes, Navier-Stokes Equations (NSE) are simple and nice in form, but they have formal mathematical difficulties with existence and smoothness of solutions. Reality, however, seems to be quite less complicated. The matter is that fluids always have a certain dissipative term, viscosity, no matter how big or fast the flow is. Viscosity guarantees smoothness and finiteness of spatial scales if we agree to look at fluid motion only after some (short) initial time interval. Therefore any real flow can be decomposed into some _finite_ number of approximating functions, like Galerkin functions (Claes, helloo!). No matter how harsh the initial conditions could be imagined by mathematicians, in reality the motion will fall very quickly onto this generalized “center manifold” spanned by these functions (Tsonis climate indexes?). Therefore, the spatio-temporal motion in effect gets transformed into finite-dimensional dynamics of _amplitudes_ of these spatial “Galerkin functions”. The amplitudes are now governed by ODEs (Ordinary Differential Equations). As such, real systems do have bifurcations and attractors, but maybe not quite structurally stable. At least it is commonly recognized these days that large fluid systems have some sort of intertwined remnants of simpler attractors (“oscillions”), and the phase space trajectory irregularly “wanders” from one attractor ruin to another, which is called “chaotic itinerancy.”
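    The amplitude-reduction schema described above (a field projected onto a finite set of spatial functions, whose amplitudes then obey ODEs) can be sketched with a Fourier-Galerkin truncation of the viscous Burgers equation; the resolution, viscosity value and crude forward-Euler integration below are arbitrary illustrative choices, and the Lorenz system itself is famously a 3-mode Galerkin truncation of thermal convection.

```python
import numpy as np

# Fourier-Galerkin truncation of the viscous Burgers equation
# u_t + u u_x = nu u_xx on a periodic domain: the infinite-dimensional
# PDE is projected onto N Fourier modes, leaving a finite system of
# ODEs for the mode AMPLITUDES a_k(t).
N, nu = 64, 0.05
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N) * 1j        # spectral derivative factors
a = np.fft.fft(np.sin(x))                    # initial mode amplitudes

def rhs(a):
    # Pseudo-spectral evaluation: nonlinearity in physical space,
    # derivatives and dissipation in spectral space.
    u = np.real(np.fft.ifft(a))
    ux = np.real(np.fft.ifft(k * a))
    return -np.fft.fft(u * ux) + nu * k**2 * a

dt = 0.001
for _ in range(2000):                        # integrate to t = 2
    a = a + dt * rhs(a)                      # forward Euler, for brevity
u = np.real(np.fft.ifft(a))
print(u.max())   # the sine steepens, then viscosity smooths and damps it
```

    As argued above, viscosity is what makes such a finite truncation honest: the dissipative term damps the high modes, so the dynamics effectively lives on the amplitudes of a finite set of spatial functions.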

    This was the general schema of things when I left this field about 16 years ago. If there are any new significant developments in this area since, I would appreciate pointers and links.
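    The finite Galerkin picture sketched above can be made concrete on the simplest dissipative PDE. The toy below is my own illustration, using the 1-D heat equation rather than the NSE: projecting u_t = ν u_xx onto sine modes collapses the PDE into a handful of amplitude ODEs, and the viscosity-like term visibly kills the high modes fastest, which is why a finite truncation suffices after a short transient.

```python
import numpy as np

# Galerkin sketch: project u_t = nu * u_xx on [0, pi] (u = 0 at the ends)
# onto sine modes sin(k x). The PDE collapses to uncoupled ODEs for the
# mode amplitudes a_k:  da_k/dt = -nu * k**2 * a_k.
# Dissipation (nu > 0) damps high-k modes fastest, so only a finite
# number of modes matters after a short initial interval.

nu = 0.1            # viscosity-like dissipation coefficient
K = 8               # number of retained Galerkin modes
dt, steps = 0.01, 500

a = np.ones(K)                  # initial amplitudes a_1 .. a_K
k = np.arange(1, K + 1)         # wavenumbers of the retained modes
for _ in range(steps):          # forward Euler on the amplitude ODEs
    a = a + dt * (-nu * k**2 * a)

# After t = 5 the amplitudes sit near exp(-nu * k^2 * t):
# the k = 1 mode survives, the k = 8 mode is essentially gone.
print(a[0], a[-1])
```

    The same projection idea applied to the nonlinear NSE couples the amplitude ODEs to each other, which is exactly where the finite-dimensional chaotic dynamics comes from.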

    • “large fluid systems have some sort of intertwined remnants of simpler attractors (“oscillons”), and the phase space trajectory irregularly “wanders” from one attractor ruin to another, which is called “chaotic itinerancy.””

      This is also observed in the changing activity of the Sun. The solar cycle length wanders up and down, clustering around two lengths, 10.38 years and 12.01 years. Very rarely is there a solar cycle of around 11.07 years, the average length. However, this apparently chaotic variation may have an underlying cause. I have made a discovery by experimenting with Roy Martin’s planetary alignment database: rather than looking at planetary alignment in the traditional way, along straight lines, I looked at alignments along the curve of the Parker spiral of the interplanetary magnetic field, including the temporal variation in the radius of that spiral caused by changes in solar wind speed. I was able to generate this correlation:
      http://tallbloke.files.wordpress.com/2010/08/rotation-solar-windspeed-adjusted.png
      This gives a good match between planetary motion and the timing of the solar cycles. The amplitude is something we are working on now, building a model based on the physical mechanism for planetary effects on solar activity put forward last August by NASA scientists Wolff and Patrone.

      It seems to me that in at least some cases, apparently ‘chaotic’ changes have underlying causes waiting to be teased out of the data.

      Earth’s weather and climate variation is more complex than Solar variation though, so the untangling of the influencing factors is more difficult.

      • “The solar cycle length wanders up and down, clustering around two lengths at 10.38 years and 12.01 years. Very rarely is there a solar cycle of around 11.07 years, the average length.”

        Interesting that the Earth’s average temperature over much of the past 600 million years has clustered around 22C or 12C, and has almost never stayed at an average between these limits for any length of time.

        It reminds me of a drunk walking down a hallway. Almost always the drunk is leaning against one wall for stability, but occasionally veers to the other wall for no apparent reason.

        Now if I could just get a government grant to study the drunken hallway walk as a mathematical model for climate science …

  11. Very interesting piece, and it helps a bit with my understanding, but I’m afraid it’s probably too advanced for my current knowledge level.

    Any ‘primer’ material that it would be useful for me to read?

    • I will post something a bit less technical with links later today.

      • Appreciated, thank you. I think I follow most of it, but I could certainly do with a ‘foundation’ piece, as it were, to get the basics down correctly. It’s an important subject and I want to ensure I understand it correctly.

  12. Tomas, very interesting article.

    Now that I know who Tsonis is and what his work covers, I can see that the description I gave you in the previous post, ‘Decadal variability of clouds’, is in some respects close but really quite different from his methods. This area of science is almost certain to be the next evolution of weather/climate science, for it is logically the only way for the vast number of interrelated parameters to be compiled into some rough understanding of cause and effect.

    I guess you could view the nodes and links that I was describing as a layer of network above those you see on Tsonis’s site, if each of his spatial graphs (networks) were for one parameter such as wind velocity, humidity, pressure, temperature, evaporation rate, soil moisture, etc., and extended to three dimensions where appropriate. I know it would probably take many linked supercomputers to ever have the power to tackle such a model, but that is the model I carry around in my head.

    Once again, enjoyed your post.

  13. On the level of principles I agree fully with Tomas. Theoretical understanding of chaotic systems with continuous spatial variables is certainly lacking. Furthermore the real system is also stochastic. The chaotic property means that it would not be predictable even without any later perturbations, but the stochasticity means that new perturbations enter all the time from external sources (the system considered is not the whole universe). This makes full theoretical understanding probably even more difficult to achieve.

    The lack of theoretical understanding does not tell how well we can succeed in analyzing the situation with models based on the parts of the theory that we understand and simplified models of the parts that we do not. The climate scientists have proceeded on this basis and claim a fair amount of success in their modeling, while acknowledging weaknesses in other aspects of their analysis. From a practical point of view the question is: how far can we trust the results of models which are theoretically lacking but still have significant skill in many ways? In some details the skill is respectable, but detailed tests cover a short period of time and a limited set of states of the Earth system.

    The complexity emphasized by Tomas makes empirical verification of the models extremely difficult. The transitions observed give evidence that complicated interactions of various factors may lead to dynamics outside the capabilities of all present models, and that the changes may be of significant size. Paleoclimatology gives some limits on how frequent major variations have been, but interpreting the time series is certainly very difficult, more difficult than the well known controversial studies assumed. The past is also incapable of telling much about the influence the rapidly increasing levels of CO2 may have on future dynamics.

    I return to the role of stochasticity. While it is an additional complication in the setting and may make a full theory even more difficult to reach, it may very well be an explanation for better predictability and smoother dynamics of the climatically interesting results (averages and ranges of variability). The well known example of Lorenz is true in a discretized deterministic nonlinear atmospheric model. Such a model has temporal chaos, where one can really see that one single very small variation in input grows to global dimensions. If the model were modified to a stochastic model by adding continuous small stochastic disturbances, the effect of one single change in the initial conditions would disappear rapidly. The butterfly in the Amazon would have an equally insignificant influence, as we all think intuitively. Switching from temporal deterministic chaos to spatio-temporal chaos might have the same consequence without stochastic perturbation, or it might not. Here my knowledge is totally lacking, but I doubt whether anybody would be able to give a firm answer.

    Reaching a level of models that could describe well the chaotic transitions is likely to be extremely difficult. A more realistic goal is giving some limits on the strengths and frequencies of transitions and oscillatory behavior, but even here the situation is far from satisfactory judging from the limited information that I have seen, but I admit that I may have missed something essential.

  14. I don’t know what kind of “scientists” you hang around with. I am a scientist, and I do not know any other scientists who would equate chaos with randomness.

  15. Milanovic,
    Interesting and useful analysis.
    Sometimes what appears to be chaotic behaviour is just part of an orderly longer-term process; engineers often refer to it as noise.
    In solar science circles, during the last two years, there has been a raging debate about sunspot activity, and why NASA’s top people got it so wrong. Despite all the computing power and sophisticated theoretical models, they got it wrong (it looks like) by a factor of 3 (150+ instead of a mere 50, or possibly less). This was not some long term prediction, but just 4-5 years ahead.
    You mention the 3-body gravity problem. In 2003, prompted by my daughter’s homework, I wrote a simple equation showing the effect of two celestial bodies on a third (in this case Jupiter and Saturn on the Sun). Although it was denounced by NASA’s top man Dr. David Hathaway (who is now unfortunately the subject of relentless mockery), it has proven surprisingly accurate. The graph compares the actual daily measurements of the solar magnetic field with a simple numerical calculation. It gives one of the highest correlations (R^2 = 0.9285) in the natural sciences for two (still considered unrelated) processes:
    http://www.vukcevic.talktalk.net/LFC2.htm
    It tells directly what the Sun is doing and why, based not on gravity but on magnetism, another fundamental force, in certain respects very much like gravity.
    Why could this be of interest to climate science?
    Well, if there is a direct link between solar activity and global climatic events, then the next 10-20 years may take an unexpected turn.

    • Hi

      Chaotic they may be, but they are in principle deterministic. You prompted me to return to this Lockwood et al paper – http://iopscience.iop.org/1748-9326/5/3/034008/fulltext

      I was looking for a correspondence of magnetic flux and solar UV – see Fig 5 – it seems to be there (but I wonder why). There is of course direct warming of ozone by UV in the middle atmosphere and a correlation of solar activity with ENSO. Although the latter may well come under the crimes against data act. There may however be a more direct link between solar UV and the Southern Annular Mode (SAM) – an index of sea level pressure in the Antarctic. That seems an easy enough task.

      A negative SAM pushes the tracks of storms spinning off the polar vortex further north, piling cold water off the South American coast. This is of course the region of the Humboldt Current – the thermodynamic origin of ENSO. Is this anything like what you had in mind?

      If solar UV drives ENSO – as I think it must – I can see long term planetary cooling.

      Robert

      • Hi CH
        There are two major factors in global climatic changes (and I consider CO2 to be a minor one, smaller even than the UHI effect):
        – direct Sun-Earth link (TSI, electromagnetic, UV and particle radiation)
        – Ocean heat storage (a long-term integration process) and distribution (ocean currents)
        The views of solar scientists (including Mike Lockwood) are constrained by the theories of their 1950s hero Eugene Parker, which the latest discoveries often bring into question.
        I have an ongoing conflict with that fraternity (they have labelled me a cyclomaniac, a man of superior ignorance (?!), a pseudo-scientist, and even a danger to society, while I am just an ordinary electronic communications engineer). One of the major bones of contention is not only the solar equation (in the above post) but this direct Sun-Earth link, as plotted here:
        http://www.vukcevic.talktalk.net/AllvsVuk.htm
        On the Sun-Earth link I wrote two short web articles, one relating to the Arctic and another to the Equatorial Pacific.
        http://www.vukcevic.talktalk.net/NFC1.htm
        http://www.vukcevic.talktalk.net/LFC20.htm
        Re UV: this is a copy of a note I wrote some 5-6 years ago, with minor changes (I occasionally quote it here and there since I think it still has some merit).
        Both UV and particle radiation (particle radiation is a function of solar activity and of the strength of the Van Allen belt, via the Earth’s field strength) could have a far larger indirect contribution by controlling plankton volumes, and in turn changing the oceans’ clarity and CO2 absorption. Phytoplankton has developed a cloud-forming mechanism to protect itself from direct UV radiation, hence it has to be a factor worth considering.
        High UV/particle radiation = reduction in plankton = clear water = deeper penetration, more heat absorbed further down and retained = warming; the reverse holds true.
        Plankton is the largest CO2 absorber, but the oceans are also among the largest CO2 emitters (by far the largest in the more distant past), so if CO2 happens to be an important factor then:
        High UV/radiation = reduction in plankton = less CO2 absorbed = warming; the reverse holds true.

        The oceans’ storage and distribution system is also complex, with a high degree of uncertainty.
        One guess could be just as good as another!

  16. Having consulted Tim Palmer’s Lorenzian Meteorological Office

    ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t) where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space. Prognostic equations for ρ, the Liouville and Fokker-Planck equations, are described by Ehrendorfer (this volume). In practice these equations are solved by ensemble techniques, as described in Buizza.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)

    Don’t ask me for the details – I mistook an ordinary differential equation for a form of lyric verse.
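    In practice Palmer’s ρ(X,t) is carried by an ensemble rather than by solving the Liouville equation directly. A minimal sketch of the idea, with the chaotic logistic map standing in for the forecast model (that substitution, and all the numbers, are mine, not Palmer’s):

```python
import numpy as np

# Ensemble reading of rho(X, t): carry N sample states forward through
# the model and read probabilities off the ensemble. The chaotic
# logistic map x -> 3.99 x (1 - x) stands in for the (vastly more
# complex) forecast model.

N = 10_000
rng = np.random.default_rng(1)
x = 0.3 + 1e-8 * rng.random(N)       # tight initial "analysis" cloud

for _ in range(50):                  # the forward model: 50 chaotic steps
    x = 3.99 * x * (1.0 - x)

# rho(X, t) as a histogram: P(X in dV) ~ fraction of members in dV
rho, edges = np.histogram(x, bins=20, range=(0.0, 1.0), density=True)
print(x.std())                       # the 1e-8 spread has become O(1)
```

    The point-by-point forecast is hopeless after 50 steps, but the ensemble histogram is a perfectly usable object: exactly the shift from trajectories to probability densities that the quoted passage describes.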

  17. Outstanding contribution. Thank you.

    DavS

  18. Strongly recommended reading in relation to these issues :
    Nancy Cartwright How the laws of physics lie, Oxford University Press, 1983.

  19. Eddies in bounded flows such as channel flow have been studied at length, and we now have nice correlations and “exact” DNS results. Although the flows are locally random, the geometric “boundedness” of the problem ensures that the averaged properties are repeatable. If the atmosphere is considered as a quasi-2D surface, it essentially means that the eddies are bounded by each other. This exposes the evolution of each eddy to the randomness of its neighbours. Thus it is unlikely that there will be any notion of repeatability of the eddies in the sense we like to use when looking at generic well-defined bounded problems.

    Anyone who has any experience in predicting heat transfer in turbulent flow will be aware of the difficulty of getting accurate results even for well defined bounded problems such as a heated channel flow let alone the more complex problems.

  20. Not sure how this is a problem for climate science.

    On meaningful time scales (i.e. 15 years and above) the bulk properties of climate average out, which means that any internally driven variation of bulk properties is pretty small scale and transient. Which is what we expect of a heavily damped system.

    Internal modes (Such as the PDO) can change the distribution of temperature for longer; but not the total amount, and it’s the total that we are interested in. Furthermore, the existence of some feedbacks leading to new states – classically the deglaciation/albedo runaway at the end of glaciations – cannot be disputed, but is also accounted for with traditional approaches.

    So what is left here appears to be an assertion that we cannot predict the weather for more than a couple of weeks at best, and that in the <5 year time frame internally generated effects can swamp a longer term climate signal. Which we already knew.

    Unless someone can post evidence otherwise, of course..

    • I do not think that a so-called longer-term climate signal exists. The notion of “bulk properties” of climate seems so vague that it tends towards meaninglessness. Roger Pielke Sr discusses things like ocean heat content, which is probably as close as we can get to a meaningful “bulk property”.

      There are large temporal atmospheric temperature changes (e.g. night and day) and equally large spatial temperature variations. I fail to see how such a system can be described as heavily damped. The only damping that occurs is in the post-processing.

      Cycles which randomly hit a peak at the same time are most likely responsible for the creation of new states.

      Clouds, which have such a big influence (both positive and negative) on temperature, are inherently linked to turbulence. It is therefore a gross oversimplification to say that weather (i.e. turbulence) = noise that can be averaged out to pick up some kind of imaginary background bulk signal.

  21. Tomas Milanovic

    Pekka

    The well known example of Lorentz is true in a discretized deterministic nonlinear atmospheric model. Such a model has the temporal chaos, where one can really see that one single very small variation in input grows to global dimensions

    Well, not really. How is it possible that the Lorenz model, which is related to fluid dynamics (so spatio-temporal chaos), can be treated by temporal chaos theory?
    Well, it is because Lorenz found a “trick” to represent this specific flow as a 2D flow.
    And it is a known result (for those who have worked in nonlinear dynamics) that a 2D flow is dual to a Hamiltonian system.
    The only (formal) difference is that the Hamiltonian is an invariant of the system while the flow function is not an invariant of the corresponding 2D flow.
    In any case, the end result of this trick or insight is that this specific flow is reduced from the infinite-dimensional phase space to a finite 3D space and can be described by only 3 nonlinear ODEs. It became a temporal chaos problem which can be studied with the “classical” tools.
    However I stress again: the real world is 3D, not 2D.
    The Lorenz model, describing a specific flow approximated by a 2D flow, allows no insight into the 3D flow problem, cannot be generalised and, more importantly, has little to do with spatio-temporal chaos. Therefore, regardless of whether the numerical 3D models are relevant for real spatio-temporal chaos or not (I think not), they have nothing to do with Lorenzian temporal chaos.

    The question of “stochasticity” is an important and indeed very deep question.
    I hope that we will come to it later in more detail.
    To begin I recommend this: http://socrates.berkeley.edu/~phylabs/adv/ReprintsPDF/NLD%20Reprints/17%20-%20Egodic%20Theory.pdf
    Sure, this paper is strongly temporal-chaos oriented, so its relevance for spatio-temporal chaos is limited, but it gives the right paradigm for how to approach problems of statistical predictability.
    Judith, who is an expert in N-S, surely knows the work of Ruelle, who has written the seminal: http://www.phys.au.dk/~fogedby/chaos/ruelle.pdf

    Last, to situate spatio-temporal chaos correctly, I suggest for example:
    http://yakari.polytechnique.fr/people/pops/NL2574.pdf

    • Here is the abstract of an old paper that directly addresses this issue.

      James H. Curry, Jackson R. Herring, Josip Loncaric, and Steven A. Orszag, Order and disorder in two- and three-dimensional Bénard convection, Journal of Fluid Mechanics, Vol. 147, pp. 1-38, 1984.

      So far as I know, there is no basis whatsoever for extrapolating the properties and characteristics of the strictly temporal original Lorenz system to include spatial dimensions.

    • Tomas,
      I did not claim anything in contradiction with your description of the Lorenz result. Being able to describe the situation analytically requires special conditions as you describe. More general deterministic finite dimensional models may also have temporal chaos – and all computer models are finite dimensional, even when the system being modeled is not.

  22. Tomas Milanovic

    On meaningful time scales (i.e. 15 years and above) the bulk properties of climate average out, which means that any internally driven variation of bulk properties is pretty small scale and transient.

    Nonsense. At what “meaningful” scale are the Lorenz system properties “averaged out”, to take just a temporal chaos example?
    These words don’t even begin to make sense for spatio-temporal chaos.

    Unless there is a proof that explains correctly (not the C. Johnson way) why it should be so, what is meaningful and why. Of course I may have missed such a proof, but until further notice it doesn’t exist.

    • Global temperature is an example of a bulk property, and it does indeed average out over sufficient time scales; hence whatever chaos, spatio-temporal or otherwise, is present in the system on short timescales, it does not affect our longer term predictions.

      And we don’t need a proof of this; it’s an observed property of the system! Indeed, if the problem is that your model does not explain the observations, then I’d suggest that your model is the thing with a problem.

      • Not for the first time, the strange juxtaposition of optimism and pessimism in the AGW story bowls me over.

        How is ‘Global Temperature’ an ‘observed property of the system’ showing that spatio-temporal chaos ‘does not affect our longer term predictions’? Am I right in thinking that it is temperature anomalies in various locations, with ‘expert’ adjustments as weather stations and conurbations come and go, that are averaged? A deliberate, detailed process chosen by human beings in the last few years, leading to an obscure statistic over a hundred and fifty years, which you very optimistically interpret as an observed property of ‘the system’, and which then serves to support all the pessimism of which a guilt-ridden humanity is capable.

        The deep meaning in Global Temperature for me is the wondrous observation that, in order for life to evolve on planet earth, over four billion years, it seems as if we have never been either completely ice-free or without some open water across the oceans. Over such timescales it seems incredible – but without it no conversation about credibility. Grounds for optimism indeed.

  23. Tomas Milanovic

    Btw, somebody asked which scientists equate chaos with randomness.
    The person who said “On meaningful time scales (i.e. 15 years and above) the bulk properties of climate average out” is an example.
    There are, according to my estimation, 90% of them.

  24. Tomas Milanovic

    Chief
    we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t) where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space.

    Well, yes. This is similar to what the Eckmann paper says for … temporal chaos.
    The problem for spatio-temporal chaos is that its “phase space” is infinite-dimensional, so you cannot define dV.
    Only if it is spatially discretized does it become (hugely but) finite-dimensional, with a well defined dV.

  25. And here I’d been focused on the experimental side of the problem, without first considering the theoretical. Stupid of me – thanks for pointing it out Tomas.

    Without looking further, it would seem that to control climate (whatever “climate” means) in a politically acceptable range, very different proposals than what have been offered would have to be developed.

  26. It sounds to me that you are ascribing things for which we don’t currently have a physical explanation to “chaos”.
    Judy, you are being very quiet. What is your opinion on this?

  27. Hello Tomas, and thanks for the info.

    I have a question(s) about tuning / hindcasting within the frame work of temporal and spatio-temporal chaos. Let’s take the temporal case first. And maybe last; spatio-temporal chaos makes my head hurt.

    Let’s say that we are given the results of calculations using the original Lorenz 1963 system and that these represent the data that we’re going to use to tune up our model of the data in a hindcast exercise. Of the three parameters in the Lorenz model, we choose to use only one of them, the normalized Rayleigh number representation, to tune our model.

    Can a single model actually be tuned? That is, given the sensitivity of the calculations to both the initial conditions and the parameter values, how is it possible to even attempt parameter estimation through such tuning and have assurance of a causal connection between the parameter to be tuned and the supplied data? How can we be certain that we’ve selected the correct parameter to be tuned?

    Given that averages of ensembles of model results seem to be the only accepted method of presenting chaotic response for practical use, what is gained by attempting to tune a single ‘realization’? More difficult still, how would one attempt to apply any kind of parameter estimation to ensemble averages?

    In a more general manner, if we take tuning through hindcasting to be some kind of ( weak ) form of proper parameter estimation, what is the theoretical basis for parameter estimation within the framework of chaotic response; especially spatio-temporal chaos?

    I think the best that can be said is that, based on the known-to-be-false assumption that averages of chaos are not chaotic, solution meta-functionals such as some kind of global-average temperature are hopefully somewhat connected with the parameter chosen to be tuned. The major pitfall, and often-ignored reality, is that tuning to solution meta-functionals is the best way to get the ‘right’ answer for the wrong reason.

    Thanks for any additional info
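    A toy illustration of why tuning a single realization is so treacherous: two Lorenz-63 runs with *identical* parameters, differing only by 1e-10 in the initial state, disagree completely point-by-point, yet their time-averaged z (standing in for a “global average” meta-functional) nearly coincide. So matching such an average barely constrains the trajectory, let alone the parameter behind it. The parameter values and the choice of averaged quantity below are illustrative assumptions, not anything specified in the comment.

```python
import numpy as np

# Two Lorenz-63 runs, same parameters, initial z differing by 1e-10.
# Pointwise the trajectories decorrelate completely; the long-time
# mean of z is nevertheless almost identical for both.

def run(z0_offset, steps=10_000, dt=0.005,
        sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    s = np.array([1.0, 1.0, 1.0 + z0_offset])
    zs = np.empty(steps)
    for i in range(steps):                 # forward Euler integration
        x, y, z = s
        s = s + dt * np.array([sigma * (y - x),
                               x * (rho - z) - y,
                               x * y - beta * z])
        zs[i] = s[2]
    return zs

za = run(0.0)
zb = run(1e-10)
print(np.abs(za - zb).max())   # pointwise: totally different trajectories
print(za.mean(), zb.mean())    # meta-functional: nearly the same
```

    Turned around, this is exactly the “right answer for the wrong reason” trap: many very different trajectories (and many parameter/IC combinations) reproduce the same average.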

  28. This is a good post but probably a bit too technical for most readers.
    I will try to express some of the ideas a bit more simply, as promised earlier to Labmunkey.

    Equations

    To model the climate you need to solve equations for heat transport and fluid motion in the atmosphere and the oceans.
    These equations are called ‘partial differential equations’ (PDEs) which means they involve functions like temperature that depend on three dimensions of space, plus time.
    The equations are nonlinear (this means they involve products or powers of the quantities we don’t know and are trying to solve for). Also, the PDEs are all coupled to each other and to other equations, for example the equations of radiative heat transfer.
    Because of all these complications, we can’t solve them with pencil and paper. We can only get approximate solutions with the help of a computer. When we do this we have to choose the ‘initial conditions’ (that is, how we start things off). We also have to choose ‘boundary conditions’ (what happens at the edge of the region we are considering).

    Spatio-temporal chaos (STC)

    When you solve these equations on a computer you often find that the solutions vary in space and fluctuate in time in an apparently irregular, disordered way. This is called ‘spatio-temporal chaos’ (STC). As Tomas says, there is no real theory of STC, but you can see it in the swirls in a river or in a turbulent plume of smoke.

    Some nice simple java demonstrations of STC can be found on the web page of Mike Cross (physics prof at Caltech), here.

    You can watch the system evolving and tweak the parameters. Of course those toy systems are much simpler than the weather or the climate.

    The important points are that:
    * There is no randomness or noise in the system. The system fluctuates in space and time, all by itself. Although there is some forcing applied (one of the terms in the equation can be interpreted as a forcing), that forcing is kept constant.
    * The system has ‘sensitive dependence on initial conditions’. This is the key feature of chaos. A very small change in the starting values will fairly soon lead to a significant change in the solution.

    Relevance to climate

    So what, if anything, does this have to do with our climate?
    Well, the main point is that nonlinear equations naturally generate fluctuations all by themselves. To mathematicians and physicists like Tomas and me, irregular wobbles in the climate of different regions and in the global average temperature are exactly what we would expect to see. The idea of trying to ‘explain’ every little wiggle in the curve as the direct result of some ‘forcing’ seems unnecessary and in fact rather ridiculous.
    The weather is chaotic on a short time-scale of days as clouds and winds change. Similarly the climate is expected to be chaotic on longer timescales involving larger, more slowly changing processes such as the circumpolar vortex, major ocean currents and ice sheets.
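    The two bullet points above can be demonstrated in a few lines with a coupled map lattice, a standard toy model of STC (my choice of illustration, not one of the Caltech applets): there is no noise anywhere, yet the field fluctuates irregularly in space and time, and a 1e-10 change at a single site eventually alters the whole lattice.

```python
import numpy as np

# Spatio-temporal chaos toy: a ring of chaotic logistic maps with
# diffusive coupling to nearest neighbours. Fully deterministic, yet
# it fluctuates in space and time, and a tiny change at ONE site
# spreads to every site (sensitive dependence on initial conditions).

def step(x, r=3.99, eps=0.3):
    f = r * x * (1.0 - x)                        # local chaotic map
    return (1 - eps) * f + (eps / 2) * (np.roll(f, 1) + np.roll(f, -1))

L = 64
rng = np.random.default_rng(2)                   # only used to pick an IC
a = rng.random(L)                                # lattice A
b = a.copy()
b[0] += 1e-10                                    # lattice B: one tiny change

for _ in range(200):                             # deterministic evolution
    a, b = step(a), step(b)

d = np.abs(a - b)
print(d.max(), (d > 1e-4).sum())                 # difference is lattice-wide
```

    The forcing here (the map parameter r) is held constant throughout, in the same spirit as the constant-forcing demonstrations mentioned above.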

    • Which is fine as it goes.

      But if you look at a global parameter – such as temperature over time – then subtract the calculated effect of various forcings (greenhouse gases, the sun, volcanoes, etc.), you basically get the ENSO signal; take that out and you get essentially a flat line.

      The only way to avoid this is to assume a very low climate sensitivity (below that calculated) and then assume that the resultant difference is the result of this chaos.

      So if the climate is chaotic over longer timescales, there is precious little evidence for it.

      Which is not surprising. I can, for instance, quite simply create a system exhibiting such chaos by putting a heat source under a pan of water; good luck trying to predict the exact conditions of any bit of that pan into the future. But I know that the average temperature of that pan will be very tightly constrained.

      • Please read what I wrote. Especially the last bit. In fact you don’t seem to have absorbed any of it.

      • I did read the last bit.

        However, it seems to conflict with the available evidence; the Earth’s climate does not appear to act chaotically on long timescales – or rather, it always acts chaotically but within a tightly bounded range. There is no evidence, for example, that the climate has ever wandered far (c. 0.2K) from its equilibrium temperature (globally) for any substantial period of time.

      • Yes. Andrew Dodds’ explanation is spot on.

        Additionally, I don’t like the idea of ascribing “every wiggle in the curve” to chaos simply because we don’t yet know the full spectrum of forcings. That’s really not what chaos theory is for…

      • Hi,

        I have linked above to the National Academy of Sciences, the Woods Hole Oceanographic Institution and the Pentagon on abrupt climate change. There is little doubt that Earth systems behave as a chaotic oscillator over timescales from instants to eons. There are many other references in the scientific literature.

        It is indeed not even in doubt at the IPCC. They regard weather as chaotic and much longer-term climate as chaotic. It is this interim period of the next 100 years that is conveniently statistical.

        You are bravely flying in the face of evidence.

      • I believe you. I do understand that there are elements of chaos within climate. My point was that there is a danger in ascribing chaos to all not-yet understood climate behaviors when, in fact, there may very well be a physical explanation that we have not yet uncovered.

        And thank you for acknowledging my courage.

      • Yes, but you do that on basically 60-70 years of T data, which do not exhibit that much variation or complexity. And you have quite a lot of different forcings, some of them lacking accurate estimates…

        I would be extremely careful before drawing any conclusions from this fit, especially as other climate features are much less well predicted, and the dynamics of T itself are not so well predicted on the smaller timescales which should be discretizable by GCMs. And there is a post by Willis, I think, that shows that over the period where the fit is good (and the T record somewhat accurate), a simple linear fit is able to obtain a very similar T history from the used forcing history as a GCM in use.

        Willis’ approach does not show that climate can be obtained from a linear fit of the forcings with some time constant. But it shows that using the T fit over the 20th century as a diagnostic for GCM fitness is not enough. Imho it is so weak a test that it does not add any trust in GCM predictions or their adequacy for modelling the Earth’s climate.

    • Some interesting points being made here.
      Within economics modelling, attempts to model the feedback mechanisms that occur in the real economy are also really difficult. We know, for example, that investment in new technologies will act as an incentive for the existing technologies it hopes to substitute to become more efficient (the sailing ship effect – i.e. in the 50 years after the introduction of the steam ship, sailing ships made more efficiency improvements than they had in the previous three centuries). But how to quantify something even as simple as this is not easy. We have, however, learnt a few ways to give sensible (order of magnitude) figures with time lags, the learning-by-doing effect and phased-in substitution effects, based on massive amounts of data.

      http://mitigatingapathy.blogspot.com/

    • Thanks for posting that, Paul – very much appreciated and very helpful. I think I’m beginning to get a handle on it now.

  29. Tomas

    I take it from your post that you strongly believe the Earth’s climate is a spatio-temporal chaotic system? If that is correct, is it “proven”? And if not, could it be proven?

    Also, if your assertion is correct, what does it mean for detection and attribution arguments of AGW? (I’d guess they would be meaningless.) And for climate sensitivity calculations (from both the 20th-century temperature record and forcings, and from ice age and interglacial terminations, Milankovitch cycles)?

    Thanks

  30. Tomas – Thank you for using the swift mountain river as analogy for spatio-temporal chaos. As a kayaker and hydrogeologist, I always felt that the concept of standing waves, eddies and whirlpools in a river system was a good example of spatio-temporal chaos and if you hadn’t used the example, I would have pointed it out here. Here is a short, entertaining video providing an excellent visual of what Tomas is describing. Note that this system changes dramatically as the flow rises and falls (i.e., forcings)
    Big Rapids

    • Exactly.
      And there would be two distinctly different approaches to modeling 1) waves and eddies in a river system and 2) predicting the average depth based on a change in forcings (e.g., increased input from snowmelt).

      • The average depth is not a useful result. It changes constantly along a river – supercritical or subcritical flow, standing waves, afflux, cross-section changes, surface roughness, dynamic changes in water stored in reaches. Please do not oversimplify the sacred hydrological truth.

        I will give you an example of where rainfall averages are misleading. In the 1980s, two geomorphologists noted that central NSW streams had changed form from a high-energy braided form to a low-energy meandering form. Looking at flood heights over 150 years, they saw multi-decadal periods of flood and drought – flood-dominated and drought-dominated regimes (FDR and DDR). In particular, there was a flood-dominated regime from the 1940s to the late 1970s and a drought-dominated regime from 1976/77 (the “Great Pacific Climate Shift”) to 1998. It turned out to be a generalised pattern for northeast Australia – which is of course caused by the multi-decadal Pacific pattern I addressed in the post on decadal cloud change. The average rainfall is not useful for farm planning if you plan based on averages and then go into a 30-year drought. That is indeed what we did.
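        A toy illustration of that point, with invented numbers rather than the actual NSW record: a synthetic 60-year rainfall series that switches abruptly from a flood-dominated to a drought-dominated regime has a long-run average that describes neither regime, which is exactly why planning on the average fails.

```python
# Synthetic 60-year rainfall record with an abrupt regime shift (all numbers
# invented for illustration -- not observed NSW data): 30 flood-dominated
# years around 900 mm, then 30 drought-dominated years around 500 mm.
import random

random.seed(1)
fdr = [random.gauss(900, 80) for _ in range(30)]   # flood-dominated regime
ddr = [random.gauss(500, 80) for _ in range(30)]   # drought-dominated regime
record = fdr + ddr

overall_mean = sum(record) / len(record)
fdr_mean = sum(fdr) / len(fdr)
ddr_mean = sum(ddr) / len(ddr)

# The long-run average sits between the regimes and describes neither one:
# plan a farm on ~700 mm and a 30-year drought delivers ~500 mm instead.
print(f"overall mean: {overall_mean:.0f} mm")
print(f"FDR mean:     {fdr_mean:.0f} mm")
print(f"DDR mean:     {ddr_mean:.0f} mm")
```

        The same arithmetic holds for any statistic computed across a regime boundary: it averages over two distinct states of the system.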

        The changes in the Pacific are abrupt. The NAS defines abrupt climate change (from memory – Abrupt Climate Change: Inevitable Surprises) as a sudden change that is out of proportion to the initial impetus. It looks very much like a chaotic oscillator. If it looks like a duck and quacks like a duck – it is probably a duck.

      • “The average depth is not a useful result. It changes constantly along a river – supercritical or subcritical flow, standing waves, afflux, cross-section changes, surface roughness, dynamic changes in water stored in reaches. Please do not oversimplify the sacred hydrological truth.”

        Sorry to offend your hydrologic sensibilities. I was drawing an analogy between wave interaction/river depth and weather/climate. Different drivers.

      • And I was simply pointing out that the analogy didn’t hold water.

  31. Tomas, when you couple a chaotic system with random control-parameter variations (solar input, in particular, in the case of Earth) you do indeed get random, unpredictable responses (unless the control parameters are varied through some sort of feedback-loop to actually control the system – not relevant to this situation). Only a system completely isolated from the rest of the world can have the sort of perfect chaos you are talking about here.

    The reason deterministic chaos leads to randomness given any interaction with the rest of the world is inherent in the local dynamics: two initial points separated by a small “distance” in phase space will grow exponentially more distant over time, with the exponential growth rate given by the Lyapunov exponent. Under spatio-temporal chaos, correlations in the spatial dimension similarly decay exponentially with their own exponents. So any uncertainty in initial conditions makes the later state that much more unpredictable; after a sufficient time for the exponential growth to get large the state of the system is essentially randomly selected from the ensemble of possible states at that point in time.
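    A minimal sketch of that exponential divergence, using the logistic map x → 4x(1−x) – a standard chaotic toy system, nothing climate-specific: two orbits started 10⁻¹² apart end up macroscopically separated, and averaging log|f′(x)| along an orbit recovers the Lyapunov exponent, which for this map is exactly ln 2.

```python
# Two orbits of the chaotic logistic map x -> 4x(1-x), started 1e-12 apart:
# the separation grows roughly like exp(t * ln 2) until it saturates at the
# attractor size, so the initial uncertainty is amplified to order one.
import math

def f(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12
max_sep = 0.0
for t in range(60):
    x, y = f(x), f(y)
    max_sep = max(max_sep, abs(x - y))
print(f"max separation after 60 steps: {max_sep:.3f}")   # order 1, from 1e-12

# Lyapunov exponent as the orbit average of log|f'(x)|, with f'(x) = 4 - 8x;
# for this map the exact value is ln 2 ~ 0.693.
x, total = 0.3, 0.0
n = 50000
for _ in range(n):
    x = f(x)
    total += math.log(abs(4.0 - 8.0 * x))
print(f"estimated Lyapunov exponent: {total / n:.3f}")
```

    After roughly 40 doublings the 10⁻¹² uncertainty fills the whole attractor, which is the point: past that horizon the state is effectively drawn from the ensemble.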

    In fact, spatio-temporal chaos doesn’t help your argument at all, if you think it has anything to say regarding large-scale oscillations like ENSO. The existence of ENSO indicates that some spatial Lyapunov exponents must be quite small, or you could never have such long-distance correlations across a substantial fraction of Earth’s surface. The actual chaotic behavior of Earth’s climate system is much closer to the behavior of a Rayleigh-Bénard cell, which can be seen in the rather highly structured Hadley, Ferrel, and Polar cells that usually govern the large-scale circulation in the troposphere.
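    The interplay between local chaos and spatial correlation can be sketched with a coupled map lattice – a standard toy model of spatio-temporal chaos, not a climate model. Every site is a chaotic logistic map; with diffusive coupling switched on, neighboring sites stay measurably correlated even though each one is individually unpredictable (the coupling strength 0.4 below is an arbitrary illustrative choice):

```python
# A ring of 200 logistic maps with diffusive coupling (a coupled map lattice,
# a standard toy model of spatio-temporal chaos -- not a climate model).
# With no coupling the sites decorrelate; with coupling, neighbors remain
# correlated even though every individual site is chaotic.
import random

def step(x, eps):
    # x_i' = (1-eps) f(x_i) + (eps/2)(f(x_{i-1}) + f(x_{i+1})), f(x) = 4x(1-x)
    n = len(x)
    fx = [4.0 * v * (1.0 - v) for v in x]
    return [(1.0 - eps) * fx[i] + 0.5 * eps * (fx[i - 1] + fx[(i + 1) % n])
            for i in range(n)]

def neighbor_corr(eps, n_sites=200, n_steps=500):
    # Time-averaged spatial correlation between neighboring sites.
    random.seed(0)
    x = [random.random() for _ in range(n_sites)]
    for _ in range(200):                       # discard the transient
        x = step(x, eps)
    num = den = 0.0
    for _ in range(n_steps):
        x = step(x, eps)
        m = sum(x) / n_sites
        num += sum((x[i] - m) * (x[(i + 1) % n_sites] - m) for i in range(n_sites))
        den += sum((x[i] - m) ** 2 for i in range(n_sites))
    return num / den

u = neighbor_corr(0.0)   # uncoupled: near zero
c = neighbor_corr(0.4)   # coupled: positive
print(f"neighbor correlation, uncoupled: {u:+.3f}")
print(f"neighbor correlation, coupled:   {c:+.3f}")
```

    The coupling plays the role of the weak spatial “leakage” that lets structures like ENSO persist inside an otherwise chaotic field.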

    The primary domain for Earth’s chaotic dynamics, then, is the weather – short term variations in local or regional atmospheric behavior. Over the longer term, weather becomes random simply because there are small random variations in solar input (not to mention butterflies) and what matters is the statistics of the weather – climate. And there is no sign that climate (yes it is a boundary value problem) itself is “chaotic” in any significant sense.

    • The impression I get from past IPCC reports is that the accuracy of hindcasts isn’t being maintained in the long term forecasts.

    • I think, based on what we know, the Younger Dryas event is a good example of chaotic and/or stochastic processes (I’m not sure if we know enough about what caused it to say whether it was one or both) affecting climate.

      I don’t think anybody thinks that the Younger Dryas was weather.

      However, I doubt anybody who really thinks about things would be happy to say that the Earth is going to warm due to our CO2 output until something completely unpredictable happens.

      You can’t prepare for unpredictable events, and many people argue that instead of controlling CO2 output we should spend money preparing for the results, so that argument essentially goes out the window.

      In addition, human society (not just humans ourselves, but our society) is really the result of evolutionary processes, and basic evolutionary theory tells us that shifts in the environment are almost always negative for the existing entities.

      This is especially worrisome if you are an American.

      • It is possible that something will happen as a result of climate change due to CO2 and cause a shutdown of the THC, and we’ll get another Younger Dryas event. There’s no real reason to think it will happen, but if you accept that climate is largely a chaotic system, then you have to conclude there is no way to put a probability on it happening. The end result would be that the “real” sensitivity of the climate to CO2 output was in fact not positive, but quite negative in this case (i.e. it caused cooling).

        It is also possible that by some completely unknown mechanism the sensitivity of the climate to CO2 is much more positive than you’d expect, and something will happen that will give us warming much larger than predicted.

        If you accept that climate is largely a chaotic system.

      • I think that there is a confusion between chaotic and deterministic. In theory – chaotic systems are deterministic. Chaos theory is a metatheory – a theory of theories. You take observed abruptness in climate systems – and say this looks very much like a chaotic oscillator and chaotic oscillators have these general properties.

        One of the properties is that prediction as such is beyond our present capabilities – and the best approach may be the probability prognostication approach of Tim Palmer – a summary of which I have posted elsewhere on this thread.

        So Peter is absolutely right – there is no way that we can eliminate the climate risk of abrupt and severe change as a result of anthropogenic greenhouse gases.

        Is climate chaotic? Well if it looks like a duck and quacks like a duck.

  32. Before I start, can I just say that I struggled a little with this, which may be reflected in what follows.

    Now I know that I am falling into a trap by putting up a graph with a timebase. However, how does this chaos theory fit with this graph?

    • Among other things, if one sets aside the astronomical Milankovitch-cycle suggestion as the driver of glacial epochs, the ice ages might be quasi-periodic oscillations like the ENSO pattern. The fact that the cooling spans are gradual while warming episodes are short-term and abrupt is an issue that has not been comfortably dealt with yet, regardless of the pet theory applied. There are even longer-term oscillations as well. For much of the Cenozoic the planet has been in an “ice house” pattern, as opposed to the preceding “greenhouse” pattern. The “ice house” pattern is identified geologically by evidence of polar ice caps and, apparently, montane glaciers. Real “greenhouse” periods lack evidence of planetary ice. If you read up more on glacial patterns you will find that the interval has apparently changed during the Pleistocene.

  33. Despite the chaotic and stochastic components of the system (as I’ve already said in the other thread, and I agree with you that those two things aren’t necessarily interlinked), it clearly responds predictably to various forcings (at least short term).

    Large volcanic eruptions cause cooling. Solar cycles with less solar output are cooler than those with more solar output.

    Temperatures go up and sea levels follow (at least shorter term).

    Does anybody really want to argue that if something happened and essentially all sunlight was blocked from the Earth, the Earth wouldn’t cool?

    That doesn’t mean the system doesn’t have chaotic and stochastic components nor that we could predict the complete climate state of the Earth if that happened.

  34. Since data can be analysed and (somewhat reliable) determinations can be made of whether it fits in deterministic, stochastic or chaotic profiles and on which scales of time (possibly also of space), I don’t see there being a death knell here for studying cause and effect or probability in Climate.

    If anything, the better-developed the Chaos Theory applied, the more varied types and better quality of data that it is applied to, the better we will understand what predictions can be made and over what ranges.

    Further, models are made _more_ and not less meaningful and compelling as tools of research under Chaos, as they are not about prediction, but about understanding what chaotic systems might look like.

    CO2 emissions by man are the strongest single perturbation of the climate we know for the scale of the chaos in our system.

    This is in nearly zero doubt.

    We have good, strong data and theory for cosmic and solar perturbations, and we know to fair confidence that they are not within orders of magnitude of the size of the influence of CO2. Look at the variations in incoming radiation, and show even one that amounts in duration or potential to affect the system as even a fraction of the signal from the multiplication of levels of an actual chemical.

    Cloud formation from cosmic rays?! This is vanishingly small as a perturbation, moves the system in all directions about equally at once, and what demonstrated duration for this input is there? Fractions of a second? A few weeks?

    Volcanoes? Vanish into insignificance in peak duration and overall variability.

    Solar variability? Vanishes on the timescale of climate. Sure, in fifty to a thousand millennia, will overwhelm the effects of CO2 emission in this millennium, if there is some strong single-direction of change in the Sun.

    Space rocks crashing into us? Stop being silly.

    This pretty much leaves the natural variability carried on the large ocean cycles, which, being confined within the system and not an externality, is not an external perturbation at all.

    This is what AGW is about, perturbation of the chaos in the system, on the scales where the chaos of the system is significant to us.

    That perturbation is CO2 emission.

    Everything else is footnote.

    • Bart R wrote, “Since data can be analysed and (somewhat reliable) determinations can be made of whether it fits in deterministic, stochastic or chaotic profiles and on which scales of time (possibly also of space), I don’t see there being a death knell here for studying cause and effect or probability in Climate.”

      That’s exactly an example of what Tomas was talking about, “99%”. Which data? Reliable determinations? Do you have any idea what kind and amount of data is needed to characterize the dynamics of such a large spatio-temporal field object as atmospheric dynamics? Climate statisticians cannot even accept the idea that a regular grid of properly spatially placed stations is needed just to evaluate the basic mean of some projection of the temperature field.

      To give you some initial perspective on the amount of data that must be collected and processed to perform basic nonlinear dynamics analysis on even the simplest, cleanly mathematically defined [and generated] nonlinear systems, see this:

      http://www.fi.isc.cnr.it/users/antonio.politi/Reprints/017.pdf

      Please digest the paper and come back with an answer before commenting any further on a subject that is clearly outside your expertise.

      (For impatient bloggers without an Adobe PDF reader, the answer is: about 10^9 dynamically-significant points per attractor dimension is a minimal must.)
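      The flavor of that data-requirement argument can be felt even in a toy case. Below is a rough Grassberger-Procaccia correlation-dimension estimate for the Hénon map (literature value ≈ 1.2) – my own sketch, not taken from the linked paper. The correlation sum scales as C(r) ~ r^D, so the number of point pairs available at small radii collapses as the dimension D grows, which is what drives the enormous data requirements for high-dimensional attractors:

```python
# Rough Grassberger-Procaccia correlation-dimension estimate for the Henon
# map (a = 1.4, b = 0.3; correlation dimension ~1.2 in the literature).
# C(r) = fraction of point pairs closer than r; on an attractor C(r) ~ r^D,
# so the pair count available at radius r shrinks like N^2 * r^D.
import math

a, b = 1.4, 0.3
x, y = 0.1, 0.1
pts = []
for i in range(2500):
    x, y = 1.0 - a * x * x + y, b * x
    if i >= 500:                 # discard the transient
        pts.append((x, y))

def corr_sums(r_small, r_large):
    # Fractions of pairs within each radius, counted in a single pass.
    n = len(pts)
    rs2, rl2 = r_small ** 2, r_large ** 2
    c_small = c_large = 0
    for i in range(n):
        xi, yi = pts[i]
        for j in range(i + 1, n):
            dx, dy = xi - pts[j][0], yi - pts[j][1]
            d2 = dx * dx + dy * dy
            if d2 < rl2:
                c_large += 1
                if d2 < rs2:
                    c_small += 1
    pairs = n * (n - 1) / 2
    return c_small / pairs, c_large / pairs

cs, cl = corr_sums(0.01, 0.1)
D = math.log(cl / cs) / math.log(10.0)   # slope of log C(r) vs log r
print(f"correlation dimension estimate: {D:.2f}")
```

      Two thousand points are enough for a two-dimensional map; scale the same exercise to an attractor of dimension 8 or 10 and the pair statistics evaporate.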

      • Al Tekhasski

        I cannot say I have thoroughly digested every line and equation of the paper you supplied, though I am eager to return to it and do so, as it has been well over a decade since I last studied such material in depth, and it seems I miss it greatly.

        You overshoot my mark. This is understandable, as it is ambitious to do what you suggest I say must be done, and characterizing my ideas as ambitious, you read more into what I suggest than is there.

        ” Do you have any idea what kind and amount of data is needed to characterize dynamics of such a large spatio-temporal field object as atmospheric dynamics? Climate statisticians cannot even accept the idea that a regular grid of properly spatially placed stations is needed to just evaluate the basic mean of some projection of temperature field. “

        If there isn’t an attractor within a time frame, that’s fairly self-evident in climate, though again I disparage the practice of focusing on the temperature field alone. Messy and problematic that dataset will always be.

        No attractor, no point talking about Chaos.

        If there’s an attractor within a time frame, that’s great. You’ve just defined the scope and parameters of an engineering challenge well within practical technical means. Your project gives a good deal more information than is needed to achieve my much more modest aims, but that’s okay, since it would certainly cover what I set out, plus no doubt provide sizable other information of use to someone.

        And again, I don’t regard either the dataset I suggested or the much more impressive dataset you interpreted me to mean as more than mathematical puzzles and Earth Sciences trivia.

        We have a novel perturbation across the globe and each chaotic climate system on the globe; it is bigger and more persistent, and by nearly every measure by which a perturbation can be judged more likely to upset a dynamic equilibrium or to shift or radicalize attractors, it is larger than any perturbation of the past half million years.

        What does this say to you about the stability of the climate?

      • Bart R, as I said, this subject is clearly outside your area of expertise. Please stop digging.

      • Al

        Nice answer.

        But then, where do I claim expertise?

        What expertise does it take to study, to ask direct questions, to decide whether people who persistently aren’t answering those questions are being deliberately evasive?

        I’m expert enough in Chaos Theory to recognize that Tomas Milanovic is extremely advanced in his understanding of that subject. But he is also simply overconfident in its application, and in the weight one must thereby grant his clearly prejudged opinion about some things outside Chaos Theory.

        Milanovic is saying that on the infinite scale of time and the infinitesimal scale of space there will be instances of climate not amenable to prediction by statistical methods, but the mathematical basis of this claim is so ill-defined that it will itself produce no other framework of understanding. (Big deal, who hasn’t said that?)

        You are saying in the same range there are instances of climate inappropriate to conventional observational methods. (Big deal, who hasn’t said that?)

        You are both then saying throw up your hands, nothing means anything, there’s no point going further with climate science, here, use my way: “Eat, drink and be merry, for tomorrow we die.”

        Do I mischaracterize?

        Keep digging.

        Oh, and if multidisciplinary knowledge is to be considered along with Chaos Theory knowledge, then certainly Thermodynamics, Optics, Chemistry, Economics and Policy knowledge also must be considered in the application of the one to the others and all to each – so 99% is really not a meaningful figure.

        Try telling a policy expert what Tomas says, “we can’t know what’s ahead, so let’s keep doing something we know will certainly make it worse.”

      • Bart R asks, “Do I mischaracterize?”

        Yes you do.

        “What expertise does it take to study, to ask direct questions”

        Quite a bit. You need to master all (introductory) levels of Calculus I-II-III, Differential Geometry, Topology, Mathematical Physics, Classical Mechanics, Mechanics of Continuum Media (ISBN 9028609245, 9001796915), and then be able to read (and understand) the following textbooks past the first 20 pages (while most people cannot get through the first five):
        ISBN 0387548130
        ISBN 0387966498
        ISBN 0521303621
        ISBN 0387548130

        Then you need to contribute something in peer-acceptable literature. Then you can ask “direct questions” that have a meaning. Have a nice and long (10+ years) reading.

        P.S. use http://www.gettextbooks.com/

      • Al

        Nice.

        One needs to be peer-reviewed to ask questions on blogs?

        Or do you mean WUWT as peer-acceptable?

        Because I’ve done that reading.

      • No, you don’t need to be in a peer-reviewed category to ask stupid questions and make silly assertions and elaborate on goofy interpretations, you are correct. However, being “peer-reviewed” does not guarantee the opposite if your peers are way outside the discipline under discussion.

      • I guess I’m in the 95% who can’t understand the textbooks on snark, excessive rudeness, ad hominem attacks, etc.

        And no, no condescending appeal to authority there that I can detect.

      • Al

        Though the English translation of Ordinary Differential Equations of V.I. Arnold is stellar (and in the 3rd edition, quaint), one guesses you included it twice for other reasons than its excellence.

        Btw, you can get it in Kindle.

        I take issue with your claim about most people, as it took me very little trouble to explain much of what is in the first third to my then five-year-old sister when I was twelve.

        I can only imagine that one has had a rather disastrous education if one thinks this material particularly more inaccessible than, say, Epstien’s Semantic Foundations of Logic, or perhaps Gaussian Self-Affinity and Fractals: Globality, the Earth, 1/f Noise and R/S by Benoit B. Mandelbrot.

        So, what’s your point again?

        That you have magickal knowledge the rest of us don’t?

      • Bart R said: “you included it twice”

        No, I did not include them twice; these are vastly different books. Check again. I would suggest studying them in the order I submitted.

        You might also need to digest these additional ISBNs:
        3540404481
        0387181733
        038794947X
        3540533761

        http://www.gettextbooks.com/author/Vladimir_I_Arnol%27d

        The point is that you likely won’t be able to understand the point until you have a grasp of the topic at the level of the basic concepts described in the books I suggested. But with the level of attention you have exhibited here and your attitude, I have some doubts that you will succeed. Sorry.

      • Sorry, I have to take it back: the last reference is indeed a repetition of the first one, to Arnold’s basic text. The last reference was supposed to be a book for engineers,

        “Nonlinear Dynamics and Chaos: Geometrical Methods for Engineers…” by J. M. T. Thompson, H. B. Stewart.
        ISBN 0471876453

        But it is always educational to re-visit a basic text after running the whole circle; the reading will be much more enjoyable :-)

      • Al

        And how are you coming on Mandelbrot and Epstien?

      • Psst – it’s Epstein, not Epstien

      • Bart R, this particular abstract property of recurring maps is not overly rich conceptually, and does not require much intellectual effort to understand and appreciate. Much more effort is required to recognize this (important) feature within predominantly smooth physical reality, especially in applications like atmospheric dynamics, and to realize that the chaotic effects do not go away with small deformations of the “physics”, as some climatologists love to say.

        And I don’t have much of opinion about musings on philosophy of mathematics.

      • Two points to Roj

        Al.. Well, Al has provided a rich trove of recommendations for a Climate Etc. Book Club.

        Maybe we should have regular meetings to review selected readings?

        I think that’s worth a point.

      • Oh, and Al, for the record, I believe Tomas doesn’t have nearly enough dimensions to characterize the universe of ‘climate scientists’ across understanding of Chaos if he’s able to represent it in a single number.

        But believe whatever you will.

  35. I think orbital mechanics remains unpredictable because while we have to accept roundings of irrational numbers in our equations and limitations in our dataset size, nature does not. The shadow of PNS has just fallen across this thread, too.

      • He spent four years in clown school. “I’ll thank you not to refer to Princeton like that.”

        ‘Capture Dynamics and Chaotic Motions in Celestial Mechanics:
        With Applications to the Construction of Low Energy Transfers’
        Edward Belbruno

        ‘This book describes a revolutionary new approach to determining low energy routes for spacecraft and comets by exploiting regions in space where motion is very sensitive (or chaotic).’

        Here is an animation of the Poincaré 3 body problem which first demonstrated chaotic motion in orbiting bodies.

        http://www.scholarpedia.org/article/File:3body_problem_figure6.gif

        I pride myself on the obscure and the tangential – the brain is the ultimate nonlinear system – but has Eli reached new heights in cryptology?

      • Point being that it is possible to handle even classically chaotic spatio-temporal systems because the available parameter space is bounded.

        Even for a system which is chaotic, paths through the parameter space do not necessarily fill the entire space and measures of the areas which are filled can be used to make future predictions. This can and has been done using equations of motion which are not chaotic which is the approach of GCMs. What you are looking for is the area of parameter space occupied by an ensemble of orbits. The key is that the limits/boundaries are the same for the deterministic and chaotic paths, and what you are really interested in are the boundaries, not the specific values at any point in time.

        See statistical mechanics for an analogy. Paths of molecules are chaotic, but what we are interested in are average energies (temperature), etc., and the effect of changes in the system on the average energy/temperature.
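        That weather/climate distinction can be sketched with a toy chaotic system – the logistic map again, as an illustration of the statistics argument, not a claim about GCMs: the instantaneous states of an ensemble of orbits scatter across the whole attractor, while their long-time averages agree closely.

```python
# "Weather" vs "climate" in a toy chaotic system: 100 orbits of the logistic
# map x -> 4x(1-x) from random starts. Their instantaneous states scatter
# across the whole attractor, but their long-time averages agree closely.
import random

random.seed(42)

def orbit_stats(x0, n_transient=100, n_avg=2000):
    x = x0
    for _ in range(n_transient):
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n_avg):
        x = 4.0 * x * (1.0 - x)
        total += x
    return x, total / n_avg       # final instantaneous state, long-time mean

ensemble = [orbit_stats(random.random()) for _ in range(100)]
finals = [f for f, _ in ensemble]
means = [m for _, m in ensemble]

def spread(v):
    # Standard deviation across the ensemble.
    m = sum(v) / len(v)
    return (sum((u - m) ** 2 for u in v) / len(v)) ** 0.5

print(f"spread of instantaneous states ('weather'): {spread(finals):.3f}")
print(f"spread of long-time means ('climate'):      {spread(means):.3f}")
```

        No individual orbit is predictable, yet the ensemble statistic is tight – the same logic as temperature in statistical mechanics.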

      • Even for a system which is chaotic, paths through the parameter space do not necessarily fill the entire space and measures of the areas which are filled can be used to make future predictions. This can and has been done using equations of motion which are not chaotic which is the approach of GCMs.

        I don’t quite follow this. The first sentence has the focus on chaotic response and then the second sentence focuses on response that is not chaotic.

        More importantly, it is my understanding that weather is chaotic and that calculations with Numerical Weather Prediction (NWP) models / codes are consistent with that assumption. Relative to chaotic response, it is also my understanding that GCMs are kind of like NWP in that they produce weather-like results. In particular, temperature, a fundamental weather quantity, is claimed to be consistent with chaotic response.

        How can equations that ‘are not chaotic’ produce results for a quantity that is said to be chaotic? And why are ensembles of GCM results, based on use of a range of parameter values and initial conditions, used to deduce the response of the realizations of the model earths? If the equations do not produce chaotic response, why can’t each realization be evaluated independent of the other realizations?

        Thanks for corrections to all incorrectos.

      • Because the issue is not the specific path through the parameter space, but how much of it is filled. A useful example is that in the temperate latitudes summer days are hotter than winter days, although you may find an occasional exception. Chaotic orbits make it impossible to predict the temperature for a particular day two weeks out, but you have a damn good idea of the range of possibilities. That is the difference between weather and climate.

      • It is like predicting that the Sun is coming up tomorrow. Quite likely, and I can’t imagine a scenario where it doesn’t. But if we look a decade or a few hence – the range of possibilities includes a 16 degree cooling in some places (Abrupt Climate Change: Inevitable Surprises, NAS). There is not much evidence of it being much warmer in the past few million years but I suppose it is a possibility. There are also quite a few ‘alarmist’ abrupt change scenarios at the WHOI and there is that recent Pentagon scenario.

        Sensitive dependence to initial conditions includes greenhouse gases at least in theory – and I would suggest that a small risk of extreme consequences is a quandary. But Eli is not helping by clinging to the old paradigm.

      • But GCMs use the Navier-Stokes PDEs and are chaotic

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ http://www.pnas.org/content/104/21/8709.long

        And chaotic systems are in theory deterministic, but the simple gas model is not chaotic – as San says. So the analogy does not hold water.

  36. “CO2 emissions by man are the strongest single perturbation of the climate we know for the scale of the chaos in our system.
    This is in nearly zero doubt.”
    No, that is not held without reasonable doubt.

    • Try again. What do you propose as the strongest perturbation in the past 100 years, and please provide a reference.

      • Eli Rabett

        An excellent question, one concludes by its phrasing and position in the reply chain directed at both, but more incumbent on myself (who forgets sometimes that people forget what to me seem current references already part of the discussion because they’re elsewhere on this site).

        “CO2 emissions by man are the strongest single perturbation of the climate we know for the scale of the chaos in our system. This is in nearly zero doubt.”

        I could sum up all I have to say by amending my phrase above to, “no doubt barring claims that do not adequately account for conditions.”

        Earlier (http://judithcurry.com/2011/02/04/slaying-a-greenhouse-dragon-part-iii-discussion/#comment-40174) kuhnkat offered Ferdinand Engelbeen’s discussion at http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/ and I offer further* this link (http://www.geocraft.com/WVFossils/stomata.html) about whether we can trust CO2 level measurements from ice cores and other proxies. There is also what one is sure to be an ‘uncontroversial’ ;) discussion by an expert ‘popular’ ;) at Climate Etc., found at http://www.realclimate.org/index.php/archives/2005/11/650000-years-of-greenhouse-gas-concentrations/ and then there are discussions mostly unrelated to the period of the past half a million years of CO2, such as http://www.geocraft.com/WVFossils/Carboniferous_climate.html.

        (*I do not offer this as a straw man. My discussion below is not meant to weaken kuhnkat’s citation by attacking a weaker one of my own as a substitute for his; in everything I say below, I hope to apply the logic equally to both, and where any find dissatisfaction with this reasoning as applied to Engelbeen, please reply to the thread with details. I will do my best to amend.)

        Do I think the CO2 level before the Industrial Revolution was 280 ppm, or 270 ppm, or higher, to answer kuhnkat belatedly? Interesting evidence like some of the above suggests possibly higher, but I have an alternative hypothesis to those who present it.

        However, CO2 levels are highly variable regionally due to natural (plant, CO2 accumulation in wet ground or still water, or volcanic, etc.) or human action, seasonally, and based on time of day or night.

        Looking closely at the evidence offered, the ice core record remains the trustworthy and plausible one; while the other contrary evidence doesn’t merely amount to improbably flawed or artificially constructed gemstones strung together artfully into faux necklaces (demonstrating what a shame it is their authors don’t spend their time on the real thing), it can be shown to be of lesser reliability – for the reason provided above of high CO2 variability due to local conditions – by enough to dismiss it from consideration.

        A botanist can recreate the stomata variation seen in the fossil record in a greenhouse within five generations under conditions that can be maintained under heavy forest domination or swamp boundary situations.

        This makes neither that greenhouse (not to be confused with the Greenhouse Effect) nor the swamp nor the forest global.

        It does mean an interesting case could be made that the conditions which tend to generate this stomata profile also tend to be dominated by boggy ground and deep forests.

        Wouldn’t that be an interesting hypothesis, if testable?

        The Antarctic ice sheet does not suffer this issue of swamps or forests.

        Correcting for local conditions is what G.S. Callendar did decades ago to distinguish valid from invalid CO2 measurements, and this same correction, applied to the stomata record today, yields the same result.

        The same old experimental and reasoning error as was the cause of Callendar’s grief is repeated today in these contrary findings, and is as easily dismissed.

        I will handwave slightly here to say what applies about stomata and correcting adequately for conditions applies to all the Englebeen discussion, more or less.

        The contrary cases do not assemble to form reasonable doubt about a global CO2 level of 2.3×10^-4 +/- 0.5×10^-4 parts of unity by volume (that is, 230 +/- 50 ppmv) for over 99.9995% of the more than half million years before 1775 (check my figures, check the sigma level, tell me if I’m wrong, inaccurate or imprecise).

        They assemble to form a picture of a world with pockets of high CO2 concentration in some places at some times.

        So I am confident of a claim of human caused 44% rise in CO2 level since 1750.

        This is CO2. Some call it a trace gas, but they are simply wrong if they use this in the context of climate directly or indirectly.

        CO2 is differentially bioactive on plants, with great significance in the ranges discussed, in a way we know to have Chaotic elements on the scales discussed; and we know plants profoundly impact climate on centennial and millennial scales.

        CO2’s impact on alkalinity of bodies of water is indisputable chemistry, though much about this topic remains unstudied in the context of climate.

        CO2’s impact on temperature is well-studied, and most regard it as well-established, though much about this topic remains to be examined.

        Global CO2’s impact on those same local CO2 concentrations is yet to be studied, to the best of my knowledge. Is it additive? Multiplicative? Diminished by buffering and limiting effects? A step function? Chaotic itself?

        Be nice to know if any large, still, deep lake is suddenly going to go from belching out its dissolved CO2 in a blanketing cloud for miles around once every ten thousand years to once every five hundred. Or fifty.

        The man-made CO2 emission increase is remarkably sudden, leaping from next-to-nothing to what will soon be 100 Gton a decade. It is large, increases in an odd (mostly exponential?) shape, is cumulative (mostly) with an unknown tail-off of re-absorption, is widespread, and is compounded by related other changes, such as agricultural and forestry land use, urbanization and sea-use changes.

        This is not an exhaustive list of CO2’s properties or of the ways it may be a perturbation, and it brings to mind a question: what makes a perturbation important in a Chaotic study?

        Is the amplitude of perturbation the determinant, or its duration, frequency, quality, type, shape, what?

        I’d love if Chief Hydrologist and Tomas Milanovic and a team of qualified associates with five years and millions of dollars could get together and address the question more completely.

        For that question in our context, I’d say the answer doesn’t matter, as CO2 has them all in one way or another.

        CO2 has seasonal and diurnal variability (natural) on top of its large amplitude shift, making it like a vibrating wildly swinging persistent growing stick in the hornet’s nest, the biggest and suddenest stick we know of, and with an ugly shape unlike anything hornets have ever known before.

        While the Sun has potent perturbations of its usual range of effect on the climate of the Earth to offer, the chief ones are solar megaflares (duration maximum on the scale of what, weeks?), the periodic cycle of sunspots and associated activity minima and maxima, and longer duration changes that we’ll group as ‘other’.

        The Sun isn’t new, and the climate’s accommodated the Sun throughout the past half million years.

        Probably the Sun has produced the perturbations leading to the ice ages.

        It’s not insignificant.

        And yet, the radiation shifts of the Sun amount to a mere fraction of the duration, the novelty, and (once ‘enough’ CO2 change has accumulated, a speculative figure, but by most accounts one passed decades ago; if not, then sure to be passed soon) the amplitude of effect on temperature. Solar variability is surely less impacting on plants than this new CO2 perturbation of the past quarter millennium due to human activity.

        So while the Sun is illustrative, even compelling, it merely highlights how much more significant the CO2 perturbation is by comparison.

        Volcanic activity? Once it was dominant, assuredly, in the geological record. Now, it’s easy to show it is a small fraction of what man does on our current time scale.

        Natural activities of ocean cycles?

        It may be that there is some natural multidecadal oscillation behind the sudden and suspect loss of Arctic ice sheet mass for example, and it may be the changes this natural source caused will feedback to result in something like the total loss of the Arctic ice cap ‘soon’ in geological scale, with profound climatic impacts.

        It is, however, far more persuasive to argue that the natural oscillation was amplified by the ramped-up CO2 level and a result that would not have happened naturally resulted, and that ongoing trends that were unlikely to continue otherwise continue because of human CO2 emission levels. ‘It just happened,’ ought not be considered convincing, and ‘nobody did it’ doesn’t sell.

        So I say it is beyond reasonable doubt that human CO2 emission is the most important perturbation in the climate in at least half a million years.

        Thank you for your attention.

      • Had an elision; needs expansion.

        Boggy and deep-forest conditions are hypothesised both to foster better fossil preservation of stomata structures and to promote locally higher-than-global CO2 concentrations, thus skewing observations toward belief in higher CO2 levels than were actual.

        If one failed to make such corrections for conditions, one would have to conclude that fossil dinosaurs ate a lot of dirt, and that the people of ancient Pompeii packed their clothes with ash.

      • I would suggest solar for earlier in the century (AR4), and clouds (Clement et al 2009, Burgman et al 2008, ERBS, ISCCP-FD and CERES) – but see the reply to Bart.

  37. In my view, spatiotemporal chaos bedevils an attempt to arrive at a precise or predictive evaluation of a number of climate phenomena, including internal dynamics such as ENSO, PDO, AMO, and other oscillations. Regional fluctuations in temperature or precipitation are also encumbered by the chaotic element inherent in these variables.

    Global temperature is probably different, however. Here, it strikes me that Andrew Dodds, Arthur Smith, Bart R., Peter, and others have provided an accurate assessment of the role of spatiotemporal chaos in the climate system. It is hard to argue with the evidence – the climate system as a whole is not chaotic, but rather harbors chaotic elements that average out over multidecadal timescales, revealing an underlying temperature trend. For the past century, the trend has been characterized by three phases. The earliest one involved warming driven partly by rising CO2 but more by an increase in solar irradiance. A mid-century flat interval was mediated by a rise in anthropogenic aerosols that reduced the transmittance of solar irradiance to the surface. The last phase – from the late 1970s to the present – has witnessed a resumption of warming due to the reduction in the rise of aerosol pollution but a continuation of the increase in atmospheric CO2. These variables – solar intensity, CO2, and aerosol-mediated reduced transmittance – have all been measured, and so the attribution is not theoretical but empirical, although exact quantitation remains surrounded by some uncertainty.

    This does not exclude a partial role for chaos, particularly at shorter timescales. In the case of ENSO, the averaging lies within decadal limits. Other oscillations such as the PDO operate over multiple decades, but also average out over the century. It is likely that all of these have left their mark on the climate signal, but the room within which they may have operated is limited by the reasonably close approximation between the known non-chaotic signals and the observational record.

    The “initial value” issue leaves us with similar evidence that chaos plays a subordinate role in long term global temperature change. Indeed, model ensembles demonstrate that the trends they yield for anthropogenic influences are quite independent of initial conditions, and the level of independence grows rather than diminishes as the timescale is lengthened from one or two decades to a longer timescale within a centennial interval. Despite varying initial conditions, the model results (a) converge to very similar outputs, which (b) match observations fairly well. For those who would argue that “fairly well” is not the same as “perfectly”, one can only agree – part of this undoubtedly reflects some underlying chaos, but as model inputs have improved based on new observational data, it is also clear that earlier deviations from observations (e.g., in the Hansen et al 1984 model) would be largely eliminated by the improved inputs.
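
    The independence-from-initial-conditions point can be illustrated with a toy model (purely illustrative; this is not any actual GCM, and the chaotic logistic map here merely stands in for internal variability): individual chaotic ensemble members disagree wildly step by step, yet the ensemble mean recovers an imposed forcing trend almost exactly.

```python
import random

def logistic(x):
    # Fully chaotic logistic map (r = 4): a toy stand-in for internal variability.
    return 4.0 * x * (1.0 - x)

def run_ensemble(n_members=2000, n_steps=200, trend=0.002, seed=1):
    rng = random.Random(seed)
    # Every ensemble member starts from a different random initial condition.
    states = [rng.random() for _ in range(n_members)]
    for _ in range(50):  # spin-up: members forget where they started
        states = [logistic(x) for x in states]
    means = []
    for step in range(n_steps):
        states = [logistic(x) for x in states]
        # "Signal" = slow imposed forcing + chaotic fluctuation about 0.5.
        signals = [trend * step + (x - 0.5) for x in states]
        means.append(sum(signals) / n_members)
    return means

means = run_ensemble()
# Any single member wanders chaotically, yet the ensemble mean tracks the
# imposed forcing (trend * step), regardless of the initial conditions.
print(round(means[-1], 3))  # near trend * 199 = 0.398
```

    This is the sense in which the chaotic "weather" of each member is unpredictable while the forced trend of the ensemble is not.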

    None of this excludes the possibility of unanticipated modulation of what now appears to be a warming trend that is likely to continue over the coming decades in the absence of vigorous carbon mitigation. However, the historical record suggests that modulation sufficient to reverse such a trend is highly unlikely. Abrupt and severe temperature shifts have occurred on occasion in the past, typically separated by hundreds of years or more, but shifts of this magnitude that are global in extent have almost always occurred during glacial eras, when the extent of snow and ice allowed for great changes in feedback in response to only modest signals. Major global shifts during interglacials appear to have been rare, if they have occurred at all. The major difference between the past and present is that temperatures are currently rising at a rate per century that is unusually high for global events. Whether this portends a far greater than average probability that a tipping point will be reached is conjectural, but should be considered seriously.

  38. To sum up. The fact that climate scientists do not treat climate as a chaotic system demonstrates the futility of their research and the invalidity of their conclusions. Why? Because you cannot apply statistical methods to predict the future of any chaotic system.

    • It is important to point out that most climate models aren’t statistical. Tsonis’s work was statistical in nature. Most climate models, though, are physical, and physical models can have chaotic components; I believe that is actually true of most climate models.

      • OK then, correct me if the following is wrong. Any research that involves trend analysis of physical models (using PCs) is “statistical analysis” of a system, right? And if the analysis being undertaken is of a chaotic system, then the resulting “trend” has no meaning or value, because it has NO predictive power whatsoever.

      • If the system is completely and truly chaotic.

        If sunlight was blocked from reaching Earth, is there a high probability that the Earth would cool?

        If you answered yes to the above question, then you don’t believe that the Earth’s climate is completely and truly chaotic, and chaos in the Earth’s climate doesn’t necessarily invalidate trends seen in global climate models.

      • Richard Wakefield

        Peter, you are misunderstanding what chaos is. Chaos in the non-scientific sense means something akin to a lottery. Completely random.

        It’s not. You have not understood the meaning of Spatio-temporal chaos. It’s a number of oscillations interacting with each other to produce events in the climate. Much like waves on the ocean do when they interact.

      • Maybe you want to plainly state what I’ve said that is wrong. I never claimed that random and chaos were the same thing, and I well understand the difference.

        http://judithcurry.com/2011/02/09/decadal-variability-of-clouds/#comment-40740
        “There can be systems that are made of multiple interacting subsystems where there is a stochastic component, that are not truly chaotic systems.”

        http://judithcurry.com/2011/02/09/decadal-variability-of-clouds/#comment-40941
        “Systems that are not truly chaotic can have some properties of chaotic systems. All stochastic systems are not truly chaotic.”

      • Stochasticity (randomness) can be modeled statistically and probabilities assigned via those statistical models. True chaos can’t.
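
        The first half of that claim is easy to demonstrate: a purely random process yields stable, predictable statistics even though no single realization is predictable. A toy sketch (illustrative only; the random walk here is not a climate model of any kind):

```python
import math
import random

rng = random.Random(0)
n_walks, n_steps = 5000, 400

# Many independent +/-1 random walks: each one is individually unpredictable.
finals = []
for _ in range(n_walks):
    pos = 0
    for _ in range(n_steps):
        pos += rng.choice((-1, 1))
    finals.append(pos)

mean = sum(finals) / n_walks
std = math.sqrt(sum((x - mean) ** 2 for x in finals) / n_walks)
# Statistics nevertheless predict the ensemble: mean ~ 0 and standard
# deviation ~ sqrt(n_steps) = 20 for the final positions.
print(round(mean, 2), round(std, 2))
```

        No amount of simulation tells you where one walk ends, but the distribution of endings is known in advance; that is the sense in which stochasticity yields to statistics.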

      • Monte Carlo, random walk – I am not really sure that stochasticity – a combination of determinism and random elements – is a useful description of climate processes.

        It is probably truer to say that climate is fully deterministic – i.e. that there are no random elements. Chaotic systems are in theory deterministic: one thing leads to another – but there is true dynamical complexity. This appears in the climate system as abrupt and nonlinear change. It looks very much like a chaotic oscillator.

        Chaos theory is a metatheory – a theory of theories – it says that complex dynamical systems have certain properties.

        Chaos does not preclude determinism.

      • Diffusion is a stochastic process. Biological systems are dependent upon stochastic processes. There is no way that climate doesn’t have a stochastic component.

      • It is the issue of randomness that I question. Statistical approaches are practical – but is anything really random?
        Could you for instance predict the toss of a coin if you had enough information? Just a random observation.

  39. Richard Wakefield

    Gee, the climate system is more complex than we thought, more complex than models can simulate, rendering all future speculations garbage. Isn’t that what skeptics have been saying all along? Yes.

    Excellent post, a keeper.

    • I am no chaos theory practitioner, but as I understand it, models of chaotic systems are intended only to show the spatial limits and the attractors within, etc. They do not attempt to predict or reenact the exact internal behavior of any preexisting system, because a chaotic system is, by its very nature, unable to be duplicated. Removing the sun from Earth’s proximity would alter the internal operation of the chaotic climate (Earth would get colder, to be sure), but this in no way invalidates the spatial constraints or the attractors within the chaotic climate system. It alters them but does not invalidate them. So the argument that removing the sun and observing a downward temperature trend somehow makes the current statistical research approaches a worthy goal is to completely miss the point of chaotic systems. Removal of the sun in no way makes our chaotic climate system any more predictable than with the sun present.

      To extend the point: imagine a supercomputer that had the computational ability to simulate our climate system. Would such a computer be able to predict the future of our real climate? Given perfect starting conditions? Given perfect variables, perfect input down to the level of every atom that exists in, on, and around Earth? Given every charged particle hurtling toward, around or into our atmosphere? Given every motion of splitting atoms in the Sun, etc.? NO! Such a model would quickly depart from the unfolding climate found on Earth, because chaotic systems defy prediction. We can model behavior (space and attractors) but we cannot predict the future of any chaotic system. Only an infinitely powerful model would be able to predict a chaotic system.
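
      The divergence described here is easy to see in miniature. A toy sketch (nothing from this thread; the parameters are the standard textbook Lorenz-63 choices) in which two trajectories starting one part in a billion apart end up in entirely different regions of the attractor:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One fourth-order Runge-Kutta step of the Lorenz-63 equations.
    def deriv(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(nudge(state, k1, dt / 2))
    k3 = deriv(nudge(state, k2, dt / 2))
    k4 = deriv(nudge(state, k3, dt))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # perturbed by one part in a billion
max_dist = 0.0
for _ in range(3000):  # 30 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    d = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_dist = max(max_dist, d)
# The microscopic initial error has grown to the size of the attractor itself.
print(round(max_dist, 1))
```

      This is exactly the sensitive dependence that limits weather forecasts to days: any initial-condition error, however small, is eventually amplified to the full size of the attractor.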

      • However, systems are more or less chaotic and that is the rub here. The 3-body problem is chaotic but we know pretty well where the moon will be 100 years from now (absent a pesky asteroid problem, etc.). What is needed is the science of the limits to predictability for specific natural systems. For weather that limit seems to be about a week out, but people still sell seasonal forecasts. The AGW camp is selling 100 year projections. Is this or is it not scientifically reasonable? Where is the research on this central issue, when the USA is spending many millions a year on carbon cycle research instead?

      • Different models are used for weather prediction versus climate forecasts. NWP models are an initial value problem – they’re good for several days out and then diverge due to the chaotic structures discussed here. These models are certainly not used for 100 year predictions. Note that forcings that impact climate, such as changes in solar input or in GHGs, are not even included in weather models. The drivers are different.
        Climate models are a boundary value problem – changes in forcings alter the general state of the atmosphere. Weather in these models appears as fluctuations, not as predictions of the future.

        Weather prediction is like predicting the wave structure in a tub. That prediction isn’t dependent on whether a hose is slowly increasing the amount of water in the tub. The waves are best modeled as initial value problems and can be forecast for a relatively short period. To forecast the change in the amount of water in the tub, you aren’t really concerned with the exact wave structure. The driver in this case is the large-scale forcing.

        You simply cannot equate modeling weather with modeling climate.

      • Richard Wakefield

        Except that “hose” is just a small drip, and the drain is compensating, because the climate system must be in equilibrium at all times.

        Continuing that analogy, make the hose drip a different colour of water, and watch what happens.

      • The point was the difference in modeling weather versus climate. Different drivers.

      • Climate is the average of weather, the mean expectation of it. What makes you think that you can model averages better than the weather itself (which you cannot model)? Why do you think that the crippled and heavily parameterized primitive equations of fluid dynamics, with fuzzy boundary conditions, in your GCMs have anything to do with the reality of long-term interactions between climate subsystems and their “coherent structures”?

        Regarding your example of water in a tub, you in fact assume that the amplitude of the waves is much smaller than the overall water level, so you have a small parameter here, and some sort of perturbation method might be used to derive a reasonable model for the averaged level in the tub. In climate, you have vast variations on daily, seasonal (and millennial) scales, such that your averaged climate signal is nearly lost in huge fluctuations (that’s why all your efforts to get a “warming signal” out of this chaos of weather are futile with your current understanding of how the measurements should be done). You have no small parameter to expand around, and no separation of scales, because turbulent systems have a continuous spectrum.

        Your post is a good example showing that without good theoretical understanding of the subject you are doomed to fail (which you did).

      • Al,
        “Your post is a good example showing that without good theoretical understanding of the subject you are doomed to fail (which you did).”

        Why do you assume I have a poor theoretical understanding?
        And why do you immediately engage in a conversation with me in such a rude manner?

      • Because you wrote utter nonsense: “initial value problem”, “boundary value problem”, different “drivers”, etc. If you want to demonstrate good theoretical understanding, please answer my question: what makes you think that you can model averages better than the weather itself (from which the averages are built)?

      • “What makes you think that you can model averages better than the weather objects itself (which you cannot model)? ”

        Climate models do not attempt to predict weather. The forcings that influence climate (solar input, GHGs, etc) have little to do with day to day weather. Weather stems from the dynamical systems that redistribute energy within the climate state. Its drivers are the rotation of the Earth, the predominant global-scale flows (jet streams, etc) and even local influences such as topography. Weather prediction depends upon a solid initial condition, and given accurate observations, we are quite good at predicting weather out for several days. Beyond several days, the chaotic elements kick in and models and reality diverge.
        Modeling climate is an entirely different process. Here, the large-scale forcings such as solar input, albedo, and GHG warming drive the climate state. You do not predict climate by averaging together weather predictions.

      • Al,
        It is well established that (most or all) climate models describe boundary value problems. That is a statement about the models and has been verified in many cases.

        The more difficult question is: Is the real climate determined equally well by the boundary values? In other words the question is, whether describing a boundary value problem is a good property or serious weakness of the models.

      • I think we should be unfailingly patient, polite and civilised. I shall take this opportunity to remind people that here is a seeking of the sacred hydrological truth through dialectic. For this the required attitude is humour, patience, good will, good faith, honesty and humility.

        I do note that Jen has reduced her scope from rivers to bathtubs. Thinking in analogies – thought experiments – has uses but dangers as well. The bathtub has a couple of strange-attractor phase spaces: it is empty, it is filling or emptying, it is overflowing, weakening the structure and crashing into the apartment below.

        The 2002 NAS report – Abrupt climate change: inevitable surprises – has a fulcrum analogy that I commend to Jen.

      • I don’t have the report here – I’ll take a look at work next week. I’m sure someone there has a copy of it.

        I do recall that proposed physical causes for abrupt climate change include orbital variations and combinations of feedbacks… and none of this negates the different drivers/modeling approaches for weather systems versus climate.

      • It is an NAS online book. The problem is that the nature of neither the models nor the climate has been systematically addressed. Copied from earlier – which I shouldn’t need to do.

        ‘A PDE is a partial differential equation, such as the Navier-Stokes equations of fluid motion. They are differential equations in the x, y, and z dimensions used in GCMs. They were used in Edward Lorenz’s early convection model – to re-discover chaos theory. So the climate models are themselves temporal chaotic dynamical systems.

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ http://www.pnas.org/content/104/21/8709.long

        Again to repeat – ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t), where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space. Prognostic equations for ρ, the Liouville and Fokker-Planck equations, are described by Ehrendorfer (this volume). In practice these equations are solved by ensemble techniques, as described in Buizza (this volume).’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)
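
        The ensemble technique Palmer and Hagedorn refer to can be sketched in miniature: propagate many slightly perturbed copies of the state forward and read the forecast off as a histogram, i.e. an estimate of ρ(X,t), rather than as a single trajectory. A toy illustration (nothing here is from the book; a chaotic tent map stands in for the forecast model):

```python
import random

def tent(x):
    # Chaotic tent map: a toy stand-in for an uncertain forecast model.
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

rng = random.Random(42)
# Ensemble: a "best estimate" state (0.3) plus small observation-error noise.
ensemble = [0.3 + rng.gauss(0.0, 0.01) for _ in range(5000)]
for _ in range(20):  # run every member forward in time
    ensemble = [tent(x) for x in ensemble]

# The forecast is read off as a probability density, estimated by a histogram.
bins = [0] * 10
for x in ensemble:
    bins[min(int(x * 10), 9)] += 1
density = [b / len(ensemble) for b in bins]
print([round(d, 2) for d in density])  # roughly flat: spread over the attractor
```

        After enough steps the density has spread over the whole attractor, so the point forecast is worthless, but the probability density ρ(X,t) is still a well-defined prediction at every step along the way.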

        Clouds also are uncertain – as I show in my decadal clouds post.

        Google abrupt climate change – I get 300,000 hits.

        OK – here are the rules. Here it is a seeking of the sacred hydrological truth through dialectic. The required attitude is: humour, patience, civility, reflection, application, honesty, good will and good faith.

        You have tossed off an ‘I do recall…’. This is far too casual a dialectical mode and is far from acceptable.

      • In this realclimate piece, Swanson, Tsonis’s co-author, states that there won’t be significant warming until after 2020.

        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        Is this prediction garbage?

        Is the Earth’s climate so chaotic to render a prediction 10 years out on something like global temperature nonsense?

      • It is not a prediction – independent of the underlying physical causes of Pacific climate variability, an understanding of the PDO and ENSO as behaving like a complex and dynamic system in chaos theory emerged from a 2007 study by Tsonis et al. They constructed a numerical network model from 4 observed ocean and climate indices – ENSO, the PDO, the North Atlantic Oscillation (NAO) and the Pacific–North American pattern (PNA) – thus capturing most of the major modes of climate variability in the period 1900–2000. This network synchronized around 1909, the mid-1940s and 1976/77 – after which the climate state shifted. A later study (Swanson and Tsonis 2009) found a similar shift in 1998/2001. They found that where a ‘synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability.’

        We are on a different trajectory since 1998 and these modes last 20 to 40 years. But who is to say it won’t change tomorrow.

      • “It is an NAS online book.”
        I know that. There is a fee. I was not willing to pay the fee.

        “Copied from earlier – which I shouldn’t need to do. ”
        No, you didn’t need to. I have worked with global scale models – not climate models but CTMs.

        “You have tossed off a I do recall… This is far too casual a dialectical mode and is far from acceptable.”
        You are quite hostile here.
        And you don’t have a full understanding of the scope of modeling tools that are used. Don’t know how to say it any more clearly.
        Weather models are not used for climate modeling. Apples and Oranges.

      • Chief H – you know how to stop a dialogue for sure.
        You are welcome to respond, but don’t expect any more responses from me.

        Judy – this is why you don’t get more participation from working scientists in the field. I try occasionally to participate here – certainly not to educate, because although my work is tangential to climate science, I am not directly involved with it. I read these blogs to try to learn and to challenge my ideas. But it takes much effort to filter out the noise. I simply do not have time nor desire to engage in hostile conversations (no matter how subtle the hostility). Too many folks here are not interested in back and forth and are only interested in showing off.

        I don’t know the answer. There are folks out there that are contributing to improving the study of climate (I really like Stephen Mosher, and Zeke, for instance), and it would be great to find a way for regular, in-the-trenches scientists to hear some of their ideas and suggestions. But to engage the masses of working atmospheric and climate scientists? This type of forum (blog) isn’t it, IMO. Not many of my co-workers are even aware of the blogs.

        In any case – good luck to you.

      • Jen: “Don’t know how to say it any more clearly. Weather models are not used for climate modeling. Apples and Oranges.”

        You are not saying it clearly at all. Instead, you are throwing around annoying, meaningless monikers. Nobody says that climate models are the same as weather models. However, you are not telling us the metrics of their difference. From what I see here, they are the same mathematically – both are built on the simplified “primitive equations” (of the shallow-water sort, aka the “dynamics core”) with BOUNDARY CONDITIONS – no-slip at the land/water surface, and no flux at some top (obviously an unphysical truncation, because vanishing air density will eventually violate the assumption of a continuous medium in LTE).

        The two classes of models are more like Oranges and Tangerines. The differences are (1) that you cannot afford the spatio-temporal resolution of weather models when simulating a thousand years forward, and (2) that in a weather model you don’t care if your prediction will blow up in 100 years yielding a Venus condition or an Ice Ball; you just stop the computer after a week of simulated time and start over. In climate models, however, you have to botch smaller-scale weather dynamics into “parameterized physics”, and stabilize the behavior of the initially non-viscous Eulerian-based “core” with artificial dampings. Both procedures have UNKNOWN impact on the topology of solutions of the original Navier-Stokes problem, especially on long-term evolution (which is precisely the entire goal of climate modeling), on the development of “coherent structures” and their interaction, and on the formation of seemingly stable large-scale atmospheric circulation.

        The two models also differ somewhat in computer experimentation tactics. In weather prediction you focus on the short-term system trajectory developing from a given initial condition in phase space, and don’t bother to run it down to the “weather attractor” (because that will not correspond to any details of the future and is useless for a weather forecast). In climate “prediction” you have to run your model down to the attractor, or whatever is left of it after all your truncations, parameterizations, stability regularizations and flux adjustments, and continue to run until you can collect stable statistics (if you ever check whether it is converging to anything). That’s why there is widespread confusion about “initial value problem” and “boundary value problem”, and AGW proponents hide behind these monikers because they don’t have a clue what they exactly mean.

        In short, it is formally true that climate modeling “does not use weather models”. It uses highly crippled and botched weather models tailored for perceived stability and match for globally averaged temperature record. It looks like it is not a good idea for AGW climatards climate researchers to distance themselves from weather models.

      • Dear “Al”,
        “It looks like it is not a good idea for AGW climatards climate researchers to distance themselves from weather models.”

        If your aim is to convince people of your failed logic and misunderstandings through the use of insults, you fail at more than simply misunderstanding climate science. I’ll give you the benefit of the doubt here and accept that you really do have a desire to partake in communication. Therefore –

      • Climate models are like weather models for the atmosphere and land, except they have to additionally predict the ocean currents, sea-ice changes, include seasonal vegetation effects, possibly even predict vegetation changes, include aerosols and possibly atmospheric chemistry, so they are not like weather models after all, except for the atmospheric dynamics, land surface, and cloud/precipitation component.

      • Judy, you allow the use of “climatards” here?
        So much for civility.

      • http://www.nap.edu/openbook.php?record_id=10136&page=153

        One point only I wish to make – I quoted both Tim Palmer and James McWilliams – so it is not my understanding that is in question.

      • “Removing the sun from Earth’s proximity would alter the internal operation of the chaotic climate (Earth would get colder to be sure) but this in no way invalidates the spatial constraints or the attractors within the chaotic climate system.”

        Does it matter if we acted to greatly reduce the amount of sun light that reaches the Earth?

        Are we internal or external of the “chaotic climate system”?

      • Internal – and very insignificantly internal.

      • We don’t have the ability to seriously block the sun light reaching the Earth?

    • Richard Wakefield

      Yes.

      Absolutely yes.

      On some scale and in some undecidable manner depending on perturbation, yes.

      Meaning no, resoundingly and absolutely no; if skeptics other than myself and perhaps a few dozen others have been saying and listening to and thinking about exactly and primarily this all along then the streets are paved in gold and pennies fall from heaven.

      Everything except CO2 emission by man is the tiniest perturbation, or is part of the system already.

      CO2 emission by man is the first perturbation in at least half a million years of its scale.

      Perturbation is the cat’s whisker that upsets the first domino.

      Perturbation is the little end of the stick the little boy pokes into the hornet’s nest.

      CO2 emission by man is on track at the current rate to equal or exceed the perturbation associated with all the known ice ages of the past half million years.

      Is there a way of predicting an association between a perturbation and the hyperradicalization of any particular attractor?

      No.

      We can’t say how the CO2 perturbation will affect Chaos in the climate, if at all.

      Fred Moolten has it half right, above (http://judithcurry.com/2011/02/10/spatio-temporal-chaos/#comment-41377) about how Chaos distorts our measurements, however nothing I or he says diminishes the truths Tomas Milanovic asserts.

      If we perturb the system enough — “enough” being an unknowable (and possibly larger than could happen) measure — we will make some of the parts of it that are not now chaotic on our scale change their chaos characteristics.

      Chaos impacts problems both of description and prescription in climate. Chaos Theory, especially Temporal but also Spatial-Temporal, contains important tools for framing our problems.

      Chaos Theory currently frames around CO2 emission in big, bold, block, flashing lines. CO2 emission by man is The Big Perturbation on Campus.

      • I hope I didn’t mischaracterize your position. CO2 is indeed the big perturbation. However, the evidence (dating back more than 400 million years) is rather clear. It is a big enough perturbation on timescales of multiple decades or longer to dominate the temperature pattern on a global scale, despite the existence of chaotic elements responsible for fluctuations in the temperature trend globally that average out, and despite significant unpredictability regionally. Given the level of chaos on regional and short term timescales, it’s hard to be confident that prediction on these levels will improve significantly in the near future.

      • CO2 is the BIG perturbation? Please rank in order the following according to your opinions from big to small perturbation. I rank them here in my own order.
        The Sun, Jupiter, the gravitational moment of the remaining planets, Earth’s angular and orbital variations of all kinds, galactic rays, the motion of the solar system through the galaxy and dust clouds, the Moon, atmospheric water vapor, ocean currents, the configuration of the tectonic plates and continental drift, volcanic activity, the natural biosphere, human urban development, human alteration of the greenhouse water cycle (dams, rivers, etc.), … human-produced CO2.

        If you were to produce a chaotic model using the above, I would venture a prediction: the former items are the massive attractors about which we could make some decent predictions about the future, but the latter, human-produced CO2 inserted into our atmosphere, would leave us with hopelessly inadequate and wrong predictions, because CO2 contributed by man is not an attractor of any significance in the chaotic Earth climate system, nor is it a perturbation that would yield any predictive ability. In fact, human-contributed CO2 is a small component of natural biosphere contributions, and also relatively small compared to the CO2 that the oceans are releasing as we recover from the last ice age. And given our knowledge that CO2 and temperature have both been at much higher and lower levels (and diametrically opposed, no less) in the chaotic climate system’s history, how does one seriously make the claim that CO2 produced by man is the big perturbation?

      • Is rate important at all?

        How many of those variables are changing at any sort of time frame that they matter to the climate from human perspective?

        There is also no reason to believe that many of those variables are independent.

        If x affects y and y affects x, can you really rank x over y with respect to importance?

      • When it comes to predictions of the behavior of a chaotic system, most certainly you can rank according to influence or attraction.

      • So you have some idea what the attractions are for the variables you’ve ranked?

      • I would like to go back to academia and build models of chaotic systems but I have a wife and family and a mortgage along with a high tax load and overbearing regulations. Those are the primary attractors in my chaotic life system.

      • Richard Wakefield

        How many of those variables are changing at any sort of time frame that they matter to the climate from human perspective?

        You mean for the betterment of humans and the planet? Likely not, as the AGW default position is that a changing climate is bad.

      • No, I mean in the time frame that it is going to matter to humans. As he properly pointed out, changes in the Earth’s orbital patterns that are likely to cause significant climatic changes aren’t likely for tens of thousands of years.

        If there are still humans on Earth at that time, I’ll let them worry about them.

      • Richard Wakefield

        And what changes have been happening over which time frame, and show how such changes are bad for us.

        And, if you cannot provide anything, which you won’t because there isn’t, ask yourself why you support a premise for which there is no evidence.

      • Over the Earth’s history, many factors have influenced global temperature, but are not a source of variation in recent history. Of the ones you mention, CO2 is currently the biggest on multidecadal to centennial timescales. Over the course of tens of thousands of years, orbital variations will dominate, but these are not expected to become significant for at least another 25,000 years. Tectonic variations behave similarly. Water vapor is a feedback on CO2, and not a climate driver. The oceans are taking up CO2 rather than releasing it, as documented by measurements of dissolved inorganic carbon and ocean pH. Cosmic rays may or may not influence climate discernibly, but the effect, if any, has been shown to be small. Solar variation has been quantitatively unimportant over the past 50 years, although it played a role in early twentieth century warming.

        The more important point is that the climate system is not chaotic overall, but rather exhibits chaotic elements. These mediate up and down variations in the temperature response to CO2, which has been the main climate driver over the past 40-50 years, but the overall trend is still apparent. I believe the point many of us have been making is that if one starts with the premise that climate is chaotic, without acknowledging the dominance of non-chaotic elements for multidecadal global patterns, it will be impossible to draw conclusions that conform to the observational evidence.

      • Richard Wakefield

        These mediate up and down variations in the temperature response to CO2, which has been the main climate driver over the past 40-50 years, but the overall trend is still apparent.

        Yes, the overall trend is clear. No increase in summer TMax. No increase in the number of heat waves (in Canada both have been dropping since 1900). Winters getting shorter and milder.

      • And I think the point I am trying to make is that if the climate system is not analyzed for what it is – a chaotic system – then the science will resort to PC analysis (which it has) which will lead to false conclusions.

      • Fred, you are simply reciting AGW. I think your AGW model is not only speculative but downright wrong. There is no evidence that CO2 has had any discernible effect on global temperature over the last 40-50 years. The satellites show only a single step-function increase in temperature in the last 32 years, coincident with the 1998-2001 ENSO cycle, and that is completely inconsistent with GHG warming. Prior to that, the surface statistical models, poor though they be, show no warming since 1940 or so. So according to the best available data there has been no CO2-induced warming for the last 70 years. AGW just isn’t there.

      • David – if you revisit the link I provided earlier (along with temperature indices from other sources), I believe you’ll see a continuing temperature rise over the past 44 years or so, continuing through 2010, with no “single steps” but only the expected bumps and dips from ENSO. CO2 can account for most of the temperature rise, while no other climate variable, anthropogenic or natural, can account for as much (although atmospheric black carbon has played an important role regionally). Given the known radiative forcing properties of CO2 (based on the physics plus millions of years worth of evidence), plus reasonable feedback estimates, it would be hard to argue that CO2 could not have contributed substantially.

        You may disagree, but I would urge any interested reader to visit the data sources to draw his or her own conclusions.

      • Please provide the link again, Fred. With 41,000 posts I can’t be expected to track it down from your “earlier” remark. However, I hope it is not to the area-averaged surface statistical models, in which case don’t bother, as I consider these models to have become obsolete as temperature estimates when the first satellites were launched around 1978. These obsolete computer models are not data sources. According to the best data we have, the only warming in the last 70 years occurred following, and in conjunction with, the 1998-2001 ENSO cycle.

      • Temperature records of SST, land temperature, and regional and global temperatures all show warming that is part of a clear upward trend punctuated but not dictated by <a href="http://www.giss.nasa.gov/research/briefs/delgenio_05/Fig1.pdf">ENSO fluctuations</a>. The physics of CO2-mediated IR absorption require that some of this be CO2-derived, and no other factor suffices to provide more than a partial further addition.

        Temperature trends are not computed by averaging temperatures over large surface areas, but by calculating temperature anomalies within limited areas and averaging the anomalies. Small errors are possible, but I’m unaware of evidence that large and persistent errors are likely or even possible.

        I’ll probably refrain from further commenting on this point, David, in the absence of new evidence, because I believe readers can visit the data to form their own judgments.

      • The so-called anomalies are first calculated for individual thermometers, most of questionable accuracy. The overall sample is a convenience sample, in no way statistically representative of the earth. Individual thermometer anomaly averages are then averaged for all the thermometers in individual grid cells covering the earth. The statistical weight of a thermometer is thus inversely proportional to how many share its cell, another violation of statistical sampling theory. Many cells have no fixed thermometers, so various kludges are used to fabricate grid-cell averages. Then these averages of averages are averaged again to get the global average. The overall process is so statistically bizarre that no one knows how to carry the error bars of individual averages, or grid averages, or interpolations, etc., forward to even estimate the likely error.
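        The chain just described – per-thermometer anomalies, then grid-cell means, then a global mean – can be sketched as a toy calculation (all numbers invented; real products add homogenization, area weighting and interpolation on top of this skeleton):

```python
# Toy version of the averaging chain: station anomalies -> grid-cell means ->
# a "global" mean. All station values are invented for illustration.
stations = {
    # station_id: (grid_cell, yearly mean temperatures in deg C)
    "A": ("cell1", [10.0, 10.2, 10.4]),
    "B": ("cell1", [12.0, 12.1, 12.5]),
    "C": ("cell2", [5.0, 5.3, 5.2]),
}

def anomalies(temps):
    """Each year's departure from the station's own baseline mean."""
    base = sum(temps) / len(temps)
    return [t - base for t in temps]

# 1) per-station anomalies
station_anoms = {sid: anomalies(t) for sid, (_, t) in stations.items()}

# 2) average the anomalies of all stations within each grid cell
cells = {}
for sid, (cell, _) in stations.items():
    cells.setdefault(cell, []).append(station_anoms[sid])
cell_anoms = {c: [sum(v) / len(v) for v in zip(*lists)]
              for c, lists in cells.items()}

# 3) average the cell averages (unweighted here) into a global anomaly series
n_years = 3
global_anom = [sum(cell_anoms[c][y] for c in cell_anoms) / len(cell_anoms)
               for y in range(n_years)]
```

        Whether error bars survive each stage of averaging is exactly the question raised above; the sketch only shows the mechanics being criticized.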

        So I prefer the satellites, which actually measure something and which systematically contradict these averaged-averaged-averaged-averaged statistical contraptions (on which AGW desperately depends). My evidence is direct observation. Science, as it were.

      • You might find “A methodological note on the making of causal statements in the debate on anthropogenic global warming” (2010) Jarl K. Kampen, Theor Appl Climatol diverting.

      • Richard Wakefield

        Fred, explain why there is no increase in summer TMax. The AGW community continues to claim there should be more heat waves. But that is just not happening. It’s an hypothesis of AGW that has been shown wrong.

      • Who has a global historical index of heat waves?

      • Richard Wakefield

        Though there is no formal definition of a “heat wave”, Environment Canada uses more than 2 days over 32C. For my analysis I did several measures.

        1) yearly highest TMax, which has been dropping since the early 1900s

        2) The number of days over 30C, which is also significantly dropping

        3) The number of days over the top 10% of highest TMax as an anomaly using 19 stations with records back before 1920. Also dropping since 1920.

        4) Record setting days, the vast majority of which occur before 1950, and those after 1950 were cooler temps for the record day than those prior to 1950.

      • I’m sorry, but if you plot the data and leave out 1998-2001 there is still a positive trend. It can be done in Excel.
        http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

        For a while Spencer et al. were claiming that their data didn’t show warming, but then errors were found in their analysis method. That was about a decade ago.

      • The trend is due to a step function. There is no actual upward trend in the sense of a gradual increase. There are just two flat lines, the later one higher than the earlier one. The pattern is flat-step-flat (but the step occurs during the ENSO cycle). It is a fascinating patter, one we need to explain. But AGW does not explain it.

      • Sorry, it is a fascinating pattern, not patter (quick mouse syndrome). Also, if you want to be more precise, you do not want to include partial cycles of the oscillator, which is clearly present. The proper conclusion is that the relatively steady upward trend from 1978 to 1998, which AGW seeks to explain, is probably a statistical artifact of the surface models. The satellite readings do not show this slow upward motion; rather, they show an abrupt step function, possibly chaotic in nature. QED
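        Whether a series reads as a gradual trend or as flat-step-flat can be checked by fitting both models and comparing residuals. A minimal sketch on synthetic data (the breakpoint is assumed known here, which a real change-point analysis would have to estimate, and a fair comparison would also penalize the extra parameter, e.g. via AIC):

```python
import math

# Synthetic series: flat at 0.0, then flat at 0.3 after t = 50, plus a small
# deterministic wiggle standing in for noise. Purely illustrative data.
n = 100
data = [(0.3 if t >= 50 else 0.0) + 0.05 * math.sin(0.3 * t) for t in range(n)]

def sse(residuals):
    """Sum of squared residuals."""
    return sum(e * e for e in residuals)

# Model 1: ordinary least-squares straight line (gradual trend).
tbar = (n - 1) / 2.0
ybar = sum(data) / n
den = sum((t - tbar) ** 2 for t in range(n))
slope = sum((t - tbar) * (y - ybar) for t, y in enumerate(data)) / den
intercept = ybar - slope * tbar
sse_linear = sse([y - (intercept + slope * t) for t, y in enumerate(data)])

# Model 2: flat-step-flat, with the breakpoint assumed known at t = 50.
m1 = sum(data[:50]) / 50
m2 = sum(data[50:]) / 50
sse_step = sse([y - (m1 if t < 50 else m2) for t, y in enumerate(data)])
```

        On this synthetic step series the step model fits better; on real data the exercise is only meaningful with an honest breakpoint search and a complexity penalty.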

      • There are two explanations currently in the literature both of which are consistent with AGW:
        1. That put forward by Tsonis et al., that there are synchronization shifts and climatic “regimes” coupled with an AGW signal.

        2. That the steps are related to “steps” in human behavior related to emissions, including non-GHG emissions.

      • Peter – I had to read that twice before I agreed with it.

        As layperson, I think some people need to read Swanson’s alteration of Egon Friedell’s quote about electricity and magnetism:

        Modes of natural climate variability are those forces of nature by which people who know nothing about modes of natural climate variability can explain everything. – Swanson’s proposed alteration to make a point about his paper

        A point about which some climate scientists appear to be ignorant.

      • They found that where a ‘synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability.’

        Swanson may have a view that the underlying CO2 signal is about 0.1 degrees C/decade in the period 1979 to 1997 – the period of strongly rising trend but outside of the times of strong climate perturbations at the climate shifts. This is not something that emerges from the network model as such. The lower rate of warming may in fact be false comfort given the nonlinearity demonstrated in the papers.

        Hard data trends in radiative flux – trends that are internally consistent, consistent across platforms and consistent with surface observations of cloud in the Pacific – show strong warming in the SW and cooling in the LW in the period in question.

        As I keep saying – a nonlinear climate presents new conceptual challenges for both zombies and wombats.

        Some of the problems in conception shown here emerge from confusion of terms. Let’s start by eliminating “stochastic”. There are no random elements – that is simply a statistical convenience for processes that are deterministic in principle but not in practice. In theory, all chaotic systems are fully deterministic. One thing leads to another – but there is true dynamical complexity. It appears in the climate system as abrupt change that looks very much like a chaotic oscillator. An example is the 20 to 40 year regime changes in Australian rainfall, for which there is physical evidence in stream form and in flood records.

        Chaos theory is a metatheory – a theory of theories – that says that chaotic oscillators have particular properties.
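        The “deterministic in principle but not in practice” point can be illustrated with the logistic map, a standard stand-in for a chaotic oscillator (a toy illustration, nothing climate-specific):

```python
def logistic_orbit(x0, r=3.99, n=200):
    """Iterate the logistic map x -> r*x*(1-x); fully deterministic."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# No random element: the same initial state reproduces the orbit exactly.
a = logistic_orbit(0.4)
b = logistic_orbit(0.4)

# Yet a 1e-10 change in the initial state gives a completely different orbit
# after a few dozen steps: deterministic, but not predictable in practice.
c = logistic_orbit(0.4 + 1e-10)
divergence = max(abs(u - v) for u, v in zip(a[50:150], c[50:150]))
```

        Re-running with the same state is exact, so nothing here is random; yet once the divergence saturates, only statistical statements about the orbit remain useful.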

      • Richard Wakefield

        Of the ones you mention, CO2 is currently the biggest on multidecadal to centennial timescales.

        There is no evidence for that. Ask Steve Schwartz.

      • Fred Moolten

        You didn’t mischaracterize my position at all.

        Overall, your conclusions appear robust and show extraordinary rigour.

        We come at the issue differently, is all.

        You have the expertise, knowledge, skill and resources to tackle the temperature component of the issue.

        I simply neither have these attributes to such an admirably advanced level, nor see the point of pursuing them once the size of the perturbation by CO2 is established.

        The perturbation is produced by means unconsented in a common shared resource. It is immoral to continue without consent of those with a share in the resource, and compensation for that consent.

        Everything after that is merely interesting math pastimes and fun Earth Sciences trivia, to me.

      • This is the worst sort of speculation, the precautionary principle in scientific clothing. Do you consider an ice age a perturbation? How do you rank it with increasing a trace gas concentration?

      • David Wojick

        Please point specifically to anything you feel stands as speculative, that is not well-founded in evidence, sound mathematics and good science.

        Ice ages are not perturbations, no. But something started them, and that something is in most cases most likely what can be characterized as a perturbation; or else there is some theoretical (and speculative) foundation for an actual cyclic mechanism behind ice age formation that has yet to be well enough developed to cite.

        If you really want me to discuss the Precautionary Principle, it happens Game Theory is one of my other little diversions, and I’d be happy to see Judith invite a Game Theoretician as advanced in that area as Tomas Milanovic is in Chaos Theory to post the Climate Game.

      • If you can do no better with real events like ice ages, then all you are really doing is hand waving the rest.

      • hunter

        Sorry, I wasn’t there for the last ice age, or even the LIA; I don’t know anyone who was, personally, and we have no record of thermometers or other reliable measuring apparatus in place at the time: no video, no pictures, no sound, no data recording of any kind, merely the proxies in the geological and ice records. And they don’t exactly give the sort of information I’d call a credible basis for concluding what exactly caused each ice age to start when it did.

        Insufficient information to form an adequate hypothesis.

        If you can do better, by all means go for it.

        Otherwise, what you describe as handwaving is the refusal to handwave.

  40. Unfortunately people working on these problems are not interested by the climate science and those working in climate science are not even aware that such questions exist, let alone have adequate training and tools to deal with them.

    This is just not true. I am just one member of an active and longstanding community of scientists working in this interdisciplinary area. What you call standing waves are more commonly called coherent structures. That term will lead you to a large and vibrant literature. Another term that is commonly used is pattern formation. A large part of this work goes under the name geophysical fluid dynamics.

    There are many sessions at major conferences devoted to this subject area. For example, the CHAOS2011 conference lists as one of its topics “Chaos in Climate Dynamics”. NSF recently completed a multi-year program, Collaboration in Mathematical Geosciences, that funded a lot of work in this area. NCAR has a program called IMAGe (Institute for Mathematics Applied to Geosciences). Their 2010 Theme of the Year was Mathematicians and Climate. The IMA (Institute for Mathematics and its Applications) has a 2013 Thematic Year on Infinite Dimensional and Stochastic Dynamical Systems and their Applications, with a workshop on Atmospheres and Oceans. I could go on and on but you get the picture.

  41. I think this was a very interesting contribution. Like a lot of people here, I am trying hard to decide what Spatio-temporal chaos actually means for climate prediction!

    Peter’s example, in which part of the light from the sun was blocked from reaching the Earth (above), illustrates the fact that there must be some predictability – as indeed there must be in the case of the mountain stream. E.g. if the viscosity of the fluid tumbling down the mountain were increased (by pouring treacle into the water, say) the result would be fairly predictable.

    My guess would be that for any point in the forcing parameter space, there is a range of dynamics possible, and that little more can be said.

    Changing a parameter (e.g. CO2 concentration) would be like the apocryphal flap of the butterfly’s wings – it would produce an arbitrary change that would grow exponentially with time (initially).
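    The “grows exponentially with time (initially)” claim can be made quantitative for a toy system. For the logistic map at r = 4, the Lyapunov exponent, estimated by averaging the local stretching rate along an orbit, is known analytically to be ln 2 ≈ 0.693, so nearby states separate roughly like e^(0.69 t) at first (a sketch, not a statement about the climate system):

```python
import math

# Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x) at r = 4
# by averaging log|f'(x)| = log|r*(1 - 2x)| along an orbit. The exact value
# for r = 4 is ln 2: small perturbations initially grow ~ e^(0.693 * t).
r, x, n = 4.0, 0.3, 10000
acc = 0.0
for _ in range(n):
    acc += math.log(abs(r * (1.0 - 2.0 * x)))   # local stretching rate
    x = r * x * (1.0 - x)
lyapunov = acc / n
```

    A positive exponent is exactly what makes the initial exponential growth, and hence the loss of pointwise predictability, inevitable in such a system.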

    I guess you can’t really look for the “signal of AGW” in the global temperature data, without first knowing the dynamics of the system without the CO2 variation. In a way, we know the answer to that – the climate can swing from temperate to ice age conditions!

    I do hope that Judith writes a piece, giving her perspective on where this leaves climate science – I am rather confused!

    • I do hope that Judith writes a piece, giving her perspective on where this leaves climate science – I am rather confused!

      Considering your designated guide, your confusion may continue.

    • David, I wrote a related piece on climate scenarios
      http://judithcurry.com/2011/01/04/scenarios-2010-2040-part-iii-climate-shifts/

      I agree with what Tomas and Tsonis have to say on this. However, this does not mean prima facie that climate models are not useful; this is more about how we should design climate model experiments and interpret the results. Also, Jeff Weiss makes valid points that there is a community that addresses such issues as related to the climate system and climate models, although most climate researchers do seem generally ignorant on this topic.

      • You can’t agree with both of them. They don’t agree.

        http://judithcurry.com/2011/01/04/scenarios-2010-2040-part-iii-climate-shifts/#comment-28100
        Tomas:
        “This observation has no predictive virtue so I don’t think that it could be used for your intent to elaborate a decadal scenario.
        All that Tsonis says, is that the behaviour (of the indexes, not of the system itself!) significantly changes when the correlation is strong and the predictor good.
        This situation happened in 2001 again.
        So according to Tsonis something will change/has changed significantly.”

        Tsonis though is clearly stating that we have now shifted into a cooler regime and there will be less warming over the next however many years.

        http://carbonpurging.com/blogs/mjewett/2009/04/01/exclusive-interview-with-professor-anastasios-tsonis-on-has-the-climate-recently-shifted

        “CP: Would you be more inclined to say that average global temperatures are cooling, that average global temperatures are trending no change, or that average global temperatures are warming?

        AT: Right now we would say that the rapid warming in the 80s and 90s has stopped and we are entering a cooler regime.”

        Tomas is telling you that there is no way of predicting what will happen with any probability.

        Either the GCM that Tsonis used in the 2007 paper is worthless or it is not. Either Tsonis can state that we are entering a cooler regime, or he can’t.

        They both can’t be right.

      • Tomas is commenting about my idea re creating a decadal scenario. The PDO is a manifestation of spatiotemporal chaos, a “coherent structure” if you will (note vukcevic would disagree, for reasons I don’t yet understand). We can’t predict how long a structure will remain coherent (Tomas is correct). But as a practical matter, it is a useful scenario to consider that this particular structure will remain coherent for another decade or two.

      • Judy – The PDO may be a manifestation of spatiotemporal chaos, although perhaps not completely – Gerald Meehl has provided evidence for an anthropogenic forcing component imposed on an underlying chaotic element. An interesting question relates to the potency of the PDO in affecting temperature trends. The PDO has more or less averaged out over the twentieth century as the temperature has risen. The most interesting interval is from about 1950 to the late 1970s. During that interval, the PDO turned strongly negative, and in addition, anthropogenic aerosols reduced the transmittance of solar radiation to the surface as measured under both clear sky and all sky conditions. The combined effect – a negative PDO and negative aerosol forcing – resulted in a flat but not cooling multidecadal interval.

        It seems reasonable to conclude that the PDO alone can modify underlying temperature trends, but that its potency in this regard is probably modest unless augmented by other climate drivers.

      • Fred, I never claimed that PDO explained all of the 20th century variability. But as Tsonis showed, the combo of PDO, AMO, NPGO, and ENSO explains a lot of the variability and climate shifts (and this kind of analysis is not at all inconsistent with an underlying forced warming trend).

      • Judith,
        The second figure in this RealClimate posting by Tsonis is a good demonstration of what you stated in general terms.

        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        The interpretation of that data may change, but it looks strongly indicative as it appears and is explained in that posting.

      • Pekka – That’s a fascinating article. I read it when it appeared in 2009, and forgot about it, but it makes sense as a hypothesis, although it’s speculative.

        Could the 1998 “super-El Nino” have triggered an unusual climate phenomenon subsequently? Perhaps, or perhaps not. However, El Nino events would predictably (a) increase the exposure of ocean heat to the atmosphere and hence to a possible escape to space; and (b) result in a warmed atmosphere that triggers positive feedbacks and thus greater warming (note that (a) and (b) operate in conflicting directions.) Additionally, a very recent paper in GRL or JGR (I don’t have the reference immediately at hand) suggests that a warmed Atlantic may shift the Walker circulation toward a more intense, La Nina mode.

        Finally, as Swanson and Tsonis indicate, high variability and sensitivity to anthropogenic forcing are two sides of the same coin, rather than competing explanations for the warming of the past hundred years.

      • Though their belief is clear in that post:

        “We hypothesize that the established pre-1998 trend is the true forced warming signal, and that the climate system effectively overshot this signal in response to the 1997/98 El Niño”

        There is a real AGW signal that is present based on the pre-1998 trend (in the figure they show the trend from 1950-1998).

      • I don’t see how anybody could forget the article, FRED.

        I love playing around with it.

      • The concentration on the PDO is not nearly the whole story of decadal variability as it is linked to the frequency and intensity of ENSO in the Pacific multi-decadal pattern. Hey – why don’t we link to AR4 – http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-6-3.html

        ENSO is the major driver by far of global surface temperature variation. If you recall – this is the V shaped cool pool across much of the central Pacific that is characteristic of the Pacific multi-decadal cool mode.

        http://www.osdpd.noaa.gov/data/sst/anomaly/2011/anomnight.2.10.2011.gif

        And I am just a little bit frustrated with a qualitative narrative that can’t be analysed because there is no quantitative basis to anything at all.

        I have never got an answer from you, Fred, on the ISCCP-FD and ERBS TOA upward radiative flux anomaly data. Both these platforms agree within reasonable limits. The uncertainty limits for both are less than 0.4 W/m². Now, we know that between 1977 and 1999 the planet warmed. So the reason for that lies in an energy imbalance. SW up strongly increased as a result of less cloud reflecting less sunlight back into space – planetary warming. At the same time LW up increased and offset some of the warming.

        The conclusion is that cloud change dominated Earth energy dynamics in the period. Can I get a reply on this please – and not a changing of grounds response. The latter is most assuredly an example either of bad faith or cognitive dissonance.

      • Chief,
        Did you mean to say:
        ” SW up strongly increased as a result of less cloud reflecting less sunlight back into space – planetary warming.”?

        Surely, SW up strongly decreased?

      • Yes you are absolutely right of course – my bad.

      • Just 2 points – Tsonis used a ‘toy’ network model not a GCM – not to disparage the groundbreaking nature of the work. And it is more correct to say that we have entered a cooler phase since 1998.

      • https://pantherfile.uwm.edu/aatsonis/www/2007GL030288.pdf

        “The particular model we examine here is the GFDL CM2.1 coupled ocean/atmosphere model [GFDL CM2.1 development team, 2006]. The first simulation is an 1860 pre-industrial conditions 500-year control run and the second is the SRESA1B, which is a ‘business as usual’ scenario with CO2 levels stabilizing at 720 ppmv at the close of the 21st century [Intergovernmental Panel on Climate Change, 2001]. From these model outputs we construct the same indices and their network.”

  42. In the Greenland ice sheet record there are major temperature oscillations during the Pleistocene. These were initially thought to have a periodicity of about 1500 years, but better dating showed intermittency. In my latest paper
    Loehle, C. and S.F. Singer. 2010. Holocene Temperature Records Show Millennial-Scale Periodicity. Canadian J. Earth Sciences 47: 1327-1336
    we evaluated several spatio-temporal models (by others) that are capable of generating such oscillations with intermittency properties and compared them to the data. This may interest people as a smaller scale (not global) example of how these phenomena are modeled and how difficult they are to evaluate. Also note claims (that we cited) that if a cycle isn’t exactly regular it isn’t real, in contrast to the mechanisms of the models we studied, where cycles can be missing. The paper is here:
    http://www.ncasi.org//Publications/Detail.aspx?id=3349

  43. Based on empirical data, it is fair to state that chaotic elements currently preclude attempts to accurately model or predict many short term or regional climate phenomena, and most internal climate dynamics.

    Conversely, they fail to alter the general characteristics of long term global temperature changes driven primarily by anthropogenic CO2 emissions. An interesting question is why. If climate responds chaotically to small initial changes in the Walker circulation or the MOC, why does it respond rather predictably to increases in CO2, independent of initial conditions?

    The answer must inevitably be speculative, but I will speculate that it lies at least partly in the dynamics of CO2-mediated warming. The critical element is the principle that the warming is driven primarily by what happens at the top of the atmosphere (TOA) rather than the surface. Increased infrared (IR) opacity due to elevated CO2 concentrations can be calculated rather accurately from computationally adapted radiative transfer differential equations to lead to a cascade of warming events reaching down to the surface. The result will always be additional heat in the climate system. The magnitude may be altered by regional phenomena (including those affecting lapse rates), and by a variety of feedbacks, but there is no plausible mechanism by which the sign can be reversed. The evidence on feedbacks remains a subject of some debate, but most of the data strongly suggest that long term feedbacks (i.e., those operating over more than a few years) are positive.

    For these reasons, it seems to me that it is implausible to suggest that the climate response to CO2 will be too chaotic to evaluate. It is more reasonable to predict that, consistent with both current and paleoclimatologic evidence, it will remain a positive cause/effect relationship, modulated by chaotic phenomena but not obliterated or reversed by them. The issue of quantitation is probably a more reasonable one to address. Here, timescales are important. In fact, warming/cooling reversals may indeed be possible over very short intervals (e.g., during a La Nina), whereas it will be the magnitude of warming that needs to be addressed over longer intervals. Similar considerations apply to non-chaotic anthropogenic cooling phenomena such as industrial aerosols, anthropogenic cooling catastrophes (global thermonuclear war), or non-anthropogenic catastrophes such as an asteroid impact.

    • “why does it (climate) respond rather predictably to increases in CO2, independent of initial conditions?”

      How about… it doesn’t respond predictably! First of all, show me a paper that provides a predictable response of “climate” with CO2. OK, there isn’t one. But maybe you were talking about temperature. Well, the temperature record is under intense scrutiny and debate. Anyone who has been following AGW for the last couple of years or more knows that the debate is by no means over. There is very little consensus that temperatures have risen by much at all. There is very little consensus that there is a correlation of CO2 increase with the debated temperature increase. The PC analysis that has been done of temps vs CO2 gives little reason to suggest that climate (whatever that is) is predictably responding to CO2. But that is off topic for this thread really. I think you’re basically still stuck on deterministic analysis.

  44. Tomas – thank you for a very well written piece. This is a subject which, in my opinion, runs right to the core of our understanding of climate. It is also a subject which is often counter-intuitive, and the standard “meme” arguments, prepared by climate scientists and regurgitated by advocates, are incredibly weak. The problem is that the arguments sound plausible to people who don’t understand chaos, but sound ridiculous to those who do.

    We see some of those debating points laid out here. Arthur Smith argues that the spatial correlation of ENSO is evidence against chaos, because chaotic systems cannot (in Arthur’s eyes) produce structures which correlate across large scales. Of course, this is the very point Benoit Mandelbrot was trying to get across with the Mandelbrot set – self-similarity can be an emergent property of a chaotic system. Therefore, the presence of large scale patterns cannot be evidence that chaos does not apply. Chaotic systems can have large scale correlations (spatially and temporally) – yet these patterns are just as unpredictable as the fine scale patterns. And of course we have a beautiful practical demonstration of this over the last few years. Bullish climate scientists believed their modelling was so good that they would be able to predict ENSO – and efforts were made to do just this a few years ago. These efforts have ended in dismal failure to exhibit any skill in such a prediction. It is a shame they did not observe their failure and begin to wonder whether the same problem that eludes prediction of ENSO might just prevent them from predicting climate at all.

    Another popular incorrect argument is that systems can be chaotic at one scale, but not on another. No! Chaos has defined criteria (exponential divergence from initial conditions; topologically mixing; dense periodic orbits) and a system is either chaotic, or it isn’t. What these people want to claim (but do not know how) is that the chaotic system exhibits some invariant property that can be estimated statistically or analytically. Which is fine, but to be doing this scientifically, you need evidence! And no convincing evidence is ever provided.
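    The first of those criteria, exponential divergence from initial conditions, is easy to demonstrate numerically. Here is a quick Python sketch (the map, seed value and iteration counts are just illustrative choices, not anything from the climate literature) that estimates the Lyapunov exponent of the logistic map; a positive value is precisely the exponential-divergence criterion:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, burn_in=1000, n=200_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1-2x)| along an orbit.
    A positive result means nearby trajectories diverge exponentially."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    total, count = 0.0, 0
    for _ in range(n):
        d = abs(r * (1.0 - 2.0 * x))
        if d > 1e-12:                 # avoid log(0) if x ever hits 0.5
            total += math.log(d)
            count += 1
        x = r * x * (1.0 - x)
    return total / count

print(lyapunov_logistic())  # approx 0.693 = ln 2 for r = 4: positive, hence chaotic
```

For r = 4 the exact answer is known to be ln 2, so the positive numerical estimate is not an artifact.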

    Andrew Dodds’ arguments upthread are perhaps the most extraordinary. A number is pulled out of the air with no justification (that stuff averages out over 15 years), and the supporting evidence of this is that we can measure global temperature averages. Clearly, that we can measure them does not prove that it is predictable – such a claim is patently absurd. The measurement is an issue of metrology and statistics; we know we can measure the sample average. The question is, does that sample mean relate in any meaningful way to the population mean? Is there even a meaningfully defined population mean global temperature?

    This gets to the nub of the problem, which occasionally is asserted that the global temperature is a boundary-value problem. This attempts to argue that the sample mean is a meaningful measure of the population mean. But once again, we get no evidence in support – we either get analogies, such as to the Lorenz attractor (“other chaotic systems have boundary values, therefore climate does” – a terrible argument) or we simply get bald assertion. Neither of these arguments come close to being scientific.

    The closest we get to evidence of the boundary-value problem is that of the 20th century hindcast being evidence that climate is predictable. I’m sorry, this is a joke. The reason it appears predictable at the 30-year timescale is that this reduces the temperature record to circa 2-3 degrees of freedom. It takes neither skill nor imagination to produce a model to successfully hindcast this. And as soon as we increase the number of degrees of freedom, the models fail miserably (e.g. Anagnostopoulos 2010).
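    The degrees-of-freedom point can be illustrated with a toy calculation (all numbers here are invented for illustration, not real temperature data): a 2-parameter straight line fits a 30-year-averaged noisy series almost perfectly, whatever its skill at finer scales.

```python
import random

def linfit_r2(xs, ys):
    """Least-squares straight-line fit; returns the R^2 of the fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

random.seed(7)
years = list(range(120))
# synthetic "record": weak trend plus interannual noise (illustrative values)
annual = [0.008 * t + random.gauss(0, 0.15) for t in years]
# 30-year averaging collapses 120 points to 4 degrees of freedom
coarse = [sum(annual[i:i + 30]) / 30 for i in range(0, 120, 30)]
print(linfit_r2(years, annual))            # modest fit at annual resolution
print(linfit_r2([15, 45, 75, 105], coarse))  # near-perfect fit to 4 averages
```

The same trivial model looks far more "skillful" against the coarse-grained record, which is the point about hindcasting a 2-3 degree-of-freedom target.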

    It is heartening that Jeffrey Weiss tells us these issues are taken up within the climate science community. My question to Jeffrey is: why are your colleagues in climate science allowing such atrocious and unscientific arguments such as the type listed here? If it were just commenting bloggers, I could understand it, but these are the types of ignorant argument put forward by the IPCC and scientists at blogs such as RealClimate (e.g. their posts on “Chaos and Climate”, “Butterflies, Tornados and Climate Modelling”). It certainly gives the impression of a complete lack of knowledge of chaos within the climate community. This may reflect unfairly on your work, but this is the highly visible face of climate science.

    • Spence – The evidence is very clear that chaotic elements are significant on some timescales but not on others. Over the multidecadal timescales, the non-chaotic solar signal predominated during the early twentieth century, with CO2 playing a subordinate role. Mid-century involved CO2-mediated warming counteracted by negative aerosol forcing, with perhaps a contribution from the chaotic component of the PDO. Since then, the multidecadal signal has been dominated by CO2-driven warming.

      Upthread, I’ve suggested reasons why the CO2 effect is relatively little influenced by chaotic factors, and why anthropogenic warming can be predicted with reasonable accuracy independent of initial conditions. This is a question of physics more than mathematics. The mechanism is speculative, but the fact that it is non-chaotic and that it dominates the climate signal is a matter of evidence. I believe that there has been a tendency on the part of some interpreters to assume that a system is either chaotic or not, but that is probably wrong in many cases, including climate dynamics. There, chaos is a partial contributor only, and its relative potency is very much related to the interval length that is assessed.

      • Regarding one other point you touched on, it’s worth noting that climate models do poorly with ENSO and other chaotic variations, but well with long term temperature trends as a function of anthropogenic forcing. In particular, this includes skill with forecasting as well as hindcasting.

      • Actually ENSO-like phenomena show up as emergent properties in various models, including, unless I’m remembering incorrectly, NASA GISS Model E.

        What most people see, though, are averages of multiple model runs, which tends to average out such things.

      • No, Fred. You have not presented any evidence which demonstrates that these patterns you are witnessing are true causal effects rather than structures within chaos which you are linking without an adequate understanding of the underlying dynamics of the system.

        We don’t even need to invoke chaos to falsify your position here. Self-similarity alone shows that the magnitude of variability in the 20th century is quite consistent with internal variability of the climate system (Cohn and Lins 2005). Your “causality” is entirely consistent with natural variability.

      • Let me respond to three comments you’ve made.

        First, the 20th century warming is only “consistent with natural variability” if one imagines sources of variability that have not emerged in the climate record over the course of millions of years. No source of variability we know to be operating suffices as an explanation. It would additionally require the physics of infrared absorption by CO2 and water, the Clausius-Clapeyron equation, and the confirmatory observational data to be ignored – in essence, the physics by themselves dictate a prominent role for CO2. On these points alone, we can eliminate natural variability as an explanation for most of the long term trends, while acknowledging its short term role.

        Second, in response to Peter below, you state that models exhibit more skill on annual scales than climatic 30-year scales. Can you provide a data source for that claim? All the data I’ve seen show the opposite for temperature trends – annual predictions are much less skillful than long term projections. Indeed, long term projections converge to similar outcomes despite different initial conditions, and the outcomes perform well in matching observations. This includes forecasts as well as hindcasts. The convergence from different initial conditions should alert interpreters to the conclusion that chaos is not an important contributor to these effects, as in fact the physics also indicates.

        Third, you have suggested that model skills fail to match even simple averaging. Technically, this may be correct for some temperature trends, but it reflects a misunderstanding of model skill. If temperature in response to CO2 is rising, and CO2 continues to rise, one can predict without models that the average trend will continue. That general point – that the future will look like the past – is a useful heuristic. It works – except when it doesn’t, and the importance of models is to understand what happens when the trend driver (e.g., CO2) changes. The models do this quite well, by comparing projections and observational data in the presence or absence of anthropogenic forcing (including aerosols as well as greenhouse gases). The evidence demonstrates that the absence of the anthropogenic forcing makes it impossible to reconcile the models with observations. It is this outcome rather than comparison with a simple average that demonstrates model skill.

        Finally, if I were to emphasize any single point in the above commentary, it would be that in addition to an analysis of trends, a detailed knowledge of the radiative physics of greenhouse gases and their consequences is needed for proper interpretation. Without that knowledge, the probability of misinterpreting the data is exceedingly high.

      • I don’t have to imagine natural variability. We can measure it directly. Your mistake is to assume a priori that it averages out over long scales. That is an implicit assumption on your part based on no evidence. Indeed, increasing variability at longer scales actually ties in better with paleoclimate reconstructions than the assumption of averaging out.

        Once you realise that it need not average out, it should be obvious that a self-similar system, extrapolated using the Hurst exponent readily measured at all scales and fitted to the instrumental record, is entirely consistent with the 20th century trend. I provided a reference for the calculations that demonstrate this (C&L05).

        So your statement that we know of no cause is flat wrong. We don’t even need to invoke external forcing at all – internal variability alone can achieve the changes we have seen.

        I have already linked a paper that shows models exhibit worse performance at 30-year scales than at monthly or annual scales. These included periods with significant CO2 forcing. The models’ predictions were worse than a naive forecast.

        I am well aware of the radiative properties of greenhouse gases. I am also aware of the problems associated with predicting behaviours of complex systems. I am also aware of how easy it is for experimenters to fool themselves into believing a model result. Without an understanding of all of those issues, you are almost guaranteed to make gross misinterpretations of the data.
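        For readers unfamiliar with the Hurst exponent mentioned above, here is a minimal rescaled-range (R/S) sketch in Python. The window sizes and the synthetic test series are illustrative assumptions of mine, not the C&L05 method; the point is only what the exponent measures: H near 0.5 for uncorrelated noise, H above 0.5 for the long-range persistence that self-similar records show.

```python
import math
import random

def rs_hurst(series, window_sizes=(16, 32, 64, 128, 256)):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent:
    slope of log(R/S) against log(window length)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        ratios = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            mean = sum(chunk) / n
            cum, lo, hi, dev = 0.0, 0.0, 0.0, 0.0
            for v in chunk:                    # range of cumulative deviations
                cum += v - mean
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (v - mean) ** 2
            s = math.sqrt(dev / n)
            if s > 0:
                ratios.append((hi - lo) / s)
        log_n.append(math.log(n))
        log_rs.append(math.log(sum(ratios) / len(ratios)))
    # least-squares slope of the log-log relation is the H estimate
    k = len(log_n)
    mx, my = sum(log_n) / k, sum(log_rs) / k
    return sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs)) / \
           sum((x - mx) ** 2 for x in log_n)

random.seed(42)
white = [random.gauss(0, 1) for _ in range(4096)]
print(rs_hurst(white))  # roughly 0.5-0.6 for uncorrelated noise
```

Applied to a record with H well above 0.5, the same extrapolation logic yields much larger excursions at long timescales than the "averages out" intuition suggests.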

      • To elaborate slightly on my above point: the cause/effect relationship between CO2 and temperature, derived from data spanning more than 400 million years and operating within uncertainty margins that can be quantified with reasonable probability, requires the existence of a prominent CO2 signal in the record of the past half century. The quantitation remains a matter of some disagreement, but can’t be stretched to a point of expecting only a trivial CO2 contribution.

      • The correlation between CO2 and temperature over 400 million years is well known to be caused by outgassing of warmer seas. The CO2 follows temperature.

        Furthermore, the direct effect of the CO2 differences falls far short of explaining the very large swings. You can’t have it both ways: you use this excuse for dismissing solar impacts. If the effect is too small, it must be dismissed.

        Self similarity in the temperature record (due to internal dynamics) coupled with ocean outgassing fully explains the geological data without any need to invoke high sensitivities to CO2 or any arm-waving about feedbacks.

      • Spence – Let me respond to your comments that appear both above and below.

        The data from over 400 million years is corrected for outgassing and documents a substantial temperature relationship in which CO2 is the cause and warming is the effect. This is true despite the fact that causality can also operate in the opposite direction when the temperature changes first in response to some other influence such as orbital forcing. The robust response of temperature to CO2 eliminates the possibility that most warming of recent decades is due to natural variability.

        We know the nature of the natural factors that operate, including ENSO, PDO, and AMO. None others operate substantially on a global scale. They have averaged out over the past 100 years. Above, I’ve addressed the modest role of the PDO during the middle of the twentieth century – it may explain some of the variation, but much must be attributed elsewhere.

        All the data that I’m aware of clearly show that models predicting or hindcasting global temperature trends do far better at 30 year timescales than annually or over only a few years. These models of course operate on the basis of the principle that responses in the ocean (70 percent of the Earth’s surface) predominate in determining long term trends. I know of no exception.

        Your reference does not appear to be an exception. It chooses a set of land based stations, ignoring the oceans. More importantly, though, it does not compare global trends modeled for different intervals, but merely asks how individual stations perform over shorter or longer periods. Not surprisingly, short term predictions at individual stations tend to do better when they are given less time to diverge from their own earlier values. This is quite different from model projections of globally averaged anomalies from a very large number of stations.

        Without meaning to be personally disparaging, I believe you don’t understand the physics, which make a prominent role of CO2 unassailable. Quantitation is another matter, as exemplified by disagreements about climate sensitivity values. That has been a subject of extensive discussion on another thread, and I’m always willing to offer my perspective on the climate sensitivity issue.

      • Fred,
        You state: “None others operate substantially on a global scale.” I do not think that there is any possibility to confirm this claim. I think that the best we can say is: “None others are required to reach satisfactory agreement with the observations when significant flexibility is allowed also for the anthropogenic influences by GHG’s and aerosols.” This is a much weaker statement telling little or nothing on the possibility of significant additional natural processes on the time scale from several decades to centuries.

        The main argument in support of CO2 and other human influences is that some influence is certain to exist, the observed development is consistent with a sizable influence and the temporal development is in agreement on the decadal level.

        The unknown natural oscillations have a reduced likelihood compared to the anthropogenic influence, because there is no prior reason to expect them to produce significant effects with the observed temporal behavior. This is not a proof against them but this makes them a less likely explanation.

      • Right Pekka, and some confluence of oscillations mean my new lottery ticket is the one.

        Modes of natural climate variability are those forces of nature by which people who know nothing about modes of natural climate variability can explain everything. That is Swanson’s (of Tsonis and Swanson) proposed alteration of the saying, made to make a point about his paper and to warn off true believers of the hoax.

        Their 1998 “shift” existed on one temperature series. Their examples were followed by a continuation of the Holocene, and warming consistent with AGW.

        They are talking about something that is highly unlikely to have taken place in 1998. Do you feel like you are living in a post-abrupt-change climate? Oddly reminiscent of the pre-abrupt-change climate is what I would say.

      • JCH,
        The progress of scientific understanding has much in common with Bayesian approach to using empirical data in building posterior likelihoods.

        Before certain observations we have a set of alternative hypotheses. New observations support those of the prior hypotheses which have given the highest probability to the actual outcome observed empirically. The hypothesis of natural transitions is consistent with any outcome within wide margins. Therefore it is neither disproved nor confirmed at all by the new observations. The hypothesis of influence of CO2 predicts specifically warming with a temporal structure close to the observed, although not perfectly, as the changes in the short-term rate cannot be explained and may even be slightly against the explanation. In any case the explanation based on CO2 gets significant support compared to alternatives.

        Looking further on alternative hypotheses of climate sensitivity, the results are not as conclusive.
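        The Bayesian point above can be made concrete with a toy calculation. The prior and likelihood numbers below are purely illustrative assumptions; only the mechanics of the update matter: a hypothesis that assigns a higher probability to the observed outcome gains posterior weight over one that is vaguely consistent with anything.

```python
def bayes_update(priors, likelihoods):
    """Posterior proportional to prior * likelihood, renormalised."""
    post = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Illustrative numbers only: the CO2 hypothesis predicts the observed
# decadal warming pattern more specifically than a loose "natural
# transitions" hypothesis that is consistent with any outcome.
priors = {"co2": 0.5, "natural": 0.5}
likelihoods = {"co2": 0.6, "natural": 0.2}  # P(observed pattern | hypothesis)
posterior = bayes_update(priors, likelihoods)
print(posterior)  # the more specific hypothesis gains: co2 approx 0.75
```

Repeating the update with each new batch of observations is the "building posterior likelihoods" process described above.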

      • It has nothing to do with temperature at all and there is a 2007 study that needs to be read in conjunction.

        Independent of the underlying physical causes of Pacific climate variability, an understanding of the PDO and ENSO as behaving like a complex and dynamic system in chaos theory emerged from a 2007 study by Tsonis et al. They constructed a numerical network model from 4 observed ocean and climate indices – ENSO, PDO, the North Atlantic Oscillation (NAO) and the Pacific Northwest Anomaly (PNA) – thus capturing most of the major modes of climate variability in the period 1900–2000. This network synchronized around 1909, the mid 1940’s and 1976/77 – after which the climate state shifted. A later study (Swanson and Tsonis 2009) found a similar shift in 1998/2001. They found that where a ‘synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability.’

        I would repeat my rules for hydrological dialectic here – but somehow I think I would be wasting my time on an irredeemable climate wombat.
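        For what the ‘coupling strength between the indices’ means in practice, here is a toy Python sketch: the mean absolute pairwise correlation of several index series inside a time window. The synthetic series merely stand in for ENSO, PDO, NAO and PNA; this illustrates the idea of a windowed network coupling measure, not the actual Tsonis et al. method or data.

```python
import math
import random

def pearson(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def coupling_strength(indices, start, window):
    """Mean absolute pairwise correlation of the index series inside a
    time window -- a toy stand-in for the network coupling measure."""
    chunk = [s[start:start + window] for s in indices]
    pairs = [(i, j) for i in range(len(chunk))
                    for j in range(i + 1, len(chunk))]
    return sum(abs(pearson(chunk[i], chunk[j])) for i, j in pairs) / len(pairs)

random.seed(1)
common = [random.gauss(0, 1) for _ in range(200)]          # shared driver
indices = [[c + random.gauss(0, 0.5) for c in common] for _ in range(4)]
print(coupling_strength(indices, 0, 100))  # high: the four series co-vary
```

In the Tsonis et al. picture, it is the behaviour of a measure like this around a synchronization event that signals whether the climate state shifts.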

      • The papers of Tsonis and Swanson are interesting and may well discuss a real phenomenon, but I am not convinced that their analysis provides more than a suggestion of a possible mechanism. The model is far too limited in scope and the data not nearly sufficient for stronger conclusions, as I see them.

      • The work has been described by Tomas – as a toy model.

        That it is.

      • Many others operate on a global scale

        http://ioc-goos-oopc.org/state_of_the_ocean/all/

        I think the satellite data tells a story however. CERES data shows little trend from 2000 – but large fluctuation due to cloud changes associated with ENSO.

        ISCCP-FD and ERBS upward radiative flux anomalies agree within reasonable limits and the stability uncertainty is < 0.4 W/m2 – so the trends look to be valid.

        http://isccp.giss.nasa.gov/zFD/an9090_SWup_toa.gif

        'The overall slow decrease of upwelling SW flux from the mid-1980's until the end of the 1990's and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.'

        http://isccp.giss.nasa.gov/zFD/an9090_LWup_toa.gif

        'The slow increase of global upwelling LW flux at TOA from the 1980's to the 1990's, which is found mostly in lower latitudes, is confirmed by the ERBE-CERES records.'

        The energy implications are simple – warming in the SW due to less cloud offset by cooling in the IR.

      • Fred, your comments once again show that you are simply assuming CO2 drives temperature without strong evidence and dismissing natural variability from ignorance.

        You assert that CO2 dominates temperature over 400 million years, and that ocean outgassing has been accounted for. Good. So how have the other factors in the carbon cycle been accounted for, such as weathering of rocks, or exchanges with the biosphere? Answer: they haven’t. So of course there is a discrepancy after outgassing is accounted for. These are the cycles Freeman Dyson was trying to understand, and found we do not have enough information to account for them or model them. I have not seen anyone show Dyson wrong on this yet. Despite this you just ignore this huge issue.

        You incorrectly characterise natural variability as a handful of observed ocean circulation patterns. These are not the be all and end all of natural variability. You assert that these “average out” over 100 years, again by assertion with no evidence. And what about centennial-scale and millennial-scale variability, and larger scales? You may think these are unimportant at the decadal scale, but this is exactly the topic of the Cohn and Lins paper and, getting back on topic, the Tsonis paper – these larger scale variations have a huge impact on how we perceive climate in the 20th century, particularly in the interpretation of trends.

        The double standards here are astonishing. You dismiss the natural variability because you argue it is too small to explain the 20th century temperature trends, even though you do not know how to measure it. But then you ignore the fact that we have a better understanding of CO2 forcing than natural variability, and that is definitely too small to explain geological-scale temperature variations. Do you apply consistent reasoning here? No, you do not.

        And so what if the paper I referenced only addresses land stations? If the models fail to capture the variability of the land, then they are largely worthless. As we’ve seen from the ARGO network, the models are wrong about the oceans also. There is nothing good left to say.

        You state you believe that I know nothing about the physics, and since you do not know me you can have no evidence to support this assertion. I think this sums up the arguments you put forward quite nicely: they are built on belief with no scientific evidence.

      • Spence – The factors you state as unaccounted for – weathering, biosphere, etc. – have in fact been addressed in the paleoclimatologic data. The role of CO2 remains substantial, with acknowledged uncertainties about the exact quantitation.

        I believe the problems with your argument are several, but the most salient is that you appear to be attempting to throw away more than a half century of data and theory on radiative geophysics, supported by satellite and ground based observations that confirm a robust role for CO2 in driving current and past climate changes. To account fully for observed changes, you then seem (to me at least) to be trying to conjure up internal climate variations beyond the ones that have been identified and quantified, the latter having been shown to average out over the course of a century. While variations are certainly part of the system, Swanson and Tsonis correctly pointed out that a strong climate sensitivity to such variations inevitably requires a strong sensitivity to anthropogenic forcing, including the CO2 increases of the past century. We can add the two phenomena (with appropriate adjustments for timescales), but we cannot plausibly substitute one for the other.

        I don’t see an either/or approach as an area that you are likely to find productive. Most of us acknowledge the existence of the internal variables, but an attempt to substitute them for known effects of greenhouse gases rather than to try to see how natural and anthropogenic factors balance out at different timescales will be seen as a dead end by individuals familiar with the abundant data in these areas.

        Finally, you and I can probably agree that timescales are important. Chaotic elements have been operating at timescales shorter than the long term anthropogenic warming trend driven by CO2 – these include ENSO, PDO, AMO, etc., but with net variation on a centennial scale close to zero. It is certainly possible they have also operated on multi-millennial scales as well. However, we know that severe and abrupt climate shifts of global extent have occurred (infrequently) during glaciations, and almost never during warmer interglacial periods such as the current Holocene. Therefore, without dismissing the possibility of an unanticipated event of this type, it would be prudent to base our expectations on the known behavior of anthropogenic trends and identified internal variability. It would be additionally prudent to recognize that sudden shifts are more likely to occur in the same direction as trends than in the reverse direction. Elsewhere, I cited as an example an abrupt warming from massive release of permafrost methane precipitated by the gradually accumulating temperature increase from further CO2 emissions.

        I won’t dwell further on what I perceive to be a number of misinterpretations of the Anagnostopoulos data except to say that the data derived from a few dozen land based stations do not invalidate the utility of models in estimating long term global temperature responses to anthropogenic forcings after the models are initialized to the existing temperature at the start of the run. Since the models have already been tested under appropriate conditions, their estimates under those conditions are the best current test of their skill. The latter can be characterized as imperfect but useful – and improving.

      • One other small point about these exchanges of commentary – their tone seems to have moderated since yesterday. We have all at times indulged in hyperbole, but I hope you’ll forgive me for having responded to not only the substance but also the tenor of your first comment, which in challenging commonly held perspectives, included terms such as “ridiculous”, “absurd”, “this is a joke”, and “complete lack of understanding”.

        The role of anthropogenic forcing from CO2 relative to natural phenomena in current and paleoclimatologic data is something I’m familiar with. That requires me to acknowledge uncertainty, but also to call attention to what we do know. I hope readers of this thread will consider the areas of understanding we can each claim in arriving at an accurate perspective on current climate phenomena.

      • Fred, we’ve moved far away from a discussion of chaos and are really no longer on topic for this thread. So I suggest we park this discussion and agree to disagree for the time being.

        As for tone, I stand by my assertion that the general claim that models are validated by matching a test vector of 2-3 degrees of freedom in hindcast is scientifically an absolute joke. As a general statement about climate science, this is absolutely correct. However, when engaging in one-to-one discussions as opposed to general statements I typically try to be more moderate. Please do not confuse this with me backing down on my original points.

      • Oops, missed the request for the reference to be repeated. The evidence showing models perform best at monthly scales and most poorly at climatic (30-year) scales comes from Anagnostopoulos 2010, “A comparison of local and aggregated climate model outputs with observed data”.

      • I think we have heard all this before Fred – repetition will not improve the situation.

        If I may repeat myself in summary. I have quoted the US National Academy of Sciences, the Woods Hole Oceanographic Institution and the Pentagon on abrupt change. In addition we have the Tsonis network model showing chaotic linkages between climate indices in the modern era. I have described chaotic oscillators in my own field with abrupt changes in rainfall regimes. The PDO is an example of a standing wave in a chaotic oscillation, as are ENSO, PDV, PNA, NAO and other indices. They all change abruptly.

        I feel that you do not reflect on the evidence in a true dialectic but merely repeat your ideas at great length – albeit with some modification as you assimilate some confounding ideas into your world view. You confidently assert for instance that some elements are chaotic and others – primarily greenhouse gases – aren’t.

        How can I confidently assert a chaotic climate system? Let’s first look at the climate system as a whole. It consists of lithosphere, biosphere, hydrosphere, cryosphere, atmosphere and heliosphere. These elements all interact to create climate. The interactions are in principle completely deterministic in a system that is dynamically complex. The system as a whole exhibits abrupt changes that look very much like a chaotic oscillator. If it looks like a duck…

        Now – because it is deterministic – we can look at proximate cause and effect. For instance – changes in upwelling of frigid sub-surface water having an effect on clouds and that influencing global energy dynamics – as shown in satellite radiative flux data. We have had this dance before – and you always disappear before the end of the song. The SW record shows strong warming between 1984 and 1998 in the SW and cooling in the LW. Now I know that is the case because the planet was warming in the period – so it is a trivial exercise to disentangle causal factors. The dominant factor in this period was cloud and not CO2.

        Chaos theory simply tells us something about the properties of complex and dynamical systems – such as the global climate system. In principle small changes – such as in trace atmospheric gases – can accumulate in chaotic systems and precipitate wildly out of proportion to the initial impetus. Greenhouse gases are part and parcel of the Earth climate system with no special properties.

      • I’m sometimes unclear as to what points you’re trying to make, although I have the sense that if I knew, I would agree with some of them. There are chaotic elements in the climate system, and elements that exhibit non-chaotic behavior. The latter operate predictably (within reasonable uncertainty margins), yield long term trends that are fairly independent of initial conditions, and do not themselves directly create effects that swing widely away from the trend line. They may, however, bring the climate to tipping points that would precipitate a disproportionate shift from the previous rate of change. In the case of warming, the potential tipping points generally imply a more abrupt warming rather than a reversal of the warming (for example, a massive methane release from warmed Arctic permafrost). The effect of CO2 and other greenhouse gases fits into this category. This is probably a manifestation of the physics of radiative warming resulting from TOA imbalances. The evidence comes from the past century, but also from a wealth of paleoclimatologic data. There remain disagreements about quantitation, but there is little evidence to support claims that the effects of CO2 are insubstantial, either in contributing to recent trends or in modifying past trends. The climate sensitivity quantitation issue was mentioned by Swanson and Tsonis, who asserted that a significant climate response to internal variations must imply a significant response to anthropogenic forcing. Both are potentially important, and I agree with that, but would add that the difference in timescales is relevant to future climate projections. Over the next decade, it is hard to estimate whether the internal or forced components will dominate. Over the next several decades, the forced (warming) component is likely to dominate, and the evidence from the mid-century PDO data supports this conclusion.

      • You seem mostly unclear I am afraid. The implication for climate sensitivity, for instance, involved chaos.

        I suggest that you read the 2002 NAS report – and meditate on abrupt climate change.

      • The heliosphere is definitely chaotic. This extends from orbital eccentricities – as in the cryptic post from Eli – to internal solar processes. If I say the Sun is not the centre of the solar system – I will probably be accused of being a flat earther. But it is true that the Sun orbits around the centre of gravity of the solar system rather than being strictly at the centre. This induces geo-magnetic changes that are chaotic as a result of chaotic orbits. I would cite the reversal of the Sun’s magnetic field in the Hale cycle as an example of chaotic bifurcation. Equally – the Earth’s magnetic reversal on much longer timescales.

        Still – there is a confusion of terms. Chaotic systems are in principle fully deterministic. They are chaotic because they behave like nonlinear oscillators. Chaos theory is a metatheory that gives us clues about the properties of complex dynamical systems.

      • Correct me if I’m wrong, but I believe the solar system center of gravity is inside the sun and very close to its center.

      • No you are not wrong at all – but that really is what drives the solar dynamo as I understand it.

      • This is closer to what I meant on the futility of considering greenhouse as some sort of special case.

        The climate system consists of lithosphere, biosphere, hydrosphere, cryosphere, atmosphere and heliosphere – all these components interact with tremendous energies cascading through powerful systems. In this box are greenhouse gases in the atmosphere – so they are very much part and parcel of the system dynamics.

    • “The closest we get to evidence of the boundary-value problem is that of the 20th century hindcast being evidence that climate is predictable. I’m sorry, this is a joke. The reason it appears predictable at the 30-year timescale is that this reduces the temperature record to circa 2-3 degrees of freedom. It takes neither skill nor imagination to produce a model to successfully hindcast this. And as soon as we increase the number of degrees of freedom, the models fail miserably (e.g. Anagnostopoulos 2010).”

      I’m not really that familiar with the efforts that have been made to validate the hindcast of global climate models, BUT if they are skillful with respect to the number of degrees of freedom they use and predict, then they are skillful.

      If you want to make the argument that the validation of the models has been incorrect because they don’t properly take into account the degrees of freedom used in or predicted by the models, I’d be interested to see that information, but Anagnostopoulos 2010 doesn’t speak to that matter.

      • Sorry Peter, I don’t think your response is scientifically meaningful.

        I am saying that the models’ ability to correctly hindcast 2-3 degrees of freedom is worthless in scientific terms, whether the degrees of freedom have been “taken into account” or not. Ideally, out-of-sample data should be used; in that case, as few as 2-3 degrees of freedom could be meaningful. Failing that, if you have no choice but to work in hindcast, you need more degrees of freedom in the output than you have degrees of freedom in the input. There are more degrees of freedom in publication bias alone than in the 30-year averaged global temperature over the instrumental record.

        And your comment about Anagnostopoulos 2010 is also incorrect. The number of degrees of freedom in the test is increased through resolution in both the spatial and temporal domain. The result is that the models fail to show any skill, with coefficients of efficiency below zero (i.e., models are outperformed by a naive average).

        Furthermore, skill is lower at the climatic scale (30-year) than at either monthly or annual scale, showing that performance degrades with increasing scale (as would be expected given linear averaging vs. exponential error growth).
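        For concreteness, here is what a coefficient of efficiency below zero means, sketched in Python on synthetic data (the numbers are invented; only the metric is the point): a model no better than the observed mean scores exactly zero, and a biased model scores negative.

```python
import numpy as np

def coeff_of_efficiency(obs, model):
    """Nash-Sutcliffe coefficient of efficiency: 1 is a perfect match,
    0 is no better than the observed mean, below 0 is worse than it."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    return 1.0 - np.sum((obs - model) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
obs = rng.normal(size=50)                    # synthetic "observations"

naive = np.full_like(obs, obs.mean())        # the naive-average baseline
ce_naive = coeff_of_efficiency(obs, naive)   # exactly 0 by construction

biased = obs + 2.0                           # a "model" with a constant bias
ce_biased = coeff_of_efficiency(obs, biased) # negative: worse than the naive mean

print(round(ce_naive, 6), round(ce_biased, 3))
```

        A score below zero therefore does not mean merely "imperfect"; it means the model carries less information about the observations than their own average does.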

      • That’s fine, because I don’t see how your post is statistically meaningful. A model can be a good model given a certain number of degrees of freedom and not at another number of degrees of freedom, but if the model is based on the first number of degrees of freedom it can still be a good model.

        There should be a relationship between the number of degrees of freedom the model uses and the validation tests that determine whether it is skillful or not.

        To say a model that is designed to make predictions at 2-3 degrees of freedom isn’t a good model because it fails when we test it against 100 degrees of freedom isn’t a valid test of the model.

      • Peter,

        I think we might be talking slightly at cross purposes here. By degrees of freedom, I am referring to degrees of freedom in the test vector used to test the model. I will try to explain myself a little more clearly.

        The argument put forward is that climate models produce output at a wide range of spatial and temporal scales, but it is claimed that they only become skillful at global scale with 30-year temporal resolution. Models are not designed to be skillful for a particular number of degrees of freedom; skill is typically defined in terms of a scale and a time horizon at which skill degrades.

        Now if we can only test the models at global, 30-year scale, this creates a problem because we only have one data set accurate enough to test, the 20th century instrumental record. From Nyquist sampling theory, 120 years at 30 year scale is just 4 degrees of freedom (or independent test points in our vector). However, in practice, we remove one degree of freedom by reducing it to an anomaly, and serial correlation reduces this further. Hence our test vector has 2-3 degrees of freedom.
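        The counting above can be sketched numerically. This is a minimal back-of-envelope version, using the standard n·(1−r₁)/(1+r₁) effective-sample-size adjustment for serial correlation; the lag-1 autocorrelation value is assumed purely for illustration:

```python
# Back-of-envelope count of independent test points in a 120-year
# global temperature record, following the argument above.
n_years = 120
scale = 30                      # climatic averaging scale in years
n_raw = n_years // scale        # 4 non-overlapping 30-year means
n_after_anomaly = n_raw - 1     # one DOF spent defining the anomaly baseline

# Standard effective-sample-size correction for serial correlation:
# n_eff = n * (1 - r1) / (1 + r1), with r1 the lag-1 autocorrelation.
r1 = 0.3                        # assumed value, for illustration only
n_eff = n_after_anomaly * (1 - r1) / (1 + r1)

print(n_raw, n_after_anomaly, round(n_eff, 2))
```

        With any plausible positive r1 the count lands below three, which is the "2-3 degrees of freedom" figure used in this thread.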

        We can expand this test vector by adding more years. We can get 100 degrees of freedom by obtaining 3,000 years worth of accurate global temperature data. But where do we get this from?

        I think I would be willing to accept a lesser requirement, but it must be out-of-sample data. If the models could be tested with another 3 degrees of freedom in our test vector, provided those are data that the models cannot be tuned against, I would trust the models more. Or, in hindcast mode, I require many more degrees of freedom in our test vector; but from Nyquist, this requires skill at a much finer resolution.

        In summary: models are not set up to be skillful for a number of degrees of freedom, they are designed to be skillful at a particular scale and time horizon. The number of degrees of freedom is associated with the test vector with which the models are validated, and defines how challenging the test is.

        Another point: a truly scientific test requires the test method and acceptance threshold to be defined prior to the test being conducted. This is strictly enforced, for example, in medical studies. In climate, we rely on scientists to do this objectively, but history has shown us that scientists as individuals rarely live up to this requirement of objectivity.

      • I understand your point. I’m not an expert, but I think there are efforts that address at least some of your issues. In terms of longer term data availability, you should look at the work that the McGill climate group has done, though I’m not sure how they are validating their models.

        Also, it doesn’t matter if it is based on a few degrees of freedom: if, given only those few degrees of freedom, I have a significant model, it is a significant model. Having only a few degrees of freedom makes it harder for it to be a significant model, but a model can be a significant predictor with only 2 degrees of freedom.

        In addition, I understand your point about resolution, but too fine a resolution makes it an irrelevant test. Even gaining a few degrees of freedom will largely diminish your issues.

        The paper you reference does not address your issue at all and has huge sampling issues. It is easy to imagine, given the amount of the world that is ocean, that a model that captures ocean/atmosphere dynamics well is a good model for predicting global climate even if it does poorly over land.

        A better test would be to randomly create 6 zones from the world and test how the models do for each of those zones. Then repeat that and show over a significant number of randomly created zones the models fail. This would give you unbiased sampling spaces over a large enough range to be at least somewhat valid with respect to global climate, and address your issue of degrees of freedom.
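        A sketch of how such a randomized-zones test might look, on purely synthetic data (no real model output or observations are used; the data generation and noise level are invented for illustration):

```python
import numpy as np

# Sketch of the randomized-zones test proposed above: partition grid
# cells into 6 random zones, score a synthetic "model" against
# synthetic "observations" per zone, and repeat over many partitions.
rng = np.random.default_rng(42)
n_cells, n_time, n_zones, n_trials = 600, 40, 6, 200

obs = rng.normal(size=(n_cells, n_time)).cumsum(axis=1)  # fake obs series
model = obs + rng.normal(scale=1.0, size=obs.shape)      # fake noisy model

def ce(o, m):
    """Nash-Sutcliffe coefficient of efficiency for one zone-mean series."""
    return 1.0 - np.sum((o - m) ** 2) / np.sum((o - o.mean()) ** 2)

scores = []
for _ in range(n_trials):
    zone = rng.integers(0, n_zones, size=n_cells)        # random partition
    for z in range(n_zones):
        o = obs[zone == z].mean(axis=0)                  # zone-mean series
        m = model[zone == z].mean(axis=0)
        scores.append(ce(o, m))

print(len(scores), round(float(np.median(scores)), 3))
```

        The distribution of zone scores over many random partitions is what the proposal asks for: a model would have to score well across that distribution, not just on one privileged aggregation.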

    • Spence – my point about long-distance correlation such as ENSO was not regarding chaotic systems per se – obviously temporal chaos often involves spatially extended attractors, which is very likely a good description of ENSO. My point was that Tomas is invoking “spatio-temporal” chaos, not plain old chaos, and the defining characteristic of adding chaos in the spatial dimensions (for example in the case of the Kuramoto-Sivashinsky equation) is that correlations decay quickly along those spatial directions, just as they do across time for temporal chaos.

      If spatio-temporal chaos, as opposed to plain old temporal chaos, is relevant to Earth’s climate system, then it should obliterate all large-scale correlations. That is just the opposite of what Tomas and Dr. Curry seem to be arguing here though. The argument is clearly confused.
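      The Kuramoto-Sivashinsky equation takes some work to integrate, but the decay of spatial correlations can be illustrated with a much simpler stand-in: a diffusively coupled logistic-map lattice, a standard toy model of spatio-temporal chaos. All parameters below are illustrative, not fitted to any physical system:

```python
import numpy as np

# Minimal caricature of spatio-temporal chaos: a coupled map lattice.
rng = np.random.default_rng(1)
L, transient, samples, eps = 256, 1000, 200, 0.3
f = lambda x: 4.0 * x * (1.0 - x)       # fully chaotic logistic map

def step(x):
    fx = f(x)
    # diffusive coupling to nearest neighbours, periodic boundaries
    return (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

x = rng.random(L)
for _ in range(transient):              # discard the transient
    x = step(x)

# time-averaged correlation between lattice sites a distance d apart
corr_sum = {d: 0.0 for d in (1, 8, 64)}
for _ in range(samples):
    x = step(x)
    for d in corr_sum:
        corr_sum[d] += np.corrcoef(x, np.roll(x, d))[0, 1]
corr = {d: s / samples for d, s in corr_sum.items()}
print({d: round(c, 3) for d, c in corr.items()})
```

      Measuring how the correlation falls off with distance d in a model like this is exactly the diagnostic at issue: in the spatio-temporally chaotic regime the correlations are short-ranged, which is the behavior being contrasted with the large-scale coherence of ENSO.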

      • Arthur,

        Your original claim was that ENSO was evidence of a small Lyapunov exponent, and that therefore chaos didn’t matter. Lyapunov exponents are relevant to both temporal chaos and spatio-temporal chaos so your claim that you were making a distinction between spatio-temporal chaos and temporal chaos makes no sense.

        Furthermore, your assertion regarding Lyapunov exponents is simply false. You can have smooth / highly correlated data with rapid exponential divergence from arbitrarily small difference in initial conditions. Also, you can have rapidly changing data (spatially and temporally) but a slower divergence.

        In short, your original claim shows a fundamental misunderstanding of what a Lyapunov exponent is and your subsequent claim regarding differences between types of chaos doesn’t do anything to fix your original statement.
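        The point that smooth, bounded data can coexist with rapid exponential divergence is easy to demonstrate with the textbook logistic map, whose largest Lyapunov exponent at r = 4 is known analytically to be ln 2:

```python
import math

# Logistic map x -> r x (1 - x) at r = 4: bounded, innocuous-looking
# data, yet the largest Lyapunov exponent is positive (ln 2, analytically).
r = 4.0
f = lambda x: r * x * (1.0 - x)

# Lyapunov exponent estimated as the orbit average of log |f'(x)|
x, total, n = 0.2, 0.0, 10_000
for _ in range(n):
    total += math.log(abs(r * (1.0 - 2.0 * x)))
    x = f(x)
lyap = total / n                      # should approach ln 2 ~ 0.693

# Exponential divergence: two orbits starting 1e-12 apart
a, b = 0.2, 0.2 + 1e-12
gaps = []
for _ in range(60):
    a, b = f(a), f(b)
    gaps.append(abs(a - b))

print(round(lyap, 3), max(gaps))
```

        The exponent is a statement about the growth rate of infinitesimal perturbations along the orbit, not about how rough or smooth the data look, which is the distinction being argued here.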

      • Spence, I think we’re talking past one another. Jeffrey Weiss (up above) certainly knows more on the subject than I do, and I’m guessing you as well.
        Tomas was insisting on the importance of the spatial component of the chaos in our climate system (see title of this post, and his comments up top regarding the difference – which he introduced here, not I). Weiss mentioned pattern formation studies which are certainly relevant – I think the main point is, these things can be studied in a reasonable fashion with models, and Tomas’ arguments that it’s impossible are simply wrong. Local turbulence is the portion of the problem with the largest Lyapunov exponents, both in space and time, but you parametrize it as viscosity, and it’s no big deal. Similar parametrizations work all the way up to handle larger-scale patterns: eddies and clouds etc. And the things with the slowest large-scale dynamics will emerge, like ENSO, in the models, if they’re there at all. It’s not a mystery.

      • Arthur,

        I expect there are many points on which we could agree here, and I might be being picky on terminology in places (which I often manage to get wrong myself). Whilst climate (and weather) almost certainly are chaotic on some scale (or at least influenced by some chaotic system – even the orbit of the planets!), we have no evidence as yet that this is what limits our ability to model things. It could be our limits at present are simply that our models exponentially diverge from reality because the models do not reflect the underlying relationships, and capturing these better would result in a longer prediction horizon. It needs more research and I would certainly like to see more funding in this area.

        It is a difficult subject and I strongly agree I would very much like to see a post from Dr. Weiss. Hopefully it could be done in a way that complements Tomas’ post. That said, I fully understand putting together a post like this takes a lot of time – as mentioned on another recent thread, I have toyed with the idea of writing up some of my opinions on the work of the Itia group, and some two years later I’m still thinking about it…

  45. The quote function isn’t working so:

    Richard Wakefield,

    “Though there is no formal definition of a “heat wave”, Environment Canada uses more than 2 days over 32C. For my analysis I did several measures.”

    And if it was called Canadian warming your analysis would be relevant.

    • Richard Wakefield

      So how would you define heat waves? The AGW community continues to claim we will have more of them in the future due to AGW. So what is it they are referring to?

      • I don’t know. I don’t claim to speak for the AGW community. My only point is that a “local” (e.g. Canadian based) index for anything can’t be used to disprove GLOBAL anything.

        If I say globally poverty is increasing, data saying that isn’t true in Canada isn’t good evidence that I’m wrong.

      • Clearly, models that could pass the Anagnostopoulos 2010 test would be better models, but that doesn’t mean that the current models aren’t skillful at predicting or hindcasting global temperature trends.

        Just that it is possible that they could be better.

      • I must say, I learned a great deal from this thread – bravo Dr Curry!

        Now, Hamiltonian mechanics (from Wikipedia) “do not provide a more convenient way of solving a particular problem. Rather, they provide deeper insights into both the general structure of classical mechanics and its connection to quantum mechanics”. Meanwhile, the Lyapunov exponent (again from Wikipedia) “in general, depends on the starting point x0. However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one.” So as the deterministic analysis of the climate scientists continues, they are able to make their black and white, PID-controller-like claims. But engineers around the world understand that no amount of determinism is going to prove any AGW theory right or wrong. The sooner the climate scientists start walking past the statisticians’ frat room and start hanging out with the nerds in the lab of the applied math dept. and share a beer with the fluid mechanics specialists in the aeronautical engineering dept., the sooner we’ll get some real progress with climate science.

      • But since TMax isn’t increasing elsewhere in the world, there is no increase in heatwaves because of AGW. Recall the claim that France’s 2003 heat wave was because of AGW? Well, 1947 was actually hotter and longer.

  46. It is important to quantify what is meant by chaotic variations. What is the amplitude of surface variation achievable by changes that are intrinsic to the land/ocean/atmosphere system? Is the super El Nino of 1998 typical of the maximum magnitude of such variations? Note that this variation was only sustained for a few months, and was only a 0.5 C perturbation. Can chaos lead to larger and longer lasting perturbations than this extreme event? I don’t think so, because even this extreme required a substantial redistribution of energy within the system for it to show up in the global average. The chaotic system only has a certain amount of energy to play with, so I think 0.5 C is about the most you will see, and that won’t be sustained for long before a reversal. On the other hand, 3 C from doubling CO2 dwarfs this chaos, and should be considered something above the noise. It adds energy to the chaotic system in appreciable amounts that will show clearly above the noise, and is already showing clearly enough for some. And it won’t be a steady rise, but fast and slow, somewhat like we have already seen in recent decades. Every pause recently has been followed by a fast rise, none by falls, and the general rise will be getting faster.

    • I think a better term is abrupt climate change – chaos theory merely tells us something about the properties of complex dynamical systems. I suggest you google abrupt climate change.

      • The point is that it is a complex dynamical system, but it only has a limited amount of energy to redistribute, and a limited speed it can do it, just from the laws governing fluid flows and heat capacity.

  47. http://www.whoi.edu/page.do?pid=12455&tid=282&cid=9986

    The climate system consists of lithosphere, biosphere, hydrosphere, cryosphere, atmosphere and heliosphere – all these components interact with tremendous energies cascading through powerful systems. In this box are greenhouse gases in the atmosphere – so they are very much part and parcel of the system dynamics.

    It is not just ENSO at all – a small but highly significant part of the climate system. What I was suggesting is that you have radically underestimated the potential and reality for abrupt change in the climate system and that you do some research.

    • Can abrupt climate change occur without changes in forcing? I think not. The best you can do is get super El Niños or about 0.5 C fluctuations. This is to argue against those who think what we see currently is part of a natural unforced variation, and who might still be making that argument when we are 3 C warmer. You can’t get 3 degrees without forcing, i.e. by just redistributing the energy, otherwise we would have seen that in the historical record.

      • OK – here are the rules. Here it is a seeking of the sacred hydrological truth through dialectic. The required attitude is: humour, patience, civility, application, honesty, good will and good faith.

        You have shown 2 examples of bad faith – because I mildly suggested that you do further research on abrupt change – and then, when you persisted – provided a link. Without doubt abrupt climate change does appear in the paleoclimatic and modern record.

        http://www.nap.edu/openbook.php?isbn=0309074347
        http://dels.nas.edu/resources/static-assets/materials-based-on-reports/reports-in-brief/abrupt_climate_change_final.pdf

        I have seen that unforced comment used in the Royal Society climate summary – unforced internal climate variation as a result of climate being a chaotic system. In reality – in chaotic systems the forcing is a small change in initial condition that is amplified, one way or another, through the system. It is the definition of abrupt climate change.

        But chaos is simply a metatheory – it describes properties of complex dynamical systems but doesn’t help much in analysing systems themselves. One of these properties is sensitive dependence on initial conditions – the small changes referred to.

        Chaotic systems are in theory deterministic – so we can begin to disentangle causal mechanisms. The best place to start is at the top of atmosphere. There is a dynamic energy imbalance at TOA. Energy in less energy out equals the change in global energy storage mostly as heat in oceans and atmosphere. If more energy enters the system than leaves in a period – the planet warms and vice versa. That is the totality of what is meant by forcing.

        Solar irradiance and radiative flux have been measured for decades. The most useful of the radiative flux measures are the TOA upward power flux anomalies. The absolute value is not known with any precision but the changes are known much more precisely. The stability uncertainty is less than 0.4 W/m2.

        The trends in outgoing SW flux anomalies in both ERBS and ISCCP-FD agree within reasonable limits and show a large decrease. Less cloud reflecting sunlight back into space – planetary warming. At the same time there was an increase in IR – more heat power flux being radiated into space – planetary cooling. So the net warming during that period was dominated strongly by cloud changes. This is something of a shock to climate wombats – but it is objective data that will stand up to scrutiny.

        The CERES record from 2000 shows large changes associated with ENSO but little obvious trend. The reason ENSO has such a large impact on Earth’s radiant balance is that clouds dissipate over warm water and form over cool. The differences in SST arise from a shifting balance between upwelling, cold sub-surface water in the eastern Pacific and the suppression of upwelling by a warm surface layer.

        There are decadal and longer changes in the Pacific that influence cloud and Earth’s dynamic energy. There are other changes in ice, vegetation etc that can also influence Earth albedo and therefore Earth’s dynamic energy imbalance – producing radical cooling in as little as a decade. Regardless of the small changes, and in this is included greenhouse gases, they propagate through the climate system driving changes in the radiant imbalance of the Earth and therefore warming and cooling.

      • You are missing the point I am trying to get at. What is the intrinsic variability of the current climate system (clouds, ocean, and all) in the absence of any changes in forcing? I say about 0.5 C, and that being sustainable for at most a year. Through the past all larger changes than this have been associated with changes in forcing. e.g. the LIA, and probably MWP, would be solar forcing, Ice Ages due to Milankovitch cycles, the KT transition due to an asteroid, etc. It has yet to be demonstrated even in the paleo record that large changes can be unforced. Going forwards CO2 forcing is several times larger than the LIA solar forcing which was itself measurable in the surface temperature record, so we expect CO2 forcing to be measurable for sure, and yes, it will be accompanied by some effects of changing clouds too, but we don’t know which direction they would push it.

      • There cannot be any warming of oceans and atmosphere without ‘forcing’. It is impossible, and your concentration on surface temperature is misguided – when we talk about global warming it is in both oceans and atmosphere. The 0.5 C surface temperature change from a single ENSO is both irrelevant and, as generally accepted in the literature, overestimated for a 1 year period – it is your guess and without knowing your assumptions I can’t comment. The assumption that ENSO is ‘unforced’ is wrong I believe – there are no stochastic elements in climate. But it is at any rate not a well formed question – and I only persist because you have almost met the essential requirements. It is a bit lacking in humour – but we will let that slide.

        By the first law of thermodynamics. All planetary warming or cooling in any period occurs because there is a difference between incoming and outgoing energy, an energy imbalance. The imbalance results in changes to the amount of energy stored, mostly as heat in the atmosphere and oceans, in Earth’s climate system. If more energy enters the atmosphere from the Sun than is reradiated back out into space – the planet warms. Conversely, if less energy enters the atmosphere than leaves – the planet cools. Thus Earth’s energy budget can be completely defined in three terms. In any period, energy in is equal to energy out plus the change in the amount of stored energy.

        This can be expressed as:

        Ein/s – Eout/s = d(GES)/dt.

        By the law of conservation of energy – the average unit energy in (Ein/s) at the top of atmosphere (TOA) in a period less the average unit energy out (Eout/s) is equal to the rate of change (d(GES)/dt) in global energy storage (GES). The most commonly used unit of energy is Joules. Energy in and energy out are most commonly reported in Watts (or Watts/m2) – and are more properly understood to be a radiative flux or a flow of energy. A flux of one Watt for one second is one Joule – which is known as unit energy. Most of the stored energy is stored as heat in the oceans, which is measured in Joules (or Joules/m2).
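        To put rough numbers on the budget equation, here is a sketch with entirely assumed values (a hypothetical sustained imbalance, not a measured one), converting a TOA flux into stored Joules and into a bulk ocean-layer temperature change:

```python
# Hypothetical numbers to illustrate Ein/s - Eout/s = d(GES)/dt:
# a sustained 0.5 W/m2 top-of-atmosphere imbalance accumulated over a
# decade, expressed as stored heat and as a bulk ocean-layer warming.
# All values are rough, for scale only.
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.10e14          # total surface area of the Earth
imbalance_w_m2 = 0.5             # assumed TOA imbalance (Ein/s - Eout/s)
years = 10

delta_ges_joules = imbalance_w_m2 * EARTH_AREA_M2 * years * SECONDS_PER_YEAR
print(f"{delta_ges_joules:.2e} J")   # ~8e22 J over the decade

# If all of it warmed the top 700 m of ocean (~70% of the surface):
ocean_mass_kg = 0.7 * EARTH_AREA_M2 * 700 * 1025   # depth x seawater density
cp_seawater = 3990                                  # J / (kg K), approximate
delta_t = delta_ges_joules / (ocean_mass_kg * cp_seawater)
print(f"{delta_t:.3f} K")            # roughly 0.08 K
```

        The small bulk number is the point of measuring global warming in ocean heat content rather than surface temperature alone: a modest flux imbalance stores an enormous amount of energy while moving the bulk temperature only slightly.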

        Energy is everything in climate change.

        The assumption that clouds change only in response to global warming is wrong. I will quote from NASA/GISS: the ‘overall slow decrease of upwelling SW flux from the mid-1980’s until the end of the 1990’s and subsequent increase from 2000 onwards appear to be caused, primarily, by changes in global cloud cover (although there is a small increase of cloud optical thickness after 2000) and is confirmed by the ERBS measurements.’
        http://isccp.giss.nasa.gov/projects/browse_fc.html

        Ignoring peaks caused by volcanoes, the extremes of the ‘slow decrease’ span about 7 W/m2 for the global mean. Given that greenhouse gas ‘forcing’ in the period was at most 0.7 W/m2, the change in cloud was ten times the greenhouse gas forcing. At the same time there was a few W/m2 of cooling from increased IR emissions.

        In abrupt (chaotic) climate change – the concept of forcing is replaced with the idea of sensitive dependence on initial conditions – small changes in initial conditions produce change that is wildly out of proportion with the initial impetus. It happens because of the interactions of systems – lithosphere, biosphere, hydrosphere, cryosphere, atmosphere and heliosphere.

      • It is difficult to classify people on these blogs. There are a large number of people here who think forcing is nothing and chaos is everything, so climate prediction is impossible. Now from your latest comments, I think you are in a different category that does suggest forcing is important. This is an area we agree on, and you don’t have to convince me that the energy budget drives things. However, ENSO is an example of internal variability, as it does not require forcing to drive it. It is just a temporary redistribution of heat in the ocean due to slowly changing interactions of large-scale wind and currents, a kind of slow wind-driven sloshing on the Pacific scale.
        I also realize we have insufficient data on clouds, but unless they are related to the forcing in some way, they won’t be doing anything on their own. Current indications are for positive cloud feedback to forcing, but obviously areas of the globe may have opposite effects, and certainly high and low clouds have different effects.

      • Yes I did laugh this time.

        Energy is everything – there is no warming or cooling without an energy imbalance.

        Chaos is a metatheory – it tells us something about the properties of complex dynamical systems – especially sensitive dependence to initial conditions. But it doesn’t tell us about the detail of what is happening in the world.

        ENSO involves feedbacks: wind, cloud, sea level pressure, surface temperature, Rossby waves – it is a complex and dynamic system in its own right, but only part of the wider global dynamic. Clouds change the global energy dynamic by reflecting more or less sunlight back into space. What initiates these short and long term changes in the Pacific (which include the PDO) is poorly understood. But forcing is the wrong concept, as chaos theory tells us. There is a small perturbation and the state shifts.

        But it has an effect on the global energy budget, and we don’t need theory when we have data. Rather, the theory should explain the data – this is the scientific method after all.

      • “However, ENSO is an example of internal variability, as it does not require forcing to drive it.”

        What if, however, the tidal effects of the moon acting on the atmosphere, oceans and solid earth were actually the driver of the atmospheric and ocean basin oscillations? The period of the interactions is predictable, as the tidal timings are known with precision, unlike most other climate forcings.

        The preoccupation with focusing only on the visible surface of the earth, while disconnecting the strongest tidal and gravitational forces in the Earth/Moon system, has led to this misunderstanding. When the lunar tidal effects that produce the teleconnections seen in so many global circulation patterns are not considered, many easy-to-answer questions are left open.

        http://tallbloke.wordpress.com/2011/01/31/richard-holle-the-big-picture/

      • I have read the NAS link you provided and I agree with it, especially when it says abrupt climate change is most likely triggered by forcing changes [I said something like that on my first February 10 post here]. Rising CO2 is a forcing change, and can trigger something. This we can agree on, at least, I hope.

  48. I want to make a general point. Things can be uncertain, and even extremely difficult to predict, without being chaotic. We can produce systems that are not stochastic where there is no more efficient way to predict the final outcome than simply watching the system evolve.

    That I cannot say with 100% certainty what the state of the system will be at any given point doesn’t mean the system is really chaotic. That I can say with some probability that one variable of the system will have some value doesn’t mean the system is completely unchaotic.

    • Chaos theory is a metatheory – it merely describes some properties of complex dynamical systems. Sensitive dependence especially.

      Where there is abrupt climate change in rainfall, temperature or SST, for instance – as we find everywhere – then it simply looks like a chaotic oscillator. So if it looks like a duck…

      But chaotic systems are in theory completely deterministic – so thinking that the dictionary meaning of chaos or randomness applies is wrong.

    • Tomas Milanovic

      Things can be uncertain and even extremely difficult to predict and not be chaos.

      You are of course trivially right.
      While it is true that chaos => unpredictability, it is not true that unpredictability => chaos.
      I believe nobody made this claim, because it would be ridiculous.
      Chaos is neither randomness nor “anything goes”.
      It is a property of the laws of nature expressed by ODE or PDE under certain conditions.

      The Lorenz equations don’t give chaotic solutions all the time. For some values of the parameters they give just banal periodic answers.
      It is like fluid flows – they are not always turbulent. What they do depends on the Reynolds number.
      But once the Reynolds number or the Lorenz coefficients cross some critical value, the solution becomes chaotic.
      The science of “transition to chaos” is almost a branch of physics in its own right, because it studies what happens when a system crosses into the chaotic domain and why.

      In that sense it is improper to talk about “chaotic systems”, because no such thing exists.
      One should talk about “solutions in the chaotic parameter domain”, but people are generally lazy (me included) and go for the short variant.
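The parameter dependence described in this comment can be checked numerically. A minimal pure-Python sketch (step size, run lengths and seeds are arbitrary choices, not canonical values): with the classic sigma = 10 and beta = 8/3, a rho below the first bifurcation sends every trajectory to the origin, while rho = 28 gives the chaotic regime in which trajectories that start 1e-8 apart end up macroscopically separated.

```python
def lorenz_step(state, dt, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """One RK4 step of the Lorenz system: dx=sigma(y-x), dy=x(rho-z)-y, dz=xy-beta*z."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(
        s + dt / 6.0 * (a + 2 * b + 2 * c + d)
        for s, a, b, c, d in zip(state, k1, k2, k3, k4)
    )

def run(rho, steps=5000, dt=0.01, state=(1.0, 1.0, 1.0)):
    """Integrate for `steps` RK4 steps and return the final state."""
    for _ in range(steps):
        state = lorenz_step(state, dt, rho=rho)
    return state

# rho = 0.5: below the first bifurcation, every trajectory decays to the origin.
quiet = run(0.5)
# rho = 28: the chaotic regime; two runs differing by 1e-8 in z diverge.
a = run(28.0, steps=2000)
b = run(28.0, steps=2000, state=(1.0, 1.0, 1.0 + 1e-8))
```

The same equations, the same code; only the parameter changes the qualitative character of the solution.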

  49. “We can produce systems that are not stochastic where there is no more efficient way to predict the final outcome than simply watching the system evolve.”

    That should read:
    We can produce systems that are not chaotic where there is no more efficient way to predict the final outcome than simply watching the system evolve.

  50. That abrupt climate change happens – that things like the Younger Dryas happen – doesn’t mean that the system is completely chaotic and that trying to assign probabilities at all levels for all variables is a worthless exercise.

  51. Tomas Milanovic

    Unfortunately I cannot spend much time on blogs during the weekend and there have been too many posts to address them all.
    I’d stress several concepts:
    – there are some “elements” that are chaotic and others that are not
    – the problem is a “boundary value problem”
    – “the temporal chaos involves spatially extended attractors” (this one is especially confused)

    As I can’t comment on them all, I’ll focus on this one:
    Over the longer term, weather becomes random simply because there are small random variations in solar input (not to mention butterflies) and what matters is the statistics of the weather – climate. And there is no sign that climate (yes it is a boundary value problem) itself is “chaotic” in any significant sense.

    This is a good example of the school that equates chaos with randomness. There are many variations on the same theme.
    These are people who either have not read or have not understood the paper I linked for this purpose: http://socrates.berkeley.edu/~phylabs/adv/ReprintsPDF/NLD%20Reprints/17%20-%20Egodic%20Theory.pdf

    First, the idea that chaos “averages out” over some time scale has been known to be wrong at least since Lorenz.
    The temporal averages of a chaotic solution are as chaotic as the solution itself.
    This is a fact, so it is about time that everybody acknowledged it.
    A linear transformation (an average) obviously can’t miraculously transform a non-linear chaotic system into a linear non-chaotic system. This is not a faint sign but rather a screaming neon that climate (an average) is chaotic too.

    There are also people who use the Lyapunov coefficient in a fully inappropriate way. This coefficient is only well defined for temporal chaos, because it measures distances in the phase space.
    It is not well defined for spatio-temporal chaos – what physical meaning would a term like exp(l.r) have, where r is some spatial direction and l a positive constant?
    There are classes of spatio-temporal chaotic models (coupled lattices, for example) which, under additional assumptions, allow one to define a kind of “spatio-temporal” Lyapunov coefficient.
    I doubt that many are familiar with this approach, which doesn’t generalise anyway, so I don’t use it, and I recommend not using a term when one doesn’t know what it means.

    So what is behind that badly formulated statement is probably an implicit belief that there are some invariant statistics in the system.
    This may happen, but it need not. In temporal chaos there are examples of both – the chaotic logistic equation has invariant statistics while the chaotic 3-body system has not.

    In spatio-temporal chaos the problem begins already with the definition. Statistics of what?

    In temporal chaos the statistics are done on the state (phase) space of the system. As the system wanders through the state space, there is a well defined quantity P(X,M)dV, where X is a degree of freedom, dV a small volume of the phase space around a point M, and P the probability density.
    If P is independent of the initial conditions and of t, then an invariant probability density exists and the system, while still chaotic and unpredictable, has a well defined probability to be in a certain state. I refer the reader again to the linked paper.
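As a concrete instance of such an invariant probability density: for the fully chaotic logistic map x -> 4x(1-x) the density is known analytically, 1/(pi*sqrt(x(1-x))), with mean exactly 1/2. A sketch (seeds and orbit lengths are arbitrary) showing that long-run time averages do not depend on the initial condition, even though individual orbits are unpredictable:

```python
def orbit_mean(x0, n=200_000, burn=1_000):
    """Long-run time average of the logistic map x -> 4x(1-x) from seed x0."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

# Two unrelated seeds give the same statistics: both time averages approach
# the mean (1/2) of the invariant density 1/(pi*sqrt(x(1-x))).
m1 = orbit_mean(0.123)
m2 = orbit_mean(0.456)
```

This is exactly the situation described above: the solution is chaotic, yet P exists and is independent of the initial condition.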

    In spatio-temporal chaos we deal with fields. The state (phase) space is infinite dimensional. dV is not defined.
    So already the basic statistical activity, which consists in counting states and putting them in bins, is difficult because each state is different. The probability of every state is 0 when the number of states tends to infinity.

    OK, so let’s assume that there can be some metric that enables us to count the states in a finite way.
    Let us remark that the handwavings of the kind I quoted never define what this metric is or why it is relevant for the dynamics. Clearly, as Spence UK rightly said, the number of degrees of freedom will be large.
    This disqualifies from the beginning all metrics with one or a few degrees of freedom.
    Unless it is proven that the system can be reduced to a few degrees of freedom, it is again just another handwaving.

    After this step – and none of the posters repeating the “everything averages out” meme here managed even this first necessary step – comes a harder task.
    Now it must be proven that the statistics of the states follow a density
    P(X,M)dV where P depends neither on t nor on the initial conditions X0.
    There is not the beginning of a reason why it should be so.
    Actually there are many reasons why it shouldn’t.
    For instance, the existence of such an invariant probability necessitates the existence of invariant attractors (of course these attractors are in no way “spatial” structures, as somebody wrote). I refer again to the link.
    But the topology of the attractors depends on the coefficients of the equations, which vary with time. So this is rather a reason to think that such an invariant probability doesn’t exist.

    Last, I would like somebody who believes in that to clearly define what he means by “the climate is a boundary value problem”.
    Mathematically it is obviously nonsense. PDEs can’t be solved without initial conditions. And as for the “boundaries” of the system, what are they – the earth’s surface on one side and an arbitrarily large sphere on the other?
    I suspect that this statement just means the same thing that I commented on above – all solutions tend asymptotically to the same F0(x,y,z,t) when t is large.
    If my interpretation is right, this would of course need a demonstration, and I doubt strongly that it can be true for a chaotic system.

    A last comment for those who refer to “observations” and want to draw consequences from them.
    It is only over a short period (50 years or so) that we have data with a sufficiently high number of degrees of freedom, and at least some spatial resolution, to hope to represent the system in some way.
    It is a well known result that when one looks for the right dimensionality of a chaotic system, a minimum amount of data is necessary. This minimum is rather high (several centuries). With less than the minimum, the analysis yields only artefacts.

    The data that we have over longer periods are dimensionally so poor (generally only 1 degree of freedom), and have so little spatial resolution, that one can interpret them in about any way one wants. For instance the degree of freedom that I consider to act at order 0, the clouds, is completely unknown.

    So the answer that I would give to the reader who asked “Is it proven that the system is chaotic?” would be “Yes and at all time scales.”
    Whether the system can have invariant statistics is an open question whose difficulty is very high.

    • The Earth’s climate system includes chaotic elements that play an important role on some timescales, but are dominated by non-chaotic aspects of CO2-mediated forcing on centennial timescales. The internal fluctuations have more or less averaged out over the course of the past century.

        Averaging out is not only possible, but it is inevitable, although the necessary intervals will vary depending on the particular phenomenon. The proof does not lie in mathematics but in climate physics. Internal variations cannot alter the total energy in the system, but only redistribute it. Ultimately, over time, any perturbation of the distribution will induce it to return to a previous equilibrium – that is what would constitute an averaging out.

      This necessity would not apply to a climate in a metastable state capable of settling into two different stable equilibria if slightly perturbed. Such states may have prevailed in the distant past, but there is nothing about the current Holocene climate to suggest that more than a single equilibrium is within range – we are not close to a new glaciation nor a new “hothouse climate” (although the latter might become possible if continued greenhouse gas emissions were to remain unmitigated for a prolonged interval).

      The appropriate perspective is to recognize the timescales at which chaotic elements dominate and those dominated by the rather predictable trends from anthropogenic forcings. Even the latter might in theory lead to tipping points with abrupt climate shifts, but based on paleoclimatologic history, these are far more likely to be in the same direction as the trends than in the opposite direction.

      • Tomas Milanovic

        Fred, this begins to be like the discussion with Claes Johnson.
        It is not by repeating the same wrong things over and over that they will begin to be magically less wrong.

        The Earth’s climate system includes chaotic elements that play an important role on some timescales, but are dominated by non-chaotic aspects of CO2
        There are no chaotic and non-chaotic “elements”. What is chaotic or non-chaotic is the solution, e.g. the states of all the fields as they evolve in time.
        CO2 is a parameter (degree of freedom); it, and the way it is coupled to other fields, has nothing chaotic or non-chaotic in it.
        Whether the solutions are chaotic or not is a yes-or-no question; there is no such thing as “it’s just a bit chaotic sometimes”. It is by using such inappropriate vocabulary that people get confused.

        Ultimately, over time, any perturbation of the distribution will induce it to return to a previous equilibrium – that is what would constitute an averaging out.

        Also wrong. The system is not in equilibrium, never was and never will be. That is even its defining feature. It can’t “return” to some equilibrium it has never been in to begin with. I thought at least this basic truth would be known.
        On the contrary, the system has an infinity of non-equilibrium states and wanders among them. In temporal chaos these states could lie on an attractor but would still all be out of equilibrium. Nothing “averages out”.

        This necessity would not apply to a climate in a metastable state capable of settling into two different stable equilibria if slightly perturbed.

        Sorry, but this doesn’t begin to make sense. Is your mental picture of the system really that of a pencil on its tip that can only fall right or left?
        If so, then you probably can only misunderstand all that is being said on this thread.

      • Tom,
        You are going to say I am wrong to bring in the Lorenz model to this discussion but anyway I will. The climate system is in some way similar, except that the constants in the Lorenz model are being changed by the forcing (CO2 content can be regarded as a slowly varying constant in those equations). Now we know that the probability of going to different attractors depends on those constants, and the attractors themselves shift slowly and predictably in response to those constants. This is what climate does. There is a predictable aspect, which is the attractor. In climate terms a world with higher CO2 has a warmer attractor. There may be an alternative, for example, high-albedo cooler attractor it could still go to, and I would like to see evidence for such, but none has been forthcoming.

      • Tomas Milanovic

        Jim

        You were right :)
        However, in some qualitative/philosophical sense I can accept this picture.
        The problem being, of course, that the attractors don’t “predictably transform”.
        This is again the trap of linear thinking – small variations create small answers. But we deal here with non-linear systems, where this is not true.

        There is a paper that studies exactly that (I won’t link it as it is too technical) – the Lorenz system with continuously varying constants.
        The changes can be pretty dramatic; the system can even stop being chaotic, and the attractor is destroyed!
        But, as you rightly expected I would say, the biggest problem with this picture is that you have no convenient invariant “attractors” in spatio-temporal chaos, so it is hard to turn the analogy into something meaningful.

      • Tom, I think your argument is parallel with the science here. Everyone agrees big changes can happen fast on climate time-scales. Some mechanisms for that are hypothesized, e.g. methane release from polar regions, increased melting of Greenland leading to stopping the Gulf Stream, rapid reduction of Arctic sea-ice and its positive feedback, collapse of Antarctic ice shelves, loss of the Amazon, large volcanoes, asteroid impacts, unexpected solar variation. Climate models can’t predict all of these types of things, but can be, and have been, used to evaluate their effects. If any of these things happen by 2100, they may be as important as CO2 in some regions at least. Given what we know about CO2 and its future variation, this is the effect that is predicted by the IPCC, but for sure it is realized other things of an unpredictable nature may have an influence, or not. If your post is to suggest that climate scientists aren’t aware of these possibilities, it is incorrect.

      • If your post is to suggest that climate scientists aren’t aware of these possibilities, it is incorrect.

        No kidding. Less than 1% of climate scientists went up in smoke here.

        One of the recommended papers I read last night said these notions “should be entertained.” As in, for its entertainment value. What we have here is the mathematical foundation for The Day After Tomorrow. Fantastic.

        Are there any climate scientists who think a return to equilibrium means a return to exactly the same state? As in, what happened to this wandering “climate scientist”.

      • Actually, Jeff Weiss comes from the community of nonlinear geosciences, which does look at climate problems and issues related to the circulation of the oceans and atmosphere. The fact that there is such a community investigating such problems does not imply any particular literacy or understanding of this issue among IPCC authors and others who interpret and apply climate model output.

      • “The fact that there is such a community investigating such problems does not imply any particular literacy or understanding of this issue among IPCC authors”

        which of course was not being claimed… next…

        Even in the simple Lorenz case, I think that making the attractor a bit “warmer,” presumably by moving one of its edges in the “warmer” direction, has no implications for the behavior of any finite segment of any trajectory within the attractor. You still don’t know what trajectory you are on or where it is going.

        To take an extreme case, suppose the ice ages are chaotic oscillations (a real possibility). Making the attractor warmer might simply mean that some parts of some ice ages are warmer than they might otherwise be. Or even worse, some parts that never occur would have been warmer if they had occurred.

      • By my analogy the ice attractor may have been removed completely by the CO2 increase, so we are only left with an attractor related to the previous interglacial one, or the very stable one prevalent in the pre-Ice-Age and pre-Antarctica Eocene period that lasted tens of millions of years.

      • Indeed. Everything is possible when we do not know what is going on. That is what makes AGW pure speculation. I think that how the probability of the next ice age has changed with the CO2 increase, if at all, should be a central question of climate change research. It is not, instead we get absurdly high probabilities for runaway warming.

      • David Archer wrote a book called “The Long Thaw” about the possibility of the extra CO2 making the next Ice Age not happen.

      • The climate system is in some way similar, except that the constants in the Lorenz model are being changed by the forcing (CO2 content can be regarded as a slowly varying constant in those equations).

        The essence of all low-dimensional systems of non-linear ODES that exhibit chaotic response is an extremely delicate balance between the energy input into the systems and the consumption of that input energy for maintaining the response of the systems. If the energy input increases, by increasing the Ra parameter for example, there is no certainty that the solution will not become unbounded. Equally important, there is no certainty that the subsequent response will in fact be a chaotic response. Recall that in the case of the original Lorenz system of 1963, chaotic response is not obtained for all values of the parameters.

        You simply cannot extrapolate with these systems without doing the actual calculations. It is solely the results of the calculations that provide the evidence of chaotic response. Playing with the response by use of words is simply incorrect. Calculations are the only method to ensure that any extrapolations will in fact exhibit the assumed response. There is no other way. In a sense, this necessity is a result of the non-linearity of the equations, just as the delicate balance between energy input and consumption of that energy is a hallmark of chaotic response.

        Also recall that none of the systems of ODEs that exhibit chaotic response can exhibit a trend.

        I suggest the results of this Google Scholar search for additional information. And this one, too.

      • Tomas, I think Fred’s (and others’) responses suggest a significant lack of understanding of chaotic math. He seems to be confusing chaos theory with perturbation theory. Sensitivity to initial conditions is not a perturbation. There is no butterfly in the butterfly effect. The intrinsic unpredictability is epistemic, because the unknown delta in the initial conditions is infinitesimal. (That would make the butterfly quite small indeed.) Like fractals, chaos is a mathematical property, not a causal one. That is why it was discovered by the mathematician Poincaré.
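For a temporal system, the sensitive dependence discussed here is quantified by the Lyapunov exponent, the average exponential stretching rate along an orbit. A sketch for the logistic map x -> 4x(1-x), whose exponent is known analytically to be ln 2 (seed and orbit length are arbitrary choices):

```python
import math

def lyapunov_logistic(x0=0.321, n=200_000, burn=1_000):
    """Estimate the Lyapunov exponent of x -> 4x(1-x) as the orbit average
    of ln|f'(x)|, with f'(x) = 4 - 8x. The analytic value is ln 2."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()  # positive => exponential divergence of nearby orbits
```

A positive exponent means an initial uncertainty, however infinitesimal, grows exponentially, which is exactly why the unpredictability is epistemic rather than a response to any finite perturbation.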

      • Tomas Milanovic

        Yes David! But this thread is here to try. At least that’s what I do.

      • David – see my response to Tomas below regarding the quantitative importance of chaotic behavior – important at some timescales but minor at others.

      • Fred, it seems to me that you simply fail to respond to Tomas’s points. Your entire argument seems to be pro-AGW slogans to the effect that we damn well know what the role of CO2 is and what the role of chaos is, and that CO2 is all that matters. I disagree strongly. We have virtually no understanding of the chaotic aspects of climate.

        I am particularly critical of your claim that there is “massive evidence” for “the strong role of CO2-mediated warming” that is “confirmed by satellite.” As I have said before, I see no evidence of CO2 warming in the satellite record at all. There was no warming from the beginning until the 1998-2001 ENSO and there has been none since, except that the average is a bit higher afterward. It is a step function, or in plain English a jump.

        So what are you talking about? Where is the “massive evidence” of CO2’s strong role in the satellite record? The step up in average temperatures during the ENSO cycle looks a lot more like a chaotic abrupt event than it does like CO2 induced warming. GHG warming does not jump.

      • I’ll let others review the temperature data starting in the mid-1800s on the surface and including satellite data since 1979. They all show a continuing warming trend since around 1976 through 2010, punctuated mainly by ENSO phenomena which average out.

        (For clarity regarding my reference to evidence “confirmed by satellite”, in that particular case I was referring to measurements of radiative flux that confirm the greenhouse effect).

        I have the sense we are repeating some of what we said earlier. I don’t want to be unfair, David, but I believe that if you are implying the absence of a significant CO2-mediated warming, you are going to risk not being taken seriously. Clearly, comments within a thread like this are not an adequate venue for citing evidence from hundreds of millions of years and thousands of separate studies involving independent approaches to the issue. They are a good place for discussing details, once agreement on the basic foundation of modern climate science is achieved. If you would like to discuss those details, I believe I’m well qualified to participate, but that is up to you.

      • “Fred, this begins to be like the discussion with Claes Johnson.
        It is not by repeating the same wrong things over and over that they will begin to be magically less wrong.

        The Earth’s climate system includes chaotic elements that play an important role on some timescales, but are dominated by non-chaotic aspects of CO2
        There are no chaotic and non-chaotic “elements”. What is chaotic or non-chaotic is the solution, e.g. the states of all the fields as they evolve in time.
        CO2 is a parameter (degree of freedom); it, and the way it is coupled to other fields, has nothing chaotic or non-chaotic in it.
        Whether the solutions are chaotic or not is a yes-or-no question; there is no such thing as “it’s just a bit chaotic sometimes”. It is by using such inappropriate vocabulary that people get confused.”

        If the amount of energy reaching the Earth from the sun is greatly decreased, is there a high probability that the temperature of the Earth will decrease?

      • Tomas Milanovic

        Peter
        If the amount of energy reaching the Earth from the sun is greatly decreased, is there a high probability that the temperature of the Earth will decrease?

        This is a tricky question because many parts are undefined – “greatly”, “high”, “will be (when)”.
        Also, is all else being equal?
        If there is a step (for example, halving at t0 and staying at half forever), I think the system will go to states with lower temperatures.
        What probability for what temperature where? Frankly, I don’t know, and I don’t think it can be answered.
        Are there places where it decreases much and others where it decreases little? Yes. Where, what? I don’t know.
        Is there an infinity of possible states after a time T, each with undefined probability? Definitely yes. Are most of them such that T2(x,y,z,T) < T1(x,y,z,T)? Yes.

      • Let’s define it more closely.

        Planetary warming or cooling occurs only as a result of an energy imbalance – a dynamic energy disequilibrium. Energy in less energy out equals the change in energy stored – mostly as heat in oceans and atmosphere.

        This can be expressed as:

        Ein/s – Eout/s = d(Es)/dt

        So Ein decreases because of orbital changes? To stay at the same temperature we would need an equal change in energy out. It is perhaps feasible if there were an equivalent (in an energy sense) decrease in Earth albedo. If we were at the same temperature that would seem to be logically improbable. But to rule it out on the basis of a climate thought experiment is a modern neuropathology that I will not indulge in.

        However, it would seem much more likely that the planet would cool to a new state of dynamic energy disequilibrium.

      • The point is that there is some level of predictability.

        Despite the chaos and complexity, if we put restraints on the system, everybody feels comfortable saying that there is some probability that X will (or will not) happen.

        Even in the case of Tomas, who pointed out that there were an infinite number of states with cooling and that he wouldn’t know how to assign probabilities to them.

        He did, though, essentially eliminate an infinite number of states in which the Earth would warm.

        This is related to the point that Eli made:

        Climate scientists in their models are restricting the system. They are assuming some “average” output of the sun, some “average” volcanic eruptions, some “average” plate tectonic activity, no nuclear war, no asteroid, meteor, or comet strikes. No aliens with death rays. No killer plant viruses that kill essentially all trees on the planet.

        From a chaos theory point of view, all or none of these restraints might be “reasonable”, but from a more practical standpoint they work because we have no good way of assigning a probability to any of them happening, so practically we can ignore them; and if many of them did happen, we’d have larger issues than climate change.

        http://judithcurry.com/2011/02/10/spatio-temporal-chaos/#comment-41806

        The chaotic nature of climate is a huge problem for predictions, but if you start putting restraints on the paths and the bounds it becomes a doable problem.

        Which is why we see skillful climate models.

      • I was talking in terms of probability not prediction

        See here
        http://judithcurry.com/2011/02/10/spatio-temporal-chaos/#comment-41268

        and see this – http://www.pnas.org/content/104/21/8709.long

        Also have a look at the models page here – http://www.climate4you.com/

        I think you might be being a bit optimistic.

      • Peter – The answer is that the Earth’s temperature will of course decline greatly, and that decline can be quantified fairly accurately. The ultimate temperature will resemble that of planets farther from the sun than we are.

        Your greater point is well taken – chaos is a real phenomenon, but the uncertainty it mediates has been exaggerated for a variety of reasons. Some involve a poor understanding of climate dynamics, whereas others may involve a wish not to accept the evidence for a strong role of anthropogenic greenhouse gas emissions in determining the temperature of our climate system.

      • In fact, to elaborate a bit more – at very low solar irradiance, water vapor and clouds will disappear almost completely from the atmosphere, and the greenhouse effect will be dominated almost entirely by CO2. This will vary from time to time due to changes in volcanic activity, but over millions of years, CO2 will fluctuate around an equilibrium value. So, therefore, will the temperature, which would be amenable to calculation from the solar irradiance, the CO2 concentration, the surface and atmospheric albedo, and the Stefan-Boltzmann law.

        (At even lower temperatures, CO2 would condense to dry ice, as on Mars, and we would once again face the need to compute the balance between atmospheric and surface quantities of an IR-absorbing substance)
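The calculation alluded to here is short enough to sketch with the Stefan-Boltzmann law alone (no greenhouse term; the irradiance and albedo figures are standard textbook values used purely for illustration):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(solar_w_m2=1361.0, albedo=0.30):
    """Effective (no-greenhouse) temperature: T = (S(1-a)/(4*sigma))**0.25."""
    absorbed = solar_w_m2 * (1.0 - albedo) / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

t_now = effective_temperature()            # about 255 K for these inputs
t_dim = effective_temperature(1361.0 / 2)  # halved sun: T falls by 2**-0.25
```

Because T scales as the fourth root of absorbed flux, halving the irradiance lowers the effective temperature by a factor of about 0.84, not by half.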

      • “will fluctuate around an equilibrium value.”

        But your equilibrium value is what? A measured value? Or a real state? This phrase reveals the invalidity of your position. Again I repeat, you are stuck in deterministic analysis.

      • Tomas – Here is my perception. I’ll let other readers judge whether it is similar to theirs.

        In my view, you have exhibited a tendency to substitute accusations for evidence, stated with a dogmatic certainty that the evidence itself fails to justify, and often contradicts.

        Your area of knowledge and mine differ. You understand spatiotemporal chaos far better than I. I understand climate far better than you, and that understanding allows me to know when you misstate essential facts, as I believe you have done here and elsewhere.

        Two conclusions are well supported by the recent and paleoclimatologic record. First, chaotic fluctuations of the type you reference do in fact average out (and as I explained, they must ultimately do so). Second, for millennia, our climate has been relatively close to equilibrium, as discerned from the tendency of fluctuations to return to a steadier baseline, from energy balance studies, and from observational data on feedbacks. As I mentioned earlier, this is a matter of the physical evidence.

        The evidence also reveals a balance at different times between the role of chaos and non-chaotic behavior. In particular, long term temperature responses to CO2 have been modeled with reasonable accuracy regardless of the initial conditions imposed – the trends have converged over time toward the same final values. This does not support a prominent role for chaos over the assessed timescales.

        It seems to me, though, that you face a more fundamental problem – your unwillingness to acknowledge the strong role of CO2-mediated warming in the climate change of the past century, despite massive evidence documenting the importance of that role – evidence from theory confirmed by satellite and ground-based data among other sources. I believe that as long as you are determined to rationalize away that reality, you will engage in what is ultimately a futile effort. The “greenhouse” effect of our anthropogenic emissions is not going to disappear regardless of whether you wish to disbelieve it.

        At certain stages of an exchange of comments, I often conclude that we will begin to repeat ourselves with little additional benefit. We may have reached that point here. I will be content, however, to leave it in the hands of readers with a strong background knowledge of climate theory and confirmatory data to judge for themselves the relative roles of chaotic fluctuation and anthropogenic CO2 and other emissions in determining our past climate and in their likely influence over the remainder of this century and beyond. I have found that practice to work well in the past, and so I expect it will suffice here.

      • Tomas Milanovic

        Second, for millennia, our climate has been relatively close to equilibrium, as discerned from the tendency of fluctuations to return to a steadier baseline,

        I think it doesn’t make sense to continue.
        You repeat all the time the same wrong thing.
        The system has never been, is not and will never be in equilibrium unless you completely redefine the sense of the word equilibrium.
        This is not up for discussion.

      • Tomas – I’ll let readers review what we each have said so that they can arrive at their own conclusions. Later, though, I hope to expand a bit on how the chaotic fluctuations ultimately result in a return to equilibrium through a variety of well-characterized feedback mechanisms.

      • Fred,

        A system which has a feedback bringing it back to an equilibrium is, by definition, not chaotic. A system which is chaotic, by definition, does not return to an equilibrium. You cannot pick and mix these things.

      • The climate system invariably tends to return toward equilibrium via feedbacks. Chaotic behavior may alter the magnitude and timing of the climate response, but not its ultimate destination, which is determined by conservation of energy and the Stefan-Boltzmann law. There is no physical mechanism permitting a climate to persist with the same total energy in a variety of different states – that can only be temporary.

        There is an added wrinkle of indirect effects. An internal variation can expose climate energy to gain or loss at the top of the atmosphere, so that total energy changes (for example, an El Nino warming event can increase the availability of heat capable of escaping to space). Even there, however, feedbacks will tend to restore the energy balance and cause the climate to seek the previous equilibrium.
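
The feedback argument above can be made concrete with a toy zero-dimensional energy-balance model (an editorial sketch with illustrative round numbers, not a claim about the real climate system): the TOA imbalance plays the role of the restoring signal, and a temperature kicked away from equilibrium relaxes back through the Stefan-Boltzmann response.

```python
# Toy zero-dimensional energy balance: C dT/dt = S(1 - a)/4 - eps*sigma*T^4.
# All values are illustrative round numbers, not tuned to observations.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3
EPS = 0.61        # effective emissivity standing in for the greenhouse effect
C = 1.0e8         # heat capacity, J m^-2 K^-1 (mixed-layer-ocean scale)

def integrate(T0, years, dt_days=1.0):
    """Step the model forward with forward Euler and return the final T."""
    dt = dt_days * 86400.0
    T = T0
    for _ in range(int(years * 365 / dt_days)):
        imbalance = S * (1.0 - ALBEDO) / 4.0 - EPS * SIGMA * T**4  # the "error signal"
        T += dt * imbalance / C
    return T

# Analytic equilibrium (~288 K with these numbers), then a 5 K kick:
T_eq = (S * (1.0 - ALBEDO) / (4.0 * EPS * SIGMA)) ** 0.25
T_after = integrate(T_eq + 5.0, years=30)
print(T_eq, T_after)
```

With these values the e-folding time is roughly a year, so the 5 K excursion has vanished after a few simulated decades. Whether the real system behaves this simply is, of course, exactly what is being debated in this thread.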

      • Fred,

        None of the evidence-free verbiage you provided changes what I said. What I was describing is a completely general application of chaos theory, and we do not even need to invoke the specifics of climate to discuss it.

        If you are claiming that a system returns to some equilibrium, then you are claiming that system is not chaotic.

        If you are claiming that the system includes chaos, then it does not attempt to return to some equilibrium state.

        This is non-negotiable. It is built into the very definition of chaos. You are at liberty to argue that the system is not chaotic. You do not get to redefine chaos into something it is not. Well, not unless you are happy to make it look like you have no understanding of the subject at hand.

      • The climate system tends to return to equilibrium. Even so, I believe that chaotic behavior is sometimes observed. I would be interested in any empirical evidence that could refute either of those statements. My own take on it is that the chaotic behavior can lead to a variety of different states, but that these are ultimately unstable and will eventually tend to return toward the previous equilibrium.

      • Fred, what you are describing remains unrelated to chaos theory. Let me try to expand on Dan Hughes’ point below with a contrived example.

        Let us assume in our contrived example that the atmosphere does move to some equilibrium condition, through radiative and convective processes (I’m not saying this is true, but let’s assume it does for the purposes of this example).

        It gets knocked out of this “equilibrium” temporarily by some cloud and rain cover. However, in this period, the vegetation in the region changes – it flourishes.

        The problem is, the water vapour content of the atmosphere is heavily dominated by evapotranspiration. So because of the change in vegetation cover, there is a change in the average amount of water vapour in the atmosphere. This in turn shifts the equilibrium that we have a priori assumed exists due to the physics of the atmosphere, as water vapour is a far more powerful GHG than CO2. This warms the local area.

        Now, the only way we can get back to the previous equilibrium is for the earth to “take action” to kill off the vegetation to reduce the water vapour content in the atmosphere. Clearly, there is nothing in the earth’s system to cause this to happen. The earth has no way of getting back to its previous state, and no reason to. It is now in a new, warmer state, thanks to the GHG change.

        So the “state” includes not only atmospheric properties but also things like vegetation cover, without which you cannot determine the “current” amount of GHGs in the atmosphere. There are no strict controls, and the earth continues on its new trajectory, which is not an equilibrium condition and has no way to get back to its original state.

        I’ve picked vegetation influence on GHG concentrations in a region as one example. There are millions more interactions like this in the atmosphere, with no simple linear system seeking to “get back” to its original state. And this is ignoring all of the issues surrounding Navier-Stokes etc. etc.
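
The no-way-back argument in this comment can be caricatured with the simplest bistable system, dx/dt = x − x³ (a contrived sketch in the same spirit as the vegetation example; none of the numbers mean anything physical). The system has stable states at x = −1 and x = +1; a one-off kick across the ridge at x = 0 leaves it permanently in the other basin, with no “error signal” pulling it home.

```python
# Double-well toy: dx/dt = x - x**3 has stable equilibria at -1 and +1.
# A contrived caricature of the vegetation story above; nothing physical.

def settle(x0, t_end=50.0, dt=0.01):
    """Forward-Euler integrate dx/dt = x - x**3 and return the final state."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (x - x**3)
    return x

before = settle(-1.0)          # starts and stays at the "old" state
kicked = settle(-1.0 + 1.5)    # a one-off perturbation past the ridge at 0
print(before, kicked)          # the kicked system settles near +1, not -1
```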

      • I’m afraid these columns are growing too long, with too few other readers caring what we say. To avoid adding to the excess, I’d prefer to respond specifically to your hypothetical example if you flesh it out with numbers and causes – what caused the triggering event, what quantitative changes did it evoke, what were the secondary consequences, etc.? That way I can specify the probable feedbacks. This applies to vegetation as it does to other variables – changes provoke feedbacks, because those feedbacks are the reason why the equilibrium is where it is and not somewhere else. If they did that originally, they will do it again.

        That is the nature of negative feedback systems, of which our climate is a prime example (note that “negative feedback” includes the Planck Response – in essence, the Stefan-Boltzmann response – and does not contradict the concept of “positive feedbacks” as amplifiers of CO2-mediated temperature change, where “positive” is used to denote feedbacks other than the Planck Response).

        In Dan Hughes’s terminology, changes evoke “error signals”, which in turn evoke error-correction responses.

        Where I do find myself in some practical agreement with you is in the notion that feedbacks sometimes operate over many millennia and can lead to the false impression that an unstable state is permanent. Quantitatively speaking, these extremely drawn out responses should have little import for our perspective regarding current climate change over the next century.

        Incidentally, some of these concepts – equilibrium (stable, unstable and metastable) – are addressed in Ray Pierrehumbert’s new book, Principles of Planetary Climate.

        Finally, let me ask you a question not as a form of argument, but for your considered opinion. If equilibrium states are states of maximum entropy, in what direction would a system respond to fluctuations occurring exclusively within the system, in the absence of an external perturbation such as a climate forcing?

      • Fred,

        My cartoon example is a cartoon example, so adding quantification serves no purpose or value. But you understand that there is no “error signal” in the vegetation on any scale, from decadal to millennial. The vegetation may change in the future for unrelated reasons (part of the trajectory of the dynamical system), but there is no “error signal” to speak of.

        On entropy maximisation, I am in full agreement with the analysis in this article:
        Hurst-Kolmogorov dynamics as a result of extremal entropy production

        (Click through to preprint). This article is entirely consistent with my example above, and points I have made elsewhere on this thread.

      • Changes in vegetation are a potent source of error signals, and have exerted major climate impacts over the paleoclimatologic record.

        You didn’t answer my question regarding the relationship between maximum entropy, equilibrium, and the direction in which a system will change due exclusively to internal fluctuations.

      • Vegetation is only a potent source of “error signals” in the context of returning to a prior equilibrium if you doubt the theory of evolution.

        I have answered your question regarding maximising entropy. The paper I linked to includes a complete formalisation of maximum entropy with regard to geophysical series. It exactly defines how I would expect a geophysical time series to respond as a consequence of maximum entropy. Everything you need to know is contained in those equations.

      • Fred, I think there are no physical phenomena or processes within the Earth’s climate systems that can generate an error signal between their present state and an ultimate equilibrium state. Absent an error signal, or a significant reduction in energy input, the system will ring. If there is a significant reduction in energy input, the systems will also ring, but at a lower energy level and maybe smaller amplitudes.

        We all see this appeal to an ultimate equilibrium state all the time. Yet, for me, the condition has never been quantified. It is clear that in the classical sense of equilibrium, the Earth’s systems have never attained such a state. Let me try to give some idea of what equilibrium state means to me. It is the complete absence of any gradients in all possible driving potentials, both within all sub-systems and between sub-systems. In a zeroth-order cut, this means equal pressure and equal temperature everywhere for all time. Of course there are a multitude of driving potentials in the real world in addition to pressure and temperature.

        Has never happened, will never happen.

        If the use of equilibrium in climate science refers solely to a radiative energy balance between energy input and energy output, what are the phenomena and processes that act to drive the system to such a state and ensure maintenance of such a state? Again, in the absence of an error signal, one that relates directly to radiative energy input and output, how can such a state be strictly maintained and not wander around?

      • Dan – Your question is very broad. For global temperature, the error signal is the TOA flux imbalance. This leads, via the radiative transfer equations, to atmospheric and surface warming that acts to eliminate the imbalance. This has been quantified, although the accuracy of the quantitation is beset with some technical problems. The reality of the principle is not in doubt though.
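
For concreteness, the size of this error signal can be put in rough numbers (illustrative textbook-scale values, not the output of any model): with an effective emission temperature near 255 K, the Planck response is 4σT³, the extra outgoing flux per kelvin of warming that acts to cancel an imbalance.

```python
# Back-of-envelope "error signal" arithmetic (illustrative values only).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3

absorbed = S * (1.0 - ALBEDO) / 4.0   # ~238 W m^-2 absorbed on average
T_e = (absorbed / SIGMA) ** 0.25      # effective emission temperature, ~255 K
planck = 4.0 * SIGMA * T_e**3         # outgoing flux per kelvin, ~3.8 W m^-2 K^-1
dT = 1.0 / planck                     # warming that erases a 1 W m^-2 imbalance
print(T_e, planck, dT)
```

This is the Planck response alone; on the argument above, other feedbacks modify the size of the response but not its restoring direction.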

      • The atmosphere is never in equilibrium, but the earth system always obeys conservation of energy and other general conservation laws. This strongly limits the possible variations and certainly guarantees that something that can be called an average can be observed for all important climate variables over some time scales. It is also certain that increased energy in the earth system has an effect on these averages.

        Whatever is known about the properties of spatio-temporal chaos in the solutions of some systems of partial differential equations is not going to invalidate conservation laws or lead to rapid fluctuations in the energy content of the earth system.

        The nonlinearities do not only increase the uncertainties; other nonlinearities act to maintain the conservation laws. These nonlinearities bring the system closer to a boundary value problem, although claiming that it really is one would be an oversimplification.

        The strength of possible transitions and oscillations is limited by changes that are possible in the balance of various subsystems. My intuition puts the largest uncertainty in the role of deep oceans. Changes in downwelling and upwelling currents might well have significant oscillatory (periodic or non-periodic) effects over a wide range of periods, but it is certainly possible to set some limits on the strength of these effects.

        I do not see any reason to believe that significant oscillations could not appear with typical periods of, say, 100 – 300 years. I do not think that the historical data tells us much about that, except that their influence cannot be much stronger than what is seen on the multidecadal scale. I do not propose that such oscillations are an important part of the recent warming, but I think that their possibility is a further obstacle to drawing conclusions from the historical time series.

      • Pekka – If we disagree, it is mainly a matter of emphasis. I would not dispute 100-300 year oscillations as a possibility, but my reading of the climate records reveals no evidence for a significant role for such oscillations – or at least, no role substantial enough on a global scale to have created a discernible signal outside of changes occasioned by solar variations, volcanism, and other known entities (changes on a regional scale are a very different matter and may have involved such oscillations).

        As a technical matter, the term “equilibrium” may be inappropriate for our climate because we receive energy from the sun and return it elsewhere – to space. I would prefer “steady state”, but “equilibrium” is part of common parlance and so it’s reasonable to continue to refer to it. I’m sure you are right that it will have been rare for a perfect atmospheric equilibrium to have existed for any extended period, but reasonably close approximations are not improbable during times when climate was not measurably changing (again with reference only to global changes). As you know, the IPCC assumed something approximating an equilibrium for deriving forcing estimates based on changes since 1750. Current TOA energy balance studies suggest that we have deviated since then in a positive (warming) direction, but even today, the imbalances appear to be only moderate (ignoring spurious CERES data), and the climate is responding in a way to suggest a rather small imbalance.

        I certainly agree that changes involving the deep ocean play a critical role in determining climate balances. These changes tend to play out over millennia, and unless the deep ocean is far out of balance, its effect over decadal timescales is likely to be relatively small.

      • Fred,
        You note that your reading of the climate records reveals no evidence for a significant role for such oscillations.

        My question is, how strong would such an effect have to be to make finding evidence for it likely? My limit “their influence cannot be much stronger than what is seen on multidecadal scale” is an intuitive guess at the answer.

      • Pekka,

        I disagree that there is no evidence that oscillations on the centennial (and larger) scales – there is considerable evidence within proxy data of self-similarity, and one can argue (even within the constraints of conservation of energy) from the principle of maximum entropy that it might be expected that natural oscillations become larger at lower frequencies.

        See this paper for a discussion (click on link and through to “presentation”), which demonstrates that over half of the variability in the glacial / interglacial cycle is self-similar natural variability:
        http://itia.ntua.gr/en/docinfo/998/

      • Aargh, there is a mistake in the first sentence: it should read differently. I am trying to say, I disagree with Fred that there is no evidence of large swings at 100+ year scales, and I also disagree that one would intuitively expect oscillations to be no larger than decadal scale oscillations.

        Hopefully the rest of the post should then make sense!

        Your intuitive guess seems plausible. We might have missed even larger oscillations due to the imprecise nature of climate data from earlier centuries. From my perspective, the important question is the balance between variations of this type and trends driven by known factors such as anthropogenic forcings. Based on the principles of radiative physics and reasonable estimates of feedbacks and climate sensitivity, I would say that any current oscillations beyond those we already know can’t be so strong that they leave little or no room for what anthropogenic emissions are contributing to the temperature trend. Postulating internal climate variations makes sense. Suggesting that they eliminate an important role for anthropogenic warming doesn’t.

      • Fred,
        I was referring to the Bayesian interpretation in an earlier post. Taking into account the possibility of longer oscillations has a small effect on the posterior likelihood of various values of climate sensitivity. The effect is not large, as there is no prior reason to expect a positive phase of a significant slow oscillation for the period 1970-2000, but including more alternatives for natural variations certainly has some influence on the resulting likelihoods.

      • ‘Large, abrupt climate changes have affected hemispheric to global regions repeatedly, as shown by numerous paleoclimate records (Broecker, 1995, 1997). Changes of up to 16°C and a factor of 2 in precipitation have occurred in some places in periods as short as decades to years (Alley and Clark, 1999; Lang et al., 1999).’ http://www.nap.edu/openbook.php?record_id=10136&page=10

        I think you misunderstand the timescales involved.

      • My recollection is that such abrupt shifts, on a global scale, have been observed (e.g., as D/O events) during glacial periods, but shifts of the same magnitude and global extent have been rare or absent during interglacials. What examples exist to the contrary, and how frequently have they occurred?

    • Tomas, thanks for explaining the relationship between Lyapunov exponents and spatio-temporal chaos. My knowledge is mainly limited to temporal chaos and I have some catching up to do on spatio-temporal chaos. That said, I would argue knowledge of temporal chaos alone is sufficient to see many of the basic errors made in climate science and predictability of modelling. The spatio-temporal chaos side is interesting though and must go on my list of things to read up about :)

    • Tomas – you claimed to focus on my comment, but *completely ignored* the central element, which you even quoted:

      “small random variations in solar input (not to mention butterflies)” [as what makes weather random over the long term]

      Chaos as you have discussed it requires fixed control parameters (absolutely constant solar input) and no external sources of variation not accounted for in the equations (no butterflies). You gave zero attention in your supposed response to my comment to this central issue. Others here have been accused of being non-responsive, but I have to say that is pretty non-responsive on your part.

      The fact is as soon as there is any external perturbation of a chaotic system not accounted for in the dynamical equations, you have bumped the system from one path in phase space to another. Earth’s climate is continually getting bumped by external perturbations small and large. The effect of these is to move the actual observed trajectory of the system randomly – yes randomly – among the different possible states available for given energy/control parameters etc.

      The randomness comes not from the chaos, but from external perturbation. Chaos amplifies the randomness so that at a time sufficiently far in the future after even the smallest perturbation, the actual state of the system is randomly sampled from those available. That random sampling means it has real statistics. The “states available” are constrained by boundaries – solar input, surface topography, etc. which makes the climate problem – the problem of the statistics of weather – a boundary value problem (BVP). There are many techniques for studying BVP’s – one of which is simply to randomly sample the states using as physical a model as possible to get the right statistics. That’s what most climate models do. That doesn’t mean it’s not a BVP.
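
The claim that perturbations randomize the trajectory while the statistics stay well defined can be sketched with the Lorenz-63 system plus a tiny random kick (a toy illustration, not a climate model): two runs differing only in their noise realization end up in completely different states, yet their long-run averages agree.

```python
import numpy as np

def run(seed, steps=200_000, dt=0.005, noise=1e-6):
    """Euler-integrate Lorenz-63, adding a tiny random kick to x each step."""
    rng = np.random.default_rng(seed)
    sig, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = 1.0, 1.0, 20.0
    zs = np.empty(steps)
    for i in range(steps):
        dx = sig * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x += dt * dx + noise * rng.standard_normal()
        y += dt * dy
        z += dt * dz
        zs[i] = z
    return zs

a = run(seed=1)
b = run(seed=2)   # identical except for the realization of the tiny noise
# Pointwise the runs decorrelate completely...
print(np.abs(a[-10_000:] - b[-10_000:]).max())
# ...but the long-run statistics (here the time-mean of z) agree closely.
print(a[50_000:].mean(), b[50_000:].mean())
```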

      • This is what i referred to as “pandemonium” in my earlier post on what we can learn from climate models:

        How to characterize such phenomena arising from transient forcing of the coupled atmosphere/ocean system defies classification by current theories of nonlinear dynamical systems, where definitions of chaos and attractor cannot be invoked in situations involving transient changes of parameter values. Stainforth et al. (2007) refer to this situation as “pandemonium.” I’m not sure what this means in the context of nonlinear dynamics, but pandemonium seems like a very apt word to describe this situation.

      • The randomness comes not from the chaos, but from external perturbation. Chaos amplifies the randomness so that at a time sufficiently far in the future after even the smallest perturbation, the actual state of the system is randomly sampled from those available. That random sampling means it has real statistics. The “states available” are constrained by boundaries – solar input, surface topography, etc. which makes the climate problem – the problem of the statistics of weather – a boundary value problem (BVP). There are many techniques for studying BVP’s – one of which is simply to randomly sample the states using as physical a model as possible to get the right statistics. That’s what most climate models do. That doesn’t mean it’s not a BVP.

        Please link the relevant literature. There’s already plenty of unsupported claims and counter-claims on this thread. Where is the existence/uniqueness of the stats developed? Are you talking about a theoretical conjecture or an empirical observation? I’m sure I’m not the only one reading this thread who would be interested in learning more about this. I’m familiar enough with turbulence modeling that the hand-waving about kinetic theory that Eli did above is… unconvincing. Can you raise the level of the game?

      • i’m working on a new thread to focus the bvp/ivp discussion, should be up tonite

      • This isn’t anything new – almost every physical dynamical system, if it’s not trivially simple, displays chaos under most conditions. Statistical mechanics, one of the most successful of all physical theories, relies fundamentally on the reliability of a statistical description of what is actually deterministic (and chaotic – way-more-than-3-body) dynamics of immense numbers of atoms and molecules. This goes back to Gibbs over a century ago, and Poincare’s work was directly related.

        Tomas’ comment that the 3-body system is not even “predictable statistically (e.g. you can not put a probability on the event ‘Mars will be ejected from the solar system in N years’)” is true in the strict sense of the exact mathematics, assuming no external perturbations. That’s simply because for a deterministic system something will either happen or it won’t; there’s no issue of probability about it at all. But as soon as you add any sort of noise, your perfect chaotic system becomes a merely stochastic one over long time periods, and probabilities really do apply.
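
A minimal illustration of “deterministic chaos plus noise behaves as a stochastic process with genuine statistics” is the fully chaotic logistic map (an editorial sketch, not drawn from the article cited in this comment). Its invariant density is the arcsine law, whose mean is exactly 1/2, and a single noisy orbit recovers that statistic.

```python
import numpy as np

# Fully chaotic logistic map x -> 4x(1-x) plus a whisker of noise. Its
# invariant density is the arcsine law 1/(pi*sqrt(x*(1-x))), whose mean
# is exactly 1/2; a single noisy orbit reproduces that statistic.
rng = np.random.default_rng(0)
x, total, n = 0.3, 0.0, 200_000
for _ in range(n):
    x = 4.0 * x * (1.0 - x) + 1e-10 * rng.standard_normal()
    x = min(max(x, 1e-12), 1.0 - 1e-12)  # keep the noise from leaving [0, 1]
    total += x
print(total / n)  # close to 0.5
```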

        A nice review of the relationships between chaos, probability and statistics is this article from 1992:
        “Statistics, Probability and Chaos” by L. Mark Berliner, Statist. Sci. Volume 7, Number 1 (1992), 69-90.
        http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011444

        and see some of the discussion that followed in that journal (comments linked on that Project Euclid page).

      • looks like another interesting paper, reading now. quite readable.

      • Whilst I agree, in principle, that there may be statistical properties that can be meaningfully estimated from the complex dynamics of the climate system, if you want to be doing science you need to provide supporting evidence for this, and not just hand-waving about boundary value problems and giving inappropriate analogies.

        On the topic of deriving macroscopic quantities even though microscopic values are unknown, see the discussion from the reference below, pp. 1098-1099 from the sentence

        “As an example, in statistical thermodynamics, one can correctly estimate aggregate macroscopic quantities…”

        And the following couple of paragraphs. Dr. K explains with examples why this works in thermodynamics, but not with climate.

        Reference here, click through to pdf for page numbers.

      • It is possible that the ergodic hypothesis and statistical analysis are of some use in the context of equilibrium sensitivity calculations, e.g. double CO2 and see what the equilibrium climate state looks like (this equilibration takes several hundred years). At this “equilibrium”, presumably all influence of initial conditions is lost and there is convergence to a climate state that is driven by boundary conditions. However, this would not be the case for decadal to century scale climate variability and transient forcing, as I interpret it, which is the relevant application.

      • The formal mathematical concepts like ergodicity help in listing possibilities and in some cases in making discussion more understandable, but putting too much weight on them may also be misleading.

        When we are discussing the limits of possible behavior, theory allows for almost any behavior and it is very unlikely that what really happens is closely described by some specific theoretical concept. The real issues are much more practical and likely to be answered better through the use of common sense than by some theorem. The theory may easily be misused through references to theoretical results – and I am convinced that this kind of (probably or at least possibly unintentional) misuse of theories can be found in this thread.

        The real issue is finding the correct balance between taking every potential problem of modeling at face value and assuming naively that no significant hidden problems still influence the climate modeling work. It cannot be answered by theory, but theory may help in finding the correct places to look for potential errors. At least partial answers can be found by studying carefully the dynamical behavior of the models when their details are varied. It is important to check what happens when the modeling parameters determining the stability of the numerical methods are varied. If stability is assured too safely, numerical diffusion results, i.e. the numerical models start to give overly smooth results and become incapable of producing some of the dynamics the physical equations of the model would produce without numerical diffusion. This is just one of the problems. I am sure the modelers have spent some effort on these issues, but good choices are very difficult to find when the modeled system includes subprocesses with widely varying timescales and spatial scales.
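
The numerical-diffusion effect described here is easy to demonstrate with the standard textbook case of first-order upwind advection (a generic sketch, nothing to do with any particular climate model): the exact solution merely translates a square pulse, but the scheme smears it out and erodes the peak.

```python
import numpy as np

# First-order upwind advection of a square pulse at CFL = 0.5 (stable).
# The exact solution only translates the pulse; the scheme smears it.
N, c, dx, dt = 200, 1.0, 1.0, 0.5
u = np.zeros(N)
u[20:40] = 1.0                  # sharp initial pulse
initial_max = u.max()
for _ in range(200):            # advect 100 grid cells to the right
    u[1:] -= c * dt / dx * (u[1:] - u[:-1])
    u[0] = 0.0                  # zero inflow at the left boundary
print(initial_max, u.max())     # the peak has been visibly eroded
```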

        What I would like to see, is extensive discussion on the problems of numerical modeling and how well the climate modelers feel that they have been able to solve all known serious problems of numerical modeling of large complex dynamical systems.

        Studying the behavior of the models may help much, but of course extensive comparison with empirical observations is the second essential component of testing. It is complicated by the fact that empirical data at the right time scale and of the right level of detail is very limited. This leads to testing with data that is not directly applicable. Testing with such data involves additional modeling of how the data is linked to climate variables. Then both agreements and disagreements may tell more about the quality of the additional model (and the possible freedom in adjusting it during the process) than about the validity of the climate model.

        After writing a message in the above fashion I start to worry that I have painted too dark a picture of the possibilities for testing climate models. I hope that I have, and that somewhere I can read clear evidence that the issues have been solved better than an outsider can convince himself of through reading published articles.

      • Dr Curry, these are good questions. It is worth noting that even under Dr K’s analysis of 1/f noise, a 1/f noise time series does indeed have a population mean, and it is quite relevant to compute forcings of things like CO2 doubling relative to the population mean.

        The humbling experience comes from trying to find out what this population mean is. The challenge is that the sample mean is a poor estimator of the population mean, and the rate of convergence is very slow. So slow that even centuries of data (perhaps even millions of years of data!) do not yield an estimate of the population mean with accuracy much beyond that of a single year. So whilst it is theoretically still correct to compute a response to external forcing, providing evidence to support such claims is far more difficult than it first appears.

        This does not make such analysis impossible, but it requires considerable humility on the part of the researcher to truly understand the challenge that nature has set us.

        But then it would be no fun if it was not a challenge! :)

      • Dr. K makes no sense. The precise prescription of Earth’s atmosphere and oceans would require far more than the 10^22 moving particles of typical table-top statistical mechanics, so the law of large numbers applies even more so by that criterion. The specific claim made is that the number of grid boxes in actual climate models is relatively much smaller – but all that means is the statistics of climate models will have much more uncertainty than the actual physical climate, hardly something modelers don’t recognize. And the type of comparison they make in the paper you linked to is *not* comparing statistics of the models with statistics of the real climate, but looking for *actual correlations* between individual model realizations and the actual climate – that’s completely counter to the discussion we’ve just been having about chaos and probability. Of course individual model realizations will not mimic local weather – the relevant comparison is whether the statistics you get from the models matches the statistics of local weather or not.

        Gavin Schmidt had a much longer discussion of exactly this point on an earlier Koutsoyiannis paper here:
        http://www.realclimate.org/index.php/archives/2008/08/hypothesis-testing-and-long-term-memory/

      • I agree that the argument from the paper of Anagnostopoulos et al does not make sense. I did not look at the article before I wrote my above message. Now I want just to point out that this paper is an example of how difficult it is to have good empirical tests of climate models.

      • Pekka, what did you find did not make sense? I am happy to explain my understanding of any issues that do not make sense if it helps.

      • I was referring to that particular paragraph. I do not think that it addressed the significant problems or gave a correct picture of the related issues.

        I have not studied the whole paper carefully, only enough to figure out what they have done and what their conclusions are. That particular paragraph appeared to be written clearly enough that I believe I understand what they say and may form my opinion without major risk of misunderstanding.

        Otherwise I do not try to assess the paper, but I was ready to use it as an example of the difficulty of testing climate models empirically.

      • Pekka,

        The paragraph is perhaps not the full story – possibly due to space constraints in the article – and it may be better expressed in one of their presentations, although I could not lay my hands on a better explanation quickly.

        But the paragraph speaks directly to the issue of predictability in science. Fundamentally, predictability is limited at the microscopic scale by problems of measurement (e.g. Heisenberg uncertainty), we then get convergence through the law of large numbers (the rate of convergence is limited by the number of independent samples), and then we become limited again at large time horizons by divergence (e.g. the Lyapunov exponent, as an example from temporal chaos).

        Because of this, we get a window of predictability, at which we have sufficient samples not to be limited by microscale problems, and we are not predicting so far into the future that divergence is a problem. These issues are the fundamental basis for uncertainty in science. I do not see how this is not absolutely to the point under discussion here. I do accept, however, it may not be the clearest, or most complete exposition.
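
        As a concrete illustration of the divergence end of that window (my own sketch using the logistic map, the standard toy example of temporal chaos, rather than anything climate-specific), the Lyapunov exponent can be estimated numerically:

```python
import math

# Estimate the Lyapunov exponent of the logistic map x -> r * x * (1 - x)
# as the orbit average of log|f'(x)| = log|r * (1 - 2x)|.
def lyapunov_logistic(r=4.0, x0=0.2, n=200000, burn=1000):
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))  # guard x == 0.5
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
print(lam)  # close to log(2) ~ 0.693 for r = 4
```

        A positive exponent means nearby states separate exponentially, which is exactly what closes the window of predictability at long horizons.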

      • Spence,
        I looked at the paper again. To me the real issue is that they are mainly testing performance at a level where it is not expected to exist. I would say that only the 30 year moving average test for the U.S. is on a relevant scale.

        The models discretize the system in cells of more than 200 x 200 km. This means that details cannot be analyzed by the model at a level that is not several cells wide in all directions. Everything on a smaller scale is severely influenced by the difference between the continuous but internally variable reality and the model that is formed of discrete cells. The model is simply different from reality and agrees with reality only by accident at this level. This extends to the average values over single cells.

        Similarly the models are not expected to describe temporal variations. Due to the sensitivity to initial values, even within the limits of reasonable agreement with real weather patterns at a specific moment of time, the interesting results come from averages over many model runs or over long enough periods to remove the dependence on initial values. How well the model converges to some values can be tested easily. It requires only enough computer time.

        Looking at the models in the above way, one can conclude where the limit of reasonable testing lies. As far as I can see, almost all tests of the paper are on the wrong side, i.e. they test things that are not expected to work. Of course one can test whether the models perform better than can be expected, but getting negative results is not a significant new observation.

        That particular paragraph of the paper was discussing statistical accuracy. To me that is completely irrelevant, because it was already known that there are limits of modeling performance that are stricter than those set by the statistical uncertainties.

        Their paper does not appear to do much more than test whether climate models can produce weather forecasts. Of course they cannot.

      • Pekka, I am very confused by your response.

        Firstly, you must be unaware of a fundamental result from observational climatology. Whilst it is true that a thermometer placed 100 metres from another will give a different absolute reading, if I monitor those thermometers and produce a monthly anomaly, they will yield almost identical figures. Indeed, at high latitudes monthly anomalies correlate well to a range of around 1000km (somewhat less in the tropics).

        This is a fundamental result and the basis of gridded instrumental temperature records. If you believe that the monthly temperature anomaly varies considerably within a 200 km x 200 km grid cell, then you must question whether gridded instrumental data products are valid. I am aware some sceptics question this, so I have to ask the question: do you doubt this result? If so, you are consistent and we are probably at an impasse. If not, you are being inconsistent, as it is the same principle being applied in this example as to the gridded temperature datasets.

        Secondly: although direct correlations are tested, statistical estimates are also checked (Hurst exponent, standard deviation), so we are not only directly checking the detail but we are also checking whether the models are characterising correctly. So your next point is also addressed by the paper.

        Thirdly: in fact, the paper does report skill in the models. It notes the models have some skill at the monthly scale. So, clearly, the assessment is capable of measuring skill. This skill comes from the ability of the models to predict and match seasonal changes well, which they do at single sites as well as across large areas.

        Fourthly: you say you dislike the point tests, but like the contiguous US test. But the results across both are the same. Which at least suggests there is actually little difference between these two tests, contrary to your intuition.

        Fifthly: you say the models do not predict weather, but predict climate. In fact, the paper shows the opposite. As mentioned above, it shows the models do well at predicting model monthly-scale variability – so they are capturing many elements of what would be reasonably termed “weather”. As temporal averaging increases, the performance decreases. In fact, the results clearly show that the models do considerably better at predicting weather than they do climate. This is clear from the results.

        Sixthly, and finally: as we know, averaging reduces uncertainty. In a signal processing sense, we get a signal to noise ratio improvement; if I take 100 samples instead of 1 and average, assuming they are independent samples, I get a 10x improvement in accuracy. Now the same is true for spatial averaging about the earth. The earth is only so large, so the spatial averaging uncertainty reduction that I get is the same, whether for the 1-year scale or the 30-year scale. Given the results show the signal-to-noise ratio is worse at 30 years than at 1 year, how is a fixed uncertainty reduction going to improve the 30 year scale over and above the 1 year scale? Simple answer: it can’t.
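
        The sqrt(n) improvement for independent samples is easy to check numerically (a toy sketch with Gaussian noise, not climate data):

```python
import random
import statistics

# Spread of a single Gaussian sample vs the mean of 100 independent samples:
# averaging should shrink the spread by a factor of sqrt(100) = 10.
random.seed(0)
single = [random.gauss(0, 1) for _ in range(20000)]
means = [statistics.fmean(random.gauss(0, 1) for _ in range(100))
         for _ in range(20000)]

sd_single = statistics.stdev(single)
sd_mean = statistics.stdev(means)
print(sd_single)  # ~1.0
print(sd_mean)    # ~0.1
```

        The catch, as noted above, is the word "independent": with a Hurst exponent near 0.95 the improvement is dramatically smaller.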

        Dr K would ding me for the signal processing analogies, but I hope they help get a point across :)

      • Spence,
        The models forecast the seasons. That accounts for the monthly success. Is that an interesting result?

        For the rest, I was discussing what the models are supposed to do. To test whether they succeed in that, one should compare data with those outcomes from the models, not with something else.

        The issue is not how to calculate averages from real data. The arguments of the paper concentrate largely on that, but that is not the right question.

        The main obstacle is in any case related to the handling of temporal variations. The tests looked at temporal results that the models were not supposed to explain, except in the one case of the 30 year average.

        My conclusion is that this paper did not present a valid test at a significant level, but it did indicate how difficult it is to perform valid testing of the climate models. This is not new, but it is demonstrated well by the attempt and its insignificance.

      • My impression of the Anagnostopoulos et al paper is that they appear to have chosen circumstances designed to yield the results they get, which in that sense, are not surprising but not very informative either. GCMs, by their global nature, are intended to model global climate, whose temperature is strongly dominated by the oceans. In addition, their poorer skill in modeling regional climates is already recognized. Therefore, the choice of a few dozen land based stations for either point estimates or averaging over the continental U.S. was destined to demonstrate relatively poor skill.

        In addition, for accurate validation, GCMs should be initialized to the actual conditions that prevail at the start of a run. The paper argues that this isn’t necessary, but it would have been better for them to have actually done the initialization rather than dismiss its importance.

        The best test of global climate models is of course in their performance. Here, model performance has been adequate for predicted temperature trends (Hansen et al 1984) as well as hindcasts. In the case of the prediction, the overestimate of temperature rise that the model yielded would have been eliminated if observationally-based current input data had been used instead of the values available in 1984. The newer data would have led to a climate sensitivity value of about 3C/doubled CO2 instead of the 4.2C generated by the model, and would have matched observed trends very well.

        Despite the poor match between projections from global climate models and U.S. continental temperature trends, the paper (in Figure 12) reveals an interesting pattern. Most models, despite being applied to the U.S., yield trends that simulate fairly well global 20th century trends. Temperature tends to start rising around 1910, and the rise is steeper in the later decades of the century. The models do not accurately reproduce the change from sloping to flat during mid-century, but that is very possibly because they underparametrize mid-century negative aerosol cooling.

        All in all, the main problems with the paper appear to reside not in its methodology (except for the failure to initialize) but in its overinterpretation. It is correct to assert that climate complexity frustrates attempts to reduce the uncertainties inherent in prediction. It is wrong to assert that meaningful prediction is hopeless, and in fact, the evidence indicates otherwise.

      • Fred,
        In my mind the difficulty is largest on the question: “How well we can estimate the value of the evidence the tests give?”

        I believe that plausible tests can be developed and applied, but the tests themselves include assumptions, and furthermore assumptions with a risk of circular reasoning. By that I mean that the present understanding of the scientists influences how the raw data is handled, and that may influence the results of the tests in a self-reinforcing manner. I have no evidence of such effects, but I think that the risk is real.

        Good straightforward tests of the model would require data that extends over a period covering several periods of “climate”, as opposed to weather or oscillations that the model is not even supposed to represent correctly. The tests should also be on variables that the models predict directly. This means that they should be on a spatial scale covering a rather large number of cells. They should also be based on data that is observed directly, not derived from a proxy by an error-prone process. Only such tests would test specifically the climate model and not some combination of models, where the other components may distort the conclusion. Distortion toward a false positive (i.e. confirming the climate model results) is possible in particular if the other parts are tuned during the process.

        All these considerations lead me to conclude that outsiders cannot really verify the value of the tests. This leaves the risk that the insiders are not careful or skillful enough in eliminating bias in testing. Eliminating the risk of bias is very difficult even with a perfect attempt to reach objectivity. Thus I am not implying any wrongdoing; I just believe that the problem may be too difficult to handle in a faultless way. This possibility makes valuing the tests difficult if not impossible.

      • I certainly agree with you regarding the perils of hindcasting. Ultimately, we will be able to compare multiple long term forecasts with observations, rather than relying simply on Hansen’s model.

      • Fred,
        I am arguing with you by emphasizing the difficulties, while you are often defending mainline conclusions. My purpose is certainly not to claim that the evidence is of no value, and I am also worried about the fact that many people use the list of problems as proof that the science has not produced any real evidence. My approach is to search for the best balance for myself and to discuss the problems without worrying continuously about whether somebody will misinterpret my arguments beyond their purpose.

        Every now and then I add comments like this hoping that it helps a little in reducing those misunderstandings. I still have the (naive?) trust that full openness is in the long run the best approach, and that it is also better for the development of science to be fully open and to avoid tactical hiding of problems and uncertainties in the public debate.

        I can never know how subjective my thoughts are, but I try to be as objective as I can. Often I am reminded that I do not understand everything that I thought I understood, but that is a way of learning for me.

      • Fred, the basis for the work by Koutsoyiannis and Anagnostopoulos is clearly laid out. If you have a specific issue, please raise it and we’ll discuss it. Arm waving about them trying to achieve a certain result is not terribly helpful from a scientific perspective.

        Please remember that it has been estimated the entire global mean temperature could be derived from as few as 60 points; in that context the failure of models at 55 points is significant. This is because local temperature anomalies at high latitudes are well correlated over ~1000 km, so in fact the 55 point samples they took cover a huge area.
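
        A crude back-of-envelope calculation (my own numbers, treating each ~1000 km correlation disc as one independent sample) gives the same order of magnitude:

```python
import math

# Roughly how many independent "patches" does the Earth's surface hold if
# monthly anomalies are correlated over a radius of about 1000 km?
R_EARTH_KM = 6371.0        # mean Earth radius
CORR_RADIUS_KM = 1000.0    # assumed anomaly correlation radius

surface_area = 4 * math.pi * R_EARTH_KM ** 2   # ~5.1e8 km^2
patch_area = math.pi * CORR_RADIUS_KM ** 2     # ~3.1e6 km^2
n_patches = surface_area / patch_area
print(n_patches)  # ~162
```

        Tens to a couple of hundred effective degrees of freedom, consistent in order of magnitude with the ~60-point estimate.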

        As an example, the authors went out of their way to maximise the apparent model performance by not selecting the exact location of the test station, but by maximising the match within a gridcell. They bent over backwards to get a result in favour of the models!

        The problem with going back and re-initialising models to get better answers is that you are reverting to hindcast. Unless the models can be initialised correctly at the time of running, you have no skill. Hansen’s models have demonstrably failed, with his 1988 prediction now showing the last decade notably lower than scenario C, the scenario in which massive CO2 cuts are made. Apparently we have achieved the massive cuts scenario without making massive cuts!

        The models are consistent in one thing; abject failure to hindcast anything beyond the 2-3 degrees of freedom in the 20th century global temperature record.

      • Readers can compare my comments with the paper to draw their own conclusions. Initialization should have been done for each run by the authors, but it’s actually a relatively small point. My larger point is that the models performed more or less as one might have predicted, given the conditions under which they were run.

        Interested readers can also compare model performance, including the Hansen model, with observational data – Model Update . Certainly, the models are imperfect and improvement comes slowly, but the results are encouraging. In my view, any effort to demonstrate that models are useless for climate predictions will almost certainly run up against a reality that tells us otherwise. The question that arises is – why would anyone wish to prove that? – but I’ll leave it to others to judge motives, and let the evidence speak for itself.

      • Fred,

        I don’t think there are any “readers”. And yes, they can read your comments, and see that you do not like the paper, but that you are unable to produce any direct, substantive criticism of it, and the best criticism you have boils down to “motive”. Fair enough.

        Do you understand that models which are sensitive to initial conditions can be made to match in hindcast by modifying the initial conditions? This is called the shadowing lemma, and it is the reason that making causal links is extremely challenging (to say the least) with chaotic systems. It also means that I can take any chaotic system, make an arbitrarily small change to the initial conditions, and get any result within the limits of the attractor that you first thought of. The shadowing lemma is the reason your update to Hansen’s model is worthless.
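
        The sensitivity underlying this is easy to demonstrate (again with the logistic map standing in for any chaotic system; this sketch shows the exponential divergence of nearby trajectories that the shadowing argument exploits, not the lemma itself):

```python
# Count the iterations needed for a 1e-12 perturbation of the logistic map
# (r = 4) to grow to macroscopic size.
r = 4.0
x, y = 0.3, 0.3 + 1e-12
n = 0
while abs(x - y) < 0.1 and n < 1000:  # cap as a safety net
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    n += 1
print(n)  # a few dozen iterations, roughly log2(1e11) ~ 37
```

        After that point the two trajectories are effectively unrelated, even though they started closer together than any conceivable measurement could distinguish.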

      • Spence – Modelers will be grateful if you can instruct them how to initialize the models to give any desired output. Their hindcasts (and forecasts for ENSO, hurricanes, regional predictions, etc.) would certainly look better. More important, I think they would invite you to initialize hindcasts to simulate 20th century warming trends without anthropogenic input – so far, only the inclusion of anthropogenic factors permits the models to emulate reality. They might also ask you to explain why, for long term projections, initial conditions matter little, with the model outputs converging toward a common and relatively accurate simulation of real world observations.

        I do agree that probably few outsiders are reading this, and so I’m inclined to refrain from further responses unless new evidence is introduced, or unless some conceptual elements are introduced that have not yet been addressed.

      • Fred,

        I agree, despite my posting links to half a dozen peer-reviewed papers on this topic so far, all documenting evidence to support my arguments, you have singularly failed to produce one piece of evidence, other than a link to a blog post.

        Advise your climate scientists to read up on the Shadowing Lemma of chaos theory. Although I read your realclimate article about Hansen’s update, I thought from your original comment that you implied they re-ran the model. I was much mistaken. All they do is hand wave away the fact that the temperature has followed the freeze emissions scenario without doing anything of the sort, by speculating what Hansen’s model would do with the latest figures.

        You know the irony of this? They are basically saying the latest updates to the models are coincidentally exactly what is needed to make the models match the most recent data in hindcast. That is quite some admission.

      • “model performance has been adequate for predicted temperature trends (Hansen et al 1984) as well as hindcasts”

        Fred, is the paper you are referencing (quote above) here “Climate sensitivity: Analysis of feedback mechanisms?” (http://pubs.giss.nasa.gov/docs/1984/1984_Hansen_etal_1.pdf), and from that the Figure 18 in particular?

      • Anander – I inadvertently misled you by citing a 1984 date. The paper you cite details the rationale underlying the development of the model, but the actual models and their projections were in the paper published in 1988 – Hansen et al (see Fig. 3 top).

        Both that paper and the earlier one you mention describe some of the input data that led the models to predict the temperature trend but overestimate its slope – those data led to climate sensitivity estimates of about 4.2C per CO2 doubling, whereas the mid-range of current estimates is about 3C.

      • Arthur, I must ask if you have actually read the paper, because you make a couple of statements that are factually incorrect. For instance:

        “The precise prescription of Earth’s atmosphere and oceans would require far more than the 10^22 moving particles of typical table-top statistical mechanics, so the law of large numbers applies even more so by that criterion.”

        Please check the input assumptions to the law of large numbers. The law of large numbers requires, by definition, independent samples as input. Hence Dr. K’s reference to the correlation over land (quite correctly) reducing the number of inputs.

        Furthermore, you clearly do not understand the impact of the Hurst exponent. Let us say we *do* have 10^22 independent measurements of a system. The number of measurements alone is not enough to determine how uncertainty shrinks; we also need to know the Hurst exponent. The Hurst exponent defines how the uncertainty reduces with each sample added.

        For points which operate independently, such as the thermodynamics case, we see Hurst exponent H=0.5, so if we have 10^22 samples, we get an improvement in uncertainty of (10^22)^(1-H) which is a factor of 0.1 trillion (eleven orders of magnitude). That is a huge reduction of uncertainty.

        For climate, in particular temperatures, estimates yield a Hurst exponent of the order H=0.95. 10^22 samples then yield an improvement in uncertainty of (10^22)^(1-H) which is a factor of 12.6 – just one order of magnitude. You want to dismiss these issues even though by doing so you introduce an error in the uncertainty reduction of ten orders of magnitude.
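
        The arithmetic above is trivial to reproduce:

```python
# Factor by which the standard deviation of the mean of n samples shrinks,
# given Hurst exponent H: n ** (1 - H).
def uncertainty_reduction(n, H):
    return n ** (1.0 - H)

iid_gain = uncertainty_reduction(1e22, 0.5)     # 1e11: eleven orders of magnitude
hurst_gain = uncertainty_reduction(1e22, 0.95)  # ~12.6: about one order of magnitude
print(iid_gain, hurst_gain)
```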

        “And the type of comparison they make in the paper you linked to is *not* comparing statistics of the models with statistics of the real climate”

        The paper compares estimates of standard deviation and Hurst exponent from reality and from the models. So the statistics are indeed compared.

        As for Gavin Schmidt’s RealClimate comment, Dr K answers the comments here. As you can see, Dr K and GS actually agree on rather a lot.

      • Arthur Smith, while that is a very good paper that you linked (thank you for finding one that everyone can access), it only had a very short section on ergodic theory, and you’re back to the same hand-waving analogy about statistical mechanics and turbulent flows. The [lack of] success for simple models (based on analogy to kinetic theory btw) for turbulent flows of any significant complexity indicates to me that I can’t take your analogy very seriously.

        Where’s the meat? Where’s the results for the problems we care about? I can calculate results for logistic maps and Lorenz ’63 on my laptop (and the attractor for that particular toy exists).
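
        For anyone who wants to try: a minimal forward-Euler sketch of Lorenz ’63 with the classic parameters (a crude integrator chosen for brevity; a Runge-Kutta scheme would be more accurate):

```python
# Lorenz '63 with sigma = 10, rho = 28, beta = 8/3, integrated by forward Euler.
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

x, y, z = 1.0, 1.0, 1.0
for _ in range(50000):  # 50 time units; the orbit settles onto the attractor
    x, y, z = lorenz_step(x, y, z)
print(x, y, z)  # remains bounded: the butterfly attractor confines the orbit
```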

      • A more well-phrased attempt to explain why hand-waving about statistical mechanics is a diversion from the questions of significance for this problem (with apologies to Ruelle): what are the measures describing climate?

        If one is optimistic, one may hope that the asymptotic measures will play for dissipative systems the sort of role which the Gibbs ensembles have played for statistical mechanics. Even if that is the case, the difficulties encountered in statistical mechanics in going from Gibbs ensembles to a theory of phase transitions may serve as a warning that we are, for dissipative systems, not yet close to a real theory of turbulence.

        What Are the Measures Describing Turbulence?

  52. Chief,
    The reply option wasn’t available on the post about the prediction by Swanson, but you didn’t read the link:

    “This overshoot is in the process of radiatively dissipating, and the climate will return to its earlier defined, greenhouse gas-forced warming signal. If this hypothesis is correct, the era of consistent record-breaking global mean temperatures will not resume until roughly 2020.”

    That IS a prediction.

    Is Swanson making garbage predictions for events there is no way of assigning probability to?

    • OK – you are talking about post 2020 rather than the period from 1998?

      If so – we have a couple of strange attractors showing up in the instrumental record. The Pacific states are particularly important to global energy dynamics – through cloud feedback especially. Cloud dominated climate change in the satellite era. We are talking data not theory.

      This makes Swanson’s assertion that the 0.1 degree C rise between 1979 and 1997 – was the true underlying signal of greenhouse gas warming – moot to say the least.

      As there is such a limited record of these climate states – and we don’t know what causes them in a deterministic sense – assuming that the 20th century pattern of a warm Pacific mode followed by a cool mode will continue is an assumption only.

      • You didn’t answer the question. Is even the idea of Swanson making a prediction nonsense?

        Does Swanson so badly understand the relationship between chaos and climate that he has done something completely ridiculous?

      • Hmmm. What is the relation of chaos to climate? Chaos is a metatheory that tells us something about the properties of complex dynamical systems. Does it apply to climate? Many facets of climate – rainfall and oceanographic changes in my world – look very much like chaotic oscillators. And if it looks like a duck and quacks like a duck – it probably is a duck.

        Ridiculous is a strong word – and one I would not use. These areas are poorly understood by anyone very much. But if we have these changes in the Pacific that result in some climate change and greenhouse gases increasing in the atmosphere causing even half of recent warming – then it is perhaps not unreasonable.

        My contention is that Swanson has misunderstood the nature and extent of the Pacific changes – something that I have been puzzling about for more than 20 years. Greenhouse gases were not the major cause by far of recent warming. Recent warming was dominated by the Pacific changes and future change in these systems is far from predictable.

        What support do I have for my contention? There is the ERBS and ISCCP-FD radiative flux up data – as well as confirming surface observation of cloud in the Pacific in the COADS database.

  53. I see no reason to assume that climate science can avoid the well trodden path of iterative model development followed by another field that has had to handle extremely complex issues.

    Namely long term (as in 10 days) weather prediction, and more specifically hurricane tracking. Living in Florida for the better part of twenty years now I can tell you I have watched the capability of hurricane tracking with obvious interest.

    They have made slow and steady progress to the point now that their 2 / 3 day cone of prediction has good performance. The visual comparison of all the “spaghetti models” has proved a valuable tool for gauging uncertainty of the prediction, based on how well the models match.

    Climate prediction is a totally different problem (forcings, complexity, available quality data, etc.). From an engineering standpoint I find it difficult to believe there will be any success until a long series of model -> measure -> edit model -> measure… iterations have been completed.

    For this topic, it is a search for which simplifications work and which ones don’t.

    The catch-22 is the MODEL ITERATION TIME. Instead of being able to measure results against prediction every day (weather prediction) or yearly (10’s of hurricanes a year) it takes decades to test one iteration of performance.

    There are shortcuts (test multiple parallel models simultaneously), but what about the models that “accidentally” get the correct results, that are just lucky? How about good models which get discarded early because short term prediction was poor?

    This model development process to reliability is measured in multiple decades, probably centuries. I simply cannot see otherwise.

    What we have today, relatively speaking, is a group of scientists with their very first attempt at a 10 day weather model who want us to believe the model has merit without any validation. This is not reasonable.

    • Because models can be updated and improved without new data, based on old data (i.e. hindcasting).

      • Assuming you can get reliable “new and improved data” for the past. Hindcasting has problems as I’m sure you are aware. Multiple models that all hindcast well when turned loose on the future diverge quickly, which doesn’t give one a warm and fuzzy feeling. Input forcings have to be created when not available, and good data has really only been available for several decades (what was the type and quantity of aerosols over Peru in 1938?).

        Paradoxically the fact that the models hindcast so well (and then diverge in the future) raises red flags as to the amount of tuning that was performed.

        I’m not arguing that hindcasting is invalid; it is the best available data to use. I don’t consider it very strong evidence of a model’s efficacy, or of its forecasting skill.

        It is intuitive (to me) that the modelling of systems this complex and poorly understood requires that the models first demonstrate prediction skill before they may be considered useful (as in policy).

      • Tom – You raise valid points about the challenge of model development for predicting long term trends such as temperature responses to a continued rise in atmospheric CO2.

        Prediction requires a long interval for testing. This has been done with a model developed by Hansen et al in 1984 and evaluated over ensuing years. The model encompassed changes in CO2, solar irradiance, other anthropogenic forcings (e.g., aerosols), and other variables. The results yielded a fairly good match when its output was compared with observational data based on equivalent changes in CO2.

        The match was not perfect – in fact, for a given CO2 rise, the model overpredicted the temperature increase. The disparity was not enormous, however. Moreover, some of the model input data on feedbacks and other constraints can be computed to equate to a climate sensitivity of about 4.2 C per CO2 doubling, whereas if more accurate data based on recent observations had been used (which would have yielded a climate sensitivity of about 3 C per doubling), the match between modeled and observed data would have been very close.

        Despite this one example, it is not practical for every new version of climate models to wait 20-30 years or longer for validation. The alternative has been “hindcasting”, as you mention. Models have been initiated with climate values from a starting point in an earlier decade and then allowed to run with or without the anthropogenic input (CO2, aerosols, etc.). When this has been done, models have performed well in matching the observed trends, as long as the anthropogenic inputs were included. Conversely, when the models were run with only natural variations, their results could not be reconciled with observations.

        Nevertheless, hindcasting is an imperfect approach for reasons you cite. I am aware that models diverge in terms of future projections for equivalent emissions scenarios. On the other hand, an important question is how far apart the margins are, and how reliable will be an ensemble average. I’ve seen different estimates. Can you provide a link to the specific source you had in mind when you mentioned the divergence problem?

      • One field where many people have a large interest in making forecasts based on models built on historical data is investing. All kinds of technical analysis methods have been built, and a lot of testing using historical data has been done. The problem has also been the subject of extensive academic study. The academic work has brought up very serious problems in drawing conclusions based on success in hindcasting.

        One of the most difficult problems is that the model builders very often have important knowledge about the past that they cannot forget. Therefore it may be impossible to exclude bias in defining the testing procedures. To me it appears likely that this problem is even more serious in climate modeling. The modelers unknowingly avoid choices that would lead to conflict with the past, and this creates a serious bias in the tests.

        Almost nobody is willing to test models that have already been shown to fail in hindcasting, even when those models would satisfy other requirements as well as models with better hindcasting properties. This leads to a situation where the models are usually not as good at forecasting as at hindcasting, even when there is an attempt to avoid this bias.
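
        That selection effect is easy to demonstrate with a toy Monte Carlo (pure Python, everything synthetic): make a “climate record” that is nothing but noise, make candidate “models” that are also nothing but noise, keep the best hindcaster, and then check it out of sample.

```python
import random

random.seed(0)

def correlation(xs, ys):
    # Pearson correlation, pure Python.
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    cov = sum((x - mx)*(y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx)**2 for x in xs)
    vy = sum((y - my)**2 for y in ys)
    return cov/(vx*vy)**0.5

def one_trial(n_models=200, years=40):
    # A "climate record" that is pure noise, split into hindcast and
    # forecast halves, plus candidate "models" that are also pure noise.
    half = years//2
    record = [random.gauss(0, 1) for _ in range(years)]
    models = [[random.gauss(0, 1) for _ in range(years)] for _ in range(n_models)]
    # Select the model with the best in-sample (hindcast) fit...
    best = max(models, key=lambda m: correlation(m[:half], record[:half]))
    # ...and see how it does out of sample (the "forecast").
    return (correlation(best[:half], record[:half]),
            correlation(best[half:], record[half:]))

trials = [one_trial() for _ in range(20)]
avg_hindcast = sum(t[0] for t in trials)/len(trials)
avg_forecast = sum(t[1] for t in trials)/len(trials)
# Selection alone manufactures an impressive-looking hindcast correlation,
# while the same selected models' forecast skill stays near zero.
```

        No model here has any skill at all, yet the one chosen for its hindcast fit always looks good in hindcast. That is the bias being described.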

      • “Multiple models that all hindcast well when turned loose on the future diverge quickly, which doesn’t give one a warm and fuzzy feeling.”

        Is there any evidence this is true?

  54. Dear Dr. Curry,

    Based in part on my experience running online math & stats courses, I strongly advise you to develop a (strict) policy on hyperactive posting.

    A small number of hyperactive commenters can & do (if left unchecked) completely destroy online threads (& even whole forums in the worst cases).

    There are a handful of commenters who are displaying grossly insufficient personal restraint.

    I had reflected and carefully prepared relatively succinct comments to share in response to Tomas’ stimulating notes, but there is little point posting them in such a dilute thread. Sensible people simply do not have time to observantly ski (skim & skip) such dull slopes. The fall line needs to be kept potently steep.

    [Lower priority: I also encourage a change of format. These narrow columns with meandering trees are a nuisance for people who (in part due to hard time constraints) need to be able to ski threads efficiently.]

    Best Regards.

    • let me go through and clean up this thread, i try to do that periodically on technical threads.

      • clean up took a half hour :(
        i need to keep on top of the comments as they come through

      • Redundancy is a killer. A few commenters have combined for 100+ comments that could have been considerately condensed into a few short but strategic posts. My thoughts (& empathy) are with you in your hosting work Dr. Curry. I will reserve my comments for a more strategic occasion.

      • paul, actually your comment was helpful, motivated me to do something that needed doing.

  55. This is in reply to lots of different comments, so I posted it separately. It’s mostly about averaging, a little about propaganda.

    “most climatic elements, and certainly climatic means, are not predictable in the first sense at infinite range, since a non-periodic series cannot be made periodic through averaging”, E.N. Lorenz, 1975; vocabulary drill: predictability in the “first sense” and “second sense”, probably good to understand what Lorenz means by that terminology to have useful discussions on this topic.

    On the relative roles of initial and boundary conditions see (full text free): Assessing the Relative Roles of Initial and Boundary Conditions in Interannual to Decadal Climate Predictability. There’s no measurable predictability past a couple years (the horizon depends on the functional you are tracking). This is just a modeling exercise, it tells us what’s theoretically possible (getting to a useful forecast system requires another step). See also this report that shows the big iron boys catching up to where Dr Pielke was in 1998: yes Jen, initializations do matter, even for climate kinds of functionals. This IVP/BVP distinction seems to be the result of horribly oversimplified science communication gone viral.

    If you got through those, you might be interested in my little post on Recurrence, Averaging and Predictability (this is an earlier post that discusses IVP/BVP thing a little too).

    The question above, “What makes you think that you can model averages better than the initial weather objects itself (of which the averages are built on)?” is probably the most important question asked on this thread. It was answered by Jen with hand-waving and bald assertions (and you too Pekka, for shame!). I mean, even the bunny linked something that seems at first glance to be relevant. Free advice to Jen: on the internet nobody knows you’re a dog (or a bunny): waving your hands and expecting deference since that’s what you’re used to, perhaps because you’re well-credentialed in meat-space, ought to get you pretty much the reaction you got. If you’re a practicing scientist, then you are probably very knowledgeable about your craft, and you could probably improve the signal to noise on the thread greatly by generating relevant links rather than logorrhea.
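
    Lorenz’s point that averaging a non-periodic series does not make it periodic (or predictable) can be checked directly on his own 1963 system. The sketch below (standard parameters, hand-rolled RK4; purely illustrative) compares 5-time-unit running means of two trajectories started 1e-8 apart.

```python
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One classical RK4 step of the Lorenz-63 system.
    def f(s):
        x, y, z = s
        return (sigma*(y - x), x*(rho - z) - y, x*y - beta*z)
    def nudge(s, k, h):
        return tuple(si + h*ki for si, ki in zip(s, k))
    k1 = f(s)
    k2 = f(nudge(s, k1, dt/2))
    k3 = f(nudge(s, k2, dt/2))
    k4 = f(nudge(s, k3, dt))
    return tuple(si + dt/6*(a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def x_series(x0, n=6000):
    # Record the x coordinate along a trajectory started at (x0, 1, 1.05).
    s = (x0, 1.0, 1.05)
    out = []
    for _ in range(n):
        s = lorenz_step(s)
        out.append(s[0])
    return out

def running_mean(xs, w=500):
    # w-step (5 time units) running mean via prefix sums.
    acc = [0.0]
    for x in xs:
        acc.append(acc[-1] + x)
    return [(acc[i] - acc[i - w])/w for i in range(w, len(acc))]

a = running_mean(x_series(1.0))
b = running_mean(x_series(1.0 + 1e-8))   # initial condition perturbed by 1e-8

early_gap = abs(a[0] - b[0])                       # tiny at first...
max_gap = max(abs(p - q) for p, q in zip(a, b))    # ...order one later on
```

    The averages of the two runs start out indistinguishable and end up differing by several units: the averages inherit the sensitivity of the series they are built from.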

    Pekka, since you mentioned forcing last time, that will be the next post in the series (unfortunately real life often interferes with the blogging, so I can’t give an estimated time to completion). In this thread you said: “It is well established that (most or all) climate models describe boundary value problems.”

    Care to provide a link to any of the on-line climate model documentation that indicates they are solving BVPs (CESM seems to be the best documented on-line)? I’d be surprised if you could find a single one (though I’d still love to learn the details in the off chance that you do, so please share links if you find them). I’ve found a whole lot of explicit, physical time-stepping (usually with some sort of operator splitting). If they’re solving BVPs, why aren’t they using all the standard convergence acceleration techniques like local time-stepping, or dropping the time derivative altogether and doing a Newton method (maybe using a steadily increasing pseudo-time step to get started)? Why are they still doing a physical, global time-step?
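
    To make the IVP-versus-BVP distinction concrete, here is the difference on a toy 1-D steady diffusion problem, u_t = u_xx + 1 with u = 0 at both ends (purely illustrative, nothing to do with any actual climate model’s numerics): the IVP route marches thousands of small, stability-limited steps toward the steady state, while the BVP route drops the time derivative and gets the answer in a single tridiagonal solve.

```python
N = 51
dx = 1.0/(N - 1)
exact = [x*(1 - x)/2 for x in (i*dx for i in range(N))]  # analytic steady state

# IVP route: explicit time-stepping of u_t = u_xx + 1 until nothing changes.
u = [0.0]*N
dt = 0.4*dx*dx          # explicit diffusion is stable only for dt <= dx^2/2
steps = 0
while steps < 200000:
    steps += 1
    un = u[:]
    for i in range(1, N - 1):
        un[i] = u[i] + dt*((u[i-1] - 2*u[i] + u[i+1])/dx**2 + 1.0)
    change = max(abs(p - q) for p, q in zip(u, un))
    u = un
    if change < 1e-10:
        break

# BVP route: drop u_t and solve -u'' = 1 directly (Thomas algorithm;
# sub- and super-diagonals are both -1 here, so they are folded in).
n = N - 2
b = [2.0]*n
d = [dx*dx]*n
for i in range(1, n):               # forward elimination
    m = -1.0/b[i-1]
    b[i] -= m*(-1.0)
    d[i] -= m*d[i-1]
v = [0.0]*n
v[-1] = d[-1]/b[-1]
for i in range(n - 2, -1, -1):      # back substitution
    v[i] = (d[i] + v[i+1])/b[i]
direct = [0.0] + v + [0.0]

err_march = max(abs(p - q) for p, q in zip(u, exact))
err_direct = max(abs(p - q) for p, q in zip(direct, exact))
# The direct BVP solve is exact to round-off in one solve; the explicit
# march needs thousands of tiny steps to creep to the same answer.
```

    The point being made in the comment above is that the published model documentation shows code following the first pattern, not the second.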

    To go back to the important question, which has not been answered yet: how do you expect to get meaningful frequency statistics (predictability in the “second sense” ) from a BVP solution? Where is this magic discussed in the lit? I’ve seen plenty of hand-waving about BVPs and climate from certain propagandists, but that’s about it. It’s a shame that real scientists in tangential fields, like Jen and Pekka, have been taken by this balderdash.

    • I’m still interested in doing a post related to your averaging ideas and your post on various consequences. the IVP/BVP issue needs sorting out also. thoughts on how to make this into a post here? thx

      • These ideas aren’t original to me; it seems sort of weird that I’m posting them (as an outsider) rather than someone who has a professional familiarity with them (as pointed out above, they’re out there). Someone had to write the papers I’m linking…

        This is the subtle brokenness of climate science, the outsiders are making the corrections rather than the insiders. How hard would it have been for a Climate Scientist(TM) to say to the uncritical cheerleaders on this thread, “you know, that IVP/BVP thing we sometimes say is just a simple way of communicating a complex issue, don’t take it so seriously.”

      • I think the problem is that other than Curry nobody here is a climate scientist.

      • The climate scientists spend time on threads on greenhouse effect and surface temp data sets. Climate scientists not paying attention to this kind of topic supports Tomas’ original assertion (Jeff Weiss being the notable exception in terms of showing up on this thread).

      • So Jeff was wrong?

        One of the major meetings on chaos didn’t have a section on climate?

        The papers that jstults wrote really were written by nobody?

        Tsonis’ co-author didn’t write a piece for RealClimate?

        Wouldn’t it be more accurate to state that the scientists that work on these problems, create models, and work on model validation don’t come here much?

      • A vast majority of climate scientists that i have encountered have misconceptions about the chaotic nature of the climate system. If you gave a quiz on this to all the scientists on the PNAS list (the group that is most commonly recognized as climate scientists, and i suspect that you do not see tsonis, swanson, weiss on this list), a substantial number would not pass it. This does not mean that there are not scientists who belong to the American Geophysical Union who understand all this and work in these areas. So Jeff is not wrong in this regard.

        As for the climate scientists that participate in blogs, they are tiny in number and percentage. Your point is?

      • I’m going to assume you aren’t claiming that most climate scientists don’t understand that there are issues with the models, that different models give different results, that as we move in time the models are less likely to be accurate, and that the models are just that models and not complete realistic representations of climate.

        With that said, does it matter if most “climate scientists” don’t differentiate between the role of chaos and randomness in the issues with the models (has anybody made any effort to do so?)?

        Does it matter if a person that is constructing historical temperature records based on whatever proxy doesn’t really differentiate between randomness and chaos?

        Does it matter that a person that is working on how energy is transferred from GHGs to water at the liquid/air interface doesn’t?

        Does it matter if a person that is working on the spread of viruses that infect oysters due to warming doesn’t?

        On the flip side, does it matter that the majority of climate scientists can’t tell you the correct way to convert proxy X into a meaningful temperature record?

        Does it matter if most climate scientists don’t understand how molecules orient themselves at air/water interfaces or the different molecular/bond motions that produce different IR absorbance spectra?

        Does it matter if most climate scientists don’t understand the role of temperature in the formation of different protein conformations and therefore can’t speak to how temperature influence the ability of viruses to reproduce?

        My point is simple:

        The climate scientists that worry about these issues don’t post here (much- Jeff made a single post) so you aren’t really going to see a meaningful discussion on the role of chaos or stochastic processes on climates, how that is handled in model building, and what that means in terms of model verification.

        The end result is you get somebody like me, who is really a biochemist (though I deal with stochastic and chaotic processes in that field), where all I can really state is that even in systems with chaos and stochastic processes it is possible to construct “skillful” models. The relevant models aren’t necessarily garbage, and some variables can be predicted in some systems.

        That models with only a few degrees of freedom can be skillful, even though, obviously, a model would be better if it had more. (As I already stated, a model that could pass the test in Anagnostopoulos 2010 is likely better than a model that doesn’t, though failing that test doesn’t mean a model has no predictive value with respect to temperature.)

        That people who should understand the issues use models (as Tsonis and Swanson did in their publications) and are willing to make longer term predictions (like Swanson did in the RealClimate posts), indicating that they think that the models have some value.

        And thanks to jstults’ link, the peer-reviewed literature suggests the climate models are in fact “skillful”.

        I think that exchange makes my point well. With respect to the claim that the climate models aren’t skillful, I generally understand the concepts well enough to say the point is wrong and that things can be skillful even with a few degrees of freedom, but I’m not a climate scientist and don’t know the literature well enough to refute the point specifically by giving references to the climate science literature or to make specific statements about climate science.

        I’d guess that conversation would have gone differently if a climate scientist who worked on these issues and knew the literature had been involved.

      • Peter, with regards to your statement:

        “I’m going to assume you aren’t claiming that most climate scientists don’t understand that there are issues with the models, that different models give different results, that as we move in time the models are less likely to be accurate, and that the models are just that models and not complete realistic representations of climate.”

        With regards to that statement, i would say that there are many climate scientists that fall into the category you describe, but certainly not most. But that is a far cry from saying these scientists understand anything about nonlinear dynamics and chaos. Climate modelers that work on the dynamical core obviously know about these issues, but this is a small fraction of the people that actually use climate model results.

        In terms of whether climate models are useful, I have stated on numerous previous posts that I think that they are useful, and have given specific examples. My concern has mainly been with the way the AR4 climate model experiments were designed, how the model simulations were interpreted, and insufficient exploration of model uncertainty.

        Note, my personal background in this area includes a Ph.D. study area and thesis that were half geophysical fluid dynamics. I have not done much fluid dynamics research since the 1980s, but I keep up with the literature, and much of my current research is on the interpretation of weather and climate model ensembles. So I can hold my own in a blogospheric discussion on this topic, but I am not an academic expert in this area.

      • “My concern has mainly been with the way the AR4 climate model experiments were designed, how the model simulations were interpreted, and insufficient exploration of model uncertainty.”

        And posts by people with the relevant expertise to have that conversation would actually be useful and interesting, and people like myself, Thomas, Bart, Eli, and Chief would probably learn something, though we probably wouldn’t be involved in the conversation.

        Instead, we got a post by Tomas, who based on the comments by Jeff and the references in jstults’ blog seems rather ignorant of the research and literature in the field, saying that ‘Hey, climate is chaotic and climate scientists are doing it wrong’, and insinuating therefore that all current models aren’t useful.

        And then people like me trying to make the point that despite the chaos and the resulting issues, it doesn’t mean that useful models can’t be produced and very few references to the real peer reviewed literature in the field because none of us really know it.

      • Actually i don’t see where Tomas said “climate scientists are doing it wrong.” Many are misinterpreting chaos in the context of the climate system, and nearly all are neglecting to interpret the climate system (and climate model simulations) in the context of spatiotemporal chaos. Nowhere did Tomas say climate models are not useful.

      • He has an extensive post on his site:
        http://www.variousconsequences.com/2011/01/recurrence-averaging-and-predictability.html

        It should be pointed out that nobody is claiming that climate models are very accurate out to infinity.

        That would just be ridiculous.

        Though I would like to point out this paper that he’s referenced:

        https://pantherfile.uwm.edu/aatsonis/www/2007GL030288.pdf

        “It is argued that simulations of the twentieth century performed with coupled global climate models with specified historical changes in external radiative forcing can be interpreted as climate hindcasts. A simple Bayesian method for postprocessing such simulations is described, which produces probabilistic hindcasts of interdecadal temperature changes on large spatial scales. Hindcasts produced for the last two decades of the twentieth century are shown to be skillful. The suggestion that skillful decadal forecasts can be produced on large regional scales by exploiting the response to anthropogenic forcing provides additional evidence that anthropogenic change in the composition of the atmosphere has influenced the climate. In the absence of large negative volcanic forcing on the climate system (which cannot presently be forecast), it is predicted that the global mean temperature for the decade 2000–09 will lie above the 1970–99 normal with a probability of 0.94. The global mean temperature anomaly for this decade relative to 1970–99 is predicted to be 0.35°C with a 5%–95% confidence range of 0.21°–0.48°C.”

        Despite the chaos in the climate system “skillful” multidecade predictions are possible.

      • Good catch Peter; I’ve linked that paper on this site before (sorry I can’t find the link). The point I was making then was a bit more subtle: it’s interesting to consider the year the “prediction” in that paper was made. A quick back-of-the-envelope with a spreadsheet will show you how much risk they were taking by publishing this prediction after the decade in question was half in-the-books already.

        “the Devil can quote scripture for his own ends” ; – )

      • Ooops; it seems you’ve linked a paper I didn’t actually reference. Here’s the one I referenced on predictive skill. Decadal: yes. Multi-decadal: that’s a stretch. Skillful: well, they got a publishable result, but read the fine print.

      • Hmm, comparing that to my comments earlier about testing the models. How do they do in this paper?

        Firstly, they increase the number of degrees of freedom by moving from 30-year smoothing to decadal tests. This yields 6 data points in hindcast, which is good, and one in “forecast”. Sorry about the scare quotes about “forecast”, but one degree of freedom plus the slight problem that the paper was published in 2005, half way into the forecast period. Whilst the runs were configured in 2000, the test method and criteria really need to be pinned down prior to the forecast period. I would have preferred them to demo 1995-2004 and propose 2005-2014 as a future test.

        The second problem is that the models fail the skill test in 2/6 hindcast decades. Now, bearing in mind these models cannot be immune to experimenter bias*, that is not a terribly impressive result. I would say it looks like the 2000-2009 period passed the test also, which is not far from 1 degree of freedom in almost-forecast mode. That is more impressive.

        But there are a number of serious issues I see with the methodology. They use the difference between the mean of the three decades preceding and the following decade (decade under test). However, the serial correlation means that the last decade in this period will probably be a better estimator than the average of the three decades. There is no guard period between the reference and the test decade, which I would have thought fairly essential for serially correlated data. Finally, the worst flaw: their “naive” forecast that they test against is a control model run without forcing. Yep, they test the model against itself – incestuous testing. The idea is that this is supposed to represent natural variability.

        You can’t just use a model as its own test for natural variability. The only way you could would be to characterise the statistics and the structure of natural variability and show they match. But if you’ve already had to work out the statistics and the structure of natural variability to do this, why not just use these statistics directly?

        Even worse, if the models produce less natural variability, even if their prediction does not change one jot, they will be reported as “more skillful” by this type of analysis. Yuk.

        To be fair to the authors, they do caveat this assumption explicitly. But it rather weakens any claims of “skill” until evidence that natural variability is meaningfully captured is produced.

        * Footnote: to be blunt, the people developing the models are aware of what the historic temperature series looks like, experimenter bias, publication bias etc. will tend to evolve models which generate the 20th century global temperature irrespective of how good the models really are.
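
        The worry about testing a model against its own control run can be put in numbers with a two-line significance calculation (all the numbers below are hypothetical): hold the observed change and the forecast fixed, and only shrink the control run’s natural variability.

```python
import math

def p_value(observed_change, control_sd):
    # Two-sided probability that unforced natural variability alone,
    # modelled as N(0, control_sd), produces a change at least this large.
    z = observed_change/control_sd
    return math.erfc(z/math.sqrt(2))

observed = 0.2   # hypothetical decadal warming, K

p_realistic = p_value(observed, 0.20)  # control run with realistic variability
p_damped = p_value(observed, 0.10)     # control run that damps variability

# Same observation, same forecast -- but the damped control makes natural
# variability look unable to explain the change, so the forced model gets
# declared "skillful" at a much stronger significance level.
```

        Nothing about the forecast improved between the two lines; only the yardstick shrank.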

    • jstults,
      Thanks for useful criticism on sloppy commenting and very interesting references.

  56. Firstly it is necessary to dispel a number of logical fallacies, i.e., equilibrium, sensitivity, and linearities.

    Ghil 2008 captures the arguments succinctly:

    As the relatively new science of climate dynamics evolved through the 1980s and 1990s, it became quite clear from observational data, both instrumental and paleoclimatic, as well as model studies, that Earth’s climate never was and is unlikely to ever be in equilibrium. The three successive IPCC reports (1991 [2], 1996, and 2001 [3]) concentrated therefore, in addition to estimates of equilibrium sensitivity, on estimates of climate change over the 21st century, based on several scenarios of CO2 increase over this time interval, and using up to 18 general circulation models (GCMs) in the fourth IPCC Assessment Report (AR4) [4].

    The GCM results of temperature increase over the coming 100 years have stubbornly resisted any narrowing of the range of estimates, with results for Ts in 2100 as low as 1.4 K or as high as 5.8 K, according to the Third Assessment Report. The hope in the research leading up to the AR4 was that a set of suitably defined “better GCMs” would exhibit a narrower range of year-2100 estimates, but this does not seem to have been the case.

    The difficulty in narrowing the range of estimates for either equilibrium sensitivity of climate or for end-of-the-century temperatures is clearly connected to the complexity of the climate system, the multiplicity and nonlinearity of the processes and feedbacks it contains, and the obstacles to a faithful representation of these processes and feedbacks in GCMs. The practice of the science and engineering of GCMs over several decades has amply demonstrated that any addition or change in the model’s “parametrizations”, i.e., of the representation of subgrid-scale processes in terms of the model’s explicit, large-scale variables, may result in noticeable changes in the model solutions’ behavior.

    As we see above, first, the arguments for equilibrium are logically ill-posed.

    Secondly, the inability to reduce the sensitivity for CO2 doubling, i.e., its irreducibility over the last 30 years or so, suggests either that the problem is ill-posed, or indeed that it is irreducible, with all its random consequences.

    This is an open problem, and it may be that STC is indeed a barrier to the reduction of the uncertainty exponent, i.e., the porous or riddled basins of attraction, e.g., Ott 2006, Scholarpedia 1(8):1701:

    http://www.scholarpedia.org/article/Basin_of_attraction
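
    For readers who have not met fractal basin boundaries: a standard textbook illustration (Newton’s method on z³ = 1; a generic example, not taken from Ott’s article) shows the final-state sensitivity at issue. Along a single line segment in the complex plane, all three basins of attraction appear, finely interleaved.

```python
import cmath

ROOTS = [cmath.exp(2j*cmath.pi*k/3) for k in range(3)]   # cube roots of unity

def newton_basin(z, iters=60):
    # Which root of z^3 = 1 does Newton's method reach from z (None if unclear)?
    for _ in range(iters):
        if z == 0:
            return None
        z = z - (z**3 - 1)/(3*z**2)
    for i, r in enumerate(ROOTS):
        if abs(z - r) < 1e-6:
            return i
    return None

# Scan a vertical segment (x = -0.5, y from -0.9 to 0.9) that passes through
# the fractal boundary region between the basins of the two complex roots.
line = [newton_basin(complex(-0.5, -0.9 + 1.8*k/2000)) for k in range(2001)]
seen = {b for b in line if b is not None}   # all three basins show up
switches = sum(1 for p, q in zip(line, line[1:]) if p != q)
```

    `switches` counts how often the attractor changes between neighbouring sample points; refining the sampling keeps turning up new switches, which is the practical face of final-state sensitivity near such boundaries.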

    Or it may simply be an increase in model error due to additional incorrect parameterization; this is seen in meteorological models, e.g., Nicolis and Nicolis 2007, Orrell 2002.

    We can see arguments for an underlying trend in the T record that is used as evidence for, say, AGW; however, the statistics are distorted by positive excursions of singularities such as El Niño, or negative excursions such as volcanics. These have the effect of distorting the mean; e.g., remove the first-point excursion of, say, El Niño, and AGW disappears into noise over the last 40 years.

    • thx can you provide a reference or preferably a link to the Ghil paper, looks like an interesting one

      • a really good paper, but it defies easy digestion and understanding, for me anyways

      • Redefine our logical assumptions and the reference frame changes, e.g., using Jaynes’ example: Nature does not prepare distributions, Gaussian, Cauchian (or indeed Faustian); it prepares states. Some of these are stable and some are unstable; some are amenable to a brief statistical description, and some are not.

        What needs to be ascertained is what can be described by standard mathematical analysis, such as these authors have used, and what constraints this implies.

        Or to put it another way, what are our limitations: are chaos and complexity constraints on our understanding?

        Do we need a reframing to look at phase states such as ENSO, and try to understand what causes these excursions rather than who is causing them?

      • maksimovich

        A much more elegant description of our conceptual framework problems than I could have managed.

        Thank you.

      • Any work that references Arnold (and Arnold’s tongues, cats, diffeomorphisms, etc.) deserves close attention. Over the years I came up with a rule of thumb: most works in the nonlinear dynamics area that do not have a reference to Arnold can be safely ignored.

        Funny that Ghil’s paper starts with a negative example from climateprediction.net. Here is what I wrote 5 years ago about the shortcomings and conceptual blunders of this experiment at RC:
        http://www.realclimate.org/index.php/archives/2006/04/how-not-to-write-a-press-release/comment-page-5/#comment-12398

    • This may be of some interest if you haven’t seen it. I was struck by a similarity in terms.

      Irreducible imprecision in atmospheric and oceanic simulations
      James C. McWilliams: http://www.pnas.org/content/104/21/8709.long

  57. Oh Bart – my Bart…

    ‘I’d love if Chief Hydrologist and Tomas Milanovic and a team of qualified associates with five years and millions of dollars could get together and address the question more completely.’ Hell yes – what fun.

    I find it hard to argue against the climatological and biological significance of anthropogenic greenhouse gas emissions in a chaotic climate system – even if the bloody planet cools for the next hundred years.

    Sorry to repeat -but I wanted to put this in the right place.

  58. “the system doesn’t follow any dynamics of the kind “trend + noise” but on the contrary presents sharp breaks , pseudoperiodic oscillations and shifts at all time scales.”

    Excellent. Can we now push the funding towards this direction of research and away from the ‘trend plus noise’ nonsense?

  59. As jstults says, “is just a simple way of communicating a complex issue”. In the same sense that the less-than-spherical-cow version of radiative equilibrium with the Earth taken to be a black body is a simple way of communicating.

    Although I’ve not seen it explained in any detail, I’ve come to the conclusion that they are saying: at radiative equilibrium, the problem is a boundary value problem. As Fred summarized here, all we need to know is how much radiative energy enters the system, and that the Earth will adjust so that the outgoing energy equals the incoming [my interpretation].

    Now, that picture is consistent with the picture of an isolated black body in radiative energy equilibrium.

    However, neither picture is related to the systems of interest. Fred’s summary leaps directly to the radiative energy transport equations. That leap avoids all the action. All the critically important action. But again it’s consistent with the black body picture if there was no material surrounding the Earth, or on the Earth, that interacts with the energy.

    So it seems to me that the simple way of communicating a complex problem has led to several fallacies becoming fixed in the discussions of the real problem; (1) the Earth is a black body, (2) with no materials either surrounding the systems or in the systems, (3) in radiative energy transport equilibrium, (4) response is chaotic solely based on extremely rough appeal to temporal-based chaotic response, (5) but at the same time exhibits trends, (6) but at the same time averages of chaotic response are not chaotic, (7) the mathematical model is a boundary value problem yet it is solved in the time domain, (8) absolutely all that matters is the incoming radiative energy at the TOA and the outgoing radiative energy at the Earth’s surface, (9) all the physical phenomena and processes that are occurring between the TOA and the surface along with all the materials within the subsystems can be ignored, (10) including all other activities of human kind save for our contributions of CO2 to the atmosphere, (11) neglecting to mention that if these were true there would be no problem yet we continue to expend time and money working on the problem.

    Very likely a generalized over-statement, the exceptions to which will be quickly pointed out.

    • Dan – With respect, I believe you have created not a straw man, but a straw superman. The climate responds to a flux imbalance at the TOA in such a way as to restore the balance – i.e., it tends toward equilibrium. However, I have never seen a claim that the magnitude of the imbalance is “all we need to know”. The response that is computed involves every one of the items you list – among them terrestrial and ocean heat storage, convective adjustments, regional and seasonal variations, and a multitude of feedbacks. The relevant atmospheric moieties of course include CO2, but also water vapor, aerosols, ozone, and a host of lesser greenhouse gases. Boundary conditions include a temporal element (e.g., the CO2 concentration at Time-1 and Time-2), as well as time-invariant constraints.

      The system is complex, as you note, but it is that complexity, rather than spherical cows, that climate science attempts to understand, model, and predict – with a success rate to date that is modest but encouraging, albeit not likely to evolve into perfection anytime soon.

      Addendum – Modeling the Earth’s surface as a black body in infrared wavelengths is a very good approximation of reality (didn’t we discuss this once before?)

    • Hi Dan,

      A blackbody is one that absorbs all incident electromagnetic radiation. At Earth-like temperatures, the energy would be re-radiated in the infrared. In reality, there are clouds, ice, desert, cleared land, even green trees and blue sea that reflect light back into space. So Earth is not even nearly a perfect blackbody.

      There is a dynamic energy disequilibrium at top of atmosphere that is in accordance with the 1st law of thermodynamics – energy is conserved. In any period:

      Energy in – Energy out = planetary heating or cooling

      If energy in is greater than energy out in the period, then the planet warms, and vice versa. It is a very simple thing, is it not? But absolutely irrefutable.

      Climate is of course driven by the Sun. At the top of atmosphere the inward radiative flux is about 1360 W/m² in the latest estimate, but it changes a little over an 11-year solar cycle.

      A Watt is a unit of power – 1 Watt for 1 second is one Joule which is a unit of energy. About half of the incoming energy is as heat (infrared or longwave) and about half is as visible light (shortwave).

      http://solar.physics.montana.edu/ypop/Spotlight/Today/index.html

      Energy out changes quite substantially – and here is where most of the fun is. It changes because of greenhouse gases, cloud and ice cover changes, land clearing, volcanoes, dust and soot in the atmosphere – all of the physical changes that result in a change in the radiative flux leaving the planet either as IR (heat) emissions or as reflected sunlight.

      It is these physical changes that create Earth’s dynamic energy disequilibrium and therefore warming or cooling.

      There are many indications that climate change is abrupt. ‘Researchers first became intrigued by abrupt climate change when they discovered striking evidence of large, abrupt, and widespread changes preserved in paleoclimatic archives…The chapter concludes with examples of modern climate change and techniques for observing it. Modern climate records include abrupt changes that are smaller and briefer than in paleoclimate records but show that abrupt climate change is not restricted to the distant past.’ http://www.nap.edu/openbook.php?isbn=0309074347

      The implication is that complex and dynamic processes in Earth climate are very sensitive to changes in energy in or energy out – climate responds nonlinearly with abrupt change.

      Chaos theory is simply a theory that tells us something about the properties of complex and dynamic systems. One of these properties is that the system is sensitive to small initial changes. So we say that climate is chaotic – but all that means in a practical sense is that climate is delicately balanced.
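A minimal numerical sketch of that sensitivity to small initial changes, using the classic Lorenz system with a crude forward-Euler integrator (the parameter values and step size are the usual textbook choices, used here purely for illustration):

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # identical except in the 9th decimal place
max_sep = 0.0
for _ in range(6000):        # roughly 30 time units of integration
    a, b = lorenz_step(a), lorenz_step(b)
    sep = max(abs(p - q) for p, q in zip(a, b))
    max_sep = max(max_sep, sep)

print(f"largest separation seen: {max_sep:.3g}")  # grows to order 1-10, not 1e-9
```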

      Planetary emissions change a little in the IR – perhaps enough to trigger chaotic change. They change a lot in the SW – it is here where most of the action is as cloud and ice especially change in a dynamic climate. A lot more visible light is reflected from a snowball Earth than from a blue and green planet. Energy is everything in climate – but it is as a dynamic energy disequilibrium as climate is never static.

      Cheers
      Robert

    • “(6) but at the same time averages of chaotic response are not chaotic,”

      It is the noise in the signal that gives the (misleading) appearance that the average of chaos is not chaotic over time.

      http://prl.aps.org/abstract/PRL/v65/i12/p1391_1

      • ‘In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.’

        ‘In probability theory, the central limit theorem (CLT) states conditions under which the mean of a sufficiently large number of independent random variables, each with finite mean and variance, will be approximately normally distributed.’

        I don’t see how this helps

      • . . . the result of performing the same experiment a large number of times.

        In the case of running GCMs with different ICs, parameter values, discrete-grid and time step sizes, continuous-equation models, different numerical solution methods, and other changes among the models / codes / users the same experiment is not conducted a large number of times. The same goes for measuring the temperature of the atmosphere and oceans. I think the LLN requires that the same procedures be applied in the same un-changing manner to the same un-changing object.

  60. Chief

    I’ve been arguing against the significance of CO2 for a quarter century.

    What (large) significance I assert has survived that process intact.

    Right now, I’ve got it down to a new pencil scribbling out the IR window in the non-saturating, non-overlapping 1-30 micrometer band for every square inch of sky every decade, plus the equivalent in plants of a growth hormone (NOT a fertilizer, it makes plants leggier and less durable — good for weeds, bad for crops) being administered without control to every wild weed in the world, plus alkalinity dropping measurably and having shown habitat stress effects on plants and animals.

    If I manage to knock this amount down any further, or have to admit there’s enough certainty for those items I don’t affirm (like temperature, which I’m startled to see how good the information has become in a quarter century), I’ll let you know.

    At the same time, if the Arctic sea ice extent drop is or isn’t caused by CO2 perturbation, it’s almost certainly going to perturb the entire Northern Hemisphere climate by enough on its own to be in the category of a Heinrich event.

    So your troubles are my troubles.

    • Hi Bart,

      I am rooting for abrupt climate change – inevitable surprises. But I think to see the future we need a book with ‘Don’t Panic’ in big pink letters on the front.

      I am almost certain, however, of the resilience and adaptability of life and the toughness and inventiveness of humanity.

      A Heinrich event would certainly upset the applecart.

      Cheers
      Robert

      • From inside one, in a human lifetime, could we even notice the thing?

        I don’t make my arguments from an attitude of alarmism or sense of all-knowingness.

        I generally subscribe for myself to principles that are themselves generally-accepted, such as that it is worthwhile to decrease human misery and increase opportunity for human power to make individual decisions tending to lead to individual happiness.

        To me, the principled approach that comes out of the Chaos view of climate is that while there will be abrupt misery-inducing (though mathematically fascinating) changes, fewer and smaller are generally less miserable and less expensive, and less expensive climate means more economic abundance to use to pursue mathematical studies.. erm, I mean happiness.

        So, fewer abrupt changes means fewer and smaller perturbations. While it’s not a guarantee — no such thing in Chaos — it’s an observed correspondence.

        CO2 emission is the big perturbation, and probably the cheapest to fix among the big world-changers.

      • I think there is a worthwhile dialectic to be had here – probably the most central of all. I will have a think.

  61. Judith, I can’t reply to your post, but Tomas said:

    “Despite the fact that this paper finds a MAJOR result and is the right paradigm for a study of spatio temporal chaotic systems at all time scales so also for climate, I suspect that nobody has read it.
    And probably only few would understand the importance of both the result and of the paradigm. ”

    Tomas is not just saying that climate scientists have it wrong, he’s saying that (he thinks) NONE of them read the relevant literature to know they have it wrong, and even if even the relevant literature was given to them to read only a few would understand the literature well enough to realize that it showed they were doing it wrong.

    And no Tomas didn’t specifically state that climate models are useless, which is why I used the word insinuate. However, if you think that I am wrong, take his post to work with you, find 10 non-climate scientists that don’t read this blog, and ask them if they think the author of the piece thinks climate models are useful at all, and then let us know what the results are.

    • Well, this doesn’t seem to be going anywhere. Tomas says he thinks this line of inquiry and interpretation is important; I agree. To see who is paying attention to the Tsonis paper, go to Google Scholar. It shows a total of 59 citations. Look at who is citing the papers – mostly skeptics (e.g., Lindzen, Singer).
      http://scholar.google.com/scholar?start=0&hl=en&as_sdt=5,41&sciodt=0,41&cites=14495501804131994031

      • Judith,
        I would keep these two issues separate. By the first issue I mean the importance of unpredictable or regular transitions and oscillations resulting from the internal dynamics, and by the second the Tsonis papers.

        Jstults describes well in his blog what kind of issues are involved. I am not trying to judge his conclusions or the details of his posts, but I certainly think that the issues have been described well.

        The Tsonis paper is a first and simplistic attempt to find teleconnections between selected few variables. This is interesting. I have no reason to say that the results should be wrong or misleading, but to me the evidence appears far too weak for drawing strong conclusions.

        I have been worrying that even common properties of all present climate models, and of models that can be developed in the near future, may share a common bias towards a stability that is not necessarily true for the real Earth system. This is not based on sufficient knowledge of the actual models but on what I know more generally about hyperbolic partial differential equations and the problems in solving them numerically. Even there my expertise comes less from my own actual modeling experience than from discussions with close colleagues who have done such work, and from giving a lecture course on the subject (which required a lot of extra effort, when I noticed that the textbook that I used contained some rather serious weaknesses. Going through all that forced me to learn more than I expected).

      • I’m working on a BVP/IVP post, hope to have it up tonite

      • Pekka,

        I think you have mis-characterised the Tsonis findings. It is rather about finding signatures of chaotic bifurcation in important climate indices.

        They found that where a ‘synchronous state was followed by a steady increase in the coupling strength between the indices, the synchronous state was destroyed, after which a new climate state emerged. These shifts are associated with significant changes in global temperature trend and in ENSO variability.’

        It is equivalent to the ‘slowing down’ in the paleoclimatic record of Dakos et al 2008 –
        http://www.pnas.org/content/105/38/14308.full.pdf – a theoretical property of chaotic states.

        My interest commenced in the late 1980s with an observation by geomorphologists that streams in Australia had changed form abruptly after the late 1970s. Why these abrupt shifts in rainfall?

        Rather than teleconnections, cycles, oscillations or (in the case of Australian rainfall) regimes – which are all concepts we need to throw in the bin – what we have is strange attractors and chaotic bifurcation.

        While the network model is very simple – the result is of great significance.

        Cheers
        Robert

      • Rob,
        I didn’t really attempt to characterize the data in any more detail than the number of words used allowed. That was enough for the message I wanted to convey. I have read their papers and know what the idea in them is. It is interesting and worth presenting, but not based on a sufficient amount of data to convince me.

      • Pekka,

        ‘The Tsonis paper is a first and simplistic attempt to find teleconnections between selected few variables.’

        Teleconnections is the wrong idea – rather it is measure of autocorrelation in the global climate system – as in the Dakos et al paper which you didn’t bother to look at.

        So no – I don’t think you have read and understood. And understanding is made more difficult in your unfamiliarity with the underlying data – as shown by your misinterpretation of the Clement paper in the clouds thread.

        Robert

    • Hi Peter,

      I think Tomas underestimates by quite a bit the numbers of scientists who are coming to a realisation of chaos in both climate and models. To quote the abstract of James McWilliams 2007 PNAS: ‘Irreducible imprecision in atmospheric and oceanic simulations’ – http://www.pnas.org/content/104/21/8709.long

      ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’

      The ‘ensemble’ methodology currently in use is to use a single run from a diversity of models, graph these and take a mean. Given the range of possible outcomes of nonlinear processes in individual models – the usefulness of this is a matter of perspective. It is very far from the ‘systematically designed model families’ of James McWilliams.

      I have quoted Tim Palmer elsewhere on this thread –
      http://judithcurry.com/2011/02/10/spatio-temporal-chaos/#comment-41268 – there is a bio of Tim Palmer here – http://www.earth.columbia.edu/sop2006/bios/palmer_t.html

      Palmer asserts that because of the range of uncertainty in many factors – we should think of the prognostication of weather and climate in terms of a probability density function. (This has led me to propose that the collapse of the probability function leads to many climates in alternate universes – the many climates interpretation. Obviously – as there is only Bart and myself in our mutual admiration society – I don’t have a reputation to undermine.)

      Tomas informs me that the Tim Palmer phase space dV is of practically infinite size. As someone who has training in both engineering and environmental science – I expect that there are a practically infinite number of fudge factors that can be used to constrain the size of the phase space.

      You need to ask yourself quite complex questions about the theoretical limits of models’ abilities to faithfully represent real-world climate physics – and the implications of that. ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ James McWilliams

      Cheers
      Robert

      • This has led me to propose that the collapse of the probability function leads to many climates in alternate universes – the many climates interpretation. Obviously – as there is only Bart and myself in our mutual admiration society – I don’t have a reputation to undermine.

        Alternately, to the same underlying climate in many alternate points of view?

        A hologram is as good as a multiverse.

        And don’t worry too much about the admiration. I say the same about all the chief hydrolog..

        Oh.
        Wait.
        You’re a ‘Hydrologist’?!
        I’d been thinking ‘Hydraulogist’ all along.
        Well, that’s entirely different.

      • smile when you say that pardner!!!

  62. A lot of the mainstream climate science I’ve seen assumes climate is not chaotic: that the averaging of weather over time yields a predictable long-term average, and that by averaging multiple climate models, the result becomes even more certain. This assumption is a necessary prerequisite if one is to put faith in the IPCC projections.

    This assumption appears to result from a naive application of the Law of Large Numbers, where it is found to be the noise, not the temperature that causes convergence of the average. In effect, long term climate averages only appear predictable because of the noise in the signal.

    Globally coupled chaos violates the law of large numbers but not the central-limit theorem
    http://prl.aps.org/abstract/PRL/v65/i12/p1391_1

    • I read the abstract – looks interesting. I’ll try to track down a copy.

    • But – “With the inclusion of noise, the law of large numbers is restored”. As I was saying up above…

      • I understand the paper to say that the noise in the signal is what gives the impression that the law of large numbers holds in chaotic systems like climate. If the noise signal is large enough, the climate signal does have a long term average. It is the average of the noise.
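For the curious, here is a toy sketch in the spirit of the linked Kaneko PRL paper – globally coupled logistic maps whose mean field one can examine as the ensemble grows. The map form, coupling and parameter values are all illustrative assumptions; whether the law-of-large-numbers violation actually shows up depends on the parameter regime:

```python
import random

def mean_field_variance(n_maps, coupling=0.1, a=1.99, steps=2000, burn=500):
    """Variance of the instantaneous mean field of n globally coupled
    logistic maps x -> 1 - a*x^2 (a Kaneko-style toy model; parameter
    values are illustrative assumptions)."""
    random.seed(0)                                   # deterministic demo
    xs = [random.uniform(-1.0, 1.0) for _ in range(n_maps)]
    fields = []
    for t in range(steps):
        fx = [1.0 - a * x * x for x in xs]
        h = sum(fx) / n_maps                         # the mean field
        xs = [(1.0 - coupling) * f + coupling * h for f in fx]
        if t >= burn:
            fields.append(h)
    m = sum(fields) / len(fields)
    return sum((f - m) ** 2 for f in fields) / len(fields)

print(mean_field_variance(100), mean_field_variance(10000))
```

Comparing the variance at N = 100 against N = 10000 lets one check whether the mean-field fluctuations shrink like 1/N, as the law of large numbers would suggest, or saturate, as Kaneko reported.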

  63. OK, there are a lot of very smart people on this thread so I went away and did a little research looking for the layman’s term to describe why I think current climate research fails to adequately account for the chaotic nature of our climate. And the word is “complexity”. From http://www.au.af.mil/au/awc/awcgate/ndu/valle.pdf page 5 section on complexity… “As with chaos, the behavior of self-organizing complex systems cannot be predicted, and they do not observe the principle of additivity, i.e., their components cannot be divided up and studied in isolation.”

    And this is where current climate science and scientists fail to apply the appropriate level of humility when publishing their claims. So for example, when a value of 2 degrees temperature increase is given as a “tipping point”; where is the evidence provided to support such a claim? Because it isn’t given and anyone with even a passing understanding of chaotic systems knows that such a valiant claim is completely baseless unless backed up by thorough treatment of the system as complex as it is. And I should expand this point; that the climate system is one of if not *the most complex natural systems* that humanity is currently trying to analyze. And the examples of bold unsubstantiated claims go on uncontested but broadcast and repeated ad nauseum by the media and by green groups and other vested interests. The polar bear will be extinct because the arctic will be ice free because the old ice is thinning and the new ice is too weak and the wind doesn’t count and the ocean currents are not involved – just to mention one other broad sub group. Basically, the climate scientists are continuously attempting to use reductionism as their way to communicate the threat of global warming but the public intuitively understands that such claims are too simplistic and therefore can be ignored.

    Meanwhile, the last twenty years of GCM’s are extraordinarily too simple to explain the massively complex system that is our global climate. And the scientists won’t admit it.

    • So we have a complex and dynamic climate system that is sensitive to small changes in climate variables? That this implies that there is no problem with anthropogenic greenhouse gas emissions is lacking somewhat in internal consistency.

      The potential for abrupt and violent climate change is somehow a lesser problem than a steady evolution of climate?

      • You can’t apply the precautionary principle just because you can’t predict the future.

      • You can’t predict if you will get in a car accident, but the Precautionary Principle says you should drive around in a Hummer or similar as it will significantly reduce your chance of injury or death.

        In fact, you can use the Precautionary Principle to justify just about anything. It just depends on finding a greater risk and ignoring the cost/benefit.

        Most deaths happen in hospitals, so as a precaution never enter a hospital. Most deaths at home happen in the bath, so as a precaution never take a bath.

      • And breathing releases CO2 which causes global warming so as a precaution stop breathing.

      • Breathing does not contribute to the net atmospheric accumulation of CO2.

      • Each person releases about a ton of CO2 from respiration each year. If I can get just 20 people to agree not to have children that otherwise would have, then the net reduction in CO2 would more than equal all the CO2 I produce each year for the rest of my life and I am carbon neutral.

      • Your respired CO2 does not add to the atmospheric concentration of CO2. You are carbon neutral already as far as respiration is concerned.

        Whereas driving a Mustang increases risk of death or injury but it is a lot more fun. Ditto with bathing – women like you a lot better if you are clean and drive a Mustang. And I am sure you would like that society took the precaution of having hospitals when you roll over in your Hummer.

        As Tim Palmer said above – climate is prognosticatable as a probability density function. It is absolutely certain that there are some risks of highly adverse outcomes – it becomes standard risk management after that. Get used to the idea – and buy into creative, pro-market and pro-people solutions. There is always a balance of risk and a need to find the best compromise.

      • or below – chaos in operation?

      • I am tempted to flippantly ask why not? But it is not the precautionary principle at all. It is hard headed risk management.

        ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t) where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space. Prognostic equations for ρ, the Liouville and Fokker-Planck equations, are described by Ehrendorfer (this volume). In practice these equations are solved by ensemble techniques, as described in Buizza.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006).

        The number of ‘state spaces’ are very large – but they very much include the possibility of extreme change. It seems very much to me to be similar to designing a water storage for a 10,000 year flood. This is the rare event with extreme consequences.

        If you can’t predict it won’t happen – it is a quandary.
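Palmer’s ρ(X,t)dV prescription is, in practice, approximated by ensembles. A hedged sketch of the idea with a stand-in toy map (not a real forecast model – the map, the initial-condition spread and the “extreme” bin are all invented for illustration):

```python
import random

random.seed(1)

def toy_model_step(x):
    # hypothetical nonlinear map standing in for one model time step
    return 3.9 * x * (1.0 - x)

# ensemble of initial conditions drawn from the assumed IC uncertainty
ensemble = [0.5 + random.gauss(0.0, 0.01) for _ in range(10000)]
for _ in range(20):                      # advance every member 20 steps
    ensemble = [toy_model_step(x) for x in ensemble]

# rho(X, t) dV: probability that X lies in the bin [0.8, 1.0] at time t,
# estimated as the fraction of ensemble members that landed there
p = sum(0.8 <= x <= 1.0 for x in ensemble) / len(ensemble)
print(f"estimated probability for the chosen bin: {p:.3f}")
```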

    • Well patrioticduo, sounds like you might not be a scientist per se but you certainly read right through that veil and with paper in hand. It’s nice that someone from outside can so easily see what most climate scientists clearly cannot even comprehend, for their predictions keep coming out day after day after day. Thanks. Will enjoy reading that paper.

      • I am certainly not a scientist, but I did stay at a Holiday Inn last night. The paper is military and relates to adversaries and starting conditions. It just had the best wording summation for “complexity”. And I am assuming you’re not being sarcastic. It can be difficult to tell on blogs.

  64. That is the beauty of climate modelling. Due to the sensitivity of chaotic systems to initial state, by careful adjustment of the initial parameters you can come up with just about any answer you want to support just about any policy you want to promote.

    It is sort of like building railroads in the old days. If you knew where the railroad was going to be built, you would buy up the land along the way before the railroad came through and make a fortune.

    Climate models let you control where money is going to be invested in the future, by controlling government policy. This allows investors to buy up specific industries, and make a fortune when the climate models predict that the government should invest heavily in those industries for the future.

    Along with many scientific careers, many major companies and pension plans are now heavily invested in the AGW model predictions in anticipation of government policy changes. They stand to lose significant sums of money if CO2 abatement policies are not enacted.

    This is a powerful lobbying force in both government and the media. Except for the combination of both the financial meltdown and climategate, these policies would already have been enacted.

  65. I have, to me, a very perplexing question.

    I look at the Wong et al paper with the ERBS data in the tropics.
    http://journals.ametsoc.org/doi/pdf/10.1175/JCLI3838.1

    We have the ISCCP-FD data with pretty much the same results.
    http://isccp.giss.nasa.gov/projects/browse_fc.html

      We have Pacific decadal pattern (PDP) observations of cloud and SST – including detailed observations of cloud processes in Zhu Ping et al (2007).

    Regardless of Fred rabbiting on about how good the models are – they deal only in cloud feedback – whether positive or negative is moot.

    Do we have empirical evidence of secular changes in clouds that invalidate the models regardless or do we not?

  66. Roger Pielke is reporting a new study about random walks in global climate, which I found quite interesting:
    http://pielkeclimatesci.wordpress.com/2011/02/14/new-paper-random-walk-lengths-of-about-30-years-in-global-climate-by-bye-et-al-2011/
    Tomas, any thoughts on this?

  67. Tomas Milanovic

    Judith

    Thanks for having cleaned up the thread. I have waited until the activity dropped down to attempt a summary. Not a synthesis, because there are so many redundancies and so much Johnsonian (I have just invented a new adjective) stubbornness from some posters that a clean synthesis is impossible.
    But as I suppose that you put this post up because you are genuinely interested in new relevant ideas, I hope that this summary will help.

    1)
    I commend Jstults for excellent and relevant contributions. He has a good knowledge of the literature but, most importantly, he is able to manipulate chaotic ODEs. SpenceUK also added good contributions. Dan Hughes’ blog on numerical solutions of the Lorenz equations is a good read.

    2)
    This is NOT about numerical models. This cannot be about numerical models.
    I hope that by now most have understood that from the mathematical point of view temporal chaos theory is about solutions of non linear ODEs and spatio-temporal chaos about solutions of non linear PDEs.
    The former which is much older than the latter (Poincaré 100 years ago on hamiltonian conservative systems and Lorenz 50 years ago on 2D fluid dynamics) is a good introduction to important concepts and mathematical tools but of little to no help in climate matters.
    As numerical models cannot find solutions of any system of non linear ODEs or PDEs because the system is simply spatially too huge and all the equations are not known anyway, they have no relevance to what I discuss here.
    If I attempt to characterise what they are in my eyes, I would say that they are simulators of the evolution of the system under approximate constraint of conservation laws.
    But as R.Hilborn has rightly written “The dynamically allowed space is much smaller than the space that is allowed by the conservation laws”.
    Btw I recommend R.Hilborn’s excellent textbook (http://www.amazon.com/Chaos-Nonlinear-Dynamics-Introduction-Scientists/dp/0198507232/ref=cm_cr_pr_product_top) for anybody who would like to go a bit farther than the basics of the non linear dynamics.
    From that follows that whatever states the numerical simulation computes, it cannot be sure that they are dynamically allowed. Many of them may very well be just plausible states of the fields but the system will never visit them because they are dynamically forbidden.
    This poses the question of the metrics of the states (how do we define a state of the system so that this definition leads to a meaningful metric), which is another debate.

    3)
    There is a fundamental difference, both mathematically and physically, between temporal chaos and spatio-temporal chaos. Judith rightly notes that few climate scientists have knowledge of temporal chaos, let alone spatio-temporal chaos. Even Tsonis and Swanson are not really experts in chaos theory but their paradigm (coupled oscillators) is identical to the spatio-temporal chaos paradigm. That is why their work is qualitatively different from the “orthodox” school.
    My personal opinion is that I do not believe that numerical models (GCM) can give meaningful support or development to their work but I do not know if they believe it themselves. For that we’d need their opinion.

    4)
    There is still the old school that continues to equate chaos with randomness. I am not sure that they are willing to learn modern physics so it is certainly not blog discussions that would convince them.
    Characteristic of this school is the following quote :
    But as soon as you add any sort of noise, your perfect chaotic system becomes a mere stochastic one over long time periods, and probabilities really do apply.
    A nice review of the relationships between chaos, probability and statistics is this article from 1992:
    “Statistics, Probability and Chaos” by L. Mark Berliner, Statist. Sci. Volume 7, Number 1 (1992), 69-90.
    http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177011444

    I suspect that these people didn’t really read the link.
    The part relating to stochasticity admits that it is merely a qualitative overview and references the fundamental papers among which Ruelle and Eckmann.
    It has apparently escaped the author of the quote that I linked the R&E paper in the very first post, and he certainly didn’t read it.
    What the Berliner’s summary says is that :
    IF we have a temporally chaotic system and IF this system is ergodic THEN a stochastic interpretation is possible.
    Unfortunately neither of the ifs is valid for weather/climate.
    Despite this rather obvious point, these people still talk about “perturbations”.
    Actually the chaos doesn’t exist for them because there are “perturbations”.
    This is a complete misunderstanding of chaos theory.
    There are no “perturbations” inside a chaotic system – a solution of the dynamical equations is what it is and all the “perturbations” are already accounted for.
    The system cannot be decomposed in a linear way in a sum : nice smooth if possible deterministic solution + noise or “perturbation”.
    Of course the external energy supply which is necessary to produce chaos is not necessarily constant. It may even be considered random. This doesn’t imply in any way that the system suddenly becomes random too, and none of the quoted papers says anything approaching that.
    A kind of randomness or more precisely the existence of an invariant (of the initial conditions and of time!) probability distribution of the states exists only for ergodic systems.
    But the ergodic property is NOT a given.
    Even in temporal chaos some systems are ergodic and some are not.
    In spatio-temporal chaos the question is fully open especially as a complete ergodic theory of spatio-temporal systems doesn’t exist yet.
    In any case the ergodicity has nothing to do with “perturbations” or variations of the external energy fluxes.
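A small numerical check of the point about ergodicity: for the fully chaotic logistic map x → 4x(1−x), which is known to be ergodic with invariant density 1/(π√(x(1−x))), the time average of the state along a single orbit approaches the ensemble mean 0.5. As stressed above, this is a property of this particular system and is NOT a given in general, least of all for spatio-temporal chaos:

```python
def logistic(x):
    # fully chaotic logistic map, r = 4
    return 4.0 * x * (1.0 - x)

def time_average(x0, steps=100000, burn=1000):
    """Time average of x along a single orbit, after a transient."""
    x = x0
    for _ in range(burn):
        x = logistic(x)
    total = 0.0
    for _ in range(steps):
        x = logistic(x)
        total += x
    return total / steps

t_avg = time_average(0.3)
print(f"time average along one orbit: {t_avg:.3f}")  # close to the ensemble mean 0.5
```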

    • Tomas, thanks for this summary, I will add this to the main post.

    • Many thanks, Tomas, for providing a very interesting thread. And, of course, many thanks to Dr. Curry for hosting it. Certainly there are quite a few interesting ideas in this thread that I’ll be looking to read into further in the future.

  68. Tomas,

    “Spatio-temporal chaos” may be the missing piece of the puzzle.

    I hope you will respond to my question about “spatio-temporal chaos”
    http://judithcurry.com/2012/02/15/ergodicity/#comment-169283