by Tomas Milanovic
On the thread Trends, Change Points and Hypotheses, the issue of ergodicity was mentioned numerous times, and some clarification of this concept is needed.
So what is and what is not ergodicity and why does it matter in physics?
It is annoying to see that many people use the word ergodicity without knowing what it means and how it should be interpreted. It is not annoying because it is mostly wrong, but because those readers who are not intimately familiar with these concepts will get confused and ultimately get farther from understanding rather than nearer.

The ergodic property is a very general mathematical property of measurable sets. A measurable set can be defined as (X, µ) where X is some set (for example the standard Cartesian space R^n) and µ is a measure. For purists: I left out the sigma algebra; it is not necessary to understand the rest.

Now let’s take some transformation T : X → X (T is a map, a function) and request that T preserves the measure µ, i.e. µ(T^-1(A)) = µ(A).

Okay, that’s everything we need in the matter of definitions – you need a space X, a measure µ and a transformation T, and we want this T to preserve the measure. The ergodic property of this triplet is simply the statement:

If for any A that is a subset of X we have T^-1(A) = A, then µ(A) = 1 or 0.

What does that mean in words? If there are some subsets A (imagine a set of points) that are left invariant by the transformation T, then their measure (size) is either 0 or the measure of the whole set X (because µ(X) = 1). Even more crudely, an ergodic transformation doesn’t get “stuck” in some particular part of the space.

Now an ergodic transformation has a property which is expressed by the ergodic theorem, and that is why we are actually interested in ergodicity at all. Here we must first define an iteration average and a space average on X for some function F. The iteration average of F is

F_ia = lim (n → ∞) (1/n) Σ_{k=0..n-1} F(T^k(x)).

This rather heavy formula is just saying that you take some point x in X and make it move by applying T on x, then T on T(x), etc.
If you take F at each of these points and make an arithmetical average, you obtain the iteration average. The space average is easier:

F_sa = ∫_X F dµ.

And the ergodic theorem says simply that, for almost every x,

F_ia = F_sa.

And that’s all we need to know as far as the maths go. There are many interesting additional results and consequences, but the real maths are not easy and most readers would probably stop following. The point of this short but rigorous introduction was to demonstrate that ergodicity is not some fog that could be interpreted by anybody as it suits his particular view.

Now why does that matter in physics in general and in the atmospheric system in particular? Well, as we have a rigorous and very general mathematical theory, we can now take some particular cases with physical meaning. So here we go: X is a Cartesian finite dimensional space and its points are states of a dynamical system (each state is defined by N coordinates). X is then also called a phase space. Suppose T is ergodic (please note, and I stress, that I wrote suppose). It is not a surprise that all this looks like Hamiltonian mechanics, because it IS Hamiltonian mechanics. It follows that all those who are not familiar with statistical mechanics, and for whom KAM is some Chinese abbreviation, should abstain from talking about ergodicity.

Well, and now we can apply everything we already know from above to this particular case. First, there are no subsets of the phase space where the system stays “trapped”. It roams almost everywhere. Besides, one can easily demonstrate the non-intersection theorem, which shows that the system never passes twice through the same state. We also know from the ergodic theorem that if we follow the states of the system for an infinite time (remember: infinite iterations of T) and take the time average of some parameter of the system, then this average will be equal to the (probability weighted) average of this same parameter over the whole phase space.
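To make the two averages concrete, here is a minimal numerical sketch (not from the post; the map, the function F and all constants are illustrative choices). An irrational rotation of the circle is a textbook example of a measure-preserving, ergodic transformation, so its iteration average should match the space average:

```python
import math

# Irrational rotation of the circle [0, 1): T(x) = (x + alpha) mod 1.
# With alpha irrational, T preserves Lebesgue measure and is ergodic,
# so the iteration (time) average of F along one orbit should match
# the space average of F over [0, 1).
alpha = math.sqrt(2) - 1          # an irrational rotation angle
F = lambda x: math.sin(2 * math.pi * x) ** 2

def iteration_average(x0, n):
    """Average of F over the orbit x0, T(x0), T^2(x0), ... (n steps)."""
    total, x = 0.0, x0
    for _ in range(n):
        total += F(x)
        x = (x + alpha) % 1.0
    return total / n

# Space average of sin^2(2*pi*x) over [0, 1) is exactly 1/2.
space_average = 0.5
time_average = iteration_average(x0=0.123, n=200_000)
print(abs(time_average - space_average))   # small: the two averages agree
```

Note that the result does not depend on the starting point x0, which is exactly the "independence from initial conditions" discussed below.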
This is very interesting because we are mostly interested in the latter, while we can experimentally measure only the former. What is even more interesting is that it also follows that there exists an invariant PDF (the density of µ) that doesn’t depend on initial conditions. Of course it depends on T (i.e. on the form of the dynamics) but not on the initial conditions. And THAT is very rich! It means that by taking some (any) initial condition and following the trajectory of the system for a (very) long time, you will obtain empirically ONE PDF, but thanks to ergodicity you know that you don’t need to redo it for the rest of the infinity of initial conditions, because your PDF is the one unique PDF for the whole system.

Caveat: when I say infinite time, I mean it seriously. You really have to observe for a long time, and it would take a whole new post to discuss how long one has to observe to apply the ergodic theorem in practice. So now I hope everybody understands how confused a discussion can become when people mix up stochasticity, randomness and ergodicity.

Last, let us make a HUGE qualitative leap and cross from finite dimensional X’s (aka Hamiltonian mechanics) to infinite dimensional X’s (aka field theories). From the mathematical side not much changes – the formulation of the measurable set theory doesn’t prescribe any particular X’s, µ’s and T’s. But physically everything changes – our points become functions, the measures are based on square integrable fields, and trajectories can no longer be geometrically visualised. Navier Stokes, and by extension weather and climate, belong to this category. Of course one can also talk about ergodicity here, but it takes considerably more skill and training, and no simple analogies to statistical mechanics or thermodynamics work anymore.

And what has chaos to do with all this? Well, chaos theory is interested in a particular category of T’s (recall: T defines the dynamical laws).
Namely, T’s with sensitivity to initial conditions, which represent a very large subset of possible T’s in physics. So when one studies a chaotic system, one can and must restrict the phase space to the allowable finite subspace which is the attractor. Of course there are no mysterious “stochastic” perturbations which make the fundamental features of the behaviour of chaotic systems disappear – Navier Stokes is deterministic and chaotic all the way down to the quantum scales. I would like to describe now how ergodicity can be used for these particular cases, but I am afraid that the post is already too long anyway.

JC comment: This is a topic that I want to understand better, and I am hugely appreciative of Tomas’ contributions here; see also his previous posts. I can’t say that I now understand this, but at least I now know enough not to use the term ‘ergodic’, since I don’t really understand it. I’m hoping that some further discussion of this will illuminate us all.

Tomas,
I noticed that velocity and motion have NEVER been included.
T
I know that I have not slept on it so I probably have something amiss; yet the following paragraph struck me:
“First, there are no subsets of the phase space where the system stays “trapped”. It roams almost everywhere. Besides, one can easily demonstrate the non-intersection theorem, which shows that the system never passes twice through the same state.”
“Never passes twice through the same state”. My mind leapt to the model industry: past performance is no predictor of future performance. Efforts to tune the GCMs to the past don’t have relevance to predicting weather/climate, as we are not going to be going through that same state again.
Now I really want to go to bed and sleep on it!
Thomas, is the following factual information an example of ergodicity?
1. Far beyond the opaque photosphere, in the solar core, neutrons are now becoming atoms of H and then fusing into He, etc.
Pulsar =(nemission)=> neutron =(ndecay)=> H =(fusion)=> He, etc
2. The system exploded five billion years (5 Gyr) ago, ejecting the atoms that now orbit the Sun.
http://www.omatumr.com/Data/1994Data.htm
http://www.omatumr.com/Data/1996Data.htm
3. The system evolved by ergodicity (naturally but inexplicably) into Thomas, Joe, Judith, RiHo08 and Oliver, each consisting of about
a.) 10,000,000,000,000 living cells,
b.) Produced by dividing a one-cell zygote,
c.) Consisting of ~100,000,000,000,000 atoms,
d.) That were hot, poorly mixed supernova debris 5 Gyr ago, and
e.) Are now assembled together into an assortment of different types of cells (toe nails, eye balls, ear drums, ankles, elbows, tits and tails) that together contain ~1,000,000,000,000,000,000,000,000,000 atoms functioning as living, thinking creatures trying to decipher their own origin and knowing that not a single cell in their body has remained there over their own imaginary life (~75 years for Oliver).
Is that not an example of ergodicity?
Thanks, Thomas, for responding below to steps 1 and 2.
Regardless of those parts, it is in step 3 where the system seems to have evolved by ergodicity (naturally but inexplicably) into Thomas, Joe, Judith, RiHo08 and Oliver – living creatures capable of studying and debating the origin of the 10^27 atoms that comprise each of them.
Please indicate if ergodicity and spatiotemporal chaos theory are simply a more mathematically appealing way of saying God or Divine Design?
If there is a difference, please explain it.
Thanks
Tomas,
The start of the theory is based on “balance and energy living forever”.
What if: The Universe was NEVER designed for balance and energy living forever. Every planet was NOT designed to be in balance, motion is generated by being slightly off balance. Two energies hitting each other cancels each other out. But in circular motion, these energies are always slightly off with two competing different energies of centrifugal force and velocity.
Cannot have point to point contacts as everything is moving and we have no actual bearing on any stopped object. At any moment, every object has shifted or has changed.
Tomas, I am curious if atoms may be a good way to explain ergodicity?
A single hydrogen atom with one electron has a probability shell around a single attractor.
A hydrogen molecule has two attractors with two shells and a shared probability between attractors. This adds the probability of stretch between attractors and rotation of the molecule, expanding the probability distribution of the electrons.
Then a water molecule has three attractors with one stronger adding more vibration states and more complexity to the PDF.
I noticed that velocity and motion have NEVER been included
Well yes and no.
If you consider the succession of points x, T(x), T(T(x)), etc., you have “motion”, and if you take a differential you have “velocity”.
The ergodic theory is very general – you only need to give X (space), µ (measure) and T (dynamical law) and the results will be valid for ALL X’s,µ’s and T’s.
For instance if you take for T the Navier Stokes, you obtain fluid mechanics.
And there you will meet the “real” motions and velocities (of fluids).
But things that happen in the phase space are rather abstract, and a “movement” in the phase space doesn’t require that something really moves.
It just means that the state of the system changes, “moves” from state A to state B.
This is again a comment that causes both positive and negative reactions in me.
On the one hand the comment is true, and there has certainly been confusion about ergodicity. The comment gives some information that may be valuable from that point of view.
On the other hand it’s possible to use the concept of ergodicity in a way that’s not mathematically strictly correct but still reflects exactly those same physical principles for which the mathematically precise theory also applies.
It’s also true that formally correct theories can be used in grossly misleading ways. The formal definition of ergodicity depends on assumptions that are contrary to any real-world application, like taking time averages over infinitely long periods or making assumptions that are equivalent to that. It’s also true that many systems don’t satisfy the assumptions precisely, but may be so close to satisfying them that theories formulated on the formal assumptions also apply very well in practice to these cases.
Many fields of mathematics have been developed upon methods invented by physicists and made axiomatically valid much later. There certainly remain many successful mathematical methods in physics that lack formal justification in a form acceptable to mathematicians.
Formal correctness is a value by itself, but it must not be given too much weight. Competent physicists do calculations all the time which are formally somewhat dubious but still give reliably good results. This is a form of professional competence.
Another problem is that I don’t believe that many readers can really appreciate the content of Tomas’ text. Even many of those who can formally follow the argument may be left unable to understand its real meaning for physics. Personally I try to express the physical ideas as well and in as understandable fashion as I can, even when that involves some sloppiness in the use of the terms.
Somebody (perhaps Mattstat or WHT) presented here some time ago references to recent papers that tell about difficulties that top specialists of statistical physics still have concerning the formalities. That tells how complex the connections between fundamentals and practical methods are in these fields. It’s possible to understand and use statistical physics well even when there are weaknesses in the knowledge of the related mathematical physics.
Agree, Tomas may understand this stuff in his own head, but he can’t explain himself out of a cardboard box. If it takes him that long to explain something that is truly intuitive to anyone who has studied any statistical physics, he won’t get much traction.
Here is ergodicity in a nutshell: if a behavior showed a probability dependence that followed some declining envelope, and you wanted to see the 1-in-10^8 tail of that cumulative probability, you would want to gather at least 10^8 samples. If you think that is the ergodic behavior but it doesn’t happen, then ergodicity has been violated for some reason.
Here is another case that is beyond intuitive for anybody who has used a scientific calculator. There is a random() function key. If the algorithm that underlies this function does not eventually give you a good uniform sampling of all the random numbers between 0 and 1, to the number of significant digits, it is not ergodic.
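A crude version of that coverage check can be sketched in a few lines (an illustrative test, not a rigorous statistical one; the sample count, bin count and tolerance are arbitrary choices):

```python
import random

# In the spirit of the comment: an "ergodic" random() should eventually
# fill the interval [0, 1) uniformly.  Bin many samples into equal-width
# buckets and compare each bucket's count with the expected count.
random.seed(42)                      # fixed seed so the check is repeatable
n_samples, n_bins = 100_000, 10
counts = [0] * n_bins
for _ in range(n_samples):
    counts[int(random.random() * n_bins)] += 1

expected = n_samples / n_bins        # 10,000 per bucket
max_deviation = max(abs(c - expected) for c in counts)
print(counts, max_deviation)         # all buckets close to 10,000
```

A generator that left some bucket systematically starved would fail this check, which is the "stuck in a subset of the space" failure described in the post.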
Question to Tomas.
Now, if you don’t like the way I phrased this and you have problems with the way I stated it, you better give me a different name for this kind of property, because it is needed. A practical definition is needed to explain to software engineers on how to evaluate a random number generator, to an experimental physicist when he is looking at how many experimental measurements to take, or a statistician to figure out how many Monte Carlo runs he should execute. As far as I can tell, there is no other word that fills this role other than ergodic.
The ultimate problem is that the definition of an ergodic process doesn’t help much when we need to determine whether or not a particular process is ergodic. You need examples, and that is just the way it is. Another good one is that a stationary random process is ergodic if any average that we want to find can be calculated from any sample function of the process by a long enough average over time, which is what Pekka said.
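That statistical-process version of the idea can be illustrated with a minimal sketch (the AR(1) process and its parameters are illustrative assumptions, not anything from the thread):

```python
import random

# For a stationary ergodic process, a long enough time average over ONE
# sample function recovers the ensemble average.  Here: an AR(1) process
# x[t+1] = phi*x[t] + e[t] with zero-mean Gaussian noise, whose
# stationary (ensemble) mean is 0.
random.seed(7)
phi, n = 0.8, 200_000
x, total = 0.0, 0.0
for _ in range(n):
    x = phi * x + random.gauss(0.0, 1.0)
    total += x

time_average = total / n
ensemble_average = 0.0               # stationary mean of this AR(1)
print(time_average)                  # close to the ensemble average
```

The "long enough" caveat matters: the stronger the autocorrelation (phi closer to 1), the longer the single sample path must be before its time average settles near the ensemble value.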
The practical consequences of this are whether we will ever see that huge natural-variation temperature “burp” that will completely swamp out the anthropogenic one we are experiencing right now. You can argue about this in statistical terms, but you can also think in terms of energy barriers and the likelihoods of surpassing them via internal spontaneous means. All of these chaotic systems you are describing do not change the internal energy of the system and will only increase entropy. The only possibility left is if the variability kicks in a tipping point and changes the albedo (or something similar) to allow more energy into the system, as Pekka is patiently explaining on other threads.
I have plenty of other examples but before I present them, I want to know if this is anything more than an academic exercise that Tomas is working out.
Agree, Tomas may understand this stuff in his own head, but he can’t explain himself out of a cardboard box.
Webhub when I happen on a post which begins in such a juvenile impolite way, I stop automatically reading.
I wouldn’t know if what you wanted to say has some value, but next time you want to interact with somebody in a productive way, try to behave like an adult and tone down this strange aggressivity.
Perhaps that’s the way you were educated, but as far as I am concerned, I prefer less rude, more reasonable discussions.
Hi Webbie
If, in your busy schedule of slagging other folks off for not being as clever as you like to think you are, you could spare a moment to leap back to the ‘Trends’ thread, please give us all the benefit of your worked example.
You may recall that you had introduced – to the astonishment of me and any other reaction kineticists reading – Arrhenius’s Rate equation for the variation of chemical reaction rates with temperature as a way to predict the concentrations of CO2 dissolved in seawater. And you challenged us to ‘solve it for yourselves to see that I am right’.
Well, the sad news is that none of us have the faintest idea of where to start, since we cannot get hold of any of the Arrhenius equation parameters that have anything to do with CO2 concentrations. We can tell you a lot about how the speed of a chemical reaction will change with temperature, but nothing about what the reaction does. We know how fast it’ll go, but nothing about where it started from or where it’s going.
It may be of course that the blinding light of your modesty has obscured an obvious truth from us, but the more restive spirits among us are starting to mutter the BS word under their breaths. So please, Webbie, give us a worked example. It may well be good enough to win the Nobel if you can actually do it.
Thanks.
Web, you’re actually absolutely incorrect, from what I see you saying. What you’re talking about isn’t ergodicity, or even related to it.
Ergodicity is group theory, not statistics (though it is group theory as applied to probability space), a very different branch of abstract mathematics. It has to do with transformations acting upon a group. Since taking the inverse of a transformation on the group does not change or inverse the specific element of the group in discussion (A in the example), the group has ergodicity, and often times A happens to be the identity element of that group. However, ergodicity applies to probability space, beyond just standard group theory such as Rings and Sets. If a probability space has an identity element, and thus ergodic, that says a lot about the nature of the entire probability space group–as the entire group must have certain properties to be ergodic. These properties are defined by group theory.
This also applies to Markov chains.
You can also think of the ergodic component of a system as being the irreducible part.
So, actually, Tomas did a great job explaining this.
God,
So smartypants, you tell me what to call the property that Pekka and I both described?
If you can’t, I will continue to refer to it by the e-word. If you do come up with one, I will consider adjusting my vocabulary.
Latimer, Look up the partial pressure of CO2 in seawater at various temperatures. Add in Henry’s law. Now go away. Cripes, what an addled brain.
WebHubTelescope: If it takes him that long to explain something that is truly intuitive to anyone that has studied any statistical physics,
A very good history of the emergence of the concepts of “ergodicity” and “ergodic” is presented in the excellent book “Physics and Chance” by Lawrence Sklar, published by Cambridge University Press. The concept has never been “truly intuitive” to anyone. In practice, no process can be known to be ergodic, but making the assumption of ergodicity permits computations that would otherwise be intractable, and so it is frequently useful. Sklar presents the scientific problems and diverse partial solutions that led the scientists to formulate the concept carefully, and it presents a survey of mathematical texts that discuss it formally.
Thomas Milanovic’s post is an heroic attempt to put the matter succinctly, but “the time average equals the ensemble average almost everywhere” is not easy to understand, and impossible to understand quickly. Should Thomas Milanovic or someone attempt a second post, I would recommend including a plethora of examples of ergodic and nonergodic processes.
WHT’s posts confuse the matter hopelessly. I’d recommend that people skip them completely.
Fine by me if you want to skip it.
Much of the engineering and applied physics literature since probably the 1950’s takes the statistical definition.
Engineers will still have to explain to their bosses that they took sufficient samples and that they formulated their test vectors with an adequate reachability set for V&V.
But, according to the geniuses, they can’t use the e-word. No worry, they will make up something.
WebHubTelescope: Much of the engineering and applied physics literature since probably the 1950′s takes the statistical definition.
No problem with that, but your post was nonsense.
It’s funny that you can’t say exactly what is nonsense, which is typical of the hollow assertion.
I don’t think what I said is nonsense, as there is no engineering term other than an ergodic process to describe the two elements required to perform a verification. That’s what I asked for and no one has yet ventured a replacement term. Specifically, the elements for evaluating an ergodic process are:
1. SPACE: A reachability set that encompasses a state space
2. TIME: A sufficient sampling from the state space
We can’t just use the term reachability because it does not cover the aspects of sampling. Coverage is a possible word to use but I don’t know if it is sufficient to convince someone of its comprehensiveness.
Here is another example, taken from the wind model I have elsewhere in this thread. Say that someone wanted to verify that a vehicle could withstand winds of a certain straight-line speed. They specified a rating of a certain wind speed, and they wanted to know what percentage of the time the vehicle could withstand anything under that speed.

Since only nature can provide those kinds of wind speeds, and having no knowledge other than that the vehicle could operate anywhere in the world, we would use the universal PDF of the wind speed that I mentioned. One would look up the speed and determine the cumulative probability of all wind speeds less than the rated one. Since we assume that the PDF is representative of an ergodic process (one that covers the complete state space and analytically establishes that sufficient time has passed for full coverage), we have a probability of success for that design. The result is that the vehicle could be rated to handle 99.99% of the winds expected to occur in nature. That number then goes in the verification report. Or it can be used in reverse to establish a requirement.
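The "universal PDF" from the commenter's wind model is not reproduced in this thread, so as a stand-in the sketch below uses a Rayleigh distribution (a common simple wind-speed model) with an assumed scale parameter; both choices are illustrative, not the commenter's actual model:

```python
import math

# Hypothetical stand-in for the wind-speed PDF discussed above: a
# Rayleigh distribution with CDF(v) = 1 - exp(-v^2 / (2*sigma^2)).
# Given a rated speed, the CDF gives the fraction of time the vehicle
# sees winds below that rating; run in reverse, a target coverage gives
# the required rating.
sigma = 6.0                          # assumed scale parameter, m/s

def coverage(v_rated):
    """Fraction of time the wind speed is below v_rated."""
    return 1.0 - math.exp(-v_rated**2 / (2.0 * sigma**2))

def rating_for(target):
    """Wind-speed rating needed to be covered `target` fraction of the time."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - target))

v = rating_for(0.9999)               # rating for 99.99% coverage
print(v, coverage(v))
```

The same pair of functions serves both directions mentioned in the comment: CDF lookup for the verification report, and the inverse for deriving a requirement from a target coverage.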
If the wind model was included into a more complex verification, such as for vehicle cooling, the wind PDF would provide an importance sampled input to provide convection cooling parameters to the external interface. And so on.
I hope you get the point, if engineering design is your bag.
Sorry, but I have to laugh at doing this because all I am doing is homework for my real job, and a variation of what I just wrote will go into a verification strategy for a design, no doubt to get approval from my colleagues. Fortunate to have the occasional case where the job description meets with a hobby :)
I have to thank you all for hammering home the STATE and TIME aspects to ergodicity as it is something I can use professionally :) :)
WebHubTelescope expressed interest in alternate terminology. When I took a graduate course in statistical computing, the term that was thrown around colloquially was “burn-in”. Of course everything hinges on T – and, more importantly, on “suppose”.
MattStat  February 15, 2012 at 4:06 pm 
“In practice, no process can be known to be ergodic, but making the assumption of ergodicity permits computations that would otherwise be intractable, and so it is frequently useful.”
For those who can’t follow the concepts, this is what’s key in determining whether heroic abstractions are divorced from reality. Collegiality culturally demands that insiders just politely “go with it”. While this is innocent in an abstract context, we could have endless discussions on the ethics of application. Very tricky business.
Thanks,
The concept of burn-in is neat. Over time you gain confidence that your model is covering more and more of the state space, and so you can generate better moments and bounds.
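A minimal burn-in sketch (the two-state Markov chain and all numbers are illustrative choices, not from the thread): iterate the chain, discard the early samples taken before the chain has forgotten its initial condition, and estimate a stationary probability from the rest.

```python
import random

# Two-state Markov chain with transition probabilities p01 (0 -> 1) and
# p10 (1 -> 0).  Its stationary probability of state 1 is
# p01 / (p01 + p10) = 0.25.  We start in state 0, discard a burn-in
# prefix, and estimate that probability from the remaining samples.
random.seed(1)
p01, p10 = 0.1, 0.3
n, burn_in = 100_000, 1_000

state, kept = 0, []
for t in range(n):
    if state == 0:
        state = 1 if random.random() < p01 else 0
    else:
        state = 0 if random.random() < p10 else 1
    if t >= burn_in:
        kept.append(state)

estimate = sum(kept) / len(kept)
print(estimate)                      # close to the stationary value 0.25
```

For this fast-mixing chain the burn-in barely matters; the slower the chain explores its state space, the longer the prefix that has to be thrown away, which is the practical face of the "how long must one observe" caveat in the post.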
WebHubTelescope: If a behavior showed a probability dependence that followed some declining envelope, then if you wanted to see the 10^8 tail cumulative of that probability, you would want to gather at least 10^8 samples.
That is not ergodicity in a nutshell. It claims to be ergodicity, and the claim is nonsense.
WHT, there may be a disparity between what you wrote and what you intended to write. But what you wrote is not a definition of ergodicity.
So change it to a pragmatic tenet of ergodicity, and call it reachability or some other practical term that we can use.
I am not wedded to the purity of application as laid down by you all.
This is getting weird when we start arguing over a useful rule of thumb.
Because your attempt at an example is not a useful rule of thumb for ergodicity. It’s unrelated. Ergodicity is ultimately a variant of group theory dealing with transformations of a “group”, which can be applied to a dynamic system. Even its notation is ultimately group theory notation. See this 1963 PNAS paper about Ergodicity and Group Theory http://www.ncbi.nlm.nih.gov/pmc/articles/PMC221293/pdf/pnas002400176.pdf
Also, here’s a report looking at the origin of “Ergodic Theory” https://www.fa.unituebingen.de/lehre/ws200809/ergodentheorie/scripts/mathieu.pdf . Maybe you’re thinking of monode?
WebHubTelescope, you asked for examples of why your first post was so bad that it should be skipped, and here is one:
The ultimate problem is that the definition of an ergodic process doesn’t help much when we need to determine whether or not a particular process is ergodic.
The definition is one of the things that is essential to determining whether a process is ergodic.
It is possible that you really know what you are talking about, but your presentation in that post was execrable.
This paper from a statistical physics workshop last year goes into the monode more suited to my taste (no German needed):
“How and why does statistical mechanics work”, Navinder Singh
http://arxiv.org/pdf/1103.4003.pdf
The author claims that the ergodic hypothesis is not required for statistical mechanics following arguments by Landau.
The author also recommends the book “Chaos and coarse graining in statistical mechanics” by Castiglione et al (2008), which seems to cover some of the macroscopic issues of eddy diffusivity, etc, that have applicability to the larger topic of climate science. Apparently, on the role of chaos in statistical mechanics, they say that the “more coarse your observables, less you depend on chaos”.
This is getting into foundations of physics, and I was simply looking for some pragmatism with regard to stochastic models. I understand YMMV when it comes to the more deterministic dynamical systems, such as planetary motion, etc, so I will get out of the way.
WHT,
Thanks for an interesting paper. I’ll look at it more carefully, but at first sight it seems to discuss with care very similar issues to those I have brought up here, based on old impressions and views of physics whose origins I’m unable to recollect.
I mentioned in one of my messages a book used in a special course of mathematics included in my undergraduate studies: Khinchin, Mathematical Foundations of Statistical Mechanics. That book may have had some influence on my thinking, but I read it so early that I doubt the strength of its influence. (The lecture course didn’t get far with the book because our mathematics professor got stuck with issues of measure – and not in vain, as those problems must remain problematic for the theory even now.)
I am holding the slim paperback version of the Khinchin book in my hands, marked $1.35. One passage:
Basically due to the large population and fluctuations going as 1/sqrt(N).
My copy is marked $1.50. One interesting marking on the cover is also “Translated from the Russian by G. Gamow”. Copyright year is 1949.
Based on the paper of Singh, my way of thinking follows closely that of Landau. That’s not surprising, as the same has been true in other areas of physics as well. From Landau I have in my bookshelf only the book on Fluid Mechanics (and, from the continuation of the series by Lifshitz, the book on Relativistic Quantum Theory). Thus I cannot go back to check what Landau and Lifshitz write about Statistical Physics, but I have no reason to doubt Singh’s description.
Singh’s paper also discusses the links between different approaches, telling that mathematical proofs of equivalence are in many cases missing. It’s known that they have led to the same formulas for practical calculation in physics. To me it’s not at all obvious which approach should be considered the best justification for the practical physics in case the equivalence is either unknown or perhaps even shown to break at some formal level. The mathematical theories are more complete for the ergodic theory, but some paradoxes remain. The beauty of the mathematical theories is, however, not a proof that the approach is more justified from the point of view of physics.
Singh discusses also Quantum foundations of statistical mechanics. Ideas with some similarities with Landau’s approach are discussed in the chapter Other approaches. Those look most interesting to me, but the presentation is very brief on these issues.
Based on the paper of Singh, a lot is still unresolved on the level of mathematical fundamentals. That has, however, not been a problem for practical applications of Statistical Mechanics. The approach of Landau gives in my view the best basis for estimating the applicability of the theory to various real-world problems. Studying the fundamentals is interesting in itself, but mostly not so important for practical applications. One question related to ergodicity is, however, important: that of the representativity of the present knowledge. Does the present knowledge cover, even imprecisely, the whole range of states of the Earth system that will be important for understanding the future, or have we been bound to some part of the phase space which is not the only important one?
Apologies, this got caught in spam
The Universe was NEVER designed for balance and energy living forever.
Joe I have always found your posts strangely scaring.
Sorry but I cannot offer you a relevant comment – your phase space must be somewhere far beyond mine.
Tomas,
Ergodicity has a clear physical significance in that it makes certain averages meaningful. That may be precise and formally correct, but it may also be a good (or excellent) approximation over a limited (possibly very long) period of time.
That’s not the only example of that kind.
Semantics is always an endless source of disagreement. Some people insist that they have the only legitimate way of using a word, while others disagree.
On the other hand it’s possible to use the concept of ergodicity in a way that’s not mathematically strictly correct but still reflects exactly those same physical principles for which the mathematically precise theory also applies.
This is one of the confusions.
While it is possible to use ergodicity as an approximation of a physical system (chaotic systems are a typical example), it is absolutely impossible to use it in a sense which would not be mathematically strictly correct.
The reason is that there are NO physical “principles” founding ergodicity.
By that I mean some qualitative or experimental principles fully divorced from their mathematical expression.
For instance if a system’s dynamics show that there EXIST invariant subsets in the phase space then this dynamics is NOT ergodic regardless how hard one would wish that they were.
An example of this case is the N-body problem. Not ergodic.
My point is that using a word for another only leads to confusion and lack of understanding.
Saying that a system is ergodic has an extremely strict and precise meaning. One can and does also say that a system is quasi-ergodic (to express that there is a slight difference), which also has a very precise meaning.
So if one wants, for example, to say that there is some randomness, one should say just that and not use the concept of ergodicity, which is much too powerful to be abused lightly.
And yes you are right, I could have linked papers about problems in ergodic theory because they are legion (there are different “grades” of ergodicity), but they are invariably very technical, so it didn’t quite fit the style of the post, which was meant to be pedagogical, relatively “simple”, and to eliminate confusions.
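The invariant-subset criterion from the post is easy to see numerically with the simplest measure-preserving map, the circle rotation x → x + α (mod 1). For irrational α the rotation is ergodic and time averages converge to the space average from any starting point; for rational α there are invariant sets of intermediate measure (unions of arcs), the map is not ergodic, and the time average depends on where you start. A minimal sketch; the test function and iteration count are arbitrary choices:

```python
import math

def time_average(alpha, f, x0, n=200_000):
    """Average f along the orbit of the rotation x -> x + alpha (mod 1)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

# f has space average 0 over [0, 1).
f = lambda x: math.cos(8 * math.pi * x)

# Irrational alpha: ergodic, so the time average tends to the space average 0.
print(time_average(math.sqrt(2) - 1, f, 0.1))
# Rational alpha = 1/4: the orbit {0.1, 0.35, 0.6, 0.85} never leaves a set
# on which f is constant, so the time average is stuck at cos(0.8*pi) ~ -0.809.
print(time_average(0.25, f, 0.1))
```

The rational case is exactly the situation described above: the dynamics gets “stuck”, and the equality of time and space averages fails.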
I misplaced my comment of 9:29. It should be here.
Tomas Milanovic  February 15, 2012 at 9:20 am 
“For instance if a system’s dynamics show that there EXIST invariant subsets in the phase space then this dynamics is NOT ergodic regardless how hard one would wish that they were.”
Lay readers, note the simple requirement: mere existence.
Tomas, I am curious if atoms may be a good way to explain ergodicity?
Well not really. Atoms are best explained by QM. And QM is certainly not the simplest thing to explain ergodicity which is itself not very simple either.
Ergodicity is best explained by Hamiltonian mechanics, and that’s also how it is generally taught.
Easiest ergodic system – a billiard game.
True, I was thinking of the point where the complexity moved beyond QM, if that is the case. The problem I am having is that ergodic and nonergodic processes coexist in many systems. Like during a positive AMO, most of the impact of the shorter term pseudocycles can be somewhat predicted. During a negative AMO, the impact of the shorter term pseudocycles would be different, but somewhat predictable with enough data. The AMO itself though, doesn’t appear to be predictable.
Semantics is always an endless source of disagreement. Some people insist that they have the only legitimate way of using a word, while others disagree.
Pekka, I fully agree with this statement. But I don’t know what it has to do with the problem.
I was not talking about “semantics” anywhere. I was talking mathematics and physics.
And I, and just about any scientist, would object very strongly if somebody called something a vector space that is not associative, under the pretext that it is just “semantics”.
Same goes for ergodicity.
Your problem is that you seem to reduce ergodicity to some equality between some averages.
But this is just one partial property and even not the most important one.
It is rather popular because ever since Boltzmann (you see how old that is!) people have wanted to see in it the explanation of the efficiency of statistical thermodynamics.
One thing is sure – ergodicity explains this property, and there is no other concept or theory that can explain it so far.
So if you want to use the ergodic theorem, you would certainly be well advised to verify that you are dealing with ergodic dynamics (VERY strict meaning of the word ergodic).
If you omit this stage then you are only randomly groping in the darkness (which seems to me rather common in climate science) and the odds of being wrong are higher than of being right.
I can’t imagine why somebody would want to do this.
Tomas,
I use the word ergodicity in blog discussions when it’s the word that seems to best describe what I have in mind.
If I were writing a scientific paper on statistical physics – or giving an advanced lecture course on it – I would use the word differently.
Although it is a technical word, it has reached such a level of ubiquity that sticking to a restricted use is no longer warranted in a nonspecialist context.
So, is your loose use of the word responsible for Web’s intoxication?
==============
Shouldn’t think so Kim. It is just crazy loon syndrome.
I use the word ergodicity in blog discussions when it’s the word that seems to best describe what I have in mind.
Then don’t.
If someone can’t read your mind then you will only confuse him and probably yourself too.
And “ergodicity” certainly didn’t reach any level of ubiquity (any more than spatiotemporal chaos did) that would allow a majority of scientists to know what it is and feel comfortable with it.
I couldn’t say that I am willingly less rigorous or accurate on a blog than in a technical paper.
The difference is that I go into less detail on a blog, but if I wanted to stress commutativity in a group, even on a blog I would go so far as to call it abelian :)
Tomas
”
We also know with the ergodic theorem that if we follow the states of the system for an infinite time (remember: infinite iterations of T) and take the time average of some parameter of the system, then this average will be equal to the (probability weighted) average of this same parameter over the whole phase space. This is very interesting because we are mostly interested in the latter while we can experimentally measure only the former.
”
Well – Unless you and your experimental apparatus happen to be immortal, you cannot even measure the former.
As you say:
”
Caveat: when I say infinite time, I mean it seriously. You have to observe really for a long time and it would take a whole new post to discuss how long one has to observe to apply the ergodic theorem in practice.
”
That new post would be appreciated – Because it would be directly relevant to the actual practice of science.
Our knowledge of initial states is never perfect – but how precise and how numerous the data need to be for good prediction is a central question in any scientific discipline. This question cannot, unless I am mistaken, be answered by ‘a priori’ analysis, only by the frequency of ‘correct’ outcomes.
…e.g., If to go from NY to CA you must leave home by 8 a.m. and take the subway to the airport, if you leave on time you know you will end up in CA no matter how long you stand on the platform waiting for the subway to arrive.
Well – Unless you and your experimental apparatus happen to be immortal, you cannot even measure the former.
This is the strict mathematical formulation. Of course an infinite time limit has no physical significance.
In practice, like with any limit question, it is all about the rate of convergence.
Imagine that you measure some dynamical evolution and notice some pseudo oscillations on a scale of 100 000 years.
Imagine then that you have a good reason to think the system ergodic.
Quite obviously applying the ergodic theorem on a time scale of 10 000 years would be dumb beyond belief.
On the other hand if you observe these oscillations on a time scale of 1 hour, then 10 000 years of data might be an overkill.
But in reality the problem is never so clear cut – especially with chaotic systems you have variability on many time scales (the power spectrum of a chaotic system is quite flat).
Then you either begin to hand wave (easy but not very convincing) or begin to go seriously into the depths of the properties of the dynamical laws (much harder but ultimately more rewarding).
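The rate-of-convergence point can be made concrete with the logistic map x → 4x(1 − x), which is known to be ergodic with invariant density 1/(π√(x(1 − x))); the space average of x under that density is exactly 1/2. A sketch (the seed and iteration counts are arbitrary; the tiny clamp only keeps floating-point rounding from landing the orbit exactly on an absorbing endpoint):

```python
def orbit_average(x0, n):
    """Time average of x along the logistic-map orbit x -> 4x(1 - x)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += x
        x = 4.0 * x * (1.0 - x)
        x = min(max(x, 1e-12), 1.0 - 1e-12)  # keep rounding off the endpoints 0 and 1
    return total / n

# Space average of x under the invariant density is 0.5; watch the error shrink.
for n in (100, 10_000, 1_000_000):
    print(n, abs(orbit_average(0.123, n) - 0.5))
```

“Long enough” here means long compared to the statistical convergence of the time average; for a system with pseudo-oscillations on a 100 000-year scale, the same arithmetic says nothing until the record spans many such periods.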
”
But in reality the problem is never so clear cut – especially with chaotic systems you have variability on many time scales (the power spectrum of a chaotic system is quite flat).
Then you either begin to hand wave (easy but not very convincing) or begin to go seriously into the depths of the properties of the dynamical laws (much harder but ultimately more rewarding).
”
But looking at “the properties of the dynamical laws” is not the same as assessing whether those laws correspond to anything outside of logic. You are still talking about a strict mathematical formulation. (Math waving, so to speak…)
E.g. We understand the properties of Newton’s laws in great depth – but the fact of the matter is that they do not correspond to the way things happen. My point is this: a priori analysis of the “properties” of Newton’s laws would never reveal this rather important scientific fact.
E.g. We understand the properties of Newton’s laws in great depth – but the fact of the matter is that they do not correspond to the way things happen. My point is this: a priori analysis of the “properties” of Newton’s laws would never reveal this rather important scientific fact
This is rather cryptic.
If you mean that you couldn’t find special or general relativity by studying only Newton’s laws, then it is (partly) correct.
But if you mean that material bodies don’t obey Newton’s laws when velocities are low and gravity is weak, then it is wrong.
In the matter that concerns us here, we need no relativistic correction, and the Navier-Stokes equations (which after all describe only energy and momentum conservation) are the right laws to apply for fluid dynamics.
But I am ready to trash Navier-Stokes the day it begins to be believable that energy and momentum are not conserved on a large scale :)
That day would require proving that millions of experimental measurements all somehow went wrong (not to mention all the theorists of the past 200 years).
I bet that you don’t know Emmy Noether – but if you look her up, you will find what is (for me) by far the most convincing proof that energy and momentum conservation will never fail.
”
I bet that you don’t know Emmy Noether – but if you look her up, you will find what is (for me) by far the most convincing proof that energy and momentum conservation will never fail.
”
Arrogant much, Tomas? You lose that bet. I am well aware of the mathematical connection between symmetry of action and conservation laws. Unfortunately, Noether’s theorem does not apply to dissipative systems.
Anyhow – no one will ever “prove” energy and momentum conservation.
Matters of fact cannot be proven.
You are clearly more of a logician than a physicist.
Thomas, is the following factual information an example of ergodicity?
1. Far beyond the opaque photosphere, in the solar core, neutrons are now becoming atoms of H and then fusing into He, etc.
Pulsar =(n-emission)=> neutron =(n-decay)=> H =(fusion)=> He, etc
2. The system exploded five billion years (5 Gyr) ago, ejecting the atoms that now orbit the Sun.
Oliver I am utterly lost with that.
Yes, neutrons are known to transform into protons, which is not yet quite H, but you only need to get the temperature down by several orders of magnitude in the solar core and catch an electron to have H.
Yes protons and neutrons (not H) fuse to He.
It has been suspected that this is how the Universe has been working for about 15 billion years.
No, pulsars are not known to emit neutrons; they are suspected to be made of neutrons. And no, neutrons can’t transform back into protons + electrons inside a pulsar because the gravity is too high.
I am not aware of an explosion 5 billion years ago. But that’s approximately the time the Earth formed from the protodisk of dust orbiting the Sun that contracted in the middle.
And beyond all that I don’t see where ergodicity (or nonlinear dynamics for that matter) enters the scene.
Thanks, Thomas.
I am a simpleminded experimentalist with a very, very limited vocabulary, a PhD in nuclear chemistry, and a suspicion that ergodicity is related to other words of attribution like:
a.) Ergo, thus, therefore (when the link between cause and effect is obvious), and
b.) Ergodicity, magically, Divinely (when the link between cause and effect is not obvious).
In other words, I suspect that ergodicity and spatiotemporal chaos theory are a more mathematically appealing was of saying God.
Please help me understand the difference, if there is one.
er, God is tic. Where’s Joe to ponder the awkward movements or Joshua to explain the motivated thinking behind them?
=========
kim: er, God is tic. Where’s Joe to ponder the awkward movements or Joshua to explain the motivated thinking behind them?
Be careful what you wish for. Or, …, perhaps you are merely seeking material for your short rejoinders.
Ged
Ergodicity is group theory, not statistics (though it is group theory as applied to probability space), a very different branch of abstract mathematics. It has to do with transformations acting upon a group.
Yes!
It really makes a warm and fuzzy feeling to meet someone who knows what he’s talking about :)
Actually, when one studies measurable sets, there can always be a statistical application, because a probability is just a special case of a measure.
It is clearly the case in ergodic theory because µ is generally taken for a probability in physical applications.
I am a big fan of the CUSUM chart approach for examining changes in steady states;
http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc323.htm
This is by far the best method for looking at steady state changes with ‘noisy’ data.
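For readers who don’t follow the link: a tabular CUSUM accumulates deviations from a target, clipped at zero, and signals when either cumulative sum exceeds a decision threshold. A minimal sketch along those lines; the target, slack value k, and threshold h below are illustrative choices, not the handbook’s worked example:

```python
import random

def cusum(data, target, k, h):
    """Tabular CUSUM: return the indices where a sustained shift is signalled."""
    s_hi = s_lo = 0.0
    alarms = []
    for i, x in enumerate(data):
        s_hi = max(0.0, s_hi + (x - target - k))   # accumulates upward drift
        s_lo = max(0.0, s_lo + (target - x - k))   # accumulates downward drift
        if s_hi > h or s_lo > h:
            alarms.append(i)
            s_hi = s_lo = 0.0                      # restart after each alarm
    return alarms

# A noisy series whose mean shifts from 0.0 to 1.0 at index 50.
random.seed(2)
data = [random.gauss(0.0, 0.5) for _ in range(50)] + \
       [random.gauss(1.0, 0.5) for _ in range(50)]
alarms = cusum(data, target=0.0, k=0.25, h=4.0)
print(alarms)
```

On a series like this, the pre-shift sums hover near zero and the upper sum climbs quickly once the mean shifts, which is why CUSUM charts catch small sustained shifts in ‘noisy’ data faster than simple control limits.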
That is excellent Tomas. Thanks for the explanation and follow up.
In other words, I suspect that ergodicity and spatiotemporal chaos theory are a more mathematically appealing was of saying God.
Oliver I guess you wanted to say “way” not “was”.
As I don’t believe in God (at least not in any orthodox sense) I am afraid I won’t help you in this matter.
As for the origin of the word ergodicity, it was coined by Boltzmann when he was working on statistical mechanics and trying to understand why it worked.
It comes from the Greek “ergon” (work, energy) and “hodos” (way, path).
Even if nobody thought to ask Boltzmann what led him to this word, it is surely the right etymology because he was working on problems of energy flows.
Thanks Thomas for catching the typo. The intended question:
“Are ergodicity and spatiotemporal chaos theory a more mathematically appealing way of accepting God, a Higher Power, or the great Reality that surrounds and sustains us?”
Specifically, I asked if ergodicity and spatiotemporal chaos theory explain the following sequence of events:
Supernova debris that orbited a pulsar five billion years (5 Gyr) ago evolved by ergodicity (naturally but inexplicably) into Thomas, Joe, Judith, RiHo08 and Oliver, each consisting of about
a.) 10,000,000,000,000 living cells,
b.) Produced by dividing a one-cell zygote,
c.) Consisting of ~100,000,000,000,000 atoms,
d.) That were hot, poorly mixed supernova debris 5 Gyr ago, and
e.) Are now assembled together into an assortment of different types of cells (toe nails, eye balls, ear drums, ankles, elbows, tits and tails) that together contain ~1,000,000,000,000,000,000,000,000,000 atoms functioning as living, thinking creatures trying to decipher their own origin and knowing that not a single cell in their body has remained there over their own imaginary life (~75 years for Oliver).
Probably few of us believe in the particular concept of “God” that we were taught as children. That is not the issue.
Please just tell us if ergodicity and spatiotemporal chaos theory explain the above sequence of events better than ” Higher Power or the great Reality that surrounds and sustains us?”
If so, then tell us how ergodicity and spatiotemporal chaos theory can account for the fact that these events occurred over a finite time span of five billion years (5 Gyr).
The time span is based on the decay products of radioactive species in meteorites and in the Earth: Al-26, I-129, Pu-244, U-235 and U-238:
http://www.omatumr.com/Data/1994Data.htm
http://www.omatumr.com/Data/1996Data.htm
Thanks again, Thomas, for your patience.
I would like to explore further an area of contention between Tomas and Pekka regarding the effect of stochastic variations on the properties of a chaotic system. Tomas states:
“Of course there are no mysterious “stochastic” perturbations which make the fundamental features of behaviour of chaotic systems disappear – Navier Stokes is deterministic and chaotic all the way down to the quantum scales.”
Pekka has stated that stochastic perturbations can in practice interfere with the tendency of a chaotic system to continue to explore all regions of its phase space, so that for example, the flapping of butterfly wings would almost never in practice result in a hurricane somewhere else on the planet even if there were some finite probability of that occurrence based on the chaotic behavior. Are these views irreconcilable?
The Earth’s climate system is not an isolated system, nor is it in equilibrium. At any moment, the amount and distribution of incoming solar energy will differ from those quantities that existed minutes earlier, and circulation patterns, ice melting, evaporation, and temperature change will also reflect the nonequilibrium character of the climate. If Tomas is stating that an isolated chaotic system is fully deterministic and not subject to stochastic perturbations from within, is that incompatible with the notion that external perturbations that inevitably follow from the nature of the climate system can constitute the stochastic perturbations that destroy the fully chaotic behavior of the system? The answer would seem to have practical implications.
Fred,
I describe one more time my view. There will be replication of my earlier comments, but also something more.
The basis of chaos theory is in the consideration of equations of motion that form what is called an “ill-posed initial value problem” (I learned that term from the 1978 book Principles of Advanced Mathematical Physics by Robert D. Richtmyer soon after the book had been published). The idea is that the nonlinear dynamics is such that very small deviations in initial conditions grow arbitrarily large, the same point Tomas made in his posting.
Because small deviations grow large, all small stochastic disturbances also grow large, and even a little stochasticity destroys the influence of the original difference and leads to another unpredictable outcome.
When we have more stochasticity, the equations of motion must take that into account, and the new equations have more dissipation. When the dissipation is strong enough, the problem becomes a well-posed initial value problem, whose solution is no longer highly sensitive to initial conditions but much more controlled by the new equations of motion and the nature of the stochasticity.
The same system can have clearly chaotic behavior at some spatial and time scales, but behave nonchaotically at other spatial and temporal scales. All kinds of turbulence, from the smallest scales to hurricanes and other essentially global-scale phenomena, have much that is chaotic in them, but most of them are too short-lived to make the climate chaotic. To make the climate behave chaotically, some of the very large scale, very slow processes must have the nature of an ill-posed initial value problem. That requires that their dynamics is not strongly dissipative. It’s possible that that’s the case for ocean circulation, which may perhaps couple with some other phenomena like the formation of sea ice and glaciers.
I’m most definitely not claiming that these processes are not possible or that they cannot be important. What I have criticized is the certainty that chaoticity is necessarily essential for understanding the Earth system and that approaches that neglect chaoticity must be of little value. I consider this point open, not decided in either direction.
“What I have criticized is the certainty that chaoticity is necessarily essential for understanding the Earth system and that approaches that neglect chaoticity must be of little value.”
Pekka – My impression is the same, except that I would emphasize the importance of timescales, because it seems to me there is little doubt that at certain timescales – particularly relatively short intervals – chaotic behavior is critical for an accurate understanding of the behavior of the climate system. It also strikes me that some of the apparent disagreement is simply a matter of focusing on two different concepts. One is the fully deterministic character of chaotic systems (as Tomas emphasizes) and the other is the nondeterministic component of climate behaviors that include both chaotic and nonchaotic elements. In this sense, stochastic behavior should be thought of as extrinsic to the chaotic components. As I mentioned elsewhere, the recent long term (multidecadal or centennial) global temperature record supports the concept that nonchaotic behavior dominates over the longer time intervals, but this isn’t necessarily true of other eras or of very long (multimillennial) timescales.
Fred this problem has been discussed for a long time – at least a century.
There are 2 aspects:
– the “butterfly effect”
– the “stochastic” perturbation
The first is based mostly on a misunderstanding of what a chaotic system is.
It is theoretically established (and easily observed in simple chaotic systems) that 2 future states very far apart (i.e. with very different dynamic parameters) may originate from 2 initial conditions which are as close to each other as one wishes.
For instance if you make the difference equal to a butterfly wing flap, you will find 2 orbits that lead to macroscopically very different final states after a certain time.
The misunderstanding is that (some) people think that it means that this particular wing flap causes this particular final state after this particular time.
This is so obviously wrong that one doesn’t even need non linear dynamics to prove that.
I thought that everybody was already beyond this basic understanding.
I once posted somewhere an illustration of the “butterfly effect” based on the storm of the century in Europe in 2000.
I had some 20 final states showing what the weather models “predicted” over a 48-hour span.
About one third showed a huge storm, about one third showed normal weather, and one third anything in between.
These “predictions” were produced by very slightly varying the initial conditions and the result was a huge dispersion of final states.
This IS the “butterfly effect” at work – the system finds itself in an extremely sensitive initial condition and the orbits diverge wildly. However one could make 100 or 1000 runs with yet different conditions and obtain yet more final “predictions”. In the end one could have observed a final state (storm) whose origin was one butterfly flap away from another, unrealized state (balmy breeze).
But you won’t be able to pin this difference in initial states on any particular cause – in other words you won’t find the damn butterfly that flapped its wings – perhaps it was a fish that jumped out of the water or a cloud that cooled the sea at a bad moment.
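The ensemble experiment described above can be reproduced with any chaotic map. Here is a sketch with the logistic map x → 4x(1 − x): twenty “runs” whose initial conditions differ by parts in a billion disperse across the whole interval after a few dozen iterations (the starting point, perturbation size, and step count are arbitrary choices):

```python
def iterate(x, n):
    """Apply the logistic map x -> 4x(1 - x) n times."""
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

# 20 'runs' whose starting points all agree to eight decimal places.
ensemble = [0.2 + i * 1e-9 for i in range(20)]
finals = [iterate(x, 60) for x in ensemble]
spread = max(finals) - min(finals)
print(spread)   # after 60 steps the runs no longer agree at all
```

And, just as with the weather ensemble, nothing in the final spread points back to which particular 1e-9 nudge “caused” which outcome.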
The second is easier.
There simply is NO “stochastic perturbation”.
The only place where every scientist should expect to find “stochastic perturbations” is quantum mechanics.
In classical (non quantum) physics from general relativity through electromagnetism to fluid dynamics the laws are always expressed by a set of PDE (or ODE in the simple cases).
And only someone particularly ignorant would say that a solution of a given PDE is random.
So clearly classical physics, unlike QM, is exclusively deterministic and there is nothing intrinsically probabilistic about it.
Now, since Boltzmann and even earlier, we have known that some systems (not all!) with very many degrees of freedom find themselves in states which can be simply described by a single PDF.
Many elastically interacting rigid spheres are an example.
So if you approximate many atoms by rigid spheres, and the approximation is not stupid and the spheres are many, you will find without surprise that you get a similar PDF.
And you invent statistical thermodynamics.
Now, the reason it works is not that there are mysterious “stochastic perturbations”. Since Boltzmann we have known that it works, as I already said several times, because of … the ergodicity of a system of many rigid spheres.
The statistical properties are an emergent feature of deterministic systems provided that they have very many degrees of freedom and are ergodic.
Deterministic chaos is also an example even if more complex than simple rigid spheres. But also here the invariant PDF when it exists has the same origin and cause – ergodicity.
That’s why inventing some ethereal “stochastic perturbations” with no cause and no justification is just admitting that one has not a clue what the hell is going on.
Oh, I admit that this exercise, which is, as the Chief has written, just another exercise in curve fitting, can in some particular cases deliver something that works … until it stops working.
But it is in no way any explanation of what the dynamics is really doing – people have been trying (see for example Kolmogorov) to demonstrate some “stochastic laws” in turbulence.
Well sometimes it works and sometimes not.
But even when it works, it doesn’t explain why.
So far there is no equivalent of the ergodic explanation for turbulence, unless one considers that calling it a “stochastic perturbation” is an explanation ;)
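The invariant PDF mentioned above can be watched emerging from a single deterministic orbit: histogram a long run of the logistic map x → 4x(1 − x) and compare it with its known invariant (arcsine) density 1/(π√(x(1 − x))). A sketch; the bin count, seed, and the tiny rounding clamp are arbitrary choices:

```python
import math

bins = 10
counts = [0] * bins
x = 0.3141
for _ in range(500_000):
    x = 4.0 * x * (1.0 - x)
    x = min(max(x, 1e-12), 1.0 - 1e-12)  # keep rounding off the absorbing endpoints
    counts[min(int(x * bins), bins - 1)] += 1

total = sum(counts)
for b in range(bins):
    lo, hi = b / bins, (b + 1) / bins
    # The arcsine CDF (2/pi)*asin(sqrt(x)) gives the ergodic prediction per bin.
    predicted = (2 / math.pi) * (math.asin(math.sqrt(hi)) - math.asin(math.sqrt(lo)))
    print(f"bin {b}: orbit {counts[b] / total:.3f}  prediction {predicted:.3f}")
```

No randomness enters anywhere: the statistics are purely an emergent feature of the deterministic, ergodic dynamics.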
Sorry for the botched bold characters.
As I saw Pekka’s comment only after posting, I would like to clarify one thing.
Chaotic systems are not “ill-posed”; they are solutions of perfectly ordinary equations.
Now it is true that high sensitivity to initial conditions is sometimes called “ill-posed” or “ill-conditioned”.
Not many people use it, though, because to those who do not know the accurate definitions it implies that there is something wrong. This is not the case: Navier-Stokes is highly sensitive, but there is nothing wrong with it.
It is also not so much that “perturbations expand”. What is true is that orbits diverge exponentially (positive Lyapunov coefficient), and this is just a property of the system of equations, regardless of whether there are perturbations or not.
I understand that in a way a difference in initial conditions can be interpreted as a perturbation, but it is not very useful for chaotic systems, because then one would have to consider that the system is perturbed all the time (i.e. not only at the initial time), and this leads nowhere.
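The exponential divergence of orbits can be quantified without perturbing anything: for a one-dimensional map, the Lyapunov exponent is the orbit average of ln|f′(x)|, and for the logistic map at r = 4 it is known analytically to equal ln 2. A sketch (seed, iteration count, and the rounding clamp are arbitrary choices):

```python
import math

def lyapunov_logistic(x0=0.2718, n=200_000):
    """Estimate the Lyapunov exponent of x -> 4x(1 - x) as the orbit
    average of ln|f'(x)|, with f'(x) = 4 - 8x."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
        x = min(max(x, 1e-12), 1.0 - 1e-12)  # keep rounding off the absorbing endpoints
    return total / n

print(lyapunov_logistic())   # close to ln 2 ~ 0.693, the known analytic value
```

A positive value means nearby orbits separate on average by a factor of e^λ per iteration, which is the precise content of “orbits diverge exponentially”.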
Last but not least I have never argued that “climate” must be chaotic because “weather” is.
This would be a too huge shortcut for such a complex system.
But as I observe that when one increases the time scales, more and more nonlinear coupled processes kick in, it would be extremely surprising if the system were not chaotic (spatiotemporal chaos, not simply temporal).
So this is an opinion, not an established truth, even if I am convinced that it will be established in the next decades (expanding on Tsonis & Co).
“There simply is NO ‘stochastic perturbation’.”
Tomas – this is an area where we may end up in partial agreement and partial disagreement. Neglecting QM uncertainties, I think it’s reasonable to claim that every event in the history of the universe was predetermined at the time of the Big Bang (if that was the starting point). However, I don’t see that as conflicting with our notions of randomness or probability, because probability can be seen as an expression of the state of our knowledge about outcomes. An event may appear capable of ending in multiple possible outcomes (e.g., a coin toss). If we can’t state that a particular one is inevitable, for us the outcome is probabilistic even though an omniscient being knows what will happen.
Certainly, climate dynamics are characterized by some degree of randomness in that sense. Empirically, we know that many phenomena appear random based on our limited knowledge. If those events perturb the behavior of a system, then the range of possible behaviors will differ from what we otherwise expect. If enough of these perturbations occur, then I would suggest that apparent sensitivity to initial conditions may become less and less important and the average effect of the perturbations more important. This is also likely to be true of nonrandom perturbations such as climate forcing, and a reason why I think the empirical data suggest that behaviors that appear chaotic on some short timescales (one or a few years for example) tend to lose that property over longer intervals and instead converge toward a common outcome regardless of what we have perceived to be the initial conditions. The actual length of the relevant intervals is something I don’t think we can derive but must be based to a considerable extent on our observations of the climate system.
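The reading of probability as a state of knowledge can be illustrated with a toy deterministic “coin”: the outcome is a fixed function of the initial condition, yet observers with different measurement precision assign different, equally legitimate probabilities. The map and the measurement precision below are invented purely for the illustration:

```python
import random

def toss(omega):
    """A fully deterministic 'coin': heads iff frac(137 * omega) < 0.5."""
    return (137.0 * omega) % 1.0 < 0.5

# Observer A only knows omega is uniform on [0, 1), so assigns P(heads) ~ 0.5.
random.seed(1)
samples = [random.random() for _ in range(100_000)]
p_a = sum(toss(w) for w in samples) / len(samples)

# Observer B has measured omega to within +/- 0.0005 and conditions on that
# interval: the probability now depends on the measurement itself.
def p_given_measurement(center, err=0.0005, n=2_000):
    window = [center - err + 2 * err * k / n for k in range(n)]
    return sum(toss(w) for w in window) / n

print(p_a)                          # near 0.5
print(p_given_measurement(0.3141))  # noticeably different from 0.5
```

Both numbers are correct given what each observer knows, even though every individual toss is fully determined.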
I know I can be such a pest.
Tomas brings in Navier-Stokes, but there is a diffusion term sitting right in the middle of the equation. Diffusion is a random walk, and a random walk is a stochastic disturbance.
Is it true that most illustrations of turbulent or chaotic solutions to Navier-Stokes occur when the diffusion term is minimized?
“Ill-posed” is closely related to chaotic. Strictly speaking, ill-posed is the limiting case, where the sensitivity to initial conditions is infinite (i.e. larger than any preset limit), not only very large. Chaos can develop without reaching that limit.
Nonlinearity is more essential for chaos than it is for ill-posedness, as in a strictly ill-posed problem the unpredictability starts immediately, while chaotic dynamics takes some time to build up. There are clear mathematical differences, but the outcome is similar.
Philosophically you may say that there’s no stochastic perturbation, but that’s not true in practice. When the system is large and complex in the way the atmosphere and the full Earth system are, the outcome is exactly that of a stochastically perturbed system. There are also genuinely stochastic perturbations from many sources, including those originating from the sun. Formal mathematical theories of finite systems of classical statistical mechanics may exclude stochasticity, but that’s misleading in practice for any very large system and wrong for an open system influenced by external disturbances.
Thomas Milanovic: The statistical properties are an emergent feature of deterministic systems provided that they have very many degrees of freedom and are ergodic.
Deterministic chaos is also an example even if more complex than simple rigid spheres. But also here the invariant PDF when it exists has the same origin and cause – ergodicity.
That’s why inventing some ethereal “stochastic perturbations” with no cause and no justification is just admitting that one has not a clue what the hell is going on.
I have enjoyed reading your interchanges with the others, Pekka, cnp, etc.
About that quoted text. Empirically, it has always happened that experimental results have random variation in them. Your paraphrase of the Doctrine of Necessity (that all results are causally determined, and apparent random variation results only from imperfect knowledge), is clear. However, it can not be known to be true because it can not be tested. In order to test whether DoN is true, you have to do every experiment twice, once with perfect knowledge, and secondly with ordinary imperfect knowledge; then you have to get results showing that the random variability only occurs in the case of imperfect knowledge. Scientists have made great strides in understanding nature by modeling what is empirically observable, and eschewing as much as possible any untestable propositions. In line with those two features of scientific aims, any real-world modeling has to include stochastic terms where necessary to get computational results that conform to the measured outcomes.
As I wrote above, you made a heroic attempt to elucidate a difficult concept. As concepts go, it has a misleading simplicity. It took decades of work by geniuses to formulate, and it is not easy to understand.
Tomas Milanovic: Chaotic systems are not “ill posed”; they are solutions of perfectly ordinary equations.
From the point of view of estimating properties of real systems from measurements made on them, chaotic systems are ill-posed initial value problems. This merely means that a great many initial values produce the same solution (that is, solutions with the same mean squared error) over the observed time span, yet may have different implications for how the system got started.
So far there is no equivalent of the ergodic explanation for turbulence, unless one considers that calling it a “stochastic perturbation” is an explanation.
Actually, “ergodic” is the name of a property that the system may have. It can’t be known for sure whether the system is ergodic. But either way, the fact of ergodicity is not an explanation.
The probabilistic/deterministic dichotomy is a fascinating subject that extends beyond the realm of climate or thermodynamics. For my own understanding, I sometimes think useful insight can be drawn by illustrations based on coin tosses – this is something that is easy to visualize and relatively easy to intuit. For that purpose here, I’ll omit considerations of QM uncertainty. Doing this will probably not introduce large errors for most applications, although I’m not completely sure of that presumption. What follows is described on a very simple level, and may be of no interest to readers with a sophisticated understanding, but although it’s simple, I think it’s accurate.
Consider the toss of a very ordinary standard coin. As it’s in the air, you are asked to choose an outcome. What is the probability that it will be heads? You are likely to claim that it is 50% – i.e., 0.5.
Imagine, however, that beside you is a supercomputer that is analyzing relevant variables – the coin’s center of gravity, the height and direction of the toss, the initial angle, the rate of rotation, wind speed, the consistency and deviation from horizontal of the ground, and hundreds of other factors that affect the outcome. The computer disagrees with you, and states the probability of heads to be 0.93.
The computer, however, is not omniscient, and will inevitably neglect some relevant variable. However, an omniscient being who has been following the fate of the Universe since its beginning and is aware of all relevant information realizes that the outcome has already been determined, and cites a probability value of 1.0 – i.e., a certainty. (The being might have cited a value of 0, but we’ll assume the supercomputer is on the right track in citing a value close to 1.0).
Which probability value is the correct one?
Well, the answer is that all three of them are correct. In an apparently deterministic universe, probability is not an inherent property of an event but rather of our level of knowledge about that event, and will therefore vary depending on that level. In citing a value of 0.5, we are essentially acknowledging that our level of knowledge is limited to the fraction of heads that a perfectly “fair” coin will approach as the number of tosses goes to infinity, and is not based on further information about this particular coin. The computer knows much more than that and can compute the fraction of heads exhibited by the subset of coin tosses with the specific characteristics it has observed, and the omniscient being, knowing everything, is aware of the predetermined outcome.
Let’s extend the analogy to a series of coin tosses – say 100 tosses. The outcome will be a specific pattern of heads and tails that the omniscient being can specify. On the other hand, we can’t predict that pattern but can be reasonably confident that the number of heads will probably not differ radically from the number of tails. If the number of tosses is extended further – say, to 1000 – we can narrow the range even further, to 0.5 plus or minus a small quantity.
Notice, however, that we will have no idea at all what the exact pattern of heads or tails will be, even though it is predetermined. Here, however, is where we begin to see some analogy with thermodynamics and statistical mechanics. If we have no need to know the exact pattern, but only the “average” behavior of 1000 tosses – i.e., the percentage of heads or tails – we can estimate that well, because a fraction in the range of 0.5 can arise in a very large number of different ways, while extreme values can come about in far fewer ways. There is only a single way, for example, for the coin to exhibit 1000 consecutive heads (or tails).
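The counting argument above can be checked directly. A minimal sketch (standard library only; the variable names are mine, not from the thread):

```python
from math import comb

# Count the microstates (exact head/tail patterns) behind two macrostates
# of 1000 tosses: exactly 500 heads versus all 1000 heads.
n = 1000
ways_balanced = comb(n, 500)    # patterns whose heads fraction is exactly 0.5
ways_all_heads = comb(n, n)     # exactly one such pattern

print(f"500 heads: {float(ways_balanced):.2e} patterns")   # ~2.7e+299
print(f"1000 heads: {ways_all_heads} pattern")
```

The balanced macrostate wins by roughly 300 orders of magnitude purely by counting, which is the whole point of the paragraph above.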
For many thermodynamic considerations, it is the average behavior of a system that matters to us (its “macrostate”), and not the individual “microstates” that can all fall within an average range (e.g., close to 0.5 for a fair coin). In the case of a coin toss, each individual state (exact pattern of heads and tails) is as likely as any other state. In many thermodynamic scenarios, that equivalence doesn’t hold, but the general principle remains the same. The individual microstates are arrived at deterministically but the macrostate behavior is probabilistic.
Some of this was addressed in a recent thread that included a discussion of entropy. Entropy can be thought of in many different ways, but a probabilistic interpretation describing the tendency of a system to gravitate toward macrostates achievable via a very large number of microstates as opposed to a few is one useful perspective among others.
The problem here, I think, is the failure to recognize that for climate, diffusion is very small compared to advection and convection. Pure diffusion of course gives well-posed problems. Even for aerodynamics, diffusion plays a role that is critical but does not significantly damp the dynamics. I do think a critical perusal of the recent literature will show that the deterministic Navier-Stokes equations do a very good job of describing turbulence, vortex dynamics, and convection even though all are also “chaotic.” It was a subject of controversy for about a decade before it became clear that statistical mechanics was largely irrelevant to fluid dynamics.
I see that as totally off track for the transient climate response. If, say, in 30 years’ time we see the same warming behavior continued and all the natural variations have been classified as +/- fluctuations, then the transient will likely be modeled completely by a two-part diffusional response.
#1 The diffusional response to the carbon emissions as atmospheric CO2 is slowly sequestered
#2 The diffusional response of heat introduced to the system as the ocean slowly takes up the excess heat.
That is the basic difference between the way the skeptics think about the climate system and the view of the consensus climate scientists. The slow diffusional process will hold sway because all of the advection and convection and eddy diffusion will be modeled as a random walk, and everything will eventually wash out or cancel apart from the transient move to a steady-state target. And the wild card in this is the uncertainty of a tipping point.
It makes sense to keep in mind what Andrew Lacis has stated on this blog:
So the skeptics seem to worry about the natural variability, while the consensus climate scientists essentially use group theory to isolate it from the bigger picture (in the sly physicist’s sense of group theory; physicists will know what I am talking about, wink, wink).
Webby, everything you say is hand waving. The question requires rigorous analysis. Tomas has tried to educate us. I repeat that the doctrine of the attractor is really little better than circular reasoning. “The models if integrated long enough always seem to settle down to essentially the same climate.” That is not a pejorative characterization, but a direct quote. We need some reason to believe it. If I have a model with too much diffusion, the result is the same as the doctrine of the attractor. There has to be some method of distinguishing between the two explanations. That explanation would be convincing even to the most skeptical among us. For me anyway, that requires more than hand waving.
I don’t see anything you have done, master handwaver. You apparently have forgotten that we don’t need to know everything, as entropy does a good job of filling in the rest.
This simplification is obvious to me. You put large convection and advection factors into your GCM equations and those swamp the long-term diffusion effects that really guide the arc of the climate.
This is like placing a tracer in a watershed. The tracer can follow all sorts of tortuous paths but when it comes out the other end, all that matters is the dispersive characteristics. This is a fascinating topic that doesn’t get much visibility in comparison to the trendiness of chaos.
Fred Moolten: The individual microstates are arrived at deterministically but the macrostate behavior is probabilistic.
You either missed entirely or denied my epistemological/scientific point: it can’t ever be known that the microstates are arrived at deterministically — because every attempt to study them will be accompanied by random variation. All we can know is that the models for the macrostates are probabilistic models whether the microstates are deterministic or not.
MattStat – Perhaps I missed the comment you refer to. Your point is well taken. However, I didn’t assume one can prove the microstates to be deterministic, but rather I believe that such an attribute is a feature of one view of the nature of our universe. I wouldn’t insist it’s correct (although it has some appeal), but it was peripheral to my main point, which was about the probabilistic nature of large ensembles that have (or might have) deterministic behavior for individual elements.
Fred Moolten: However, I didn’t assume one can prove the microstates to be deterministic, but rather I believe that such an attribute is a feature of one view of the nature of our universe.
Fair enough. I was asserting that such a view of nature is untestable, though to be fair it always has great heuristic value. That is to say, searches for causes nearly always are successful, though rarely can one learn all causes.
Neglecting QM uncertainties, I think it’s reasonable to claim that every event in the history of the universe was predetermined at the time of the Big Bang
Neglecting the Sun, I think it’s reasonable to claim Jupiter is the heaviest body in the solar system.
Neglecting the law, I think it’s reasonable to claim we can go round killing people.
Neglecting water, I think it’s reasonable to claim Earth is as dry as the Moon.
Fred, you seem to have a pretty low opinion of the influence of quantum mechanics.
Vaughan – Is there a snark virus going around? In another comment of yours in the same vein, you (mis)characterized my point as “In other words, God does not play dice, it just looks that way to mere mortals.”
I suppose that for “God” you could substitute “the Schrödinger wave equation,” and that might make a valid statement, because as I understand it (superficially, I admit), the equation is deterministic and the probabilistic element arises when it’s translated into observations we make in whatever world we’re inhabiting at the moment.
My point, however, was not to claim that QM is unimportant, but that at whatever practical level certain phenomena can behave as though they are deterministic, the results they generate must be seen by us as probabilities in the absence of total knowledge on our part. The extent to which the unknowability is an inherent, inseparable feature of the behavior or merely a lack on our part of knowledge that is theoretically attainable is not central to my argument, because in either case, the essential point is the importance of unknowability. I’ll leave it to the QM experts to describe the relative importance of those two different types of unknowability for phenomena of practical interest to us, including, for example, how much it matters for understanding the behavior of a gas in a container. If the microstates were theoretically knowable but practically unknowable, would that make a difference? This is an interesting topic, and I hope to learn from the discussions.
Fred,
The butterfly was never more than a metaphor – a reference to the shape of the orbits of the Lorenz attractor when plotted – the topology of the phase space. – http://en.wikipedia.org/wiki/Butterfly_effect –
The Earth climate system is never isolated – the energy input is a control variable in the deterministically chaotic climate system. Deterministic to the quantum level, as Tomas says. The term stochastic applies to any system that can be analysed in terms of probability – a statistical convenience.
The break in billiards is deterministic – an outcome of force and vector, much as in the Navier-Stokes equations – stochastic, in that it can be analysed probabilistically – and ergodic, in that in a frictionless universe the system will return, after a long enough time, arbitrarily close to the initial state by the Poincaré recurrence theorem.
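Poincaré recurrence is easy to watch in the simplest measure-preserving system there is: rotation of the circle by an irrational angle. A toy sketch (my construction, not billiards) showing the orbit returning ever closer to its start without ever repeating exactly:

```python
import math

def rotate(x, alpha):
    """One step of the measure-preserving circle map x -> x + alpha (mod 1)."""
    return (x + alpha) % 1.0

alpha = math.sqrt(2) - 1   # irrational angle: the orbit never repeats exactly
x0 = 0.0
x = x0
closest, when = 1.0, 0
for n in range(1, 20000):
    x = rotate(x, alpha)
    d = min(abs(x - x0), 1.0 - abs(x - x0))  # distance measured around the circle
    if d < closest:
        closest, when = d, n

print(f"closest return so far: {closest:.2e}, at step {when}")
```

The map preserves length on the circle, so recurrence is guaranteed; the run finds returns within a tiny fraction of the circle inside a few thousand steps.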
The discussion elsewhere touched on vortices – described to any degree of accuracy by the Navier-Stokes fluid motion equations – where interaction between vortices resulted in mutual destruction, and that, it was claimed, is a random process. But “deterministic” merely means the outcome of cause and effect, and it seems even with battling vortices that we have not done away with cause and effect. Indeed, dispensing with cause and effect down to the quantum level seems to me to be an impossibility.
Robert I Ellison
Chief Hydrologist
Chief, Just a couple of points. You are very insightful.
First, regarding the discussion with Vaughan Pratt on a previous thread, it is true that opposite vortices can cancel over a long time as diffusion takes place. However, in all real forced systems the vortices are continuously generated, so the patterns don’t fade with time. In fact, vortex dynamics are chaotic, with vortex sheets breaking up into smaller and smaller vortex filaments. And the famous Rayleigh-Taylor instability concerns shear layers of concentrated vorticity. So it’s not really very useful to invoke diffusion as if it damps perturbations. With outside forcing, the chaos can produce a lot of variation in forces, for example.
Second, you are right that even though popularly characterized as chaotic, vortex dynamics is in fact deterministic; it’s just an ill-posed problem. Also see my later comment on norms.
An interesting thing I have heard recently but not investigated is the assertion that the multibody gravitational problem is in fact chaotic as the number of bodies increases. Do you know anything about this?
Hi David,
Perhaps you have not heard of the Poincaré three-body problem? This goes back to the origins of the idea of dynamical complexity.
Cheers
http://www.scholarpedia.org/article/Three_body_problem
David Young: An interesting thing I have heard recently but not investigated is the assertion that the multibody gravitational problem is in fact chaotic as the number of bodies increases. Do you know anything about this?
Look up the “digital orrery” from the late 80s or early 90s. A computation covering many centuries of virtual time showed that the planetary movements are never periodic, and that the trajectories look chaotic.
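What the orrery computation measured is, in effect, a positive Lyapunov exponent for Pluto: nearby initial conditions separate exponentially. The same signature shows up in any chaotic map; here is a toy sketch using the logistic map (an analogy only, nothing orbital about it):

```python
# Two trajectories of the chaotic logistic map starting 1e-12 apart.
# A positive Lyapunov exponent (ln 2 for this map) roughly doubles the
# gap each step -- the same signature measured for Pluto's orbit.
f = lambda x: 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12
max_gap = 0.0
for step in range(60):
    x, y = f(x), f(y)
    max_gap = max(max_gap, abs(x - y))

print(f"largest separation within 60 steps: {max_gap:.3f}")
```

Starting 10^-12 apart, the two trajectories reach order-one separation within a few dozen iterations, after which predicting one from the other is hopeless.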
MattStat is correct. The paper in question can be seen at
http://groups.csail.mit.edu/mac/users/wisdom/plutochaos.pdf
(The office of one of the coauthors of that paper, Gerry Sussman, was two doors down the corridor from mine back when I taught at MIT.)
But I recommend reading it backwards, since the key paragraph is the last one in the article, which reads
In our experiment Pluto is a zero-mass test particle. The real Pluto
has a small mass. We expect that the inclusion of the actual mass of
Pluto will not change the chaotic character of the motion. If so,
Pluto’s irregular motion will chaotically pump the motion of the
other members of the solar system and the chaotic behavior of Pluto
would imply chaotic behavior of the rest of the solar system.
The difficulty I have with that statement is that part of their proof that Pluto’s motion is chaotic assumes that certain extremely low probability events didn’t happen. From a practical standpoint I don’t mind that kind of assumption. But the probability that the asteroid belt is quasiperiodic rather than chaotic is fantastically much lower than that merely on the basis of the number of asteroids. So why not infer that the planets move chaotically on that basis?
A reasonable answer might be that the influence of the asteroids is so minute that the chaotic motion of Jupiter might not manifest itself within the expected lifetime of the Sun (and hence the solar system). In fact I can’t think of any other answer.
But this then raises the question, why doesn’t the same objection apply to Pluto?
Good question. IMHO their concluding sentence is too glib. I don’t believe they’ve proved that, within the expected lifetime of the solar system, Jupiter will manifest any detectable sign of chaotic behavior.
Fred, I don’t know the answer to your question but would hazard a guess. In principle, but perhaps not in practice, the boundary conditions for the climate system should be measurable to sufficient precision to make the system deterministic. In practice it is impossible to do so, and so we must be content with sensitivity analysis with regard to the boundary conditions. A lot of problems are not so sensitive to boundary conditions as to render meaningful simulation impossible. However, great care is required and good methods essential. One thing is certain: excessive dissipation will kill the meaningful dynamics and render simulations of little value. I would argue that at present this question is academic, since other errors are probably much larger than the errors in boundary conditions. This is borne out by fluid dynamics, where boundary conditions are not nearly as large an uncertainty as subgrid models of turbulence.
Check out a perfect example of an ergodic process, that of the stochastic distribution of wind speeds. The phase space of the system is all possible wind energies constrained only by the fact that a mean wind energy exists.
http://theoilconundrum.blogspot.com/2012/02/wind-speeds-of-world.html
The question is, how long do we keep taking measurements until we are comfortable with obtaining a sufficient sample of the most energetic winds?
The distribution I derived is based solely on maximum entropy principles and it agrees very well with results taken over the span of 2 years. This only found speeds up to around 60 mph, but experience tells us that eventually speeds much higher than that are possible. So what spatial and temporal range would we sample over to establish an ergodic set of data that would verify this distribution?
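For concreteness, one simple maximum-entropy reading (a sketch under my own assumptions, not necessarily the exact derivation in the linked post): constraining only the mean energy gives an exponential energy PDF, and since kinetic energy scales as v^2, the implied speed distribution is Rayleigh. The rarity of the high tail then falls straight out of exp(-E/mean):

```python
import math
import random

random.seed(1)
mean_energy = 1.0  # the single constraint; units are arbitrary here

# Maximum entropy with a fixed mean -> exponential energy distribution.
# With E ~ v^2, the corresponding speed distribution is Rayleigh.
energies = [random.expovariate(1.0 / mean_energy) for _ in range(200_000)]
speeds = [math.sqrt(e) for e in energies]

sample_mean = sum(energies) / len(energies)
tail = sum(e > 5.0 * mean_energy for e in energies) / len(energies)
print(f"sample mean energy: {sample_mean:.3f} (target {mean_energy})")
print(f"fraction above 5x the mean: {tail:.4f} (theory exp(-5) = {math.exp(-5):.4f})")
```

The sampling question in the comment is then about how long one must wait before that thin exponential tail is actually represented in the record.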
That is analogous to the kinds of questions that we need to ask. So until or unless you have sufficient statistics to verify, you go with the model and apply the levels of uncertainty necessary to extrapolate for that unknown and potentially remote probability.
If you don’t care for the wind analogy, consider a huge earthquake or a supergiant oil reservoir as yet undiscovered. The only factor that really matters in seeing a magnitude-10 earthquake or a trillion-barrel reservoir is time.
WebHubTelescope
“The only factor that really matters in seeing . . .a trillion barrel reservoir is time.”
See the list of the 18 largest oil fields:
(field, location, year discovered, estimated billions of barrels)
Ghawar, Saudi Arabia, 1948, 66-100
Burgan, Greater Kuwait, 1938, 32-60
Safaniya, Saudi Arabia, 1951, 21-36
Bolivar Coastal, Venezuela, 1917, 14-36
Berri, Saudi Arabia, 1964, 10-25
Rumaila N&S, Iraq, 1953, 22
Zakum, Abu Dhabi, 1964, 17-21
Cantarell Complex, Mexico, 1976, 11-20
Manifa, Saudi Arabia, 1957, 17
Kirkuk, Iraq, 1927, 16
Gachsaran, Iran, 1928, 12-15
Abqaiq, Saudi Arabia, 1941, 10-15
Ahwaz, Iran, 1958, 13-15
Marun, Iran, 1963, 12-14
Samotlor, Russia, 1961, 6-14
Agha Jari, Iran, 1937, 6-14
Zuluf, Saudi Arabia, 1965, 12-14
Prudhoe Bay, Alaska, 1969, 13
See Sam Foucher on oil field distributions.
Global discoveries peaked ~ 1965.
Suggesting we jump from 100 GB to 1000 GB when nothing larger than 100 GB has been discovered for 50 years is asking too much.
I think you are missing the finite constraint of earth’s surface or earth’s volume containing geological reservoirs.
That was my point in bringing that up. The next magnitude higher earthquake may not exist either, as sufficient time has not passed for it to develop the necessary energy. Same thing with oil reservoirs: they could have developed to that size if enough time had passed.
WebHubTelescope: Check out a perfect example of an ergodic process, that of the stochastic distribution of wind speeds.
Where have you shown that it is ergodic?
My model is ergodic as derived. The empirical data and process that it models may or may not be. That is the issue: you would have to sample for an infinite time to claim that property. I don’t have time to wait, so I use the model for the bulk of the data (with a very acceptable fit, by the way: 0.999+) and extrapolate the long tails.
As always, the challenge is for someone else to come up with a better model.
What is it called that I am doing? Observation, characterization, and modeling. YMMV.
WebHubTelescope: My model is ergodic as derived.
I reread, and …
That is a slight misstatement. You derived the distribution, then tested whether the empirical time distribution matched the derived distribution, which it did very closely. So you are in the main correct: if it is not exactly ergodic, it is very nearly so.
WebHubTelescope, I reread your TOC post, and I see that I am still being imprecise. Here is a quote: I actually didn’t have to even vary the parameter, as the average energy was derived directly by computing the mean over the complete set of data separately. This corresponded to a value of 12.4 MPH, and I placed a pair of positive and negative tolerances to give an idea of the sensitivity of the fit.
As this is a single parameter model, the only leeway we have is in shifting the curve horizontally along the energy axis, and since this is locked by an average, the fit becomes essentially automatic with no room for argument. The probabilities are automatically normalized.
Thus, you derived the functional form of the state distribution, and then estimated the parameters (all 4 nicely constrained by the mean and the functional form) from the time distribution. Thus, you assumed ergodicity for the purpose of the computation, but you did not test it.
As I wrote above, and as Sklar describes in some detail, ergodicity was the formal expression of the idea that allowed approximate computations in otherwise intractable problems. As in your case, the assumption is often close enough to be useful. Should we take a one-time sample of all the wind speeds, we might well find that the ensemble distribution matched the time distribution – but you have not shown it. With that caveat, the time distribution is probably useful enough.
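The distinction being drawn here, the time average of one realisation versus the ensemble average over many, can be sketched with a toy stationary process (an AR(1) model of my choosing; for an ergodic process the two averages should agree in the limit):

```python
import random

random.seed(0)

def ar1_path(n, phi=0.6):
    """Stationary AR(1): x_t = phi * x_(t-1) + N(0,1) noise; ergodic for |phi| < 1."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

# Time average along one long realisation ...
time_avg = sum(ar1_path(200_000)) / 200_000
# ... versus an ensemble average over many independent short realisations.
ensemble_avg = sum(ar1_path(200)[-1] for _ in range(5_000)) / 5_000

print(f"time average: {time_avg:+.3f}   ensemble average: {ensemble_avg:+.3f}")
```

Both numbers land near zero here; an empirical test of ergodicity amounts to checking such agreements, which is exactly what a one-time ensemble sample of all the wind speeds would probe.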
OT – But it looks like climate deniers have their very own “climategate” now.
http://www.guardian.co.uk/environment/blog/2012/feb/15/leaked-heartland-institute-documents-climate-scepticism
Judy’s gonna let you rant about this in the next week in review. In the meantime, contemplate the disparity in funding between the consensus and skeptics, and cogitate over the failing consensus science.
===============
contemplate the disparity in funding between the consensus and skeptics, and cogitate over the failing consensus science.
Kim, did you follow your own advice and cogitate over it? Or is your belief system so thoroughly wedged that even if God himself came down in order to lecture you for an hour on the health hazards of a frozen brain you would still refuse to cogitate over it?
If you can’t even follow your own recommendations to others, you’re a dead loss on this blog.
Or did you actually do the cogitation and conclude that parity in funding would imply more consensus than would disparity in funding?
If so then your cogitator is seriously overdue for its next service.
kim:
Thanks – but I’ll leave the ranting to others.
And I would cogitate on the ‘failing consensus science’ except that it isn’t failing, except in the minds of people who believe the press releases from the Heartland Institute.
Singer, Soon, Idso, Watts – all of them on the take.
I fully expect that many of the ‘denizens’ here will have a “road to Damascus” experience once they read the leaked docs. Or not.
Hypocrisy. Heartland is thy name.
ceteris non paribus:
On the take? That is absurd.
The operative “gotcha” phrases in the “leaks” are clearly interpolated forgery. Terms like “teaching science” and “opposing opinions” etc. are pure warmista cant. They assume skeptics sloganize the same way they do, so they stuck them into their pretend Heartland documents.
Heartland has vigorously denied that the content of the “worst” of these letters comes from it, and will likely be able to prove that. Not that this will dissuade warmista hypocrites and shysters from repeating them endlessly. Just more of their (your) echo-chamber narrative.
@Brian H: Heartland has vigorously denied
And as we all know Heartland has a heart of gold and would never stoop to even a little white lie. Only blackhearted warmistas would stoop that low, right, Brian?
Vaughn;
Yes, you Pratt. They have done so repeatedly. And they have the “money trail” leading to their door. In the million$ and billion$, not paltry thousands.
Warmistas are Out To Save The World, on their own terms, and anything goes in the Cause.
Spit.
Warmistas are Out To Save The World, on their own terms, and anything goes in the Cause. Spit.
Just as the pen is mightier than the sword, so is a water pistol loaded with ink mightier than one loaded with spit.
I am well aware of the mathematical connection between symmetry of action and conservation laws. Unfortunately, Noether’s theorem does not apply to dissipative systems.
Even if this is slightly off topic, it deserves an answer.
I am still not sure whether the author knows what Noether’s theorems are about, or whether he read my post carefully.
My argument was that Noether’s theorems (and their generalisation) were for me the reason why I believe that energy and momentum conservation will never fail.
Obviously I didn’t say anywhere that Noether’s theorems apply everywhere on everything.
It is necessary (in the simple version) that the system’s dynamics be described by the Lagrangian alone.
Now all microscopic systems of importance are in this case.
Macroscopic systems with non-conservative forces are not, yet they conserve energy too.
So clearly Noether’s theorems are not sufficient for a category of macroscopic systems, but they are enough for me to deliver a solid starting point for an explanation of energy and momentum conservation.
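For readers following along, the simplest case referred to here fits in three lines: if the Lagrangian has no explicit time dependence, time-translation symmetry hands you a conserved energy.

```latex
% For L = L(q, \dot q) with no explicit time dependence:
\frac{dL}{dt}
  = \frac{\partial L}{\partial q}\,\dot q
  + \frac{\partial L}{\partial \dot q}\,\ddot q
% substituting the Euler-Lagrange equation
% \partial L/\partial q = (d/dt)(\partial L/\partial \dot q):
  = \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q}\,\dot q\right)
% hence the energy E is constant along every trajectory:
\frac{d}{dt}\left(\underbrace{\frac{\partial L}{\partial \dot q}\,\dot q - L}_{E}\right) = 0
```

This is exactly the “simple version” condition above: the dynamics must be described by the Lagrangian alone.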
@Tomas: My argument was that Noether’s theorems (and their generalisation) were for me the reason why I believe that energy and momentum conservation will never fail.
Starting when? Assuming E = mc² and that the mass m of the universe has increased since the Big Bang, why should energy be conserved?
There is no more reason for the universe to respect Emmy Noether’s theorems in any form than Euclid’s Fifth Postulate. For all we know the universe consists of only finitely many of the particles currently registered at the Particle Zoo.
If a particle and an antiparticle spring out of the vacuum in the manner prescribed by the relevant Feynman diagrams, did the vacuum give up some energy in accomplishing this? (That’s not a rhetorical question, the answer may well be yes to an excellent approximation given that this sort of thing has happened a lot since the Big Bang.)
I would argue that if energy is not conserved then neither is momentum, on the ground that a change of coordinates in Minkowski space swaps some momentum for some energy.
But this still leaves open the possibility that momentum and energy together have been conserved ever since the Big Bang, which is consistent with all of the reasoning above.
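The coordinate-change step can be written out. Under a boost with velocity v along x, the components of the four-momentum mix:

```latex
E' = \gamma\left(E - v\,p_x\right), \qquad
p_x' = \gamma\left(p_x - \frac{v\,E}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% Only E^2 - p^2 c^2 = m^2 c^4 is frame-independent, which is why
% energy conservation and momentum conservation stand or fall together.
```

Singling out energy alone is therefore not a Lorentz-covariant statement, which is the ground for the claim above.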
It would be nice if someone who’s thought this through carefully would weigh in on this stuff, especially if they can do so without appealing to the recent branedead fads that are so openended as to be about as meaningless as E8. (Obscure references, yes, my fault, no. For obscurity, foundations of physics has pinned postmodernism to the mat. Peter Sokal picked the wrong target.)
> Peter Sokal picked the wrong target.
Vaughan,
I believe you mean Alan Sokal:
http://en.wikipedia.org/wiki/Sokal_affair
Tomas, this is a great post. Thanks
A few more comments on the role of stochasticity.
Formally, Newton’s mechanics is fully deterministic, but starting from that, many important and well-known physical equations are difficult to understand and derive. Heat conduction is a prime example. It’s easy to derive based on random motions, i.e., stochastics. Exactly the same is true for all dissipative processes. One can imagine elaborate schemes for understanding them as a consequence of the deterministic motion of a multi-particle system, but they are easy to understand based on stochastics.
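The random-motions route to heat conduction is short enough to simulate. A minimal sketch (unit steps, my own parameter choices): the spread of many independent walkers grows linearly in time, the discrete counterpart of the diffusive spreading of heat.

```python
import random

random.seed(42)

# Unbiased random walk: each of `walkers` particles takes `steps` unit steps.
# The variance of the final positions equals the number of steps, the
# discrete analogue of diffusive spreading <x^2> = 2*D*t.
walkers, steps = 10_000, 200
positions = [sum(random.choice((-1, 1)) for _ in range(steps))
             for _ in range(walkers)]

variance = sum(p * p for p in positions) / walkers
print(f"variance after {steps} steps: {variance:.1f} (theory: {steps})")
```

No Newtonian bookkeeping of collisions is needed; the diffusive law emerges from the stochastic description alone, which is the point being made here.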
The explanation based on stochastics is perfectly valid. Complex initial conditions of multi-particle systems lead through ergodicity to the same result – or so we believe, although few have tried to check that. Furthermore, the real systems are not closed but influenced from outside in a manner that may with justice be called stochastic. One additional point is that quantum mechanics is after all a more accurate theory than classical mechanics. Thus the real world does not follow Newtonian mechanics.
It is essential to consider all dissipative processes, and their theories are practically always based on the stochastic nature of dissipation. Dissipation tends to make dynamics more stable and easier to analyze. If the dissipation of real-world systems is strong enough, modeling those systems with computer models becomes much easier. As far as Earth system models work, we may thank dissipation for that.
The heat equation also admits an infinite speed of propagation of thermal signals, due to a singularity in the diffusive term solution. That first random walk step is a doozy according to the math.
It also manifests itself in other diffusion models. This is a problematic bit of behavior that led to the Deal-Grove model for Si oxide growth as a practical yet non-ideal heuristic.
The easy way around this is to admit some uncertainty in the diffusion coefficient and in the interface location. The infinite speed disappears and the kernel solution comes out very clean. This is just part of the initial condition uncertainty that we know must exist in practice.
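The infinite-speed point is visible directly in the fundamental solution of u_t = D u_xx: for any t > 0 the kernel is strictly positive at every x, however far out. A sketch with D = 1 in arbitrary units:

```python
import math

def heat_kernel(x, t, D=1.0):
    """Fundamental solution of u_t = D * u_xx for a point source at x = 0."""
    return math.exp(-x * x / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

# An instant after release the solution is already nonzero far from the
# source: formally, the thermal signal has propagated at infinite speed.
far = heat_kernel(1.0, 0.01)
print(f"u(x=1, t=0.01) = {far:.3e}  (tiny, but strictly positive)")
```

Smearing D or the interface location, as suggested above, amounts to convolving this kernel with the uncertainty, which removes the pathological sharpness.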
Even though this works quite well, the role of disorder has always been marginalized, almost obsessively by certain disciplines. I have a text on this alternate, dispersive take, and tend to lean on it to model several phenomena. Not everything, just where a dynamic solution to a master equation is required.
Yes, but the problem is that in most interesting problems dissipation is so small as to be irrelevant except in the limit. All the interesting dynamics is dominated by convection and advection. I would claim that in fluid dynamics this issue has been examined in a lot of detail over the last 30 years, and it seems that for practical purposes the Navier-Stokes equations do describe turbulence, vortex dynamics, even convection very well. It is counterintuitive but I think probably true.
David,
For that reason I have mostly avoided the word diffusion and brought it up here only as a simple example of the importance of stochasticity.
Turbulent mixing is another example and turbulence adds in many ways essentially to the dissipation.
I would go as far as to claim that stochasticity is the real nature of the physics that’s analyzed by statistical mechanics. Ergodicity is an additional problem invented by mathematically inclined theoreticians who try to make deterministic Newtonian mechanics consistent with this true nature.
You discuss also persistent vortices that are present in the Earth system, which is not an equilibrium system but is not very far from stationary. I have no problem with that, but you should remember that the persistent vortices are what they are as a combined result of continuous forcing and strong dissipation. Looked at from this starting point, the variability of the climate can be formulated as the question of the stability of the quasi-stationary Earth system. Are the flow patterns stable, oscillating quasi-periodically, or switching unpredictably between several “attractors”?
I would say that obviously the current climate and weather is completely dominated by the constant dynamics of dissipation as observed by temperature changes, mainly over daily time periods with slower seasonal and spatial variations.
Sometimes I think the strong imagery of violent turbulence of the atmosphere and ocean is exaggerated. True, there is a huge amount of energy in the aggregate, but remember that the earth’s average wind speed is something like 12 to 14 MPH, which, placed in perspective, amounts to a leisurely exertion on my road bike. The ocean mixes at the level of eddy diffusion, with effective diffusion coefficients that are very low. What is the effective sqrt(Dt)? Perhaps tens of meters, or at most 100 meters?
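For what it's worth, the sqrt(Dt) figure is easy to check; taking commonly quoted background vertical eddy diffusivities of roughly 1e-5 to 1e-4 m^2/s (my assumed range, not a number from the comment) over a decade:

```python
import math

def mixing_length_m(D, t_seconds):
    # Characteristic diffusive penetration depth sqrt(D*t).
    return math.sqrt(D * t_seconds)

decade_s = 10 * 365.25 * 24 * 3600  # seconds in a decade
depths = {D: mixing_length_m(D, decade_s) for D in (1e-5, 1e-4)}
# Roughly 56 m for D = 1e-5 and 178 m for D = 1e-4: tens of meters up to
# about 100 m or so per decade, consistent with the estimate above.
```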
Then the effect of these huge decadal oscillations is like what, about a 1/2 degree C in average global temperature change? (And perhaps one can say the same thing about global warming.) Even though this is a low number, where is that energy coming from in the Navier-Stokes equations, if the equations are meant to be energy-conserving and non-dissipative? In the end, the natural variations in temperature are caused by the exchange of heat with the ocean circulation and melting perturbations, with the albedo and GHGs providing the external forcing (plus minor long-term nonstationarity from planetary effects).
One has to wonder what the agenda of these people is in creating this aura of chaos as somehow dominant and critically vital in the greater scheme of things, when stochastic analysis does such an adequate job in terms of a mean-value analysis. As Andrew Lacis has said, the energy balance is what really matters in the long term.
These are all extremely valid observations, and they are just not discussed very often because there are not enough people who have a deeply intuitive feel for how diffusion and dispersion play out in applied physics. I am still just a hobbyist in climate matters, but we have to remember, so is everyone commenting here. I guess I distinguish myself as the amateur with the flair for trash talk.
Web and Pekka, Dissipation is real but small. There are three points to make here.
1. The dynamics seem more important to climate than dissipation. The issue is the feedbacks, which depend on the dynamics. And in fact the feedbacks produce most of the greenhouse effect on temperatures, for example water vapor. Dissipation keeps things from blowing up, but cannot overcome continuous forcings.
2. The arguments from global conservation of energy or from entropy increase seem to me to be very weak constraints. Just think of an airplane with constant throttle settings cruising at a constant altitude. Now we raise the spoilers. The dynamics change a lot and the airplane dives rapidly. Energy input and output are still the same; the result for you as a passenger is totally different. Conservation of energy on every flux box is a stronger constraint, but conservation of momentum is also critical. This is why I have a problem with Andy Lacis’s global conservation-of-energy argument. It totally depends on feedbacks, which totally depend on dynamics, which are very difficult to get right. These things are not constant as the system changes, either. Feedbacks are likely strongly nonlinear functions of forcings and boundary conditions.
3. As much as I distrust GCM’s, they can at least in principle get the dynamics more or less right, assuming that spurious dissipation is carefully controlled. Excess dissipation, whether due to numerics or subgrid models, is a problem people have been fighting for 60 years with only limited success. Generally, the problem is that people don’t pay careful attention to it. If there has not been a conscious effort to control it, it is likely a deadly problem, especially for climate models where integrations are performed for hundreds of years. Trust me on this: even in fluid-dynamically simple problems, for example a turbulent boundary layer in a pressure gradient, our best modeling efforts can be off by 10% in total forces. The details of transition to turbulence are often off by 100%. These effects are very nonlinear but can have dramatic global effects.
By the way Web, your last post here gives me hope that you may yet become a civilized participant here. I may stop calling you Webby.
David, that’s funny. I was scrolling through this whole post and was going to respond here, and then I looked and saw it was the latest, so here goes.
I don’t have the mathematics background to appreciate Tomas’s posts. I have some sympathy for what Web is trying to do, but I wonder if there is enough data to tell if it could possibly work. Could the Earth system arrive at a state where some huge “burp” of energy is absorbed or emitted by the oceans? Do we have enough understanding of the system characteristics to know? I also have lots of sympathy for Pekka’s standpoint (and, terminological exactness aside, agree with most of what he says). However, I can’t help thinking that Web and Pekka are trying to apply techniques which have mostly been worked out for experimentally testable systems, and extrapolating to this one big one where we can’t experiment.
David – I see where you are going with Navier-Stokes, but it doesn’t stop there, right? In addition to the convection dynamics, you have cloud nucleation and a biosphere and lots of other chemistry to worry about. Are we ever going to get the shape of the attractor right given these constraints? Hell, even radiative transfer is QM-based, and so randomness works its way into the models at that most basic level, though in that instance it may be well characterized and deterministic in a sense.
It seems to me we are left with little more than educated guessing, which wouldn’t stop me from trying to write a GCM from scratch if someone wanted to pay me a lot of money to do it (so far, no takers) based on the best “available” physics, Web-and-Pekka style. It’s still guessing when it comes to unique problems like this one. I try to stay away from policy discussions on sciency threads like this one, but since there is at the very least a “policy” vs. “no-policy” debate here, I am going to say guessing is what we’ve got. That doesn’t stop us trying to improve the models, but the limitations are still extreme.
Finally a thought question as if anyone cared. Take as a problem the earth’s crustmantle system. Keep the energy coming from the core as long as you want. Take the initial conditions of now or whenever. Is the system of oceans and continents ergodic?
Good example. Black-body radiation is a perfect ergodic source. This follows from statistical mechanics, as an ideal BB source continuously generates the entire state space of photon energies (à la the Planck distribution).
That’s why I had earlier, and in another thread, compared my wind energy distribution to the Planck distribution as something that might be universal. The “blackbody” source of atmospheric kinetic energy generates a state space of wind speeds that maps to two levels of coarse graining: the finer graining is a very local wind speed, and the coarser graining an ensemble average of the local speeds. This is like a superstatistics approach, with each of the coarse-grained means calculated as a maximum entropy estimator.
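A toy version of that two-level picture (my own sketch of the superstatistics idea with arbitrary units, not Web's actual wind model): draw each coarse-grained local mean from a maximum-entropy distribution, then draw the fine-grained value exponentially about that mean. The global mean is unchanged, but the tail of the marginal distribution fattens considerably.

```python
import numpy as np

rng = np.random.default_rng(42)
n, mean_speed = 200_000, 1.0

# One level of graining: a single maximum-entropy (exponential) distribution.
plain = rng.exponential(mean_speed, n)

# Two levels: each coarse-grained local mean is itself a max-entropy draw,
# and the fine-grained value is exponential about that local mean.
local_means = rng.exponential(mean_speed, n)
superstat = rng.exponential(local_means)

# Same global mean, but the doubly stochastic version has a much fatter tail.
p_plain = np.mean(plain > 5 * mean_speed)
p_super = np.mean(superstat > 5 * mean_speed)
```

The closed form of the two-level marginal is a Bessel-K distribution rather than an exponential, which is the kind of fat-tailed shape superstatistics typically produces.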
“Finally a thought question as if anyone cared. Take as a problem the earth’s crustmantle system. Keep the energy coming from the core as long as you want. Take the initial conditions of now or whenever. Is the system of oceans and continents ergodic?”
I think you have to ask that question with respect to some measure over a state space. (I got that from the formal definition) The blackbody radiation of the oceans and continents certainly would be.
yeah no not that simple, i was thinking size and position of the continents and oceans. seems to me you could have an attractor of sorts if the properties of the system kept returning land masses within a limited size range. maybe not. maybe the end state would be more and more smaller land masses (entropy of a sort). maybe the overall quantity of land area has an attractor defined somehow by buoyancy of the lighter crustal material, with its finer structure determined by massing heights etc. yawn….
I think quantum randomness must count for a lot here, because of all the chemistry involved. We talk about molecules as if they were deterministic, but AFAIK the distribution of momentum in molecular collisions is controlled to some extent by QM.
hey does this whole thing boil down to QM vs. classical mechanics? time to go to sleep.
David,
You make statements like
– the dissipation is small
– Dissipation keeps things from blowing up, but cannot overcome continuous forcings.
We appear to have different scales of comparison for “small”. I consider that strong dissipation is needed to keep the Earth system as stable as it is. With little dissipation we could have hugely stronger instabilities, not just the occasional hurricanes and whatever else we see on the real Earth.
Dissipation is most visible in the way all kinds of vortices disappear at different spatial scales. Some vortices persist because they are driven uniformly enough by forcings related to solar radiation and Earth’s rotation. They are essentially stationary, but their strength is controlled by dissipation.
There are very many phenomena that might occur in the Earth system but do not, due to strong enough dissipation. Concerning long-term variability, the question becomes where the dissipation is weak enough to allow variability to build up.
When large Earth system models are used, one of the crucial questions is: do they have the right level of dissipation, or too much due to numerical diffusion and other stabilizing techniques, or perhaps too little due to neglect of some dissipative mechanisms or overuse of techniques meant to counteract numerical diffusion? All these problems may appear in different ways at different spatial and temporal scales.
Pekka, We can quantify the dissipation. It’s 1/Re, where Re is the Reynolds number. Since the physical length scales for the earth are large, the Re is large as well. For an airplane, Re = 10^7. So dissipation is 7 orders of magnitude smaller than the convective terms. You know that vortices dissipate pretty slowly. They break up into smaller and smaller filaments.
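Those orders of magnitude are easy to sanity-check with Re = U*L/nu; the length and velocity scales below are my rough assumptions, not David's numbers:

```python
def reynolds(U, L, nu=1.5e-5):
    # Re = U*L/nu, with nu the kinematic viscosity of air (~1.5e-5 m^2/s).
    return U * L / nu

airliner = reynolds(250.0, 5.0)    # cruise speed, chord-like length scale
synoptic = reynolds(10.0, 1.0e6)   # large-scale atmospheric flow, ~1000 km
# The airliner comes out around 10^7-10^8 and the synoptic-scale flow around
# 10^11-10^12, so molecular dissipation is indeed many orders of magnitude
# below the convective terms.
```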
Pekka, Trust me on this. If numerical dissipation is too small, the result is usually instability. Thus, almost all numerical calculations are too dissipative. There is a constant struggle to control it. My problem is that I got no indication from the experts such as Schmidt that they had thought about this problem very carefully. Usually, that means the dissipation in the models is too high, perhaps very high. And that implies that the behaviour seems more stable in the long term than the real system.
As I see it, the chaotic nature can thus cut both ways. If the greater instability in the real system leads to more easily reachable runaway conditions, does that make you an even greater CAGW alarmist?
Seriously, what will happen if the climate scientists do the numerical computation correctly to your liking?
Will it make the warming signal even more wild (since the dissipation effects are removed, replaced by even stronger fluctuations)?
Or will it lead to some other cycle that ultimately obscures the GHG forcing? Or do you just want them to do the GCM’s correctly?
If the results stay the same and agree with Andrew Lacis’ premise that energy balance is what really counts, would you then go with the consensus?
Yes. I am totally in favor of correct science. If CO2 is a problem, we need to know. That’s why I am upset at climate science. They are failing to give us the facts we need, partly, I fear, because some are ideologically inclined to desire a certain outcome.
David,
Two comments on your earlier comments (and a bit more).
First, I do agree that it’s typical for numerical models to be too dissipative, but the argument that too little dissipation makes the model unstable does not hold if the real system is dissipative enough to allow for a model that is less dissipative but still stable. I have written several messages, months or perhaps a year ago, where I discussed the point that all GCM’s might be too dissipative for processes characterized by large spatial and temporal scales. I have no evidence on that, but it seems to be a real possibility.
The second point concerns the Reynolds number. That applies to a specific type of dissipation, that characterized by viscosity, which is due to molecular level processes, but that’s not the only form of dissipation. The creation and annihilation of larger scale turbulence and vortices may lead to dissipation whose strength is not characterized by the Reynolds number. Just thinking a bit on typical atmospheric phenomena like storms etc. tells immediately that there are indeed many other dissipative mechanisms that play a great role in the Earth system dynamics. The case is not as clear with ocean currents, but I suspect that the range of dissipative mechanisms is rather wide there as well.
Finally we have the requirement that destruction of free energy by all dissipative processes must equal the net creation of free energy by solar radiation and the rotation of Earth. Thus the total amount of dissipation is fixed and the question is actually: What is the totality of processes that the flux of free energy can maintain before it is lost in dissipative processes? If the level of dissipation is weak, then we have a lot going on and vice versa.
We know what happens on the level of weather phenomena, but we don’t really know what happens on the long time scales of decades and centuries. How strongly are such phenomena driven, and how strongly are they damped by the forms of dissipation that affect them?
This is getting into Axel Kleidon’s area of research.
Kleidon estimates that at most 6000 TW is available as atmospheric kinetic energy out of the approximately 100,000 TW of average solar insolation hitting the earth’s surface. Assuming that is correct (and I have some other corroboration that it could actually be between 1700 and 3500 TW), then what is the residence (or relaxation) time of this kinetic energy before it gets dissipated as heat? And I would want to know if this residence time has fat tails, which would indicate how long it could persist beyond the intuitive level of a few days or weeks. I can imagine that there are some very interesting first-order analyses one can do, such as using the mass of the atmosphere, 5*10^18 kg, and the average wind speed, or perhaps looking at autocorrelation functions.
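That first-order estimate takes only a few lines. The atmospheric mass and the 1700 to 6000 TW range come from the comment above; the wind speeds are my own assumptions (a whole-atmosphere rms speed is well above the surface mean, since winds aloft are much faster):

```python
def residence_time_days(mass_kg, v_rms, dissipation_W):
    # Relaxation time = stored kinetic energy / dissipation rate.
    kinetic_energy = 0.5 * mass_kg * v_rms**2
    return kinetic_energy / dissipation_W / 86400.0

m_atm = 5e18  # kg, mass of the atmosphere (from the comment above)
taus = [residence_time_days(m_atm, v, P)
        for v in (6.0, 15.0)        # m/s: surface mean vs a rough rms aloft
        for P in (1.7e15, 6.0e15)]  # W: the 1700-6000 TW dissipation range
# All combinations land between a few hours and a few days, consistent with
# the intuitive residence time mentioned above.
```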
Then also apply this kind of analysis to the ocean circulation, where I imagine the dissipation process is much slower.
Is there any indication that this is a valid path to pursue? I was thinking that maybe GCM’s of the atmosphere are less important than the ocean models, so one could use a stochastic model for the atmosphere and retain the GCM for the spatial idiosyncrasies of the oceans. Plus, how is this discussion related to the hyped work of Tsonis, where I think he is doing the same kinds of analysis concerning fluctuations and dissipation?
Pekka,
While I generally agree with your last message, I’m a little confused about the other dissipative mechanisms. The only one I can think of other than boundary layer friction is molecular viscosity.
Let me rephrase the point about numerical and artificial dissipation. Neglecting viscosity, which if very small is not well resolved on practical grids, the Navier-Stokes equations have a hyperbolic component, and central difference methods are unstable: they generate spurious oscillations. Whether they blow up is academic; the answers are badly wrong. The way to fix this is to introduce artificial viscosity via one mechanism or another. The most popular is “upwinding” the first-order differences. This problem is 60 years old, and the literature is voluminous and repetitious and mostly about epsilon improvements of no practical significance. However, this is the dissipation I was referring to. Without artificial dissipation, most practical models of fluid dynamics systems are essentially singular at interesting Reynolds numbers, i.e., when the actual diffusion is so small it cannot be resolved on the computational grid.
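A minimal illustration of that point using standard textbook schemes (not any particular production code): forward-Euler central differencing of 1D linear advection is unstable and amplifies grid-scale oscillations until the answer is garbage, while first-order upwinding stays bounded at the price of an artificial diffusion of order c*dx/2 that smears the pulse.

```python
import numpy as np

# 1D linear advection u_t + c*u_x = 0 on a periodic grid, forward Euler in time.
nx, c, cfl, nsteps = 200, 1.0, 0.5, 1000
dx = 1.0 / nx
dt = cfl * dx / c
x = np.arange(nx) * dx
u0 = np.exp(-200.0 * (x - 0.3) ** 2)  # a smooth pulse

def step_central(u):
    # Central difference: no built-in dissipation, unconditionally unstable here.
    return u - c * dt * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def step_upwind(u):
    # First-order upwind: stable for cfl <= 1, but adds artificial viscosity.
    return u - c * dt * (u - np.roll(u, 1)) / dx

uc, uw = u0.copy(), u0.copy()
for _ in range(nsteps):
    uc = step_central(uc)
    uw = step_upwind(uw)

central_blowup = np.max(np.abs(uc))  # grows without bound
upwind_peak = np.max(uw)             # bounded, but smeared below the initial 1.0
```

The same trade-off, scaled up enormously, is what the 60-year-old artificial-viscosity literature wrestles with.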
In any case, I still claim (roughly) that the dynamics are more important than dissipation for climate because of feedbacks. I think most of the significant climate changes of the last 10 million years were probably due to feedbacks that dwarfed radiative forcings. Climate is always changing and regardless of what we do, there will eventually be climate challenges if for no other reason than orbital changes in the distribution of forcing. We need to focus on understanding and not on political agendas and ideological predispositions, such as the one that started with Rousseau that mankind and his civilization spoiled the pristine “noble savage.”
We talk about molecules as if they were deterministic, but AFAIK the distribution of momentum in molecular collisions is controlled to some extent by QM.
As are all particle interactions. But the uncertainty in molecular collisions resides far more in the uncertainty of the trajectories before the collision than in anything that happens during the collision itself.
While absorption of photons by molecular bonds is likewise at the mercy of the approach, QM plays a much bigger role in the actual interaction of the photon with the molecular bond than it does in a molecular collision, which can be accurately modeled as a classical interaction. One can work backwards from the momentum and energy of the colliding particles before and after to reconstruct very accurately how close they came to a head-on collision, far more accurately than could any local quantum-mechanical analysis of the collision itself.
David,
The dissipation that I’m talking about is principally related to the formation, behavior and cancellation of turbulence and large-scale vortices, i.e. mechanisms that cannot be calculated correctly from the Navier-Stokes equations in practice (and perhaps not even in theory). The Reynolds number tells whether turbulence starts to form in a flow, but what happens once it forms can be calculated only very approximately, using methods that work for some cases and in some respects but fail more generally.
What you wrote about the required dissipation is related to the above. What I had in mind on the level of dissipation in the models involves also other larger scale mechanisms like those I discuss below on oceans. The problems are in several ways too complex for comprehensive modeling based on first principles and modeling dissipation is an issue at every level.
When turbulence has been formed it creates typically further turbulence and adds very much also to the molecular level dissipation. In the atmosphere there are very many mechanisms that create turbulence, there are local gradients that determine the effective Reynolds number. Thus the global dimensions or the height of the atmosphere are not the only spatial dimensions of importance. The complex geography of land areas is a major factor, but not the only one.
What I consider particularly interesting for the climate is the dynamics including sources of dissipation in the oceans. It’s known that mixing based on very small density gradients is much more important than simplistic models predict. What controls the patterns of the “conveyor belt” is an interesting question. In particular it would be important to understand, whether there are several attractors for that pattern and how switching between these attractors occurs if there are many. The answers depend on the level and nature of dissipation, but not only on that.
Pekka, Thanks for the clarification. It is indeed true that turbulence creates increased effective dissipation. But it still raises the diffusion coefficient only a few orders of magnitude. In any case, vortex dynamics often takes a long time to create turbulence at small scales. It does depend on the situation. It seems we can agree that proper modeling of actual dissipation is essential. I still contend that in fact this is usually not the case. Even in simple situations, we are still not there, for example a turbulent boundary layer. Most of the turbulence models are too dissipative, as can be seen from the fact that vorticity is damped too quickly.
Right, but in the relevant questions we do have quite a lot of time.
Major atmospheric weather patterns have typically lifetimes from days to months and ocean currents may change over decades or centuries.
When we ask, whether dissipation is strong or weak we must consider also the time scales.
@Pekka: As far as Earth system models work, we may thank dissipation for that.
Quite right, and this is moreover a central point.
Though few computer scientists currently realize this, the perspective on the interaction of time and information that they’ve built up over the past few decades equips them very well to engage physicists on basic issues like dissipation. In particular computer scientists would feel comfortable discussing the four points of the following square.
EVENTS TIME
STATES INFORMATION
Time can be understood as a metric on a space whose points are events, while information can equally well be thought of as a metric on a space of states.
The concept of dissipation would not exist without the concept of state. The motion of planets as prescribed by Copernicus, Kepler, Galileo, and Newton is essentially stateless. If you play backwards a movie of the planets circling the Sun, with the solar system’s North and South interchanged, you would find it satisfies the same laws as with the standard orientation of space and time. Here the arrow of time seems perfectly symmetric.
But if you play backwards a movie of someone diving into a swimming pool, it’s obvious it was backwards. Turning the scene upside down to make it appear that the diver is falling out of the pool onto the diving board lower down might satisfy the basic laws of Newtonian mechanics and gravitation, but it would not satisfy anyone who notices the ripples.
The dead giveaway here is that the ostensibly chaotic ripples on the surface of the pool very suddenly decay to a perfectly flat surface.
People take dissipation for granted until they see it timereversed.
The flat pool expresses a welldefined state. Diving into it erases a great many bits of that state.
Without state we could not perceive time, because we could not tell what had changed. We need state to keep track of what used to be, in order to compare it with what is now, in order to infer that something has changed.
With perfect recall we might have perfect clairvoyance. However combinatorics renders perfect recall an impossibility (we can’t begin to store all the information about the past), whence so is perfect clairvoyance.
Good post. Very succinct and clearly put. The impact of the dive reminds me of a forcing in a weather/climate context, and IMO this forcing is only predictable if (say) the dive is part of a regular data set with relatively narrow stochastic variance.
My takeaway from this thread so far is that if it can be shown that previous observations have no influence on current or future observations, then the data set would have negligible predictive capability. Many examples of such memoryless data sets abound, roulette wheels and the like.
From an economics perspective, the stock exchange data set has very little predictive capability, as past stock prices have no bearing on future stock prices; yet many people use charts of past price movements to support their buying or selling decisions, although there is no rational basis for doing this.
My takeaway from this thread so far is that if it can be shown that previous observations have no influence on current or future observations, then the data set would have negligible predictive capability
Agreed. My recent research has been into the 162 numbers constituting the annual averages (column 14) of the HADCRUT3VGL data set for global land-sea temperature since 1850. This is a truly minuscule dataset compared to the gigabytes of data distributed by Richard Muller’s Berkeley Earth Surface Temperature project.
Logically one should not be able to deduce as much from a tiny dataset as a huge set, so if I can show the opposite then this may tell us something important about climate.
The data set of BEST temperatures is another example of observations that have little predictive capability IMO and this applies to most other raw climate data that I am aware of.
Your project is most interesting, and good luck with it. The reduction of stochastic fluctuations through the use of annual means could well yield better information than can be deduced from the very much larger BEST data set, as you suggest.
Notwithstanding the foregoing, I would still be cautious in using climate data, in any format, for prediction purposes. Weather forecasts over a 7 to 10 day period can yield information of value for adjoining areas when based on contemporaneous observations, but this is not really prediction based on data analysis.
Thank you Tomas and those contributors who have added to the sum total of this learning experience for me. Humility is a great virtue in science, and it is well past time for some of us to start practising it.
I profess not to be visiting this blog to exercise my ego but to learn and to gain understanding of climatology to the best of my intellectual capacity and to make as informed a judgement as I can on the merits of the AGW hypothesis.
As Judith has already said she will desist from using the term “ergodic”, and in the knowledge that her scientific credentials are way beyond my own, I, too, will desist :) and will simply say in future that something or other is not capable of prediction.
Let’s try a God like coin toss – say daily rainfall totals. We have reasonable records over more than 100 years from school houses and post offices all over Australia. So we can rank the records at points – there is a lot of regional variation – construct a probability distribution and finally fit them to a log Pearson type III curve with a regional skew coefficient. Fantastic – but then we notice that we get 25 year regimes of high summer rainfall followed by low summer rainfall – with abrupt change between. The average isn’t much good because it is too low for one regime and too high for the other. Totally misleading for water resource planning. So we stratify the records into one high flow pile and one low flow pile and go through the whole exercise again – and we now have two averages. Much better. But these aren’t exact 25 year regimes – they tend to vary from 20 to 40 years. So now we would like to know just when these regimes are going to start and finish. Ummm. For that we need a theory of why – and I’m going with control variables and dynamical complexity unless someone has a better idea. Ergodic – smergodic – I’ve got a dam to build.
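The stratification point is easy to demonstrate with synthetic data (the numbers below are illustrative only, not actual Australian rainfall, and a plain gamma distribution stands in for the log Pearson type III fit):

```python
import numpy as np

rng = np.random.default_rng(7)

# A synthetic 100-year record: two ~25-year wet regimes alternating with two
# ~25-year dry regimes, with abrupt switches between them.
wet = rng.gamma(shape=4.0, scale=300.0, size=50)   # mm/yr, wet-regime years
dry = rng.gamma(shape=4.0, scale=150.0, size=50)   # mm/yr, dry-regime years
record = np.concatenate([wet[:25], dry[:25], wet[25:], dry[25:]])
regime = np.array([1] * 25 + [0] * 25 + [1] * 25 + [0] * 25)

pooled_mean = record.mean()
wet_mean = record[regime == 1].mean()
dry_mean = record[regime == 0].mean()
# The pooled mean falls between the regime means: too low for the wet regime
# and too high for the dry one, which is why the stratified pair of averages
# serves water-resource planning so much better.
```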
Oh, God. Ick.
Tomas,
Web has a juvenile personality. Being retired, he has little else to do with his time but to condescend to those who know more than he does.
On a more serious note, I want to hold forth on an issue that may be one of the sources of confusion here. Basically, before we can talk about properties of a solution to a partial differential equation, we must define the norm we will use to measure the solution. The situation, I think, is that a lot of analysis of elliptic systems uses the H1 norm, which integrates the squares not just of the fundamental variables but also of their derivatives. For a lot of problems, such as fluid dynamics, this is the appropriate norm, since for example the skin friction is related to the derivative of the velocity. The H1 norm is a harsh mistress. What we see with chaotic systems is that the problem is ill-posed with respect to initial conditions when measured in the H1 norm. This is the butterfly effect, and it says that the details are chaotic. The part of the argument that is not rigorous, and in my mind very questionable, can be generously paraphrased thus: “I know the local errors are large, but every time I run the model, the long-term statistics seem to be similar.” One is tempted to ask exactly what they mean by similar, but that is met even by Schmidt with silence. And this is the problem as I see it. The attractor for Navier-Stokes is likely of very high dimension and very complex. What is needed, I think, is some breakthrough in nonlinear analysis or numerical methods that can quantify these very vague notions. Perhaps you already know of such a thing. I would be eager to learn about it. Once again, thanks for this rigorous and informative post.
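For readers who haven't met the notation: the H1 norm is the standard Sobolev norm that weighs derivatives as heavily as the values themselves,

```latex
\|u\|_{H^1(\Omega)}^2 = \int_\Omega |u|^2 \, dx + \int_\Omega |\nabla u|^2 \, dx ,
```

so a perturbation with tiny amplitude but rapid oscillation can be small in the L2 norm yet large in H1, which is exactly why derivative-dependent quantities like skin friction are so unforgiving.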
There is a mountain of evidence in fluid dynamics that the Navier-Stokes equations describe accurately even very chaotic flows, if integrated accurately. They seem to summarize fluid mechanics remarkably well, contrary to a previous century of statistical mechanics explanations of turbulence. I also note that in practice, direct integration of the NS equations is just impossible and will remain so for at least 50 years; the Reynolds number is simply too high. Thus we are forced to fall back on turbulence models, which can do some things well but generally get worse and worse as we stray further from the model problems for which they are calibrated. This is in my view the main challenge in all of fluid dynamics: to characterize the effect of subgrid turbulence on the resolved scales in a theoretically more satisfying way.
David Young
Being retired, he has little else to do with his time but to condescend to those who know more than he does.
Speak for yourself. I was just condescended to by someone who knew more about H1 norms than I do.
Sorry, didn’t mean to condescend. I think Tomas probably knows more about the H1 norm than I do.
David Young says:
(A) It is amazing what one can accomplish if one spends a couple of free hours every evening blogging one’s thoughts over the course of several years. Eventually you can collect hundreds of interconnected pieces of evidence that turn into a thesis, if you can maintain some diligence.
(B) The exact opposite occurs if all one does is write comments to blog sites which complain over and over that people are doing things wrong, and the only right way to do something is only known to the people with the keys to the kingdom.
Which categories do we all fit into? (A) or (B)
Professor Pratt obviously fits into category A.
P.S. Age-wise, I am not even close to retirement. Maybe that accounts for my adolescent nature, and why I am not a villager.
Web, We have been over this territory many times over the last 6 months. Climate science seems like a rather insular field when viewed against other areas that are more mature, like fluid dynamics and statistics. What surprised me was that climate scientists seem to be less willing to learn than most, at least in my experience. I understand why that is the case, i.e., they feel under siege and challenge. But you know life is tough when you stoop to political activism. My only interest in this is to get the science and mathematics more right. I have Navier-Stokes codes to write and run, and I don’t dabble in trying to set up and run something like a shallow-water atmospheric simulation, if that’s what you are trying to imply. I don’t have access to all the data and the massive software infrastructure required to get initial and boundary conditions set up, run hundreds of runs, etc. Such things require a large team, and anyway, the real progress is more likely to come in other areas and then be picked up reluctantly by the Real Climate commissars. Their problem is that having chosen a political agenda, they now need to convince people like me that they are right. So I’m waiting patiently for a time when their arguments are convincing enough for me to write my Congressman. So far, I prefer Judith by a long way.
So far, Fred has come the closest to making inroads. But I just can’t buy the simple conservation of energy arguments as discussed above. For someone like me who has extensive experience in being a peer reviewer, I do know the telltale signs of problems.
David,
We discussed this point when you started to write on this site. My view is that you draw unjustified conclusions about the willingness to learn based on web discussion. If you didn't know then, you certainly know now, how polarized the discussion typically is. All parties feel that they have to be on the defensive and avoid showing weak points to their opponents. Therefore the public discussion gives a distorted view. (I have been reading Mann's new book and that reading tells once more clearly how bad the situation is. Mann cannot concentrate on the subject, but thinks more about the opposition than about the content. The other side does just the same.)
Under these conditions you cannot get full and sincere answers. That’s bad, but that doesn’t tell what happens behind the curtain.
One real reason for the sluggishness of reacting to alternative approaches is in the large models. The models are originally built on first principles, but that's only the starting point. Discretization alters the modes, and it's true for practically any large model in any field of application that the model ends up as a combination of first principles, semi-empirical parameterization and tweaking. Using all those tricks the resulting model starts to behave more or less as expected, which means that it doesn't fail dramatically in any respect and that it can reproduce reasonably well many results in agreement with empirical data. The model works as a whole, and changing any of the basic details means that most of the rest must be reconsidered again. The modelers' trust in the model is based on the number of successes in comparison with data rather than on the first-principles physical accuracy of the model.
That's particularly true for changes that affect features as essential as the method of time stepping. Therefore the model builders prefer smaller stepwise improvements to fundamental changes in time stepping. I do believe that other factors can indeed be adjusted so that a better time-stepping method does not necessarily result in a more correct model. It might improve the speed of calculation significantly, but reaching that point would take so much time that breaking even would be far off.
Pekka, I understand why people are reluctant to admit weaknesses. I’m just saying that the situation is a lot better in other fields and I expect real scientists to be more honest. Being honest will be its own reward. By the way, I’ve tried the behind the curtain approach and have heard nothing definite. You know people are polite and promise to read the papers and then they don’t get back to you. In most fields it would be different. It does anger me a little that in such an important area, the politics is so paramount.
David,
I’m fully with you in the wish to get more information.
The main difference in my approach is that I refrain (also in my own mind) from drawing conclusions about faults in people's behavior when there are also more benign explanations for the state of affairs.
I'm more worried about system-level failures. One must combat them, but that doesn't require condemning individuals based on insufficient evidence. (You are not among the worst in that, but you have also made statements that I wouldn't make even privately or to myself, let alone on the web. Such statements may make you a foe for some of the scientists and lead them to act accordingly.)
Well, my experience is that in other fields, people aren't nearly so sensitive to strong statements. My opinions as expressed here pretty much agree with Muller. But then in other fields, I've never seen the kind of thing with regard to the peer review process that we see in climate science as revealed in the emails. Do you really think most climate scientists read Judith's blog? I doubt it. They read what is posted at Real Climate as excerpts, often taken out of context.
Well mostly climate scientists don’t read the blogs at all. However, I do have a surprisingly (to me) large following among climate scientists, based upon people telling me they follow the blog.
David,
Maybe you just cannot imagine what scientists of other fields write in their emails :?
Some of the emails do really tell about wrong attitudes, but much of the fuss is greatly exaggerated.
The attitudes expressed in the ClimateGate emails explain why the principals were not doing science, and not doing science is what got them and us in the predicament we are in. That point needs more exaggeration.
==========
Pekka –
This is an interesting point w/r/t a possible implication of blogs as a methodology for "peer review." Even when people post anonymously, the element of being on public display is certainly a prominent (and potentially negative) influence on how people approach discussions. And of course, there is the quality of being a "real-time" exchange – which I think has both positive and negative influences (immediate feedback and creativity versus the benefits of time for deliberation and careful construction of expression).
I think that this is a useful consideration when people sometimes might be tempted to adopt a binary mentality in finding fault with peer review and lauding blog science – as opposed to weighing the comparative pros and cons of both vehicles for exploring ideas.
Of course, people feel a need to self-protect in the process of peer review also – so the question for me is whether that tendency is any greater when discussions take place in public? This, I think, also touches on the question of whether or not more access to code, data, or methodology is always a "more is better" situation. At what point, if any, do the benefits of openness and public access bring diminishing returns?
In Vaughan's defense, WebHubTelescope might be talking about these mistresses from experience.
As E.M. Forster writes in his magnum opus:
> Only connect.
http://neverendingaudit.tumblr.com/post/17879167287
Pekka, Actually the emails show a lot of controversy and that is very healthy. What is inexcusable is that the public face is different than the private one. Especially with regard to the hockey stick, privately, there were strong objections and reservations, but in public every measure was taken to keep the problems out of the literature. The problem I think is what the emails show about the integrity of the literature and how some people chose to try to manipulate it. And even more disturbing is the Schmidt doctrine that this doesn’t matter. Has the situation gotten any better? I would say no. That is the real scandal. I like controversy and arguments about science. It’s all good. But the literature should clearly reflect these things.
As an effort to make the term understandable, this is largely a demonstration of how wedded mathematicians are to their own vocabulary. They don't even notice when undefined terms are slipped into the mix. These, believe me, are deadly for any non-insiders trying to grasp meanings. First symptoms are loss of interest, then sleepiness, then fixation on some simplistic (mis)understanding and avoidance of the subject thereafter.
Four examples at random:
PDF
RAM
regodic
Hamiltonian
Not to mention casual use of ‘cartesian’, ‘phase’, ‘iterative’, etc., etc.
At the very least be very explicit whom you are addressing when writing such material.
Brian, You will have to excuse the mathematicians here. We have been infected with the superior attitude of the ever sarcastic and condescending Gavin Schmidt and the other commissars at Real Climate!! Some of these terms are unfortunately very difficult to explain simply. Please indulge us. It will take a long struggle to understand some of these things or to explain them. In graduate school, I thought I would never get there. :)
Brian, at least Tomas said it was like billiards instead of nuclear physics, so we gots a chance :) I may have to play some 8ball to study up on ergodicity the regodic, er.. rerack and study some more.
Google Hamiltonian billiards – I dare you.
LOL Chief, I have, that’s why I stick to 8Ball, the table is rectangular and has pockets :) That makes 8ball nonergodic :)
That makes 8ball nonergodic
Assuming smooth cushions. What if they’re made of molecules? :)
It’s the pockets that make it nonergodic – conceptually equivalent to black holes. God doesn’t roll dice but does play pool with black holes and quantum entanglement.
conceptually equivalent to black holes
Are you more or less likely to notice a black body being sucked into a black hole than a polar bear coming at you out of a snow storm? ;)
God doesn’t roll dice
Finally you found something to agree with Fred on, CH. Never thought I’d see the day. You, Fred, and Einstein in a threeman protest against gambling in physics. Five if you count Podolsky and Rosen.
A deeply interesting thread, and magically almost free of trolls :)
Thanks to Tomas, Pekka (and CH for pushing discussions in this direction over quite some months)
As for ergodicity itself, I'm in the position a 1-watt bulb finds itself in, in the middle of the Simpson Desert – turned on, but with an effective radius of less than 1 metre. There is some light, though, where before there was none
As an analogy, I was patiently trying to show my 14-year-old daughter the concept of infinity (lo, these 10 years ago). When I thought I'd reached a state of adequacy with her, I was stopped in my tracks: "When space finishes, what's on the other side?"
My latest painful infinity thoughts:
If you play an infinite number of games of solitaire, will you have somewheres therein an infinite sequence of wins? Of losses? Both? Every possible hand will be repeated an infinite number of times, of course, but will there be an infinite sequence of repeats of each hand?
Yes.
This is equivalent to a more classical and “easier” question.
There are as many natural numbers (1,2,3 etc) as odd numbers (1,3,5 etc) and the number of both is infinite.
It’s because infinity + infinity = infinity and not 2 infinities.
As soon as you take an infinite set, you can get many apparent paradoxes which are due to the trivial observation that what is infinite is not finite :)
Of course in practice you will never be able to play an infinite number of games, so the question is kind of academic.
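Tomas's claim that there are as many naturals as odds can be made concrete with a toy check (my own illustration, in Python, not part of the original exchange): the map n → 2n − 1 pairs every natural number with exactly one odd number, and the existence of such a pairing is precisely what "as many" means for infinite sets.

```python
# The map n -> 2n - 1 pairs each natural number with a unique odd number,
# so the naturals and the odds have the same (infinite) cardinality.
def to_odd(n):
    return 2 * n - 1

def from_odd(m):
    return (m + 1) // 2

naturals = range(1, 1001)
odds = [to_odd(n) for n in naturals]
assert odds == list(range(1, 2000, 2))                   # hits every odd exactly once
assert all(from_odd(to_odd(n)) == n for n in naturals)   # the pairing is invertible
```

The finite check only illustrates the pattern, of course; the mathematical point is that the pairing works for every n.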
I got more sense reading this:
http://en.wikipedia.org/wiki/Ergodicity
David Young
An interesting thing I have heard recently but not investigated is the assertion that the multibody gravitational problem is in fact chaotic as the number of bodies increases. Do you know anything about this?
Yes, this is what Poincare found by studying the 3 body system already 100 years ago.
In that sense the father of deterministic chaos is Poincare, and the amusing part is that chaos was first found in a system which had been thought the perfect example of clockwork regularity – celestial mechanics.
This gave birth later to the celebrated KAM (Kolmogorov-Arnold-Moser) theorem, which demonstrates how perturbations destroy the invariant tori (in the phase space) for Hamiltonian systems.
You can read more about chaos in the planetary orbits of the Solar system here : http://arxiv.org/pdf/1103.1084v1.pdf.
The attractor for the NavierStokes is likely of very high dimension and very complex. What is needed I think is some breakthrough in nonlinear analysis or numerical methods that can quantify these very vague notions. Perhaps you already know of such a thing. I would be eager to learn about it.
Nobody knows if it is "complex" but it is known that the upper bound is large.
As you are familiar with Hilbert spaces, read that : http://homepages.math.uic.edu/~acheskid/papers/NavierStokesattractors.pdf and http://arxiv.org/pdf/nlin/0403003v1.pdf then follow the track.
These papers are beyond the usual technical level of blog discussions so it is recommended only for people who are seriously interested in these matters and have adequate (math&physics) training.
I would go as far as to claim that stochasticity is the real nature of the physics that's analyzed by statistical mechanics. Ergodicity is an additional problem invented by mathematically inclined theoreticians who try to make deterministic Newtonian mechanics consistent with this true nature.
and
Formally Newton's mechanics is fully deterministic, but starting from that, many important and well-known physical equations are difficult to understand and derive. Heat conduction is a prime example. It's easy to derive based on random motions, i.e., stochastics. Exactly the same is true for all dissipative processes. One can imagine elaborate schemes for understanding them as a consequence of deterministic motion of a multi-particle system, but they are easy to understand based on stochastics.
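Pekka's heat-conduction example is easy to demonstrate numerically. A minimal sketch (a toy of my own, not from any paper under discussion): the mean squared displacement of independent ±1 random walkers grows linearly in time, which is the diffusive behavior that the heat equation describes macroscopically.

```python
import random

def random_walk_variance(steps, walkers=2000, seed=1):
    """Mean squared displacement of independent +/-1 random walks.
    Linear growth of the variance in time is the microscopic picture
    behind macroscopic diffusion and heat conduction."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))
        msd += x * x
    return msd / walkers

v100 = random_walk_variance(100)
v400 = random_walk_variance(400)
# The variance after n steps is exactly n, so quadrupling the number
# of steps should roughly quadruple the spread.
assert 80 < v100 < 120
assert 3.2 < v400 / v100 < 4.8
```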
Pekka, I am sorry and wouldn't want to come over as rude, but you have misunderstood and confused just about everything that could be misunderstood or confused in questions of ergodicity and non-linear dynamics.
I already commented on these questions above but apparently it didn’t yet come over.
So I will try to explain it again.
1) Newtonian mechanics has nothing to do with this. Nobody is talking about Newtonian mechanics. It is completely off topic.
The topic is HAMILTONIAN mechanics for finite-dimensional spaces and FIELD THEORY for infinite-dimensional spaces. Why somebody would talk about Newtonian mechanics, which is completely irrelevant here, is beyond me.
2) If the equations of Hamiltonian mechanics are "difficult to understand and to derive" for somebody, then I would suggest that he not post strong opinions on this thread, because posting strong opinions about ergodicity requires Hamiltonian mechanics as a minimum.
3) Ergodicity is not some meaningless fog “invented” by some people playing games. It is a sharply defined property of dynamical systems and actually the purpose of this post was to try to explain how it is defined, why and how it matters. Apparently some have got it and some not – perhaps I am partly to blame that some didn’t get it because I didn’t explain some aspects clearly enough. But I can’t be blamed for not answering relevant questions when they are asked.
4) You consider that statistics and curve fitting are the alpha and omega of all physics. You are of course entitled to hold such an opinion. My point is that it is an extremely limited opinion which wouldn't allow much progress in science.
Luckily there were many “mathematically inclined theoreticians ” who didn’t stop at your opinion and actually tried to understand why things work as they do.
Gibbs, Boltzmann, Maxwell, Einstein, Sinai, Kolmogorov, Hopf, von Neumann are just a few names of those "mathematically inclined" and misguided guys who didn't think that "stochasticity is the real nature of physics", and it enabled them to make huge progress in science and to contribute to ergodic theory among other things.
And be very sure that their insights went well beyond merely wanting to make “Newtonian mechanics” fit with some statistics.
5) Most importantly, if there is a single thing that I would want every reader of this thread to bring home, it is the fact that statistics are a consequence and not a cause of dynamical laws.
Indeed the merit of ergodic theory as well as of non linear dynamics is mainly in the fact that they go much deeper than curve fitting of some statistics over some experimental data.
They bring a consistent and rigorous explanation of why some systems possess an invariant probability distribution of some property and why some others don’t.
Without this insight you will never be able to understand why, for example, turbulence stubbornly refuses to be fitted to some statistical curve.
So yes, statistics are useful in some cases, but not because they are "simple" – rather because they are the right emerging property of the dynamical laws in that particular specific case.
And ergodicity is one of the most important crisply defined dynamical properties that allows us to know when statistics will work and when they won’t.
I know that Tomas is smart and uses an alias to hide his background but I sense that he doesn’t want to depart from the purity of his chosen discipline. I think Pekka, Fred, and I differ in that we each have that practical engineering or problem solving mindset where we will borrow from whatever works.
I remember in junior high where the math teacher had an assignment where the students had to create their own random number generator by using calculator functions. The constraints were obviously that the number had to be between 0 and 1, and I was surprised by how easy it was to do. In retrospect, it would get demolished as a valid pseudorandom generator, but it gave me insight about what a random process is about. Flash to today, and I think about the iterated-map work of Elser and what they can do to pseudorandomly search state spaces, and you get the impression that nonlinearity and chaos are a better stochastic model than we would think.
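The calculator exercise can be mimicked with an iterated map. A minimal sketch (the choice of map is mine, not the original assignment): the logistic map x → 4x(1 − x) keeps its iterates in [0, 1], and for typical seeds they wander chaotically, so a fully deterministic recurrence serves as a crude pseudorandom source, which is exactly the nonlinearity-as-stochastics point made above.

```python
# A deterministic iterated map as a crude pseudorandom source:
# the chaotic logistic map x -> 4 x (1 - x) keeps iterates in [0, 1].
def logistic_stream(seed, n):
    x = seed
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

xs = logistic_stream(0.1234, 10000)
assert all(0.0 <= x <= 1.0 for x in xs)  # never leaves the unit interval
assert len(set(xs[-100:])) > 1           # has not collapsed onto a fixed point
# The invariant density of this map has mean 1/2, so the sample mean sits near 0.5.
mean = sum(xs) / len(xs)
assert 0.4 < mean < 0.6
```

As the comment says, this would fail a serious randomness test battery (successive iterates are strongly correlated), but it shows how chaos can masquerade as noise.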
The key is in the constraints on the system. So spatiotemporal chaos can generate unpredictable patterns. So what? They still have to obey constraints. Does anybody seriously think that some confluence of state space trajectories will spontaneously push or pull gulps of thermal energy any larger than we have observed in the past? And even if they did the energy balance is still there preventing absurd extremes.
My head is currently at reading the open-access article "A Stochastic Energy Budget Model Using Physically Based Red Noise" by Weniger, et al. This is a thing of beauty in my opinion, as it incorporates ideas of energy wells and stochastic resonance with some practical simulation insights. This is way cool stuff, IMO, and something that MattStat could sink his teeth into.
In case you didn’t notice, Curry also admitted to not understanding where you were going with this. All the fake skeptics nodded their heads in agreement because they thought you were saying something profound. Same goes for David Young and Chief. Some of us are not afraid to challenge you all’s POV.
And this is a blog for heaven’s sake. I used the cardboard box metaphor, because obviously you can explain yourself out of a paper bag!
Web said, “And this is a blog for heaven’s sake. I used the cardboard box metaphor, because obviously you can explain yourself out of a paper bag!” I noticed that but it must lose something in the translation.
The biggest point about the chaotic nature of climate is that the impact on the energy balance is not necessarily going to average out in a realistic time frame. The AMO has a stronger impact on temperatures than the PDO, and the AMO is greatly influenced by volcanic activity. So we are not reasonably certain what average is.
I am working on the Siberia thing and looking for some way to find an average to compare with the 1800s to present temperature changes.
http://i122.photobucket.com/albums/o252/captdallas2/SiberiawithAMOandVolcanocomparison.png
Most of the warming is in the northern hemisphere higher latitudes and from what I have so far, temperatures are highly impacted by volcanic activity. I would say that adds an ergodic touch to the problem.
‘In most cases the starting point for the development of a stochastic model is a deterministic one which captures the governing dynamics. The role of stochastic processes is to describe the dynamics of fluctuations and/or uncertainty of model variables and/or subgrid processes, with an emphasis on the word "dynamics": whereas stochastic parameterizations can be used to represent uncertainties of model parameters, initial values and other time-independent variables, SP provide a tool to lay a hand on time- and state-dependent interactions between subgrid fluctuations and model variables. To refer to the early meaning of stochastic as the "art of guessing", it is indispensable to carefully choose the correct SP (stochastic process) for a given model.’ Weniger et al
This is an interesting paper – thanks Webby. It is about stochastic parameterization of unresolved processes in meteorological models. This is of course a far cry from modelling all of climate by heat and CO2 diffusion.
My guess is that he googled stochastic – not a problem – but then is incapable of interpreting these papers in a rational context.
This is yet another example of Webby saying nothing with any meaning at all in the unfortunate process – all too common – of interpreting everything through a narrow world view. An example of a monomania that we are all familiar with through the contributions of 1 or 2 other denizens.
He will interpret this as ‘competitive trash talk’ – but he always seems to be on the verge of making sense. But then you look more closely and everything is irrational. It is very distracting.
Chief is my point guard for setting up a clear out play.
I was experimenting with the OrnsteinUhlenbeck process last year and I blogged the results:
http://theoilconundrum.blogspot.com/2011/11/multiscalevarianceanalysisand.html
Subsequent to that, I came across the Weniger paper, which covers some of the same ground (OU process = red noise). I have no doubt that some more sophisticated models similar to these will be forthcoming.
There may be a tipping point where we will see lots more stochastic climate models. There is just so much data that is becoming available and its just a matter of time before someone reconstructs more interesting stochastic patterns of climate evolution.
What Chief doesn't seem to appreciate are all the basic building blocks that go into a more comprehensive model. I am making progress, but as with most hobbies, I can go at my own pace. So every time he accuses me of monomania, I pull some other novel analysis out of the toolbox. Kind of cool, eh?
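For readers who have not met it, the Ornstein-Uhlenbeck ("red noise") process discussed in this exchange is easy to simulate with an Euler-Maruyama step (the parameter values below are mine, purely illustrative, not those of any analysis mentioned here): the −θx drift term provides the reversion to the mean that distinguishes it from an unbounded random walk.

```python
import random

def ou_path(theta=0.5, sigma=1.0, dt=0.01, n=200000, x0=0.0, seed=42):
    """Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
    dx = -theta * x dt + sigma dW.  The -theta*x drift pulls the state
    back toward zero (reversion to the mean), unlike a plain random
    walk, which drifts without bound."""
    rng = random.Random(seed)
    x = x0
    path = []
    for _ in range(n):
        x += -theta * x * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

xs = ou_path()
# The stationary variance is sigma^2 / (2*theta) = 1.0 here, so the
# path should stay within a few standard deviations of zero.
assert max(abs(x) for x in xs) < 6.0
assert abs(sum(xs) / len(xs)) < 0.5  # the long-run average reverts to the mean
```

Setting theta to zero turns off the restoring drift and recovers an unbounded random walk, which is exactly the contrast drawn in the comments above.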
@ CH
Since Webby has intervened yet again with his jejune troll trickies, I may as well say it:
He’s a perfect example of the Marshall McLuhan (1964) aphorism
” The medium *is* the message”
Most amusing – but I have somewhere else to be.
You create a graph that has peaks and troughs and claim that it resembles the variability of the Vostok data – therefore there is some fundamental capture of the climate fundamentals based on a schematic of a so-called well.
Scary delusional.
That set of ice core-based models is not delusional. Weniger et al. apply the approach of a double potential well to map the two stable temperature points indicated by the ice core data. The steep opposing sides of the potential well are the albedo change at low temperatures and the Stefan-Boltzmann feedback at high temperatures. The shallow hump in the middle which splits the well provides a metastable point in which the climate can wander around, flipping back and forth occasionally.
In the Weniger model, one side of the well is the warm climate and the other side is the cold climate. The red noise generates random walk disturbances in which the climate can escape one of the shallow wells and settle in the other. When they run that model they can reproduce the same spectral density as the data (this drops off as 1/f^2 indicative of red noise).
An unbounded random walk will not revert to the mean, and that is the reason for the Ornstein-Uhlenbeck process. My analysis did not include the double well; I simply demonstrated how the climate could randomly walk within a single shallow well via a red noise disturbance. I am happy to see the Chief finds that amusing.
The Weniger model is very closely related to the stochastic resonance model first used by Benzi for climate change about 30 years ago. The double potential well is the first figure in the heavily cited review article on stochastic resonance by Gammaitoni.
I would suggest that 3300 citations is not delusional.
How this relates to an ergodic process is that the earth has had these large climate shifts that occur every once in a long while. Whether they are triggered by an outside orbital forcing function or other red noise is another question. Clearly we would have to wait a long time to wander around the complete state space of such a shallow well, and the stochastic dynamical model demonstrates the reachability set.
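The double-well picture can be sketched in a few lines of Python (toy parameters of my own choosing, not Weniger et al.'s): overdamped Langevin dynamics in the bistable potential V(x) = x^4/4 - x^2/2, whose minima at x = ±1 stand in for the warm and cold states, with noise occasionally kicking the state over the central hump.

```python
import random

def double_well_path(sigma=0.7, dt=0.01, n=200000, x0=1.0, seed=7):
    """Overdamped Langevin dynamics dx = -V'(x) dt + sigma dW in the
    bistable potential V(x) = x^4/4 - x^2/2 (minima at x = +/-1,
    barrier at x = 0).  Noise occasionally kicks the state over the hump."""
    rng = random.Random(seed)
    x = x0
    path = []
    for _ in range(n):
        drift = x - x ** 3            # -V'(x)
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

xs = double_well_path()
# At this noise level the trajectory hops between the wells many times,
# while the quartic walls keep it bounded.
assert any(x > 0.5 for x in xs) and any(x < -0.5 for x in xs)
assert all(abs(x) < 3.0 for x in xs)
```

Lowering sigma makes the hops exponentially rarer (the Kramers escape rate), which is why the waiting time to explore the whole state space matters for the ergodicity question raised here.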
Bull – the glacials/interglacials are bimodal at the very least, as seen in the cartoon in Figure 4 of Benzi 2011.
The atmosphere itself (or the climate system in a wider view) can be seen as a strongly nonlinear and infinite-dimensional dynamic system interacting via a multitude of different time and space scales of the various flow-characterizing variables. For numerical simulations a model system of the atmosphere or the climate system can treat only a finite number of degrees of freedom (dof). The effect of the truncated dof on the resolved ones can in general not be neglected due to the scale interactions. Therefore it is necessary to parametrize the effects of the unresolved scales on the resolved ones. In principle the stochastic character of the unresolved scales has to be taken into account. http://arxiv.org/pdf/1105.3118v1.pdf
How is that different from stochastic resonance?
‘The concept of stochastic resonance was invented in 1981–82 in the rather exotic context of the evolution of the earth’s climate. It has long been known that the climatic system possesses a very pronounced internal variability. A striking illustration is provided by the last glaciation which reached its peak some 18,000 years ago, leading to mean global temperatures some degrees lower than the present ones and a total ice volume more than twice its present value. Going further back in the past it is realized that glaciations have covered, in an intermittent fashion, much of the Quaternary era. Statistical data analysis shows that the glacial-interglacial transitions that have marked the last 1,000,000 years display an average periodicity of 100,000 years, to which is superimposed a considerable, random-looking variability (see Figure 1). This is intriguing, since the only known time scale in this range is that of the changes in time of the eccentricity of the earth’s orbit around the sun, as a result of the perturbing action of the other bodies of the solar system. This perturbation modifies the total amount of solar energy received by the earth but the magnitude of this astronomical effect is exceedingly small, about 0.1%. The question therefore arises, whether one can identify in the earth-atmosphere-cryosphere system mechanisms capable of enhancing its sensitivity to such small external time-dependent forcings. The search for a response to this question led to the concept of stochastic resonance. Specifically, glaciation cycles are viewed as transitions between glacial and interglacial states that are somehow managing to capture the periodicity of the astronomical signal, even though they are actually made possible by the environmental noise rather than by the signal itself. 
Starting in the late 1980’s the ideas underlying stochastic resonance were taken up, elaborated and applied in a wide range of problems in physical and life sciences.” http://www.scholarpedia.org/article/Stochastic_resonance
One paper is looking to parameterize variables within complex dynamic models – the other is looking for the principles behind the dynamics of complexity. The first is curve fitting with noise, and of perhaps limited success. It is environmental noise, and the noise consists of ice, cloud and CO2 feedbacks, the slow dynamics of the ocean and many other factors. It is all chaotic in the sense of dynamical complexity and all deterministic in principle. Stochastic approaches may be pragmatic, but they are yet to be successful in climate studies.
‘Atmospheric and oceanic forcings are strongest at global equilibrium scales of 10^7 m and seasons to millennia. Fluid mixing and dissipation occur at microscales of 10^-3 m and 10^-3 s, and cloud particulate transformations happen at 10^-6 m or smaller. Observed intrinsic variability is spectrally broad band across all intermediate scales. A full representation for all dynamical degrees of freedom in different quantities and scales is uncomputable even with optimistically foreseeable computer technology. No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned.’ http://www.pnas.org/content/104/21/8709.full
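The stochastic-resonance mechanism described in the quoted passage, where a weak sub-threshold periodic forcing is amplified by noise-enabled hopping between two states, can be illustrated with a toy Langevin model (all parameters are mine, purely illustrative, not fitted to any climate data):

```python
import math
import random

def forced_double_well(amp=0.2, period=200.0, sigma=0.55, dt=0.01,
                       n=200000, seed=11):
    """Double-well Langevin dynamics with a weak periodic tilt:
    dx = (x - x^3 + amp * cos(2*pi*t/period)) dt + sigma dW.
    The forcing alone is sub-threshold (it cannot push the state over
    the barrier); the noise makes the hops possible, and near resonance
    the hops tend to synchronize with the forcing."""
    rng = random.Random(seed)
    x, t = 1.0, 0.0
    path = []
    for _ in range(n):
        drift = x - x ** 3 + amp * math.cos(2.0 * math.pi * t / period)
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        path.append(x)
    return path

xs = forced_double_well()
# Noise-assisted hops carry the state into both wells:
assert any(x > 0.5 for x in xs) and any(x < -0.5 for x in xs)
```

With amp = 0 the hops still happen but are unsynchronized; with sigma = 0 they never happen at all, which is the "made possible by the environmental noise rather than by the signal itself" point in the quote.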
Yes, and that is what the Figure 4 in the Weniger 2011 paper shows. What is your point?
What I find interesting is that when I plotted the frequency of the Vostok data, the bimodality is not very strong at all, as the temperature is just as likely to be found in transitions as in the extremes. This is somewhat obvious just by looking at the historical data:
http://cdiac.ornl.gov/trends/temp/vostok/graphics/tempplot5.gif
That's why I didn't use a double well when I ran my Ornstein-Uhlenbeck simulation. The data did not suggest that it would stay in the extreme modes for long, and it is better described as a random walk that wanders from one rail to the other.
Oh God – tell me more about stochastic resonance. That’s right – you really haven’t got the idea of dynamical complex systems yet.
‘The Weniger model is very closely related to the stochastic resonance model first used by Benzi for climate change about 30 years ago. The double potential well is the first figure in the heavily cited review article on stochastic resonance by Gammaitoni.’
‘What I find interesting is that when I plotted the frequency of the Vostok data, the bimodality is not very strong at all, as the temperature is just as likely to be found in transitions as in the extremes. This is somewhat obvious just by looking at the historical data:
http://cdiac.ornl.gov/trends/temp/vostok/graphics/tempplot5.gif
That's why I didn't use a double well when I ran my Ornstein-Uhlenbeck simulation. The data did not suggest that it would stay in the extreme modes for long, and it is better described as a random walk that wanders from one rail to the other.'
Idiot. There is a tipping point somewhere and you will turn from being an idiot to well – a bufurcated idiot.
That’s a buffercated idiot of course. You can say two incompatible things because you really don’t understand either.
‘temperature found in transitions and not extremes’ You unmitigated, unctuous idiot.
Chief said:
What exactly are the two incompatible things? The related concepts are tipping points, stochastic resonance between wells, and attractors. Tell me that you can conclusively point to one of these as the answer. All I can do is run an analysis to weight the possibilities.
I also attached what I replied with earlier as I screwed up the HTML format
This is my plot to demonstrate the weakness of the bimodality that Chief claims.
http://img19.imageshack.us/img19/6622/vostokrank.gif
Hopefully a few people may understand how to read rank plots. These are useful because they pull out the subtle trends that may not show up as clearly on a regular histogram.
1. If there was a strong bimodality, the rank plot would have the shape of a horizontally stretched S, with most of the rank on the upper and lower parts of the Scurve.
2. If there is a uniform distribution of temperatures within an interval, then the rank plot would just be a linear slope.
As you can see, there is a rather weak bimodality, separated by a weak transition region. However the temperature extreme constraints are very strong, with just a few points exceeding these extremes. What this indicates is just what I said, the temperature is mainly in the transitions and not in the extremes.
The random walk model is thus very reminiscent of an Ornstein-Uhlenbeck process, with a strong reversion to the mean. There may be some structure in the shallow well, suggesting a possible hump, but it is not too strong. The fact that the data was taken over a 400,000-year span with uniform sampling suggests that we have (1) good coverage and (2) sufficient temporal sampling. In other words we have a good reachability set.
You guys can decide on what kind of ergodic properties this data set has, since you are the gatekeepers on deciding what is and is not deemed “ergodic”.
If there is something I am missing with respect to this data, the Vostok FTP site awaits for you to knock yourself out and do some of your own analysis.
WHT @ 12:03 – I can read a rank plot, but I don’t need to in order to understand what you’re saying. I think you’re right, but it’s just Vostok. Now do the tropics and the temperate zones. I also think Chief is right: “Stochastical approaches may be pragmatic but they are yet to be successful in climate studies”. You two wanna go another round?
There are indeed tropical proxy data, based on coral I think. Point me to some archives and I can do a quick rank plot.
In electronics, the common metastable condition is where the measured voltage bangs back and forth between the power supply rails. There is often no bistable characteristic to speak of. What happens is that a bit of AC noise or signal gets the circuit moving in one direction and then the positive feedback keeps it going until it hits the limit of the power supply. Something else then sets it going back in the other direction. The visual is that the potential well is flat-bottomed initially and then it tilts back and forth to invoke the metastability; this produces the typical square-wave-like observations in a measured signal. Some refer to this as “clipping” of the signal. If you ever home-built circuits you know what I am talking about. It is of course designed out of the products before most people run into it, but it is endemic in the prototyping phase.
It is possible that our climate system is like this flat-bottomed (or at least shallow) well. The analogies to power-supply rails are the Stefan-Boltzmann feedback on the high temperature end and albedo changes on the low end. Neither of these rails is abrupt like a power-supply voltage is, so the square-wave clipping is not seen and it is replaced by a “cushioned” rounded entry and exit with respect to the extremes.
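The tilting flat-bottomed well can be caricatured in a few lines: positive feedback plus clipping at the rails, with a slow external tilt flipping the state. All constants here are invented for illustration:

```python
import math
import random

random.seed(0)

rail = 1.0    # "power-supply rail" (illustrative value)
gain = 0.2    # positive-feedback gain per step
v, trace = 0.0, []
for t in range(5000):
    tilt = 0.3 * math.sin(2 * math.pi * t / 1000)     # slow external forcing
    v += gain * v + tilt + 0.05 * random.gauss(0, 1)  # feedback + noise
    v = max(-rail, min(rail, v))                      # clipping at the rails
    trace.append(v)

# The trace looks square-wave-like: most of the time is spent
# pinned near one rail or the other, not in the middle.
frac_at_rails = sum(abs(u) > 0.9 for u in trace) / len(trace)
```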
Maybe the strong clipping is observed in the tropical data, I don’t know and will have to check.
The consensus science that Lacis promotes is that stochastic approaches, such as looking at mean-value energy balances, are the best way to predict climate change. I am just looking at it from assorted odd angles.
Chief apparently doesn’t like it one bit.
I will keep on going on as many rounds as I want. This is like a blowout game and all the regular fans have left the building and gone to the heartland.
Tomas,
I have noticed before that you cannot understand my way of thinking. That doesn’t stop me from keeping my views.
I’m ready to accept that I have written some erroneous comments, as I have not thought all of them through carefully.
My point is really about the relative roles of idealized mathematical presentations and practically realizable empirical settings and measurements. Many mathematically beautiful theories represent idealizations that cannot be realized. Physicists are accustomed to doing the appropriate calculations, and most physicists don’t really care about the purity of the approach or the beauty of the related philosophical considerations; they are happy when they get the right results.
As I have said, I’m a theoretical physicist by education and I have given numerous university-level physics courses, most on quantum mechanics but some on other physics subjects as well. As a young lecturer I was rather fond of formalism, but the years have made me more pragmatic. I am interested in the philosophy of science as well.
I don’t present my views based on ignorance of what you write, but because I believe that formalism may often (or for many people) become a hindrance to real understanding of the nature of physical phenomena. Different people look at physics in different ways, and that’s fully legitimate as long as their views agree on the actual physics that has been confirmed as true.
My choice of words has been based on my attempt to make my messages at least partly understandable to people who have not studied physics extensively. That may indeed lead to problems, as the messages become rather vague and prone to misunderstanding.
Perhaps the strongest reason for my activity has been my impression that overemphasis of formalism has been used on this site many times in a way that’s misleading. Some other approaches have been condemned erroneously, because the proponents of the formal approach don’t really understand what it means in the real world of large and very complex systems. That has happened again in this thread, where the role of stochastics (or of equivalent phenomena in the non-stochastic statistical mechanics of a large complex system) is not understood.
I try to emphasize the really important phenomena, not a formal correctness that makes them difficult to see.
Pekka,
This all started long ago in a thread far, far away. If I recall correctly, I said that there were physical realities at the heart of climate: atmosphere, heliosphere, pedosphere, cryosphere, hydrosphere and biosphere. There are tremendous energies cascading through powerful mechanisms. It is evident that everything is stable until feedbacks result in immense perturbation of the system and we shift abruptly between climate states.
It is not as if either formalism or stochasticity has made much progress with climate matters. The full dynamic suite has stumbled over a famous mathematician’s dilemma – ‘the uncomputability of some algorithms due to excessive size of the calculation’ (James McWilliams, 2007) – even with realistically foreseeable increases in computing power. It falls headlong at the hurdle of sensitive dependence, within the limits of accuracy of measurement of initial conditions, and of structural instability, within the bounds of necessary but non-unique parameterisations of boundary conditions. ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.’ Op. cit.
I have given a simple example of where stochastics fails to answer the most important questions in a dynamic environment. In Australia, we have more than 100 years of data on daily rainfall totals. We can rank these at a point and build a frequency distribution. We can, and do, fit these to a log Pearson type III frequency distribution with a regional skew factor. This allows estimation of 1000-year and rarer rainfall totals. But then we notice that there are 25-year periods of low summer rainfall and 25-year periods of high summer rainfall, with a difference of 4 to 6 times average summer rainfall between regimes. These are flood- and drought-dominated regimes that change abruptly from one state to the other, and the average rainfall is too low for one regime and too high for the other: useless for water resource planning or flood forecasting, because for that we need to know what is happening in the specific regime we happen to be in at the time. So we go back, stratify the totals according to regime, and go through the whole process again, so that we now have two distributions and two averages. Better; however, the regimes are not exactly 25 years long but have periodicities ranging from 20 to 40 years.
The frequency distribution tells us nothing about the causes or the timing of the next regime shift. For that we look to canonical properties of complex dynamical systems: principally, to date, an increase in autocorrelation in the climate signal and noisy bifurcation. Slowing down and dragon-kings, in my favorite terminology. So this is another, and seemingly necessary, approach to real-world climate data.
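The rise in autocorrelation that signals slowing down is easy to demonstrate on a synthetic signal. Here an AR(1) toy series (my own invented parameters, not rainfall data) becomes more persistent halfway through, and the lag-1 autocorrelation picks it up:

```python
import random

random.seed(7)

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a sequence."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# AR(1) signal whose persistence phi jumps from 0.2 to 0.9 at t = 2000,
# mimicking critical slowing down ahead of a regime shift.
series, x = [], 0.0
for t in range(4000):
    phi = 0.2 if t < 2000 else 0.9
    x = phi * x + random.gauss(0, 1)
    series.append(x)

early = lag1_autocorr(series[:2000])   # low persistence segment
late = lag1_autocorr(series[2000:])    # high persistence segment
```

In practice the same statistic would be computed in a sliding window over the proxy record, and a sustained rise flagged as a possible early warning.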
Robert I Ellison
Chief Hydrologist
@Pekka I have noticed before that you [Tomas] cannot understand my way of thinking. That doesn’t stop me from keeping my views.
What we’ve got here is a failure to communicate.
Perhaps some examples that look chaotic in the frequency domain but not the time domain, or vice versa, would help.
Rob,
I agree fully that collecting data, determining related PDF’s and creating a stochastic model that has the correct PDF’s is not likely to produce useful forecasts. Neither do I claim that I have any strong arguments telling that some other known approach would result in models with good predictive capabilities.
What I have tried to emphasize is only that the more stochastic dissipation we have in the system, the better are the chances that useful models with fair predictive power may ultimately be developed. If dissipation is weak, in the sense I use the word “weak”, unpredictable chaotic transfers of state and quasi-periodic oscillations will be essential and will greatly reduce predictability. With stronger dissipation, modeling the Earth system will be easier and gradual progress may ultimately result in models that have significant predictive power on climate, even on a regional scale.
To me some of the statements of Tomas are typical of the myopia of people who emphasize formal mathematics too much and fail to recognize that other approaches may be more powerful. The other approaches are compatible with the formal ones, but differ in power. When formalism cannot say much quantitatively (only what the limiting behavior is), the other approaches can. Similarly David appears (if I have understood him correctly) to generalize knowledge of a certain class of fluid dynamics calculations too far into a region where other approaches are likely to be more powerful. Both have one common property in their failure: when their own pet approach fails to give the answer, they think that no approach can give the answer. (Vaughan Pratt has gone as far as suggesting that a small model could give good answers. I don’t believe that, but problems of differing scale and structure require different methods and allow different simplifications for making the calculations more tractable.)
Having said all that, I don’t trust very far the sufficiency of the tests and arguments that I have learned from the climate modeling community. When different present models agree, they may fail for a common reason. Calculating averages of different models is certainly not a reliable way to get good results, etc. All the huge problems of big computer models remain, but I think that there’s a fair possibility that the basic approach of present models can indeed be developed further towards much better results. The other possibility, that the Earth system has a nature that makes such progress unattainable, is also present, but in my present view it is not as likely, let alone certain, as some others think. The key to answering which view is more correct is in the role of stochasticity and dissipation.
Vaughan Pratt has gone as far as suggesting that a small model could give good answers. I don’t believe that,
Pekka, I’d be very interested in any evidence you have to the contrary.
Or is it just a gut feeling with you?
Vaughan,
I commented on that point in another recent message. It’s a gut feeling, as I cannot even imagine any more valid argument for such an “impossibility theorem”.
My principal argument is in the importance of the complex boundary conditions including the shapes of the continents with their mountain ranges as well as the shapes of the ocean floors. My gut feeling is that the boundary conditions affect the result too much for allowing simple models to succeed. Some parameterizations may agree with data, but extrapolating from that remains doubtful.
I agree with Pekka about the simple models. The problem as I see it is the dynamics that has a huge range of scales. There is some interesting recent work by T. J. R. Hughes on multiscale methods that is at last trying to address the problem. But generally, the simpler the model, the more you are dependent on heuristics that are limited in their range of applicability. That said, there is far too much dependence generally on complex models without understanding the real issues. And simple models can be remarkably accurate within their range of applicability. An example is thin shear layer theory which is actually more accurate for thin boundary layers than much more complex eddy viscosity models. The temptation to run complex models of questionable accuracy and get mesmerized by the complex and colorful graphical results is to be resisted. It seems to me that in climate science, this is the normal mode of operation, you run the model over and over again and observe that the patterns look “similar” every time. That’s a very weak argument.
Tomas Milanovic: 3) Ergodicity is not some meaningless fog “invented” by some people playing games. It is a sharply defined property of dynamical systems and actually the purpose of this post was to try to explain how it is defined, why and how it matters.
It is not a meaningless fog.
Ergodicity was however invented by people studying nature (though not playing games.) Like other concepts, it may be an accurate representation of nature.
I have tried to emphasize that the same physics can often be looked at in several different ways. As has been discussed by many and also confirmed several times by Tomas, the role of ergodicity in statistical mechanics is complex and still a source of scientific disagreement among top level specialists.
Ergodicity was certainly a very essential point for theoreticians who wanted to formulate a consistent theory of the statistical mechanics of a finite closed system based on non-quantum theories, which are fully deterministic. To what extent involving QM changes this is not totally clear, as the answer depends on the interpretation of QM.
While all that is true for fundamental theoretical considerations and a source of much intellectual interest, it is a fact that very many results of statistical mechanics can be obtained more easily and directly through the approach based on stochastics. That can be seen in most textbooks, and the derivations are perfectly valid. I claim that this approach often also allows a much better way of estimating where the results are likely to be accurate for practical purposes, when the approach involving ergodicity etc. cannot easily tell whether the question has been posed in a way where the asymptotic results apply with the required accuracy.
The problem of formal mathematics is often that it tells rigorously what happens at the limit, but is hopelessly powerless in telling how close to the limiting behavior we are. That’s the problem I have met in many guest postings on this site. They bring up theories that are formally correct and claim that they are essential also in practice, but are in reality incapable of telling whether they lead to important additional insight or to totally negligible extra effects. At the same time some other approaches may actually answer this question, but realizing that may be difficult, and accepting it even more difficult, for the proponents of those theoretical approaches.
To what extent involving QM changes this is not totally clear, as the answer depends on the interpretation of QM.
Pekka, wouldn’t that imply the existence of an experiment that could distinguish interpretations? If so the experiment would be well worth carrying out. If not then this thread seems to be diverging away from reality and into foundations of physics.
The differing interpretations lead to differing philosophies (metaphysics) but only in exceptional cases to measurably different results. Statistical physics of macroscopic systems is one of the most unlikely areas of being measurably influenced as the statistics dominates most of the results leaving little role for the influence of differences in the micro level dynamics.
Classical and quantum statistical mechanics give significantly different results when Fermi-Dirac or Bose-Einstein statistics is essential, but that’s true only in some specific cases, like the theory of conductivity and superfluids. Even then all viable interpretations of QM are likely to agree, as all which do not have already been killed (like Einstein’s hidden-variable alternative).
“Neglecting QM uncertainties, I think it’s reasonable to claim that every event in the history of the universe was predetermined at the time of the Big Bang (if that was the starting point). However, I don’t see that as conflicting with our notions of randomness or probability, because probability can be seen as an expression of the state of our knowledge about outcomes. An event may appear capable of ending in multiple possible outcomes (e.g., a coin toss).”
Fred you can’t ignore QM at Big Bang because it is precisely near this point where physics breaks down and quantum gravity is necessary.
And quantum gravity neither conflicts nor supports any notions about statistics or probability.
Yes, probability can be seen by some as an expression of the state of our knowledge, but I strongly recommend against seeing it that way, because it only leads to confusion between knowledge and predictability.
There is no special relationship between predictability and existence of some invariant (unique) probability distribution. All cases exist.
– predictable and no probability distribution (classical determinism)
– predictable and probability distribution (typically QM)
– non predictable and probability distribution (typically ergodic systems like the roulette wheel)
– non predictable and no probability distribution (arguably the hardest case, typically chaotic non ergodic systems)
So you see probabilities have nothing to do with the states of your brain (knowledge). They are rather properties of the dynamics of some systems and they are not properties of some others.
The art is to be able to distinguish between the 4 cases above.
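Case 3 above can be made concrete with a standard textbook example, the logistic map at r = 4 (my choice, not from the thread): two orbits starting a hair apart become useless for point prediction within a few dozen steps, yet both time averages land on the same value, 0.5, fixed by the invariant distribution:

```python
def orbit(x0, n):
    """Iterate the logistic map x -> 4 x (1 - x) and record the orbit."""
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = orbit(0.3, 200_000)
b = orbit(0.3 + 1e-10, 200_000)        # perturbed by one part in 10^10

# Unpredictable: the tiny perturbation is amplified to order one
# within the first hundred iterations.
max_sep_100 = max(abs(p - q) for p, q in zip(a[:100], b[:100]))

# Yet a well-defined probability distribution: both time averages
# approach the space average 0.5 of the invariant density
# 1 / (pi * sqrt(x * (1 - x))).
avg_a = sum(a) / len(a)
avg_b = sum(b) / len(b)
```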
“Probabilities have nothing to do with the states of your brain (knowledge). They are rather properties of the dynamics of some systems and they are not properties of some others”
Tomas – Your first sentence is simply wrong. Your second sentence is largely correct, but doesn’t conflict with the principle that probability is a reflection of the state of our knowledge rather than an inherent property of an event. In the case of statistical mechanics, as an example, the behavior of each molecule of a gas in a fixed volume is deterministic, but because we can’t know how all of the innumerable molecules will behave, our understanding of the macrostate is probabilistic. That is indeed an inherent property of a gas-filled volume, but it is based on the unknowability of each microstate. If they were within our capacity to know, the probabilistic element would vanish. See also my earlier example of a series of coin tosses for an illustration that clearly demonstrates the relationship between probability and individual knowledge in a circumstance where an outcome is deterministic for an omniscient observer but probabilistic for anyone else.
The link to my earlier comment is Comment 169604.
Fred, This fits in well with the idea of prior knowledge and prior probabilities. However accurate the likelihood, if the prior is not so good it will smear the results. I think Christian Beck’s superstatistics is an outgrowth of this and maxent is a way of quantifying our ignorance.
BTW, superstatistics is being applied to fluid mechanics in some recent papers by Beck.
Your first.sentence is simply wrong. Your second sentence is largely correct, but doesn’t conflict with the principle that probability is a reflection of the state of our knowledge rather than an inherent property of an event.
In other words, God does not play dice, it just looks that way to mere mortals.
You’re a real Einstein, you know that, Fred?
That is indeed an inherent property of a gasfilled volume, but it is based on the unknowability of each microstate. If they were within our capacity to know, the probabilistic element would vanish
This is really confused Fred.
And contradictory on top.
In the beginning you say that some invariant probability distribution is inherent to the system, i.e. independent of the states of your brain and indeed of your existence.
Only to say a few words later that this same invariant probability is not inherent, because it is based on the “knowability” of some parameters, i.e. on my brain states and my existence.
But besides being contradictory it is also scientifically wrong.
Just read : http://www.dtic.mil/cgibin/GetTRDoc?AD=ADA446004
And then try to explain how these people manage to compute those “unknowable” microstates.
Actually this paper (and many similar ones) must have magically made all probabilities vanish from fluid dynamics, right?
The reality is that the first half of your statement is right and the second wrong. That’s also why the whole is a contradiction.
Tomas – I think it will be clear to most readers that what I said was neither wrong nor contradictory, and if you review my comments (including the earlier one on coin tosses), you should understand why. The probabilistic nature of the behavior of a gas (including its entropy-seeking) is inherent because our knowledge of the individual microstates is inadequate to specify its behavior on the basis of the deterministic properties those states possess (or may possess, as MattStat pointed out). To an omniscient being, Tomas, there would be nothing probabilistic about a gas-filled volume whose macrostate is determined by deterministically behaving microstates. The inherent property comes from our lack of total knowledge.
I’m surprised you continue to be confused about this, because I thought my examples made it clear, and it appears to be understood by at least some other readers. I encourage you to go back and read all my comments to get a clear picture of what I said, and why it is both correct and internally consistent.
Fred, I understand what you are trying to say, because it echoes what E. T. Jaynes has said via his work (see Probability Theory: The Logic of Science). Yes, I know that mentioning Jaynes will get everyone up in arms because (1) it is just warmed-over Boltzmann/Gibbs and (2) he introduced the idea of subjective probabilities to physics, which is considered a no-no. Yet it often arises that we don’t have access to direct observations of some state space and all we have are the marginals, likelihoods, or priors, and so we make do the best way we can. Not having knowledge of each one of the states, or of the exact governing equations, or of the spread in the parameters are examples of this incompleteness of knowledge.
Fred’s examples give me an opportunity to reexpress the way of looking at things that I have discussed in several messages. My view has certainly been influenced by Quantum Mechanics and its interpretations.
In this approach the interpretation that the molecules in a volume have a specific initial state, whose knowledge would allow predicting the future state exactly, is rejected, not only as unattainable in practice but on a more fundamental level: a state that has not been observed is not well defined even in principle. This is a central principle in Quantum Mechanics, but it’s not impossible to extend it to classical physics. In particular it can be extended in a natural way to a complex system that is continuously in some unpredictable interaction with its surroundings. In this approach the development is always stochastic. It’s not known, or even knowable in principle, what will happen in the micro-level collisions over some future interval, as the molecular trajectories have been influenced directly or indirectly by the external interaction.
The coin toss example is quite different, as the unknowable stochastic influences change the estimated future motion of the coin extremely little. In this case the stochastic phenomena may affect the probability in the tenth or perhaps the twentieth decimal. The coin toss example would become more similar if the air in which the coin moves were filled with other moving particles of mass comparable to the coin, to the extent that the coin would be hit by 1000 other particles between the moment the probability is estimated and the moment it reaches the ground.
It’s natural that the approach I describe above was not satisfactory for 19th century theoretical physicists, but the situation has changed, mostly through Quantum Mechanics, though I think that there are also other developments in the philosophy of science and in the interpretation of science that go in the same direction.
So, what’s your point in bringing MEMS into the discussion? My background is in semiconductor physics, and that also uses macroscopic laws, such as Maxwell’s equations, that obviously scale down to the microscopic. What is the concept of carrier “holes” but an abstraction to indicate our ignorance about counting exactly how many electrons are missing from the Fermi levels? Holes are not only unknowable on the individual level, they aren’t even real entities! Holes have different mobilities and other characteristics than electrons, even though no one has ever seen one.
In the semiconductor world, the transport regimes analogous to the stochastic and chaotic/deterministic talked about here are diffusive and ballistic transport. It is very hard to get to ballistic transport because the carriers thermalize so rapidly, which is the essential dissipation route to diffusive transport. The idea of stochastic resonance is all over the place in semiconductor physics and technology, with carrier population inversions, bandgap engineering, etc. but we don’t talk about it as stochastic resonance because there is already a well established terminology.
On top of that, there is the concept of dispersive transport, which is diffusive transport with huge amounts of disorder in the mobilities and bandgaps. This is the concept that I have done some recent research on because all the cheap photovoltaic technology is built on amorphous or polycrystalline garbage.
On that level, what I think is happening in the climate system is closer to a mix of dispersion and diffusion than determinism.
For example, as I study data relating to wind speed distribution, I am trying to get a handle on what controls the dispersion. This morning I remembered that wind power dissipation goes as the cube of the wind speed, while the distribution whose skew (the third statistical moment) equals its mean is just the distribution that matches the empirical results. This to me is mysterious … is it just that the mean wind speed statistically matches the dissipation due to friction? Could that be some universal scaling result? In terms of maximum entropy it would be the distribution constrained to have a finite mean, with the mean equating to the cube root of the third moment (i.e. mean = standard deviation × skew). This then determines all the fluctuations observed over the larger ensemble. I don’t know; is anybody else but me looking at this from such a fundamental level?
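The proposed constraint mean = standard deviation × skew can be checked mechanically on any sample. As a sanity test (my own example, not from the wind data), an exponential distribution has mean 1, standard deviation 1 and skew 2 in theory, so it does not satisfy the relation; this shows the constraint is a genuine restriction rather than something every skewed distribution obeys:

```python
import random

random.seed(3)

def first_three_moments(sample):
    """Mean, standard deviation and skewness of a sample."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    std = var ** 0.5
    skew = sum((x - mean) ** 3 for x in sample) / (n * std ** 3)
    return mean, std, skew

# Exponential reference sample: mean 1, std 1, skew 2 in theory.
sample = [random.expovariate(1.0) for _ in range(200_000)]
mean, std, skew = first_three_moments(sample)

# The proposed wind-speed relation would require mean == std * skew;
# for the exponential, std * skew is near 2 while the mean is near 1.
```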
@Tomas Fred you can’t ignore QM at Big Bang because it is precisely near this point where physics breaks down and quantum gravity is necessary.
I am fascinated to learn that there exists a concept in physics that is both undefined and necessary. Is there any other such concept?
There can be no question that quantum gravity is undefined. No one has a clue what it is. There is string theory and other ToEs that try to unify the four fundamental interactions. There is loop quantum gravity, which organizes spacetime as spin foam. And there are all the harebrained models of universes a millimeter away from ours that one can read about in Discover magazine etc.
The feeling that you need something that you cannot define sounds more like the sort of mental state you would find in DSM-IV-TR than anything to do with physics, let alone climate.
“Neglecting QM uncertainties, I think it’s reasonable to claim that every event in the history of the universe was predetermined at the time of the Big Bang”
Doesn’t this require the universe is finite with infinite precision?
I have a box on table A. I open the box and count 20 apples inside. I now close the box and take it across the room to table B, where I again open the box and count the apples. There are 20. Now I close the box and take it back to table A. Do I “know” there are 20 apples inside the box? If my answer is “yes” then have I made the ergotic assumption?
Sure, that is as ergotic as it gets, psychedelic and mind-expanding indeed …
Pekka
My origins are similar to yours. I also began in theoretical physics, with a major in QM.
That’s why I mostly agree with what you write even if I find that you stay too often only on the surface without wanting to understand whether the foundations of this or that belief are sound and consistent.
This post was initially a comment in another thread, and Judith must have thought that it could interest more people, so she set it apart.
I wrote it because it was irritating to see in that other thread that some people were referring to ergodicity when they obviously did not know what it was.
It was not irritating because people are ignorant of ergodic theory – after all, statistically there are many more who don’t know it than who do.
But it was irritating because those readers who come here to learn something would leave with completely wrong ideas.
My purpose is not to start a philosophical discussion, as interesting as it may be, about whether physics is just applied mathematics or whether mathematics is just an idealised, unrealistic approximation of physics.
I stick to the topic which is ergodicity for this thread.
And here, you have clearly misinterpreted this issue with statements like “all of physics is statistics” and “ergodicity is just some unrealistic mathematical invention”, and you even sidetracked the discussion by qualifying fundamental issues as “merely semantics”.
The truth is opposite.
The origin of ergodic theory is purely physical – I gave a short list of names of physicists who contributed decisively to what is really important progress in the understanding of physics.
The mathematics came only much later – Boltzmann & Co. didn’t come to ergodicity through the theory of measurable sets; it was not sufficiently developed in their time.
But today the ergodic theory is here to stay and it is a foundational element of all studies of dynamical systems – be it the motion of planets (links above) or predictability of chaotic systems.
The ergodic theorem, which also came much later, is what allows us to know when a time average along an orbit can be taken equal to the phase-space average and, more importantly, when it can NOT be taken equal.
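A standard textbook illustration of exactly this point (not specific to climate) uses circle rotations. An irrational rotation is ergodic, so the time average of a test function equals its space average regardless of the starting point; a rational rotation is not ergodic, and the time average depends on where the orbit starts:

```python
import math

def time_average(alpha, x0, n=100_000):
    """Time average of f(x) = cos(8*pi*x) along the orbit of the
    circle rotation x -> (x + alpha) mod 1, started at x0."""
    x, s = x0, 0.0
    for _ in range(n):
        s += math.cos(8 * math.pi * x)
        x = (x + alpha) % 1.0
    return s / n

# Irrational rotation: ergodic. The time average converges to the
# space average of cos(8*pi*x) over [0, 1), which is 0, for ANY x0.
ta_irr = time_average(math.sqrt(2) - 1, 0.1)

# Rational rotation (alpha = 1/4): the orbit visits only 4 points,
# and f is constant on that orbit, so the time average is
# cos(8*pi*x0) -- it depends on the starting point.
ta_rat_a = time_average(0.25, 0.0)
ta_rat_b = time_average(0.25, 0.1)
```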
On another thread somebody asked what was the point of all this.
Well, very concretely, in the climate case the point is “ensemble averages”.
Among other things, ergodicity would answer whether they are relevant to the dynamics of the system or not.
So you see, all this is physics and I would add very relevant physics.
Of course the mathematics also plays an important role (and I spared the readers sigma-algebras and Banach spaces, and thus sinned against scientific rigor :)), as it should.
Considering, as you too often do, that everything is simple statistics and that nobody actually needs any “confusing mathematics” doesn’t apply to this domain, or to many others.
Tomas,
You are making a caricature of my views. I could not imagine myself claiming that all physics is statistics or anything like that. I do also accept the great power of Hamiltonian approach to a wide set of problems of physics. Neither do I want to downplay the importance of studying the mathematical basics of physical theories.
Where my views seem to differ is on the relative importance of approaches that are not mathematically as precisely understood but have turned out in practice to produce excellent results, in comparison with mathematically better formalized theories that few practicing physicists ever use in practical calculations. Some of these alternative approaches are simplifications that are known not to be strictly correct, but often they are based on a different mathematical approach that might sometimes be formulated precisely although that has not yet been done.
In this particular case the issue is related to the way a sufficient and correctly weighted coverage of the phase space is assured. Ergodic theory is one way of studying that and is based on deterministic equations of motion. Similar results may be obtained in a stochastic theory, which does not follow fully deterministic dynamics, but may give the same results in the limit of weak stochastic disturbances. I have pointed out that many well known results of statistical mechanics can be easily derived in a stochastic theory, and I do believe that much understanding of the validity of certain approaches to study real world systems is more easily obtained through the consideration of stochastics.
Very vaguely, the success of statistical mechanics and other statistical methods is based on the law of large numbers. This basis is the reason why different approaches often produce identical results. Applying any precisely defined physical theory always means that the actual problem is idealized. There may be assumptions of total isolation, ideal experimental devices etc. Calculating the realistic case fully would be extremely cumbersome, and physicists have learned how they can take advantage of idealization of the setup.
What I was proposing in one of my messages is that an idealization based on a stochastic formulation of the physical setup might also be fundamentally as correct as the idealization the great physicists of the 19th century introduced based on the deterministic view of physics that was accepted until QM made that issue more controversial. A theory based on stochastics might be as correct and at the same time more closely related to the issues that physicists applying the theory to practical problems meet in trying to find out the limits of applicability of their methods. The reason to believe that a theory based on stochastics may be fundamentally as justified is in the great practical success of that approach.
Another point is related to the nature of real stochasticity in our surroundings. I really cannot understand arguments that claim that it would not be very important and essential for understanding the Earth system. Stochastic phenomena and the dissipation connected to them are essential. They are among other things the reason for the growth of entropy in all irreversible processes. New free energy continuously enters the Earth system and must be removed through an equal amount of dissipative processes, which can most easily be understood through stochasticity.
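The claim that stochastic dynamics reproduce standard statistical-mechanics results can be illustrated with a toy example (a sketch of my own, not code from the thread): an overdamped Langevin particle in a harmonic potential relaxes to the Boltzmann distribution, whose variance equals the temperature (in units where kB = 1).

```python
import random

def langevin_samples(steps=200_000, dt=0.01, temp=1.0, seed=1):
    """Overdamped Langevin dynamics in the potential V(x) = x^2/2:
    dx = -x dt + sqrt(2*temp) dW  (Euler-Maruyama discretization).
    The stationary density is the Boltzmann weight exp(-x^2/(2*temp))."""
    rng = random.Random(seed)
    x, out = 0.0, []
    kick = (2.0 * temp * dt) ** 0.5
    for _ in range(steps):
        x += -x * dt + kick * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

xs = langevin_samples()[10_000:]          # drop the initial transient
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)   # mean ≈ 0 and variance ≈ temp, as Boltzmann predicts
```

No ergodicity argument about an underlying deterministic flow is invoked here; the noise itself guarantees the phase space is explored with the right weights, which is exactly the point being made above.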
Among others ergodicity would give the answer whether they are relevant to the dynamics of the system or not
How would the answer to that question bear on the questions of whether AGW exists and if so whether it poses a serious problem?
You are making a caricature of my views. I could not imagine myself claiming that all physics is statistics or anything like that
Pekka there is no caricature.
Just go a little higher:
I would go as far as to claim that stochasticity is the real nature of physics that’s analyzed by statistical mechanics. Ergodicity is an additional problem invented by mathematically inclined theoreticians who try to make deterministic Newtonian mechanics consistent with this true nature.
This is as close to saying that all physics is statistics as you can get. Not to mention Newton, who has nothing to do with the issue.
But as I said, you are entitled to your opinions like everybody, I have no problem with that. Just stay consistent.
And to at last get a real (i.e. not confused) idea about ergodicity, I have specially found this for you: http://www.physicstoday.org/resource/1/phtoad/v26/i2/p23_s1?isAuthorized=no
Really read it to get an idea.
Tomas,
I formulated that sentence badly. I see now that it can be read in a different way and applying to a wider set of problems than what I had in my mind. My latest message should give a better view of what I wanted to say.
Tomas Milanovic  February 16, 2012 at 7:54 am  wrote:
“5) Most importantly, if there is a single thing that I would want every reader of this thread to bring home, it is the fact that statistics are a consequence and not a cause of dynamical laws.
Indeed the merit of ergodic theory, as well as of non-linear dynamics, is mainly in the fact that they go much deeper than curve fitting of some statistics over some experimental data.
They bring a consistent and rigorous explanation of why some systems possess an invariant probability distribution of some property and why some others don’t.
Without this insight you will never be able to understand why, f.ex., turbulence stubbornly refuses to be fitted to some statistical curve.
So yes, statistics are useful in some cases, not because they are “simple” but because they are the right emerging property of the dynamical laws in this particular specific case.
And ergodicity is one of the most important crisply defined dynamical properties that allows us to know when statistics will work and when they won’t.”
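A concrete instance of the quoted point (an invariant probability distribution emerging from a dynamical law, not fitted to it) is the fully chaotic logistic map x -> 4x(1-x), whose invariant density is known in closed form: 1/(pi*sqrt(x(1-x))). A short Python check, illustrative only:

```python
def logistic_time_fraction(x0=0.3, n=500_000, cut=0.25):
    """Fraction of time an orbit of x -> 4x(1-x) spends in [0, cut]."""
    x, hits = x0, 0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        if x <= cut:
            hits += 1
    return hits / n

# The invariant density 1/(pi*sqrt(x(1-x))) has CDF (2/pi)*asin(sqrt(x)),
# so the predicted fraction of time in [0, 0.25] is (2/pi)*asin(0.5) = 1/3.
print(logistic_time_fraction())   # ≈ 1/3
```

The orbit itself is erratic and unpredictable, yet its statistics are pinned down exactly by the dynamics; that is the “statistics as consequence, not cause” message in miniature.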
Some informally structured comments:
This is useful, but I’m going to suggest that we be careful here. The hinges connecting reality with abstraction are T (an area of intense climate controversy) & “suppose”. Mathematics offers an infinite world. We can get lost permanently on any branch of it. It’s fine to “go with” the hinges for abstract purposes, but it’s not necessarily ethical to do so for application (despite prevailing academic cultural norms).

Another concern: Unfortunately, with Tomas’ shift of focus to ergodicity, many readers appear to have forgotten about the fundamental differences between temporal chaos & spatiotemporal chaos. Above there was one excellent suggestion (I think it was MattStat’s) that Tomas bring examples to the table next round. There’s also some mud that needs clarifying. For example, does Tomas think anomalies (from local annual cycles) are ok for the Tsonis approach? If so, we have some quite serious discussions to have about what EOP (Earth Orientation Parameters, not to be confused with earth orbital parameters) are telling us.

Tomas’ greatest contribution to the online climate discussion so far has been in drawing attention to the fundamental differences between temporal chaos & spatiotemporal chaos, but I’m looking at this as one step on a stepping-stone route to better conceptualization for online climate discussion participants. It’s going to take years, probably decades, for all of this cross-disciplinary communication to shake down into something that doesn’t rattle many readers’ nerves. Patience is a virtue.

Closing Caution: Climate conception MUST be consistent with the EOP record.
From the papers I have seen, Tsonis appears to lean quite heavily on statistical correlations to understand fluctuations. I recognize that approach because I was experimenting with a kind of multiscale variance characterization without being aware of his work.
People aren’t lifting a finger to understand the connection between the Tsonis stats and these:
1. http://wattsupwiththat.files.wordpress.com/2011/10/vaughn4.png
2. http://wattsupwiththat.files.wordpress.com/2011/12/image10.png
There’s this goofy notion that persists that it’s all just chaos. Such misconception is rooted in deep ignorance of raw fundamentals. I’m particularly disappointed in Dr. Curry for not yet publicly acknowledging the seminal importance of LeMouel, Blanter, Shnirman, & Courtillot (2010).
Before the internet era it was not uncommon for seminal papers to go unrecognized for decades, but perhaps now delays in collective human cognition can be challenged and shaved down in duration by orders of magnitude.
Background material for those willing to lift a finger:
Sidorenkov, N.S. (2005). Physics of the Earth’s rotation instabilities. Astronomical and Astrophysical Transactions 24(5), 425-439.
http://images.astronet.ru/pubd/2008/09/28/0001230882/425439.pdf
Gross, R.S. (2007). Earth rotation variations – long period. In: Herring, T.A. (ed.), Treatise on Geophysics vol. 11 (Physical Geodesy), Elsevier, Amsterdam, in press, 2007.
http://geodesy.eng.ohiostate.edu/course/refpapers/Gross_Geodesy_LpER07.pdf
http://geodesy.geology.ohiostate.edu/course/refpapers/Gross_Geodesy_LpER07.pdf
Zhou, YH; Yan, XH; Ding, XL; Liao, XH; Zheng, DW; Liu, WT; Pan, JY; Fang, MQ; & He, MX (2004). Excitation of non-atmospheric polar motion by the migration of the Pacific Warm Pool. Journal of Geodesy 78, 109-113.
http://202.127.29.4/yhzhou/ZhouYH_2004JG_PM_Warmpool.pdf
Best Regards.
Paul, EOP is of use in understanding the seasonal changes in the recent past, which exhibit intermittent behaviour and interference (with mode locking and annihilation; Jin 1995, Ghil 2008), e.g. Vecchio 2010
http://www.atmoschemphys.net/10/9657/2010/acp1096572010.html
Good find maksimovich (February 18, 2012 at 3:37 pm).
Vecchio, A.; Capparelli, V.; & Carbone, V. (2010). The complex dynamics of the seasonal component of USA’s surface temperature. Atmospheric Chemistry and Physics 10, 9657-9665. doi:10.5194/acp-10-9657-2010.
http://www.atmoschemphys.net/10/9657/2010/acp1096572010.pdf
“While changes in amplitude are commonly interpreted as caused by stochastic climate fluctuations […], the phase of the seasonal cycle, namely the synchronization of each season during time, is yet poorly investigated.”
That’s part of what I was getting at when I referred to anomaly-think “mud” needing clarification (in the Tsonis framework) below.
“Previous studies about the phase of the global seasonality have underlined the presence of a phase shift. Thompson showed that, after 1940, the phase behavior started to change at an unprecedented rate with respect to the past 300 years (Thomson, 1995).”
“The anomalous phase shift recorded after 1940 has been attributed to the increasing presence of CO2 in atmosphere (Thomson, 1995; Stine et al., 2009). The phase shift is not predicted by the current Intergovernmental Panel on Climate Change models (Stine et al., 2009), thus representing a yet rather obscure physical effect of the climate system.”
Yikes. Suggestion for these researchers: Study EOP FAR more carefully and pursue the exercise I’ve outlined below [ http://judithcurry.com/2012/02/15/ergodicity/#comment171311 ], which points to the answer to a question I asked the community last fall [ http://wattsupwiththat.files.wordpress.com/2011/10/vaughnsunearthmoonharmoniesbeatsbiases.pdf ] (and answered independently shortly thereafter – details forthcoming several months from now).
“Quite surprisingly, we found that the local intermittent dynamics is modulated by a periodic component of about 18.6 yr due to the nutation of the Earth, which represents the main modulation of the Earth’s precession.”
Hardly surprising, considering its crystal clarity in LOD’ [ http://wattsupwiththat.files.wordpress.com/2011/04/vaughn_lod2_fig1.png ].
I’m going to look into this article in more detail. It’s refreshing to see phase-awareness — sentient life at least lucidly acknowledging the dramatically-simplifying real-world utility of complex numbers – (Chief Hydrologist take note). Thanks again maksimovich.
Paul, the problem is that, as inversions in the temperature anomalies can occur at different temporal scales, the mechanisms are either poorly understood or the models are using too small a time series, e.g. Le Quere with southern winds.
Carvalho 2007 suggested that anti-persistence is prevalent in the time record, with implications for global metrics.
In this study, low-frequency variations in temperature anomaly are investigated by mapping temperature anomaly records onto random walks. We show evidence that global overturns in trends of temperature anomalies occur on decadal timescales as part of the natural variability of the climate system. Paleoclimatic summer records in Europe and New Zealand provide further support for these findings as they indicate that anti-persistence of temperature anomalies on decadal timescales has occurred in the last 226 yrs. Atmospheric processes in the subtropics and midlatitudes of the SH and interactions with the Southern Oceans seem to play an important role to moderate global variations of temperature on decadal timescales.
http://www.nonlinprocessesgeophys.net/14/723/2007/
maksimovich  February 18, 2012 at 3:37 pm  wrote:
“Jin 1995 Ghil 2008”
I suspect you’re referring to articles listed here:
http://www.atmos.ucla.edu/tcd/MG/BiblioAug%2708.htm
Can you please clarify exactly which articles? If so, thank you.
@maksimovich  February 18, 2012 at 7:08 pm 
I regularly reference it:
Carvalho, L.M.V.; Tsonis, A.A.; Jones, C.; Rocha, H.R.; & Polito, P.S. (2007). Anti-persistence in the global temperature anomaly field. Nonlinear Processes in Geophysics 14, 723-733.
http://www.icess.ucsb.edu/gem/papers/npg147232007.pdf
And now I’m pointing further:
http://judithcurry.com/2012/02/15/ergodicity/#comment171311
Can you clarify exactly what you’re referencing when you write “Le Quere with southern winds”?
Can you please clarify exactly which articles?
Ghil 2008 explains the problem nicely (this was an invited paper to the Euler 250 conference): Climate Dynamics and Fluid Mechanics: Natural Variability and Related Uncertainties, 2008.
Parts of interest are
We have shown, for a stochastically perturbed Arnol’d family of circle maps, that noise can enhance model robustness. More precisely, this circle map family exhibits structurally stable, as well as structurally unstable, behavior. When noise is added, the entire family exhibits stochastic structural stability, based on the stochastic-conjugacy concept, even in those regions of parameter space where deterministic structural instability occurs for vanishing noise.
Clearly the hope that noise can smooth the very highly structured pattern of distinct behavior types for climate models, across the full hierarchy, has to be tempered by a number of caveats. First, serious questions remain at the fundamental, mathematical level about the behavior of non-hyperbolic chaotic attractors in the presence of noise. Likewise, the case of driving by non-ergodic noise is being actively studied.
Second, the presence of certain manifestations of a Devil’s staircase has been documented across the full hierarchy of ENSO models, as well as in certain observations. Interestingly, both GCMs and observations only exhibit a few, broad steps of the staircase, such as 4 : 1 = 4 yr, 4 : 2 = 2 yr, and 4 : 3 ≈ 16 months.
and theorem 3b
Before applying this result, let us explain heuristically how a Devil’s staircase step that corresponds to a rational rotation number can be “destroyed” by a sufficiently intense noise. Consider the period-1 locked state in the deterministic setting. At the beginning of this step, a pair of fixed points is created, one stable and the other unstable. As the bifurcation parameter is increased, these two points move away from each other, until they are π radians apart. Increasing the parameter further causes the fixed points to continue moving along, until they finally meet again and are annihilated in a saddle-node bifurcation, thus signaling the end of the locking interval.
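The mode-locking steps described in the quoted passage can be reproduced in a few lines of Python for the sine circle map (a standard form of the Arnol’d family; this sketch is mine, not Ghil’s code). The rotation number sits on a plateau inside a locking tongue, climbs between tongues, and added noise blurs the step:

```python
import math, random

def rotation_number(omega, K, n=20_000, theta0=0.2, noise=0.0, seed=0):
    """Rotation number of the (optionally noisy) sine circle map
    theta -> theta + omega - (K/(2*pi)) * sin(2*pi*theta), unwrapped."""
    rng = random.Random(seed)
    theta = theta0
    for _ in range(n):
        theta += omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        if noise:
            theta += rng.gauss(0.0, noise)
    return (theta - theta0) / n

K = 0.9
# Inside the rho = 0 tongue (|omega| <= K/(2*pi) ≈ 0.143) the map locks
# onto a fixed point: one flat step of the Devil's staircase.
for om in (0.05, 0.10, 0.14):
    print(om, rotation_number(om, K))     # all ≈ 0
print(0.5, rotation_number(0.5, K))       # locked at 1/2 by symmetry
# Between tongues the rotation number climbs off the step:
print(0.25, rotation_number(0.25, K))
# Noise perturbs the locked state, as in the quoted discussion:
print(0.14, rotation_number(0.14, K, noise=0.05))
```

The saddle-node bifurcation at the edge of the tongue is exactly where the fixed-point equation omega = (K/(2*pi)) sin(2*pi*theta) loses its solution.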
@maksimovich (February 18, 2012 at 3:37 pm)
From your Vecchio, Capparelli, & Carbone (2010) lead, tracked this down:
Stine, A.R.; Huybers, P.; & Fung, I.Y. (2009). Changes in the phase of the annual cycle of surface temperature. Nature 457, 435-441. doi:10.1038/nature07675.
http://www.seas.harvard.edu/climate/seminars/pdfs/Stine_2009.pdf
They’re looking at an interesting problem, but they’re way off-track. I think some readers here will find their reference to IPCC model failings interesting, even if those readers have little or no interest in actually lifting a finger to help better understand & appreciate natural variability.
maksimovich (February 18, 2012 at 8:07 pm)
drew attention to:
Ghil, M.; Chekroun, M.D.; & Simonnet, E. (2008). Climate dynamics and fluid mechanics: natural variability and related uncertainties. Physica D.
http://www.atmos.ucla.edu/tcd/PREPRINTS/EE250_GCSFinal_bis.pdf
http://tel.archivesouvertes.fr/docs/00/18/49/46/PDF/GCS_Final.pdf
While I find few of their abstract modeling musings enlightening, I do share this very specific concern with them:
“The influence of strong thermal fronts – like the Gulf Stream in the North Atlantic or the Kuroshio in the North Pacific – on the midlatitude atmosphere above is severely underestimated.”
Exactly – look at what stands out flashing in climatology animations:
Net Surface Heat Flux:
http://oi54.tinypic.com/334teyt.jpg
200hPa Wind:
http://i52.tinypic.com/zoamog.png
200hPa Wind — Polar View:
http://i52.tinypic.com/cuqyt.png
Google “face gear” & “crown gear” for image (not web) results. What we have at the surface [ http://upload.wikimedia.org/wikipedia/commons/6/67/Ocean_currents_1943_%28borderless%293.png ] are ~5.5 such gears – FLUID ones that shapeshift & move with the seasons.
EOP indicate that they are COLLECTIVELY CONSTRAINED GLOBALLY at the timescale of the Schwabe solar cycle.
(Aside: Think carefully about the consequent implications for potentially naively paradoxical interpretation of spatiotemporally aggregated statistical summaries, such as can be easily demonstrated in simpler analogous settings, for example under simple constraints of the form a+b+c=1. One can change the sign of relations by simply varying the aggregation criteria.)
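The sign-flip-by-aggregation warning in that aside is easy to demonstrate. A minimal Python example (synthetic numbers of my own choosing, not climate data): two groups, each showing a perfectly positive x-y relation, whose pooled correlation is strongly negative, i.e. Simpson’s paradox.

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two groups, each with a perfectly positive x-y relation...
g1 = ([1, 2, 3], [8, 9, 10])
g2 = ([7, 8, 9], [1, 2, 3])
print(pearson(*g1), pearson(*g2))          # both +1.0

# ...but pooling them flips the sign of the relation:
pooled_x = g1[0] + g2[0]
pooled_y = g1[1] + g2[1]
print(pearson(pooled_x, pooled_y))         # strongly negative
```

Vary the grouping criterion and the sign of the “relation” changes, which is precisely why spatiotemporally aggregated statistical summaries need careful diagnostic inspection.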
Interpreting the stats that show this apparently requires a very specific combination of skills possessed by almost no one (?) involved in climate research & discussion, so far as I’ve been able to tell (so far at least). I would possibly be eternally appreciative if Dr. Curry could eventually put me in contact with some sensible researchers who are actually able to see this clearly, with ease. If I had time & resources, I could write this up at a level accessible to brighter high school students, but it would be a monstrously time-consuming task to simplify that much. I would need absolutely guaranteed financial shelter for at least 3 (probably 5) years before I would even consider attempting it. The formidable communication challenge is extreme background variability in an extremely diverse audience – i.e. one cannot assume uniformity of knowledge (a combo of applied aggregation & hierarchy theory, complex numbers, wavelets, advanced correlation & regression diagnostics in a complex wavelet setting, EOP, etc.).
With increasingly superior info in hand from more insightful data exploration, it will be quite interesting to see what magic will be worked by characters like Ghil & Milanovic.
Max and Paul, thank you thank you for the Ghil ref, it is an important one, and one that I will integrate into my next modeling article.
Help, how does Ghil’s non-hyperbolic chaos relate to spatiotemporal chaos?
Dr. Curry, I may not have this at the correct level, but hyperbolic chaos typically has bifurcations which are deterministic, so their patterns can be resolved on shorter time scales. A Spirograph would be a fair analogy. In non-hyperbolic chaos the bifurcations are muted or drift, making them inconsistent and non-deterministic on shorter time scales. A three-year-old with a crayon.
Tomas can correct me, but hyperbolic chaos is basically ergodic and non-hyperbolic more likely non-ergodic, adding layers of complexity. Different effects in the system can be either.
When I was playing with my simple radiant versus all-flux model, radiant is hyperbolic, peaking at some ideal value, and conductive/convective/latent are relatively linear, which tends to offset the radiant peak.
http://redneckphysics.blogspot.com/2012/02/comparingperfection.html
That is pretty crude, but it is a fair approximation of the hyperbolic radiant peak at 50% emissivity. Volcanic would be non-hyperbolic, adding the pseudo to the pseudo-cycles; solar would be hyperbolic. Since the ocean/atmosphere circulations are related to both, they would have non-hyperbolic tendencies. That is why I believe the climate is non-ergodic, which Tomas will likely correct me for saying shortly :)
Not in any unique or well defined way. Hyperbolicity and hyperbolic chaos have been studied extensively for systems with a small number of variables, while the whole idea of spatiotemporal chaos is to look at systems with very many, and possibly a continuous set of, variables.
As an example, this book
http://www.springer.com/physics/statistical+physics+%26+dynamical+systems/book/9783642236655
has a part on higher-dimensional systems, which considers four oscillators as a case of higher-dimensionality. That’s very far from the spatiotemporal chaos of a system formed by continuous fields in three-dimensional space.
Thanks Pekka
I could still add, in agreement with capt. dallas, that hyperbolicity is a restriction that’s satisfied by certain relatively simple systems, but almost certainly impossible for spatiotemporal chaos. Many much simpler systems are, however, also non-hyperbolic.
Being hyperbolic is a strong restriction, but non-hyperbolicity by itself doesn’t say very much, as systems can be non-hyperbolic in very different ways (as unhappy marriages are all unhappy in their own way, according to Tolstoi).
Pekka said, “Being hyperbolic is a strong restriction, but non-hyperbolicity by itself doesn’t say very much, as systems can be non-hyperbolic in very different ways (as unhappy marriages are all unhappy in their own way, according to Tolstoi)”
Very true, I guess that is why they call it chaos math :) Even non-ergodic systems have some higher probability regions on various time scales. Selvam has quite a few systems that she considers non-ergodic.
I think a reasonable approach though is to pick out the hyperbolic signatures one at a time to at least establish limits on what we know, so we can better understand what we don’t know. Baby steps, but it seems like some progress can be made.
I chuckle at much of the discussion of chaos and nonlinearity when it comes to trying to understand various natural phenomena. The classic case is the simplest model of growth, described by the logistic differential equation. This is a nonlinear equation with a solution described by the so-called logistic function. Huge amounts of work have gone into modeling growth using the logistic equation because of the appearance of an S-shaped curve in some empirical observations. (When it is a logistic difference equation, chaotic solutions result, but we will ignore that for this discussion.)
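For readers who want the baseline being discussed: the logistic differential equation dN/dt = rN(1 - N/K) has the closed-form logistic-function solution N(t) = K / (1 + ((K - N0)/N0) e^(-rt)), which a direct numerical integration confirms. A quick Python check (parameter values are arbitrary, chosen only for illustration):

```python
import math

def logistic_exact(t, r=1.0, K=1.0, n0=0.01):
    """Closed-form solution of dN/dt = r*N*(1 - N/K)."""
    return K / (1 + (K - n0) / n0 * math.exp(-r * t))

def logistic_numeric(t_end, r=1.0, K=1.0, n0=0.01, dt=1e-3):
    """RK4 integration of the same logistic ODE."""
    f = lambda n: r * n * (1 - n / K)
    n, t = n0, 0.0
    while t < t_end - 1e-12:
        k1 = f(n)
        k2 = f(n + 0.5 * dt * k1)
        k3 = f(n + 0.5 * dt * k2)
        k4 = f(n + dt * k3)
        n += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return n

# Both give the same point on the S-curve, near saturation by t = 10.
print(logistic_exact(10.0), logistic_numeric(10.0))
```

The stochastic derivation mentioned below is not reproduced here; this block only pins down the deterministic S-curve that both routes arrive at.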
Alas, there are trivial ways of deriving the same logistic function without having to assume nonlinearity or chaos; instead one only has to assume disorder in the growth parameters and in the growth region. The derivation takes a few lines of math.
Once one considers this picture, the logistic function arguably has a more pragmatic foundation based on stochastics than on nonlinear determinism.
That is the essential problem with invoking chaos: it precludes (or at least masks) consideration of the much more mundane characteristics of the system. The mundane is that all natural behaviors are smeared out by differences in material properties/characteristics, variation in geometrical considerations, and thermalization contributing to entropy.
The issue is that obsessives such as the Chief and others think that chaos is the hammer and that they can apply it to every problem that appears to look like a nail.
Certainly, I can easily understand how the disorder in a large system can occasionally trigger tipping points or lead to stochastic resonances, but these are not revealed by analysis of any governing chaotic equations. They simply result from the disorder allowing behaviors to penetrate a wider volume of the state space. When these tickle the right positive feedback modes of the system, then we can observe some of the larger fluctuations. The end result is that the decadal oscillations are on the order of tenths of a degree in global average temperature.
Of course I am not wedded to this thesis, just that it is a pragmatic result of stochastic and uncertainty considerations that I and a number of other people (such as Curry) are interested in.
Tomas Milanovic  February 15, 2012 at 9:20 am 
“For instance if a system’s dynamics show that there EXIST invariant subsets in the phase space then this dynamics is NOT ergodic regardless how hard one would wish that they were.”
Suppose key factors are (perhaps due to malicious deception &/or naive ignorance) absent from T. Then we have potential for paradox (via abstractly hidden mixing across real structured gradients). Careful data exploration has the power to reveal such abstractly hidden existence. Lay readers: This is an Achilles Heel, a warning to not accept T without carefully inspecting it diagnostically (using observational data).

EOP indicate global constraints on climate. Why are these constraints not front & center in climate modeling diagnostics? People have trouble recognizing Simpson’s Paradox. Often they don’t even remember to consider the possibility of its existence. And in a multivariate context like climate, who is even capable of imagining, let alone detecting, the many possible forms of multivariate statistical paradox?

A recently discovered (2010) example of previously unrecognized mixing across key gradients is the existence of a clear solar signal in the terrestrial westerly winds. It is camouflaged by interannual variability in a manner that COMPLETELY fools traditional data analysis methods. This APPEARS to remain controversial (but isn’t) since so few possess the functional numeracy necessary for understanding the nature of the pattern masking. (It’s actually dead simple for anyone who truly understands it.)

I will have more to say about this several months from now, but I mention it here informally to caution those (the vast majority here, I estimate) who seem to NOT be making the connection between T and the endless controversy we see daily in the climate discussion. Regards.
Dear Tomas Milanovic
I applaud you for sticking your head above the parapet. It is very difficult to state an abstract concept on a blog.
I have to say that I have always had difficulty with ergodicity – while understanding the mathematical formalism, I’ve never really understood its implications, and I suspect people who say “in a deterministic, stationary, ergodic system… etc.” don’t understand either.
Thank you for putting up this post. I enjoyed reading it; I understand the difficulties of presenting this concept, but you have done well.
Bravo!
‘First we construct a network from four major climate indices. The network approach to complex systems is a rapidly developing methodology, which has proven to be useful in analyzing such systems’ behavior [Albert and Barabasi, 2002; Strogatz, 2001]. In this approach, a complex system is presented as a set of connected nodes. The collective behavior of all the nodes and links (the topology of the network) describes the dynamics of the system and offers new ways to investigate its properties. The indices represent the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO), the El Nino/Southern Oscillation (ENSO), and the North Pacific Oscillation (NPO) [Barnston and Livezey, 1987; Hurrell, 1995; Mantua et al., 1997; Trenberth and Hurrell, 1994]. These indices represent regional but dominant modes of climate variability, with time scales ranging from months to decades. NAO and NPO are the leading modes of surface pressure variability in northern Atlantic and Pacific Oceans, respectively, the PDO is the leading mode of SST variability in the northern Pacific and ENSO is a major signal in the tropics. Together these four modes capture the essence of climate variability in the northern hemisphere. Each of these modes involves different mechanisms over different geographical regions. Thus, we treat them as nonlinear subsystems of the grand climate system exhibiting complex dynamics.’ Tsonis 2007 – A new dynamical mechanism for major climate shifts
Webby’s contention that Tsonis uses statistical correlation to examine fluctuations is trivially true but misses the point. Correlation is calculated in a sliding window between all possible pairs of nodes and used to calculate the distance between nodes.
‘The distance can be thought as the average correlation between all possible pairs of nodes and is interpreted as a measure of the synchronization of the network’s components. Synchronization between nonlinear (chaotic) oscillators occurs when their corresponding signals converge to a common, albeit irregular, signal. In this case, the signals are identical and their crosscorrelation is maximized. Thus, a distance of zero corresponds to a complete synchronization and a distance of (square root) 2 signifies a set of uncorrelated nodes.’
Thus we have a quantitative analysis of climate shifts, observed in indices that are not smeared by dissipation. The ripples on a pond are not comparable to the ‘Great Pacific Climate Shift’ of 1976/77.
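The distance measure quoted above can be sketched in a few lines of Python, using d = sqrt(2(1 - rho)), a common choice in the synchronization literature that matches the quoted endpoints (0 for complete synchronization, sqrt(2) for uncorrelated nodes). This is my own toy reconstruction with synthetic series standing in for the four climate indices, not Tsonis’ code:

```python
import math, random

def sync_distance(series, window_start, window_len):
    """Mean pairwise distance sqrt(2*(1 - rho_ij)) over all node pairs
    inside one sliding window (Tsonis-style synchronization measure)."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / math.sqrt(va * vb)

    chunks = [s[window_start:window_start + window_len] for s in series]
    dists = []
    for i in range(len(chunks)):
        for j in range(i + 1, len(chunks)):
            rho = pearson(chunks[i], chunks[j])
            dists.append(math.sqrt(2.0 * (1.0 - rho)))
    return sum(dists) / len(dists)

# Four synthetic "index" series: a shared signal plus independent noise.
rng = random.Random(0)
t = range(240)
shared = [math.sin(0.1 * k) for k in t]
series = [[s + 0.2 * rng.gauss(0, 1) for s in shared] for _ in range(4)]
print(sync_distance(series, 0, 120))      # well below sqrt(2): synchronized

noise_only = [[rng.gauss(0, 1) for _ in t] for _ in range(4)]
print(sync_distance(noise_only, 0, 120))  # near sqrt(2): unsynchronized
```

Sliding the window along real index series and watching this distance drop toward zero is, in essence, how the synchronization episodes are identified.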
To the great Chief, it’s trivial when someone else goes and gets their hands dirty, but we have to bow in honor when he links to what he considers an authoritative source.
I find one thing surprising in the Tsonis approach for climate shift in the paper that Chief links when one compares it to some of his other papers. In this one, which is supposedly breakthrough (?), he doesn’t do the control study assuming that the movements are just a random walk, whereas in the other papers he does do the random walk simulations. What I typically find in doing correlations between random movements is that the large rare correlations can occur as spurious events — there aren’t many of them, so when they do occur they tend to stick out, and so it adds the element of random-chance confirmation bias. In other words, the correlation numbers he generates are not any more convincing than if one pointed at anecdotal coincidences. He is doing the correlations between only 4 (!) networked nodes for cripes sake!
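The random-walk control described above is easy to run. A short Python sketch (my own, with arbitrary lengths and sample sizes): independent random walks routinely produce large spurious correlations that independent white-noise series essentially never do.

```python
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def random_walk(n, rng):
    """Cumulative sum of Gaussian steps."""
    x, out = 0.0, []
    for _ in range(n):
        x += rng.gauss(0.0, 1.0)
        out.append(x)
    return out

rng = random.Random(42)
trials = 500
# Count "strong" correlations (|r| > 0.7) between independent pairs:
walk_big = sum(1 for _ in range(trials)
               if abs(pearson(random_walk(100, rng),
                              random_walk(100, rng))) > 0.7)
noise_big = sum(1 for _ in range(trials)
                if abs(pearson([rng.gauss(0, 1) for _ in range(100)],
                               [rng.gauss(0, 1) for _ in range(100)])) > 0.7)
print(walk_big / trials, noise_big / trials)  # walks: sizable; noise: ~0
```

With only a handful of nodes, a few of these spurious random-walk correlations are all it takes to manufacture an apparent synchronization event, which is why the control study matters.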
Of course, one can say the same thing about the gradual AGW warming trend, which could be one rather large spurious event (the single network node of the GHG model) or a confluence of additive events, but that is the way it works. One ultimately needs a huge amount of supporting evidence, and this should grow with time.
Thus, Tsonis is applying network analysis to the correlations but is not doing the extra number crunching necessary to evaluate the complete state space. Network analysis has as much to do with all the very common connections as with the large rare connections. I can point to some work that I have done in looking at a network model of research citations. This is a network because all citations are mutual in some respect. One not only needs to look at the articles with a huge number of citations, the “dragon-kings” in Sornette-speak, or “gray swans” as Taleb refers to them, but also all the other citations in the population. Only then can you get an idea of how the network nodes evolve. In the cases I looked at, I used an underlying dispersive learning curve model to map the density from single-cited articles to the maximally cited articles. See these figures I came up with:
PDF for research citation learning model
http://img707.imageshack.us/img707/6978/citepdf.gif
CDF for research citation learning model
http://img713.imageshack.us/img713/9845/citecdf.gif
As an analogy, what Tsonis is essentially doing is looking at possible coincidental synchronization of several highly cited works in time and space. This could happen just by chance. Sure, I understand how this will occur because of the spatiotemporal nature of the earth’s climate system, but the statistical argument is not yet convincing. He took just the tip of the iceberg of network analysis and ignored the rest, with no consideration of all the lesser nodes in the network. As with chaos, there is some element of hype in the way that Tsonis has drawn on network analysis to promote his research. I am not saying we are gullible, but someone really has to cast a critical eye on his approach.
Suggestion for Chief, WHT, & all others (especially climatologists):
Reproduce figure 3’s middle panel, stratify by month, and then take the integral:
Mursula, K., & Zieger, B. (2001). Long-term north-south asymmetry in solar wind speed inferred from geomagnetic activity: A new type of century-scale solar oscillation? Geophysical Research Letters, 28(1), 95-98.
http://spaceweb.oulu.fi/~kalevi/publications/MursulaAndZieger2001.pdf
This simple exercise will clear the fog on climate-shift temporal synchronization. It’s neither random nor chaotic. It’s something FAR simpler: solar phase-alignment with the terrestrial year. Synchronization occurs at times of GLOBAL constraint. The data that demonstrate this:
1. ftp://ftp.iers.org/products/eop/longterm/c04_08/iau2000/eopc04_08_IAU2000.62now
2. ftp://ftp.iers.org/products/geofluids/atmosphere/aam/GGFC2010/AER/
3. ftp://ftp.ngdc.noaa.gov/STP/SOLAR_DATA/SUNSPOT_NUMBERS/INTERNATIONAL/daily/RIDAILY.PLT
4. ftp://ftp.ngdc.noaa.gov/STP/SOLAR_DATA/RELATED_INDICES/AA_INDEX/aaindex
Mother Earth’s not always equally receptive to Father Sun’s advances; she has her own internal cycles [ http://upload.wikimedia.org/wikipedia/commons/6/67/Ocean_currents_1943_%28borderless%293.png ].
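The exercise described above, stratify a daily record by calendar month and then integrate, can be sketched generically. This is only one plausible reading of the suggestion: the function name, the choice of monthly-anomaly residuals as the integrand, and the synthetic daily series (annual cycle plus slow trend) are all my own illustrative assumptions, not a reconstruction of Mursula & Zieger's actual figure.

```python
import datetime as dt
import numpy as np

def stratify_and_integrate(dates, values):
    """Stratify a daily series by calendar month, remove each month's
    long-term mean, then cumulatively sum the residual (a discrete
    integral) to expose slow drifts relative to the annual cycle."""
    values = np.asarray(values, dtype=float)
    months = np.array([d.month for d in dates])
    anomaly = np.empty_like(values)
    for m in range(1, 13):
        mask = months == m
        anomaly[mask] = values[mask] - values[mask].mean()
    return np.cumsum(anomaly)

# demo: ten years of synthetic daily data with an annual cycle and a trend
start = dt.date(1962, 1, 1)
dates = [start + dt.timedelta(days=i) for i in range(365 * 10)]
t = np.arange(len(dates))
series = 10 + 3 * np.sin(2 * np.pi * t / 365.25) + 0.001 * t
integrated = stratify_and_integrate(dates, series)
```

With real index data (the aa index, sunspot numbers, or the EOP files listed above), the same two steps apply once the files are parsed into (date, value) pairs.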
If anyone has trouble understanding the concepts underpinning the construction of the Paul Wavelet (Dr. Curry included), just ask. I’m currently devising a more intuitive reparameterization of the Paul Wavelet (in terms of grain, extent, & sampling resolution) that should help people see it as a superior alternative to the Complex Morlet Wavelet in many contexts.
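For readers who want to see what the Paul wavelet actually is, here is a minimal sketch of the mother wavelet in the common Torrence & Compo convention, with the standard unit-energy normalization. The reparameterization in terms of grain, extent, and sampling resolution mentioned above is the commenter's own work in progress and is not attempted here.

```python
import numpy as np
from math import factorial, pi, sqrt

def paul_wavelet(eta, m=4):
    """Paul mother wavelet of order m (Torrence & Compo convention):
    psi(eta) = (2^m i^m m!)/sqrt(pi (2m)!) * (1 - i*eta)^-(m+1).
    It is more localized in time, and less in frequency, than the
    complex Morlet wavelet."""
    norm = (2 ** m * 1j ** m * factorial(m)) / sqrt(pi * factorial(2 * m))
    return norm * (1 - 1j * eta) ** (-(m + 1))

eta = np.linspace(-5, 5, 1001)
psi = paul_wavelet(eta)
# unit-energy check: the integral of |psi|^2 should be close to 1
energy = np.sum(np.abs(psi) ** 2) * (eta[1] - eta[0])
```

The sharp time localization (the envelope decays like a power of eta rather than a Gaussian) is what makes the Paul wavelet attractive for tracking abrupt, short-lived events.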
Folks: I’m going to suggest that we need to cut out some of the yammering to make time to do actual serious work. Everyone, especially climatologists: Please do the exercise I’ve outlined here to demonstrate that you’re serious. Ask for help if you need it.
Regards.
It’s kind of difficult to do an integration on colors, but I bet that the numbers integrate to a constant, with the deep blue and red cancelling each other. This sounds like a valid analysis if it shows correlation to temperature, as every bit of evidence counts.
WHT, you will most assuredly NOT find cancellation. Also, bear in mind that if you do the calculations, you will have the actual numbers summarized in the color-contour graph. I’ve linked to the data above. You can ask for help if you get stuck, but my availability drops off sharply again after today as I’m working serious hours the next several days starting tomorrow. I sincerely hope Dr. Curry will try these calculations – or at least delegate them to a talented research assistant &/or grad student. It’s important to realize that temporally global methods will FAIL (hard underscore); windowing is ESSENTIAL. Not only is windowing essential, the role of the SHAPE of the complex envelope is KEY to sensible interpretation, so a firm handle on correlation & regression diagnostics fundamentals is absolutely indispensable, as is a firm foundational handle on the effect of integration across harmonics.
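The claim that temporally global methods fail while windowing succeeds is easy to demonstrate numerically. Here is a minimal sketch (my own toy construction, not the commenter's data): two series whose relationship flips sign halfway through have a full-record correlation near zero, yet every window sees a strong relationship.

```python
import numpy as np

def windowed_correlation(x, y, window):
    """Sliding-window Pearson correlation: correlate x and y within
    each window of the given length instead of over the full record."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - window + 1
    out = np.empty(n)
    for i in range(n):
        xs, ys = x[i:i + window], y[i:i + window]
        out[i] = np.corrcoef(xs, ys)[0, 1]
    return out

# relationship flips sign halfway: global correlation ~0,
# but windowed correlation is ~+1 early and ~-1 late
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
y = np.where(t < 10 * np.pi, np.sin(t), -np.sin(t))
r = windowed_correlation(x, y, window=200)
global_r = np.corrcoef(x, y)[0, 1]
```

A single full-record statistic averages the two regimes away entirely, which is exactly the failure mode being warned about.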
Again these are atmospheric and oceanic indices that capture the major modes of NH climate variability. The shifts are in the records and were recognised long ago.
‘One of the most important and mysterious events in recent climate history is the climate shift in the mid-1970s [Graham, 1994]. In the northern hemisphere 500-hPa atmospheric flow the shift manifested itself as a collapse of a persistent wave-3 anomaly pattern and the emergence of a strong wave-2 pattern. The shift was accompanied by sea-surface temperature (SST) cooling in the central Pacific and warming off the coast of western North America [Miller et al., 1994]. The shift brought sweeping long-range changes in the climate of the northern hemisphere. Incidentally, after “the dust settled,” a new long era of frequent El Niños superimposed on a sharp global temperature increase begun.’ No one is trying to prove these shifts occur – they are right in front of our noses. Much like the multimodality of glacials and interglacials. What we have instead is an approach to investigating the nature of these shifts.
This is science – it is hypothesis, analysis and synthesis. It shows that the signals behave as hypothesised. They synchronise at the right times in the observed records. The indices behave as nonlinear oscillators in the overall complex climate system. The oscillators are not noise at all but coherent signals. You seem to think that nothing in climate has a cause and everything is a random walk.
I read an article once saying that predictions of ENSO three months in advance were no better than a random walk. This emphatically doesn’t mean that ENSO is a random walk. You are all nonsense – as bad as Paul Vaughan or any of the other obsessive idiots on the blog.
Network synchronization is just a symptom, “chief” – a symptom of tightening global constraint. I suggest that, moving forward, Dr. Curry tighten a global constraint on your degenerating manners.
Last time this topic came up, it was the attractor idea that I think was suspect. The weather state may be ergodic within a limited time window. That is, in principle the weather can return to the same state, but there is no way the weather in 2012 can return to a state from 1940. The CO2 level makes sure of that. The whole mean temperature and profile has changed. I think this limits the value of ergodicity in talking about climate change.
True. An engineering view of this is that we are operating on a new transfer function with a never-before-seen stimulating forcing function. So it’s a twofer.
The contrarian view of this is that the climate is constantly going through these massive shifts — thus the interest in Tsonis and those who can seemingly detect those shifts.
When ergodicity is discussed it normally refers to a system in thermodynamic equilibrium, which is also stationary. That was certainly true for the original discussion in the 19th century. The formal mathematical definition presented by Tomas may, as far as I can see, be applied also to non-equilibrium stationary systems like the Earth system when all external forcings remain constant, including both genuinely external factors like the sun and properties of the Earth system considered external for the processes being analyzed, like the CO2 concentration.
If the changes in forcings are slow enough to allow the system to behave as if in full equilibrium at every moment, ergodicity refers to what would be true if the forcings remained at their momentary values. As really going through all states takes forever (the life of the universe is much too short for that), it doesn’t really matter much whether the effective stationarity holds for a few years or for a few thousand years, but the slowness of the change in comparison with the time needed to reach equilibrium is essential.
The Earth system as a whole, including the deep oceans, is certainly extremely slow in reaching a stationary state. Thus ergodicity will never apply to it. Some subsystems, like the atmosphere and the surface ocean down to some depth, may get close to a partial equilibrium much faster, perhaps in a few years, but this is only a possibility and perhaps not a very likely one. It appears more likely that continuous change and dynamics are important for the atmosphere as well on all relevant time scales.
So what is left of ergodicity for the Earth? Some vague concepts that can be used to describe the general nature of the situation, but that cannot be expected to be rigorous. Ergodicity in this limited and vague sense means that what we have seen of the subsystem is representative of all future possibilities within the period being considered. That’s very far from the rigorous formal definition, but it’s still perhaps as close as we can get.
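The point that a slowly drifting forcing breaks ergodicity can be illustrated with a toy numerical experiment (my own construction, purely for intuition). For a stationary AR(1) process, the time average of one long run converges on the ensemble mean; add a slow linear trend playing the role of a changing forcing, and the time average instead depends on how long you watch, so it no longer estimates any single equilibrium state.

```python
import numpy as np

def simulate(n, trend=0.0, seed=0):
    """AR(1) noise plus an optional linear trend. With trend=0 the
    process is (asymptotically) stationary, and the time average of a
    long run approaches the ensemble mean of 0. A nonzero trend acts
    as a slowly changing forcing, breaking that equivalence."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = 0.9 * x[i - 1] + rng.normal()
    return x + trend * np.arange(n)

n = 20000
stationary = simulate(n)             # time average ~ ensemble mean (0)
drifting = simulate(n, trend=0.001)  # time average depends on record length
time_avg_stat = stationary.mean()
time_avg_drift = drifting.mean()
```

The stationary run's mean hovers near zero, while the drifting run's mean grows with the record length, which is the toy-model version of "what we have seen is no longer representative of all future possibilities."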
Suggestion for Tomas’ next Climate Etc. post
(if there will be one):
Ergodicity in Temporal Chaos
Versus
Ergodicity in Spatiotemporal Chaos
…with examples.
There seem to be a good number of commenters who are hardwired to ignore the spatial dimensions, fixating stubbornly on time only, and a similar number of others whose spatial awareness is at best recurrently fleeting, so I’m suggesting the contrast as another way to confront the recurring conceptual lapses against which Tomas has cautioned.
I add the following admittedly deliberate provocation to help motivate further corrective action:
Hopelessly stubborn fixation on temporal (i.e. time-only) chaos theory in online climate discussions is of NO VALUE WHATSOEVER in understanding natural terrestrial climate variations. Worse than that, it bombs what might otherwise be an enlightening discussion back to a conceptual stone age.
–
Thanks to Tomas Milanovic for playing an important role that otherwise might be left void.