by Tomas Milanovic
This post was triggered by the following comment from Eli Rabbett in the spatio-temporal chaos thread:
“Point being that it is possible to handle even classically chaotic spatio-temporal systems because the available parameter space is bounded. Even for a system which is chaotic, paths through the parameter space do not necessarily fill the entire space and measures of the areas which are filled can be used to make future predictions. This can and has been done using equations of motion which are not chaotic which is the approach of GCMs. What you are looking for is the area of parameter space occupied by and ensemble of orbits. The key is that the limits/boundaries are the same for the deterministic and chaotic paths and what you are really interested in are the boundaries, not the specific values at any point in time.
See statistical mechanics for an analogy. Paths of molecules are chaotic, but what we are interested in are average energies (temperature), etc., and the effect of changes in the system on the average energy/temperature. ”
While this statement is incorrect, I find it interesting because this kind of statement is quite common and probably even dominant in climate science. Let us examine the different parts of the statement separately:
1) “Point being that it is possible to handle even classically chaotic spatio-temporal systems because the available parameter space is bounded.”
Dynamical systems theory (or chaos theory) is always a classical theory, not a quantum mechanical one. The notion of quantum chaos exists but has nothing to do with the debate that interests us in the context of climate. That the available parameter space is bounded is a trivial tautology, meaning only that any physical quantity is necessarily finite. It is not because of this tautology that a chaotic system can be “handled”. It can be handled simply because we have developed a chaos theory which does the “handling”.
2) “Even for a system which is chaotic, paths through the parameter space do not necessarily fill the entire space and measures of the areas which are filled can be used to make future predictions.”
In a sense the statement is again trivial. When one deals with a dissipative system (as opposed to a conservative system), the initial volume in the phase space is not conserved during the evolution but decreases. So the orbits described by the system in the phase space are constrained to a finite subspace, which can reduce to a single point (aka equilibrium) or a cycle (aka periodic motion). Any dissipative dynamical system, regardless of whether it is chaotic or not, behaves like that.
In the case of a chaotic system there is often an invariant subset of the phase space called an attractor. It is called an attractor because the system, after a more or less long transient, settles on it and thereafter its orbits stay constrained to it. So it is rather trivial to say that the orbits do not fill the whole phase space but only a part of it, and that part is precisely defined by the attractor.
Indeed, even and especially for chaotic systems, most of chaos theory is dedicated to the study of the attractors’ properties. Needless to add, the existence of an attractor doesn’t make “future predictions” any easier. The orbit stays as unpredictable as ever, i.e. it is impossible to know where on the attractor the system will be at any given time. Sure, it will always be somewhere on the attractor, but that is neither a terribly interesting nor an accurate prediction.
Now it is also necessary to stress something that has been said many times but that apparently has not yet sufficiently percolated. We are talking here about finite dimensional phase spaces where the coordinates are well defined, so the system’s orbits and finally the attractor’s geometry are well defined too. This means that we deal exclusively with systems that are described by a finite number of ordinary differential equations, which is equivalent to saying that we have a finite dimensional phase space.
This is what is called temporal chaos theory or often just chaos theory. The Lorenz system, the logistic equation, oscillating electronic circuits, gravitationally interacting systems (e.g. the Solar system or the three-body problem), and billiard balls are examples of chaotic systems described in finite dimensional phase spaces.
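Both of these points – that the orbit stays bounded on the attractor and that it nevertheless remains unpredictable – can be seen numerically on the simplest of these examples, the Lorenz system. The sketch below is a minimal illustration in Python (assuming numpy and scipy are available; the parameter values and the 1e-8 perturbation are arbitrary illustrative choices): two orbits started a tiny distance apart both stay bounded, yet their separation grows by many orders of magnitude, which is exactly why “somewhere on the attractor” is not a useful prediction.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz '63 system: the standard 3-dimensional example of temporal chaos.
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 40.0, 8000)
s0 = np.array([1.0, 1.0, 1.0])
s0_perturbed = s0 + np.array([1e-8, 0.0, 0.0])   # tiny initial difference

sol_a = solve_ivp(lorenz, (0.0, 40.0), s0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_b = solve_ivp(lorenz, (0.0, 40.0), s0_perturbed, t_eval=t_eval, rtol=1e-9, atol=1e-9)

separation = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
print("initial separation:", separation[0])
print("final separation  :", separation[-1])         # grown by many orders of magnitude
print("max |coordinate|  :", np.abs(sol_a.y).max())  # yet the orbit stays bounded
```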
3) “This can and has been done using equations of motion which are not chaotic which is the approach of GCMs. What you are looking for is the area of parameter space occupied by and ensemble of orbits. The key is that the limits/boundaries are the same for the deterministic and chaotic paths and what you are really interested in are the boundaries, not the specific values at any point in time.”
There are NO chaotic or non-chaotic “equations of motion”! There are only the CORRECT “equations of motion”, whatever they are. Then, hopefully, they have solutions. It is those solutions that are or are not chaotic. Laminar flow is a solution of the Navier Stokes equations and is definitely not chaotic, whereas turbulent flow is a solution of the same Navier Stokes equations and is chaotic.
Now while one can debate about what GCMs do, there is no debate about what they do NOT do. They do not solve any “equations of motion”, because first those equations are unknown, and second, even if they were known, the GCMs’ spatial and temporal resolution would not allow them to be solved. Even the convergence question cannot be answered, because to converge to a solution one would have to refine the resolution (shrink the grid spacing and time step) until the numerical solution converges, which is computationally impossible. So while it is known that there are chaotic solutions of Navier Stokes and that the weather is an example of such a solution that Nature chose, GCM numerical simulations are irrelevant to the question.
When we have a chaotic solution and an attractor, we will preferably be looking in the places where the system is (aka the attractor) rather than in the places where it is not (aka outside of the attractor). Obviously, as the attractor is defined as a subset of the phase space invariant under the dynamics, it is difficult to look for the attractor without knowing what exactly the solutions do.
An attractor is a geometrical structure in the phase space. As an example, suppose the attractor is a 3D ball in a 4D phase space. Its “boundary/limit” is the surface of the ball. Now, by the definition of an attractor, the system is always somewhere within the ball. The surface is only a separation between the places where the system will be and the places where it will not be. It is really the key to nothing. It is like an engineer designing a turbine who, when asked to predict what the RPM will be, answers: “It is key to know that it will not be 100 000 RPM”.
The whole statement becomes even less understandable when one knows that most chaotic attractors are fractal (i.e. have a fractional dimension), e.g. the Lorenz attractor. The notion of “boundaries/limits” of a set with dimension 1.73 is then not even properly defined.
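To give a concrete sense of what a fractional dimension means in practice, here is a minimal Grassberger–Procaccia correlation-dimension sketch for the Lorenz attractor (Python, assuming numpy and scipy; the trajectory length, the radii and the fitting range are arbitrary illustrative choices, so the number it prints is only indicative).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial.distance import pdist

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Sample the attractor after discarding the initial transient.
t_eval = np.linspace(0.0, 200.0, 40000)
sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8, atol=1e-8)
points = sol.y[:, 20000::10].T                 # ~2000 points on the attractor

# Correlation sum C(r): fraction of point pairs closer than r.
dists = pdist(points)
radii = np.logspace(-0.3, 1.0, 15)
corr = np.array([np.mean(dists < r) for r in radii])

# The correlation dimension is the slope of log C(r) vs log r in the scaling region.
slope = np.polyfit(np.log(radii), np.log(corr), 1)[0]
print("estimated correlation dimension ~", round(slope, 2))   # a fractional number, close to 2
```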
4) “See statistical mechanics for an analogy. Paths of molecules are chaotic, but what we are interested in are average energies (temperature), etc., and the effect of changes in the system on the average energy/temperature. “
Most analogies explain nothing and misinterpret everything, and this one is no exception. However, the consideration of a large number of molecules leads to an important point.
A large number of molecules can be considered as a system of colliding hard spheres. This system has a finite dimensional phase space and its orbits indeed exhibit temporal chaos. However, this system has an interesting property – ergodicity. Ergodicity means that there exists a measure on the phase space (think measure = probability) that is invariant under the dynamics. Another way to say the same thing is that the probability that the system is in a volume dV of the phase space is proportional to dV. Another, very loose, way to put it is that the system ends up more or less everywhere if one waits long enough.
The ergodic theorem states that for an ergodic system, the time average of a quantity X taken along a given orbit (in the infinite-time limit) is equal to the average of X over the whole phase space, weighted by the invariant measure. The averages of dynamical variables (degrees of freedom) of a system make sense and are relevant if and only if the system is ergodic. However, it is not yet clear whether ergodicity alone is a necessary or a sufficient condition for a stochastic interpretation at least as robust as statistical thermodynamics; this issue is cutting-edge science.
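A minimal illustration of the ergodic theorem, assuming Python with numpy: the logistic map x → 4x(1−x) is known to be ergodic on [0,1] with invariant density 1/(π√(x(1−x))), so the time average of x along one long orbit should match the space average weighted by that density (both equal 1/2), up to finite-sample and discretization error.

```python
import numpy as np

# Logistic map x -> 4 x (1 - x): chaotic and ergodic on [0, 1]
# with invariant density rho(x) = 1 / (pi * sqrt(x * (1 - x))).
def orbit_time_average(x0, n_steps):
    x, acc = x0, 0.0
    for _ in range(n_steps):
        x = 4.0 * x * (1.0 - x)
        acc += x
    return acc / n_steps

# Space average of the same observable x, weighted by the invariant density
# (midpoint rule to avoid the integrable singularities at 0 and 1).
m = 200000
xs = (np.arange(m) + 0.5) / m
rho = 1.0 / (np.pi * np.sqrt(xs * (1.0 - xs)))
space_average = np.sum(xs * rho) / m              # analytically equal to 1/2

print("time average along one orbit :", orbit_time_average(0.123, 1_000_000))
print("space average (invariant pdf):", space_average)
# For an ergodic system the two numbers agree (up to finite-sample error).
```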
Is ergodicity a given for every chaotic system? NO! Chaotic systems can be either ergodic or non-ergodic. An example of a non-ergodic chaotic system is the already mentioned case of N bodies in gravitational interaction. All this has of course absolutely nothing to do with climate, unless somebody wants to suggest that the climate is in fact the same thing as a box full of hard spheres :)
For a good but not too technical summary about ergodic theory, see here. For a good and very deep but very technical paper, see here.
So now let’s discuss what is relevant to the climate system. If the reader is ready to admit with me (I am borrowing this insight from R. Pielke Sr. and give him credit) that the climate problem is more complex than the Navier Stokes problem, then I will restrict myself in the following to the Navier Stokes problem. This has the advantage that in the Navier Stokes case we know what we are talking about, and there are actual results with real mathematics and physics inside. If you don’t like this idea, then mentally replace Navier Stokes with climate everywhere below.
Fluid dynamics is a field theory. This means that the solutions of the Navier Stokes partial differential equations are fields – functions f(x,y,z,t) such as the velocity and pressure fields. The “phase space” of fluid dynamics is a Hilbert space whose elements are fields (functions). This Hilbert space is infinite dimensional (the L2 space of square integrable functions) and exactly the same as the one used to study quantum mechanics and, more generally, any PDE system.
This fundamental property is what makes the difference between temporal and spatio-temporal chaos. It can be summarized in the following table:
| | Temporal chaos | Spatio-temporal chaos |
| --- | --- | --- |
| Phase space | Finite dimensional. Isomorphic to R^n | Infinite dimensional. Hilbert space of functions L2 |
| Elements of the phase space | Vectors with N coordinates. No spatial autocorrelations | Fields F(x,y,z,t). Anisotropic spatial autocorrelations |
| Equations defining the dynamics | N first-order nonlinear ODEs | Nonlinear PDEs |
| State of the system | Fully defined by 1 point P(t) in the phase space | Fully defined by M fields |
| Orbit | Evolution of P(t) in the phase space | Undefined in an infinite dimensional space |
| Attractor | Subset of R^n invariant under the dynamics. Can be fractal. Has a geometric representation | Subset of the L2 Hilbert space containing the fields solving the dynamical equations. No geometric representation |
| Stochastics | Possible invariant PDF in the phase space if the system is ergodic | Here be dragons |
Now, by analogy with temporal chaos, attempts have been made to characterize the attractors in the Hilbert space for spatio-temporal chaos, and the following major result has been proven for 2D Navier Stokes as well as for some other specific PDE systems, see here. There are more references dealing with modern concepts like inertial manifolds, general attractors, etc., but they are all too technical for a casual reader.
This result proves that in some cases there exists a finite dimensional attractor in the Hilbert space. It means that any solution of the 2D Navier Stokes equations may be expressed as a linear combination of a finite number of fields, which constitute the basis of the global attractor. Of course, an existence proof and an upper limit on the dimension do not mean that we are actually able to find these fields, which would allow us to obtain a general solution of the considered PDE system.
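As an illustration of what “a solution expressed as a linear combination of a finite number of fields” looks like in practice, here is a minimal proper orthogonal decomposition sketch (SVD of snapshots) in Python with numpy. The snapshot data are synthetic and the decomposition is not the global-attractor basis of the theorem; it only shows how a family of fields can be compressed onto a small number of basis fields with time-varying coefficients.

```python
import numpy as np

# Synthetic "solution": snapshots built from 3 hidden spatial modes with
# time-varying amplitudes plus a little noise (purely illustrative data;
# real snapshots would come from a PDE solver or from observations).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
t = np.linspace(0.0, 10.0, 300)
hidden_modes = np.stack([np.sin(x), np.sin(2 * x), np.cos(3 * x)])             # 3 x 200
amplitudes = np.stack([np.cos(t), 0.5 * np.sin(2 * t), 0.2 * np.cos(5 * t)])   # 3 x 300
snapshots = hidden_modes.T @ amplitudes + 0.01 * rng.standard_normal((200, 300))

# Proper orthogonal decomposition: SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1
print("modes capturing 99% of the variance:", k)          # ~3 for these data

# Every snapshot is then (approximately) a linear combination of the k basis fields U[:, :k].
reconstruction = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print("relative reconstruction error:", rel_err)
```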
Also, the proof is still unknown for the 3D Navier Stokes equations, and there is clearly no certainty that every spatio-temporal dynamical system must possess a general attractor. However, this new and modern line of research is a very promising direction for reducing infinite dimensional spatio-temporal chaos to a finite dimensional setting, which would make it more tractable to obtain a relevant quantitative model.
So what does that mean for climate science? Climate, being a spatio-temporal field-theoretical problem, must clearly be studied with the correct methods described above in order to get correct results. Even if the problem is more complex than the “simpler” Navier Stokes problem, one may imagine the consequences of the discovery of an inertial manifold of finite dimension for the climate system.
For the sake of illustration, let’s imagine that the attractor dimension is 10. This means that there are at most 10 different and independent functions (fields) that define an invariant subspace of functions (fields) where all the climate solutions live. These 10 functions (fields) can be considered as fundamental “oscillation modes” of the system, not very different from the concept of “oceanic oscillations”. Of course there is no reason for the fields to be temperature fields (let alone surface temperature fields), but whatever the fields are, all climatic states would then be obtained just by making these 10 fields interact among themselves. Obviously the greenhouse gas (GHG) concentration field would play a role too, but the climate could be reduced to GHG only if the attractor were one-dimensional, which is clearly excluded. Finding such a set of fields would definitively solve the climate problem.
Nothing tells us that such a finite dimensional attractor exists, and even if it existed, nothing tells us that it would not have some absurdly high dimension that would make it unknown forever. However the surprising stability of the Earth’s climate over 4 billion years, which is obviously not of the kind “anything goes,” suggests strongly that a general attractor exists and its dimension is not too high. I will not speculate about what number that might be. But in any case and to use the IPCC terminology, it is very unlikely (<5%) that naïve temperature averages, 1 dimensional equilibrium models, or low resolution numerical simulations (GCM) can come anywhere close to solving the problem.
As I have already commented here, the only approach that, in my opinion, goes in the right direction is that of Tsonis (see also the thread on climate shifts). If one reinterprets the Tsonis theory in the frame of a more general and correct field theory, he suggests that the climate attractor exists and is 5 dimensional. He identifies the 5 fields with 5 oceanic oscillations and quantifies each field by a single number (an index). He doesn’t formulate it that way, and it is extremely unlikely that a 3D field can be relevantly represented by a single number, but the paradigm is on the right track. I am convinced that this kind of approach will eventually lead to progress.
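Loosely in the spirit of that index-based paradigm (and not Tsonis’ actual algorithm), one can track how strongly a handful of scalar indices are coupled over a sliding window. The sketch below uses Python with numpy and entirely synthetic stand-in series; the window length and the coupling measure (mean absolute pairwise correlation) are arbitrary illustrative choices.

```python
import numpy as np

# Synthetic stand-ins for a few oscillation indices (NOT real climate data):
# a common slow component plus independent noise, sampled e.g. monthly.
rng = np.random.default_rng(1)
n = 600
common = np.cumsum(rng.standard_normal(n))
indices = np.stack([common * w + rng.standard_normal(n)
                    for w in (1.0, 0.6, 0.3, 0.1, 0.05)])        # 5 index series

def mean_coupling(window):
    """Average absolute pairwise correlation of the index series inside one window."""
    c = np.corrcoef(window)
    upper = c[np.triu_indices_from(c, k=1)]
    return np.mean(np.abs(upper))

window_len = 60                                # e.g. a 5-year window of monthly values
coupling = [mean_coupling(indices[:, i:i + window_len])
            for i in range(n - window_len)]
print("coupling range over the record:",
      round(min(coupling), 2), "to", round(max(coupling), 2))
```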
Moderation note: this is a technical thread that will be moderated for relevance.
