by Tomas Milanovic
This post has been triggered by the following comment from Eli Rabbett in the spatio-temporal chaos thread:
“Point being that it is possible to handle even classically chaotic spatio-temporal systems because the available parameter space is bounded. Even for a system which is chaotic, paths through the parameter space do not necessarily fill the entire space and measures of the areas which are filled can be used to make future predictions. This can and has been done using equations of motion which are not chaotic which is the approach of GCMs. What you are looking for is the area of parameter space occupied by and ensemble of orbits. The key is that the limits/boundaries are the same for the deterministic and chaotic paths and what you are really interested in are the boundaries, not the specific values at any point in time.
See statistical mechanics for an analogy. Paths of molecules are chaotic, but what we are interested in are average energies (temperature), etc., and the effect of changes in the system on the average energy/temperature. ”
While this statement is incorrect, I find it interesting because this kind of statement is quite common and probably even dominant in climate science. Let us examine the different parts of the statement separately:
1) “Point being that it is possible to handle even classically chaotic spatio-temporal systems because the available parameter space is bounded.”
Dynamical systems theory (or chaos theory) is always a classical theory, not a quantum mechanical one. The notion of quantum chaos exists but has nothing to do with the debate that interests us in the context of climate. That the available parameter space is bounded is a trivial tautology, meaning that any physical measure is necessarily finite. It is not because of this tautology that the chaotic system can be “handled”. It can be handled simply because we have developed a chaos theory which does the “handling”.
2) “Even for a system which is chaotic, paths through the parameter space do not necessarily fill the entire space and measures of the areas which are filled can be used to make future predictions.”
In a sense the statement is again trivial. Of course when one deals with a dissipative system (as opposed to a conservative system), the initial volume in the phase space is not conserved during the evolution but decreases. So the orbits described by the system in the phase space are constrained to a finite subspace that can be reduced to a single point (aka equilibrium) or a cycle (aka periodic movement). Any dissipative dynamical system, regardless of whether it is chaotic or not, behaves like that.
In the case of a chaotic system there often is an invariant subset of the phase space which is called an attractor. It is called attractor because the system, after a more or less long transient, settles on the attractor and thereafter its orbits stay constrained on it. So it is rather trivial to say that the orbits do not fill the whole phase space but only a part of it, which is precisely defined by the attractor.
Indeed, even and especially for chaotic systems, most of chaos theory is dedicated to the study of the attractors’ properties. Needless to add that the existence of an attractor doesn’t make “future predictions” any easier. The orbit stays as unpredictable as ever, i.e. it is impossible to know where the system will be on the attractor at any given time. Sure, it will always be somewhere on the attractor, but that is not a terribly interesting or accurate prediction.
Now it is also necessary to stress something that has been said many times but that apparently has not yet sufficiently percolated. We are talking here about finite dimensional phase spaces where the coordinates are well defined, so the system’s orbits and finally the attractor’s geometry are well defined too. This means that we deal exclusively with systems that are described by a finite number of ordinary differential equations, which is equivalent to saying that we have a finite dimensional phase space.
This is what is called temporal chaos theory, or often just chaos theory. The Lorenz system, the logistic equation, oscillating electronic circuits, gravitationally interacting systems (e.g. the Solar System or the three-body problem), and billiard balls are examples of chaotic systems described in finite dimensional phase spaces.
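For readers who want to see this concretely, here is a minimal Python sketch (the parameter values are the classical Lorenz choices; the step size and integration length are arbitrary) showing two orbits of the Lorenz system that start 10⁻⁸ apart, decorrelate completely, and yet both remain on the bounded attractor:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with classical RK4."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # perturb by one part in 10^8
for _ in range(3000):                # integrate for 30 time units
    a = lorenz_step(a)
    b = lorenz_step(b)

# The orbits have decorrelated (separation is macroscopic compared to 1e-8),
# yet both still lie on the bounded Lorenz attractor.
separation = np.linalg.norm(a - b)
```

The boundedness is exactly the trivial part discussed in point 1); the divergence of nearby orbits is what makes point-wise prediction hopeless.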
3) “This can and has been done using equations of motion which are not chaotic which is the approach of GCMs. What you are looking for is the area of parameter space occupied by and ensemble of orbits. The key is that the limits/boundaries are the same for the deterministic and chaotic paths and what you are really interested in are the boundaries, not the specific values at any point in time.”
There are NO chaotic or non chaotic “equations of motion”! There are only the CORRECT “equations of motion” whatever they are. Then hopefully they have solutions. It is those solutions that are or are not chaotic. Laminar flow is a solution of Navier Stokes equations and it is definitely not chaotic, whereas turbulent flow is a solution of the same Navier Stokes equations and it is chaotic.
Now while one can debate about what the GCMs do, there is no debate about what they do NOT do. They do not solve any “equations of motion”, because first those equations are unknown, and second, even if they were known, the GCMs’ spatial and temporal resolution wouldn’t allow them to be solved. Even the convergence question cannot be answered, because to converge to a solution one would have to refine the resolution until convergence, which is impossible. So while it is known that there are chaotic solutions to Navier Stokes and that the weather is an example of such a solution that Nature chose, numerical simulations by GCMs are irrelevant to the question.
When we have a chaotic solution and an attractor, we will be looking preferably in the places where the system is (aka attractor) rather than in the places where the system is not (aka outside of the attractor). Obviously as the attractor is defined as a subset of the phase space invariant by the dynamics, it is difficult to look for the attractor without knowing what the solutions exactly do.
An attractor is a geometrical structure in the phase space. As an example, suppose the shape of the attractor is a 3D sphere in a 4D phase space. Its “boundary/limit” is the surface of the sphere. Now by the definition of an attractor, the system is always somewhere within the sphere. The surface is only a separation between the places where the system will be and the places where the system will not be. It is really no key to anything. It is like an engineer designing a turbine who, when asked to predict what the RPM will be, answers: “It is key to know that it will not be 100,000 RPM.”
The whole statement becomes even less understandable when one knows that most chaotic attractors are fractal (i.e. have a fractional dimension), e.g. the Lorenz attractor. Then the notion of “boundaries/limits” of a set with dimension 1.73 is not even properly defined.
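To make “fractional dimension” concrete, one can box-count the simplest fractal, the middle-thirds Cantor set, whose box-counting dimension is exactly ln 2 / ln 3 ≈ 0.63. This is only an illustrative sketch (box-counting the Lorenz attractor itself works the same way but needs a long integrated orbit):

```python
import numpy as np

def cantor_points(level):
    """Left endpoints of the intervals in the middle-thirds Cantor
    construction after `level` subdivision steps."""
    pts = np.array([0.0])
    for _ in range(level):
        pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])
    return pts

pts = cantor_points(12)                 # 4096 points approximating the set
log_counts, log_scales = [], []
for k in range(1, 9):                   # boxes of size 3**-k
    eps = 3.0 ** -k
    # count the boxes that contain at least one point (small offset
    # guards against floating-point boundary effects)
    boxes = np.unique(np.floor((pts + 1e-9) / eps))
    log_counts.append(np.log(len(boxes)))
    log_scales.append(np.log(1.0 / eps))

# Slope of log(count) vs log(1/eps) is the box-counting dimension.
dim, _ = np.polyfit(log_scales, log_counts, 1)
```

The count doubles each time the box size shrinks by a factor of three, so the fitted slope recovers ln 2 / ln 3, a dimension strictly between 0 and 1.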
4) “See statistical mechanics for an analogy. Paths of molecules are chaotic, but what we are interested in are average energies (temperature), etc., and the effect of changes in the system on the average energy/temperature. “
Most analogies explain nothing and misinterpret everything, and this one is no exception. However, consideration of a large number of molecules leads to an important point.
A large number of molecules can be considered a system of colliding hard spheres. This system has a finite dimensional phase space and its orbits indeed exhibit temporal chaos. However this system has an interesting property – ergodicity. Ergodicity means that there exists a measure on the phase space (think measure = probability) that is invariant by the dynamics. Another way to say the same thing is that the probability that the system is in a volume dV of the phase space is proportional to dV. Another, much looser way to say it is that the system ends up more or less everywhere if one waits long enough.
The ergodic theorem states that for an ergodic system, the time average of X taken along a given orbit (in the infinite limit) is equal to the weighted average of X over the whole system. The averages of dynamical variables (degrees of freedom) of a system make sense and are relevant if and only if the system is ergodic. However, it is not yet clear whether ergodicity alone is a necessary or a sufficient condition for a stochastic interpretation at least as robust as statistical thermodynamics; this issue is cutting edge science.
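A toy illustration of the ergodic theorem (nothing here is specific to hard spheres): the logistic map x → 4x(1−x) is chaotic and ergodic on [0,1], with the known invariant density ρ(x) = 1/(π√(x(1−x))), against which the space average of x is exactly 1/2. A time average along a single orbit converges to the same value:

```python
x = 0.2345                       # an arbitrary generic initial condition
for _ in range(1000):            # discard the transient
    x = 4.0 * x * (1.0 - x)

total, n = 0.0, 200_000
for _ in range(n):               # time average of x along one orbit
    x = 4.0 * x * (1.0 - x)
    total += x
time_average = total / n

# Space average of x against the invariant density
# rho(x) = 1/(pi*sqrt(x(1-x))) is exactly 1/2.
space_average = 0.5
```

The agreement of the two averages is precisely what ergodicity buys; for a non-ergodic chaotic system the time average would depend on the orbit chosen.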
Is ergodicity a given for every chaotic system? NO! Chaotic systems can be either ergodic or non-ergodic. An example of a non-ergodic chaotic system is the already mentioned case of N bodies in gravitational interaction. All this has of course absolutely nothing to do with climate, unless somebody wants to suggest that the climate is in fact the same thing as plenty of hard spheres :)
For a good but not too technical summary about ergodic theory, see here. For a good and very deep but very technical paper, see here.
So now let’s discuss what is relevant to the climate system. If the reader is ready to admit with me (I am borrowing this insight from R. Pielke Sr and give him credit) that the climate problem is more complex than the Navier Stokes problem, then I will restrict myself in the following to the Navier Stokes problem. This has the advantage that in the Navier Stokes case we know what we are talking about, and there are actually real results with real mathematics and physics inside. If you don’t like this idea, then mentally replace Navier Stokes by climate everywhere below.
Fluid dynamics is a field theory. This means that the solutions of the Navier Stokes partial differential equations are fields – functions f(x,y,z,t) like velocity and pressure fields. The “phase space” of fluid dynamics is a Hilbert space whose elements are fields (functions). This Hilbert space is infinite dimensional (the L2 space of square integrable functions) and exactly the same as the one used to study quantum mechanics and, more generally, any PDE system.
This above mentioned fundamental property is what makes the difference between temporal and spatio-temporal chaos. It can be summarized in the following table:
| | Temporal chaos | Spatio-temporal chaos |
| --- | --- | --- |
| Phase space | Finite dimensional; isomorphic to R^n | Infinite dimensional; Hilbert space of L2 functions |
| Elements of the phase space | Vectors with N coordinates; no spatial autocorrelations | Fields F(x,y,z,t); anisotropic spatial autocorrelations |
| Equations defining the dynamics | N first order nonlinear ODE | Nonlinear PDE |
| State of the system | Fully defined by 1 point P(t) in the phase space | Fully defined by M fields |
| Orbit | Evolution of P(t) in the phase space | Undefined in an infinite dimensional space |
| Attractor | Subset of R^n invariant by the dynamics; can be fractal; has a geometric representation | Subset of the L2 Hilbert space containing the fields solving the dynamical equations; no geometric representation |
| Stochastics | Possible invariant PDF in the phase space if the system is ergodic | Here be dragons |
Now by analogy with temporal chaos, attempts have been made to characterize the attractors in the Hilbert space for spatio-temporal chaos, and the following major result has been proven for 2D Navier Stokes as well as for some other specific PDE systems, see here. There are more references dealing with modern concepts like inertial manifolds, global attractors etc., but they are all too technical for a casual reader.
This result proves that for some cases there exists a finite dimensional attractor in the Hilbert space. It means that any solution of the 2D Navier Stokes equations may be expressed as a linear combination of a finite number of fields, which constitute the basis of the global attractor. Of course an existence proof and an upper limit on the dimension don’t mean that we are able to actually find the fields that would allow us to obtain a general solution for the considered PDE system.
Also, the proof is still unknown for the 3D Navier Stokes equations, and there is clearly no certainty that every dynamical spatio-temporal system must possess a global attractor. However, this new and modern research is a very promising direction for reducing infinite dimensional spatio-temporal chaos to a finite dimensional setting, which would make it more tractable to obtain a relevant quantitative model.
So what does that mean for climate science? Climate, being a spatio-temporal field theoretical problem, must clearly be studied by the correct methods described above in order to get correct results. Even if the problem is more complex than the “simpler” Navier Stokes problem, one may imagine the consequences of the discovery of a finite dimensional inertial manifold for the climate system.
For the sake of illustration, let’s imagine that the attractor dimension is 10. This means that there are at most 10 different and independent functions (fields) that define an invariant subspace of functions (fields) where all the climate solutions live. These 10 functions (fields) can be considered as fundamental “oscillation modes” of the system not very different from the concept of “oceanic oscillations”. Of course there is no reason that the fields be temperature fields (let alone surface temperature fields), but whatever the fields are, it would appear that all climatic states are obtained by just making these 10 fields interact among themselves. Obviously the greenhouse gas (GHG) concentration field would play a role too, but the climate could be reduced to GHG only if the attractor was one-dimensional, which is clearly excluded. This would definitely solve the climate problem.
Nothing tells us that such a finite dimensional attractor exists, and even if it existed, nothing tells us that it would not have some absurdly high dimension that would make it unknown forever. However the surprising stability of the Earth’s climate over 4 billion years, which is obviously not of the kind “anything goes,” suggests strongly that a general attractor exists and its dimension is not too high. I will not speculate about what number that might be. But in any case and to use the IPCC terminology, it is very unlikely (<5%) that naïve temperature averages, 1 dimensional equilibrium models, or low resolution numerical simulations (GCM) can come anywhere close to solving the problem.
As I have already commented here, the only approach that in my opinion goes in the right direction is Tsonis (see also the thread on climate shifts). If one reinterprets the Tsonis theory in the framework of a more general and correct field theory, he suggests that the climate attractor exists and is 5 dimensional. He identifies the 5 fields with 5 oceanic oscillations and quantifies every field by a single number (index). He doesn’t formulate it that way, and it is extremely unlikely that a 3D field can be relevantly represented by a single number, but the paradigm is on the right track. I am convinced that this kind of approach will eventually lead to progress.
Moderation note: this is a technical thread that will be moderated for relevance.
In short, we know the result of running a chaotic system will fall within a bounded set of results, and while the same can be said of a second run, it won’t be the same particular result.
It must be something to do with my magnetic personality but why me Lord, why’s it always me?
The whole Earth system of atmosphere, oceans, etc. is certainly extremely complex, more complex than fluid dynamical systems that have been analyzed successfully using numerical methods based on Navier-Stokes equations. A straightforward solution appears to be far beyond our capabilities when approached from the basic equations. Does this prove that modeling the Earth system cannot produce good results for climate projections? No it doesn’t, although it certainly tells us that the problem is not easy, but that we knew already.
The system has chaotic features. It also has stochasticity, and it is dissipative. The fact that the system is dissipative gives hope that useful models can be built; the stochasticity may both help and bring additional problems. Whether useful models can be built cannot be decided based on the theoretical facts that Tomas described. If the dissipation is strong enough, producing useful projections may be relatively easy, but weak dissipation may allow phenomena which go far beyond present modeling capabilities. Where we really stand can be decided only by studying the properties of the Earth system, not by general arguments.
The builders of AOGCMs have claimed that their models produce results, which is not a trivial statement, because the models might be too dependent on the initial conditions to produce anything that could be called a climate projection. They have also claimed fair success in comparing model results with many empirical observations of the Earth system. This is promising, but not yet decisive. There is still the possibility that something so important is missing from the models that the success is misleading, if our final criterion is the usefulness of the climate projections on the timescale of several decades and for a climate significantly different from the present one.
The theoretical considerations are interesting, but they do not necessarily tell much about the real issue of climate modeling. I do not believe that any version of chaos theory tells much about this real question. It may help in posing some important questions, but the real modeling (and also the real Earth system) is too far from the situations for which these theories give clear answers.
Pekka – Having read arguments made by Tomas, but also, like you, having some familiarity with the observed behavior of the climate system over decades, centuries, millennia, and more, as well as the observed performance of models, I’m interested in your view of the following tentative assessment.
Climate exhibits fluctuations that are sometimes unpredictable, including a variety of oscillations involving atmospheric/oceanic interactions. Over a variety of timescales of interest, it also indicates a tendency for many of these to average out – this is not a mathematical deduction but an observation. Models perform poorly in characterizing the fluctuations, but typically perform better for long term projections where the averaging out has reduced the disparity between projections and observations. A difficult challenge involves resolving model trends from fluctuations exhibiting similar timescales to the trends.
Chaotic behavior is relevant, particularly to the fluctuations, because we do not know where the climate will evolve in terms of particular attractors. However, equally relevant is the Second Law of thermodynamics. The Second Law tells us that in the absence of external perturbations, there will be some degree of predictability inherent in the tendency of a system to assume a state of maximum entropy. It is also my impression that over the billions of years of Earth history, more than one such state has been possible (usually influenced by external factors such as solar irradiance), but that these alternatives have been widely separated – e.g., “hothouse” and “icehouse” climates in which, respectively, ice was absent even at the poles and present even close to the equator. I’m unaware of evidence for closely spaced stable states. I tentatively conclude that the averaging out I refer to describes the tendency for one particular stable state to represent the average, with the amplitude and time course of fluctuations exhibiting features that are often unpredictable, but not the existence of the state itself. In general, the wider fluctuations have tended to occur during glaciations, while interglacials have been characterized by greater stability (although not absolute stability) in the absence of clear external forcings, including solar variations and the greenhouse gas forcings relevant to current climate change.
What is your perspective?
Qualitative arguments do not take us very far here; the relevant problems are quantitative:
– Does the Earth system (and in this connection oceans, in particular) store and release such quantities of heat that they can influence the climate to a significant extent over periods of several decades or more?
– How rapid are the changes in the OHC that are related to the previous point? This influences the maximal strength of the effects on climate.
– Are there processes of long persistence (decades or more) that affect significantly the albedo and through that the balance of the energy fluxes?
These kinds of factors, which are mainly related to the ocean currents and the temperature and salinity distributions of the oceans (both lateral and vertical), may have significant influences on the climate at different time scales. If the Earth system is strongly dissipative, persistent and strongly influential modes that affect the climate over long periods are not likely, but if the dissipation is weak compared to the energies involved in these processes, then understanding the dynamics of the Earth system gets very difficult, and building good models gets very difficult as well, if not impossible.
The concept of chaos is used in connection with negligible dissipation and stochasticity, which allows the attractors to form in a mathematically well defined way. With stronger dissipation and more stochasticity these concepts lose significance. Between these extremes lies the plausible situation, which has attractor-type properties, but only over some timescales, as the dissipation wins given enough time.
I don’t know enough about the AOGCMs to tell how close they are to a situation where the models start to show major fluctuations, and I do not know whether their ability to produce results is based on forcing too much dissipation on them (or numerical diffusion that has been introduced by choices made in discretization). These kinds of problems seem quite plausible to me, but they are really problems that must be analyzed. With models one can make experiments to find out their properties, but that is not easy, because the processes being modeled have many different spatial and temporal scales. Thus these issues may occur at one scale, but be less visible in the full model. Still the consequences may be important.
It would be interesting to find good descriptions of the various studies that have been made to determine the quality of the models, written in a way where known pitfalls are listed and described at a level where their significance can be understood, but nothing like that has come to my knowledge.
Changing the greenhouse forcing is one important factor that is expected to change the climate by an amount comparable to the power of the radiative forcing, but how strong this effect is compared to the other effects is still a major problem affecting the attribution of historical observations and the ability to tell how it will influence the future climate. The projections depend on the climate models and are reliable only to the extent that the models can produce well defined results based on their verified properties.
Very refreshing to see you keep an open mind.
This planet has boundaries that climate cannot cross due to the amount of energy available. The trick is understanding all the players involved and their roles.
We have put too much emphasis on temperature numbers, to the exclusion of understanding that temperatures alone cannot predict physical events. The huge mistake is grouping everything globally when regionally many different actions are occurring in many areas.
For some good reading of potential tests and problems with tests read Tebaldi and Knutti. http://www.image.ucar.edu/~tebaldi/papers.html
http://www.cgd.ucar.edu/ccr/knutti/publications.html The Royal Proceedings paper is a good one. They have some interesting work on models and what is known, can be known.
The Second Law would be good to use, if it applied – the Earth’s climate isn’t a closed system.
The Second Law still applies in the absence of forced changes.
At the equator or in the Arctic. They are different in many complex ways that a generalized law cannot cover.
“Over a variety of timescales of interest, it also indicates a tendency for many of these to average out” – interesting, because the tendency for a time series to average out can be directly measured, by the Hurst exponent, and it shows the exact opposite of your claim.
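For readers unfamiliar with it, the Hurst exponent can be estimated with, e.g., the aggregated-variance method sketched below. This is a toy check on uncorrelated noise, where H = 0.5 by construction; a persistent climate series gives an estimate substantially above 0.5 (the seed and lengths are arbitrary choices):

```python
import numpy as np

# Aggregated-variance estimate of the Hurst exponent: the standard
# deviation of block means scales as m**(H-1). Uncorrelated noise has
# H = 0.5; persistent ("Hurst") behaviour shows up as H > 0.5.
rng = np.random.default_rng(42)
series = rng.standard_normal(2 ** 16)    # stand-in for a climate series

sizes = [2 ** k for k in range(4, 11)]   # block lengths 16 .. 1024
sds = []
for m in sizes:
    blocks = series[: (len(series) // m) * m].reshape(-1, m)
    sds.append(blocks.mean(axis=1).std())

# Fit log(sd) vs log(m); the slope is H - 1.
slope, _ = np.polyfit(np.log(sizes), np.log(sds), 1)
hurst = slope + 1.0
```

Applied to long climate records, this kind of estimator is exactly what quantifies whether fluctuations “average out” (H near 0.5) or persist (H well above 0.5).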
Also, as I explained to you on the previous thread, and even gave you a reference outlining the maths behind it, invoking equilibrium *reduces* entropy, it does not maximise it.
It is interesting that the claim about climate being extremes much of the time rears its head again. If you plot, for example, the ice cores (e.g. vostok, grip, etc) as CDFs then you realise the distribution does not support the idea that the climate spends much time in extreme states. It spends the majority of the time between.
I strongly agree with Tomas’ view that chaotic behaviour is likely formed from a very high number of dimensions (possibly infinite). Although some have claimed embedding dimensions of around 7 or 8, these are usually at the limit of detectability, and fail to account for the problems associated with complex autocorrelated behaviour in the climate time series. For more info, see these references:
Estimating the dimensions of weather and climate attractors, Tsonis et al
On embedding dimensions and their use to detect deterministic chaos in hydrological processes, Koutsoyiannis (click through to presentation)
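Both of those references rest on reconstructing a phase-space picture from a single scalar record via delay embedding (Takens’ theorem). A minimal sketch (the function and its parameters are my own illustration, not code from either paper):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Takens delay embedding: turn a scalar series into the vectors
    (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}). Attractor-dimension
    estimators such as Grassberger-Procaccia operate on these vectors."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# A periodic signal embeds as a closed loop (a 1-dimensional attractor);
# detecting an embedding dimension of 7-8 would require genuinely
# high-dimensional structure in a far longer, low-noise record.
signal = np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))
vectors = delay_embed(signal, dim=3, tau=25)
```

The autocorrelation problems mentioned above bite precisely here: strongly autocorrelated noise can mimic a low-dimensional attractor in such embeddings.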
Observational data show the averaging. Interested readers can review the record of the various oscillations to confirm this. Moving toward equilibrium does not reduce entropy in the Universe we occupy.
I’ve also pointed out that the Koutsoyiannis papers do not contradict these general principles – see our previous discussions.
No, your previous discussion merely asserted that the Koutsoyiannis papers do not contradict these “general principles”. At no point did you explain why, or which aspect of the analysis was wrong. The results are straightforward: your equilibrium reduces entropy, therefore it cannot maximise it.
Furthermore, your statement that you can see averaging is extremely funny indeed. So, the objective mathematical analysis is wrong but when you squint your eyes right and hold the page up to the light just so it is right? Interesting stuff, but nothing to do with science.
Spence – I believe it would be useful for others to review the climate record to draw their own conclusions. That’s probably better than asking them to referee blog disagreements.
Fred, to be honest I’m not interested in what other people think or do. I have looked into the claims being made, including objective, rigorous analysis, and so far I can see nothing wrong with the claims of Koutsoyiannis et al.
However, their analyses show that your claims are clearly and objectively incorrect.
Science isn’t a popularity contest. Claims are right or wrong. What the majority of people think has little or no bearing (and the fact that this means more to you speaks volumes). Your claim, on the back of the analysis of Koutsoyiannis, is just plain wrong.
Since Koutsoyiannis’ analysis shows your claim to be false, I am interested to see if your claim has credible analytical support, or whether you can show Koutsoyiannis’ analysis is wrong in some way (either in assumptions or in analysis).
As a sceptic, this latter thing would interest me the most. I’ve failed to find a fault in this analysis, but I was hoping you may be able to find a problem I did not see. At present I can only assume you have not found a fault. In which case your claims remain without scientific basis. Particularly as your claims do not rise above hand-waving at this point.
I think you are mixing up the facts that the observed climate a) has oscillating patterns within bounds and b) that there are energy bounds which establish bounds for climate variables, with the idea that there are observable statistical central tendencies. Most of these statistical quantities, like mean global temperature or median global ice cap, etc., are artificially created entities for convenience, with no direct way to relate them to actual physical entities, aren’t they?
“more than one [stable] state has been possible but that these alternatives have been widely separated – e.g., ‘hothouse’ and ‘icehouse’ climates in which, respectively, ice was absent even at the poles and present even close to the equator. I’m unaware of evidence for closely spaced stable states.”
What is your definition of a stable state? How do you define “closely spaced?”
The Quaternary has had at least 5 ice ages (epochs). None of them could be classified as an icehouse or snowball. The warm interglacials cannot be counted as a Cretaceous Hothouse either. Does the Quaternary count as a 2.5 million year geologic period (system) without a stable state? Or is the Quaternary a time period with at least 9 closely spaced semi-stable (lasting < 200 Kyr) states alternating between warmer and colder?
If it is the latter, then we have many closely spaced stable states, several within the past 2 Ma.
If it is the former, without a stable state, then analysts using GCMs are using an unstable state as their baseline in their definition of anomalies. Because we did not begin with a stable state, the entire premise of anthropogenically induced climate change is falsified.
Stephen – I should probably have made my point about stable states in the absence of external perturbations clearer. Forcings, including the orbital forcings that triggered glaciations and deglaciations, introduce trends that change where on a curve stability resides. In the presence of a forcing (including current greenhouse gas forcings), the tendency toward stability will drive the Earth’s energy budget away from a previous state to a new one, and these of course are closely spaced, but they are closely spaced because of the forcing. My point was the absence of evidence for Earth settling into equally stable alternative states on its own and then staying there, simply nudged in one direction or another via processes internal to the system.
With your definition, “in the absence of external perturbations”, you will never get any such evidence, because external perturbations are never absent in the real world. The orbital cycles, for example, are always present and are visible in many high-resolution sedimentary deposits back to the early Paleozoic.
However, there is strong evidence that the effect of these forcings changes over time. Until about a million years ago glaciations tracked the 41,000-year obliquity cycle, but then over about 200,000 years they changed to tracking the 100,000-year eccentricity cycle instead. Each of these cycle lengths is associated with relatively constant extreme climatic conditions, though only a small proportion of time (less than 10%) is spent near these extreme values.
I certainly agree that the climate is never free from external perturbations. These will dictate trajectories that climate will follow outside of the natural fluctuations discussed here, and sometimes the perturbations will be small enough for the trajectories to be relatively flat. I was suggesting that the fluctuations should not cause the climate to deviate permanently in one or another direction from the trend lines. “Permanently” is of course a matter of timescales, and the question arises as to whether the deviation is persistent enough to greatly distort our interpretation of forced variation. As far as we know, none of the identified fluctuations appear to do that on a centennial basis, although they may over the course of one to several decades. That may also happen over the course of many millennia, as you imply, but we don’t yet know why the shift from 41,000-year obliquity dominance to 100,000-year eccentricity dominance occurred, particularly since the latter is not expected to influence insolation as much as the former.
“As far as we know, none of the identified fluctuations appear to do that on a centennial basis, although they may over the course of one to several decades.”
On what evidence? Spectral analysis of the Huybers 2006 reconstruction shows the orbital variation clearly separable from a continuum of “natural variability”, and that natural variability shows high variability at the centennial scale and above. The orbital bands, although distinct, explain only a small part of the overall series.
At the risk of belaboring the point, Spence, I would again suggest that readers can review the climate record for themselves, in this case to determine whether there are strong, centennial scale unforced temperature departures from the variations expected on the basis of forcing from CO2, solar insolation, etc.
I don’t believe we need to pursue endlessly questions that can be addressed by looking at the record. If beyond what’s already discussed, new evidence of unforced variation observable in temperature records is introduced, I’ll try to comment, but otherwise I probably won’t respond to further comments on the existing temperature records.
Fred, why would you look at a record little more than 150 years long to estimate centennial variability? Such an approach makes no sense whatsoever. The Huybers analysis I referred to is of a proxy record. (Dr Curry was involved in this work as well, by the way.)
I understand why you don’t answer – because there is no answer. Your position is not defendable. I don’t know why you keep saying “people can see for themselves” – science isn’t a democracy. Your statements are flat wrong and this is evidenced by the data.
Spectral analysis of these proxy records shows a continuum consistent with the instrumental record and other proxy records: a continuum of noise with approximately constant spectral power per octave of frequency. This results in increasing amplitude swings at longer timescales – exactly what we see in the ice core and long geological records.
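The scaling claim here can be checked numerically. Below is a minimal Python sketch (my own illustration, not tied to the Huybers data or any particular proxy) that synthesizes noise with approximately constant power per octave and measures the size of window-averaged fluctuations; the `swing` helper is a hypothetical name introduced just for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14

# Shape a random spectrum so power spectral density falls as 1/f,
# i.e. roughly equal power per octave ("pink" noise).
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = 1.0 / np.sqrt(freqs[1:])            # |X(f)| ~ f^(-1/2)  =>  PSD ~ 1/f
phase = rng.uniform(0, 2 * np.pi, len(freqs))
pink = np.fft.irfft(amp * np.exp(1j * phase), n)

def swing(series, window):
    """Std. dev. of non-overlapping window means: a proxy for the
    size of fluctuations at that averaging timescale."""
    m = len(series) // window
    means = series[: m * window].reshape(m, window).mean(axis=1)
    return means.std()

# For white noise, window means shrink like 1/sqrt(window).
# For the 1/f continuum they shrink far more slowly, so long
# averaging windows still show large excursions.
print(swing(pink, 16), swing(pink, 256))
```

For white noise the ratio of the two printed numbers would be about 4; for the pink-noise continuum it is much closer to 1, which is the mechanism behind larger swings at longer timescales.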
The strange thing is, while I know a number of scientists of differing levels of ability, they will all be consistent on one thing. When data is presented which shows that their theories have a problem, they take an immediate interest. To a scientist, this is pretty much the most important thing. Scientists are continually trying to break their own models, so when someone does it for them they sit up and pay attention. All you want to do is look the other way. I’ve never seen such behaviour and I cannot understand why you would adopt such a view.
“The Second Law tells us that in the absence of external perturbations, there will be some degree of predictability inherent in the tendency of a system to assume a state of maximum entropy.”
What constitutes your system and how are you defining perturbations?
Were I to regard the sun to be a perturbation (constant or not) I would conclude that this alone ensures that the Earth, as a system, never assumes a state of maximum entropy. I can see that arguments can be made about rates of entropy production tending to be maximised but not entropy itself.
I don’t disagree. That is why I used the word “tendency” – I was referring to the direction of change.
The Second Law does tell us something about departures from a steady state in a dissipative system – those departures will tend to result in a return toward the steady state unless some new external perturbation forces a change in that state. If, for example, solar irradiance and atmospheric constituents are constant, the flux imbalance at the TOA is zero, and ocean heat content is at near-equilibrium between the upper and deep oceans, I would expect a temporary shift in heat distribution within the oceans due to stochastic variations in atmospheric circulation patterns to tend in the direction of return to the earlier near-equilibrium distribution. I don’t suggest that exact equilibrium conditions exist in actuality, although they may be approximated at times, but the return in that direction would constitute an entropy increase.
I think if the debate on global warming would be led by people like Fred Moolten and Pekka Pirilä (and Judith Curry) , people who know what they are talking about, who do not treat their opponents as “morons” and respect other people’s views and are ready to discuss the science, instead of some others (on both sides) I’m not gonna mention, we would not have this fight at all. Thank you two, for me your contribution is a hope for a more civilised future…
“The theoretical considerations are interesting, but they do not necessarily tell much about the real issue of climate modeling.”
I’m not sure how you arrive at this conclusion, but I disagree. Even at the most casual level, any modeling results have to be interpreted in light of the fundamental nature of the system. The question of useful results is difficult to define. In the AGW climate change context, useful results would be prediction with some certainty. Again, the fundamental nature of the system poses a severely limiting context for interpretation leading to prediction.
As a note on terminology, you use projection, which typically refers to an extrapolation to the future outside the existing data space. If the model is purely experimental (data derived), then context is the only basis for the extrapolation. My reading of this and other of Tomas’ posts (and supporting material) is that there is currently no theoretical basis for claiming the ability to predict. What’s left is really an extrapolation, with some additional context. The extent to which the extrapolation is reliable, then, is determined by the extent to which the context supports its reliability. The available context includes both the theoretical framework Tomas has shared, along with the theoretical details (physics, if you like) of the model(s). In the end, you have to have a basis for believing a prediction is reliable, and the theoretical context certainly puts a limitation on how reliable any prediction could be.
That Tomas points to a paper that gives a practical approach in line with the theoretical limitation is helpful. Models derived in this fashion have a more favorable context for interpretation – other approaches don’t (and can’t) give a firm basis for believing the predictions.
For anyone interested, here’s a mathematical treatment of reasoning, and it’s this type I think applies to the current AGW controversy: http://www.stanford.edu/~kdevlin/ModelingReasoning.pdf
i very much like the article you linked to.
I’m glad somebody found something useful. It’s a broader topic than usually comes up in industrial research, but it does come up in making long term research / product plans. 5-10 years down the road in high tech is always very uncertain.
You may also like this presentation:
Artificial intelligence shines a lot of light on natural intelligence.
“In the AGW climate change context, useful results would be prediction with some certainty.”
Harold – If you are asserting that the ultimate test of models or even deduced principles is empirical, then I agree, always with the realization that agreement between theory and observations is never final, because a new observation may yield a disagreement.
Empiricism is certainly the basis for my conclusion that most of the fluctuations that we know modify climate behavior on timescales of interest tend to average out. Mathematically deduced conclusions that predict otherwise would therefore appear to be wrong, at least for those fluctuations. We also know that models are always “wrong”, in the sense that they invariably fail to match observations exactly, and perhaps they are always destined to fail in that manner. On the other hand, they have usefully come close enough to a good match for us to have some confidence in their ability to predict future trends to some degree of accuracy, as well as their ability to demonstrate what values of parameters of interest correctly permit them to match past observations.
I don’t think we can reason our way to an accurate understanding of climate dynamics, without constant checking along the way to ascertain how well our conclusions are consistent with reality.
Slightly off topic but… “On the other hand, they [GCM’s] have usefully come close enough to a good match for us to have some confidence in their ability to predict future trends to some degree of accuracy,…”
This may be true to some extent. However, it is climate scientists who have dramatically undermined the layman’s confidence in such models with statements such as this: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.” Note that I have no problem with the first half of Trenberth’s statement, since it indicates a comprehension that the evidence is not lining up with the theory on offer. But the second phrase connotes a bias that undermines credibility. All the GCM success in the world won’t be able to overcome such bias in the minds of the general populace. Especially not while climate shows no correlation to CO2 concentration.
I used the word projection to differentiate from prediction or forecast having in mind that the future path is influenced very much by assumptions on external factors, in particular by assumed emissions.
I find all kind of theoretical and also philosophical considerations interesting, but I have found that they are all too often misused as arguments against perfectly valid analysis. What we really need and use is best available knowledge and understanding. Best available is never the full and only truth. It is not strictly correct, but it may be good enough and appropriate for its intended use. Formal arguments from idealized mathematical constructs or logic fail often completely, when people try to apply them to this best available knowledge.
Climate change skeptics use this kind of false reasoning very much. They claim that some scientific results are worthless, because the results do not satisfy some formal requirements that the skeptics regard as decisive. This kind of argumentation is mostly worthless and close to straw-man argumentation. There is often no real reason for requiring the satisfaction of such formal rules, be they about the properties of climate models or about hypothesis testing.
Often these false claims have some relationship with real problems. The mathematical idealizations point to real problems that must be taken seriously, or there may be faults in the logic of presenting or using scientific knowledge. These problems are, however, usually not absolute (yes or no; correct or wrong) but relative (good or bad; useful or of little practical value). The formal arguments are not appropriate, because they may be violated even when everything is done as well as it can be and when the level of understanding is safely accurate and reliable enough. For correct criticism the weaknesses should be pointed out in a quantitative way showing that the data is misused in a significant way or that the conclusion is not supported at the level required for its practical use.
The value of the climate models can be judged only by studying the existing models. If a particular model gives well-defined results, that is by itself proof that certain criticism does not apply to it, because some criticism claims that no results can be obtained at all. That is of course not proof that the model is good or of any practical value, but it proves a certain stability. How useful the model is can then be estimated by studying its properties further and by comparing it to empirical data.
“the future path is influenced very much by assumptions on external factors, in particular by assumed emissions”
Begs the question. But more to the point:
“What we really need and use is best available knowledge and understanding.”
As a confirmed experimentalist, I’ll just point out that theory places limitations on interpretation of experimental results. That’s even for the case that independent variables can be controlled. Add in the possibility of not knowing all the important independent variables, and not being able to run controlled experiments, and limits on interpretation become fairly difficult. Data is easy; analyzing and interpreting is difficult.
I agree fully with your latest comment. We always need some theory to give meaning to observations. Concerning this thread, my reservations are one step further – “on the theory of the theory”. I have found in all writings of Tomas Milanovic much that I agree fully with, but at some point my thoughts diverge. He has emphasized how the standard discrete chaos theory doesn’t apply to infinite-dimensional or field-theoretical problems. On that part I agree fully. I agree also with most of the comments he presented on Eli Rabbett’s comment.
The main difference is that my doubts go even further concerning the practical value of the concepts of chaos theory. It is possible that some of the climate models are formulated in such a way that their behavior has features of chaos. I consider it also possible that something qualitatively similar to attractors may be present in the behavior of real climate, but I do not believe that a hypothetical detailed and accurate mathematical description of the Earth system would have problems related to ergodicity, or that it would have an attractor in the mathematical sense. The reason lies in the dissipative and stochastic parts of the full mathematical description.
On the practical modeling level, I do not believe that these concepts can be used to prove the impossibility of having good and useful climate models. The concepts tell about potential pitfalls that must be studied carefully before extrapolating any partial success in using a model, but they do not prove that success is impossible.
How far we can trust model results, when they are subject to all these problems, is a difficult question. Studying how the model reacts to changes in details of discretization (e.g. changing spatial grid size and/or time steps) gives some information, but even this approach has its limits. How the equations are discretized, and how conservation laws that cannot be represented exactly are enforced, are other details that must be studied. All these features influence the dynamics of the model. The results can provide some insight into the dynamical processes that occur in the real Earth system, but reaching a level that is as rich as the real system may be too difficult. It’s probably even more difficult to have this richness in even rough agreement with the real dynamics.
“For correct criticism the weaknesses should be pointed out in a quantitative way showing that the data is misused in a significant way or that the conclusion is not supported at the level required for its practical use.”
“On the practical modeling level, I do not believe that these concepts can be used to prove the impossibility of having good and useful climate models.”
And here we bring our different experiences as contexts to interpreting the situation. Having spent many years dealing with Integrated Circuit processes, I’m used to working with several thousand independent variables, any one of which could indicate a serious “problem”. The underlying physics and chemistry is relatively well known, although new phenomena are routinely discovered and some areas are not modeled well. Philosophically, it was simple for me to identify the known likely problems (these are already definitively known to exist under some circumstances), so I would avoid these. Controlled experiments would reveal other problems, and mechanisms causing them could be elucidated by designing experiments to confirm the causal structure. Then there were the “problems” indicated by data (not a designed experiment) which could be confirmed as problems by seeing effects in multiple dependent variables. The response of multiple independent variables indicated a real effect, as opposed to some phantom, so it gave confidence that the problem was real and therefore worth pursuing. Then there were the “problems” indicated by a single variable. I’d have “problems” indicated by a single variable all the time. Most of these would be phantoms, even though a physical cause could be proposed which would explain the data behaviour. Quantifying limitations on the data and theory wasn’t always possible, and it certainly wasn’t necessary at this stage. Being mindful of these limitations was. My perspective, then, is that I have much higher requirements for a single-variable indicated “problem” to be accepted as a problem to be worked on, even with a proposed mechanism.
Once I agree I have a problem to be worked on, then investigating and quantifying any limitations becomes a critical part of understanding the problem and any solutions. It’s at this stage that I also would take all available information (including modeling) in search of a solution zone. Modeling showing general behavior was good enough; the actual behavior usually could be confirmed experimentally.
With this background (and for space concerns, I’ve simplified things somewhat), you can probably see that I view the current climate model state as being appropriate for the solution phase, but not very helpful yet in defining whether there is a problem. The current understanding of theoretical limitations is useful for the problem-definition stage, but not very good for defining solutions. Due to there being a possible catastrophe, it’s worthwhile for the scientific community (and I use this broadly) to do the work to develop the field. I wouldn’t tell another manager they’re wasting their R&D money by investigating something that in their judgment is important – they’re accountable for their results, so they get to control their research program. If I were asked by upper management what I thought of it, I would answer very carefully.
OOPS! Some “independent” s/b dependent
Yes, a “projection” is properly the elaboration of a stated suite of assumptions, and holds only within their bounds. Given the apparent non-linear hypersensitivity of climate to “forcings”, it doesn’t take much to veer off in some far different direction than the projection projected. Forcings may change in power and even sign over the full range of their operation, e.g.
Speaking as someone with a bit of math background, I would tend to agree that I’m skeptical about how important it is to try to apply the fancier parts of chaos theory, such as strange attractors, to understand the earth’s climate at this stage of knowledge. Even in an abstract mathematical setting, it is EXTREMELY difficult to prove anything about a strange attractor. The existence of the Lorenz attractor was proven I believe in the 90s with the aid of many hours of computer time. (I’m talking about formal mathematical existence – of course people thought it was there, but numerical simulations don’t prove that there couldn’t just be a bunch of periodic orbits behaving in a very complicated way.) The existence of the Hénon attractor, which is for a discrete two-dimensional quadratic system, was one of the crowning achievements of a mathematician named Lennart Carleson, who is known for tackling near-impossible problems.
Of course, we don’t need to have formal mathematical proof of any of these things for climate models, but the problem is that the nature and existence of the strange attractor can vary a lot depending on the parameters of the dynamical system. The equations for the Lorenz system will only produce a strange attractor if the coefficients of the differential equation are in the right region. I would suspect that what is known mathematically about how strange attractors behave if you vary the parameters of the system is very limited. Since any model of the earth’s climate is going to have external parameters that change over time, such as radiation from the sun or greenhouse gas levels, I am dubious about whether it would be meaningful to talk about or look for a “strange attractor” that actually represents the real-world climate.
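The parameter dependence is easy to demonstrate with the Lorenz system itself. The sketch below (a hand-rolled RK4 integrator; `final_separation` is an illustrative helper of my own, not from any climate code) evolves two initial conditions 1e-8 apart, once at ρ = 28, where the strange attractor exists, and once at ρ = 10, well below the chaotic regime, where trajectories instead settle onto a stable fixed point:

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """One RK4 step of the Lorenz equations."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def final_separation(rho, steps=5000, dt=0.01, eps=1e-8):
    """Evolve two initial conditions eps apart and return their
    distance after `steps` RK4 steps."""
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([eps, 0.0, 0.0])
    for _ in range(steps):
        a = lorenz_step(a, dt, rho=rho)
        b = lorenz_step(b, dt, rho=rho)
    return np.linalg.norm(a - b)

# rho = 28: chaotic regime, the tiny perturbation grows to attractor scale.
# rho = 10: stable fixed points, the separation collapses instead.
print(final_separation(28.0), final_separation(10.0))
```

Same equations, different coefficients, qualitatively different long-term behavior – which is exactly why the existence of an attractor at one parameter setting says little about a neighboring one.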
However, the solar system is an example of a potentially chaotic system where we nonetheless see much regular behavior. I would guess that if there is an area where chaos theory could help climate scientists, it would be by analogy with what we know about how to predict things in the solar system. One example is “orbital resonance”, where objects in similar orbits can have stabilizing or destabilizing influences on each other. This is something that occurs over the potentially chaotic “noise” of other gravitational bodies and thus can be used to analyze a complex system without having exact information. Paraphrasing the wikipedia article:
“…an orbital resonance occurs when two orbiting bodies exert a regular, periodic gravitational influence on each other, usually due to their orbital periods being related by a ratio of two small integers. Orbital resonances greatly enhance the mutual gravitational influence of the bodies (i.e., their ability to alter or constrain each others’ orbits). In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be stable and self correcting, so that the bodies remain in resonance. Examples are the 1:2:4 resonance of Jupiter’s moons Ganymede, Europa and Io, and the 2:3 resonance between Pluto and Neptune. …. The special case of 1:1 resonance (between bodies with similar orbital radii) causes large Solar System bodies to clear the neighborhood around their orbits by ejecting nearly everything else around them; this effect is used in the current definition of a planet.”
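The small-integer ratios in that quote can be verified directly from published orbital periods. A brief sketch (the period values below are approximate, and `resonance` is just an illustrative helper):

```python
from fractions import Fraction

# Approximate sidereal orbital periods in days.
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155,
           "Neptune": 60190.0, "Pluto": 90560.0}

def resonance(p1, p2, max_den=10):
    """Nearest small-integer ratio of two orbital periods."""
    return Fraction(p1 / p2).limit_denominator(max_den)

print(resonance(periods["Io"], periods["Europa"]))        # 1/2
print(resonance(periods["Europa"], periods["Ganymede"]))  # 1/2
print(resonance(periods["Neptune"], periods["Pluto"]))    # 2/3
```

Io:Europa and Europa:Ganymede each come out 1:2 (hence the 1:2:4 chain), and Neptune:Pluto comes out 2:3, as the quote states.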
I think something along these lines was suggested in a previous post about the work of Tsonis, about various oceanic oscillations being coupled together and then either amplifying or disrupting each other.
I love mathematics, and mathematicians almost as much.
Sparsity of language is especially valued by that lot, and where not terse, usually a mathematician is being mischievous in some small and intellectually interesting way.
For example, “entirely incorrect” appears redundant; we know a false statement is false, so ‘entirely’ appears superfluous. However, for a mathematician to start by saying “entirely incorrect” is approximately as interesting as for your fifteen-year-old daughter to announce, “First of all, I am entirely not pregnant.”
A mathematician once described to me a book on how to play the game of contract bridge as ‘entirely awful’. The book competently presented the rules of the game, was factually correct about its brief history of the game and its important players, and gave good advice about winning strategies. It had no significant errors of spelling, diction or grammar.
However, the mathematician proceeded to demonstrate how he could always defeat every tactic in the book. That he was a world-class bridge champion didn’t really matter in all this. The book was ‘entirely’ awful to him, personally.
Entirely incorrect could not have just the plain meaning of ‘incorrect’ in sum.
It may mean ‘at least a single element is not correct’.
It may mean ‘every element passes tests of correctness, but I do not like how it is put together’.
It may mean ‘every element passes tests of correctness, I admire how it is put together, but there is something artful about the arrangement of facts that produces a higher level false conclusion.’
It may mean ‘every element and every level passes tests of correctness, but the entirety does not equal the entirety of my own knowledge and expertise’.
Now I must go back and read past the sixth word of the author to find out which mischief he means in this case.
Looks like my post has been edited out of relevance.
“While this statement is incorrect, I find it interesting “
Didn’t it once say:
“While this statement is entirely incorrect, I find it interesting “ (Indeed, it retains the double space between “is” and “incorrect” where the edit appears?)
It’s very difficult to discuss an invisibly moving target. The old way was funnier.
Thank you for this post. I will admit to being a casual reader and I was only able to follow about 80% of it, but I agreed with everything I think I understood. :) I agree with you regarding the work of Tsonis as being on the right track for explaining 20th century global temp changes.
Eli Rabbett’s original comment springs from his neglect of internal climate variation and his view that “we know all of the forcings,” “we know the basic physics” and “the role of atmospheric CO2 is dominant.” If these statements were true, then the GCMs would only need 1 function (field). Based on what Willis Eschenbach has written, it appears that is exactly what the GCMs do. See http://wattsupwiththat.com/2010/12/19/model-charged-with-excessive-use-of-forcing/
Climate modelers and the defenders of GCMs such as Eli and Pekka will tell you the oceanic oscillations are just internal variability, therefore cannot add any heat to the climate system (only move it around). My view is that we do not know all of the forcings, we do not know the basic physics and so it is impossible to know if atmospheric CO2 is dominant. If we knew all of the forcings and the basic physics, then we could get our accounts to balance and Kevin Trenberth would not be embarrassed about the travesty of not being able to explain the lack of warming.
Here is one example of forcings and physics we do not understand: Leif Svalgaard and others claim changes in TSI are not enough to account for the 20th century warming. Yet Nicolas Scafetta and others have identified a solar signature in the temp record. How do we resolve this? One possible explanation is that small changes in TSI make a bigger than expected difference in our climate.
We have not even begun to talk about the effect of cosmic rays, clouds, etc.
Thank you again for a great post.
Many thanks for an interesting article. I must admit that unfortunately it has been quite a while (with no use since) from my brief studies on this subject (N-S, fractals, ODEs), so quite a lot went over my head. Just a brief remark from that era: the opening statement for some basic lecture series on differential equations, as I recall, was ‘Gentlemen, in these lectures we will mostly handle nicely behaving, homogeneous differential equations. Unfortunately, you will later discover that most of the natural phenomena you will end up modelling will be anything but.’
A small question to Tomas, when we categorize the theoretical problem of climate system as being a system of PDE’s with characteristics you describe, do we recognize the same properties from model outputs? Are we able to demonstrate this via measurements?
“Sure it will always be somewhere on the attractor but that is not a terribly interesting and accurate prediction.”
It seems there is consensus on the possibility of this general strategy for prediction, but disagreement over its “interest”. But while Eli has not explicitly shown that this can yield “interesting” predictions, neither has Tomas shown that the results can never be “interesting”.
I think it is unlikely that the dispute will be resolved at such a high level of abstraction. In particular, I think there are examples where properties of the attractor do indeed yield “interesting” information. I think that the example of statistical mechanics is a good one. Ergodicity is indeed important. But without chaos there would be no ergodicity. So ergodicity is an “interesting” prediction that can be made by examining the chaotic dynamics.
(Of course the precise character of ergodic theorems and ergodicity is a subtle business, even in an ideal gas, but I think my point still stands.)
It is fair enough to point out that ergodicity or some such property has not been established for climate. But I don’t think this is equivalent to the statement quoted at the beginning of this comment. For some chaotic systems interesting statements can be made, for some they cannot.
“…the surprising stability of the Earth’s climate over 4 billion years, which is obviously not of the kind “anything goes,” suggests strongly that a general attractor exists and its dimension is not too high. I will not speculate about what number that might be. But in any case and to use the IPCC terminology, it is very unlikely (<5%) that naïve temperature averages, 1 dimensional equilibrium models, or low resolution numerical simulations (GCM) can come anywhere close to solving the problem."
I think the second sentence here is not clearly enough supported.
The first problem is the blurring together of "naïve temperature averages, 1 dimensional equilibrium models" with "low resolution numerical simulations (GCM)". It is not clear whether this is intended to cover all state of the art GCMs? (In some sense they are all still low resolution.) If so, I would like to see some evidence that no GCM can do significantly better than "naïve temperature averages" or "1 dimensional equilibrium models".
The second problem is that "anywhere close to solving the problem" is not quantitatively defined. Without a metric the term "close" is close to meaningless.
In fact I would suggest that "naïve temperature averages" or "1 dimensional equilibrium models" do surprisingly well. You can get the surface temperature in Kelvin to around 10% by plugging in atmospheric composition, albedo and the solar constant. The problem is that we are interested in variation on a scale of 0.1-1%.
It is not obvious to me that refinement using a GCM cannot get us to 1% or even 0.1%. Going back to statistical mechanics, in some condensed matter systems there is a similar situation where an oversimple model will give around 10% while detailed numerical calculations will give 0.1-1% accuracy (e.g. many viscosity, diffusion, or conductivity calculations). In other situations there is no good theory to really get us off the ground (high temperature superconductivity).
On whether GCMs are outperformed by simple models (e.g. simple energy balance models), I think there is evidence for this. E.g. see this discussion:
Kaufmann and Stern 2005
There was another discussion, which I now cannot find, in which a climate scientist found that EBMs consistently outperformed GCMs. I can’t remember who it was now, which bugs me, he posted up on RealClimate. He conducted a similar analysis and found simple EBMs outperformed GCMs, but his analysis was blocked from peer review by the modelling community.
Interesting but the Kaufmann and Stern article quoted by Steve McIntyre seems to have disappeared from the scientific literature. The link Steve provided is broken. Google Scholar does not show it. I was able to find a 2004 paper by Kaufmann and Stern which seems to be a precursor to the 2005 paper.
If anyone can find the 2005 paper, please post a link here. I am beginning to wonder if the journal pulled the article or something. Or maybe not.
Ron, the 2004 paper contains the exact quotes included by McIntyre, so I think it must be the same paper. Perhaps Steve linked to a copy with minor updates from 2005?
McIntyre quoted from the beginning of the abstract. It is very different from the 2004 paper.
I do note that, as with a few economists, they think an appropriate comparison is with a simple regression. Perhaps so – it’s worth looking at what insights other disciplines can give, and I do understand how econometricians have got to that point.
However, I think even a simple physics based model takes us further in some respects than a regression.
For example, what happens if you take an entire historical temperature vs. forcing series for the earth, derive a coefficient of proportionality, and then apply it to the moon or to Mars? You get the wrong answer.
A one dimensional model based on solar forcing, atmospheric composition and albedo gives a ball-park right answer for each of the Earth, moon and Mars. Because it is built on understanding we can have a bit more confidence extrapolating, knowing what to change and what the limitations might be.
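For concreteness, here is roughly the kind of one-dimensional calculation being described: the effective radiating temperature from the solar constant and Bond albedo alone (the input values below are approximate, and no greenhouse term is included, which is why Earth comes out near 255 K rather than the observed ~288 K):

```python
# Effective (no-atmosphere) radiating temperature:
#   T = (S * (1 - albedo) / (4 * sigma)) ** 0.25
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temp(solar_constant, albedo):
    """Global-mean radiating temperature in kelvin from a simple
    energy balance between absorbed sunlight and emitted IR."""
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

# Approximate inputs: solar constant (W/m^2) and Bond albedo.
bodies = {
    "Earth": (1361.0, 0.30),
    "Moon":  (1361.0, 0.11),
    "Mars":  (586.0,  0.25),
}
for name, (s, a) in bodies.items():
    print(name, effective_temp(s, a))
```

The ~33 K gap for Earth is the greenhouse contribution, but the point stands: the same physically based formula lands in the right ball-park for each body, which no coefficient regressed on Earth data alone would do.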
Curve fitting can be powerful, and if you add enough parameters I’m sure you can beat a GCM on historical reconstruction. But I am more interested in a comparison between simpler and more sophisticated physically based models.
JK, I agree and it is most frustrating that I now cannot place the other analysis, comparing GCMs and EBMs, which would be a much better answer to your question. I have searched before with no success so now suspect I am unlikely to locate it again :(
Here are three offers of help. I may be way off and am weary. Will try again tomorrow if you’d like.
Of course you have to use Edit – Find on this page – EBM
Ctrl-F is the easy version.
Followed by F3 to repeat the operation.
Any realistic assessment of model veracity leads to questions of sensitive dependence and structural instability that have not been resolved.
‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’
‘For many purposes that are well demonstrated with present practices, AOS models are very useful even without the necessity of carefully determining their precision compared with nature. These models are structurally unstable in various ways that are not yet well explored, and this implies a level of irreducible imprecision in their answers that is not yet well estimated. Their value as scientific tools is undeniable, and the theoretical limitations in their precision can become better understood even as their plausibility and practical utility continue to improve. Whether or not the irreducible imprecision proves to be a substantial fraction of present AOS discrepancies with nature, it seems imperative to determine what the magnitude of this type of imprecision is.’
‘Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.’
‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation (see ref. 26).’ http://www.pnas.org/content/104/21/8709.long
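McWilliams’ point about ensembles across model families can be illustrated with a deliberately tiny toy (my own construction, nothing from the paper): a “family” of logistic maps whose parameter r differs slightly plays the role of structurally perturbed models, and the spread of their long-run means plays the role of an upper bound on irreducible imprecision.

```python
def long_run_mean(r, n_spinup=1000, n_avg=5000, x0=0.5):
    """Time-average of the logistic map x -> r*x*(1-x) on its attractor."""
    x = x0
    for _ in range(n_spinup):      # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_avg):         # average on the attractor
        x = r * x * (1.0 - x)
        total += x
    return total / n_avg

# A small "family" of structurally perturbed models (arbitrary r values):
family = [3.97 + 0.005 * k for k in range(5)]
means = [long_run_mean(r) for r in family]
spread = max(means) - min(means)
print("long-run means:", [round(m, 3) for m in means], "spread:", round(spread, 3))
```

Each member is individually chaotic, yet each has a well-defined long-run statistic; the members disagree about that statistic, and the disagreement across the family is the toy analogue of the “irreducible imprecision” in the quoted passage.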
Thanks for the link to McWilliams’ essay, hadn’t spotted this before.
Judy — a very good post. The Physics Today article link was fine, but the one to Eckmann’s site seems to be broken (from at least three points of Internet access). Maybe you should use DropBox. :-)
cb thanks for spotting this. i found a new link, which i added to the main post
Just analyze the data. The temperature of the earth is extremely stable, especially during the past ten thousand years. It is ridiculous to say that climate is chaotic and unstable. You can look at the ice core data and see that this system is stable. Before the events that raised the oceans, 8 to 14 thousand years ago, the system was stable, but the range of temperature went warmer enough to melt all the Arctic Sea Ice and then Massive Arctic Ocean Effect Snow quickly lowered the oceans and built the ice sheets to start the hundred thousand year cold period. You cannot undo eight hundred years of stable data with one molecule of manmade CO2 per ten thousand molecules of other gases.
That should have been: “You cannot undo Eight Hundred Thousand years of stable data with one molecule of manmade CO2 per ten thousand molecules of other gases.”
One per Ten Thousand. That is not even a Trace. That is a Trace of a Trace. Have any of you actually thought about that?
This is The Essential question.
(In my experience, the poster who asks the Essential question rarely gets a response.)
Ever wondered why?
I teach critical thinking. Not enough people teach critical thinking.
Libertarians hold with the belief that mankind is essentially rational. I believe we are essentially emotional.
You got a reply! (see next thread)
Steve McIntyre wrote: My own recommendation is an “engineering quality” exposition of how doubled CO2 leads to 3 deg C and to problems.
Tomas, thank you again for another example of how limited my math skills are. Tsonis does seem to me to be on the right path. It still boggles my mind why “any” unforced variation is dismissed by the climate modeler when it is not unforced, just not understood. Moving energy from point A to point B just results in different rates of conversion of that energy at the different points, which can have impacts on climate. The high latitudes warm more quickly and cool more slowly because there is more warming space than cooling space. The reverse in the low latitudes, but the space more limited than the high latitudes (greater ocean thermal mass). Rapid warming in the northern hemisphere high latitudes with much slower cooling is fairly obvious (air temperature wise). So the impact of the different oscillations (or should I say quasi-periodic climate cycles) would appear much like the instrumental temperature record with anthropogenic superimposed on top. (I think Tsonis said that, but didn’t get into why the down slopes would be less than the up slopes).
Back to pondering the optimum way of preparing fish for dinner without having to turn on the A/C.
Dallas – I’m not sure that unforced variations are “dismissed” by climate modelers as much as they are poorly modeled. This is recognized by the modelers, who are attempting to improve the ability of models to address ENSO and other internal variations.
You make a point about the concepts advanced by Tsonis et al. Here, it’s important to realize that these authors disagree vigorously with those who attempt to utilize those concepts to “disprove” a substantial role for anthropogenic forcing in recent change. Instead, they argue that strong climate responses to forcings and to natural variability are – Two Sides of the Same Coin.
Some do attempt to use it to disprove AGW. I don’t; I just believe it helps understand the impact. The general consensus of the modelers, as I see it, is that all internal variation averages out over a time frame where it can be discounted as weather. The time frame for a climate trend varies a little, but Tamino figured the minimum time to be ~14.7 years. Internal variation meets that criterion, looking at the 1910–1940 period with the more correct (or currently accepted) TSI reconstructions.
As far as the ENSO, Mike Mann entered the study to disprove the impact of internal variability (I would have to dig that up, but it is on realclimate somewhere). Statistics are wonderful, but you can seem to find what you want, instead of what is, if you are not careful. That is one reason I liked Annan’s Bayesian w/expert prior method. Not that it is the best method, but it recognizes the unintentional bias prior knowledge can have on results. Guys are expecting a fat positive tail and finding it by using methods (log normal for example) that will produce the expected results. Modelers are expecting minimal impact of internal variability, and finding it. How much their bias, intentional or not, impacts climate sensitivity I don’t know.
I should add that skeptics are also finding what they are looking for as well. There are a fair number of middle of the road tribe members that are treating the data with respect.
Are you sure you aren’t introducing a strawman in stating that modelers are expecting minimal impact of internal variability? I think the modelers are frustrated that they can’t yet assess unforced internal variability as well as forced variability, but they don’t discount its impact. As before, timescales are critical. ENSO is certainly dominant over greenhouse gas forcing over the course of a few years. The PDO and AMO may have impact comparable to forcing (assuming that they are unforced, which may not be completely accurate), but that has tended to average out over a century. Millennial scale variations may (or may not) exhibit a dominant unforced component, but their impact on centennial scales is small. The question as to whether we are overlooking some other potent but unforced climate dynamic that should affect our interpretation can’t be settled conclusively yet, but no evidence of such a factor has surfaced, and the point made in the linked reference by Swanson remains pertinent – climate that responds strongly to natural variation will respond strongly to anthropogenic forcing, and so we need to disentangle the relative roles of each.
I really don’t think it is a strawman. What I get from most comments at realclimate is the same song and dance about solar and volcano pre-1950 and GHG after. With more current TSI recons, nearly 0.25 C of the 1910–1940 warming appears to be internal. With ensembles it is less of course, but still significant. If you extend the TSI recons back, the Maunder and Dalton minima require a better explanation.
Would more natural variability indicate a higher sensitivity to 2XCO2? Yes, but there is still a great deal of wiggle room with the range of sensitivity due to GHG’s as it is estimated now. It is not a he’s right, he’s wrong issue. The Swanson post you referenced seemed like placation for the team instead of anything pertinent. “Regardless, it’s important to note that we are not talking about global cooling, just a pause in warming.” Not long before that post there was noise by some of the team players that Tsonis might be a skeptic.
So I am pretty sure it is not a strawman since there are some huge egos out there on both sides.
You can go back to the “Is the Antarctic warming?” in January and see how Eric and Mike defended their RegEm and TTLS methods and how they verified there was no over fitting. They could have used synthetic data if they wished. Intentional or not, they missed a pretty basic test of their algorithm.
Tsonis, imperfect as it may be, does tend to indicate a greater internal variability role if you use reduced TSI impact. You can’t tell what the consequence are unless you look at it. There is no money in do overs, so simple double checking is dismissed for more ground breaking research. So we are stuck in a “let the bad guys find the mistakes” mode.
Dallas – I noticed your earlier citation of different TSI reconstructions. They all, if I remember correctly, showed a rise between 1910 and 1940, although at different rates. However, CO2 was also rising during this interval, and so it’s not clear that the combined TSI/CO2 effect was insufficient to explain the temperature increase. My interpretation is that there is still room to include other natural variations (solar changes are “natural”), but not necessarily a requirement to include them. Do you have data that contradict that assessment?
I do notice that the AMO was bimodal during this interval, while the PDO was more positive than negative.
The magnitude of the rise 1910 to 1940 is pretty small. If there were no other forcing or anything else, the peak-to-valley TSI change at TOA is about 1 W/m^2. Variation as a percentage of the average is very small, 1/1365. Lean states that solar may be 0.1 degree C (now, not in 2000). That 0.1 appears to be an upper limit, though there is some disagreement.
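A back-of-envelope check of these numbers, using standard assumed values (planetary albedo 0.3, no-feedback sensitivity of roughly 0.3 K per W/m²), shows why a ~1 W/m² TSI swing only buys you a few hundredths of a degree before feedbacks:

```python
delta_tsi = 1.0        # W/m^2 at TOA, peak to valley (from the comment above)
albedo = 0.30          # planetary albedo (assumed standard value)
lambda_nofb = 0.3      # no-feedback sensitivity, K per (W/m^2), approximate

# Geometry: incoming sunlight is intercepted over a disc but spread over a
# sphere (factor 4), and the albedo fraction is reflected before absorption.
forcing = delta_tsi * (1.0 - albedo) / 4.0
delta_t = lambda_nofb * forcing
print(f"forcing ~{forcing:.3f} W/m^2, no-feedback response ~{delta_t:.3f} K")
```

That is ~0.175 W/m² of forcing and ~0.05 K of no-feedback response, consistent with Lean’s ~0.1 C being an upper limit once modest amplifying feedbacks are allowed for.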
My desktop is a closet warmer and refuses to boot up :) but if I remember correctly, the Hoyt 1993 and Lean 2000 results (1900 to 1950) were about 4 times the Wang/Svalgaard ones. For a ballpark, 0.22 versus 0.7-ish. CO2 was pretty small; non-volcanic aerosols pretty significant but not huge; volcanic nearly non-existent.
Meehl et al. 2004 has the breakdowns; this graph on wiki is easier to read than the Meehl paper: http://en.wikipedia.org/wiki/File:Climate_Change_Attribution.png
So I don’t have anything that contradicts anything, just what appears to be significant difference in Solar and questions about the aerosol estimates.
One thing I don’t understand about the Meehl paper is aerosols seem to have a positive forcing for a short period. I haven’t gotten into the data to figure that out.
Since the PDO and AMO are quasi-cyclic, it is difficult to pin anything on them. The “all La Niñas are not created equal” thing. Tropical reconstructions show the PDO/ENSO better, but the PDO gets lost in the higher latitudes, probably due to interaction with other oscillations. That is where a turbocharged version of Tsonis’ Dynamical .. would come in handy.
Eliminating the end points of the 1976 to 1998 rise produces what Swanson says is the ‘true’ trajectory of global warming – about 0.1 degree C/decade or half what the IPCC say was the recent trend.
There is another very significant contributor to global energy absorption which did not even make the IPCC list – black carbon. This is simply and quickly reduced – with positive health effects. http://www.time.com/time/health/article/0,8599,1938379,00.html
There are changes in cloud radiative forcing – again something only peripheral to the IPCC – that are far larger than theoretical CO2 effects – Norris and Slingo 2009, ‘Trends in Observed Cloudiness and Earth’s Radiation Budget’. Again this has such a cursory treatment by the IPCC.
It is all such nonsense.
I have often wondered how much impact black carbon has had on melt rates in Arctic and glacial ice. Do you have any insights on that?
Thank you for pointing out the elephant in the room of CO2 obsession – clouds.
That is – quite a lot it seems.
We could also save 1.5 million people a year from an early death as well.
That is – black carbon has quite an effect it seems.
The planet exists in dynamic equilibrium with all the incoming factors of the sun, the solar system, the galaxy, and outgoing reactions to that in the biosphere. That does not say that the equilibrium is a) steady, or b) not subject to significant divergences. It means that the pluses-and-minuses average out over some time period, within which we can still have extreme spikes like ice ages and global tropical forestation or deserts.
The interplay of many forces up and down balances itself out, but not necessarily to a “best” or a unique place. Chaos theory is borne out by such events as ice ages. What combination of factors leads to it? Perhaps a chaotic combination, one that is present 1:300 times. Like rogue waves.
Rogue waves are both part of chaos theory and a piece of the sea’s dynamic equilibrium. They are not mutually exclusive.
You will find that the average changes dramatically with the interval, so there is no long term average. Over the last 10,000 years it has been warm. Over the last million very cold. Over the last 100 million rather warm. If the average is different for every scale then there is no average. These are called “strange statistics.”
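A toy illustration of this “no long-term average” point, using a plain random walk as the extreme case of persistence (my own example, not anything from the thread’s data): the mean over each window length is a different number, and lengthening the record does not make it settle.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Build a random walk: the cumulative sum of i.i.d. Gaussian steps.
x, series = 0.0, []
for _ in range(100_000):
    x += random.gauss(0.0, 1.0)
    series.append(x)

# "The average" depends on the interval you average over.
means = {}
for window in (100, 1_000, 10_000, 100_000):
    means[window] = sum(series[:window]) / window
    print(f"mean over first {window:>6} points: {means[window]:9.2f}")
```

The real climate is presumably not a pure random walk, but any process with strong long-range persistence (Koutsoyiannis-style Hurst behavior, mentioned later in the thread) shows the same scale-dependent averages in milder form.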
It is ONLY chaotic when science has never understood all the interacting players and what their role is.
Much of science is negligent in including ALL the contributing factors for a certain outcome.
Energy only has so much force it can work in. This is why we do not have winds in the thousands of km/hr range. The planet’s maximum physical energy to work with is 1669.8 km/hr, not factoring in the gases and vapors that generate friction, and also gravity.
My first exposure to bounded chaos was via a description of a simple bull whip. The tip of the whip can crack at any of an infinite number of locations but cannot crack at any position beyond the length of the whip. The realm of its chaos was a sphere the diameter of the whip’s length. It was part of a larger essay on reproducible results in analyzing weather, as I recall, and the fact that the results were not reproducible because of rounding errors in the initial state parameters, and quite possibly missing parameters. I’ve had an interest in chaotic systems since.
In another thread I replied that nature does not round off numbers as mere mortals must and so we have chaos. In response I got the same pitch about the parameter space and I didn’t believe it then, either. Parameters that resolve to irrational numbers are beyond our ability to measure and so we can never know with adequate precision the initial state of something we are studying. We get “good enough”. Nature is not bothered by that imprecision and is fully capable of applying the entire set of influences from everywhere in the universe to the study of the 3-body problem, for example. It is chaotic to us, quite predictable to nature (as a concept). Moreover, nature doesn’t retry so is not compelled to recreate the initial conditions, allowing instead the conditions to mature in a continuum. We do attempt to retry and fail completely at reestablishing the initial conditions to the degree of precision needed to reproduce exact results, or in the case of our modeling observed phenomena we fail to see and parameterize the entire picture.
And yes, there is surely much to critique in what I’ve just said, but you can take an idea only so far in a blog post ;)
If we use the Lorenz equation analogy to our current climate, while it is clear that the Lorenz equations have an attractor, imagine making one of the constants in the equation vary linearly with time. For any given value of that constant there may be an attractor, but I would suggest you can’t define an attractor in the time-varying system, because its definition requires a kind of steadiness. Changes in forcing are equivalent to changing the constants in the equations, and we are currently undergoing a significant such change. The idea of an attractor has no value in a transient situation such as we have now, even if it did in the past for millennia at a time.
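The thought experiment above is easy to sketch numerically: the classic Lorenz system, but with ρ drifting slowly in time, so there is no fixed attractor for the trajectory to settle onto. The parameter values and the drift range are arbitrary illustrative choices.

```python
def lorenz(state, rho, sigma=10.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations for a given rho."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, rho, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz(state, rho)
    k2 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k1)], rho)
    k3 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k2)], rho)
    k4 = lorenz([s + dt * k for s, k in zip(state, k3)], rho)
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state, dt, n = [1.0, 1.0, 1.0], 0.01, 10_000
for i in range(n):
    rho = 28.0 + 7.0 * i / n   # slow linear drift 28 -> 35: the "changing forcing"
    state = rk4_step(state, rho, dt)
print("final state:", [round(s, 2) for s in state])
```

The trajectory stays bounded throughout, which mirrors the commenter’s distinction: boundedness survives the drift, but the notion of a single time-invariant attractor (and hence of an invariant statistics to converge to) does not.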
With regard to the Tsonis paper:
I feel it would be improved if they reconsidered their predictor for (symbolic) phase.
They construct a least-squares predictor matrix:
M = [Z+ Z^T] [Z Z^T]^(-1)
and their predicted phase is then:
Zn+1 = M·Zn
so their predictor is multiplicative, which I do not think is correct.
I should say that phase is additive and monotonically increasing, with a predictor of the form:
Zn+1 = Zn + w·Δt (where Z is the (symbolic) phase, w some measure of angular frequency, and Δt the time step).
Given that they appear to use only the residual mod 2π of the phase, their predicted phase will tend to be less than the current phase except over the discontinuity at 2π; that is, the predicted phase mostly runs backwards, which is problematic.
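A toy scalar version of this objection (the paper’s M is a matrix; the scalar least-squares analogue below is my own simplification) shows the pathology: fitting z[n+1] ≈ m·z[n] on a phase kept mod 2π yields a slope m below 1, so the multiplicative prediction mostly runs backwards, while the additive predictor tracks the phase exactly.

```python
import math

w_dt = 0.3  # true phase increment per step (arbitrary illustrative value)
two_pi = 2 * math.pi

# Synthetic wrapped phase sequence, kept as the residual mod 2*pi.
z = [(w_dt * n) % two_pi for n in range(200)]

# Least-squares multiplicative predictor z[n+1] ~ m * z[n].
m = sum(a * b for a, b in zip(z[1:], z[:-1])) / sum(a * a for a in z[:-1])

# Compare one-step prediction errors: multiplicative vs additive (mod 2*pi).
pairs = list(zip(z[:-1], z[1:]))
err_mult = sum(abs(m * a - b) for a, b in pairs) / len(pairs)
err_add = sum(abs((a + w_dt) % two_pi - b) for a, b in pairs) / len(pairs)
print(f"fitted slope m = {m:.3f}, mult. error {err_mult:.3f}, add. error {err_add:.3g}")
```

Because m < 1, the multiplicative prediction m·z[n] is smaller than z[n] at every step except across the 2π wrap, which is exactly the “phase runs backwards” complaint; the additive form z[n] + w·Δt (mod 2π) has essentially zero error on this synthetic sequence.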
Now their three-dot procedure for extracting a (symbolic) phase could capture a (symbolic) frequency if extended to consideration of at least a four-dot pattern.
Also there is a bit of a nonsense in that the way they construct their predictor is to feed it the entire sequence, so the predictor at each step has been fed the actual outcome of that step and all future steps, which conflicts with my notion of a predictor. This could be corrected were there a sufficiently long sequence to build a predictor based solely on previous knowledge and still have a sequence long enough to evaluate future predictions. Sadly the sequence is probably too short for this.
When I originally looked at this paper, I also had concerns about their measure for coupling but I am now oblivious to what those concerns were.
Given my first two concerns, I could only say that the approach is interesting, but I could not take the conclusions seriously.
I think the point concerning ergodicity is not important in the case of climate science.
Ergodicity matters for ‘mainstream’ physics because it takes a long time to measure all the motions of an electron, atom or molecule. By creating a large ensemble of those molecules I can get the same information provided by a time average of ‘motions’ in a much shorter length of time if that system is ergodic.
In the case of climate science, because we have an ensemble of N=1, we HAVE to time average to get pertinent dynamics. There is no choice in the matter. If climate is ergodic, this time average would give the same dynamics as an ensemble of many earths. If the climate is not ergodic, we still get the dynamics of the earth.
Climate may very well be chaotic or not chaotic, ergodic or not ergodic. It doesn’t matter due to the measurements we have to make.
I guess it’s just another way of showing there is not a good mapping of the climate problem onto statistical mechanics.
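The time-average-versus-ensemble-average distinction can be made concrete with a seeded toy (my own construction): an ergodic AR(1) process, where one long time average recovers the ensemble mean, against a non-ergodic process in which each realization keeps a frozen random offset forever, so a single path’s time average tells you about that path only.

```python
import random

def ar1_path(n, phi=0.9, rng=None):
    """One realization of a stationary AR(1) process (ergodic)."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def offset_path_mean(n, rng):
    """Time-mean of a non-ergodic process: a frozen +/-5 offset plus noise."""
    c = rng.choice([-5.0, 5.0])   # drawn once per realization, then fixed
    return sum(c + rng.gauss(0.0, 1.0) for _ in range(n)) / n

rng = random.Random(0)

# Ergodic case: the time average of one long path is close to the ensemble mean (0).
time_avg = sum(ar1_path(100_000, rng=rng)) / 100_000

# Non-ergodic case: one "Earth" vs an ensemble of many short realizations.
one_path_avg = offset_path_mean(100_000, rng)
ensemble_avg = sum(offset_path_mean(100, rng) for _ in range(500)) / 500

print(f"AR(1) time avg ~{time_avg:.3f}; non-ergodic single path {one_path_avg:.2f} "
      f"vs ensemble {ensemble_avg:.2f}")
```

For the non-ergodic case the single path’s time average sits near +5 or −5 while the ensemble average sits near 0; the point in the comment above is that with N=1 we get the single-path number either way, and only under ergodicity does it double as the ensemble number.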
The related problem of some significance is:
Does the climate have two or more possible paths of development under the same boundary conditions (using this concept also for CO2 concentration) such that the average temperature will be significantly different over a long period?
For “long” one can substitute 20, 50 or 200 years or whatever and ask the question again. In the question it is implied that the probabilities of all alternatives considered are non-negligible.
This question does not correspond to mathematically complete ergodicity, but it corresponds to a question which may have practical significance. If the answer is in the affirmative, then the practical determination of climate sensitivity becomes questionable.
sometimes I am lost in the world of an experimentalist.
The computer simulations of climate models are run in ensembles. The averaging of those ensembles of model simulations is then used to calculate parameters like climate sensitivity, feedbacks and the like.
In that sense, I can see how there is a necessary assumption of ergodicity from the perspective of the Rabbett.
From the measurement perspective however, I think things are how I describe above.
In the case of sensitive dependence (I think that’s what you are referring to), which would cause the same boundary conditions to produce different outcomes (temperatures or heat content), I have not come to a firm conclusion. It seems that it would be likely that the outcome of climate would be sensitively dependent on initial conditions. That could make climate a combined initial value and boundary condition problem on time scales that are ‘interesting’.
Then again, to prove that there is sensitive dependence we would need to run many, many computer simulations of climate models and assume ergodicity yet again.
It’s a Rabbett hole for sure…
The question exists separately for each climate model, and it exists for the real climate. Climate models that are very sensitive to the initial conditions cannot be used to make useful projections, which becomes clear when they are tried a large enough number of times.
If the real climate has better statistical properties, then the models that do give stable answers as ensemble properties may give good information on the real climate.
If the real climate has as strong chaotic-like behavior as present empirical data allows on the interesting time scales, then it is very difficult to build models that capture the properties of the real climate well. Good models would then also have such behavior, but making it quantitatively similar to the real climate is likely to be extremely difficult.
This is a problem that seems to be difficult to analyze by studying the climate models, unless we are safely on the dissipative non-chaotic side in all important subsystems and the overall system. Again, all this assumes restricting the time scales and spatial scales to those of interest to the particular study. This restriction helps, as it allows us to exclude very long timescales and also to handle many local processes as dissipation.
All these climate models are chasing a straw in the wind (Milanovic may be familiar with ‘slamka među vihorove’ – ‘a straw among whirlwinds’ – in another more dramatic context).
Why do I say this:
Rainfall measurements are a good indication of the type of climate in a particular place, and they are pretty accurate these days, particularly if one has in mind two well-known university centres in England, Oxford and Cambridge.
Geographically they are in very similar areas some 100 km apart; Cambridge is closer to the North Sea and Oxford to the Bristol Channel. I can’t see a climate model which can simply account for the rainfall difference between the two during the last 50 or so years.
Now we are supposed to believe that there are climate models for continents or even the globe.
I am not convinced.
Now if you really want to be amused by Oxford’s rainfall then look at this:
as an Oxonian, I love the graphics and the fact that you have decoupled us from that place in the fens…
I note that 1963 was a low point… yes, the big winter. However, 1947 lingers on in memory as an even worse year, and yet it is well above 1963. Just a comment. Maybe Oxford was insulated from that… maybe it hit mainly the east of the UK. Which gives greater point to your comment about the dissimilarities in the records of places 100 miles apart and… dare I say it, the idea of gridding.
Eli believes you were pointing to this
However there are examples in the literature of chaotic dynamics in quantum systems
In the sense of positive Liapunov exponents, chaos does not exist in quantum mechanics, mainly due to the linearity of the Schrödinger equation.
However, non-chaotic quantum systems that lead to chaotic ‘classical’ behaviour when they are measured are still interesting.
Perhaps more relevant are examples of dynamic systems in climate – http://www.pnas.org/content/105/38/14308.full.pdf+html
another v. interesting paper that I hadn’t previously spotted, thx!
Yes thanks Chief. I have read this one.
Interestingly, all people who take the subject seriously necessarily use the “right” theoretical and mathematical tools.
I came back home too late so will have a better look tomorrow.
A fast read gave me the impression that some comments are along the lines of: why should we need any theory or any mathematics?
Well this would be science of 2000 years ago.
Also some people focus on “chaos theory” and how they think that it exists or doesn’t exist in “reality”.
Chaos theory is just a word, a label. One can use the label non-linear dynamics if chaos theory is perceived as something “exotic”. What counts is that any physical phenomenon that is described by fields and PDEs simply MUST be treated with the correct mathematical and theoretical tools.
They exist and I tried to describe them. Wanting to ignore the reality never leads far.
And whether somebody likes it or not, weather, climate, oceans, atmosphere etc. are systems that are described by PDEs. This is just a fact that can’t be denied.
I can’t understand why, e.g., Moolten wants to ignore 100+ years of rigorous results and deep insights into the behaviour of precisely such systems.
If it is good for Navier Stokes which is a “simpler” problem, then it is good for climate too.
Last fast comment on the “averaging out”. Spence UK already said it all.
There is no proof that some (all?) fields should somehow “average out”, and indeed Koutsoyiannis, among others, reached the same conclusion.
Besides there is absolutely no serious theoretical reason in Navier Stokes or elsewhere that some fields should “average out”.
There is not even a good reason to believe that some invariant probabilities exist at all time scales. And there certainly is not the slightest hint that the system should be ergodic.
If there is something more solid than just handwaving or statistics over ultrashort periods then I’d be interested to hear it.
Quantum chaos is a very controversial question. I mentioned it just for the sake of completeness. It is an interesting issue to discuss in the frame of general scientific culture, but it has no relevance to the topic here.
Tomas – It certainly seems that we disagree, and not being omniscient, I can only describe my reasons rather than dogmatically claim that I know the absolute truth. As I see it, the averaging out is a fairly clear matter of observation rather than a theoretical deduction. It is conceivable that some non-averaged fluctuation of importance has been overlooked, but if you look at the actual record, including the known fluctuations of importance to temperature on timescales of particular interest (e.g., centennial), the averaging is what emerges. Non-averaged fluctuations on very much longer timescales (multi-millennial) can’t be excluded, but their relevance to the climate of coming decades is almost certainly minimal. Much shorter term fluctuations such as ENSO can be seen also to average out. The more problematic internal dynamics – PDO, AMO, etc. – are close enough to multidecadal and centennial trends to pose some uncertainties, but the record indicates that their effects over the past century have also averaged out over that time frame. None of this contradicts the principle that the climate exhibits chaotic behavior that is dominant on some timescales, but it tells us that this principle can’t automatically be extrapolated to all timescales; that requires a comparison with observational data. The laws of thermodynamics are also most consistent with a need for fluctuations to generate counteracting trends that push the system toward a restoration of the state from which they deviated.
To be candid, Tomas, I believe that you are trying to prove something that probably runs counter to reality – that spatiotemporal chaos makes the climate so unpredictable, and attribution to specific causes so uncertain, that climate model projections and predictions based on basic geophysical principles combined with measured data are futile. Perhaps I have misunderstood your purpose, but if that is indeed your purpose, I believe you are chasing a fantasy. If you ask whether I can be certain of that conclusion, the answer is that I can’t. But it is probably correct, based on the evidence available in the climate record and some basic principles of physics. It is also consistent with model performance, which is probably subject to uncertainties of the type you specify, but not nearly to the extreme degree you suggest – their performance suggests otherwise. This includes the evidence that models converge toward very similar long term projections from different initial states combined with the fact that the same models do a fairly good job in matching observations on a global, long term basis.
My suggestion is that you combine your knowledge of the mathematical principles you describe with a greater familiarity with the physics and the actual behavior of the climate system, so as to emerge with a more accurate perspective rather than the “denier” label you assign to yourself.
The term “average out” is so vague and imprecise I’m not sure if anyone can understand what your point(s) is, actually.
Any finite set of data has an “average” and therefore you can “average it out”. But that says nothing about extreme values, trends, discontinuities and a whole host of other interesting features the data holds.
The term refers to the observation that over the 1910-2010 interval during which a warming of about 0.8 C was observed, the net result of the most prominent internal fluctuations was close to zero, indicating that they contributed little to the observed rise in temperature. Of the oscillations of more than very short durations, the AMO has been anticorrelated with the temperature trends as frequently as positively correlated. The PDO is the only observed internal dynamic that tends to run in roughly the same direction as changes in the slope of observed temperatures. It may therefore have contributed slightly to the interval change, but if you look at PDO data over that interval, the positive and negative areas under the curve tend to cancel each other, and so its contribution must be presumed small –
Given the different behavior of the AMO, the net effect of these oscillations combined is further diminished. This observation must then be combined with the known potency of both solar and greenhouse gas forcing in order to reach a reasonable means of apportioning the responsibility for the warming.
Note that the existence of potentially stronger or non-averaged fluctuations at earlier times does not impact conclusions about the strength of anthropogenic forcing during the past 100 years – those conclusions are extractable from the data for that interval, and they provide evidence about the strength of climate responses to the forcings.
Elsewhere, I’ve cited evidence from Meehl and others suggesting that the PDO may be in part a change forced by anthropogenic influences, and therefore not a completely independent contributor to observed trends. At this point, it would be conservative to assume its independence and therefore a potential role player, but a small one during the past 100 years.
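The “positive and negative areas cancel” argument above can be checked numerically. The sketch below uses a purely synthetic oscillation index (a sine wave, not real PDO data) and an invented linear trend, just to show that an oscillation integrated over whole cycles contributes almost nothing while a secular trend accumulates:

```python
import numpy as np

# Illustrative only: a synthetic "oscillation index" with a 50-year period,
# sampled monthly over 1910-2010 -- NOT real PDO data -- plus an invented
# linear trend of 0.8 C per century.
dt = 1.0 / 12.0
years = np.arange(1910.0, 2010.0, dt)
oscillation = np.sin(2 * np.pi * (years - 1910.0) / 50.0)  # two full cycles
trend = 0.008 * (years - 1910.0)                           # 0.8 C over 100 yr

# Signed area under each curve (simple Riemann sum).
net_oscillation = oscillation.sum() * dt
net_trend = trend.sum() * dt

# Over whole cycles the positive and negative lobes nearly cancel,
# while the secular trend accumulates.
print(net_oscillation)   # close to zero
print(net_trend)         # roughly 40 degree-years
```

Whether the real PDO record actually spans whole cycles over 1910-2010 is, of course, exactly what is disputed in the comments below.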
Really, Fred? How do you explain papers like this one?
GEOPHYSICAL RESEARCH LETTERS, VOL. 33, L06712, 5 PP., 2006
Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records
Danielle C. Verdon and Stewart W. Franks
School of Engineering, University of Newcastle, Callaghan, New South Wales, Australia
This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.
” It may therefore have contributed slightly to the interval change, but if you look at PDO data over that interval, the positive and negative areas under the curve tend to cancel each other, and so its contribution must be presumed small -”
Different interval, but same data set, I believe.
Some of this is consonant with Meehl’s conjecture that the PDO includes a forced component. Depending on the interval chosen, the PDO can account for much or little of an observed temperature trend. For the 1910-2010 interval, other factors are needed to explain most of the trend.
Just below, I have linked a series of articles by Nicolas Scafetta and one by LeMouel which find a solar signature in the temp data. These papers cannot be ignored forever.
The overall global warming rate is about 0.06 deg C per decade.
The positive PDO contributes to an additional warming of about 0.1 deg C per decade to provide an overall warming of about 0.06+0.1=0.16 deg C per decade. (Trend from 1970 to 2000)
The negative PDO contributes to a cooling of about 0.1 deg C per decade to provide an overall cooling of about 0.06-0.1=-0.04 deg C per decade. (Trend from 1940 to 1970; and hopefully from 2000 to 2030)
The above is Girma’s prediction of global mean temperature until 2030.
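Girma’s bookkeeping is simple enough to encode directly. The sketch below just restates the quoted numbers (0.06 deg C/decade secular rate, plus or minus 0.1 deg C/decade PDO contribution); it is arithmetic, not a fitted model:

```python
# Girma's two-component bookkeeping, restating the numbers quoted above.
SECULAR_RATE = 0.06   # deg C per decade, long-term warming rate
PDO_RATE = 0.10       # deg C per decade, size of the PDO contribution

def decadal_trend(pdo_phase):
    """Expected decadal trend for pdo_phase = +1 (warm PDO) or -1 (cool PDO)."""
    return SECULAR_RATE + pdo_phase * PDO_RATE

print(round(decadal_trend(+1), 2))   # warm phase, 1970-2000: 0.16
print(round(decadal_trend(-1), 2))   # cool phase, 1940-1970: -0.04
```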
As to the larger point you seem to be making, I think the pertinent question is whether or not the observation period we have at our fingertips gives us an idea of ALL the ‘fluctuations’ possible in the climate system.
The perspective that Tomas is proposing, finding the basis vectors of the Hilbert space in which the climate system resides, WILL give you all the components that can fluctuate.
I don’t think one is more right than the other. Observations are real data and cannot be denied. But you’re assuming that we have observed everything there is to be observed, which is an interesting assumption to say the least. From a mathematical perspective, the utility of linear algebra in describing the dynamics of a physical system is fairly standard work. This makes Tomas’ analysis quite appealing to someone with my familiarity with quantum mechanics.
I do think that Tomas is making a purely theoretical point. If it means that climate models aren’t as good as they have been purported to be, then so be it. But the ‘averaging out’ of some parameters we have observed so far does not mean that models are correct or that fluctuations in the important climate parameters won’t matter in the future.
Again, I think it comes down to a time average. For transients that have periods of decades to centuries, we simply have not averaged enough to say much of worth right now as far as the entire climate system is concerned.
Maxwell – I’m sure we don’t know all possible fluctuations. Certainly, internal climate processes with chaotic behavior may have played a dominant role in past eras. However, if we want to ask how potent the role of an observed CO2 increase is in driving a temperature change, we can attempt to assess what other factors are operating at the same time. Toward this end, we can evaluate a putative trend after examining the data for fluctuations that may have modified or distorted it over a timescale similar to the trend interval itself. For the most recent 100 years, there is no apparent fluctuation of the requisite length that would have a huge impact. Some additional work on this is reported at
Trends and Detrending
However, my perspective is not that “models are as good as they are purported to be” – at least by some – but rather that the conclusions Tomas reaches about their inutility are almost certainly wrong. The empirical data tell us that climate is unpredictable, but less unpredictable than he suggests. I would not wish to go to the other extreme, because we will be struggling with problems of uncertainty for a long time.
I also have the impression that many misconceptions of the kind I think may have arisen here derive from a misunderstanding about how models are constructed. I hope Judith Curry will consider inviting a modeler to present a guest post on that topic – Andy Lacis might be a good candidate.
I don’t believe Andy would like the questions people here may ask and I don’t think he would attempt to answer. But it would be fun to give it a go!
What do you do with articles like these by Nicolas Scafetta?
They deserve more than brief comments here. Most analyses attribute less of the warming to solar forcing than does Scafetta, but his attribution still allows more for anthropogenic than solar for the past 50 years or so. Solar was probably more important early in the 20th century. The disagreement in this thread is really about the relative role of forcings vs unforced and unpredictable fluctuations.
You are trying to change the subject. Are you really trying to say solar is not a forcing???? Or that solar amplification is well understood???
Scafetta is not the only one who finds a solar signature in the data. LeMouel also. See http://www.pensee-unique.fr/courtillot3.pdf
You write that Scafetta’s “attribution still allows more for anthropogenic than solar for the past 50 years or so.” But in addition to the solar signature, there is internal climate variability on decadal scales, like the oceanic oscillations and climate regime shifts of Tsonis. Once you account for both, the possible attribution to CO2 is quite small and appears to be completely harmless.
You are loath to give up a favorite theory of yours, but the evidence does not bear it out.
Ron – see my above comment on what I termed “solar forcing”.
I did read your comment above. I don’t see how it helps.
The people who back solar 11-year cycle forcing walk a thin line between having a solar effect but not having too much positive feedback. Some would say that as much as a 0.2 C 11-year amplitude is driven by the solar forcing of 0.2 W/m2 during the cycles, giving a sensitivity of 1 C per W/m2, which is actually comparable with higher estimates of the CO2 sensitivity, and certainly would require positive feedback. Scafetta seems to come up with a sensitivity about a tenth of this, showing solar forcing to be weak.
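The back-of-envelope sensitivity in the comment above is just amplitude divided by forcing. A one-liner with the quoted figures (0.2 C amplitude, 0.2 W/m2 forcing) makes the contrast with Scafetta’s implied value explicit:

```python
# Quoted figures: ~0.2 C 11-year amplitude attributed by some to ~0.2 W/m2
# of solar-cycle forcing; both numbers come from the comment above.
amplitude_C = 0.2     # deg C, claimed 11-year temperature amplitude
forcing_Wm2 = 0.2     # W/m2, solar-cycle forcing variation

sensitivity = amplitude_C / forcing_Wm2     # deg C per W/m2
print(sensitivity)                          # 1.0, comparable to high CO2 estimates

# Scafetta is said to come up with about a tenth of this.
scafetta_sensitivity = sensitivity / 10.0
print(scafetta_sensitivity)                 # 0.1
```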
I find the statement,
‘For the most recent 100 years, there is no apparent fluctuation of the requisite length that would have a huge impact.’
interesting. I’d say that Occam’s Razor might have us expecting the same or similar dynamics in the next 100 years that we have seen in the last 100 years. In that case, it would make sense to point out that the only major contributing factor that does not seem to ‘average out’ in past analyses is increased greenhouse forcing. Because if this parameter has affected climate in the past, it will affect climate in the future.
But I wonder, since there are substantial approximations restricting the form of the equations in the physical climate models, if there are not ways in which the climate can change, internally or externally, in the next 100 years that we have not observed yet.
Again, for me at least, this goes back to time averaging and recovery of the pertinent basis vectors that can fully explain the time-dependent variation of a particular climate parameter, say surface temperature or heat content. The idea that 100 years is ‘enough’ time to get all the important physics seems odd to me when many understand there to be substantial transients in the ocean and cryosphere whose characteristic time scales are decades to centuries. In order to see how these physical processes affect climate we need several time periods of decades to centuries for the dynamics of those transients to ‘average out’.
I am not hopeful that we have reached such a point in time as of yet.
I’m not sure how all of this fits into Tomas’ points, other than that I think the use of linear algebra for finding the basis states of climate could be a very powerful tool for analysis.
Maxwell – Yes, I believe the ocean transients are particularly important. They are incorporated into calculations of the last century’s forcing, based on the principle that most of the response of the upper ocean is apparent within years to a few decades, and that the deep ocean response is far more stretched out, but correspondingly weaker over an interval of only decades, since it asymptotes toward equilibrium over millennia. This applies particularly to solar forcing before mid-century, which probably contributed significantly to pre-1950 warming, but whose effect had waned by late century in comparison with the flat to slightly declining solar irradiance of that time.
Although I’m not sure exactly what you had in mind regarding the cryosphere, changes in sea ice become apparent relatively promptly. Albedo changes due to melting of the Greenland or Antarctic ice sheets might have enormous impact over the course of many centuries, thereby significantly increasing climate sensitivity to greenhouse gas forcing, but effects over the next one or two centuries are expected to be small.
I understand and agree with your point that the next century might reveal fluctuations with a greater impact than recent ones, but it’s not clear why these should be far stronger. Abrupt changes of the type you may have in mind have tended to occur mainly during glaciations rather than interglacials, and to have been hemispheric rather than global – e.g., the sudden D/O warmings. Still, we can’t ever dismiss the possibility of surprises.
I’m going to build on Pekka’s point. If there is overwhelming dissipation in the deep ocean, I agree it’s likely that fluctuations in ‘long’ lived transients there will not be important. If there is not overwhelming dissipation, I can imagine heat content of two randomly phased physical processes building together in some time periods to modulate climate significantly.
I think this idea is similar to upper ocean heat content manifesting itself in ENSO events, but could occur at longer time intervals between events. I think the deep ocean could contain a great deal of interesting physics if dissipation is not overwhelming.
As to the cryosphere, I was more interested in land-based ice sheets, which persist year after year, in Antarctica in particular. There are substantially long-lived transients in those physical systems. As you said, glaciation/deglaciation events can modulate climate for large portions of the globe, which will show up in a global average of surface temperature/heat content, even if it’s not a truly ‘global’ event. I think that distinction is not important, given that no one lives in a globally averaged world.
For more on the role of deep ocean millennial timescales as a factor in warming potential, the PNAS article by Solomon et al is of interest. In particular, it addresses the increasing importance of the deep ocean as a function of the duration of forcing, with particular respect to CO2, but also relevant to natural forcing that might occur with sustained changes in solar irradiance –
Persistent Climate Change
In contrast, brief perturbations affecting the oceans are likely to be manifest primarily in the upper layers.
Let us not forget about “threshold” principles in climate: all sorts of people love to use the Butterfly Principle in CO2-driven climate models, although they may not realize they are using it. Not all events contribute to an end result, as they lack the strength or duration to drive the system away from its initial trend. Along with the Threshold Principle comes an Overwhelming Factor consideration as well: a small effect, like CO2, can be overwhelmed by another factor, like cosmic ray spikes, so that there is no net contribution from the small factor or force.
Chaos theory is a nice theory for explaining unexpected results in a complex situation. Yet Threshold Principles, Buffering considerations (feedback) or Overwhelming Factors are at play also, each tending to minimize or stop chaos from defining the system. That is why the Butterfly Effect is not really important. The only place the BE or chaos theory really has a place in climate science is a) in a small scale, short-term, local situation, and b) where there is a significant Tipping Point.
A Tipping Point is really a Threshold in operation. Nothing happens until that final straw is added to the camel’s back. Tipping Points may be claimed to be behind the initiation of glacial ages, but I’d bet we are more often dealing with Overwhelming Factors than Tipping Points. If subtleties were truly important, we would see all sorts of bizarre events – the type Hollywood loves. But we don’t, except on the silver screen. Rogue waves may be an exception, but if you consider the number of waves on the planet at any given moment, I’d bet the rare event is a rare event indeed. Climate, on the other hand, shows major glaciation events occurring rarely in terms of 4.5 billion years of planetary existence. Even evolution seems a pretty pedestrian affair if you take the number of reproductive acts over those 3.5 billion years of life.
I am always a little tentative in these technical threads, as I am merely a Level3 droid. That said, where I have a problem with the concept of averaging out natural oscillations over time is that the analysis does not seem to account for any interaction or compounding effects between the different processes; it seems to treat each one as an individual matter. Surely it is more complicated than that?
Rob – I think you are right that various oscillations interact. There certainly appears to be a positive relationship between ENSO and the PDO. There is some evidence, I believe, that increased warming in the Atlantic Ocean (such as might be reflected in the AMO) can drive La Nina (cooling) conditions in the Pacific. Empirically, I don’t think we’ve seen evidence that this has resulted in much wider departures from the temperature trend line of the past century than would have been expected from any of the oscillations individually. However, the 1945-1950 interval may have been an exception.
Eye-balling the temperature trend of the last 160 years without any climate knowledge, a layman would conclude a long term trend of approx. +0.7 C, plus a cycle of approx. 70 years where the trend is approx. 0.2 C/decade. However, AGW theory says no: the 0.2 C/decade trend in the 80s/90s is CO2, and everything you see in the record is due to the interplay of forcings.
Looking at the historical records, one could say that temperature has “averaged out” in that it hasn’t gotten so hot or so cold that life was permanently extinguished. So if you take the long temperature record, add it all up, and divide by time, you will get an average. However, from what I understand of the analysis done on this average, it has no significance. It doesn’t exist in reality. It is similar to putting one foot in the oven and the other in the freezer and claiming that on average you are comfortable. On paper you can calculate the average, but it has no meaning. It is a mathematical illusion.
I very much enjoyed reading this article and it shows great promise. Even if we derive a finite solution for climate, it should be cautioned that the run times on current computers for many finite problems exceed the estimated life of the universe.
For example, solve the game of chess. It is exceedingly finite: an 8×8 board, each side with at most 16 pieces, of which many have the same properties. Everything about this problem is finite and well understood. Have a computer map all board positions and the probability of a win or loss at each position. Once you have this mapping, the game of chess is solved. For any board position you would simply look up the best possible move.
How long will it take to calculate this mapping on a fast PC? How much disk space will you need to record the results? This is for 32 pieces on a 64 element array. Now try solving the same problem for climate.
Each molecule and particle is a piece on the board, and there are quite a few of them, with many different energy levels and ways of interacting with other pieces. And at the same time there are many players moving the pieces around the board, some even changing the pieces from pawns to rooks and back. Some changes we don’t even know about, which is sort of like one player moving the pieces while another isn’t watching. How long will it take to calculate on a really fast supercomputer, even one a million times faster than the fastest PC (which would be something)? How much disk space will you need to record the results?
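The combinatorial point can be made concrete with a deliberately crude upper bound: treat each of the 64 squares as independently empty or holding one of the 12 piece types. This wildly over-counts, since most such arrangements are illegal chess, yet the resulting number is already beyond any storage:

```python
# Crude upper bound on chess board configurations: each of the 64 squares
# is empty or holds one of 12 piece types (6 per side), i.e. 13 states.
# This massively over-counts legal positions, yet is already hopeless.
positions_upper_bound = 13 ** 64

print(positions_upper_bound)          # about 2e71

# Even at one byte per position, storage dwarfs any conceivable disk farm.
zettabyte = 10 ** 21                  # bytes in a zettabyte
drives_needed = positions_upper_bound // zettabyte
print(drives_needed)                  # about 2e50 zettabyte drives
```

Tighter published estimates of the legal-position count are far smaller, but still astronomically beyond storage, which is the commenter’s point.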
‘The laws of thermodynamics are also most consistent with a need for fluctuations to generate counteracting trends that push the system toward a restoration of the state from which they deviated.’
This is an astonishing misapplication of what I assume to be the 2nd law of thermodynamics. There are external influences that change incident radiation – solar and orbital changes. There are also internal factors – clouds, dust, aerosols, gases, land cover, ice – which all change for various reasons. The changes drive a dynamic energy disequilibrium that can’t possibly be associated with a single preferred state.
Imagine a state where cloud cover increased substantially over a short period – as indeed seems to have occurred between 1998 and 2001. More light would be reflected and the planetary warming trend was ended. There is no true equilibrium – energy in minus energy out equals the change in heat stored in the climate system (by the 1st law), and all the terms change all the time. The example involves a shift to a different and cooler state driven by changes in cloud cover that occur largely as a result of changing sea surface temperature in the latest abrupt Pacific climate shift.
The claim that the Interdecadal Pacific Oscillation (IPO) evens out over time is nonsense, even if it could be demonstrated for the instrumental record – especially given that good records only commenced in the 1950s. What we have is a record of a cool mode to 1976, a warm mode to 1998 and a cool mode since. There is little to suggest that this is typical or that, indeed, the IPO has a typical pattern. I quoted Jonathon Nott on cyclones in the decadal variability of clouds post. Cyclones in Australia are much bigger and much more frequent in La Niña years than otherwise. He said that ‘what the record shows is we go through extended periods, hundreds of years, of high activity and extended periods of little activity.’ I also quoted Anastasios Tsonis on a chaotic bifurcation of ENSO some 5000 years ago that resulted in drying in Northern Africa.
To make off the cuff remarks about the IPO evening out is the epitome of the pompous arrogance with which some people claim an understanding that cannot possibly be real.
I agree with your main points. However, your last sentence seems to be a bit unfair, because climate science works in a highly nonlinear environment. Evaluating the data requires certain assumptions that might need proof later on. The “averaging out” to me is an assumption that needs experimental proof, especially in the case of cloud cover.
However, to claim we have enough understanding of irreversible thermodynamics to argue for “averaging out” seems to me too far fetched.
It is wrong to think of the earth sciences in terms of conventional wisdom, because seasonal changes are so slow with time, while all of the experimental or engineering equations available to us relate to practical processes that are much faster than seasons. In short, take the available equations of engineering and science with a grain of salt when considering the earth system. I will present an example. Let us plot the number of heart beats of “Paul” with time when he climbs the stairs to the fourth floor in two ways. In the first, or practical, process, Paul climbs the stairs in two (2) minutes. In the second way, or earth process, Paul climbs the stairs in four (4) hours; every time he climbs one step, he rests for three minutes. In the first experiment, the plot of Paul’s heart beat with time will be variable and rising with time, say chaotic, whereas in the second experiment, the plot will be unchanged, constant, and steady with time. And this is what we observe for the earth: solar radiation at the surface and surface temperature exhibit constant change and linearity with time; they are predictable and not chaotic. I feel that earth science is difficult but not impossible; we just need to think differently.
Like Fred Moolten, I also have difficulty figuring out what Tomas Milanovic really wants to say about the dynamics and modeling of the climate system (or the Earth system) and about other related issues like attribution of the observations.
Is the point that there are many pitfalls in the modeling and that the system may have significant additional dynamics that have not been given sufficient weight? If that is the point, then I largely agree.
Or is the point that the properties of the equations, and the dynamics resulting from them, are such that there is little or no hope of obtaining useful and significant results from modeling? If this is the point, then I do not really agree.
Tomas wrote: “And whether somebody likes it or not, weather, climate, oceans, atmosphere etc are systems that are described by PDE. This is just a fact that can’t be denied.”
That is true, but it is also very misleading. Most PDEs cannot be solved exactly, but they can be discretized, and the discretized approximation to the original equations describes another, hypothetical system. This other system is not identical to the original one, but in very many problems we can create discretized theoretical models that do produce results in good agreement with many properties of the original system. From the way the discretization is done we always know that the model cannot describe all properties well (certainly not those of spatial scale smaller than the grid size, and usually not all others either).
The discretization may be based on the full equations (PDEs), or it can be based more directly on the physical conservation laws and other physical laws which are also used in deriving the PDEs. In addition, it is usually necessary to use some additional equations to handle the details that are lost in the discretization, in particular various processes within individual computational cells or interactions between neighboring cells. Nor can the discretization fully satisfy all conservation laws, which requires procedures to maintain their validity to the degree needed for meaningful results.
After the discretization we are no longer considering the PDEs but a discrete set of spatial points, and also discrete time. One of the essential questions is whether we have lost something essential for the practical applications in this process. In some cases the loss is severe and no ways have been found to get around the problem, but commonly a successful discretization is possible, and after such discretization we do not need to consider the theoretical issues that apply specifically to the full PDEs. How good any particular discretized model is, is a separate issue – and I do believe that the real problems of climate models are in this area, not in the fundamental step from PDEs to the discretized world.
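A minimal illustration of the discretization step described above: the 1-D diffusion equation u_t = D u_xx replaced by an explicit finite-difference (FTCS) system on a periodic grid. The grid spacing, time step and initial profile are arbitrary choices for the sketch; the point is that the discrete system, while no longer the PDE, can be built to respect a conservation law, here the total heat:

```python
import numpy as np

# 1-D diffusion u_t = D u_xx, discretized with the explicit FTCS scheme
# on a periodic grid. The discrete system is NOT the PDE, but it inherits
# a key property: the total "heat" sum(u) is conserved, the discrete
# analogue of the integral conservation law.
D, dx, dt = 1.0, 0.1, 0.004
assert D * dt / dx ** 2 <= 0.5          # FTCS stability condition

u = np.exp(-((np.arange(100) * dx - 5.0) ** 2))   # initial heat bump
total_before = u.sum()

for _ in range(1000):
    # u[i] += D*dt/dx^2 * (u[i-1] - 2*u[i] + u[i+1]), periodic via np.roll
    u = u + D * dt / dx ** 2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

total_after = u.sum()
print(abs(total_after - total_before))   # conserved to round-off
```

Choosing dt to violate the stability condition makes the same discrete system blow up, which is one concrete way the discretized model can fail even though the PDE itself is well behaved.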
The other issue discussed in this thread is the value of time averages, in particular on the timescale of one hundred or a few hundred years. The paleoclimatological data puts some limits on variation on such timescales, but the quality and extent of the data are hardly sufficient for strong conclusions.
The next question is what this tells us about the attribution problem. To me it gives a warning that there may be important factors that have been neglected, but only a weak signal. It doesn’t necessarily influence the reliability of the attribution much. There are other uncertainties, known and admitted, and this additional factor doesn’t really add much to them. Bayesian inference, which is the only way of justifying inferences that I believe in, is not likely to be influenced much. The influence comes from the change in the likelihood of explanations with a particularly large natural component and a small anthropogenic component. The likelihood of such an explanation is increased a little, but probably so little that the conclusions are not really changed much at all.
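The Bayesian point can be illustrated with toy numbers (all invented for illustration, not an actual attribution calculation): even a generous boost to the likelihood of a mostly-natural explanation moves strong posterior odds only modestly:

```python
# Toy Bayesian update; every number here is invented purely to illustrate
# the argument, not taken from any attribution study.
# H1: mostly anthropogenic explanation; H2: mostly natural explanation.
prior_odds = 9.0            # e.g. prior P(H1) = 0.9, P(H2) = 0.1

# Suppose the spatio-temporal-chaos argument doubles the likelihood of H2
# relative to H1 (a generous shift): the odds ratio in favour of H1 halves.
likelihood_ratio = 0.5
posterior_odds = prior_odds * likelihood_ratio

posterior_p_h1 = posterior_odds / (1.0 + posterior_odds)
print(posterior_p_h1)       # P(H1) drops from 0.90 to about 0.82
```

The qualitative conclusion (H1 strongly favoured) survives the update, which is the sense in which the inference is “not likely to be influenced much.”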
One additional comment on why I do not believe that the fundamental step from PDEs to discrete models is really problematic. The reason is the dissipative nature of small scale processes. They lead to turbulence and they also involve diffusive processes. These kinds of details are not likely to have important large scale effects in ways that cannot be introduced well enough into the discretized models.
Handling turbulence in aerodynamics is known to be a difficult problem, but methods that are good enough at handling small scale turbulence for introducing its consequences into the large scale system do exist or can be developed.
The more serious problems of dynamics concern large scale motions and reservoirs. Dissipation is very much weaker for such processes. Therefore they may lead to important effects from the climate perspective. Modeling these effects may be extremely difficult, but the problems are not related to the PDEs specifically; they are present in discretized models as well.
Mathematics sees a circle only as a one-dimensional shape.
Add motion and this circle is mechanically an extremely complex system that is extremely fascinating to explore.
This area was never fully understood, as we did not have the technology back then to grasp the complex nature of creating a planet.
A lever (lift a rock with a stick and a pivot point) is part of a full circle. So is opening a door, where you need more force the closer you push to the hinge side.
Compressing gases and storing energy with circular motion can show how it is possible to still have gases escape from this planet after 4.5 billion years. All this is part of understanding the many faces of centrifugal force.
Personally I would not be much bothered by the large scale ocean dynamics either, and I will try to explain this prejudice; others who know more than I can comment.
Whereas the bulk motions of the ocean move a lot of energy, it is in the form of heat, not motion; hence the momentum is small.
Also, the driving forces are, I believe, primarily due to mixing producing the density gradients, not heating. To explain this, all I can say is that it is as important, or more important, that there are salinity gradients than that there are thermal gradients. The water is dragged around largely by the phenomenon that if you mix two adjacent bodies of the same density but different temperatures and salinities, the resultant mixture has a different density. As much of the ocean is stable due to stratification, these processes are meso-scale eddy properties, with their location fixed by topographic features, that are dissipative and resemble diffusive processes at the next higher scale. Basically the ocean doesn’t so much flow as slump. That is not to say that these flows do not have any interesting dynamics or that they cannot show periodic variations, but they are dominated by their damping component. Looked at in detail they will be complex and baffling where there is relatively fast motion, but these features are old and stable and dominated by the shape of the basins.
As I said, these are prejudices, but the ocean dynamics are so rarely discussed I think they are worth stating just to get the topic aired.
Unfortunately there seem to be so few oceanographers that can talk about how the oceans can be modelled.
You may be right; I do not have any evidence to contradict you. What I had in mind is not very specific, but it would certainly require several different factors that combine to form a mode of long persistence (whatever “long” means).
In the present oceans the thermohaline circulation follows certain patterns. Perhaps there is an alternative pattern, differing significantly in some way but with comparable persistence, once a transition to it has occurred. While I don’t know how realistic this idea is, it’s perhaps the most likely form of long term internal variability.
I am also very likely to be wrong. It is partly my point that the debate is very atmosphere-centric. A bit like Hamlet without the rest of the cast and the set. I could possibly get away with all sorts of nonsense about the oceans, and quite possibly have. But I am possibly correct at least as regards the significance of fixed features, such as oceanic margins and submarine topography, in limiting the number of plausible outcomes.
We, at least I, are in need of some serious oceanographic input.
Really? You don’t get what Tomas is saying? He was saying Eli Rabbett is wrong! Perhaps that is not a message you want to hear, but you certainly have not refuted Tomas.
I hope Judith Curry will consider inviting a modeler to present a guest post on that topic – Andy Lacis might be a good candidate.
It would be just great.
I second that wish. It would also be very nice to have links to papers that active modelers consider best in describing the present status of climate modeling for people, who know something about modeling, but not as much about climate models specifically.
It would be really good to get a modeler that has the MWP, the Roman Warming and the LIA nailed right down in their baseline backcasting.
Don’t you think?
Maybe here’s a start: Isaac Held’s Blog, especially this post.
I’ll note in passing that none of these N-box, parameter-estimation approaches, based on linear ODEs, can exhibit chaotic response. They are basically interpolation methods, and extrapolation, IMO, should be highly discouraged.
plazaeme & Pekka:
There are a load of recent presentations by modellers here:
The series is called:
Mathematical and Statistical Approaches to Climate Modelling and Prediction (Isaac Newton Institute for Mathematical Sciences)
There is far too much to watch unless you have a lot of spare time, but it seems that they do have similar concerns as are dealt with here. They are technical presentations pitched to fellow climate scientists.
Thanks a lot, Alex.
But blogland has its advantages, and I am interested not only in a presentation, but in debate. I would like to hear what a climate modeler has to say to Milanovic, and vice versa. After seeing the points of each one, it’s easier to digest the beast.
I am not a proxy for a modeller but from what I have digested their concerns about chaotic behaviour are limited to certain specifics.
One being: does the “climate” display what could be described as chaotic telltales, types of behaviour that indicate that we are near any points where the climate might do something unexpected or dramatic, or evidence of such behaviour in the past?
The answer is a definite maybe. The evidence is suggestive that there have been times in the past when the climate looks to have been close to bistable or multistable conditions, but there are no definitive smoking guns. One of the signatures they look for can be characterised as hotting up whilst slowing down. That is, one or more of the climate measures starts to increase in variance whilst simultaneously dropping in frequency. This would be in the class of temporal, not spatio-temporal, phenomena. It is a simple form of chaotic behaviour and, in my opinion, if we can still argue over whether the climate exhibits any salient chaotic features at all, I wonder how important Tomas’ concerns are. Somewhere in that lecture series someone talks about ENSO predictions and argues that whereas they might be chaotic, they are not much different from what you would expect from a resonance driven by noise. I think that would be the case if the chaotic nature were dominated by dissipation.
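For readers who want to see what that signature looks like in numbers, here is a minimal sketch (a toy construction of my own, not any published detection method): an AR(1) process whose persistence is slowly ramped toward the critical value shows exactly the rising variance and rising lag-1 autocorrelation, i.e. dropping frequency, described above.

```python
import random
import statistics

def ar1_ramp(n=4000, phi0=0.2, phi1=0.95, sigma=1.0, seed=1):
    """AR(1) process whose persistence phi ramps up over time,
    a toy stand-in for a system approaching a bifurcation."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for t in range(n):
        phi = phi0 + (phi1 - phi0) * t / (n - 1)
        x = phi * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def window_stats(xs):
    """Variance and lag-1 autocorrelation of one window."""
    m = statistics.fmean(xs)
    var = statistics.fmean((v - m) ** 2 for v in xs)
    cov1 = statistics.fmean((xs[i] - m) * (xs[i + 1] - m)
                            for i in range(len(xs) - 1))
    return var, cov1 / var

series = ar1_ramp()
early = window_stats(series[:1000])
late = window_stats(series[-1000:])
print(f"early window: variance {early[0]:.2f}, lag-1 autocorr {early[1]:.2f}")
print(f"late window : variance {late[0]:.2f}, lag-1 autocorr {late[1]:.2f}")
```

The late window shows larger variance and higher lag-1 autocorrelation than the early one, which is the “hotting up whilst slowing down” telltale in its simplest, purely temporal form.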
Anyway I should like a modeller to comment, and at some time an ocean modeller too, as so much of what we call climate is determined by the oceans (and by large-scale semi-stable atmospheric motions). It is my prejudice that the ocean component is the poor relation of the AO models; historically it was always an afterthought.
So far as I know there has been no serious modeling of chaotic climate dynamics. For example, a chaotic system will oscillate under constant forcing. Who is modeling that aspect of climate, pushing the chaos to see how much there might be?
So far as I am aware, all of the classic small systems of non-linear ODEs that exhibit chaotic response include only constant, time invariant energy input into the systems. The energy input has been carefully balanced, exceedingly so, with the dissipation so as to maintain (1) the bounded response and (2) continuation of the motions.
None of these systems exhibit a trend in the dependent variables as a function of the independent variable. If at some arbitrary value of the independent variable a discontinuous change in the parameter representing the energy input is made, the response might show a step-like change in the dependent variables. [ Recall that the original Lorenz system does not exhibit chaotic response for all values of the parameters. Some parameter values indicate that the system returns to an equilibrium state from arbitrary ICs. ] The only valid method to evaluate the response of chaotic systems is through calculations. Without calculations all hypotheses about what the response of chaotic systems will be are merely untested WAGs.
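Both points, the parameter dependence and the need for actual calculation, can be illustrated with the Lorenz system itself. This is a rough forward-Euler sketch with illustrative parameter values, not a production integrator.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system (crude but
    adequate for a sketch at small dt)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(rho, ic, n=40000, dt=0.001):
    state = ic
    for _ in range(n):
        state = lorenz_step(state, dt, rho=rho)
    return state

# rho < 1: the trajectory simply relaxes to the equilibrium at the origin.
quiet = run(rho=0.5, ic=(1.0, 1.0, 1.0))

# rho = 28: bounded but chaotic; two trajectories started 1e-8 apart
# end up macroscopically separated after 40 time units.
a = run(rho=28.0, ic=(1.0, 1.0, 1.0))
b = run(rho=28.0, ic=(1.0 + 1e-8, 1.0, 1.0))
gap = max(abs(u - v) for u, v in zip(a, b))
print("rho=0.5 final state:", tuple(round(c, 6) for c in quiet))
print("rho=28 separation of nearby trajectories:", gap)
```

Which regime you are in cannot be read off the equations by inspection; it comes out of the computation, which is the point made above about untested hypotheses.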
By calculations I assume you include modeling. If so then I heartily agree. I recall a simple model described in JGR back around 1993-5. It showed that the 20th century temperature profile could be generated from a simple non-linear model of ocean upwelling. So far as I can tell this result was ignored.
The internal or unforced variation is more like varying efficiency of the work performed in the system. It is more of a feedback to certain forcing or forcings. Changes in precipitation would be a dependent variable, circulation-pattern variation is also dependent, and the temperature differential is the independent variable that the dependent variables are working to reduce. Forgive me, I haven’t used PDEs in a long time, but that is my basic understanding. You can substitute pressure gradient for temperature differential since they are closely linked.
The atmosphere is a turbulent airflow problem, appearing chaotic but with what appear to be bounds, somewhat less than truly chaotic. Better description of the ocean oscillations (or pseudo-oscillations or quasi-periodic events) will lead to a better understanding of the boundaries.
Increased CO2 should shift the boundaries upward and may reduce the range of variation if the temperature gradient (or pressure) is reduced.
Still determinedly looking in the wrong places, and making the perceived problems harder, not easier. Science is the disciplined art of making things simple, by learning the simple truth behind them, not by hanging useless ornaments on incompetent theories.
True, science is supposed to make understanding of complex things simpler. Forecasting events can be simplified to historical averages, like the farmers’ almanac. For decadal and longer forecasts, the application of some version of chaos theory should lead to better results. Tsonis’ paper was a simplification of a chaos-theory application which may have led to a better method of forecasting. (Tomas can correct me if my understanding is off.)
Tsonis’ Dynamical paper was limited to the two major oscillations. In another paper he used a similar method to determine the role of atmospheric teleconnections, which shows some of the interactions, even hints that the AMO is a Tripole.
The PDO for example is not an oscillation, is not decadal and is not limited to the Pacific. It is quasi-periodic and more stable near the equator. The northern hemisphere is the most chaotic portion of the surface temperature because of the higher percentage land surface. Chaos theory may be able to better show what happens in the higher latitudes where the tip of the whip is cracking.
And I agree with inviting a modeler. GISS models are available online for those that want to play around with them so that could be a good place to start.
A guy by the name of Einstein once said something like “things should be as simple as possible, but not simpler”. Sometimes a spherical cow isn’t good enough.
“Moderation note: this is a technical thread that will be moderated for relevance.”
I hope you find this relevant. This was a wonderful presentation and an enlightening exchange of comments. Were you all on a stage, and if you could see and hear me, I would stand and applaud you all. Thank you!
Additional feedback on your comments: I stand in your midst as a simple, retired, old taxpayer. From where I sit, from what I’ve heard, I see no reason to invest in, or pay taxes for, anything related to Anthropogenic Global Warming research or remediation. Would anyone care to tell me I’m being narrow-minded, or that I missed something in what has been said here?
You’re spot on. :-)
Climate science jumped in and was giving advice long before understanding anything, by generating a temperature model and projecting it forward without looking at any physical changes that are occurring.
That depends on how the money is spent. Better long term forecasting is worth the money, though how useful the results will be is questionable. In the UK and some other places, equipment and material for snow mitigation was in short supply. So no matter how much better forecasting gets, you will still need to look out the window. Used wisely, the improved forecasts can be very beneficial as long as the level of uncertainty is clearly stated.
You are not being narrow minded. Your stand is supported by the data and published papers.
Global Warming has taken a vacation until about 2030!
Another supporting paper of my graph:
Another falsification of IPCC’s 0.2 deg C per decade warming.
When is IPCC going to say it got it wrong?
The decade of the 2000’s was 0.19 degrees warmer than the decade of the 1990’s. This should be obvious if you are looking at the numbers.
Seeing is believing:
Why is the second decadal trend much, much smaller?
Because the 90’s ended with a big El Nino and the 00’s ended with a cooler period that was still as warm as the big El Nino of ’98.
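The apparent conflict between decadal means and within-decade trends is purely arithmetical, and a toy series (invented numbers, not real anomalies) makes it explicit:

```python
import statistics

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Toy anomalies: steady warming through the 1990s, then a flat plateau
# at the post-1998 level through the 2000s.
years = list(range(1990, 2010))
anoms = [0.02 * (y - 1990) for y in range(1990, 2000)] + [0.20] * 10

mean_90s = statistics.fmean(anoms[:10])
mean_00s = statistics.fmean(anoms[10:])
print("difference of decadal means:", round(mean_00s - mean_90s, 2))
print("trend within the 2000s    :", ols_slope(years[10:], anoms[10:]))
```

The decadal means differ by 0.11 while the trend inside the second decade is essentially zero; both numbers are true, they just answer different questions.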
You are not being narrow minded. Global temperatures and sea levels are rising at a modest rate, which is by and large undetectable by human senses on the time scale of human lifetimes. The more tax payer money we supply to investigate this, the more money that is misapplied to present this small change as an excuse for even more taxes. Quite simply, the more we pay, the more people are encouraged to ask for more.
Think of a pan-handler or beggar on the street. You walk the same street every day. One day the beggar approaches you, asking for money. You ignore the beggar and walk on. Very soon they will ignore you, and you will be able to make your daily walk in peace and security.
Now consider what happens if you give the beggar money when they approach you. They learn that you will give money. So, from then on they will be very motivated to approach you for money. Other beggars and pan-handlers, seeing you give money, will be encouraged to approach you as well.
Soon you will tire of this, and not want to give money. You will try and avoid the beggars. However, there is a principle in psychology, of intermittent reward. If you withhold the reward and only supply it on occasion, the beggars will try all the harder to get your money. Some will try reason, others will try aggression, until they discover what works best. Testing shows that if you give money only 1 time in 7, this has the strongest training effect.
Taxpayers have created the current situation by funding climate science. Climate science has found that by painting scary pictures of the future, they can get even more money from the taxpayers. When the taxpayers try to stop giving money, climate science tries all the harder to get money, painting increasingly scary pictures.
This is the situation we are in at present. Think of all the stories of doom and gloom we have been told over the past decade. How much of it has turned out to be true? Climate science didn’t create the problem. We the taxpayers created the problem by giving the first hand-out. Now that we are trying to stop giving hand-outs, the beggars on the street are trying all the harder to scare us into giving money.
The precise underpinnings of the dynamic system are interesting and non-trivial, but can they be shown to have primacy, or even palpable significance, in the quest to understand the development of the climate system during the historic instrumental record, or in forecasting the development of the climate during the periods 2011-2050 and 2050-2099, viewed using the narrow but standard measure of global temperature anomalies?
The challenge is as follows.
Is there any conclusive evidence derived from the historic monthly global surface temperature anomaly series which must lead us to conclude that the same series cannot be explained as the result of known flux forcings, given their uncertainties, plus a stochastic iid noise flux acting upon a “linear”, temporally invariant system containing well-damped resonances, and which must compel us to expect that the future development of the system will be incompatible with such a model in the near term and up to 2050 or 2100?
This is not the same question as whether the causes are linear and stochastic, which is arguably not the case; it is whether the result, in terms of the temperature series, can be distinguished from what could result from such a model. I am not soliciting arguments about the theory or about other measures, or regional behaviour, simply evidential arguments based on the global temperature series as we know it. I am not asking whether there are alternative explanations but whether there is compelling evidence against a linear stochastic model.
I think this is a small but necessary hurdle that should be easily overcome if there is a compelling need to consider more complex mechanisms before we can provide forecasts of this narrow measure with error bounds based on the implied statistics of such a model. I have deliberately chosen perhaps the simplest of all reasonable models and one that is analytical which makes the question more easily decidable.
By way of illustration of potential arguments, it would be insufficient to state that a fluctuation such as the PDO is unexplained; it would be necessary to show that the associated temperature component has features that cannot be modelled as a resonance driven by noise. For the ENSO variations it would be necessary to show that the record has features that produce variance incompatible with that produced by coupled resonances driven by iid noise, an example being that it is not statistically stationary.
I am not saying that such arguments do not exist, or that their due consideration might not improve forecasting skill. I query whether there is sufficient evidence in the historic temperature record to compel us to abandon even the simplest of stochastic models as a basis for forecasting.
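For concreteness, the null model invoked here, a well-damped resonance driven by iid noise, is just an AR(2) process. The sketch below uses illustrative parameters (monthly steps, a four-year resonance), not values fitted to any climate record.

```python
import math
import random
import statistics

def noisy_resonance(n=20000, period=48, damping=0.9, seed=2):
    """AR(2) process equivalent to a damped oscillator driven by iid
    Gaussian noise; period is in time steps (here: months)."""
    rng = random.Random(seed)
    w = 2.0 * math.pi / period
    a1, a2 = 2.0 * damping * math.cos(w), -damping ** 2
    xs = [0.0, 0.0]
    for _ in range(n):
        xs.append(a1 * xs[-1] + a2 * xs[-2] + rng.gauss(0.0, 1.0))
    return xs[2:]

x = noisy_resonance()
# Statistically stationary: the two halves of the record look alike.
v1 = statistics.variance(x[: len(x) // 2])
v2 = statistics.variance(x[len(x) // 2:])
print(f"variance, first half: {v1:.1f}; second half: {v2:.1f}")
```

The output wanders quasi-periodically and can look ENSO-like, yet it is linear, stochastic and stationary; the challenge above is to exhibit features of the real record that such a model cannot reproduce.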
The overall global warming rate is about 0.06 deg C per decade.
The positive PDO contributes to an additional warming of about 0.1 deg C per decade to provide an overall warming of about 0.06+0.1=0.16 deg C per decade. (Trend from 1970 to 2000)
The negative PDO contributes to a cooling of about 0.1 deg C per decade to provide an overall cooling of about 0.06-0.1=-0.04 deg C per decade. (Trend from 1940 to 1970; and hopefully from 2000 to 2030)
The above is Girma’s prediction of global mean temperature trend until 2030.
I think you have just tried to prove Fred’s point about averaging, Girma. It is more complicated than that IMHO.
have you read the thread…your averages and extrapolations are meaningless because you do not understand the system dynamics
…you do not understand the system dynamics
I’ve attempted a layperson’s field guide below to present my own interpretation of the very worthwhile posting by Tomas Milanovic, which has seeded so much interesting and valuable commentary and promises much for the future.
The word ‘entirely’ troubled me earlier, and one cannot decently approach any mathematician’s published words until one reads them entirely.
I’m no mathematician, so feel free to skip to my last paragraph at any time.
From the end of Eli Rabbett’s statement, the bulk of the first half of Tomas Milanovic’s post seems bumpf, self-reversing or auto-falsifying rambles apparently (as is customary for mathematicians taking on a topic they are not certain they can adequately counter head on) intended to break up Rabbett’s frame of reference without actually addressing its point, instead manufacturing semantic and irrelevant trivialities to dispute without a point, until here:
..GCM’s spatial and temporal resolution doesn’t allow to solve them.
Doesn’t http://www.fi.isc.cnr.it/users/antonio.politi/Reprints/017.pdf as pointed out recently by Al Tekhasski suggest that only a billion or so dynamically-significant points per attractor dimension* might be necessary to perform basic nonlinear dynamics analysis once the problem is well-defined?
That’s a merely technically difficult obstacle, not an impossibility. It points up that current models can’t be used for the purpose described, but then we already know that.
Even the convergence question cannot be answered because to converge to a solution one would have to reduce the resolution until convergence, which is impossible.
I really distrust ‘impossible’ not followed with a proof of impossibility (especially given the clear and obvious bias brought to the table by the author) because I practice skepticism, and ‘impossible’ is one of those words skeptics abhor unsupported.
So while it is known that there are chaotic solutions to Navier Stokes and that the weather is an example of such a solution that Nature choose, numerical simulations of GCM’s are irrelevant to the question.
I’m afraid at the halfway point Tomas Milanovic pretends both A) insufficient knowledge of Chaos Theory as applied to climate with this statement (which we know to be untrue of him if true of any of us), for the reasons pointed out above, and also B) too ready a willingness to brush aside contributions that satisfice because they do not fully satisfy (he is too virtuous for the dirty world of practical field research).
Just because models will never predict the exact daily temperature anomalies worldwide forever does not mean clever interpreters can derive no useful conclusions from them, nor that there is no use collecting and analysing climate data, which is the shocking plain-language rendering of the suggestion Milanovic apparently makes at this point.
Also, C) he’s discussed some of this before (http://judithcurry.com/2011/02/10/spatio-temporal-chaos/), and in some ways Milanovic is pretending all that came before has not happened.
He’s made little apparent progress in evolving his ideas from the discussions here, merely restating the same entrenched ipse dixit position in the apparent hope he will obtain a different result, up to the end of his point (3).
While Milanovic’s objections are learned, important and possibly valid in some limited and nuanced way, for the first three quarters of this offering one would be better off to read the Eli Rabbett quote in its entirety, then feed Milanovic’s writing through a phrase sorter and use Google Scholar on each unique term to educate oneself, then decide for oneself what Rabbett means and whether there is anything wrong with it, at least if one stops before reading Milanovic’s full (4).
From Milanovic’s point (4), he begins with sagacity about the value of analogy and takes valuable steps to introduce the relevant topic of ergodicity.
*Here, Tomas Milanovic belatedly addresses his earlier resolution contention, and begins to show us the light that what he first expressed he now, as mathematicians will often do in exposition, begins to reverse: “Nothing tells us that such a finite dimensional attractor exists, and even if it existed, nothing tells us that it would not have some absurdly high dimension that would make it unknown forever. However the surprising stability of the Earth’s climate over 4 billion years, which is obviously not of the kind ‘anything goes,’ suggests strongly that a general attractor exists and its dimension is not too high.”
The author ends sagely, and with a note of optimism for progress.
Overall, Milanovic has demonstrated excellent conservativism in opinion, and is well worth studying.
It is a privilege to read his work and disagree with his opinions.
The real question left open on this subject is; wherez the Rabett’s Rebuke on this? Wots up Doc?
I have just one question for Tomas Milanovic:
Which terrestrial series have you carefully explored?
Tsonis is tonic
Koutsoyiannis could be cure.
Code Blue: Planet Earth.
Several interesting comments, but as I have difficulty reacting in real time, I’ll have to reply in a batch. Sorry if not everything is covered.
To the last comment :
– Which terrestrial series have you carefully explored?
The Earth’s orbital parameters. After that it depends what one means with carefully explored. I think the Nile flow data over 1000 years would qualify. If we extend to fluid dynamics which is also terrestrial then there are too many to be mentioned.
However I generally don’t explore data, I have seen too many EOF and PC attempts at “exploring” which finished just with wasted time at best and disasters at worst.
Instead I prefer to use data to verify predictions. I am more interested by making predictions and verifying them than by just “exploring data”.
For me data without a sound and consistent theoretical foundation are just epicycles all over.
– Pekka seemed to ask what my purpose was.
Mainly it was answered in the post itself.
The quote I have read seemed to me to represent a very typical approach: a mix of trivial tautologies and misunderstandings. As I strongly believe that the right paradigm to study a complex system like the climate is a paradigm that has already shown its ability to study complex systems like spatio-temporal chaos, I try to explain to those who are not familiar with these paradigms how it works.
And why I think that it is only this paradigm which will allow progress.
Of course this is not starting from scratch – there is much more deep and significant work on dynamical systems starting with Poincare and finishing with Ruelle&Co than there is work done on climate.
Of course I can only observe, and it is also true for this thread, that these deep insights have not yet percolated into climate circles.
In the tone you used, Pekka, I also thought I detected a trace of a temptation that was rather current in the past but still exists today, which I’d summarize as “Why bother with mathematics when we have such a nice curve fit”. Active scientists will understand.
But perhaps I am mistaken.
Yes, you have understood what I wanted to say. As my origins are in QM too, I was sure that if there was somebody familiar with infinite-dimensional Hilbert spaces, he’d immediately understand what an attractor in a field theory such as the climate would mean and how useful it would be.
– Spence UK
I agree with everything you wrote. Especially the papers by D. Koutsoyiannis are very interesting, and I think that I have read almost all of them.
One very interesting paper generalising his Kolmogorov-Hurst model to the spatial domain is: http://dx.doi.org/10.1016/j.jhydrol.2010.12.012
– Bart R
Sorry, I have read your very long comment twice but I really can’t see what you are trying to say.
As English is neither my first nor second language, it is perhaps my reading skill that is at fault.
It seemed that you took exception to some words like “entirely” and “impossible” because you seem (?) to think that it IS possible to solve Navier-Stokes-type problems numerically with a spatial resolution of 100 km.
Well sorry to disappoint you but this is really impossible.
You don’t need to take just my word on it, here is why a Fields medal (hard to find better in this domain) thinks that Navier Stokes is difficult : http://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/
– A question about varying parameters in the Lorenz equations and how they change the attractor.
Like Dan Hughes already wrote, the chaotic solution only exists for certain values of parameters. It is for those values that the Lorenz attractor exists.
But one can say: in nature parameters are never constant, so what happens when they vary?
Well this question has already been asked and answered long ago.
The Lorenz system of ODEs is described in a 3D phase space. If one of the constant parameters became variable, you would simply increase the dimension of the phase space by 1. It would be 4D now.
Of course the new attractor of the 4D system would be very different from the one in 3D, and even its (fractal) dimension would change. In a nutshell, it is not important how many parameters are constant or variable. You just need to make up your mind which is which, and only then begin to study the system.
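A sketch of that dimension-raising move, with an invented drift law purely for illustration: let rho vary slowly and promote the drift phase to a fourth coordinate, so that the non-autonomous 3D system becomes an autonomous 4D one.

```python
import math

SIGMA, BETA = 10.0, 8.0 / 3.0

def step(state, dt, eps=0.05):
    """Forward-Euler step of a Lorenz system with slowly varying rho.
    The drift phase theta is a full state variable, so the state
    (x, y, z, theta) lives in a 4D phase space, as described above."""
    x, y, z, theta = state
    rho = 28.0 + 3.0 * math.sin(theta)   # illustrative drift law for rho
    return (x + dt * SIGMA * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - BETA * z),
            theta + dt * eps)

s = (1.0, 1.0, 1.0, 0.0)
for _ in range(200000):
    s = step(s, 0.001)
print("4D state after 200 time units:", tuple(round(v, 2) for v in s))
```

The attractor of this 4D system need not resemble the classic 3D butterfly, which is exactly the warning issued here about leaning too hard on temporal-chaos analogies.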
In any case I warn one more time not to put much stress on analogies with temporal chaos where the Lorenz attractor is. It is why I conceived the table in the main post to show how deeply different the spatio-temporal chaos is.
– Fred Moolten
There were several interesting points but I will take only 2 due to how long the post already became.
1) the “averaging out”.
You certainly know that any time series can be interpreted as either non-stationary or stationary with a variable mean.
The choice is free and both methods may hindcast with same accuracy. They will differ only in the predictions.
So clearly, when one says that something “averages out”, he has already decided that the process was stationary.
What I am saying is that you don’t know that.
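The freedom of interpretation claimed here can be made concrete: the same synthetic record (a random walk, an invented toy rather than any real series) supports a deterministic-trend reading and a stochastic reading that agree on the past but diverge in their forecasts.

```python
import random
import statistics

rng = random.Random(3)
steps = [rng.gauss(0.05, 1.0) for _ in range(100)]
series = []
total = 0.0
for s in steps:
    total += s
    series.append(total)

# Reading A: deterministic linear trend; forecast extends the fitted line.
t = list(range(100))
mt, ms = statistics.fmean(t), statistics.fmean(series)
slope = (sum((a - mt) * (b - ms) for a, b in zip(t, series))
         / sum((a - mt) ** 2 for a in t))
forecast_trend = ms + slope * (200 - mt)

# Reading B: random walk with drift; forecast starts from the last value.
drift = statistics.fmean(steps)
forecast_walk = series[-1] + drift * 100

print("forecast at t=200, trend reading      :", round(forecast_trend, 1))
print("forecast at t=200, random-walk reading:", round(forecast_walk, 1))
```

Both readings hindcast the same 100 points; only their extrapolations, and the uncertainty attached to them, differ, which is why the choice cannot be settled by the hindcast alone.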
Besides, what does it mean to say that, for example, ENSO “averages out”?
That a single number on a chart, after suitable transformations, has approximately the same area below a line as above it.
Of course that doesn’t mean anything concerning the issue I am talking about, i.e. the dynamics of the whole system, because I certainly don’t think that they reduce to one number. And not even two.
Then comes also the irrefutable argument by Spence UK that with a spatially poorly resolved and temporally very short series of fundamentally only one parameter (temperature), one can say nothing about oscillations at longer time scales.
However even if data say nothing, the spatio-temporal chaos theory does.
It predicts excited oscillation modes at all time scales.
Btw somebody said (rightly) that the existence of chaotic solutions was linked to the number of excited oscillation modes. This is exactly the point of view of Ruelle.
So while there is no experimental evidence for general “averaging out” of all fields, there is theoretical evidence and actually a prediction that it doesn’t.
Please note that I am not talking about single numbers like indexes etc, I consider that these are irrelevant for the dynamics.
I am talking about the whole fields like pressure field, density field, velocity field etc.
And of course I add that there is absolutely no reason (theoretical or experimental) that the system be ergodic.
Actually, similar spatio-temporal chaotic systems often are not.
That is more than enough to say that it is very likely that the fields do NOT “average out”.
About numerical models.
I hope that you agree with me that the models (GCM) do not SOLVE any equation of the dynamics.
So where the disagreement would be is that you think that whatever numbers a given GCM produces, even if they are not solutions of any PDE describing the dynamics, they are not “very far” from what a real solution would be.
I do not share this point of view at all.
The first hint is the wild dispersion of the model results. I have looked at the precipitation and pressure field results, and it was appalling.
Second hint is the inability to make some predictions at better than 1 century.
Further hints are more theoretical.
If I conjecture that the established result about the finite dimensionality of the field attractor (link in the main post) applies to the climate, then there is no way to be sure that the numbers produced by the computer belong to the fields that constitute the attractor.
Then I would find it likely that none of the computed fields finds itself on the attractor.
Of course the numbers, despite a huge dispersion, would not necessarily be pathological. They may even look like something possible, especially if energy, momentum and mass are conserved.
But unless one can verify that they really are approximate solutions of the dynamics, I do not consider them having any predictive skills.
Especially at bigger time scales.
apologies, this one got caught in spam
At least I hope that you are mistaken, but who can understand fully even his own motives.
Formal mathematical results apply to formally defined situations, e.g., in the limit of infinite time period. In physics or in climate science that may be of little relevance, if the results are significantly different for shorter periods. In the climate science the minimum period of interest includes several years (although also intra-annual variations are of interest). Concerning our abilities to make projections we are particularly interested in periods of decades and perhaps a couple of centuries. These periods are far from infinite for a system of the size of Earth. (Similar considerations can be made about small spatial scales and very short periods.)
Trying to analyze periods up to a few centuries (and also longer periods of the past), we may get excellent ideas from the study of exact mathematics, but the theorems of mathematics leave very much freedom for the actual development over those time scales. Any serious attempt to model this behavior faces problems related to the issues you have described, but the mathematical theories can tell little about the details of these relationships. The behavior of the models can be studied only by studying the models. The relationship between the models and the real Earth system can be studied both empirically and by analyzing the approximations made in building the models.
Physicists have always used mathematics that they cannot justify rigorously, but for whose use they have good enough arguments. Often the mathematicians have found the rigorous justifications much later, often making everything rigorous is just overwhelming. For physicists the mathematical methods and physical considerations form a whole, where the mathematical rigor is often replaced by physical considerations. If the stability of some solution can be argued on physical arguments, then they have often good justification to believe also the mathematics – even when many steps are not rigorously shown to be correct. (Some of the recent rigorous proofs of pure mathematics have borrowed from the approach of physicists, but of course continued to formalize it as needed.)
Much of the mathematics of chaos is related to non-dissipative, non-stochastic models. All the theoretical issues of ergodicity are in an essential way in this class. With stochastic perturbations ergodicity is not a formal problem, but something very much like attractors may still persist over long periods and may have an essential influence on the actual behavior of the Earth system. This combination of facts tells us that the points you have raised may indeed be important and that the modelers should take them into account, but on the other hand it tells us that theorems of mathematics cannot provide answers on the importance of the issues; that has to be studied by climate scientists.
I have emphasized in several earlier messages that modelers may err systematically in the direction of too much dissipation both in the equations and in the numerical approximations related to discretization. If they err in that direction, they are likely to miss important processes of the real Earth system, such as periodic or quasi-periodic oscillations or irregular transitions between modes. They may interpret their model results as evidence against such processes and it may be that all or most models err in the same way due to the common reason of the need of obtaining any results.
It may be that all stable enough models to give any results for climate have been stabilized in a common and wrong way. The real Earth system may be as stable as it is based on some stabilizing non-linear effects that go beyond the capabilities of all present models, while the models have all been forced to sufficient stability in a different way.
From the previous it should be clear that I have in mind several potential pitfalls in modeling. They are related but not identical to the theoretical issues that you have discussed. On the other hand I do not exclude the possibility of building good climate models. There may be essential problems that cannot presently be solved, but there need not be such problems. With careful analysis of the models it is possible to find out how good the evidence is that can be provided to support their applicability for making projections. It is quite possible that the analysis ends up in the conclusion that we cannot know, because we are forced to work too close to the limit of validity of the modeling approaches that are available. If all careful checks tell us that we are safely inside the region where models behave regularly, then there is more reason to believe the models. I have no knowledge of the answers to these questions. I have not seen evidence that anybody has good answers, but that may be only because I have not seen evidence that already exists.
I appreciate your reply, and will do my best to be sparing with language and clear.
Yes, you misread me. That is due more to my writing than your reading, I believe.
I take no issue with the word ‘entirely,’ but comment only that as your thought and study are very advanced, one must read your post with care.
I take issue with ‘impossible’ when it is used ambiguously or unproven.
For example, you disambiguate now by speaking of 100-mile resolution*, on which point I had already acceded that the present models cannot be used for the purpose you propose. This clarification of course means no further proof is required, though I appreciate the link.
The purposes to which some imaginatively apply these models may have some validity, though I agree it is not very plain or complete.
*There is a resolution which may be high enough to meet your purpose, though it appears technically difficult with at least a billion dynamically-significant points per attractor dimension, and is as yet unproven.
My disagreement on this point is better said by Pekka and Fred, whom you address and who still hold to their views.
About collapsing the system to a single field: when all one has is one input variable (CO2 level is all we can manipulate), should we not look for other equations to base our actions on, too?
The issue is to find an equation with that single manipulable variable that produces a meaningful result.
For me, that result is in terms of Risk and its associated costs. Does increasing CO2 level not have implications for ergodicity?
As for all the other equations used in all these models, I largely share your conservative view of them, though I have been surprised by the advances made with them over the past few decades and am not certain how far, and in what way, they will develop.
I gather that any such attractor would be valid only under a given set of boundary conditions? Which conditions we can be quite certain would eventually fail to be satisfied by the real world?
In fact, I should start by asking what “stable” even means, given that the composition of the atmosphere itself has changed, radically, twice in the 4 million year period offered as “stable.” Dr. Curry, did you mean to suggest that, for example, if the atmosphere returned to the composition it had before the advent of cellular life, that would not, in and of itself, constitute a dramatic change in “climate”?
In any event, one thing that the climate does appear to have in common with the billiard balls example is that we have a pretty good idea of its initial and final states. In the beginning, there was only uncondensed dust, and in the end, we’ll have an atmosphereless rock without anything we’d recognize as “weather.” In your abstract model, are you thinking that the eventual burning off of the entire atmosphere will be described by the attractor, even if the path to the terminal point isn’t known? Or, as I’m presuming, is the theory that the attractor would only describe “climate” during a period without large, external stimuli, e.g. the sun leaving the main sequence, or the advent of life with new respiration reactions?
In the beginning, there was only uncondensed dust, and in the end, we’ll have an atmosphereless rock without anything we’d recognize as “weather.” In your abstract model, are you thinking that the eventual burning off of the entire atmosphere will be described by the attractor, even if the path to the terminal point isn’t known?
This is not correctly formulated but the answer is broadly yes.
The attractor doesn’t “describe” any specific state like a burned-off atmosphere, an ice-ball Earth etc.
It is the set of all physical fields (e.g. pressure, density, velocity, temperature etc.) that are allowed by the dynamics of the system.
If the dynamical equations allow a state with pressure, density and velocity fields uniformly zero and some non-zero temperature field, then there you have your Earth with burned-off atmosphere and oceans.
Detailed examination of the orbits and of the dynamical equations will then tell you that this state will be visited only after a very long time and that the system will then stay stuck in this state “forever”.
During this process the attractor will be gradually destroyed and transformed into a single (attracting) point in the phase space.
Actually at this stage the system will stop being chaotic because as I have said multiple times, energy dissipation is a necessary condition for the existence of chaotic solutions.
As in this “final” state there is no energy dissipation anymore, there are no chaotic solutions and no attractors either.
Just look at the Moon, it is not a chaotic system.
So the point is that there is no point using chaos theory on fields that are not, or are no longer, chaotic.
Girma | March 6, 2011 at 1:32 am |
GEOPHYSICAL RESEARCH LETTERS, VOL. 33, L06712, 5 PP., 2006
Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records
Multidecadal oscillation between El Niño dominated and La Niña dominated periods – this is the same conclusion Bob Tisdale arrived at.
It looks suspiciously like a Lorenz or a Lorenz–Roessler butterfly-type torn attractor, such as the one that characterises the Belousov–Zhabotinsky reaction (or some flavours of it). This was the point of my earlier posting:
Formal mathematical results apply to formally defined situations, e.g., in the limit of infinite time period. In physics or in climate science that may be of little relevance, if the results are significantly different for shorter periods.
You managed to lose me already in the first sentence.
Do you really mean that mathematics only apply for the “infinite time limit” in physics?
Schrödinger equation, Navier Stokes, Maxwell equations, Einstein equations … only valid in “formally defined situations” and even that only in “infinite time limit”?
They are of “little relevance” for shorter times?
Among the strongest and deepest results of all time in physics are Noether’s theorems.
Without these brilliant mathematical proofs we’d still only be guessing whether and why energy and momentum conservation exist. Yet both are among the most fundamental physical concepts, and they can’t be understood without these theorems. And they are true on all time scales – all the way from the Planck time to billions of years.
You can’t be really meaning what you write, can you?
Any serious attempt to model this behavior faces problems that are related to the issues you have described, but the mathematical theories can tell little about the details of these relationships.
Can’t they? This sounds like saying “Einstein’s equations can tell little about the details of the relationships between the space-time metric and the energy-momentum of matter.”
In reality the contrary is true – not only do these equations tell everything about the details of these relationships, but as a bonus they enable us to make predictions which may then be confronted with observations. Yes, observations might have falsified them. But that is precisely the engine that makes science progress.
Physicists have always used mathematics that they cannot justify rigorously, but for whose use they have good enough arguments.
Yes, this is right and examples abound. This is called a conjecture, which may be right or wrong. To stay with the relativity example, Einstein had the right physical insight to formulate the relativity principle.
However he was completely stuck as long as he was unable to find the right mathematical language to express his intuitions. It was Minkowski who made out of those rather vague intuitions a predictive theory, which is taught today as general relativity.
The opposite exists too: Dirac deduced the existence of antimatter by mathematics alone, even though there was no good physical argument for its existence.
One should not confuse two things.
One is that in physics relying on a conjecture is fair game, provided that it is correctly (i.e. without ambiguity) formulated. I have no problem with that.
The other is that a bunch of qualitative speculations with no sound mathematical basis (with or without conjectures) is just epicycles. This is not science for me.
Much of the mathematics of chaos is related to non-dissipative non-stochastic models.
This is wrong. May I suggest that you reread my post because this is precisely one of the points that I would have wished to be understood?
Chaos theory (which is physics) deals to 99% with dissipative systems. Their “stochasticity” is a clearly defined property which may, but need not, exist. In temporal chaos it often does (see the linked Ruelle-Eckmann paper).
There is a very specific and well understood (both physically and mathematically) case of conservative, i.e. Hamiltonian, systems – the N-body problem. These are indeed non-dissipative and non-stochastic, but this class of systems doesn’t interest us here.
All the theoretical issues of ergodicity are in an essential way in this class.
This is also misunderstood. The issues of ergodicity have never played any role in this (conservative) class. On the contrary, they are fundamental for statistical mechanics and for dissipative systems. This has been understood since Boltzmann.
These “theoretical issues” are so fundamental that the favorite tool of statisticians which is time averaging would have no physical justification without ergodicity.
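As a toy illustration of why time averaging needs ergodicity for its justification (the fully chaotic logistic map, a standard ergodic example; nothing climate-specific is assumed): the time average along one long orbit and the ensemble average over many independent orbits both converge to the mean of the invariant density.

```python
# Toy sketch of ergodicity: time average = ensemble average for x -> 4x(1-x).
import numpy as np

def logistic_time_average(x0, n):
    """Time average of x along one orbit of the chaotic map x -> 4x(1-x)."""
    x, total = x0, 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

# Time average along a single long orbit.
time_avg = logistic_time_average(0.2, 200_000)

# Ensemble average: evolve many independent initial conditions and
# average one late snapshot over the whole ensemble.
rng = np.random.default_rng(0)
xs = rng.uniform(0.01, 0.99, 2000)
for _ in range(1000):
    xs = 4.0 * xs * (1.0 - xs)
ensemble_avg = xs.mean()

# The invariant density of this map is 1/(pi*sqrt(x(1-x))), whose mean
# is exactly 1/2 -- both averages should come out close to that value.
print(time_avg, ensemble_avg)
```

Without ergodicity the two numbers would have no reason to agree, and a time average computed from one observed history would tell nothing about the ensemble.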
I have emphasized in several earlier messages that modelers may err systematically in the direction of too much dissipation both in the equations and in the numerical approximations related to discretization. If they err in that direction, they are likely to miss important processes of the real Earth system, such as periodic or quasi-periodic oscillations or irregular transitions between modes.
This is too vague. Why would “erring” in the direction of “too much dissipation” lead to “missing some oscillations”?
Too much dissipation of what? Where? What is too much and what is too little?
The existence of energy dissipation (damping) together with energy supply is a necessary condition for the existence of chaotic solutions. It is the detailed examination of the dynamical equations that concludes whether this condition is sufficient too.
You would miss everything, and not only some oscillations, if you got these equations wrong. If the equations are wrong, the orbit is wrong too, regardless of how much dissipation (damping) there is or isn’t.
And if you have no equations at all, then of course anything will just be a wild guess.
On the other hand I do not exclude the possibility of building good climate models.
We agree on this one. We probably disagree on what “good” is because for me it implies a very high predictive skill and a high mathematical consistency.
To return to the already discussed Tsonis approach (one could also mention Bejan’s approach), I think that it bears comparison with Einstein’s problem of gravity.
I think that they have the right intuition but they are completely stuck because they haven’t found solid theoretical and mathematical footing for their intuition.
I am convinced that a Minkowski will appear one day and formulate a consistent quantitative and predictive theory using these intuitions.
The mathematical shape of this theory already exists, much as tensor calculus already existed at the beginning of the 20th century even though Einstein wasn’t familiar with it.
Of course I do not mean that mathematics would not apply on any period, but some specific theorems are only stated in that limit. The standard definition of ergodicity refers to that limit (time averages equal microcanonical ensemble averages in the limit of infinite time). Theorems by themselves may be unable to state anything about the approach to that limit, or if they do state how the limit is asymptotically approached, then this statement is again well defined only in that limit.
When I referred to mathematical theories, I referred to their use without an essential input from quantitative analysis of physical systems, like the atmosphere or Earth system.
I have objected to some writings of Terry Oldberg, stating that logic alone does not tell anything about the real world. Physics is not a subfield of logic. Similarly I object to the idea that mathematics alone would tell anything (a friend of mine, a professor of mathematics, told me the joking definition of mathematics: “the highest level of tautology”.) Both logic and mathematics tell about certain relationships, but not what these relationships are about.
For a physicist mathematics is an extremely important tool. Without mathematics we would have very little understanding of physics, but the theory of the physical world is physics, and mathematics is a tool. When we know certain things about the physical world, we can calculate additional things, but without that knowledge there is nothing real to calculate. Any mathematical result that has not been connected to the real values of the real physical system is almost certain to have nothing to do with the real world.
You cannot make realistic guesses about how important spatiotemporal chaos is for climate science without studying the Earth system thoroughly. Guessing that it is an important factor is worthless. Claiming that PDE’s bring in something important that is not described by discretized models is just a guess, and a guess that is not even particularly likely to be true, because dissipation and stochastic disturbances are likely to dampen the influence of the smallest scale phenomena. Turbulent eddies are important at some spatial scales, but it is likely that small scale turbulence can be handled by approximative methods. In many cases the k-epsilon model may be sufficient, some other cases may require better turbulence models, but it appears very likely that discretized models can capture everything important for understanding climate.
The analysis of empirical data by Tsonis et al and other similar work is interesting. How useful it is in understanding the Earth system remains to be seen.
Einstein and numerous other great physicists have created new basic theories of physics. They have often picked some little-used field of mathematics and used it to describe the new theory. Basic theories may be genuinely new and also concise to define, but the Earth system is not a new theory. The Earth system cannot be described concisely in any theory, as it includes details like the shape of the ocean floors and continents. It will always be a system that is best analyzed by combining many fields of science. Analyzing it doesn’t require new physical principles, but it may well involve new mathematical tools. Developing such tools is valuable, but they are not likely to revolutionize what we think about the system. We know already that the Earth system has chaotic behavior in a loose and descriptive meaning of the word. This influences the modeling efforts. The formal mathematical classification of this chaotic behavior is not necessarily important or even useful.
Similarly I object to the idea that mathematics alone would tell anything
Well, tell that to Dirac, Feynman, Schwartz etc., not to me. It was mathematics alone that led Dirac to predict antimatter and even to determine its properties by … mathematics alone.
There has never been (and can’t be) an experimentalist who could have explained energy and momentum conservation until Noether came and did it with … mathematics alone.
I could go on and on but it would only tell the same story.
So actually my first impression in the first post was correct after all.
You really think that mathematics are not necessary for physics.
It is just a gadget that can be sometimes of limited use but one can do physics as well if not better if mathematics don’t get in the way, right?
Basically they can’t tell anything anyway.
I couldn’t be farther from this opinion, and I think that I am in quite good company.
Claiming that PDE’s bring in something important that is not described by discretized models is just a guess, and a guess that is not even particularly likely to be true, because dissipation and stochastic disturbances are likely to dampen the influence of smallest scale phenomena.
A guess?? Well then here too I share this “guess” with virtually all people who did research on PDEs in general and numerical algorithms in particular.
What else are these “discretized models” than an attempt at finding an approximate solution of a … PDE system?
Every practitioner of fluid dynamics knows that some numerical approximations (RANS) work only for a limited part of the parameter space while others work for other parts.
There is a wealth of studies about finite-algorithm convergence. If J. Stults happened by here he could have written pages about how superficial your vision of numerical methods applied to PDE solutions is.
Did you read at all the link for the Temam result on the finiteness of the attractor dimension for 2D Navier Stokes?
Also a guess?
In any case this particular “guess” could not have been found by any numerical simulation.
The Earth system cannot be described concisely in any theory as it includes details like the shape of the ocean floors and continents…We know already that the Earth system has chaotic behavior in a loose and descriptive meaning of the word. This influences the modeling efforts. The formal mathematical classification of this chaotic behavior is not necessarily important or even useful
Sorry Pekka but this really looks like a prescientific statement.
Basically “we” (perhaps you but surely not me) know everything, there is nothing new. Science is settled. Any attempt at bringing some mathematics in this mess is impossible because the shapes of continents are too complicated. And it would be useless anyway. Did we mention that it’s unimportant too?
It reminds me of what one reads about the “old guard” in the ’20s and ’30s, who would say, just like you, that all this fancy quantum mechanical mathematics is useless and unimportant. After all, we already know everything anyway, don’t we?
We prefer to keep our good old luminiferous aether and the epicycles, if one listened to those crazy youngsters with their maths we’d finish by believing that there’s something in this relativity nonsense :)
Well at least that settles one important thing for me – it leads nowhere to try to discuss constructively non linear dynamics in weather/climate with people sharing this kind of mindset because unfortunately in this domain mathematical understanding is a necessary and sufficient condition to talk about the underlying physics.
It is actually important and useful too.
Yes. New physical theories are often described by mathematics that hasn’t been used widely before, sometimes even by completely new mathematical formalism; and yes, that brings sometimes with it new unforeseen predictions that are then confirmed.
Dirac didn’t choose his mathematical formalism just because such mathematics existed, but because it was needed to describe relativistic electrons. The result was so significant because he was developing a basic theory of physics, as did Einstein, and Maxwell and … (As a curiosity: the book I used most in learning QM was Dirac’s. I liked his way of presenting the formalism more than the textbooks used in the lecture course. That was in 1967, I think. A few years later I was using his formalism when I was lecturing on QM myself.)
I am in no way arguing against new theories, where they are needed. I am arguing against unjustified certainty that somebody’s pet theory works better than the models developed by others. It may do that, but that claim has to be justified properly. The fact that some mathematics contains such features is in no way evidence that those results are important. There is an innumerable infinity of possible hypotheses; most of them are wrong.
Tomas and Pekka have made several points that deserve attention, and I would like to comment briefly on a few.
The “averaging out” issue has been bandied about repeatedly now. My perspective is empirical – timescales are critical. I have no problem agreeing that oscillations are possible on all timescales, but empirically, in terms of magnitude, we have little evidence for wide oscillations on approximately centennial-scale intervals that would seriously distort the interpretation of forced trends. More importantly, however, for the past century, we have ample positive evidence from radiative physics complemented by observational data that a combination of forcings from solar variation, CO2 and other anthropogenic greenhouse gas emissions, anthropogenic black carbon aerosols, and sulfate and other cooling aerosols must account for a significant fraction of the observed temperature rise of about 0.8 C – in other words, their contributions can be stated as significant from what we know about them and not merely because the rise can’t otherwise be explained. There are likely to have been minor contributions from known fluctuations such as the PDO (to the extent that it was independent rather than anthropogenically forced). This leaves relatively little room for other fluctuations that might have occurred but have not been observed, and therefore provides informative data on the strength of all the forcings, including the forcing due to CO2. The alternative is that the conclusions derived from the physics and the confirming observations are seriously in error and must be discounted, and that most of the observed trend was due instead to an unforced fluctuation whose existence can be theorized but not proved. I will leave it to others to judge where the burden of proof lies.
Regarding the statement that GCMs don’t solve the equations for the dynamics, my response is that they do not try to. Pekka has earlier emphasized a critical feature of how the models approach climate – discretization. By using grid cells and atmospheric layers rather than continuous functions, the models approximate the kind of averaging that would be hopeless if precise solutions were attempted. How well do they do? That too is a matter of judgment. For long term global temperature trends, they have done fairly well not only for hindcasting, but also in the case of Hansen et al (1988) for predicting future trends over multiple decades. If Hansen’s model, which entailed a climate sensitivity of 4.2 C per CO2 doubling, had instead used inputs closer to the current modal estimate of about 3 C, the predictions would have been highly accurate. The assertion that predictions beyond a century are very problematic may be true, but that does not invalidate their utility for many assessments they are currently involved in.
It is also true that the models perform less well in other regards, including regional temperatures, precipitation, etc. It’s reasonable to expect models to improve in these areas, but the point also illustrates another aspect of why climate dynamics like temperature variation may be relatively less vulnerable to some of the obstacles mentioned by Tomas. As an approximation, the equilibrium temperature response to CO2-mediated forcing can be derived from a radiative imbalance calculated at the tropopause, combined with assumptions about lapse rates that can be verified by observation. It is unnecessary to do any averaging over the surface, because the surface simply responds via the lapse rates to the changes at the tropopause in conjunction with the Stefan-Boltzmann equation. Indeed, this rough approximation yields a no-feedback response to CO2 doubling of 1 deg C, and the models, by incorporating variations in lapse rate, seasonality, grid cell temperature variation, and other variables, have yielded values for globally averaged anomalies that differ from 1 C, but not dramatically – at about 1.2 C/doubling. As far as I know, the kind of abrupt discontinuities over short distances that might seriously distort any attempt at surface averaging are much less in evidence at the altitudes that determine flux imbalances at the tropopause, and this may be one reason why the rough approximation based on a homogeneous tropopause, the improvements introduced by the models in accounting for heterogeneity, and the ability of the models to match observations are not in serious conflict. The estimation of feedbacks is more problematic, but water vapor and ice/albedo can be calculated and observationally documented with some accuracy, leaving clouds as the more uncertain of the major feedbacks, but with good confidence that the feedbacks amplify the no-feedback response.
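The rough approximation described above can be written out as a back-of-envelope sketch (using the standard simplified forcing expression 5.35·ln 2 W/m² per CO2 doubling and an effective emission temperature of about 255 K; these round numbers are textbook values, not GCM output):

```python
# Back-of-envelope sketch of the no-feedback (Planck) response to a CO2 doubling.
import math

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0         # approximate effective emission temperature of Earth, K

# Standard simplified forcing expression for a CO2 doubling.
dF = 5.35 * math.log(2.0)                      # ~3.7 W/m^2

# Differentiating F = sigma*T^4 gives dF = 4*sigma*T^3*dT, so the
# no-feedback response at the emission level is:
dT_no_feedback = dF / (4.0 * SIGMA_SB * T_EFF**3)

print(f"forcing per doubling : {dF:.2f} W/m^2")
print(f"no-feedback response : {dT_no_feedback:.2f} K")
```

With these round numbers the result comes out at about 1 K per doubling, consistent with the figure quoted above; the model-derived ~1.2 C and the feedback-amplified values are of course not captured by this one-line estimate.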
It is also a consequence of the above that variables such as precipitation, mentioned above, are more susceptible to some of the problems inherent in heterogeneity.
I would summarize my perspective by agreeing with the value of the theoretical approaches based on spatio-temporal chaos, but only when combined with a full knowledge of the physical principles and empirical data that apply to our climate and the factors that affect it. That empiricism leaves room for a role for unforced variation, but it does not permit the role of anthropogenic and other forcings to be trivialized, nor for their future impact to be disregarded.
“Regarding the statement that GCMs don’t solve the equations for the dynamics, my response is that they do not try to. Pekka has earlier emphasized a critical feature of how the models approach climate – discretization. By using grid cells and atmospheric layers rather than continuous functions, the models approximate the kind of averaging that would be hopeless if precise solutions were attempted.”
Fred, I believe you are incorrect here. You always start with the differential equations which describe your system physically and apply numerical discretization techniques (e.g. finite/spectral element, finite difference, finite volume) to obtain your numerical model. If GCMs “do not try” to solve some set of differential equations grounded in the physics they are purportedly modeling, then they are in more trouble than I ever imagined…
Fortunately, most GCM research groups publish documents which list the differential equations, and associated boundary and initial conditions, that they are solving with their codes (with the notable exception of NASA GISS).
I clarify my point. The models are certainly based on laws of physics that are described by differential equations, but they may be written directly for the discretized system bypassing the explicit consideration of the partial differential equations. Actually the partial differential equations are commonly derived by starting from equations written for a discrete system of finite elements and the numerical method is based essentially on these equations directly.
This is always an approximation, because the equations are typically accurate only in the limit of infinitesimal elements, but this approach is quite useful in many practical applications.
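A minimal sketch of what writing the equations directly for a discretized system can look like (a 1-D heat balance between finite cells; grid size and time step are arbitrary illustrative values): each cell simply exchanges heat with its neighbours, the discrete balance conserves total heat exactly by construction, and it recovers the PDE u_t = α·u_xx only in the limit of infinitesimal cells.

```python
# Sketch: the 1-D heat equation written directly as a per-cell energy balance.
import numpy as np

def step_cells(u, alpha, dx, dt):
    """One explicit time step of a per-cell heat balance (insulated ends).
    Each cell exchanges heat with its neighbours in proportion to the
    temperature difference -- a finite-volume form of u_t = alpha*u_xx."""
    flux = alpha * np.diff(u) / dx      # flux between adjacent cells
    unew = u.copy()
    unew[:-1] += flux * dt / dx         # heat gained from the right neighbour
    unew[1:] -= flux * dt / dx          # heat lost to the left neighbour
    return unew

n, dx, alpha = 50, 0.02, 1.0
dt = 0.2 * dx**2 / alpha                # safely below the stability limit
u = np.zeros(n)
u[n // 2] = 1.0                         # initial hot spot in the middle
total0 = u.sum()

for _ in range(500):
    u = step_cells(u, alpha, dx, dt)

# The discrete balance conserves total heat exactly (up to rounding),
# whatever the cell size; it matches the PDE only as dx -> 0.
print(abs(u.sum() - total0))
```

This is the sense in which the discrete equations can be written down from the physics directly, with the partial differential equation appearing only as the continuum limit.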
Frank – Thanks for your comment. Although not a climate modeler, I’m aware that differential equations are at the heart of climate modeling. It is also my understanding that to render equations tractable, discretization is used to allow numerical techniques to substitute for exact analytical solutions, with the expectation that good approximations are possible. I have also seen discretization described as relevant to the application of the equations to non-continuous properties such as grid cells and atmospheric layers. Those were my intended meanings when I stated that the GCMs “don’t try” to solve the equations in the sense of “solve” used by Tomas. The models certainly try to arrive at “solutions” that come realistically close to how the climate behaves in regard to the relevant variables, even if those solutions are not exact. In fact, that was my larger point – that important climate dynamics can be well approximated without exact solutions, particularly when they entail changes at the tropopause rather than attempts to average temperature over the global surface.
Just to clarify: can what you wrote above be translated as: the fundamental mathematical issues with PDE systems can be “circumvented” with numerical equation solving methods? Not knowing your background, are you familiar with numerical methods used to perform these calculations, for instance to obtain one or more numerical approximations of solutions for a PDE system?
Pekka is more expert than I am with numerical methods and discretization. I’m sure he would be willing to respond to your question. I could probably cite examples in regard to radiative transfer calculations, but he will give you a more comprehensive answer.
Fred and Anander,
I think that I have already answered Anander’s question on the level appropriate to this discussion.
Dr Moolten & Pekka, there is no need to go into details of numerical approximations, of course. It is of course true that if we use these equations within computer models they need to be discretized and solved numerically – if for no other reason than that computers are digital. This is obvious.
But what is not that obvious, and what I interpret as being one of the points Tomas was raising in this post, is that none of the original characteristics of the PDE systems – chaos, nonlinearity, initial value sensitivity, etc. – magically disappear if we use numerical methods to solve them instead of seeking exact analytical solutions.
Of course, under certain boundary conditions (and other necessary limitations) that are needed when developing and running computer programs doing these computations (e.g. those arising from finite word length, keeping physical variables within reasonable bounds, ensuring finite computation time, etc.), yes, we might get some estimates for various phenomena relevant to the state of the climatic system.
As I see it, and as I’ve been taught about the relevant topics and learned from my own (very small scale compared to GCM work) modelling, this approach – discretization and numerical equation solving methods – in no way hides or changes the fundamental characteristics of the underlying system. If it did, we wouldn’t be modelling the same system, would we? No strawmen here, Dr Moolten or Pekka; you didn’t directly say this would be the case. I just wanted to underline this.
I have in some messages of this thread commented that it is not at all obvious that the differences between PDE’s and the ordinary differential equations of a spatially discretized model are significant. The differences concern the question of whether the small scale processes lead to large scale consequences that cannot be analyzed essentially as well by appropriate discretized equations. I consider it unlikely that such consequences would be important, because the dissipative effects are quite strong.
On this basis I consider it much more likely that all essential large scale effects can be described by a discretized model with a cell size something like that of the most detailed present AOGCM’s. To reach that result the models must be improved in many ways. This includes a better description of clouds, which is dependent on processes of smaller spatial scale, but when the processes are understood better it is likely that they can also be introduced to the larger cells using appropriate additional equations that describe those effects that are important for the larger scale results. Similarly the oceans must be modeled much better, but again including very small spatial scales to the Earth model is not likely to be needed.
To conclude: I consider it quite possible and even likely that the Earth system has essential complex dynamics that is presently not understood, but I see no strong reason to expect that the difference between PDE’s and discretized models is an important factor in this issue.
My impression is that this thinking differs essentially from what Tomas Milanovic presents in his posting, but it is also possible to see something like spatio-temporal chaos while looking only at the larger scale processes of a discretized model. The number of cells is still very large, and a system with such a large number of discrete variables may be very different from the typical examples of temporal chaos in systems with very few variables. Thus the difference is not necessarily as large in practice as it is formally.
Some follow up comments about numerical methods and GCMs.
The reason I was stressing the partial differential equations is that any numerical discretization must recover the solution of the original differential equations in the limit as the mesh size shrinks to zero. The error (known as the truncation error) is proportional not only to the spatial discretization but to the time step size as well. This can be shown using classical Taylor series methods for finite difference discretizations.
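The Taylor series argument can be checked numerically. The sketch below (a toy illustration of my own, not GCM code) differentiates f(x) = sin(x): halving the mesh size h halves the error of a one-sided difference, whose truncation error is O(h), but quarters the error of a centered difference, which is O(h^2).

```python
import numpy as np

# Truncation error in action: one-sided difference (f(x+h) - f(x)) / h has
# error O(h); centered difference (f(x+h) - f(x-h)) / (2h) has error O(h^2).
f, fprime, x0 = np.sin, np.cos, 1.0

def errors(h):
    forward = (f(x0 + h) - f(x0)) / h
    centered = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    return abs(forward - fprime(x0)), abs(centered - fprime(x0))

e1_fwd, e1_cen = errors(1e-3)
e2_fwd, e2_cen = errors(5e-4)   # halve the mesh size

# Halving h halves the O(h) error but quarters the O(h^2) error.
print(e1_fwd / e2_fwd)   # ~2
print(e1_cen / e2_cen)   # ~4
```

The same Taylor-series bookkeeping, applied to the time step, gives the temporal contribution to the truncation error.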
You also have to consider the stability of the numerical method. Stability means that the errors inherent in the unsteady solution remain bounded as the solution is integrated in time and space. Unstable solutions produce garbage, and don’t necessarily become unbounded (i.e. “blow up”). The manner in which the discretization is derived has a huge impact on stability. Practically speaking, the stability “limit” of a method constrains the maximum allowable time step that can be taken. Unfortunately, you can only prove stability limits for simple model equations, and certainly not for systems of non-linear partial differential equations! For GCMs, you basically have to estimate a reasonable time step, cross your fingers, and run. Eventually you find one that works, but that still tells you nothing of the accuracy unless you can compare with exact solutions or data.
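For one of the rare cases where a stability limit can be proven – the explicit scheme for the 1-D heat equation, where the limit is dt ≤ dx²/2 – the consequences of crossing it are easy to demonstrate. The sketch below is a toy illustration of my own, not anything from a GCM; a tiny grid-scale perturbation stands in for the round-off noise that is always present.

```python
import numpy as np

# Stability limit in action: forward-Euler / centered-space for u_t = u_xx
# is stable only when r = dt/dx^2 <= 1/2.  Just above the limit the solution
# turns to garbage long before anyone notices a clean "blow up".
N = 51
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

def run(r, steps=200):
    dt = r * dx**2
    u = np.sin(np.pi * x)
    # Tiny grid-scale perturbation, standing in for round-off noise.
    u[1:-1] += 1e-10 * (-1.0) ** np.arange(1, N - 1)
    for _ in range(steps):
        u[1:-1] += dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return np.max(np.abs(u))

print(run(0.4))   # stable: the amplitude simply decays
print(run(0.7))   # unstable: grid-scale oscillations grow enormously
```

For the nonlinear coupled systems in a GCM no such proof exists, which is exactly why the time step ends up being chosen by trial and error.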
The situation is even more ominous for GCMs once you begin to add highly non-linear source terms, boundary conditions, and a myriad of other “physics”. Then there is the coupling with other models (ocean models + atmospheric models + aerosol models). Nothing can be proven at this point – the codes become black boxes, and you have to introduce all kinds of unphysical tricks (e.g. artificial smoothing, limiters, etc.) to keep the “solutions” from blowing up.
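One such "unphysical trick" can be shown in miniature. In the toy sketch below (my own illustration, not code from any GCM), the naive forward-in-time, centered-in-space scheme for the advection equation is unconditionally unstable, but adding a small artificial diffusion term – here the classical Lax-Wendroff amount, c²/2 – keeps the solution bounded. The smoothing is purely numerical, not physical.

```python
import numpy as np

# FTCS for u_t + a u_x = 0 is unconditionally unstable; artificial
# diffusion of strength c^2/2 (the Lax-Wendroff choice) stabilizes it.
N = 100
a, dx = 1.0, 1.0 / N
dt = 0.5 * dx / a                  # Courant number c = 0.5
c = a * dt / dx
u0 = np.sin(2.0 * np.pi * np.arange(N) * dx)   # periodic initial profile

def step(u, eps):
    """One FTCS step plus artificial diffusion of strength eps."""
    up, um = np.roll(u, -1), np.roll(u, 1)     # periodic neighbours
    return u - 0.5 * c * (up - um) + eps * (up - 2.0 * u + um)

u_raw, u_smoothed = u0.copy(), u0.copy()
for _ in range(400):
    u_raw = step(u_raw, 0.0)                   # pure FTCS: unstable
    u_smoothed = step(u_smoothed, c**2 / 2.0)  # with artificial smoothing

print(np.max(np.abs(u_raw)))       # grows beyond the initial amplitude
print(np.max(np.abs(u_smoothed)))  # stays bounded near 1
```

Every such fix buys boundedness at the price of accuracy the user can no longer easily account for, which is the point about black boxes below.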
It is for these and many other reasons that I have been very vocal about the importance of documenting GCM source codes with descriptions of both the theory (physics) and the numerical discretizations, each written out in great detail. Otherwise GCM codes are simply complex black boxes and, in my opinion, cannot be used to “prove” or “predict” anything, even though they may sometimes generate a result that “looks realistic”.
Tomas Milanovic has posted an excellent description of the current understanding of the chaotic behaviour of such systems, and I commend you for this inclusion on your blog. It’s very refreshing to see the attractor type, the attractor boundary, and its field intensity discussed openly.
Otherwise these ‘systems’ are considered separate and remote from one another.
Best regards, Ray Dart.
Hi Tomas, I hope you are still monitoring this thread. I was thinking about how CO2 would vary the relative strengths of the strange attractors, glacial and interglacial. While both have their own set of strange attractors, these appear to be the dominant ones.
Playing around with another equation, I have a rough estimate of the maximum change in radiative forcing at the surface of 1.21, which would produce a rough maximum climate sensitivity of 2.3. This may increase the strength of the interglacial attractor. I don’t know, of course, but is there a way to estimate the probability?