by Tomas Milanovic

Few scientific concepts are more often misunderstood in blog debates than Determinism and Predictability. Many commenters treat these two concepts as if they were equivalent, which leads to faulty or irrelevant arguments.

Having read Dan Hughes’ interesting post about climate models, and many but not all of the comments, I thought that it might be useful for Climate Etc denizens to obtain further insight into these problems.

The post is structured in the following way:

The first section examines some particular classes of natural laws and draws from them lessons about predictability.

In the second section we consider the Navier-Stokes equations and review what we know about them and their solutions.

In the third and last section we extend the insights from the second section to the climate system, to see what hypotheses about the predictability of this system can be addressed.

If you have an allergy to Hilbert spaces, uncountably infinite dimensions and operator spectra, you should stop reading at this sentence.

### I. Deterministic Laws of Nature

The following four equations and systems are well-known dynamical laws:

(1) Newton’s second law: m d²X/dt² = F

(2) the Lorenz system: dX/dt = S(Y − X), dY/dt = X(R − Z) − Y, dZ/dt = XY − BZ

(3) the heat equation: ∂T/∂t = k ∇²T

(4) the Schrödinger equation: iħ ∂Ψ/∂t = −(ħ²/2m) ∇²Ψ + VΨ

A first observation is that all four equations describe dynamical variables as unknown functions:

- X is the position in (1);
- X, Y, and Z are variables related to the fluid flow in (2);
- T is the temperature in (3);
- and Ψ is the wave function in (4).

A second observation is that the equations have been grouped into two groups of two. In the first group the dynamical variables are functions **only** of time. In the second group the dynamical variables are functions of time **and** space. This difference has huge mathematical consequences that we will find throughout the whole post. The laws of nature of the first group, having one independent variable, are expressed as Ordinary Differential Equations (**ODEs**) while the laws of nature of the second group, having four independent variables, are expressed as Partial Differential Equations (**PDEs**).

The third observation is that all four equations and systems are **deterministic**. This means that if we know the values of the dynamical variables at time *t* for (1) and (2), the equations yield a unique value at a later time (*t* + *dt*). For (3) and (4), if we know the value of the dynamical variable at time *t* and at a position *x*, then the equations yield the value of the dynamical variable at a later time (*t* + *dt*) and/or at another position (*x* + *dx*).

These observations are enough to start considering the **predictability** of the dynamical variable.

Indeed the equations allow us to compute future states that are *dt* later, but *dt* is an **infinitesimal**. What we would like is to know the value at a macroscopically later time *t*+*t1*, and ideally for eternity, when *t1* goes to infinity. The mathematical process which allows exactly that is called integration.

When the integration is analytically possible (and we suspect that there is no reason why it should always be possible) we will obtain functions which give the values of the dynamical variables for eternity and for any positions. In this case, **and only in this case**, we will be able to uniquely predict the values of the dynamical variables in the whole space-time.

So do **unique** solutions of our four natural laws (1) exist and, (2) can we find them?

The answer to the first question is easy. It can be shown that a unique solution exists for all four equations under some conditions. For the first two ODEs, there is a general proof that under some conditions of continuity these kinds of equations always have a unique solution, provided that we know the required number of **initial values**. As (1) is of second order, we need two initial values (X(0) and dX/dt(0)), while three initial values X(0), Y(0) and Z(0) are enough for (2), which is of first order.

The situation is much more complex for the PDEs of equations (3) and (4). We still need initial conditions but as this time the system is extended over some spatial domain D, we need an initial condition for **every single point** of D.

In other words, for example in the case of equation (3), we need the solution to satisfy the condition T(0,*x*)=a(*x*) for all *x*, where a(*x*) is a function representing the temperature at every point at time *t*=0. In addition, the domain *D* is not always the whole space. For example, if the physical domain *D* is a rod, we are only interested in the temperature distribution inside the rod. This implies that in addition to initial conditions, which are **always** necessary, for finite domains we also need to impose **boundary conditions** which specify the properties of the dynamical variable at the boundary of *D*. Finally, given the initial conditions in the whole *D* (always) and boundary conditions (only if *D* is finite), we can prove that equations (3) and (4) have a unique solution.

The second question, can we find the solutions, is very difficult and we are at the heart of our problem. Each of the equations is considered in turn in the following discussions.

**Equation (1)** In many cases of practical interest it is integrable. So for these cases this natural law is both deterministic and predictable with arbitrary accuracy. We will see a special case of this law below.

**Equation (2)** These equations have the obvious property of being non-linear and the subtle property of being sensitive to initial conditions for some values of the parameters (S, R, B). It is important to note that these two properties are **not** equivalent. Sensitivity to initial conditions always implies non-linearity (and, in the case of ODEs, three or more dependent phase-space variables), but non-linearity does not always imply sensitivity to initial conditions; analytical solutions to non-linear cases of equation (1) exist. The sensitivity has far-reaching consequences, because even if we know that a unique solution exists, we are unable to compute it.

The sensitivity to initial conditions means that the smallest difference in initial conditions will lead to solutions which separate exponentially with time. No computer can help here, as every computer works with **finite** accuracy. If we choose two initial conditions which can’t be resolved by the computer, we will obtain two wildly different solutions even though the computer sees only one initial condition. The computer’s “solution” will only be reasonably correct at the beginning but will become totally wrong for later times. Such a system is deterministic but unpredictable on long time scales and is called a **temporally** chaotic deterministic system.

The word **temporal** is very important because the dynamical variables depend only on time. If we apply equation (1) to a system of three bodies, with F the gravitational force among them, we obtain a non-linear differential equation which is also sensitive to initial conditions, and all conclusions drawn from equation (2) apply.

A system of three (or more) bodies interacting gravitationally is also deterministic but unpredictable on longer time scales. In this particular case of (1) we deal again with temporal deterministic chaos. Temporal chaos is relatively well understood, and there are many useful results especially concerning attractors (strange or not), but it is not the subject of this post to study temporal chaos.

**Equation (3)** It is linear, so there are no particular problems (unless the initial conditions are extremely savage: discontinuous or non-differentiable, for example). The solution can be computed and this natural law is both deterministic and predictable with arbitrary accuracy, both analytically and numerically.
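As a contrast to the chaotic case, here is a minimal sketch of equation (3) on a rod with zero temperature at both ends, assuming k = 1 and the initial condition a(x) = sin(πx), for which the exact solution is known (grid sizes are illustrative choices):

```python
import numpy as np

# Heat equation (3) on the rod [0, 1] with T = 0 at both ends (boundary
# conditions) and T(0, x) = sin(pi x) as the initial condition a(x).
k = 1.0
nx = 101
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / k          # below the explicit stability limit 0.5 dx^2 / k
x = np.linspace(0.0, 1.0, nx)
T = np.sin(np.pi * x)

t_end = 0.1
steps = int(round(t_end / dt))
for _ in range(steps):
    # explicit finite-difference update of the interior points
    T[1:-1] += k * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # the boundary points stay at zero

# The analytic solution exp(-k pi^2 t) sin(pi x): deterministic AND predictable.
t = steps * dt
T_exact = np.exp(-k * np.pi**2 * t) * np.sin(np.pi * x)
print(np.max(np.abs(T - T_exact)))  # small discretisation error
```

Unlike the Lorenz case, refining the grid here systematically shrinks the error: the computed solution converges to the unique analytic one.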

**Equation (4)** This equation is formally identical to equation (3), a linear PDE, and the conclusions for equation (3) apply. The solution can be computed and this natural law is both deterministic and predictable with arbitrary accuracy. The reason why I mentioned this particular natural law governing quantum mechanics is not because the equation is special. It is because the dynamical variable has a very specific role not seen in any other law of classical physics. The presence of “i” in the equation shows that Ψ is a complex number, so it cannot represent any usual quantity in the physical domain, as these are necessarily real.

Actually Ψ has a property that doesn’t exist in classical physics. It plays the role of a probability amplitude (its squared modulus is a probability density) which allows the calculation of **probabilities** of measurement outcomes for **all** physically significant variables (energy, momentum, …). The only knowable information we can have about the system are the probabilities of measurement outcomes. The Hermitian operators playing the role of physical variables impose, together with Ψ, the spectrum of authorised values for the physical variable, and the result of a measurement is a number belonging to the set of allowed values. We have here a very special case of a deterministically computable equation which allows only the prediction of probabilities.

### II. Navier-Stokes Equations

The N-S equations govern fluid mechanics. They are basically an expression of energy, momentum and mass conservation applied to a fluid.

We will use the variational form, which is the most widely used, and will focus only on the momentum equation, which concentrates all the difficulties. After projection onto divergence-free fields it can be written as

du/dt = A u + B(u, u) + f

The advantage of this form is that the projection has ensured a divergence-free velocity field and eliminated the pressure, so that we are left only with the velocity vector u as the dynamical variable. The purpose here is not to study N-S, so the reader doesn’t need the details of the operators. It is enough to see that A is a linear operator comparable to a Laplacian (the Stokes operator) and that B is obviously nonlinear because it contains products of the form u_j ∂u_i/∂x_j.

What do we know about the existence, uniqueness and regularity of solutions of the 3-D N-S equations? Well, basically not much.

The existence, uniqueness and regularity of solutions in the general case has been an open problem for nearly two centuries and remains so still today. There was a reason why I didn’t take the N-S equations as an example in section I: we have here the worst case of what we could get, so it would not be a simple example. The N-S equation is a nonlinear PDE with sensitivity to initial conditions. It cannot be solved in the general case. It is an example of deterministic **spatio-temporal** chaos.

We will now examine more closely what that means and why spatio-temporal chaos is very different from the simple temporal chaos that we have seen in section I. The best start is with the phase space because it is there that the dynamical variables live.

In temporal chaos the phase space is simply **R**^N. For instance the phase space of the Lorenz system (2) has 3 degrees of freedom X, Y and Z so that the evolution of the system can be described by a curve in the ordinary 3-D space. More generally the temporally chaotic systems have their phase spaces in a finite and often low dimensional Euclidean space. The number of dimensions is equal to the number of degrees of freedom and each degree of freedom is a dynamical variable.

In spatio-temporal chaotic systems the phase space is an infinite dimensional Hilbert space. This can be intuitively understood. If we fix a point *M* of the spatial domain *D*, then the PDE at the point *M* is an ODE, because only *t* may vary at a fixed point. The solution of the ODE at the fixed point *M* is then one degree of freedom of the system. But as there is an infinity of points in *D*, the PDE is equivalent to an infinity of coupled ODEs, so that there is an infinity of degrees of freedom and the phase space is an infinite dimensional functional space.

Another way to characterize the phase space is to consider the example of solutions by separation of variables. When looking for solutions f(*x*,*t*) of a PDE, one looks for solutions of the form f(*x*, *t*) = X(*x*) T(*t*). For linear PDEs like (3) and (4) this allows us to obtain two ODEs, one for X with variable *x* and one for T with variable *t*. Solving these two ODEs for given initial and boundary conditions leads to a general solution which is an infinite sum of terms Xi(*x*) Tj(*t*).

These “elementary” functions Xi(*x*) and Tj(*t*) can be seen as vectors of an orthogonal basis of the set of solutions of that given PDE. Here again the basis contains an infinity of vectors, so the space of solutions is infinite dimensional. For example, the eigenfunctions of the hydrogen atom form an orthogonal infinite basis of the state space; their angular parts are the well-known spherical harmonics.
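For the heat equation (3) on a rod of length L with zero temperature at both ends, the separation-of-variables machinery can be written out explicitly (a standard textbook result, with k the diffusivity and Θ the separated time factor):

```latex
% Heat equation on [0, L] with T(t,0) = T(t,L) = 0 and T(0,x) = a(x):
%   \partial T / \partial t = k \, \partial^2 T / \partial x^2
% Separation T = X(x)\,\Theta(t) gives the eigenmodes X_n and decaying \Theta_n:
T(t, x) \;=\; \sum_{n=1}^{\infty} b_n \,
        \sin\!\Big(\frac{n\pi x}{L}\Big)\,
        e^{-k (n\pi/L)^2 t},
\qquad
b_n \;=\; \frac{2}{L} \int_0^L a(x)\, \sin\!\Big(\frac{n\pi x}{L}\Big)\, dx .
```

Each basis function sin(nπx/L) is one vector of the infinite orthogonal basis, which is exactly why the space of solutions is infinite dimensional.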

Finally the phase space where the solutions of N-S live is an infinite dimensional Hilbert space of integrable functions f(*x*, *t*) which is very different from the phase space of a temporal chaotic system which is a simple finite dimensional Euclidean space.

This prompts a word of caution: analogies between temporal chaos (relatively well understood) and spatio-temporal chaos (badly understood) should not be used, because they are likely to be wrong and misleading.

At this stage as we know that N-S equations cannot be solved and exhibit unpredictable behaviour, it is time to ask: “*And what about Computational Fluid Dynamics (CFD) ?*”

Well, there are many reasons why N-S is difficult, but one of the most important is the dependence on the initial conditions and on the nature of the boundary of the physical domain. One can simplify both by constraining the former to a very small volume of the phase space and the latter to very simple geometries (a cube, two parallel planes, etc.). Additionally, one must eliminate the turbulence at very small spatial scales. In industrial applications this happens mostly by using the Reynolds Averaged Navier-Stokes (RANS) equations, which are a transformation of the original N-S. The problem with RANS is that it is not closed (there are more unknowns than equations). The only solution is to introduce new empirical equations, which boils down to sophisticated curve fitting. These equations can then only be used in the special cases for which the curve fitting was validated.

After all this work CFD will give reasonable results but only for very small volumes (a few meters), strictly defined initial/boundary conditions, and simple geometries. It is obvious that these particular applications don’t give any insight about the solutions of N-S when the strong constraints are not respected.

To give a practical example: CFD can be used on a simple, smooth, small, horizontal wing moving at constant speed in calm air at constant temperature, and give reasonable results for air pressures and velocities near the wing. However, if the speed leaves the specified envelope (the plane stalls), the velocities and pressures become strongly chaotic, the CFD model is no longer able to compute drag and lift accurately, and the trajectory of the plane becomes unpredictable.

CFD is extremely taxing on computer resources. To realize how taxing, consider that CFD studies of combustion engines, with a typical scale of 10 cm, need a spatial mesh size of 0.1 mm and a time step of approximately 1 µs. This explains why CFD cannot be used for systems whose typical size is larger than a few meters. If we compare a CFD domain with a climate model, the spatial resolution of CFD is **one billion** times finer and the time step is **100 million** times shorter, which gives an idea how coarse a climate model grid is.
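The arithmetic behind those ratios is easy to check with round numbers (the ~100 s climate model time step is my assumption, chosen to match the stated factor of 100 million):

```python
# Order-of-magnitude comparison: CFD of a combustion engine versus a typical
# climate model grid (round illustrative numbers).
cfd_dx = 1e-4        # 0.1 mm CFD mesh size, in metres
climate_dx = 1e5     # ~100 km climate grid, in metres
cfd_dt = 1e-6        # ~1 microsecond CFD time step, in seconds
climate_dt = 100.0   # assumed ~100 s climate model time step

print(climate_dx / cfd_dx)   # spatial resolution one billion times finer
print(climate_dt / cfd_dt)   # time step 100 million times shorter
```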

The last important question about N-S that we will examine is: “*And what about asymptotic behaviour?*”

This question is inspired by the fact that N-S deals with a **dissipative** dynamical system. As it is known that volumes in phase spaces **decrease** when a system is dissipative, there could be hope that the solutions will settle on some manifold after a long time instead of exploring the whole infinite dimensional phase space forever.

Recent research answers that this hope could be, at least under some conditions, justified. Indeed it has been proven for 2-D N-S that there exists a global finite dimensional attractor, and an upper bound for its dimension was found. Let us not be misled by the expression “finite dimensional”: the number of dimensions depends on many factors and is counted in billions. No such result has been proven for 3-D N-S, even if some encouraging results for weak topologies and with some constraints have been obtained.

This means concretely that if we speculate on the existence of such a global attractor (for some sufficiently regular initial conditions and domains), then every solution could be expressed as a combination of a few billion well-defined functions Fi(x,t). These functions can then be considered as the basic dynamical modes which fully describe the asymptotic behaviour of the N-S solutions.

For both questions asked above it is important to stress again that we are studying here the momentum conservation N-S equation only. The kinematic viscosity is considered constant, which implies that we deal with a Newtonian, incompressible and isothermal fluid, for example water.

Indeed for isothermal fluids the internal energy is approximately constant so that the momentum conservation is approximately equivalent to energy conservation and no additional equation is needed. When the fluid is not isothermal, it is necessary to add a specific energy conservation equation which adds a new variable, the temperature.

### III. Weather and Climate

If the N-S equations are difficult, weather analysis is much more difficult and climate analysis is infinitely more difficult, the word infinitely being meant literally.

Weather is studied basically with the N-S equations discussed above, including the energy conservation equation. An additional complexity is that weather analysis must deal with polyphasic flows (liquid, vapour, solid). Following the discussion of N-S, weather is deterministic, sensitively dependent on initial conditions, spatio-temporally chaotic and therefore unpredictable beyond a very short time scale (days). The quality of the prediction is extremely sensitive to initial conditions too: an anticyclonic system gives regular, slowly varying velocity fields which are rather stable, while depression fronts and strongly varying velocity fields sometimes cannot be predicted even within 24 hours.

The weather models, like CFD and for the same reasons, are useful in conditions where the velocity field is regular. These conditions can be described by the popular saying “*The best prediction for tomorrow’s weather is that it will be the same as today’s*.” They are however relatively useless for extreme weather and irregular (stormy) systems, where the numerical simulation cannot find the right solution.

A didactic example is given by the storm of the century over western Europe in December 1999. The ECMWF model saw stormy weather but failed to predict a storm of the century. Fifty different forecasts were run by perturbing the measured initial conditions. The resulting forecasts varied anywhere between “nothing happens” and “there will be a severe storm”. Approximately 10 forecasts among the 50 looked like the reality 40 hours later. From the discussion about N-S we know that the system is governed by **deterministic** equations, therefore we know that the “probability” of a storm of the century in the real world was 100 %.

This observation is very important for the notion of “probability” that we will examine now. Indeed, one might want to consider the 50 forecasts as a representative sample of what the weather could be, from which it is only one step to suppose that each individual forecast has a 2 % probability and thus to deduce that a storm of the century had about 10 × 2 = 20 % probability to happen.

Nothing could be more wrong.

First we know that the storm had to happen because the equations are deterministic. So the probability of the event was 100 %.

Second, the sample of 50 initial conditions is arbitrary both in its size (50) and in the choice of each specific perturbation. Let us recall that we have an infinite dimensional phase space, so that if we randomly chose 50 perturbations of the initial conditions, the probability that we would find the same forecasts that were obtained with the initial sample **would be 0!** Therefore the sample doesn’t represent in any way the whole space of possible final states.

Third, there is no reason that the final states (forecasts) corresponding to a given perturbation of the initial conditions all have **the same** probability (2 % in our example). On the contrary, the Lyapunov exponent that measures the divergence of orbits in the phase space depends on the initial conditions. It follows that a perturbation with a “large” Lyapunov exponent will occupy a “large” proportion of the final states. However, as the Lyapunov exponents are unknown, it is impossible to associate a “probability” to a given final state, and certainly not by naively dividing 100 by the number of arbitrarily selected perturbations.

The only fundamental difference between CFD and weather forecasts on one side and climate on the other side is the size of the grid used for the numerical scheme. Climate models can therefore be seen as a weather model with a much larger spatial grid – hundred(s) of kilometer(s).

##### Numerical climate models

In this last part we will examine what is a numerical scheme and what consequences we can derive for the predictability of climate models.

Basically every numerical scheme consists of choosing a time step and a spatial grid and replacing the partial derivatives of a variable U by finite differences, for instance ∂U/∂t ≈ [U(*t*+1) − U(*t*)]/Δt. The equation is then rewritten with indexes for the discrete time variable and the discrete space variable. For instance, the time discretisation of incompressible N-S yields:

(5) ∇ · **U**(*t*+1) = 0

(6) **U**(*t*+1) = **U**(*t*) + Δt [ ν ∇²**U**(*t*) − (**U**(*t*) · ∇) **U**(*t*) − ∇P(*t*+1) + **f** ]

It appears clearly that if we know the initial value *U*(0) and the boundary conditions of U, then we can easily compute *U*(*t*+1) from (6) and, by substitution in (5), solve for *P*(*t*+1). To fully discretise the equations we still have to express the spatial operators. There are many choices; for example, on a 2-D square grid of step h, where the index I is in the *x* direction and the index J is in the *y* direction, we can choose the 5-point formula

∇²U(I, J) ≈ [ U(I+1, J) + U(I−1, J) + U(I, J+1) + U(I, J−1) − 4 U(I, J) ] / h²
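As a sanity check of a 5-point spatial discretisation of this type, here is a minimal sketch applied to a function whose Laplacian is known exactly (grid size and test function are illustrative choices):

```python
import numpy as np

# 5-point discrete Laplacian on a 2-D square grid of step h:
# (U[I+1,J] + U[I-1,J] + U[I,J+1] + U[I,J-1] - 4 U[I,J]) / h^2
def laplacian(U, h):
    L = np.zeros_like(U)
    L[1:-1, 1:-1] = (U[2:, 1:-1] + U[:-2, 1:-1] +
                     U[1:-1, 2:] + U[1:-1, :-2] -
                     4.0 * U[1:-1, 1:-1]) / h**2
    return L

# Check on U = x^2 + y^2, whose exact Laplacian is 4 everywhere.
n = 21
h = 1.0 / (n - 1)
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
U = xx**2 + yy**2
print(laplacian(U, h)[5, 5])  # ≈ 4.0 (exact for a quadratic, up to rounding)
```

The stencil is exact for quadratics; for general fields the error is O(h²), which is exactly what the convergence questions below are about.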

For those interested in more details, a good synthesis and discussion of discretisation techniques for incompressible Navier-Stokes can be found in this paper Numerical Methods for Incompressible Viscous Flow.

By substituting the discretisation of the spatial variables in (5) and (6) we obtain the discretised equations (8) and (9), which form a linear system:

**M** · (**U**, **P**) = (**f**, 0)

where:

- **M** is a matrix with elements depending on **U**(t, I, J) and constants;
- (**U**, **P**) is a vector containing **U**(*t*+1, I, J) and P(*t*+1, I, J) at all points of the grid;
- (**f**, 0) is the vector of the external force at all points of the grid.

The whole N-S problem has now been reduced to the question of the invertibility of the matrix **M**. If **M** is invertible, then the system has a **unique solution** **U**(*t*+1, I, J) and P(*t*+1, I, J). Iterating this method then yields **U**(*t*+2, I, J) and P(*t*+2, I, J) from **U**(*t*+1, I, J) and P(*t*+1, I, J), so that a solution of (8) and (9) on [0,T] can be computed for all T.

In fact all problems start really here.

##### The 2 convergence problems

“*Do the solutions of (8) and (9) converge when the grid step and the time step decrease to 0 ?*”

This question cannot be tested for climate models because the minimum scale is a few hundred km and it is impossible to decrease the space and time steps to 0. So the answer is “*We don’t know*“.

“*Do the solutions of (8) and (9) converge to the solutions of N-S ?*”

Here the answer is a clear “*No*”, because a single value in a 100 km × 100 km grid cell (with or without subgrid parametrization) cannot represent in any way the continuous solution, which is anyway unknown.

A weaker form of this question is “*Do the solutions of (8) and (9) have some similarity to the solutions of N-S at spatial scales greater than the grid scale ?*”

Again this cannot be tested because the N-S solutions are unknown.

However, even low dimensional approximations of the large scale atmospheric circulation, for instance using the constructal theory and assuming only 3 partitions of a rotating planet, get the large scale circulation in Hadley, Ferrel and polar cells correctly. This is also what Large Eddy Simulation (LES) methods, or subgrid parametrisation methods (a synonym), try to do. The basic idea here is to simply average (filter) everything that is below the grid scale *L*. The N-S solution can then be written as

**S(x,t)** = **Sa(x,t)** + **Sna(x,t)**

where:

- **Sna** is the non-averaged part of the solution (scales > *L*);
- **Sa** is the averaged part of the solution (scales < *L*).

**Sna** could then be numerically computed by equations similar to (8) and (9) **IF** one knows the initial and boundary conditions for the scale L. But as these depend on the solution **Sa**, we are back to the problem that we cannot solve N-S for scales smaller than *L*.
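To make the decomposition concrete, here is a minimal sketch using a simple box filter on a 1-D signal (the filter choice and the synthetic signal are illustrative assumptions, not what any climate model actually does):

```python
import numpy as np

# Illustration of the decomposition S = Sa + Sna with a box filter of width L
# (a crude stand-in for the LES filter; names follow the text).
rng = np.random.default_rng(0)
n, L = 1000, 25
x = np.linspace(0, 4 * np.pi, n)
S = np.sin(x) + 0.3 * rng.standard_normal(n)  # large-scale wave + small-scale "turbulence"

kernel = np.ones(L) / L
Sna = np.convolve(S, kernel, mode="same")     # retained scales > L (non-averaged part)
Sa = S - Sna                                  # subgrid scales < L (averaged part)

# The decomposition is exact by construction...
print(np.max(np.abs(S - (Sa + Sna))))         # ~0 (machine precision)
# ...but Sa is not small: the subgrid part carries real variance that the
# coarse description cannot recover without a closure model.
print(np.var(Sa))
```

The point of the sketch: splitting the signal is trivial, but evolving Sna forward in time requires knowing how Sa feeds back on it, and that is precisely the closure problem.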

The problem here is identical to the RANS problem: the nonlinear advection term effectively couples the averaged (small) scales to the non-averaged (large) scales, so that it is impossible to consider **Sna** and **Sa** as independent. This leads to the necessity to model the interactions between all small scales up to the grid scale, where the interaction between **Sna** and **Sa** happens.

Unfortunately the subgrid scales contain phenomena (boundary layer dissipation, clouds, storm convection, phase changes, biological reactions, . . . ) that significantly impact the large scale dynamics and that are not well understood. For that reason there is a very large number of possible subgrid parametrizations, each of which leads to different large scale dynamics.

Finally, the answer to the initial question can only be “*Maybe sometimes*.”

##### The stability problem

There is a unique solution for (8) and (9) if the matrix **M** is invertible. However as its elements depend on initial conditions, there is no guarantee that **M** is invertible. Actually there is an infinite number of initial conditions for which **M** is not invertible and this leads to numerical instabilities.

The behaviour of **M** also depends on the size and topology of the grid so that purely numerical artifacts may appear when the grid is badly chosen. Some of these artifacts can be easily identified but those that are not identified (unknown unknowns) cannot be corrected.

##### The chaos problem

This problem is severe and unsolved.

We know that the N-S equations exhibit spatio-temporal chaotic behaviour. The consequence is that two initially very close orbits in the (infinite dimensional functional) phase space will diverge exponentially, and that property leads to unpredictability beyond some time T dependent on the system considered. Yet we have also seen that the solution of (8) and (9) is unique and predictable for all times when it exists. From this it follows that the solutions of (8) and (9) can never represent a solution of N-S, because they lack the defining feature, which is the chaotic behaviour.
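A trivial numerical sketch of this point: a discretised scheme is a deterministic map on the machine’s finite states, so identical runs agree bit for bit, however chaotic the underlying continuous system is (here a crude Euler discretisation of the Lorenz system (2), purely illustrative):

```python
import numpy as np

# Two runs of the SAME discrete scheme from the SAME stored initial condition
# give bit-identical results. Unpredictability re-enters only through the
# initial condition, which the finite machine cannot resolve exactly.
def step(v, dt=0.01, S=10.0, R=28.0, B=8.0 / 3.0):
    x, y, z = v
    return v + dt * np.array([S * (y - x), x * (R - z) - y, x * y - B * z])

def run(v0, n=1000):
    v = np.array(v0, dtype=float)
    for _ in range(n):
        v = step(v)
    return v

r1 = run([1.0, 1.0, 1.0])
r2 = run([1.0, 1.0, 1.0])
print(np.array_equal(r1, r2))  # True: the numerical "solution" is fully reproducible
```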

As we have seen in the discussion about LES, the subgrid parametrization may (but need not) introduce random behaviour (especially for stochastic models) which could give an illusion of chaotic behaviour. However, this is only an illusion, because randomness and chaotic behaviour can be readily distinguished and each leads to different solutions.

A word about the sempiternal but artificial distinction between initial and boundary conditions. It is simple: no PDE and no numerical scheme like (8) and (9) can be solved if initial **and** boundary conditions are not prescribed. It is irrelevant whether the system is chaotic or not; the heat equation, N-S, weather and climate all depend on initial **and** boundary conditions. In addition, for chaotic systems it is also irrelevant whether the instantaneous values, their discretizations or their averages are used because, as we have seen, if the defining continuous equations are chaotic then their discretizations and averages are also chaotic, and they are generally different from the solution of the continuous equations.

Sometimes the fundamental unpredictability of chaotic systems is challenged by examples like seasonal averages, for example: “We can predict that a summer temperature average in [insert place] will be greater than a winter temperature average.” This pseudo-argument is based on a deep misunderstanding of spatio-temporal chaos. Of course chaos should not be equated with exactly zero predictability. There are sometimes phenomena and parameter ranges which can be predicted, at least approximately, even in a fully chaotic system. These predictions are almost all based on the existence and invariance of a finite dimensional attractor. Indeed, in this case we can predict that the system will **always** stay inside the attractor. If we know its number of dimensions and its topology, we can analyse the properties of its subsets and predict what will happen when the system visits a particular subset.

For the sake of clarity, imagine an ergodic chaotic system whose attractor looks like O=O. The system wanders quasi-periodically from left to right and back again, for example because it is periodically driven. While it is impossible to predict where exactly the system will go and how long it will stay there, we know that it will spend broadly half of its time in the left part and half of its time in the right part of the attractor.

As we know the topology of the attractor, we can compute **Ts**, the average of a degree of freedom **T** over the volume of the phase space which represents the right-hand “O”, and compare it to **Tw**, the phase space average of **T** over the left-hand “O”. If we find for example that **Ts** > **Tw**, then the ergodicity hypothesis allows us to conclude that, because the system spends approximately half of its time in the left part (**W**) and half of its time in the right part (**S**), the time average of **T** during the period **S** will be greater than the time average during the period **W**.
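A rough numerical check of the “O=O” picture can be made with the Lorenz attractor, whose two lobes correspond to X < 0 and X > 0: over a long run the system spends roughly half its time in each lobe, although the individual visits are unpredictable (sketch only; classic parameters S = 10, R = 28, B = 8/3 assumed):

```python
import numpy as np

def rhs(v, S=10.0, R=28.0, B=8.0 / 3.0):
    x, y, z = v
    return np.array([S * (y - x), x * (R - z) - y, x * y - B * z])

def rk4(v, dt):
    k1 = rhs(v); k2 = rhs(v + 0.5 * dt * k1)
    k3 = rhs(v + 0.5 * dt * k2); k4 = rhs(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

v = np.array([1.0, 1.0, 1.0])
dt, n_transient, n_sample = 0.01, 5000, 100000
for _ in range(n_transient):       # discard the transient before sampling
    v = rk4(v, dt)
right = 0
for _ in range(n_sample):
    v = rk4(v, dt)
    right += v[0] > 0.0            # count time spent in the right-hand lobe
print(right / n_sample)            # close to 0.5, but the itinerary itself is chaotic
```

The lobe-residence fraction is predictable; the sequence and duration of individual visits are not, which is exactly the distinction the text draws.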

Of course this conclusion stays true only as long as the system stays ergodic and the attractor invariant. Moreover, this conclusion may only be valid for some ranges of some parameter (let us call it latitude, for example). We must stress that ergodicity is **the** necessary condition to draw this conclusion for all variables and all parts of the attractor. The ergodicity of the climate system has **not** been proven, and this proof is a challenge that still waits to be met.

Over large time scales the attractor will change and even this simple conclusion will no longer be valid. Also, trivially, the ability to compute phase space averages for particular attractor topologies changes nothing about the fact that the system is still chaotic and will react to perturbations in an unpredictable way over larger time scales.

##### The probability problem

An argument often heard is that climate models do not predict the future states but **the probability** of future states.

This statement would be true for equation (4) (Schrödinger) because the unknown function Ψ explicitly represents the probability of future states. However, no such equation exists for fluid dynamics, and by extension for climate. So, as it is impossible to compute the probabilities of future climate states explicitly, the claim cannot be understood literally.

What the climate models do instead is to compute a **SINGLE** future state. The argument also makes no sense empirically, because it is not possible to observe a large number N of climate realizations and to define statistics on the set of these realizations.

The only way to give sense to this argument is to consider that multiple runs of a climate model are equivalent to **potential** multiple realizations of climate and that a probability density may be defined over the finite set of these runs even if the results can never be observed in reality.

Can this hypothesis be true? We have already briefly discussed this hypothesis for the “storm of the century” in 1999, and the answer was no. The same answer holds for the climate.

A necessary but not sufficient condition is that all computer runs correctly solve N-S with arbitrarily small error for all times and all initial conditions. This condition is not fulfilled, as we have discussed above, and that alone is enough to reject the hypothesis.

But there are many additional arguments for rejecting the hypothesis. The most important is related to the finite sample of initial conditions. Indeed, if we define a finite number N of arbitrary perturbations Pi applied to a fixed initial condition, we obtain a set {ICi} of N initial conditions. But as the system is chaotic, we know that within any neighbourhood of a given initial condition ICi, the orbits of any two points will diverge exponentially. Thus we may choose a perturbation Ri as close to Pi as we wish and obtain a final state FS(Ri) as far from FS(Pi) as we wish, bounded at most by the size of the attractor if one exists.
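The exponential divergence of nearby orbits invoked here is easy to exhibit numerically. The following sketch (a toy illustration with the chaotic logistic map, not the Navier-Stokes system) starts two orbits a distance 10⁻¹² apart and watches the separation grow to the size of the attractor:

```python
# Two orbits of the chaotic logistic map x -> 4x(1-x), started 1e-12 apart.
# The stretching averages a factor of 2 per step (Lyapunov exponent ln 2),
# so the separation reaches the size of the attractor [0, 1] within a few
# dozen iterations, after which the two "final states" are unrelated.

def step(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12
seps = []
for n in range(60):
    x, y = step(x), step(y)
    seps.append(abs(x - y))

print(seps[0], max(seps))  # tiny at first, then of the order of the attractor size
```

However small the initial perturbation is made, the final separation saturates at the attractor scale, which is the point of the FS(Ri) versus FS(Pi) argument.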

Finally, from the uniqueness of solutions it follows that the intersection of the sets of final states {FS(Ri)} and {FS(Pi)} is empty, so that if we define a probability on both sets, for example P(FSi) = 1/N, then we have P(FS(Pi)) = 1/N over {FS(Pi)} **and** P(FS(Pi)) = 0 over {FS(Ri)}, which is a contradiction. It follows that no consistent probability can be defined over a finite set of initial conditions (or scenarios, which is the same thing) and therefore climate models cannot predict any probabilities.

There is an even more bizarre argument related to the issue of probabilities and averages. Real Climate says: “*Multi-model Ensemble – a set of simulations from multiple models. Surprisingly, an average over these simulations gives a better match to climatological observations than any single model*.”

It is indeed surprising because it is most certainly wrong, or misleading, or both.

Now, as there are dozens of papers dealing with Multi-model Ensembles, it seems that there are people who take the above Real Climate statement seriously. Yet, as anyone can see, it already fails at the level of basic logic.

The Real Climate statement is equivalent to the implication: “*If model A says 1 = 0 AND model B says 5 = 0, then the model (A+B)/2 saying 3 = 0 gives a better answer*”.

This implication is false and therefore useless. The observation is simply explained by the trivial statistical fact that the variance of the average of the Xi is smaller than the variance of the individual Xi. This fact, however, doesn’t allow any useful conclusion about the validity of solutions obtained by averaging different models which are known to give wrong results.
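This variance argument can be made concrete with a toy calculation (an illustration of the statistics only; the “models” below are hypothetical biased estimators, not climate models): averaging N of them shrinks the scatter by a factor of N, yet the common bias, i.e. the wrongness, survives untouched.

```python
import random

# Toy illustration: N "models" each return a biased, noisy estimate of a
# true value. Averaging them shrinks the variance by a factor of N, but
# the common bias survives, so the smoother-looking average is no more right.

random.seed(1)
TRUE, BIAS, SIGMA, N, TRIALS = 10.0, 3.0, 2.0, 16, 20_000

def model():
    """One hypothetical model: truth plus a shared bias plus noise."""
    return TRUE + BIAS + random.gauss(0.0, SIGMA)

singles = [model() for _ in range(TRIALS)]
ensembles = [sum(model() for _ in range(N)) / N for _ in range(TRIALS)]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

print(var(singles), var(ensembles))    # variance drops by a factor ~N
print(mean(singles), mean(ensembles))  # the bias of +3 survives in both
```

The ensemble mean “matches” a smooth target better only because its scatter is smaller; its systematic error is exactly that of the individual models.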

##### ARE NUMERICAL CLIMATE MODELS USEFUL?

I do not think so.

Not because climate models are wrong, though they are indeed wrong, but because they drain financial and human resources into the least efficient and most cost-intensive research direction. When one invents concepts like “ensemble averages” which have no rational foundations, and when there are more papers studying why model A doesn’t behave like model B than papers studying the climate itself, one must suspect that something has gone wrong.

Also, if after 30+ years of huge investments we are still unable to robustly describe and predict the defining features of the system (pressures, velocities, precipitation, temperatures, cloudiness) at the only scales of interest, which are regional, then it is reasonable to suppose that this research direction is not suited to explaining and predicting regional features. However, if this post only criticized the shortcomings of numerical climate models, which is quite easy, it would miss the mark.

There are other research directions that are unfortunately understudied. I believe that the weather, and therefore the climate, has a global finite-dimensional attractor. As the boundary conditions of the system are given by the shape and location of the continents and of the ocean floor on one side, and by the orbital parameters as well as the energy output of the Sun on the other side, this attractor can be considered invariant over the time scales of interest – e.g. hundreds or thousands of years.

Considering that almost all the energy of the system is in the oceans and in the water cycle (ice, water, water vapour), the characteristic spatio-temporal functions defining the attractor would mostly describe oceanic dynamics. These characteristic functions would appear as spatio-temporal quasi-periodic oceanic oscillations and currents. Even if the attractor had millions of dimensions, by analogy with a Taylor expansion, only a small number of them could be enough to explain the system’s behaviour at the scales of interest. For instance, the observations suggest that ENSO is the leading-order oscillation, with other large-scale features like the Gulf Stream and the Antarctic Circumpolar Current following. Of course, dominating features like ENSO are certainly not a single oscillation but rather a composite of a number of smaller and shorter oscillations, but these can be looked for.

Techniques for reconstructing the attractor properties from lower-dimensional projections exist for temporal chaos. They could be extended to spatio-temporal chaos. I am convinced that the direction of research aiming to understand oceanic oscillations and their interactions as they are observed could lead to a real breakthrough in our understanding of climate. My personal hope is that, realizing the lack of results, sooner or later resources will be diverted from numerical models and supercomputers towards theoretical work on spatio-temporal chaotic attractors and their applications, which would identify the dominating oceanic modes.
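The reconstruction techniques alluded to here are the delay-coordinate (Takens) embeddings of temporal chaos: a single observed scalar is unfolded into state-space vectors whose geometry reproduces the attractor. As a minimal sketch of the idea (my illustration with a trivially simple signal, not a climate series), a pure oscillation embedded with a quarter-period delay recovers its limit cycle exactly:

```python
import math

# Delay-coordinate (Takens) embedding sketch: from a single scalar series
# s(n) we build vectors (s(n), s(n - tau)). For a pure sine wave, a
# quarter-period delay gives (sin, -cos) pairs, so the reconstructed
# attractor is the unit circle: the limit cycle read off one observable.

N, period = 1000, 100
series = [math.sin(2.0 * math.pi * n / period) for n in range(N)]

tau = period // 4  # quarter-period delay
embedded = [(series[n], series[n - tau]) for n in range(tau, N)]

# Every embedded point satisfies x^2 + y^2 = 1 (up to rounding)
radii = [x * x + y * y for x, y in embedded]
print(min(radii), max(radii))
```

For a genuinely chaotic series the same construction, with a suitable delay and a higher embedding dimension, reconstructs a set with the dimension and Lyapunov spectrum of the underlying attractor, which is what makes the oceanic-oscillation programme sketched above conceivable.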

**Moderation note:** As with all guest posts, please keep your comments civil and relevant. Because of the technical nature of this post, confine your comments to technical comments or questions requesting clarification.


“A sensitivity to initial conditions always implies non-linearity”

Not true. A classic counter-example is

y”=y, y[0]=1, y'[0]=-1.

The solution is y=e^(-t), which diminishes to 0. But if y'[0]=-1+2ε, the solution is εe^t+(1-ε)e^(-t), and pretty soon the εe^t dominates and runs to ∞, no matter how small ε.
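A small script makes Nick’s point concrete (my sketch of his closed-form solution, using his equation and initial conditions): for ε = 0 the solution decays to 0, while any nonzero ε, however tiny, eventually dominates.

```python
import math

# Closed-form solution of y'' = y with y(0) = 1, y'(0) = -1 + 2*eps:
#   y(t) = eps*exp(t) + (1 - eps)*exp(-t)
# For eps = 0 the solution decays to 0; for any eps != 0 the exp(t)
# term eventually dominates, however small eps is.

def y(t, eps):
    return eps * math.exp(t) + (1.0 - eps) * math.exp(-t)

for eps in (0.0, 1e-10):
    print(eps, y(10, eps), y(30, eps))
```

By t = 30 the perturbed solution (ε = 10⁻¹⁰) has already grown past 10³ while the unperturbed one has decayed below 10⁻¹², illustrating sensitivity to the initial slope in a perfectly linear equation.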

Nick,

Your equation doesn’t seem terribly linear to me.

Can you post plots of the results for values of epsilon which result in straight line (linear) results? Is my definition of “linear” the same as yours?

Cheers.

Mike,

I think you are out of your depth here.

But what’s all the fuss about anyway.

We all know that climate is linear. All you need to do to analyse a climate time series is do a little running mean in Excel to “smooth” it then find the linear ‘trend’.

This is then readily attributed to rising levels of atmospheric CO2 since all the rest is just ‘internal oscillations’ that average out to zero.

This methodology, which ensures we get the right result, avoids all silly technical arguments. It’s worked remarkably well for the last 30 years and has led to the historic Paris Agreement.

Why would we want to change that now?

Probably not. In differential equations (e.g., Nick’s post), we often talk about “linearity” with respect to the combinations of functions, rather than about the functions themselves. Generally, it’s not a big deal whether the functions themselves are linear; they can still be pretty easy to solve so long as their combinations are linear. It’s non-linear combinations that are a pain in the ass.

Nick, you know or should know that we are talking about physical solutions, which means that we look only for bounded functions.

In your example the solution is y = A·exp(t) + B·exp(−t), with A and B determined by initial conditions.

The only physical solution for unbounded t is y = B·exp(−t) with A = 0, and there is no exponential divergence of nearby orbits. Among other things, this means that you only need one initial condition for physically relevant solutions.

It is just irrelevant and misleading to give examples of functions which are unbounded as t goes to infinity – it has nothing to do with sensitivity to initial conditions in chaotic systems, which have bounded solutions.

Additionally, you know or should know that the shortcut “sensitivity to initial conditions” is used to say that the solution cannot be found in closed form and that nearby orbits diverge exponentially.

In your example a closed form exists, which implies total predictability for all initial conditions (regardless of whether they are physical or unphysical).

Not surprising, because the equation is linear.

“Among others this means that you only need 1 initial condition for physically relevant solutions.”

You may need only that, but may not have it. The fact is that if you specify initial y and y’, then a small discrepancy gives a very different outcome. You may choose to castigate it as unphysical, but whatever, it is a very different result from a small initial change. This is non-trivial, because non-linear equations are solved numerically by linearising locally, and it is the existence of such growing solutions from small perturbations that determines instability. You don’t have advance information that lets you determine just one ratio that gives a “physical” result.

Nick’s comment is spot on. His example is precise. The philosophical concept that “reality” will lead the model to the right answer is dead wrong. However, @Nick, it is because of this very concept that I take huge umbrage at any hope of the models having predictive skill, as you have so clearly shown with a simple second-order linear DE! I can’t imagine the total pitfalls to be avoided inside, say, the CMIP5 models!

Any unstable linearized system is sensitive to the minutest initial perturbation, say its phase in a periodic system: the sine wave’s period is fixed by the dynamics, but not where it is located.

Herringbone clouds, for instance.

Yes, correct. What makes the nonlinear equations particularly intractable is the added FOLDING. Periods of approximate linear expansion followed by periods of contraction.

Fold and stretch, fold and stretch.

Very interesting. Thank you.

I wonder how reliable are computer projections of temperature change out 300 years? And how reliable are predictions of ‘climate damages’ attributable to human caused GHG emissions at any time in the future?

This is worth emphasizing:

After 30 years of research, mostly focused on temperature changes and projections, we have virtually no empirical evidence to justify the assumptions, assertions and innuendo that GHG emissions will do more harm than good.

I would like to play with a 20 by 20 km atmospheric model coupled to an ocean model. The combined model would have, say, 10 million cells, and run on variable time steps. This could be used to figure out how the system works by tuning it with an extremely detailed set of real-life data taken in 40 by 40 km sectors (the data is taken over a larger volume to understand how to set the boundary parameters). This would allow me to build all sorts of gadgets and sensors to take the data and then torture the model developers to create something that’s fun to look at, with gobs of 3D visualization products. And eventually we could write lots of papers. I know climatologists don’t use this approach, but if they ever manage to do it they’ll get hooked and have a lot more fun.

It’s pretty simple.

You don’t need a GCM to predict that temperature will rise about 3C. Give or take.

And that the sea level rise that results will cause billions in damage.

The EPA studied it as far back as 1990.

U just can’t read without being given a reading list.

Did u graduate and learn self study?

Mr. Mosher’s last few posts might lead one to conclude some form of cognitive impairment on his part. Maybe it’s just excessive Wandering in the Weeds?

Steven Mosher,

You say it’s pretty simple. If so, why can’t you state the damage function – i.e. the damage cost per degree of warming or per change in CO2-eq concentration?

Why can’t you state the net-damage for the world over a given time period, e.g. to 2100, in constant 2010 US$?

Why can’t you show the empirical data to calibrate the damage function?

So what? How many billions? Over what period? what are all the benefits and damages so you calculate net damages? What are the net damages to 2100 as a proportion of cumulative global GDP over the period?

“Epa studied it as far back as 1990.”

So what? What’s the relevance of that. Nordhaus started in the 1980’s. That doesn’t mean the answer is valid or supported by defensible empirical evidence.

Furthermore, IPCC AR5, says the damage functions used in the IAMs are next to useless .. read it for yourself.

“Did u graduate and learn self study?”

Steven Mosher:

“You don’t need a GCM to predict that temperature will rise about 3C.”

Except for the very fortunate fact that it won’t, of course. Not even close.

Or rather, not for many millennia yet.

In maybe 400 years.

The sign of the viscous term is wrong. Wiki:

Oops, wrong link. Wiki:

The sign of a term with an external constant can be changed by changing the sign of the constant. Yours is a purely notational objection.

Nobody speaks of negative viscosity.

No, but the equation or its solubility doesn’t care if you use viscosity, or a constant that would be interpretable as negative viscosity.

Not that I would ever use Navier Stokes for anything.

Tomas,

Very nice post. Thank you.

You wrote –

“I believe that the weather and therefore the climate have a global finite dimensional attractor.”

I believe that the attractor is not only strange, but incapable of useful definition in mathematical terms. Please excuse me if I misunderstood your thrust.

It would be nice if I was wrong. We would all like to see into the future. This is generally not usefully possible, outside specified ranges, and under specified conditions. As you point out CFD is useful while things are fine. Feynman pointed out that the flow of dry water is a completely different beast from the flow of wet water.

Thanks once again.

Cheers.

Yes. Feynman Lectures on Physics, Vol. 2, chapters 40 (dry water) and 41 (wet water). Dry water he ‘solved’, then pointed out the solutions were observationally incorrect. ‘We must now mention a serious difficulty… Clearly we must go to a theory of wet water to get to a complete understanding of the behaviour of a fluid.’

Wet water, he could not. The last three paragraphs of 41-12 are also known as his ‘sermon on equations’. To quote the last three sentences of the last paragraph, ‘Today we cannot see whether Schroedinger’s equation contains frogs, musical composers, or morality–or whether it does not. We cannot say whether something beyond it like God is needed, or not. And so we can all hold strong opinions either way.’

Apropos to AGW.

Mike, chaos per se is a branch of math, pioneered by Poincare around 1900. Certain nonlinear equations exhibit it, including very simple ones like the widely used logistic equation. See https://en.wikipedia.org/wiki/Logistic_map.

The scientific question (as with all math) is where does this math fit a real world phenomenon? Lorenz found it in some weather equations in the early 1960s but it is now part of most sciences, generally termed “nonlinear dynamics”.

See this text, for example: https://www.amazon.com/Nonlinear-Dynamics-Chaos-Applications-Nonlinearity/dp/0738204536.

Tomas is an expert and a pioneer. Most research is on temporal chaos but he is developing spatio-temporal chaos theory, a hairy beast indeed.
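As a concrete taste of the temporal-chaos mathematics mentioned above, here is a short sketch (my illustration, not from the post) computing the Lyapunov exponent of the logistic map: positive (the exact value is ln 2) in the fully chaotic regime r = 4, negative on the stable period-2 cycle at r = 3.2.

```python
import math

# Numerical Lyapunov exponent of the logistic map x -> r*x*(1-x),
# estimated as the orbit average of ln|f'(x)| with f'(x) = r - 2*r*x.
# At r = 4 the exact value is ln 2 ~ 0.693 (positive: chaos); at r = 3.2
# the orbit settles on a stable period-2 cycle and the exponent is negative.

def lyapunov(r, x0=0.3, n=200_000, burn=1_000):
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += math.log(abs(r - 2.0 * r * x))
    return total / n

print(lyapunov(4.0))  # close to ln 2 ~ 0.693
print(lyapunov(3.2))  # negative: nearby orbits converge to the cycle
```

The sign of this single number is what separates “sensitive to initial conditions” from “predictable” in the temporal-chaos literature cited above.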

“Tomas is an expert and a pioneer. Most research is on temporal chaos…”

Details?

Yup CFD is not so perfect.

If the CFD code told you the wing had no lift, would you fly in the airplane?

can’t trust models after all

if 50 of the forecasts predicted a horrible storm, would you plan a picnic?

can’t trust models, I know.

if the spaghetti of hurricane track predictions showed your house was in the path, would you close your storm shutters? maybe buy some spare food?

can’t trust the models, I know

When the best science you have, limited as it is, tells you that dumping CO2 into the atmosphere may not be the smartest move, do you ignore that information.. go ahead, fly in that plane, have that picnic, open the windows.. can’t trust “Teh Modelz” unless they are perfect and precise

Mosher said:

What science is that, Mosher? As we’ve discussed and you have dodged and weaved but not refuted, and as IPCC AR5 says, there is a lack of evidence to support your belief that GHG emissions will do more harm than good. So your belief is based on innuendo and unsupported assumptions.

For others who may not have followed this discussion, the argument is about the lack of empirical evidence to support the damage functions used in the IAMs to estimate future climate damages. As AR5 WG3 Chapter 3, http://www.ipcc.ch/pdf/assessment-report/ar5/wg3/ipcc_wg3_ar5_chapter3.pdf says:

“Damage functions in existing Integrated Assessment Models (IAMs) are of low reliability (high confidence).” [3.9, 3.12]

“Our general conclusion is that the reliability of damage functions in current IAMs is low.” [p247]

“As discussed in Section 3.9, the aggregate damage functions used in many IAMs are generated from a remarkable paucity of data and are thus of low reliability.”

The belief that GHG emissions will be damaging is driven by advocates using innuendo and unsupported assumptions.

Nobody with half a brain trusts the damage functions in IAMs quantitatively, but in that case uncertainty is not your friend, because worse is favored.

“Nobody with half a brain trusts the damage functions in IAMs quantitatively, but in that case uncertainty is not your friend because worse is favored”

Hmmm…

“The future is not predictable, but it will be worse than expected.”

The climate may not be predictable, but Eli is.

Eli Rabbett,

Are you saying that nobody with half a brain should trust an IAM, leading to the fairly obvious conclusion that it requires someone with far less than half a brain to trust an IAM?

Does this stricture also apply to climate models, or can they be believed by people with no brains at all?

The world wonders!

Cheers.

Talking about damage functions, I believe Katharine Hayhoe coauthored a paper which reports results of a climate/glacier flow model that shows higher sea level rise in some cases where emissions are reduced. This is caused by changes in precipitation over Antarctica.

http://blogs.plos.org/models/seeing-antarcticas-future-more-clearly/

Peter, again, where’s the damage function for future terrorism strikes in the US? We’re taking *that* risk very seriously.

The Cheney Doctrine, as originally stated: “If there’s a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It’s not about our analysis … It’s about our response.”

https://en.wikipedia.org/wiki/The_One_Percent_Doctrine

Mosher, you are disingenuous, and probably know it.

CFD works for aircraft wing lift because parameters are tuned in wind tunnels. It sort of works for tropical cyclones thanks to hurricane/cyclone hunters since WW2. My father was the command pilot of the 409th typhoon chasers off Guam 1948–1951, at the beginning of that scientific effort, after a double master’s in meteorology and weather radar courtesy of the USAF at UCLA in 1946.

Now identify an equivalent observational confirmation for AGW. There isn’t one. ~1920–1945 was not GHE per IPCC AR4, yet it is statistically indistinguishable from ~1975–2000. See AR4 SPM fig. 4.2 lest you have any factual objections. You cannot solve the attribution problem. Anthropogenic assertions/beliefs are not scientific solutions. Period.

“CFD works for aircraft wing lift because parameters are tuned in wind tunnels.”

The only parameters, apart from physical properties, are those of the turbulence models. And they are not specific to wings and needn’t be obtained in wind tunnels. In fact they aren’t really critical here – you can get a fair way with potential flow (Joukowski, laminar).

Be careful, Nick. In any RANS wing simulation there are hundreds of parameters, including those controlling grid generation. Potential flow by itself ignores the boundary layer and will dramatically overestimate lift and drag. Potential flow with a boundary layer code, which is perhaps what you mean, can be very accurate.

Also, at cruise conditions, wings generate attached flows. To model the entire flight envelope, strongly separated flows must be modeled, and that is still not done with CFD. Flight and wind tunnel data are used for these parts of the design. People are exploring separated flows at the present time and it’s going to be a long road, I fear.

“Mosher, you are disingenuous, and probably know it.”

Sorry Rud. After working at Northrop Aircraft I went to work for a small research company started by some Northrop execs who were upset at the way we were walking away from all of our F-5 customers. We specialized in building water tunnels for studying high-AOA flight controls, like forebody vortex control.

Spent time working on fixes to the F-16 (deep stall, the electronic lawn dart), and most importantly F-5 enhancements – let me tell you about the East Timor deal.

http://articles.latimes.com/1993-09-13/business/fi-34781_1_aerospace-firm

Pretty much killed the company.

Most of the work I did was in Displays for air combat agility management systems and in building flight trainers using COTS–

( http://connection.ebscohost.com/c/articles/9312091316/usaf-simulator-order-leads-training-trend)

heres an example from our work

https://www.google.com/patents/US5641136

http://ntrs.nasa.gov/search.jsp?R=19950007836

Later the company would split into two groups. One group did combat simulation (for advanced aircraft – super maneuverability) and air combat training (it did the first COTS trainer using the SGI Reality Engine

https://www.flightglobal.com/FlightPDFArchive/1994/1994%20-%202928.PDF )

The other group carried on with the core research into high AOA

here is a picture

example work

https://www.sbir.gov/sbirsearch/detail/153168

So.. Nice to hear the stories about your dad.

Quaint, actually. I spent a bunch of years in advanced design. Test pilots, fighter pilots, Wild Weasels, Thud drivers, and my office mate was an original Thunderbird, and every weekend we had a golf match

The Kid, the Thunderbird, and the Blue Angel (Pete Knight)

Nice stories about your dad.

bet they don’t compare.

Guys who fly, test pilots, fighter pilots, well, neither of us can hold a candle to them. We can only say that we “knew one” or “talked to one”

or played golf with one. They do have the right stuff. I am humbled to have known a few.

I know this: if the model said the plane would crash… they believed the model.

Thanks for the detail David.

Mosh’

Since most forecasts are now model projections done by machines rather than by experienced meteorologists with local knowledge and decades of experience, I only give them marginal relevance over the 24h to come.

First I’d look out of the door then watch the last 24h satellite animations. Then I’d plan whether I do the picnic.

If I want to plan a picnic in 5 days, I will wait four days before doing so. I will not even bother looking at the forecasts 5 days hence, since they are not worth squat.

Saying “it’s the best science we’ve got” is not a reason for using it, if it’s crap, and that is untrue anyway.

The best science we’ve got says that you can’t predict/project what such a system is going to do that far ahead. So the best science we’ve got tells us not to be so stupid as to pretend we can.

Those with undeclared political motives try to pretend we can and try to pretend they are making scientific case when they are not.

–

“Saying ‘it’s the best science we’ve got’ is not a reason for using it, if it’s crap”

No way to guess why this is apparently so difficult to grasp, for so many.

‘Saying “it’s the best science we’ve got” is not a reason for using it, if it’s crap’

The alternative is worse science.

“it’s the best science we’ve got” is being used to justify Third World economics and notions of social justice.

Too bad the F-20 never caught on.

Another alternative is no science Nick.

There are other fields of science to spend the money on. Or simply other things besides science.

“Too bad the F-20 never caught on.”

Old wounds.

I worked in Advanced Design, Threat Analysis. It’s kind of confusing for folks (like Poptech) to understand what that involves. When I came into Northrop I had the perfect boss: Lt. Col., Aggressor Squadron, Nellis.

He sent me around to every department in the program to get an education (Sponge was my nickname). To do threat analysis you had to understand everything about planes as weapon systems, and everything about performance metrics. So basically we were tasked with selling the F-20 against the F-16 to countries like Taiwan.

Step one was understanding the threat Taiwan faced. That meant studying PRC aircraft.

Step two: Setting up war scenarios. complete with political back story

Step three: running simulations of combat between China and Taiwan

Step four: going in-country with your combat simulations, sitting down with Ops Research in Taiwan (in Taichung) and training them in using the models you created for war gaming. A typical “sales” visit would be two weeks. So we would pitch the aircraft, I would explain the modelling I had done, and then run a class for two weeks and leave behind an open model the guys could use to put in their own inputs –

Losing the sales battle after spending a BILLION dollars (circa 1985) of our own damn money was hard. Losing co-workers to accidents at marketing air shows, well, what can I say… pretty fricking grim.

“When the best science you have, limited as it is, tells you that dumping CO2 into the atmosphere may not be the smartest move, do you ignore that information” – Agreed! Ignoring that info is not the smartest move. But that begs the question of how much effort, how much investment, how much sacrifice, and how much hand-wringing is appropriate in the process of paying attention to that information?

“But that begs the question”

Huh?

http://www.nizkor.org/features/fallacies/begging-the-question.html

Steven Mosher,

You assumed, (wrongly it might seem), that Donald Rapp was using a logical fallacy to arrive at a conclusion.

Of course I may be wrong, but I read his statement “But that begs the question . . .” as a way of pointing out that your bald unfalsifiable assertion would lead a reasonable person to make further enquiries, even assuming that your implication was correct, and that CO2 is harmful.

The one word dismissive “Huh?” is typical of GHE fanatics. Deny, divert, confuse. Ascribe an irrelevant construction to any reasonable comments in an effort to make a valid question relating to the GHE appear foolish.

I apologise to Donald Rapp if I misunderstood his statement. It seemed clear enough to me.

Cheers.

I think you mean “invites the question”. Begging the question is a colloquial term for circular reasoning.

Steven Mosher,

If your CFD model told me that the wings on the aircraft I was flying in had no lift, I would definitely say your model can’t be trusted.

If 50 forecasts predict a terrible storm, and it doesn’t happen, I would say the model couldn’t be trusted.

If a spaghetti of hurricane track predictions all show different tracks, all except one are wrong. If there’s a hurricane in the vicinity, and it has affected your area before, there’s nothing to say it won’t happen again, model outputs notwithstanding.

When the best GHE science can’t even provide a falsifiable hypothesis to allow its bizarre and outrageous claims to be tested, would you believe a model output based on unsupported assertions?

I know you would. A rational person might not.

Cheers.

Mike Flynn ==> Actually, they could all be wrong.

The point of the post, and the point of weather and hurricane forecasting, is that even when a system is chaotic (temporally), near-present solutions will be close to one another – and thus more reliable – having very little divergence.

For Hurricane Matthew — even out just 24 hours — a direct hit on Cape Canaveral/Cocoa Beach seemed near certain. At the last minute (just hours before landfall was expected) the hurricane jogged east 20 or 30 miles and spared that area.

Steven’s examples are always based on short-term model prediction, which everyone agrees can be acceptably accurate if the situation is not critical.

My life often has depended on weather prediction — sailing the northern Caribbean short-handed — the 24-36 hour predictions have met the standard of “acceptable” — which means I have sailed out in predicted fine weather only to meet an unpredicted gale only a few times.

“Steven’s examples are always based on short-term model prediction, which everyone agrees can be acceptably accurate if the situation is not critical.”

Same goes for long term models

We all agree that economic forecast models are not very precise. Projecting out a quarter or a year is dicey.

However, as a fiscal conservative, when a model tells me that Social Security will go bankrupt in a couple of decades, when a “stoopid Modull” tells me that Medicare will go bust,

Do I ignore those models merely because they lack precision or because they are wrong sometimes?

NOPE.

The bottom line is we reason based on all the evidence we have even the evidence from HORRIBLE MODELS.

My car has a horribly wrong distance-to-empty model. It always tells me I have less gas than I do.

BUT when that model tells me that I will be out of gas in 20 miles.. I pull over

What? I follow the advice of a model I know to be incorrect?

Yup.

In fact, every day that you walk around and trust your eyes, you are using a flawed model.

Steven ==> The analogy is closer to the gas gauge on my old 1961 Plymouth station wagon – at any given time, my gas gauge would tell me any one of the following: half-full, quarter-full, near empty, three-quarters full, nearly full – all without any regard whatever to the actual amount of gasoline in the tank. It predicted the fuel tank contents much like the 30 Earths of the CESM-LE projections of “Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012” – all over the place.

The only way I could judge fuel reserve was by a scrap of paper stuck in the ashtray (yes, they used to have them in cars) with the odometer mileage of the last fill-up.

I did NOT trust my fuel gauge model.

Kip,

I doubt results of the models discussed here are as broad as the measurements of your crazy gas gauge, going from we have no problem (full gas tank) to we are totally out of luck (empty gas tank), and back again.

Steven Mosher,

Feel free to believe all the economic models you wish. I wish you luck.

You are perfectly free to believe in anything you like – luminiferous ether, GHE, the honesty of politicians, climate models, whatever.

Trying to attract others to share your beliefs may be difficult.

CO2 heats nothing. There is no GHE, there is not even a falsifiable hypothesis in support of such nonsense.

The IPCC doesn’t believe that future climate states are predictable. Fight them all you like. I wish you luck with that, too.

Cheers.

“I doubt results of the models discussed here are as broad as the measurements of your crazy gas gauge, going from we have no problem (full gas tank) to we are totally out of luck (empty gas tank), and back again.”

1.5 to 4.5 would be more like either a 1/4 tank or full, so assume you have just over half a tank until the next update.

Everything could be just fine. Everything could go to hell. Give me your firstborn to ensure everything will be just fine.

max1ok ==> Look at my post here on Lorenz Validated — the image from the CESM-LE tells the story — 30 identical runs of the identical climate model, over 50 years (1963-2012 — known data) with infinitesimal differences in a single initial condition produce 30 wildly different 50 year projections — none of them matching the real life result — representing the entire spectrum of the possible: full-tank, empty-tank, quarter-full, etc.

And those are 30 for-all-intents-and-purposes identical runs.
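The behavior Kip describes (infinitesimal initial differences, wildly different outcomes) can be sketched with the classic Lorenz-63 toy system rather than the CESM model itself; the parameters below are the textbook values and the step count is an arbitrary illustrative choice:

```python
# Sensitive dependence on initial conditions in the Lorenz-63 system
# (an illustrative toy, NOT the CESM model): two runs differing by 1e-10
# in one variable end up far apart, while a rerun reproduces itself exactly.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, n_steps=5000):
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

a = run((1.0, 1.0, 1.0))           # reference run
b = run((1.0 + 1e-10, 1.0, 1.0))   # same run, one variable nudged by 1e-10

assert run((1.0, 1.0, 1.0)) == a   # deterministic: a rerun matches bit for bit
gap = sum(abs(p - q) for p, q in zip(a, b))
print(gap)                         # typically order one: the nudge has saturated
```

Rerunning either trajectory reproduces it exactly, so determinism is never in question; it is the predictability that is lost.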

” 30 wildly different 50 year projections”

Huh?

Not really Kip.

try again

All trivially true Wandering in the Weeds misdirection, Mr. Mosher. The examples given have proven track records and the responses are simple and inexpensive reactions people can take.

IPCC climate models have horrible track records and we are being asked to “fundamentally alter our social and economic systems.”

Please, please stay out of policy discussions for which you are unprepared educationally and professionally. For all our benefit.

Charlie Skeptic

“IPCC climate models have horrible track records and we are being asked to “fundamentally alter our social and economic systems.”

huh.

they have great track records, and at most are off by 10-20%, biased high.

hey, my gas gauge is biased high and there is NO PROBLEM using it.

Now to the policy

1. Did you see me suggest FUNDAMENTAL changes to society?

nope

2. To economics?

Nope.

The question of what ACTION can be supported by the information provided by models is A DIFFERENT FRICKING QUESTION

charlie

I love that you stalk me..

but please read harder

and FWIW a full on carbon tax would not change very much of our society or economics..

So stop your scare mongering

or show me the economic model that tells you a carbon tax would kill us?

or what’s your evidence that a carbon tax is a problem?

show your work.

Tell the world Steven, all about the ‘Domino Theory’. Nobody cared to ask just anybody about anything. Remember?

At the time LBJ called his economic plan: ‘Guns & Butter’.

Mr. Mosher, the IPCC is calling for fundamental transformations in our economic and social systems based on their climate models. Read their documents. Then tell me what sort of commitments we are being asked to make based on their prognostications.

You support their models as being off by only 10-20%, biased high. What a joke. They screw with the Southern Hemisphere to try to get the Northern Hemisphere close. Are you unaware of their regional inaccuracies with every climate metric? Are you unaware of their gross misrepresentation of ocean basin SSTs? They can’t even get global average temperatures within 3 degrees C!

A carbon tax is the least of the impositions our betters are proposing. Have you not heard of the EPA’s plans for our electric power system overhaul? Did you look at the derivation of their social cost of carbon? Were not all U.S. Federal agencies commanded to consider the IPCC-defined extreme climate change in their decision-making? And you prattle on about a carbon tax? That is the most minor thing being pushed by the green NGO-backed politicians.

Are you unaware of the many studies of the economic consequences of rapid decarbonization? Have you looked at the cost and system reliability concerns of forced wind and solar additions to extant power grids internationally? The economic and social costs are manifest, and I don’t have to do any economic calculations for you by myself.

Oh, and your gas gauge being biased high? It is A PROBLEM if you run out of gas in the wrong neighborhood. Please get it fixed, for your own (if not your family’s) safety. The weeds you wander in may not be your own.

Here is the work that you wanted.

Lyndon Baines Johnson and the Challenge of “Guns and Butter” (transcript PDF). Why does memory lane look like a circular track to us today?

Mosher, “When the best science you have, limited as it is, tells you that dumping CO2 into the atmosphere may not be the smartest move, do you ignore that information?”

If we magically turn off CO2 today, models indicate 0.5 C to 1.0 C of additional warming. If we take bold action and reduce CO2 emissions to neutral by 2030, models indicate roughly 0.6 to 1.2 C of additional warming. If we take responsible action and reduce CO2 emissions to neutral by ~2060, models indicate we can expect 0.8 C to 1.6 C of additional warming. If we go out of our way to find and burn everything at the lowest possible efficiency, ignoring cost, we can expect greater than 1.6 C, with a possible limit of 4 C of additional warming.

In every case there will still be extreme weather events, but they may be 7% to 10% more severe than what we have experienced in our fairly short “reliable” observation period.

I believe your most recent suggested “fix” was killing coal which will reduce total emissions by about 25% if no one is intelligent enough to use coal more efficiently. So why don’t you spend a little time outlining your grand plan with all the potential unexpected consequences and tell us just how much warming we will avoid and at what cost.

my rationale for killing coal has nothing to do with warming.

Coal kills. kill coal.

pretty effin simple. even you get that math

Hospitals kill, kill hospitals, pretty f’in simple. Now if you think medical error and hospital acquired infections can be reduced, perhaps killing hospitals isn’t a good idea.

Steven Mosher,

Water kills. Kill water.

Yes?

Cheers.

“When the best science you have, limited as it is, tells you that dumping CO2 into the atmosphere may not be the smartest move.”

If you were a tree you may think otherwise….

trees don’t talk.

I have no rational obligations to them

plus CO2 is a trace gas, there is no way it could have any effect

/sarc off

One of the spectacular V22 crashes was a result of the inadequacy of simulation. Settling with power (vortex ring state) was not predicted in the simulator, nor were we able to reproduce such an effect in studying the accident, as it requires the modelling of a highly nonlinear chaotic state.

That said it didn’t really surprise anyone too terribly much. The small rotor diameter was susceptible to such a problem.

The problem is certainly possible to simulate in a very contained and structured setting, small domain with huge computational resources.

Steven,

Based on the performance of the models for the recent “super storm” the media was hyping up here, the answer to your picnic question is – maybe.

I did lose power at my Portland home, which is rare, but that was due to one tree kicking out the entire distribution circuit. We had very few customer outages up in Western WA where I work. I was not put out by not getting called out for storm duty, even though it is the only time we qualify for OT.

When you look at the best data we have, it shows nothing of an effect from CO2.

Lots of effects from warm ocean water moving around, but none from CO2.

One model I am particularly fascinated with, is the model you painted of your research history at Northrop. Having personally worked for Tom Jones in the ’80s, I believe he would agree (though he passed away in 2014), your model is somewhat in the mold of Larry’s “Jet Hangar Hobbies” – not to scale. Not even close. No Cigare Volant.

Here’s another model, albeit a different painter: http://www.populartechnology.net/2014/06/who-is-steven-mosher.html

On the other hand, nice stories about your friends. You write quite well.

“Northrop. Having personally worked for Tom Jones in the ’80s”

dear god… small world

the man was a hero, him and Jack

We went to the ends of the earth to sell the dreams

When the B2 rolled out it was hard to keep a dry eye – knowing how the YB49 had been cut to pieces. Still chokes me up.

Funny that Poptech…. someday I will explain how my algorithm made it into the Falcon series (involves Gilman Louie, Leon Rosenshein, and my BFF Steven Blankenship: https://todayshistorylesson.wordpress.com/2008/12/12/falcon-40-the-flight-sim-standard-bearer/)

Short version. Blankenship met me when I worked at Eidetics and he was interested in the fiberglass full scale replicas we had built of the F16

In 1994-95 I was offered the job as producer, and accepted, but it was Silicon Valley and my company counter-offered, so I stayed at Kabota.

Gilman was pretty pissed, as was Steve, so I introduced them to Leon, who had worked for me at Eidetics. There we had devised a pretty cool sim technology

“http://www.google.ch/patents/US5272652”

Anyway, after looking at thousands of air combat dogfights (to test that patent) it suddenly occurred to me that about 80% of the pilot behavior could be modelled by a simple rule: zero the roll offset (roll your plane until the opponent is at 12 o’clock) and PULL. So the algorithm was called “roll and pull.”

Turns out that if both fighters do this you get a classic SCISSORS as an emergent property of the fight. If you fought AI versus AI, they always ended up in a scissors until one ran out of energy.

Anyway, roll and pull was super simple AI for a digital adversary, but I added some twists to it, basically through interviewing the guys who taught at the aggressor squadron. Most of the additions had to do with EM: basically, don’t let your bleed rate get too high or you fall out of the sky. Then there were special cases for forcing opponents into overshoots.

So, after Leon and Steve published the game, I asked Leon how he did the AI. “Roll and pull, what else!” Later Blankenship would join me at Creative Labs to help manage third-party development, and after that he went back to doing simulations for black programs.
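A toy rendition of the heuristic, for flavor only: in 2-D, “roll until the opponent is at 12 o’clock and pull” collapses to “turn toward the opponent at the maximum allowed rate”. The function name, the turn-rate limit, and the angle conventions below are all invented for illustration; this is not the actual Falcon code:

```python
import math

# Toy 2-D "roll and pull": compute the bearing to the opponent, then turn
# toward it as hard as an (invented) per-step turn limit allows.

def roll_and_pull(own_heading, own_pos, enemy_pos, max_turn=math.radians(9)):
    """Return the new heading after one decision step (angles in radians)."""
    bearing = math.atan2(enemy_pos[1] - own_pos[1],
                         enemy_pos[0] - own_pos[0])   # direction to opponent
    # signed heading error, wrapped into [-pi, pi)
    err = (bearing - own_heading + math.pi) % (2 * math.pi) - math.pi
    # "pull": turn as hard as the limit allows, in the direction of the error
    return own_heading + max(-max_turn, min(max_turn, err))

# Opponent off the left wingtip: turn left; off the right wingtip: turn right.
print(roll_and_pull(0.0, (0.0, 0.0), (0.0, 1.0)) > 0.0)    # True
print(roll_and_pull(0.0, (0.0, 0.0), (0.0, -1.0)) < 0.0)   # True
```

Run against itself from offset starting positions, the mutual hard turning is what produces the scissors-like weave the comment describes as an emergent property.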

Interesting you should mention that, mosh. That’s why we don’t just rely on computer models for that sort of thing. There was once a Virgin Formula One team that decided they didn’t need wind-tunnel time for their aero packages, as they had state-of-the-art modelling… Strangely, that didn’t work out too well for them…. All models are wrong… but they can still be helpful….

what exactly do you put into a wind tunnel?

think now…

“After all this work CFD will give reasonable results but only for very small volumes (a few meters), strictly defined initial/boundary conditions, and simple geometries.”

There is nothing in the N-S equations that gives a scale limit in metres. In typical problems, like your wing example, the initial conditions are known to very little accuracy, and are not an issue in computing. As with GCMs, the problem is run for an initial stretch until the influence of the initial condition details has been smoothed out. Then you start getting meaningful results. Nor is complex geometry an issue.

“This explains why CFD cannot be used for systems whose typical size is larger than a few meters.”

Again, that is complete nonsense. The metric system is not built into the N-S equations. At all reasonable scales, the flow is turbulent and sub-grid modelling is necessary. The nature of that modelling is not dependent on scale.

“When the fluid is not isothermal, it is necessary to add a specific energy conservation equation which adds a new variable, the temperature.”

Yes, and that is done. There is no problem allowing for spatially varying viscosity (use self-adjoint form), and variable temperature, with viscosity effects.

As to the rest, on utility of climate models, I’ll show this GFDL model showing SST as a tracer of currents:

What is clear:

1. The solutions, which derive basically from energy inputs and bottom topography, are clearly not nonsense. In particular, they show important attractors – Agulhas current, Gulf stream etc.

2. The spatio-temporal processes like ENSO appear in the models. Since complete information about those emulations is available in the model, and they are otherwise (without a GCM) mysterious, there is much to be learnt

3. A point I emphasise about initial conditions, and climate vs weather. The model shows many processes of a model earth. It has model eddies, model ENSOs etc. These are not predictions. The eddies will not happen on the actual earth exactly as shown. The ENSOs will not happen at the times stated. But, as you see, the current patterns that emerge are familiar. And the evolution of the model, as it is influenced by the changing forcings (energy flows etc), is constrained, and those are the constraints of the real Earth.

Nick says: “This explains why CFD cannot be used for systems whose typical size is larger than a few meters.”

Again, that is complete nonsense. The metric system is not built into the N-S equations. At all reasonable scales, the flow is turbulent and sub-grid modelling is necessary. The nature of that modelling is not dependent on scale.

Nick, what is a reasonable scale? What model uses it?

Little later, 1. The solutions, which derive basically from energy inputs and bottom topography, are clearly not nonsense.

Why not? Have you changed your criteria? Anything that remotely resembles reality is “clearly not nonsense”?

“reasonable scale”

At very small length scales, viscosity dominates inertia and laminar flow solutions work. But at scales where CFD is commonly used, flow is turbulent and that has to be modelled.

Far be it from me to defend Mr Stokes but in this regard he’s correct.

The swirls are very nice and we do see swirls in real Earth too. But like you say they do not happen in the right time and place. Despite the fact they have wiggles, they do not simulate ENSO well.

Knowing the short term wiggles are not right should give decreased confidence in the long term wiggles being right. You seem to assume the opposite.

It’s like looking at a Renoir, if you squint hard enough you get the impression it looks a bit like the real world.

That is simply untrue. Many of the key ‘forcings’ are simply guestimates. It’s disingenuous to pretend that we understand all aspects of climate as well as things like radiative transmission properties of gases.

Things like volcanic forcing just get tweaked to fit in with other assumed values. Scaling of AOD changed by over 30% from when Lacis et al estimated it from direct physical calculations in 1992 to when Hansen et al fudged it to reconcile model outputs in 2005. https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

There’s plenty of the “basic physics” which we do not know how to model or can not model on a fine enough scale.

> It’s disingenuous to pretend that we understand all aspects of climate as well as things like radiative transmission properties of gases.

Exactly. We don’t even know all the aspects of a pot of water. How are we to conclude that when you heat it, it’ll boil?

By lots and lots of observations Willard.

> By lots and lots of observations

Yet Thomas’ universal reasonment should apply, TimG.

Willard,

I assume a pot of water will boil when sufficient heat is applied for a sufficient length of time.

If you give me enough information, I can probably predict how long it will take to get to the boiling point and so on. In reality you cannot provide sufficient information to enable an exact solution. That’s just for a bit of water in a pot.

The atmosphere is somewhat more difficult to solve. GHE enthusiasts claim it will all average out – sometime in the future. A wonderful non falsifiable claim – the answer lies in the future!

Cheers.

Willard | October 17, 2016 at 9:26 am |

“We don’t even know all the aspects of a pot of water. How are we to conclude that when you heat it, it’ll boil?”

–

The discussion is about how the fluid moves in the pot of boiling water Willard and when that hot moiety of boiling water will jump out and burn your hand.

Not about when it will change state from a liquid to a gas, which happens at 100 C under the usual boundary conditions: pressure, elevation, gravity, etc.

Good try at misdirection though.

> The discussion is about how the fluid moves […]

Not really, Doc – it’s about determinism and predictability.

You could say though that the argument under discussion amounts to the claim that because we don’t know how the fluid moves at every single moment and every single location, we can’t know anything about its future states. Under that interpretation, my argument is more than relevant.

It’s not mine, BTW – it’s Vaughan Pratt’s (pers. comm.).

We’re simply revisiting the But Random Walk all over again.

On viscosity, in 3D: the flow goes to shorter and shorter scales, eventually reaching a scale so short that the air is no longer a fluid but rather particles flying around, and the NS equation doesn’t describe it at all. The flow energy then turns up as heating.

So it’s not viscosity that saves you but that the NS equation breaks down.

Inferring the reverse direction, at bottom the NS equation can’t work, because small scales influence large scales.

(In 2D, vorticity is conserved and it doesn’t go to shorter scales. Vortices, in short, can’t kink and break up.)

Yep, a vortex, like any system, is resilient to change. Too drunk. Will stop commenting. Happy though. Amazing what free drinks can do for you…..

On the Schrodinger equation, as Eddington points out, it describes the evolution of something that does not exist in the world.

rhhardin wrote:

“On the Schrodinger equation, as Eddington points out, it describes the evolution of something that does not exist in the world.”

That’s true for a lot of physical theories, and is the essence of 20th century physics. The metric tensor of general relativity does not “exist in the world,” and yet Einstein’s equations, which solve for it, and from which physically meaningful predictions can be extracted, have never made a prediction known to be false. (They even made predictions Einstein himself didn’t believe were physically possible, but he was wrong, at least twice.)

The metric tensor is just a systemization of equations that describe things that do exist. It’s like components of vector equations.

Eddington does say of it that it makes a lot of physics a tautology; I don’t follow him on that.

> These observations are enough to start considering the predictability of the dynamical variable.

Not really, for the concept of predictability has yet to be introduced.

> What do we know about the existence, unicity and regularity of solutions of 3-D N-S equations? Well basically not much.

Yet that post alone tells us much about existence, unicity, and regularity.

Bravo. You almost made me understand it. Nevertheless I agree with the conclusions: Climate models are not giving the correct answer, and attractors are the most interesting part of climate change as they define the boundaries of what can be expected.

But will attractors give us the information needed to calibrate the damage function?

This part is important for climate.

“As the boundary conditions of the system are given by the shape and location of the continents and of the ocean floor on one side and the orbital parameters as well as the energy output of the Sun on the other side, this attractor can be considered as invariant over the time scales of interest – e.g hundreds or thousands of years.”

Note that forcing is a boundary condition – solar input in this case, but also atmospheric composition. Doubling CO2 is equivalent to adding 1% to solar forcing. It is not staying constant over thousands of years, but quickly changing within a century. This is the problem we face, and that we are already measuring. It doesn’t matter that the system is chaotic, because the forcing change is large enough to show through, and at a rate that does not surprise anyone based on the physics of the forcing.

http://www.woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.2/plot/gistemp/mean:120/mean:240/from:1950/plot/gistemp/from:1985/trend
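The “1%” figure can be checked with back-of-envelope arithmetic. The numbers below are standard textbook values assumed for illustration (total solar irradiance 1361 W/m², planetary albedo 0.30, 3.7 W/m² of forcing per CO2 doubling), not taken from the comment:

```python
# Back-of-envelope check of "doubling CO2 ~ adding 1% to solar forcing".
# All inputs are assumed textbook values, not derived in this thread.

S0, albedo, F2x = 1361.0, 0.30, 3.7    # W/m^2, dimensionless, W/m^2

incident = S0 / 4                      # global-mean insolation, ~340 W/m^2
absorbed = incident * (1 - albedo)     # after reflection, ~238 W/m^2

print(F2x / incident)   # fraction of incident sunlight, ~0.011
print(F2x / absorbed)   # fraction of absorbed sunlight, ~0.016
```

Relative to the roughly 340 W/m² of incident sunlight the ratio is about 1.1%; relative to the roughly 238 W/m² actually absorbed it is about 1.6%, so “~1%” is the right order of magnitude.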

Thomas, Interesting post. Some of it is beyond my limited knowledge of the theory but the incompressible NS reference looks interesting and worth a read.

Nick Stokes, I think the reference to CFD being limited to a domain a few meters in extent refers to Direct Numerical Simulation in which all eddies are resolved. In this case, he is correct. Larger domains require LES or DES or RANS. With RANS full airplanes can be easily simulated even though the accuracy and stability of the solutions remains unresolved. Recent work shows grid convergence to be questionable even in attached flow. And then there is separated flow.

The atmosphere is far harder of course because the eddy sizes range from 1000 kilometers to the microscopic so direct simulation is truly impossible.

I would point out a couple of recent results that Thomas you might find interesting. There is a new paper not yet published showing some negative results for LES that calls into question grid convergence of these simulations. Not surprising perhaps but a blow to LES advocates. Similarly there are a lot of papers on multiple solutions for RANS, particularly for separated flows.

The other issue to remember above all else here is that the CFD literature is strongly affected by selection bias and positive-results bias. Replication often fails, but the failures are seldom published. There is tremendous pressure to show “good” results to keep funding alive. In fact CFD is far more problematic and less replicable than naive laymen and young scientists believe. Bear in mind most papers show results obtained after long studies in which the code is run again and again, adjusting parameters and grids until a “good” result is obtained. In many cases, the test data is known beforehand. Not exactly a research methodology that inspires confidence.

“I think the reference to CFD being limited to a domain a few meters in extent refers to Direct Numerical Simulation in which all eddies are resolved. In this case, he is correct.”

No, he says “after all this work”, which includes RANS etc. DNS is feasible for Re up to a few thousand –

Re=1000, L=1 m, water ν=1E-6 m²/s ⇒ velocity 1 mm/sec

This is not useful CFD.
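The velocity figure follows directly from the definition of the Reynolds number, Re = U·L/ν; a quick sketch of the arithmetic:

```python
# Reynolds number arithmetic from the comment above: Re = U * L / nu,
# so the velocity giving Re = 1000 at L = 1 m in water is U = Re * nu / L.

def reynolds(U, L, nu):
    """Reynolds number for velocity U (m/s), length L (m), viscosity nu (m^2/s)."""
    return U * L / nu

nu_water = 1e-6              # kinematic viscosity of water, m^2/s
L = 1.0                      # length scale, m
U = 1000 * nu_water / L      # velocity giving Re = 1000
print(round(U * 1000, 6))    # ~1 mm/s, as the comment says
assert abs(reynolds(U, L, nu_water) - 1000) < 1e-9
```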


“From the discussion about N-S we know that the system is governed by deterministic equations therefore we know that the “probability” of a storm of the century in the real world was 100 %.”

You don’t know that. The reason is that there is no presently practical way to acquire the initial information from which the inevitability could be inferred. So no application of N-S, no matter how perfect and deterministic, could infer it.

That is the point of numerical weather forecasting (and CFD). It matches the initial information that you have to a reasoning process that predicts broad outputs. It won’t tell you which pylons will blow down.

And it is the point of CFD. It isn’t particularly useful to ask if it converges to an exact N-S solution. N-S is the continuum expression of conservation relations. Discretised N-S finds a solution that expresses that conservation for the integrated quantities on the discrete mesh at discrete times. You can check that it does. It matches input and boundary information that you can acquire, and gives results on a corresponding grid scale that you can handle. It fails to discriminate effects on a sub-grid scale, and where necessary (e.g. updrafts), these have to be modelled.

Nick, I must respectfully disagree with your assertion that “It isn’t particularly useful to ask if it converges to an exact N-S solution.” It is very useful because without this convergence, you are left with a parameter dependent solution, particularly dependent on grid parameters. It is easy for outsiders to underestimate the complexity of RANS grinding processes and the number of parameters.

David,

“particularly dependent on grid parameters”

So that is a question of grid invariance, which is different from convergence to an exact N-S solution, and is testable without going to infinitesimal grid.

“A sensitivity to initial conditions always implies non-linearity”

Not so. A linear ODE can be ill-posed.

Killer Marmot,

Maybe I am using different definitions –

“What are inverse and ill-posed problems? While there is no universal formal definition for inverse problems, an “ill-posed problem” is a problem that either has no solutions in the desired class, or has many (two or more) solutions, or the solution procedure is unstable (i.e., arbitrarily small errors in the measurement data may lead to indefinitely large errors in the solutions). Most difficulties in solving ill-posed problems are caused by the solution instability. Therefore, the term “ill-posed problems” is often used for unstable problems.”

I tend to associate chaos with ill-posed problems – possibly incorrectly. My definition of linear vs non linear functions may also differ from other people’s.

Some time ago, Gavin Schmidt told me he saw “nothing to convince him” that either weather (or climate, weather’s average) was chaotic in nature. I believe Gavin Schmidt has a PhD in mathematics. The IPCC states that climate behaves chaotically, and that future climate states are not predictable as a result. Who’s right?

As to “a linear ODE can be ill-posed”, indeed it can. However, in the context in which it is presented, it seems fair to say that sensitivity to initial conditions always implies non-linearity. Implies – until shown otherwise.

Cheers.

“What are inverse and ill-posed problems?”

Every problem becomes inverse and ill-posed when incorrect assumptions are used in the attempted solution.

They are recognized when model output does not match real data.

Consensus Climate Theory and Consensus Climate Models provide the most examples of this.

“Some time ago, Gavin Schmidt told me he saw ‘nothing to convince him’ that either weather (or climate, weather’s average) was chaotic in nature. I believe Gavin Schmidt has a PhD in mathematics. The IPCC states that climate behaves chaotically, and that future climate states are not predictable as a result. Who’s right?”

Climate data shows that the past ten thousand years has had temperatures that cycled between a little warmer and a little colder with well defined bounds. There is nothing chaotic in this nature. It is very predictable that the next ten thousand years will behave the same.

popesclimatetheory,

Your theory is as good as anyone’s, as far as I can see.

As to the next ten thousand years, one might as well assume the future will be the same as the past, in general.

The main benefit is that it’s very cheap. A secondary benefit is that it’s impossible to prove you wrong.

I follow your principle to predict my eventual demise. I was alive yesterday, I assume I’ll be alive tomorrow. One day, I’ll be wrong, but at least I won’t be worrying about it!

When do you think something like the Younger Dryas might occur? Or a recurrence of a megadrought on the North American (or South American or African or . . . ) continent?

Hopefully, I won’t experience a goodly weather, or “climate”, event in my lifetime.

I hope.

Cheers.

Well, Mike, we are told that all we have to do to avoid such climate extremes is quit producing CO2. [Yes, yes: One can take that statement a number of ways!]

Charlie Skeptic

Gavin Schmidt:

A simple linear ill-posed problem is backwards heat conduction. That is, given the temperature distribution of an insulated metal bar, say,what was its temperature distribution an hour ago?

There are wildly different solutions, all of which pretty near match the current temperatures.

Killer Marmot,

It’s obvious that we differ in our definitions of linear.

“Linearity is the property of a mathematical relationship or function which means that it can be graphically represented as a straight line, that is, that one quantity is simply proportional to another.”

Given one initial point, and one other point, a straight line can be plotted. My linear equations conform to y = mx + b.

Your backwards heat conduction analogy does not fit – as a matter of fact, you refuse to state initial conditions, thereby ensuring the impossibility of coming to any conclusion at all. Wouldn’t you agree?

Sensitivity to initial conditions implies non linearity. A linear equation of the form y = mx + b is not sensitive to initial conditions. According to me, and any reasonable person.

Cheers.
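The distinction being argued can be sketched with two one-line maps: an affine map y = mx + b iterated with |m| < 1, and the logistic map, which is nonlinear and chaotic at r = 4. The parameters are illustrative choices, not anything from the thread:

```python
# Contrast: an iterated affine map with |m| < 1 forgets its starting point,
# while the nonlinear logistic map at r = 4 tears a 1e-9 nudge apart, even
# though every logistic iterate stays inside [0, 1].

def iterate(f, x, n=60):
    for _ in range(n):
        x = f(x)
    return x

affine = lambda x: 0.9 * x + 0.05          # y = mx + b, |m| < 1
logistic = lambda x: 4.0 * x * (1.0 - x)   # chaotic for r = 4

eps = 1e-9
lin_gap = abs(iterate(affine, 0.3) - iterate(affine, 0.3 + eps))
cha_gap = abs(iterate(logistic, 0.3) - iterate(logistic, 0.3 + eps))

print(lin_gap)   # shrinks far below the initial nudge
print(cha_gap)   # typically order one
```

The affine iteration damps the nudge; the logistic iteration amplifies it roughly twofold per step until it saturates at the size of the attractor, all while remaining bounded. That bounded-yet-diverging behavior is what sensitivity to initial conditions means in the chaotic sense.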

Gavin Schmidt has a PhD in Applied Mathematics – and I believe he earned it a long time ago. He may never have studied chaos theory.

Tomas Milanovic

I realize I am adventuring well past your admonition.

Nevertheless, I did not see in any of the equations above the ability to assess or predict abrupt weather or abrupt climate change. This quality of incorporating abrupt change may be inherent in the Navier-Stokes equations and I don’t realize it.

Your thoughts would be helpful if I am on the right track.

Thanks Tomas for an interesting overview of the logical and computational issues underlying climate science. While I know that my theoretical and mental limitations will preclude a proper understanding of the prediction problem inherent in non-linear chaotic systems, there seems to be plenty of climate scientists (and other blog contributors) punching well below their intellectual weight!

Sorry, I meant well ABOVE their intellectual weight!

The third observation is that all four equations and systems are deterministic. This means that if we know the value of the dynamical variables at time t for (1) and (2), the equations yield a unique value of the variables at a later time (t + dt). For (3) and (4), if we know the value of the dynamical variable at time t and at a position x, then the equations yield the value of the dynamical variable at a later time (t + dt) and/or at another position (x + dx).

These observations are enough to start considering the predictability of the dynamical variable.

This is valid if nothing else changes. In real Climate, other stuff changes. It snows more when oceans get warm and polar oceans thaw. It snows until it gets cold again. This is not in the equations.

That numerical modelling of the climate is not feasible was once the common viewpoint. Then some people decided to get their hands dirty and try to see how far they could get. I think that this can only be admired. Since then, using the tools of weather prediction, climate modelling has developed into an industry. Among many others, one nasty little problem has remained unsolved, however: how can we determine whether our predictions are good enough for the purpose? So yes, you might say that climate modelling is still more or less where it began, despite all the technology. We can call it “best available science” but we have no way to determine how good it is for the purpose of predicting climate.

So the problem is not in the equations or in the numerics (weather prediction has been hugely successful). The problem is, and will remain, the time scale of climate change; we cannot verify the models.

For key purposes, we can already verify that the latest models (CMIP5) are inadequate for policy purposes. They predict a tropical troposphere hotspot that does not exist, running >3x (UAH) or 1.7x (latest Mears RSS paper after stratosphere fiddles) too hot. And they produce a median ECS of 3.2 when the observational data suggest 1.65 (e.g. Lewis and Curry 2014). In addition to the reasons cited in this post, computational intractability forces parameterization that inherently introduces the attribution problem.

I agree. And indeed comparison to observation-based climate sensitivity estimates offers probably the only useful “indicator” for the fitness for purpose of climate models.

But over the longer term, there is a powerful incentive for modellers to come up with “solutions” to explain away discrepancies or make them disappear somehow. This will go on as long as the fit to historical data is considered a valid measure of model skill. Which it is not; you can only evaluate the skill of a prediction made before the measurements were made. But giving up the illusion would mean accepting defeat.

It’s nice to see some old-school physical analysis in these columns. I’ve long been puzzled by the fixation on the Navier-Stokes equations. It seems worth re-emphasizing that these are limited to linear, isothermal, viscous dissipation, as Tomas clearly states. Viscous dissipation requires a steady material flux between two reservoirs of differing potential (pressure/density). Unlike a wind tunnel, in the atmosphere this transport is wholly rotational, for there is no net material transfer from the equator by Hadley currents. Rather there is a transfer of energy in the form of heat. An equatorial fluctuation triggers a material flux which is then viscously dissipated as heat in somewhat cooler climes.

When one observes a steady 1kW energy transfer from a 300K source to a 200K sink, classical thermodynamics tells us energy is being dissipated at a 333W rate, whether through a copper bar (no mass transport) or an atmospheric column a kilometer high (convective turbulence). This is basic thermal dissipation.
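The arithmetic behind that example can be checked directly. A quick sketch (my own check, not part of the original comment): the 333 W figure is the work a Carnot engine could have extracted from the same heat flow, i.e. the rate at which work-producing potential is destroyed.

```python
# Heat flow of 1 kW from a 300 K source to a 200 K sink.
Q = 1000.0      # heat flow, W
T_hot = 300.0   # source temperature, K
T_cold = 200.0  # sink temperature, K

# Entropy production rate of the irreversible transfer (W/K).
sigma = Q / T_cold - Q / T_hot            # 5.000 - 3.333 = 1.667 W/K

# Lost work (Gouy-Stodola: T_cold * sigma) -- the quoted 333 W.
lost_work = Q * (1.0 - T_cold / T_hot)    # approximately 333.3 W

print(round(sigma, 3), "W/K,", round(lost_work, 1), "W")
```

The same 333 W comes out whether the transfer path is a copper bar or a turbulent atmospheric column, which is the commenter's point.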

Of some relevance: Michel Baranger

Chaos, Complexity, and Entropy, A physics talk for non-physicists

http://globalintelhub.com/wp-content/uploads/2013/10/cce.pdf

This shows how complicated it is trying to model that which is solar forced, without actually knowing that it is solar forced. The only thing that is truly ‘deterministic’ is the missing solar signal driving the noise that is assumed to be internal variability.

“Constant solar input is sufficient to drive chaotic oscillation.”

A variable Sun drives natural variability at the scale of weather, and it is very predictable by the planetary ordering of said solar variability. That’s the reality.

Chaotic oscillation and variable solar forcing are two different aspects of climate change. Constant solar input is sufficient to drive chaotic oscillation. Both need greatly increased study, rather than pouring ever more money into paradigmatic AGW climate modeling.

Great work, Tomas. Your brief discussion of the research needs related to chaotic climate is especially relevant to my goal of refocusing the USGCRP. Additional thoughts from you (or anyone) will be most welcome.

See https://judithcurry.com/2016/08/29/refocusing-the-usgcrp/.

We need to define the research issues in fundable detail.

I have gathered some of the comments and will try to answer them here.

Unfortunately my available time is rather scarce, but I will try to answer future questions in the same way.

.

“I believe that the attractor is not only strange, but incapable of useful definition in mathematical terms.”

Well, the definition of an attractor in mathematical terms is pretty straightforward. It is a subset of the phase space which is left invariant by the dynamics of the system. “Strange” is just a qualification of a particular kind of attractor. So it is not only useful but fundamental, because of its invariance.
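That invariance can be illustrated numerically with the Lorenz system, equation (2) of the post. A minimal sketch (the integrator, step size and starting points are my own choices, purely illustrative):

```python
# Lorenz '63 system, integrated with a small forward-Euler step.
# Two trajectories started far apart (and far off the attractor)
# both collapse onto the same bounded invariant set.
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def evolve(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = evolve((50.0, 50.0, 50.0), 50_000)    # 50 time units
b = evolve((-40.0, 10.0, 90.0), 50_000)

# After the transient, both states lie within the attractor's
# well-known bounds (|x| < 25, |y| < 35, 0 < z < 55).
print(a)
print(b)
```

Once a trajectory is on the attractor it stays there, whatever the starting point was; that is exactly the invariance referred to above.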

.

“If 50 of the forecasts predicted a horrible storm, would you plan a picnic?”

Yes, no, what of it? If it is a question about how, when and why probabilities are defined (or not) in a chaotic system, then you need to read the post again. If it is a hyperbole about strategies in environments where no probabilities are defined, then it is off topic.

.

“There is nothing in the N-S equations that gives a scale limit in metres.”

Nick, this is an irrelevant and silly remark.

The point is not whether there are meters in N-S. The point is that to discretize N-S you MUST choose a grid, and there then ARE meters to define the size of a grid. And it is these meters which constrain you heavily in the number of points you can have in a grid of a given scale.

It is also nonsense to write that “complex geometry” is not an issue – I give you as an exercise to find and study a few papers which analyse the impact of the topology and grid choice in finite element methods. Another nonsense is that there is no parametrization in DNS – I guess the hundreds of papers dealing with the dependence on grid scale, form and knots don’t exist.

Well yes, these decisions have a quite large impact on the results and have to be carefully studied – they cannot be chosen randomly, as you implied.

.

“Knowing the short term wiggles are not right should give decreased confidence in the long term wiggles being right.”

This is correct, Greg. As I have written, there IS an interaction between subgrid scales and grid scales. The solutions over the two domains are not independent.

As I have also written, some global features at very large scales are sufficiently invariant that they can be found even by zero-dimensional models (Hadley cell circulation via constructal theory). It is not really surprising that other models, even quite primitive ones that conserve more or less energy and momentum, will find similar global features. However when the scale decreases (towards e.g. regional scales), it is as you said – the “small wiggles” matter because they are no longer so small.

.

“But will attractors give us the information needed to calibrate the damage function?”

This is a very interesting question, Peter Lang. The short answer is yes, because the topology of the attractor gives you the right functions that describe a particular portion of the phase space.

It would allow you to exclude what is impossible (outside of the attractor) and perhaps to evaluate the probability that a particular region of the attractor will be visited, and for what time.

The practical difficulty is that such an attractor would have a very large dimension, so that you would have to find the right projections to reduce the dimensionality. But that is what the research is for, isn’t it?

.

“There is a new paper not yet published showing some negative results for LES that calls into question grid convergence of these simulations.”

Thanks for your input, David Young. By definition I cannot know this particular paper, but I am generally quite interested in LES because the method, based on scale separation and independence of solutions, is at the heart of the matter. I have read quite a number of papers that focus specifically on LES problems due to the interaction of subgrid and grid scales.

Even if some comments tried to deny this observed fact, it is sure that, for example, clouds (subgrid scales) heavily interact with energy at large grid scales. If you don’t get the former right, no LES will help to make the latter right.

.

“From the discussion about N-S we know that the system is governed by deterministic equations therefore we know that the ‘probability’ of a storm of the century in the real world was 100 %.”

You don’t know that. The reason is that there is no presently practical way to acquire the initial information from which the inevitability could be inferred. So no application of N-S, no matter how perfect and deterministic, could infer it.

You are at it again, Nick. It looks as if your main objective in this thread is to maximally muddy the waters so that a maximum of readers become confused.

What I wrote is trivially true, so yes, I know that.

Actually it seems that you have no notion of non-linear dynamics, which could explain why all your posts were off topic, irrelevant, or misinterpreting the initial post, which is about determinism and predictability.

N-S is deterministic, so if one supposes that a unique solution exists – not a big stretch for sufficiently regular initial conditions – then there exists only one final state for given initial conditions.

So if you observed a storm of the century today, its “probability” at some recent point in the past was exactly 100 %.

This is a completely different issue from the question of whether you can compute this solution or not.

My whole argument is saying that you cannot do that. But it says also more – it says that regardless of how many (wrong) finite simulations you make, you cannot approach this 100 % probability.

Theoretically a consistent concept of probabilities of final states could be defined and computed if the attractor were known and ergodicity established (which is not the case), but certainly not by running 10 or 50 times some numerical model which is known not to converge to N-S solutions anyway.

I am quite willing to answer your questions, but try to make comments which make sense and are relevant.

Your unconditional faith that numerical models are accurate and the only way to deal with spatio-temporal chaos is admirable but unfortunately wrong.

.

“I did not see in any of the equations above the ability to assess or predict abrupt weather or abrupt climate change. This quality of incorporating abrupt change may be inherent in Navier-Stokes equations and I don’t realize it.”

RiH008, you did not because you cannot. The notion of abrupt implies a (quasi) discontinuity.

You cannot know whether the solution of N-S for particular initial conditions is discontinuous or not by just looking at the N-S equations.

Actually you can get a better idea if you look at the surf on a beach during a storm (or at a waterfall). The waves breaking on the shore are solutions of N-S with pretty wild initial conditions, and they are quite discontinuous indeed.

In these examples you have evidence that N-S does incorporate the capacity for very abrupt changes when the initial and boundary conditions are of the right shape.

In the vocabulary of non-linear dynamics you could say that the system visits a very tortured (fractal) region of the attractor.

“It is also nonsense to write that ‘complex geometry’ is not an issue – I give you as an exercise to find and study a few papers which analyse the impact of the topology and grid choice in the finite element methods. Another nonsense is that there is no parametrization in DNS – I guess the hundreds of papers dealing with the dependence on grid scale, form and knots don’t exist.”

Nonsenses all round here. No one is using DNS for any practical CFD. There has been some seguing from parameters of the equations to parameters in grid generation, which is a different issue. But there are no issues about grid topology, knots etc. in GCMs. The mesh is just layers on the earth surface, and the surface mesh nowadays is usually just a map of a regular mesh on a cube. But even lat/lon works well enough.

“It is trivially true what I wrote, so yes, I know that.”

The only argument seems to be that it happened, so the probability was 100%. Well, that is indeed trivial. The issue is whether you can assign a probability in advance. And for weather forecasting, that depends primarily on your knowledge of the initial state, which can never be 100% for anything. Determinism or not of N-S can’t change that.

Ultimately, the fact is that CFD works, and so does numerical weather prediction. And yes, NWP works on this scale of thousands of km, and manages to make the meshes work.

“Ultimately, the fact is that CFD works, and so does numerical weather prediction.”

NWP:

Works most of the time for 3 day forecasts ( an amazing improvement ).

Works some of the time for 7 day forecasts.

Fails most of the time for 10 day forecasts and longer.
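That skill profile is just what exponential growth of initial-condition error produces. A toy sketch (the 5 % initial error and the two-day doubling time are my assumptions, roughly the figures usually quoted for synoptic scales, not numbers from this thread):

```python
def relative_error(lead_days, initial_error=0.05, doubling_days=2.0):
    """Toy error-growth model: the initial-condition error doubles
    every `doubling_days`; 1.0 means saturation (no skill left)."""
    return min(1.0, initial_error * 2.0 ** (lead_days / doubling_days))

for d in (3, 7, 10):
    print(f"{d:2d}-day forecast: relative error {relative_error(d):.2f}")
```

Small error at 3 days, marginal at 7, fully saturated by 10 – the same shape as the forecast record described above.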

“Works most of the time for 3 day forecasts (an amazing improvement).”

Yes. So they’re getting something right. The claim here is that CFD is restricted to a scale of metres.

Yes it fails on weather most of the time on 10 days, but… it will get the climate right most of the time for the 10 days, the end of the month, the next season, etc.

“Yes it fails on weather most of the time on 10 days, but… it will get the climate right most of the time for the 10 days, the end of the month, the next season, etc.”

No. The things which matter (precipitation, wind, heatwaves, cold fronts, hurricanes, etc.) are not remotely predictable beyond a week or so. It helps to have lots of meteorologist friends that get very excited about false positive forecasts.

Kinda like the CAGW crowd.

“No. The things which matter ( precipitation, wind, heatwaves, cold fronts, hurricanes, etc. ) are not remotely predictable beyond a week or so. It helps to have lots of meteorologist friends that get very excited about false positive forecasts.”

Then you would have GCMs become NWP models?

“It helps to have lots of meteorologist friends”

And (as a retired meteorologist)

I’d suggest you have the wrong kind of “meteorologist friends”.

Real ones don’t get excited by *forecasts* beyond 7 days.

They know that it is just one deterministic forecast from a range of possibilities.

If the ensemble has enough members clustered around that deterministic forecast, then there is some “room” for “excitement”.

GCM’s do what they do… and very well:

https://tamino.wordpress.com/2016/10/17/by-request-validation/

Don’t drag unreasonable expectations into it.

Tony Banton:

“GCM’s do what they do… and very well:”

What they do very well is give unscrupulous scientists and politicians an excuse, in the first instance, to have cushy, over-remunerated careers entirely bereft of scrutiny, and in the second case an excuse to wield almost unlimited power without responsibility.

Aside from that they are of zero scientific value and they are an extraordinary waste of resources and responsible for the justification of policies that have and will continue to be responsible for untold death and suffering.

A northern hemisphere January is not going to get turned on its head by chaos. All in all, they are pretty much the same, and as predictable as the sunrise… cold, gradually getting warmer, but still cold.

Half of the years after 2009 were predicted to be warmest years… the Brits, attempting what was thought to be impossible, using their toy models:

2010 – warmest year

2011 – not a warmest year

2012 – not a warmest year

2013 – not a warmest year

2014 – warmest year

2015 – warmest year

2016 – warmest year

2017 – unlikely to beat 2016, but odds of an El Niño are going up.

Current score – looks like four to four.

“GCM’s do what they do… and very well”

No.

Manabe’s 1 D models demonstrated global warming.

They also demonstrated the importance of convection which is a function of circulation. But circulation is not predictable ( outside of some constraints ).

The GCMs are worthless at improving the understanding of the original 1 D models of half a century ago.

See my comments below.

RF at the tropopause is not nearly as much a function of chaotic fluid flow as the arrangement of troughs and ridges at 500 mb is.

Therefore, global average temperature is not (as) subject to the unpredictable numerical solutions described so well in the post.

But, what determines your weather and climate is much more what happens at 500mb than at the tropopause.

Extremists that don’t really understand climate gravitate toward the global average temperature, but it is largely an irrelevant number.

“Ultimately, the fact is that CFD works”

Not even backwards, Nick.

The UK Met Office is celebrating that their new £97 million computer can now create slightly better 12 month predictions than tossing a coin. It was previously thought that the NAO was a chaotic system which could not be predicted, but the Met Office has used a technique called ‘hindcasting’ to check whether their new supercomputer could have predicted past winters. They discovered that they could largely predict what the winter weather would have done for the past 35 years, a year in advance, with 62 per cent accuracy.

It still defies prediction even when the result is known.

“The GCMs are worthless at improving the understanding of the original 1 D models of half a century ago.”

I disagree:

As this plot shows, when forcings are adjusted to values that actually happened.

That tells us we are very much on the right track.

Unicorns (as someone once or twice has said) notwithstanding.

“They discovered that they could largely predict what the winter weather would have done for the past 35 years, a year in advance, with 62 per cent accuracy.

It still defies prediction even when the result is known.”

You obviously don’t agree – but I think that is damned impressive, especially for it being fairly early in the program, and, as they say, they see a means of improvement already.

For those interested – here is the full paper….

http://sci-hub.bz/10.1038/ngeo2824

“The GCMs are worthless at improving the understanding of the original 1 D models of half a century ago.”

I disagree: as this plot shows, when forcings are adjusted to values that actually happened. That tells us we are very much on the right track. Unicorns (as someone once or twice has said) notwithstanding.

1D models told us this half a century ago!

The GCMs cannot and do not accurately predict circulation change.

“The GCMs cannot and do not accurately predict circulation change.”

That’s OK then – as they’re not designed to.

Tomas Milanovic,

Thank you for answering. However, I don’t understand your answer. You say:

The damage function defines the net-economic damages (could be positive or negative damages) per degree increase in GMST or per change in CO2 concentration in the atmosphere. What is missing are valid, objective, unbiased studies to obtain the empirical data to define and calibrate the damage function.

Life thrived when the planet was warmer than now and struggled when colder than now. The planet is currently in a very cold period; only the second time it has been this cold in the past 540 million years. For only 25% of that time has the planet had ice caps at the poles.

How can the attractor give us the empirical evidence to define and calibrate the damage function (GHG emissions may actually be a net benefit, not net damage)?

I should have clarified the reason for this paragraph:

Life thrived when the planet was warmer than now and struggled when colder than now. The planet is currently in a very cold period; only the second time it has been this cold in the past 540 million years. For only 25% of that time has the planet had ice caps at the poles.

The significance of this is that it does not seem to support the belief that increased GHG concentrations in the atmosphere are a serious threat, let alone dangerous – perhaps they will do more good than harm.

One of your favorite engineering societies thinks otherwise.

https://www.ospe.on.ca/public/documents/advocacy/submissions/2015-climate-change.pdf

The Alarmist’s bible, IPCC AR5, says the damage functions are next to useless. Furthermore, no one here has been able to answer the question: what is the empirical evidence to calibrate the damage functions?

The damage function is an essential input for estimating the Social Cost of Carbon (SCC). Without a valid damage function, SCC estimates are meaningless.

IPCC AR5 WG3 Chapter 3 mentions ‘Damage Function’ 18 times http://www.ipcc.ch/pdf/assessment-report/ar5/wg3/ipcc_wg3_ar5_chapter3.pdf . Some examples:

“Damage functions in existing Integrated Assessment Models (IAMs) are of low reliability (high confidence).” [3.9, 3.12]”

“Our general conclusion is that the reliability of damage functions in current IAMs is low.” [p247]

“To develop better estimates of the social cost of carbon and to better evaluate mitigation options, it would be helpful to have more realistic estimates of the components of the damage function, more closely connected to WGII assessments of physical impacts.”

“As discussed in Section 3.9, the aggregate damage functions used in many IAMs are generated from a remarkable paucity of data and are thus of low reliability.”

“These observations are enough to start considering the predictability of the dynamical variable.”

This is valid if nothing else changes. In real climate, other stuff changes. It snows more when oceans get warm and polar oceans thaw. It snows until it gets cold again. This is not in the equations.

.

Well, it is.

What you describe are phase changes. N-S deals with polyphasic flows too.

Of course in my example I used a monophasic flow to avoid complications, but the reasoning is universally valid.

Your point, which is right, is that polyphasic flows are much more complex and much less well understood than monophasic flows.

If there is “other stuff” which is relevant to your problem, then you simply need to include this “other stuff” in your equations too.

I have fixed sign typos in the N-S equations. These got mangled in the editing. Re-load the page and the changes should show.

Very nice article. One thing that a lot of people are unaware of is that there is a second kind of sensitive dependence on initial conditions: nearby initial conditions can lead to very different attractors, which in the case of GCMs would mean very different climates.
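A minimal illustration of that second kind of sensitivity (my own toy system, vastly simpler than a GCM): the one-dimensional flow dx/dt = x − x³ has two attractors, at x = +1 and x = −1, and initial conditions a hair apart end up on different ones.

```python
def settle(x0, dt=0.01, steps=10_000):
    """Integrate dx/dt = x - x**3 with forward Euler; the flow has
    two attracting fixed points, x = +1 and x = -1."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x ** 3)
    return x

# Initial conditions separated by only 2e-6 reach different attractors.
print(settle(+1e-6))   # settles near +1
print(settle(-1e-6))   # settles near -1
```

Here the basin boundary (x = 0) is trivially simple; in higher-dimensional systems such boundaries can themselves be fractal, making the outcome effectively unpredictable.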

Tomas, thank you for this detailed review – it will provide a good reference.

I wonder if you have comments regarding:

As you have explained, the dynamic features (troughs and ridges) over a large area (perhaps North America) are not predictable. Consequently, climate events (droughts, floods, heatwaves, cold waves, intense storms, etc.) are not predictable either. However, it occurs to me that the top-of-the-atmosphere radiative imbalance is probably a more predictable aspect. Reflected SW and emitted-to-space LW can vary, but it appears from inter-annual and seasonal observation that the state of the atmosphere is not as large a determinant as the less chaotic TOA radiative forcing from, for example, GHGs (because troughs in one place imply ridges in another, and cloud-modulated albedo doesn’t wander too far from the mean). So,

Predictable: global average temperature

Unpredictable: precipitation, extreme temperatures, winds, storms, etc.

Of course, global average temperature is not a term in the equations of motion and not a very meaningful measure. Nevertheless, it would appear to be predictable.

“Predictable: global average temperature” By whom? How? Over what time period?

The complex flow of heat through the atmosphere, up, down and sideways, is due in large part to these motions, so the chaos is there. Given that the GH effect begins where the molecules are, and there are none at the TOA, I have to wonder what “TOA radiative forcing” is, there being nothing there to apply the forcing. The GHG forcing really begins when the GH molecule gives off its newly absorbed energy via kinetic collision. Chaos follows.

For any moment on earth, there is a range of conditions: pressure ridges in certain places, troughs in others. Cold air in places, warm in others. Clouds in places, clear in others.

For all of these conditions ( with the exception of high Antarctic winters ), adding CO2 does result in a calculable surplus of net radiance at the TOA. The unpredictable fluid flow does change the distribution of cloud, precipitation, temperature, etc., within the atmosphere which does effect outgoing longwave radiance, but the effect appears to be small wrt, for example, the RF from 2xCO2.

There may well be a case for predicting an increase in global average temperature with increasing GHGs.

It’s just that global average temperature is not particularly significant ( if it were, wouldn’t we know and follow what that value was? ) and most significant climate phenomena are unpredictable.

What does “calculable” mean here: “For all of these conditions ( with the exception of high Antarctic winters ), adding CO2 does result in a calculable surplus of net radiance at the TOA.”

Calculated how and by whom? What is a surplus of net radiance?

Turbulent Eddie,

I mean no offence, but as you know I get a wee bit picky at times.

I draw a distinction between prediction and assumption. Without going into details, I would put assumptions about future global average temperature in the same basket as assumptions that the Sun will rise tomorrow.

I see instances where the GHE crowd realise the difference, and use weasel words such as scenarios, while demanding action based on supposedly scientifically based predictions (implied, but described in non-accountable terms).

I don’t really care too much, but some people might think that the unfounded speculations of GHE promoters are as soundly based as, say, the next predicted total solar eclipse, or where the transit of Venus might best be observed.

Again, no offence intended.

Cheers.

Any one realization of a climate model simulation will have 100% numerical/model error within some number of months (3, 4, 6?).

So averaging many of them together, i.e. Monte Carlo, is nothing more than fraud.

We know even more than that: what happened in the past – including climate change – is 100% likely!

Exceptional post, Tomas.

I suspect the model worshippers are going to be upset.

The model worshipers are well aware of this sort of chaos. They ignore it studiously. James Gleick pointed this out in his best selling chaos intro, published in 1987, somewhere around page 160 as I recall. https://en.wikipedia.org/wiki/Chaos:_Making_a_New_Science.

(I used to lecture at NRL on chaos theory, about the same time.)

He said the modelers “go to great lengths” to ignore large scale chaos. I have always considered this an indictment of sorts.

“He said the modelers ‘go to great lengths’ to ignore large scale chaos.”

Which modellers? There weren’t many GCMs active in 1987. Hansen was publishing a bit, GFDL’s SKYHI. No AOGCMs. It seems they hadn’t had much time to go to great lengths.

Nick Stokes,

May I respectfully point out that climatologists are not the only people in the world to use models of one sort or another.

Maybe you should direct your query to Mr Gleick if David Wojick’s quote was accurate. If Mr Gleick is in error, you could certainly correct him, if you have facts to back up any assertions of your own.

Cheers.

Wojick ==> While the paraphrase of James Gleick is perfectly accurate in general, my recent post Lorenz Validated illustrates climate modellers specifically [erroneously] using the chaotic nature of their model to “simulate” natural variability.

So, certainly, they know it is there — they know they can get 30 (or a million) different results/projections by infinitesimal perturbations of a single initial condition (far smaller than most computer rounding errors).

As my post and this post illuminate, they just don’t understand the implications at a deep enough level.

“If the CFD code told you the wing had no lift, would you fly in the airplane?”

Wing models will have been repeatedly validated with many wings and real life experiments. Then, if a valid wing model tells me so, I will accept its result.

Problem: we have only one laboratory and only one on-going experiment to study the climate (weather forecasts have totally different aims and can be repeatedly tested… and have only short term validity).

Out of the 90 CMIP5 model runs reported, 88 reconstruct temperature anomalies above actual observations, a lot of them grossly overshooting (worst case: actual = 0.3 °C, calculated 1.0 °C). I’m not going to indicate an average of these exaggerations because it does not make sense to calculate average values of wrong things.

With the only Earth experiment at hand it is impossible, and will stay so for decades or centuries, to determine which models may be valid enough to allow making any forecast calculations of any kind of future climate scenario.

Add to this that no observational evidence is available that validly correlates warming with CO2. Only a plausible radiative forcing phenomenon may suggest some contribution, the search for whose sensitivity, including feedback responses, is similar to seeking the holy grail.

Then I will not take any action on this lack of science that makes impossible claims of temperature sensitivity to CO2, and imagines only catastrophic scenarios.

Only one question remains: why so much passion in predicting anthropogenic catastrophes and demanding drastic action hic et nunc?

The answer (which I don’t know, despite a few clues) is not in science but in humans.

Tomas, congratulations on a masterly review of this subject. Your math seems valid. It is probably not open to any form of ‘new math’ breakthrough.

What if the real answer to global average temperature variation is ‘merely’ a matter of applying Fourier analysis to the long term global average temperature record? The German team of Ludecke, Hempelmann and Weiss certainly think so and have published very persuasive papers to that end. Not at all what the mainstream media wants to hear.

Their papers show that a stated drop in temperature will occur in coming years (irrespective of the level of CO2). They hold that the major temperature cycle is dependent on the well known DeVries solar cycle.

They offer forecasts with degrees C and year dates attached. This has the great advantage and merit of being ‘falsifiable’. If they prove wrong – so be it. If they are right it is game over for the anti-CO2 brigade. You can’t fool all the people all the time.

Yes, I’m sure those forecasts will pan out…just like JC’s “slam dunk” prediction that we would be cooling now.

Determinism and Laplace’s demon is a meaningless philosophical debate. Even if a complex chaotic system is deterministic, Lorenz-style, the amount of knowledge needed to make deterministic predictions is almost infinite, so in practice there is no determinism.

Climastrologers need to give up photorealism and explicit predictive models, and look instead to chaotic-nonlinear analogous model systems, such as thin film BZ reactor systems and other thin film intermittent turbulence models, for general insights into the climate system. This, combined with mental freedom from ideological totalitarianism and a little free-thinking creativity. This may feel terrifying at first, but try it – it really is OK! “Feel the fear and do it anyway!”

Interesting post.

Looking at the equations with a backward flow of time it is obvious that a high degree of determinism is evident in all situations. The spilt water will always flow back into the cup.

Even in the abstract the water washed up on the beach will flow back into the structured wave it came from.

There is thus some hope that in the short term some degree of predictive usefulness can be achieved with a large dose of warning that it will not always be accurate.

Again while the odds of a snowball forming in hell are still not impossible one has to realize that it will usually be hot.

The sinking ship analogy, “if we were all to be destroyed by a runaway phenomenon”, is disproven by the fact that we are here discussing it.

There is always a real but small chance of a major catastrophe and on the smaller scale of time and space that our climate and weather operate in that frequent changes, catastrophic for some areas at times, will always occur. Human input is a very small, currently unquantifiable, part of the risk spread.

Actuaries have roamed the Earth, but now a little while.

An excellent post. My paper on this topic is currently in review. An extract:

Fluid dynamical systems are commonly dealt with deterministically using the Navier-Stokes equations or variations thereof. In order to do this it is commonly assumed that the fluid involved is a ‘continuum’, i.e. that it is everywhere continuous and differentiable and so is deterministic, as when the Navier-Stokes equations are converted to the discrete form of finite difference equations for the purpose of numerical modeling of fluids. Here Taylor’s theorem is commonly used to make the transition to the discrete case, the continuum assumption being implicit in this use of Taylor’s theorem.
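The role Taylor’s theorem plays here can be made explicit. For the standard central difference (a textbook sketch, not material from the paper under review), one expands:

```latex
% Taylor expansions, valid only if f is three times differentiable
% (this is precisely the continuum assumption):
f(x+h) = f(x) + h f'(x) + \tfrac{h^2}{2} f''(x) + \tfrac{h^3}{6} f'''(\xi_+)
f(x-h) = f(x) - h f'(x) + \tfrac{h^2}{2} f''(x) - \tfrac{h^3}{6} f'''(\xi_-)
% Subtracting and dividing by 2h gives the discrete derivative:
f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2)
```

The O(h²) accuracy claim therefore presupposes differentiability; where the field is not differentiable, the error bound simply does not apply.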

It has been well known for more than a century that no real fluid is a true continuum. This is demonstrated by Brownian motion. The velocity field in any real fluid varies rapidly and discontinuously in both time and space. It is not differentiable. Hence the use of Taylor’s theorem in defining the finite difference equations of numerical models is not justified. Instead there is a probabilistic relationship between the state of a fluid and the preceding state. Real fluids are stochastic, not deterministic. Observed fluid dynamic phenomena such as turbulence, vortex shedding and wave breaking are testament to the stochastic nature of real fluids. Even at laboratory scales, the Navier-Stokes equations cannot adequately describe the behavior of fluids in high Reynolds number regimes. Aspects of these regimes which can be quantified are dependent on other methods such as dimensional arguments, self-similarity and physical intuition. Kolmogorov’s turbulence spectrum is an example.

If a system is deterministic then its variables are all single-valued functions of time. Experimental observations of dynamical variables are commonly displayed as functions of time and a ‘line of best fit’ or regression line fitted to the observations to display the trend or rate of change with time. This is commonplace, something most researchers learned at school. However, there can be serious problems with this methodology when the system under investigation is stochastic. Nelson and Kang (1984) demonstrated that, for certain stochastic processes such as a ‘random walk’, the use of time as the explanatory variable can lead to the appearance of a trend even though none was present in the original data. It follows that an observed trend obtained by regressing a physical quantity on time may or may not be real, depending on the deterministic or stochastic nature of the system under investigation.
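The Nelson and Kang effect is easy to reproduce (a minimal sketch in Python, not code from their paper): regress a driftless random walk on time, and the fit will generally report a nonzero "trend" even though none exists by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A driftless random walk: x_t = x_{t-1} + e_t, with e_t ~ N(0, 1).
# By construction there is no deterministic trend in this series.
n = 1000
x = np.cumsum(rng.normal(size=n))
t = np.arange(n)

# Ordinary least-squares regression of the walk on time.
slope, intercept = np.polyfit(t, x, 1)

# The fitted slope is generally well away from zero -- the spurious
# "trend" that Nelson and Kang (1984) analysed.
print(f"fitted slope per step: {slope:.4f}")
```

Different seeds give different slopes, positive or negative; the point is that a conventional regression treats the fitted slope as a real trend when the process has none.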

In my paper I use statistical methods to confirm the stochastic nature of climate. See http://blackjay.net/?p=335

“Here Taylor’s theorem is commonly used to make the transition to the discrete case, the continuum assumption being implicit in this use of Taylor’s theorem.”

I don’t think that is true. You’d need more justification. It certainly isn’t true for finite element methods, nor finite volume, which are the more usual methods now.

Getting down to the molecular level won’t help. The true limit of smoothness is turbulence, which is effective on much larger scales. That is again a reason for using integral formulations, as people do.

Climate models are finite difference models for which my statement IS true. In any event, finite difference, finite element and finite volume solutions of p.d.e.s ALL require that the variables be continuous and differentiable. Even the formulation of the NS equations as p.d.e.s makes this assumption. The Brownian motion, and the atomic theory which it supports, indicate that no real fluid is a true continuum and hence is not deterministic; Boltzmann and Planck brought an end to Laplacian determinism. The problem with fluid dynamics is that it attempts to model stochastic phenomena with deterministic equations, something mathematicians find hard to accept. Appeals to chaos theory don’t get you off the hook either. Chaos is still deterministic; at best it is a description of the pathological behaviour of deterministic equations rather than a description of the real world.

“Climate models are finite difference models for which my statement IS true.”

Not true. The dynamic core (pressure/velocity) is almost always solved by a spectral method. The other parts are usually expressed in an integral form, which is essentially finite volume. Even in FD, I think you should explain your claim that Taylor’s theorem is involved. That theorem says that you can expand as an infinite power series, with derivatives of all orders. There is none of that in FD.

FEM etc are just the application of conservation laws to the elements. They do require some estimate of values over the space, which is usually done by interpolation. But differentiability is not a big part of it.

https://en.wikipedia.org/wiki/Finite_difference_method

“Derivation from Taylor’s polynomial

First, assuming the function whose derivatives are to be approximated is properly-behaved, by Taylor’s theorem, we can create a Taylor Series expansion”
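The derivation the quote refers to can be checked numerically (a minimal sketch; the smoothness assumption enters exactly where Taylor's theorem is invoked): truncating the expansion gives the forward difference with O(h) error and the central difference with O(h²) error.

```python
import math

# Finite-difference approximations of f'(x), each obtained by
# truncating the Taylor expansion of f about x.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h            # error ~ (h/2) f''(x)

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # error ~ (h^2/6) f'''(x)

# Check against the exact derivative of sin at x = 1, i.e. cos(1).
x, h = 1.0, 1e-4
exact = math.cos(x)
err_fwd = abs(forward_diff(math.sin, x, h) - exact)
err_cen = abs(central_diff(math.sin, x, h) - exact)
print(f"forward error: {err_fwd:.2e}, central error: {err_cen:.2e}")
```

Both estimates are only as good as the Taylor expansion itself, which is precisely the continuum/differentiability assumption being debated above.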

https://en.wikipedia.org/wiki/General_circulation_model

“The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude / longitude grid).”

I have called many times for an estimate of accuracy of climate models. I could not find it anywhere. Now I know why: modeler’s chastity prevails.

Can those optimistic about the future of modelling climate, like Tony Banton and Nick Stokes, please respond:

Are climate models good enough to guide public policy on attribution, remediation etc?

If they have not reached the stage of being good enough for policy, why are they being used?

By what date do you forecast that models will be good enough to guide policy?

Is there an expectation that progress with models could be faster if there is a breakthrough in understanding, or will it be more a matter of slogging on to fill in more and more detail?

What future criteria should be used to determine whether modelling funding should be either curtailed or increased?

Are there any “Go – no go” review procedures for modellers now?

What would it take to conclude that modelling has failed?

Ditto succeeded?

Are there already aspects of chaos theory that plausibly make it possible now to predict that the success of modelling is limited ?

You might see that I am wondering why so much faith is being placed in modelling when its performance is dismal. Is it because it is a last remaining technique to keep global warming hypotheses alive?

Geoff

“Are climate models good enough to guide public policy on attribution, remediation etc?”

Climate models give an estimate of what will happen following GHG emission. I think they are good, but in any case, they are the best we have. The alternative favored here of assuming the consequence is nil is also an estimate, and one made with no basis. Models can help with attribution, showing the effect expected. I’m not sure what aspect of remediation you have in mind.

“What would it take to conclude that modelling has failed?”

There are appropriate statistical tests that could tell you that a prediction is so far out that it could not be consistent with observations. Critics rarely apply such tests, and misapply others. I’m not aware of any such tests that failed. But in any case testing has to be done carefully. You can’t scan many different results, and then say that because one failed at 95%, say, then models fail.

“Are there already aspects of chaos theory that plausibly make it possible now to predict that the success of modelling is limited ?”

That is relevant to the topic of this thread. But it’s a misunderstanding of the relevance of chaos theory, as is plentiful in the head post. Chaos implies that small changes in initial value make big changes to outcome. The inutility of initial conditions has long been recognised in climate modelling. In fact, that is where GCMs come from. They are weather forecasting programs used beyond their forecast range. Deliberately, GCM modellers start many decades before the period of interest, to make sure that the influence of initial conditions, both good and bad, has faded. This is common in CFD too. In chaos terms, you run the model to find out about the attractor, statistically. And then to find how the attractor changes with forcing.
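The weather/climate distinction being described here can be illustrated with a toy chaotic system (a sketch using the Lorenz-63 equations and a simple Euler integration, not any actual GCM): a tiny perturbation of the initial state produces a completely different trajectory, yet the time-averaged statistics over the attractor barely move.

```python
import numpy as np

# Lorenz-63, the classic chaotic system, stepped with forward Euler.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

def run(ic, n_steps=50_000, discard=5_000):
    """Integrate from ic, discard the spin-up, return the z history."""
    state = np.asarray(ic, dtype=float)
    zs = []
    for i in range(n_steps):
        state = lorenz_step(state)
        if i >= discard:
            zs.append(state[2])
    return np.array(zs)

z1 = run([1.0, 1.0, 1.0])
z2 = run([1.0, 1.0, 1.0 + 1e-6])  # tiny change in the initial state

# Instantaneous states diverge completely ("weather"), but the
# time-mean over the attractor is nearly unchanged ("climate").
print(f"max pointwise difference: {np.max(np.abs(z1 - z2)):.1f}")
print(f"mean z: {z1.mean():.2f} vs {z2.mean():.2f}")
```

The discard of the early steps plays the role of the decades-long spin-up described above: it lets the influence of the initial condition fade before statistics are collected.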

Nick – who found about an attractor, and what?

BTW, you acknowledge that models are extremely sensitive to initial conditions. I add that they are also extremely sensitive to a grid size. And to physics; I found that the latent heat of water vaporization they use is up to 3% off. What good does it do anybody to know an attractor of a flawed model? (It does plenty of good to modelers, who get paid for their “work”.)

“Nick – who found about an attractor, and what?”

The attractor is the collection of characteristics that we call climate. Diurnal, annual, AMO, ENSO etc. You can see some of them in the video I showed above. All the well-known currents (Gulf stream etc) that are also generated in the model.

“BTW, you acknowledge that models are extremely sensitive to initial conditions. I add that they are also extremely sensitive to a grid size.”

I should more carefully specify that by outcome I mean instantaneous state (~weather). Climate results are not sensitive to initial state, nor to grid. You can see this illustrated by looking at the gulf stream in the video. At any point in time it is a very squiggly line. Change the initial conditions, grid or anything, redo, and at that time you’ll see a different squiggle. But you’ll still see a Gulf stream, and it’s still carrying on average about the same amount of heat to the N Atlantic. That’s climate.

Well Nick, this here is your problem: AMO and ENSO are attractors; diurnal and annual are not.

Thank you Nick,

I suspect you are talking precision rather than accuracy when you mention statistical methods to estimate quality of model output.

I have more interest in non-cancelling bias.

If there is an effect due to chaos incorporated into a model, how does one express its bias? Are there already mathematical/statistical tests to allow confidence bounds to be put around such modelling? Is there an established procedure for showing how attractors that might cause swings in projections have known magnitudes of effects or ones that can be calculated? Are there known relations between biases, attractors and natural bounds observed with the main climate variables?

Finally, I have great difficulty accepting the averaging of model run outputs as have several others above. Do you consider it an acceptable procedure?

Geoff

Nick, you seem to think that there is only one chaotic attractor. No. Each system of equations has a different one. I am not an expert on chaos, but I expect that the shapes of attractors are as sensitive to small changes of equations – or changes of grid size – as “solutions” are to initial conditions. So running a flawed model of “climate” yields an attractor of that model, which has very little if anything at all to do with an attractor of the real climate.

Nick Stokes, “But you’ll still see a Gulf stream, and it’s still carrying on average about the same amount of heat to the N Atlantic. That’s climate.”

Right. If you use circa 1200 to 1800, the average could be roughly 10% lower than today: still the same squiggle, but significantly lower heat transport.

http://www.nature.com/nature/journal/v444/n7119/full/nature05277.html

Nick Stokes,

How can you estimate the damages or benefits of GHG emissions if there is no valid justification for the key inputs used, such as the damage function, which apparently is the case?

If there is valid justification for the damage functions used in the IAMs to estimate costs and benefits of GHG emissions, why does even the IPCC say the evidence is sparse to non-existent?

I suggest the fact is there is a lack of valid evidence to support the belief that GHG emissions are doing or will do more harm than good. I suggest the belief is based on supposition, innuendo, and dogma.

This will take some time, Mr. Mosher, so please send out the search teams and post my picture on the milk cartons. I will respond to your strange statements as they occur:

You: “1. No they [IPCC] are not calling for fundamental transformations.” Me: What, pray tell, are they calling for if not the transformation of our economic systems? Reduction in CO2 emissions without fundamental changes?

You: “2. We already tax energy production and pollution, we regulate it as well.” Me: And that relates to benign CO2 in what manner? Why not tax O2?

You: “3. they DO not rely on climate models to recommend policy. Period.” Me: Choke….gasp….puke. All IPCC documents base policy recommendations on climate models of the future, hey? Why else do they recommend reducing CO2 production?

You: “4. I read their documents. you did not, otherwise you would cite them” Me: I do. AR5, anyone?

You: “1. facts are facts. The models have run a few percentage points high

NOT and [an?] issue since NO ONE basis policy soley [sic] on climate models” Me: If policy relating to future climate states is not based on models, what is the basis? 1880 to 1910 cooling?

Oh, the hell with it. Life is too short to spend it responding to paid alarmist prattle. My time is more valuable than yours, especially when it involves Wandering in your Weeds, Mr. Mosher. How much do they pay you, anyway? I’ll work cheap if they need me. My qualifications are better, no doubt.

This thread has a lot of comments that really miss the point of the post.

1. Having 40 years of experience in CFD, my expert opinion is that Nick Stokes has too much confidence in CFD. That’s not surprising as the literature is very misleading and suffers from selection bias. What you see is the “best” results often after long studies in which parameters are adjusted. There are a number of recent papers talking about these issues and there seems to be at long last a realization that CFD has a problem.

2. I don’t understand why people seem to think that climate policy stands or falls on GCMs. The IPCC perhaps places far too much confidence in their output, but there are other models that can give us a good estimate of what might happen. There are simple energy balance models for example that can be carefully calibrated to match historical data.

3. Tomas’ basic point I fear is correct. The huge investment in GCM’s and more importantly in running GCM’s for all sorts of studies merely diverts resources from the more important fundamental research we need and also from projects to gather better data. A lot of people, including a lot of climate scientists, like running GCM’s. It’s relatively easy and can produce a lot of data to analyze and a lot of papers to publish. I would argue that it is giving us little in the way of real progress or understanding.

4. I also don’t understand the unusual lengths to which climate activists go to defend GCM’s. This effort has generated more wasted bits than virtually any other climate “concerned” talking point. If the policy arguments are not based on GCM’s, why do you damage your credibility by defending something that is well known by experts to be highly speculative?

All this shows me how politicized the science here has become and how rare it is for voices of reason to break through the nonsense.

David,

1. I leave your opinions of Nick well alone.

2. Nobody I am aware of says that policy stands or falls on GCMs. The broad swathe of understanding of climate would be much as it is if they didn’t exist, with perhaps more uncertainty. Specifically, estimates of TCR and ECS would remain pretty much where they are today.

3. You may be right; I see no evidence in the literature that you have sufficient mastery of the area to judge, however. Perhaps if you could point to a review paper laying this out?

4. I disagree. I see no “unusual lengths” on this from climate activists on GCMs. Again, perhaps you could point to an example? As to wasted bits, my vote would go to paleoclimate reconstruction and temperature homogenisation.

You assert much and cite little. You claim great expertise, and that your expertise gives you special insight into the uselessness of GCMs. Surely, if you have this expertise and if it is so easy to demonstrate their uselessness, then you would publish these findings in the literature. That you have not leaves me sceptical as to the veracity of your assertions. All this shows me that you are unable to back up your claims with references to the scientific literature, and that you are thereby helping the politicization of the science so well demonstrated by Climate Etc.

Oh, and this seems to sum it up quite well

Excellent comments David Young, whether you have published on this or not.

Very tall guy, you and I have interacted before here and at Ken Rice’s. If you send me your email address I can send you 10 or so references dealing with points 1 and 3. Nick Stokes has had no trouble verifying credentials. :-)

Post them here, by all means

“There are other models that can give us a good estimate of what might happen. There are simple energy balance models for example that can be carefully calibrated to match historical data.”

A link, please. I wonder how you run a simple energy balance model without making a lot of assumptions about clouds, for example.

If you could get one reference from David P. Young about his third point, that’d be great. Here is the point, btw:

> The huge investment in GCM’s and more importantly in running GCM’s for all sorts of studies merely diverts resources from the more important […]

vtg,

David Young and I have both spent many years researching and publishing in fluid dynamics. For my part, I am very aware of the difficulties and imperfections in CFD which I have seen overcome over those years. My contention here is simply to point out that CFD works:

“The application of CFD today has revolutionized the process of aerodynamic design, and CFD has joined the wind tunnel and flight test as a critical tool of the trade.”

I say that to refute posts like this that crop up from time to time, saying that GCMs can’t work because:

1. No-one has proved N-S existence and uniqueness etc

2. CFD can’t work on scales of more than a few metres. Well, that is a new one I hadn’t seen before, at least not so explicitly. But it’s a variant of saying that the atmosphere is too big and complex

3. Practitioners are ignoring chaos or some such

All those, if they had merit, would apply equally to CFD. But again, CFD works. There are real difficulties in applying CFD in GCMs, which seem to attract far less interest:

1. You can’t assume incompressibility, and certainly not constant density. Dynamic variations in pressure/density are important. That means the solution admits sound waves, and these have to be resolved. That limits spatial resolution, because it is linked to time.

2. Grid elements have to have an extreme aspect ratio – 100+ kms horiz, maybe 100 m vertical. That creates some solvable numerical difficulties, but combines with the need to resolve sound to create issue 3:

3. You have to reduce the vertical momentum equation to hydrostatic pressure. The time step to resolve vertical sound would be seconds or less, so the term that would allow it is eliminated. That mostly corresponds to physical reality – vertical inertial and viscous forces are mostly small. But there are important updraft situations like thunderstorms, and these need extra modelling.

4. As always, boundary complexities, especially ocean/air interface

5. All sorts of subsidiary modelling issues – clouds etc. Here I’m focusing on the basic numerical hurdle: the pressure/velocity calculation.
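The resolution/time-step link in point 1, and the motivation for the hydrostatic reduction in point 3, amount to a CFL-type constraint for acoustic waves. A back-of-envelope sketch (the ~340 m/s sound speed and the grid spacings are the approximate figures from the comment):

```python
# An explicit scheme must not let a sound wave cross a grid cell in
# one time step: dt <= dx / c (a CFL-type condition).
C_SOUND = 340.0  # m/s, approximate speed of sound in air

def max_dt(grid_spacing_m, c=C_SOUND):
    return grid_spacing_m / c

# Horizontal spacing ~100 km: acoustics allow a step of minutes.
print(f"horizontal (100 km): dt <= {max_dt(100e3):.0f} s")

# Vertical spacing ~100 m: the step would have to be well under a
# second, which is why vertical acoustics are removed via the
# hydrostatic approximation rather than resolved.
print(f"vertical (100 m): dt <= {max_dt(100.0):.2f} s")
```

The three-orders-of-magnitude gap between the two spacings is exactly the extreme aspect ratio of point 2.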

Most problems can be solved if you don’t panic.

The policy makers panicked before the problems could be solved.

Nick, the fact remains that the motion of the atmosphere is not predictable beyond about 7 days and probably never will be.

It’s not just a given state – the frequencies of wave patterns over a year/decade/century/millennium are not predictable either.

Now, perhaps some aspects are predictable.

As I noted, the arrangement of wave pattern probably doesn’t change the effect of 2xCO2 RF. So global average temperature might be predictable ( with some error bars ).

But the significant aspects of climate ( storms, droughts, floods, winds, heatwaves, etc. ) are all dependent on the motion of the atmosphere which is not predictable beyond a very brief horizon.

“Nick, the fact remains that the motion of the atmosphere is not predictable beyond about 7 days and probably never will be.”

OMG

Ya know we could not use CFD or wind tunnel or water tunnel or even flight test to determine which way an aircraft would nose slice when it departed at high AOA.

but…

we could predict that it would depart

and we could predict that it would nose slice.

right or left?

50/50

beyond 7 days.. will the motion stop?

point being… not what you cant know ( where do unicorns hide)

but rather, what can you know.. and how well.

Thats determined by test.

not by blog comment

Steven Mosher,

You wrote –

“Thats determined by test.”

Exactly. Testing of GCMs shows they are completely useless. Pretty, expensive, but useless.

Someone famous might have said the definition of insanity is doing the same thing over and over, and expecting different results. As in, keep running simulations – maybe they’ll produce useable results one day!

Rattling on about aircraft, replete with aircrafty jargon like “nose slice”, or “high AOA” is about as relevant as someone else’s preoccupation with overcoats.

Getting back to your need for a test, what is your test to establish the planet heating properties of CO2 in the atmosphere? How would you account for the vastly increased amounts of heat generated by a vastly increased population using vastly more energy since 1910, for example?

What would be the falsifiable hypothesis leading you to devise such a test?

Oh, there isn’t one? Cargo Cult Scientism don’t need no stinkin’ reproducible experiments?

I thought so.

Cheers.

Nick is quite knowledgable about CFD. He and I disagree on its utility, which is really at the moment limited to near cruise flows which are attached and only mildly transonic. Separated flows or flows with large 3D vortical structures are in fact poorly predicted. The atmosphere of course has 3D vortical structures as a prominent feature.

A second important point is that CFD prior to about 2014 had a very biased literature. Things are changing and I can claim some credit for it. People might start with Slotnick et al in Phil Trans of the Royal Society A in I think 2015 or possibly 2016. This paper actually acknowledges the strong limitations of current CFD. We have a paper in AIAA Journal I think summer of 2015 that also lays out honestly the issues. And another one is in press that is even better. Anyone who wants to read it can email me.

I also have some contacts with Spalart, who has developed perhaps the most widely used turbulence model at least for aerodynamics. He told me just yesterday that he is beginning to believe that turbulence modeling is basically hopeless at some fundamental level. We continue to get better incrementally, but those who rely on the CFD literature are probably going to get an inaccurate picture. Spalart has a new paper on LES showing some rather disturbing problems.

I actually have a detailed post on this that runs to 10 pages and 25 references. Perhaps I can ask Judith to post the reference list. The case here is becoming clearer, as is the case that science is rather flawed at the moment. I have about 15 references on that too.

David, could you please link to your detailed post? I am not a Facebook user and I don’t intend to become one. Thanks.

I emailed Judith the reference list. Most of the main points are contained in those. Once she puts it up, I can point to each specific point.

“Exactly. Testing of GCMs show they are completely useless. ”

On the contrary. They are pretty damn good.

Imagine this.

taking only forcings from GHGs (CO2, CH4 etc) and solar forcing

predict the temperature field.

All GCMs get the answer correct to within 10%. To wit, the average temperature of the earth is around 15C and they predict within 10% of that.

Some areas they do better, some worse. But really fricking awesome accuracy given how “complex” and “chaotic” the climate is.

Who would have thunk it?

Not the flim flamming flynn

And then imagine you take the same equations, a few surface temperatures, then calculate a fake temperature field, and they match within 10%.

Brilliant!

Nick, The paper you quote is quite old. It refers to the design process and is about predicting small changes to very well behaved flows. The paper was before we even started considering separated flows. Forrester in particular is well aware of the issues and knows all about the problems of turbulence modeling and separation. His team really “discovered” the multiple solution phenomenon, which I had predicted a decade earlier. They tried to rationalize it away for a long time and trotted out a variety of witches to burn. Finally, the truth set in and Forrester’s fundamental honesty required him to acknowledge the truth. Our recent work is far more relevant to the question of applying CFD to more challenging flows.

Steven Mosher,

Billions of dollars to predict future temperatures will be about the same as now +/- 10%?

A twelve year old child (or you or I) could do just as well. In Kelvins, present temperature of around 288K, 10% is +/- 28 K or so. Say 259 – 317 K. Maybe you prefer Celsius – assuming today’s “temperature” – 15C, +/- 1.5 C. 13.5 to 16.5 C – how silly!

But what’s the point? You claim this is “useful”. You can’t even tell us what today’s temperature is – or why it is supposedly “useful”.

No use at all. It is what it is.

Awesome accuracy? For climatological Cargo Cultists, sure. For scientists, completely useless. When, where, how much? What are the quantified benefits? The costs?

You just make this stuff up as you go along, I’m sure. 10% accuracy? I’m reasonably sure that you wouldn’t fly with an airline that guaranteed it would get you to your destination with an accuracy of 10%. Or that the aircraft only had a 10% chance of falling out of the sky on each flight.

Good enough for climatologists, I suppose.

Cheers.

Nick,

thanks for the informative reply.

David,

to a non-CFD expert, which I’m more than happy to confess to being, you seem to be arguing:

1. CFD has problems

2. GCMs rely on CFD

3. We should therefore disregard GCM results

1) is surely true, as it is generally true of all science

2) is obvious

but

3) Depends on you being able to demonstrate that the problems you’ve identified with CFD materially affect the results of GCMs, for the problems they are used to investigate, beyond the error range currently ascribed to them.

Until you’ve done that in the scientific literature, no-one will pay very much attention to your complaints, and pointing to literature saying there are problems with CFD isn’t anywhere near sufficient.

I’m not dismissing what you say at all, I’m merely sceptical. Given your avowed expertise, and your apparent certainty of how obvious the issues are, I genuinely don’t see what the problem with publishing your thesis is, rather than complaining on blogs.

David,

“Nick, The paper you quote is quite old.”

It describes how Boeing used CFD for 30 years (to 2003) to design aircraft. The aircraft seem to be flying quite well. That information hasn’t become incorrect. You say there may be problems with separated and transonic flow. These are not big issues in the atmosphere.

VTG

You said;

‘Depends on you being able to demonstrate that the problems you’ve identified with CFD materially affect the results of GCMs to the problems they are used to investigate beyond the error range currently ascribed to them.’

Seems to me there is a need for an up to date article on the subject which succinctly summarises current thinking and David’s and Nick’s take on it.

Ideal for this venue as it would encourage immediate back and forth.

tonyb

What I’d quite like to see is those who appear to promote the view that there are serious problems with GCMs (for whatever reason) illustrating that they actually understand the system that they are suggesting cannot be modelled by GCMs. Yes, it is a complex system, but aspects can still be understood in pretty basic ways. One example being that it will always tend towards a state of energy balance. Another, for example, is that global circulation will be influenced by temperature/pressure gradients and the Coriolis effect. The reason I say this is because sometimes confidence in a model can develop because the model is able to reproduce aspects of the system that we expect from an understanding of the basics. Hence, criticism of a model should at least include some understanding of how one might expect such a system to evolve.

climatereason

Seems to me there is a need for David to publish his thesis in the literature. Everything else follows from that.

attp, “The reason I say this is because sometimes confidence in a model can develop because the model is able to reproduce aspect of the system that we expect from an understanding of the basics. Hence, criticism of a model should at least include some understanding of how one might expect such a system to evolve.”

Let’s see, I would expect the system to have a distinct hemispheric imbalance, a single ITCZ that would shift with the amount of imbalance and a gradual swing in the imbalance similar to the hemispheric SeeSaw noted in paleo. In fact most of the larger swings in, hemispheric, meridional and zonal temperature gradients noted in paleo should be evident on smaller scales.

However, if you expect stable conditions with very small changes in hemispheric imbalance, no shift in the ITCZ, a double ITCZ and no changes in gradients your evaluation would be different.

“Ya know we could not use CFD or wind tunnel or water tunnel or even flight test to determine which way and Aircraft would nose slice when it departed at high AOA.”

Aircraft models are not relevant to this discussion.

Predicting an aircraft, powered by thrust which dominates the other terms, is not the same as predicting the turbulent eddies, of all scales, which occur in the atmosphere.

Take your aircraft, put it on the tarmac in the daily winds, but don’t apply thrust, and then predict the turbulent eddies that occur around the wings and get back to us.

There are differences in the forces from the micro-scale to planetary wave scale, but none are predictable more than a week out.

–

…and Then There’s Physics | October 20, 2016 at 5:52 am |

What I’d quite like to see is those who appear to promote the view that there are serious problems with GCMs (for whatever reason) illustrating that they actually understand the system that they are suggesting cannot be modelled by GCMs.

–

Be my guest

–

– Steven Mosher | October 19, 2016 at 11:26 pm |

“Exactly. Testing of GCMs show they are completely useless. ”

On the contrary. they are pretty damn good.

taking only forcings from GHGS ( c02, ch4 etc) and Solar forcing

predict the temperature field.

All GCMs get the answer correct to with 10%. To wit the average temperature of the earth is around 15C and they predict with 10% of that.”

–

Could use a little science here but anyway,

Average temperature of the earth – meaning the air surface layer of course.

is around 15C -actually 288K

and they predict within 10% of that?

That is 28.8 C discrepancy either side so a whopping 57.6 C error range

is really fricking awesome.

Of course you might mean within 10% of 15C [1.5C either way], which is wrongly expressed mathematically, as you are only considering approx 1/20th of the true temperature of the earth, but would be really fricking awesome as it would be 1/200 accuracy with so many chaotic variables.

Then again you have left out 70% of the GHGs ( c02, ch4 etc)

Something called H20.

So your GCM models are getting the right answer with the wrong input,

That is called really fricking awesome fraud.

Or they could be really awesomely wrong if they included the H20 forcing.

But we forget the really fricking awesome main point: the GCMs model the climate of the globe, after all, from the data inputted into them. What is the first bit of modelling data they put in?

Oh, the average global temperature?

At the start of the model run?

Yes?

and then you check the model and all models give you back the answer correct to with 10%?

What else are they supposed to do and why do they get it wrong by up to 10% ?

Really fricking unbelievable.

–

Sorry about the language, feel free to remove the word really if it offends or any others.

Nick, You quote out of context a single sentence that is placed in context by more recent work. You should look at the more recent work. I know Forrester, and what you quote amounts to a misrepresentation of his views.

Turbulence and large scale vortical structures are important in the atmosphere and are two areas where CFD has problems.

Judith should post the reference list here soon.

angech,

I think it is more like 10% in Celsius, not 10% in Kelvin. In other words, most GCMs produce average temperatures between about 12C and 15C, or something like that.
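The two readings of “within 10%” differ by an order of magnitude, which one line of arithmetic per reading makes explicit (a sketch using the approximate figures quoted in the thread):

```python
# The meaning of "within 10%" depends entirely on the scale used.
t_c = 15.0           # approximate global-mean surface temperature, deg C
t_k = t_c + 273.15   # the same temperature in kelvin

band_c = 0.10 * t_c  # 10% on the Celsius scale: about +/- 1.5 C
band_k = 0.10 * t_k  # 10% on the Kelvin scale: about +/- 28.8 K

# Only the Kelvin scale has a physical zero, so a percentage is only
# physically meaningful there -- but then it is a very weak claim.
print(f"+/-{band_c:.1f} C vs +/-{band_k:.1f} K")
```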

Always worth reading IPCC. Here’s the temperature field as computed by GCM ensemble mean.

Aficionados will recognise this as being from Chapter 9 on the evaluation of climate models.

David Young will doubtless be wailing and gnashing his teeth on the discovery that CFD and Navier-Stokes score zero hits on a search of Ch 9.

Others may have an entertaining read on the no longer extant hiatus in box 9.2 and the prophetic

as we approach the end of the third successive global temperature record year.

ATTP, 10% in Celsius, not in Kelvin? No error at all around a freezing point? There is physics, you should know …

“You quote out of context a single sentence”

What I quoted was the opening sentence of sec 2, titled “The role and value of CFD”. Here is the full abstract:

“Over the last 30 years, Boeing has developed, manufactured, sold, and supported hundreds of billions of dollars worth of commercial airplanes. During this period, it has been absolutely essential that Boeing aerodynamicists have access to tools that accurately predict and confirm vehicle flight characteristics. Thirty years ago, these tools consisted almost entirely of analytic approximation methods, wind tunnel tests, and flight tests. With the development of increasingly powerful computers, numerical simulations of various approximations to the Navier–Stokes equations began supplementing these tools. Collectively, these numerical simulation methods became known as Computational Fluid Dynamics (CFD). This paper describes the chronology and issues related to the acquisition, development, and use of CFD at Boeing Commercial Airplanes in Seattle. In particular, it describes the evolution of CFD from a curiosity to a full partner with established tools in the design of cost-effective and high-performing commercial transports.”

The conclusion:

“During the last 30 years at Boeing Commercial Airplanes, Seattle, CFD has evolved into a highly valued tool for the design, analysis, and support of cost-effective and high-performing commercial transports. The application of CFD today has revolutionized the process of aerodynamic design, and CFD has joined the wind tunnel and flight test as a critical tool of the trade. This did not have to be the case; CFD could have easily remained a somewhat interesting tool with modest value in the hands of an expert as a means to assess problems arising from time to time. As the reader can gather from the previous sections, there are many reasons that this did not happen. The one we would like to emphasize in this Conclusion section is the fact that Boeing recognized the leverage in getting CFD into the hands of the project engineers and was willing to do all the things necessary to make it happen.”

My point isn’t that CFD is perfect, but that it is a major established engineering tool. That hasn’t changed in the last few years. And it is a complete counter to the nonsense that the Navier-Stokes equations can’t be used to get results on fluid flow because of some failure to find and relate to theoretical exact solutions.

“A twelve year old child (or you or I) could do just as well. In Kelvins, present temperature of around 288K, 10% is +/- 28 K or so. Say 260 – 306 K. Maybe you prefer Celsius – assuming today’s “temperature” – 15C, +/- 1.5 C. 13.5 to 16.5 C – how silly!”

Simple challenge Flynn.

Using only the watts from the sun, and watts from GHGs

Calculate the temperature for every 200km grid of the planet.

Simple. just one height above terrain. 1.5 meters.

You can’t get within 10%.

Steven, what assumptions if any do you make about clouds?

The key assumption of modelers seems to be that all the chaos averages out on long time scales and large space scales, that the boundary values (forcing) is over-all governing the system. While this seems sensible in a way, there is no proof of this. In fact, turbulence in the Earth system is known to produce ENSO and big el nino years like 1998 and 2015. The tropical upwelling zone can put more or less moisture into the atmosphere which could produce more or fewer clouds and thus alter forcing via moisture or clouds. Ocean and atmospheric circulation could put more or less snow in places and alter albedo, as well altering the polar temperatures thus affecting the overall heat balance. Ocean circulation could cause more or less heat to be stored in the deep ocean. So there are multiple plausible mechanisms by which the chaotic fluid motions could alter the overall heat balance of the earth, making the “it all averages out” hypothesis far from obviously true. THIS is why mosher’s eyeballing 3 deg warming is inadequate: the details actually DO matter for the overall picture. This is why Thomas’ very nice presentation matters.

“The key assumption of modelers seems to be that all the chaos averages out on long time scales and large space scales, that the boundary values (forcing) is over-all governing the system. While this seems sensible in a way, there is no proof of this. In fact, turbulence in the Earth system is known to produce ENSO and big el nino years like 1998 and 2015”

No proof?

1. Science doesn’t know anything about proof.

2. Absent any external forcing, the system cannot create energy ex nihilo

3. So yes, some ups and downs. And yes, absent any evidence to the contrary, it is assumed that natural pseudo oscillations, or unforced variation, will integrate to zero, since the climate isn’t god and can’t create something from nothing.

That said, you have fluid running through your body and you know turbulence and chaos happen, and who knows, you could spontaneously combust…

Steven Mosher,

You wrote –

“2. Absent any external forcing, the system cannot create energy ex nihilo”

It seems that climatological “forcing” is “energy” in ordinary scientific language. CO2 is not “energy” nor a source of “energy”, in the sense that “forcing” is apparently used.

No wonder climatologists have to constantly invent and redefine terms.

Still no GHE. CO2 heats nothing. CO2 provides no external energy to anything.

Climatology is a sham. A pointless, useless waste of time, effort, and money. Obviously, climatologists disagree. They would, wouldn’t they? Oh, and their sycophantic hangers on, as well.

Cheers.

Don’t you realize how stupid this statement is?

Nobody’s talking about creating/destroying energy except people using false analogies. And anybody who understands the system knows it. It does nothing but confuse people who don’t understand.

“Proof” — I am talking about mathematics, Mosh. Proof of ergodicity of the system of equations. You assume the oscillations will integrate to zero but that means you do not understand radiative heat loss very well. If turbulence causes more clouds, that will dampen the effect of GHG. Spencer has been arguing this point for decades. Negative feedback due to convection and clouds is intimately tied up with turbulence. The Earth is not a solid body, but a fluid in terms of heat dissipation.

Craig,

But if you’re suggesting that internal perturbations could easily lead to long-term warming/cooling (through clouds, for example) then you’re arguing for a scenario in which it is possible that the non-Planck feedbacks can have a similar magnitude to the Planck response. If so, the system would only just be stable. The reason we think it is long-term stable is that the non-Planck feedbacks plus the Planck response will have the opposite sign to that of any change in forcing, hence they can act to bring the system back to a quasi-equilibrium without inducing any kind of runaway.
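The stability bookkeeping above can be put in a few lines of arithmetic (a minimal sketch: the 3.2 W/m²/K Planck response and the 3.7 W/m² nominal 2xCO2 forcing are standard illustrative values, assumptions of this sketch rather than anything stated in the comment):

```python
# Linear feedback sketch: equilibrium warming dT = F / (lambda_P - f),
# where lambda_P is the Planck restoring response and f the summed
# non-Planck feedbacks. All numbers are illustrative textbook values.

def equilibrium_warming(forcing, lambda_planck=3.2, feedbacks=0.0):
    """Return dT in K for a forcing in W/m^2; feedbacks in W/m^2/K."""
    lambda_net = lambda_planck - feedbacks
    if lambda_net <= 0:
        # Feedbacks as strong as the Planck response: no stable equilibrium.
        raise ValueError("runaway: no net restoring response")
    return forcing / lambda_net

# Planck response alone: about 1.2 K for a nominal 3.7 W/m^2 forcing.
print(round(equilibrium_warming(3.7), 2))                 # 1.16
# Feedbacks close to the Planck response leave the system only just stable.
print(round(equilibrium_warming(3.7, feedbacks=2.0), 2))  # 3.08
```

The point of the sketch is the one made above: the closer the summed feedbacks get to the Planck response, the larger and less stable the response becomes.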

“Forcing” is myth. On a global scale especially.

> The key assumption of modelers seems to be that all the chaos averages out on long time scales and large space scales

Well, albedo remains very poorly measured, so these things are possible.

But I imagine the variation being small for the global scale, because in general, the significance of chaotic fluid flow appears to be troughs in one place, ridges in another. So, for the global scale, it appears to me that the net radiance of earth would not appear to be as tied to dynamic fluctuation.

So global average temperature may well be predictable. But global average temperature is not particularly significant.

For the continental scale, things are quite different. The Dust Bowl ( 1930-1939 ) was significant for most of North America, and by many measures, the most significant decadal climate occurrence for the United States. The Dust Bowl was natural, more significant than global warming, and not predictable ( nor is any future occurrence ).

There may be other aspects of climate that are predictable with 2xCO2, but continental scale events such as the Dust Bowl are not predictable, so pronouncements about droughts, floods, and heatwaves should be disregarded on the basis of the physics.

An example for Nick of where gridding and the surface representation matter: the jet stream is governed by the North American Rocky Mountains. At the south pole it is pretty much a circle, but the Rockies create a bump that propagates in a rather chaotic way all around the N. hemisphere. This matters a lot to snowfall and regional climates.

Another thing for those who defend the GCMs: yes they are good. But how good? If it will warm 1 degree this is not the same as warming 5 deg in terms of risk/damage.

Craig ==> How good? We don’t need models to tell us that if GHG concentrations rise, then there will be some warming (well, at least increased energy in the Earth system). How much? is what they are intended to be able to tell us, but they cannot do so reliably.

The very nature of the climate system — its non-linearity — prevents a clear picture. The climate is bounded, so no snow-ball and no fire-ball. I believe that the real climate has smaller — narrower — boundaries than the models — and am working on an essay to illustrate why I think so.

In reality, they tell us there will be warming because that factor is hard-coded in. Look at my post on the CESM-LE, Lorenz Validated. If you dig in, you find that if they mapped just the temperature of the mean of the ensemble, it would be going up. The same with all the model ensembles — the warming must appear, or the model gets adjusted until it does (run over known data…). So, model-wise, the amount of warming depends almost entirely on the hard-coded elements.

> they tell us there will be warming because that factor is hard-coded in

Then it should be possible to find the hard code.

Best of luck.

This is the modern version of the hard code around physics that underperformed water vapor generation based on their theory. So, they “conserve” water vapor mass, moving it away from the water surface and the general boundary air layer to be added into the global water vapor air mass. This causes CO2 feedback multiplication; in fact it causes the models to run hot.

Doh!

http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000

willard ==> This certainly is no secret — there is a factor that forces the expected climate sensitivity — adjustments are made to tune the models to produce/reproduce the expected temperature rise for the GHG increase over 1970-2000 or whatever — it is not wrong to do so, but it skews the results.

> it is not wrong to do so, but it skews the results.

Indeed, Kip, and when adjustments are made to your model of a bridge so that it does not crash, it also skews the results toward not crashing.

Must be a vocabulary thing.

> The very nature of the climate system — its non-linearity — prevents a clear picture. The climate is bounded, so no snow-ball and no fire-ball. I believe that the real climate has smaller — narrower — boundaries than the models — and am working on an essay to illustrate why I think so.

The difference between snowball and fireball seems a bit broader than Teh Modulz Ensemblez range, Kip. Narrowing that range even further might tend to invalidate Lorenz.

> In reality, they tell us there will be warming because that factor is hard-coded in.

You know what else is odd? Here’s the “hard-coded” warming for a doubling of CO2 from GISS ModelE:

And here’s the “hard-coded” warming from a 2% increase in solar output from the same model:

Those crazy climatologists. Only they would be foolish enough to “hard-code” in a warming response to increased solar output!

You don’t need a GCM to tell you how much it will warm.

Paper and pencil is enough.

Paper and pencil will give an answer. You have no way of knowing whether that answer is correct. Same with GCMs. Except the latter have been trained to give the same answer as the paper and pencil.

GCMs are just big, expensive VR games.

Steven, please publish your calculation by all means.

Now here is the HILARIOUS THING

this is TOO Funny

Dr. Craig asks us… how good are GCMs?

Really

What did Dr. Craig Publish

https://judithcurry.com/2011/07/25/loehle-and-scafetta-on-climate-change-attribution/

Why of course he created a MODEL..

So clearly Craig thinks modelling can answer questions. He created a model to explain how much warming was human versus natural. And he must believe in what he published. That is, that a model can separate these two.

And we can even use his model to predict the future. Now…

Did his model consider the rocky mountains?

Why did he think that was unimportant for HIS MODEL but important for GCMS?

hmm

Mosher: it is not “Dr. Craig” it is Dr. Loehle.

I am not the one claiming the science is settled, nor that pencil and paper are adequate for deciding to double the cost of energy with carbon taxes. Nor am I the one saying you can’t build houses on the beach anymore due to sea level rise. In my research I have been exploring what can models and data tell us, as one should. I never said all models are bad. I am a modeler. I am asserting that one must take care as to what a model can and cannot tell you.

My point about the rocky mountains was for Nick who makes claims about CFD.

Dr. Craig

“Mosher: it is not “Dr. Craig” it is Dr. Loehle.”

It’s not Mosher, it’s Steven or Mr. Mosher, Dr. Craig.

“I am not the one claiming the science is settled, nor that pencil and paper are adequate for deciding to double the cost of energy with carbon taxes.”

1. Did I claim the science was settled?

2. You need ZERO calculations to decide that you want a carbon tax.

3. Who said double the cost of energy?

“Nor am I the one saying you can’t build houses on the beach anymore due to sea level rise. ”

1. Who said that?

2. My position is that we should NOT SUBSIDIZE people to build in areas that have a higher risk. It’s called insurance reform, Dr. Ding Dong.

“In my research I have been exploring what can models and data tell us, as one should. I never said all models are bad. I am a modeler. I am asserting that one must take care as to what a model can and cannot tell you.

My point about the rocky mountains was for Nick who makes claims about CFD”

That’s TOO FUNNY. Look, the Rocky Mountains played ZERO role in your model, and yet you believe your model. Now, you have people who take more detail into consideration and you complain that the DETAILS your model IGNORES are now somehow important…

There’s a word for that kind of thinking.

Mr. Mosher, somewhere in your Weed Wandering you apparently decided that one needed no reason to tax something so benign as CO2. Evidence: your “You need ZERO calculations to decide that you want a carbon tax.”

Why in the world would a rational person agree to give his money to politicians and bureaucrats for “ZERO” reason? Faith in their goodness and intelligence?

No U.S. carbon tax will have any measurable impact on global CO2 emissions. So your idea is: Let’s just take wealth from the producers and give it to the takers for no identifiable reason.

ZERO critical thinking, Mr. Mosher.

Charlie Skeptic

I must be a glutton for punishment.

Steven Mosher: If you build a model based on fluid dynamics, it does matter whether you did it right or not, and can affect the global energy balance, as I noted. If you compute multi-model ensemble means, it matters at least to some of us whether such a thing is meaningful.

I never said all models need to include all detail. My work with Scafetta and since was phenomenological. GCMs are not–they claim to be mechanistic.

You, and you alone, seem to know what policies are needed based on pencil and paper. IPCC does not take that approach–they base it on the models.

Who is saying the price of energy should double? Maybe not you, but plenty of greens and alarmists indeed are, and in many places the cost of electricity is twice what we pay in US, largely due to such green policies. Governments in many places are indeed saying you can’t build or even improve your house on the beach. Yes I agree don’t subsidize risk, but I am not talking about insurance subsidies but zoning laws (California for example, places in Australia, others).

Oh, and sneering at people doesn’t win you any arguments.

Mr. Mosher does not post here to win any arguments. He posts here to earn his paycheck.

His boss really should read some of his stuff. As I said; I’ll work cheaper.

Tomas Milanovic ==> Great Post, on a real blood-and-guts maths level.

Nick Stokes,

First 3 iterations of the differential logistic equation where r = 4 –

First column – randomly chosen value. Second column – varies initial value by 0.000000000000001. Third column – difference – subtract second from first.

1 0.671975000000000 0.671975000000001 -0.000000000000001

2 0.881698397500000 0.881698397499999 0.000000000000001

3 0.417225333383728 0.417225333383732 -0.000000000000004

Iterations 98 – 100 –

98 0.484871964605728 0.978755453191839 -0.493883488586111

99 0.999084570180439 0.083172864156307 0.915911706024132

100 0.003658367231228 0.305020555297376 -0.301362188066148

Iterations 998 – 1000

998 0.242936421129807 0.347187246118365 -0.104250824988558

999 0.735673265673792 0.906593049004443 -0.170919783330651

1000 0.777832447386601 0.338728370005082 0.439104077381519

Couple of points.

Simple equation generating chaos.

A small variation does not result in exponential difference. The results are bounded. However, a small variation in initial conditions results in completely different outputs after a relatively small number of iterations.

A bigger problem is that averaging outputs tells you nothing about the next value. Nor does anything else.

iterations 99 and 100 illustrate this fairly graphically.

Now you can see why it is impossible to predict the output of even a simple chaotic equation. The IPCC accepts the chaotic nature of the atmosphere. Many GHE adherents seem to think that everything averages out, reverts to the mean, or that chaos is really just randomness, and can be dismissed out of hand as being irrelevant.
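Mike’s table is easy to reproduce (a minimal sketch; the start value 0.671975 and the 1e-15 perturbation are the ones in the table above, though the late-iteration values depend on floating-point details):

```python
# Two logistic-map runs, x -> 4x(1-x), with start values 1e-15 apart.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.671975, 0.671975 + 1e-15
for n in range(1, 101):
    a, b = logistic(a), logistic(b)
    if n <= 3 or n >= 98:
        # Early iterations agree to ~15 digits; by n ~ 50 they are unrelated.
        print(n, a, b, a - b)
```

The outputs stay bounded in [0, 1] throughout, exactly as described: the divergence is in the trajectories, not in the range.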

If you like, tell me I’m out of my depth, and show that predicting the next value in a chaotic system is actually easy. You might care to let the UK Met Office, NASA, NOAA, the IPCC, and all the rest, know how to make accurate climatic predictions. Billions have been spent already, so I’m sure that your solutions would be worth hundreds of millions!

I believe you are living in some sort of delusional alternative reality, but I’m always willing to change my opinions on the basis of new facts.

By the way, there are an infinite number of inputs which iterate to zero. There are an infinite number which iterate to infinity. There are infinite inputs which result in cycles, steady states and so on.

One simple equation. One variable. Just one commonplace ODE.

How hard can it be to predict the output?

Cheers.

Mike

“Just one commonplace ODE.”

It is not an ODE at all. It is the non-linear recurrence relation

yₙ₊₁=4*(yₙ-yₙ²)

And it is not any reasonable approximation to a timestepping solution of an ODE.

Nick Stokes,

If you say so.

“This equation is an Ordinary Differential Equation (ODE) because it is an equation which involves ordinary derivatives.”

or

“Let’s solve the following first-order ordinary differential equation (ODE). This equation is commonly referred to as the Logistic equation, . . .”

I know. You’re right, everyone else is wrong.

Typical GHE acolyte response. Ignore anything inconvenient, and try the worn out tactics of deny, divert, and confuse.

If that’s the best you can do, maybe you could consider concentrating your talent elsewhere. For all I know, you are a better mathematician than Gavin Schmidt. He obviously needs a bit of help. Have you offered your services?

The fact remains that the GHE does not exist. You can no more predict the weather or climate than any 12 year old child with 30 minutes tutelage from myself.

As that eminent PhD Gavin Schmidt challenged me – “Wanna bet?”

He turned to water when I accepted his challenge. I can assure you that I am made of sterner stuff. Take up my challenge, and I won’t run away like the aforementioned PhD ( who is no doubt a legend in his own lunchbox, if nowhere else).

Billions of climatological dollars against my assumptions? Wanna bet?

Cheers.

“If you say so”

Simply resolved. If you think it is an ODE, write it down so we can see.

Nick Stokes,

Typical GHE distraction.

I provide output from a simple equation which provides chaotic output, to support my contention that predicting quantifiable output from a completely deterministic “system”, with total knowledge of the algorithm and input is impossible.

I point out that the smallest change in input that I can easily manage produces results which unpredictably converge or diverge from the initial results, chaotically.

This makes a mockery of some of your previous dismissive assertions.

You then flatly state that I have not used an ordinary differential equation, and go on to tell me precisely what it is. I point out that other authorities support my description.

Now you have the gall to demand that I provide you an equation that you previously claimed to know in sufficient detail to tell me just how wrong I was. Typical.

Maybe next time you might check your facts (or indeed, mine), before dismissively telling me how wrong I am.

It might not appear so, but my care factor is zero. Whether you demand that I call the simplest chaotic equation I can think of a “non linear recurrence relation”, or a pumpkin, makes no difference to the existence of chaos.

If you think you can make chaos irrelevant by ignoring it, I wish you luck.

To anyone else, a search for the following quote should lead to various explanations of chaos and descriptions of the logistic equation –

“Edward Lorenz, the father of chaos theory, described chaos as ‘when the present determines the future, but the approximate present does not approximately determine the future.’”

If you can’t be bothered, I generated the chaotic output that I posted above, using the following –

ƒ(x) = rx(1 − x). I don’t know how to do subscripts or superscripts here, so apologies if my expression is wrong, or misleading. The programming is trivial.

Nick, I assume you used the royal “we” when you made your statement. You and God? I feel so flattered being asked to let you and God decide whether chaos exists. I’m pretty sure a God already knows, so if you listen hard, God might give you a hint or two – one is called turbulence.

Cheers.

Heisenberg and Schrodinger were driving down the highway…

S, did you bring the cat?

Sure, H, he’s in the trunk.

The trunk? You think he’s OK in there, S?

Sure, H… well, at least I think so. Maybe you should stop the car so we can check?

If I stop how can we even know where he is, S?

They continue driving…

F’ing Yogi Berra WAS a genius, wasn’t he, S?

Yes, he was H… don’t forget to take that fork just ahead.

“Indeed, the results of both Truscott and Aspect’s experiments shows that a particle’s wave or particle nature is most likely undefined until a measurement is made. The other less likely option would be that of backward causation – that the particle somehow has information from the future – but this involves sending a message faster than light, which is forbidden by the rules of relativity.”

Obviously, the GCMs model the future precisely, until a non-believer observes the results. The GCM then determines to provide an incorrect answer, having realised it is being observed.

If the output is never examined, it remains correct.

Quantum climatology in a nutshell – or just plain nutty.

Cheers.

Thanks Mike for a good laugh. It made my day. Hypothesis: Quantum tunneling and climate change is only observed by believers.

God is not just A relative, he wants One day, be your daddy…

Chaotic dynamics involves more than sensitivity to starting values. This seems relevant to climate ensemble averaging. Wikipedia is okish on this. They give the two principal defining properties for chaotic dynamics, observe that sensitivity to initial values follows from these two properties and provide book references. The key defining property is probably topological transitivity (Wikipedia: mixing??). For practical purposes, this means that a chaotic process over a region of space has trajectories which must ‘wander’ arbitrarily closely to each point in that region. Basically, unless the starting values are known precisely, a trajectory of this system can be anywhere within the region after a while. If IPCC really think climate is chaotic, then what is accomplished by averaging a handful of such (GCM) trajectories needs serious explanation. Apart from being in the region concerned, they offer nothing as to where the actual climate trajectory might be.
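The “wander arbitrarily closely to each point” property is easy to see numerically. A minimal sketch, using the logistic map as a stand-in for a chaotic system (my choice of toy example, since no climate attractor is available):

```python
# Topological mixing in miniature: 1000 starting points packed into a window
# of width ~1e-9 spread out to cover essentially all of [0, 1] under iteration.
def logistic(x):
    return 4.0 * x * (1.0 - x)

ensemble = [0.3 + i * 1e-12 for i in range(1000)]
for _ in range(60):
    ensemble = [logistic(x) for x in ensemble]

# The bundle now spans nearly the whole interval...
print(min(ensemble), max(ensemble))
# ...so the ensemble mean says little about where any one trajectory is.
print(sum(ensemble) / len(ensemble))
```

This is the point about GCM ensembles: once trajectories have mixed across the region, averaging a handful of them tells you they are somewhere in the region, and not much more.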

It really is a boundary problem!

On the last point, weather ensemble averaging is different. With luck, trajectories are sufficiently short that Taylor’s theorem still constrains the amount of ‘wandering’ (aka perturbative smooth solutions of Navier-Stokes may be proved). Averaging may help mitigate outliers.
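The contrast with short horizons can be sketched the same way: a 1e-15 perturbation takes a few dozen steps to become visible, which is roughly why short trajectories remain usable while long ones do not (again the logistic map as a stand-in, with an assumed 1e-3 error threshold):

```python
# How long do two nearly identical trajectories stay close?
def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-15
horizon = None
for n in range(1, 200):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 1e-3:
        horizon = n  # first step at which the error exceeds the threshold
        break
print(horizon)  # typically a few dozen steps (error roughly doubles per step)
```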

Worse than this (as if this isn’t bad enough), they use the model equations to validate the theory the equations come from.

While everyone can go out and measure how much it cools on a clear night, and with an IR thermometer measure the second body the surface cools to radiatively, and see that cooling is non-linear, and regulated by dew point temperatures, and the change in CO2 does nothing to how much it cools at night!

Now, this is because we have a primarily water based temperature regulating system around the planet that works around 2 phase transitions.

You can see how CO2 impacts cooling rates in deserts, and on a clear night when the temps are well away from dew point temperatures. And we all know about those warm muggy humid nights that don’t cool very much at all. Water vapor regulates cooling, and because it is temperature controlled, any excess energy from CO2 leaks out to space before the temps drop enough to slow cooling.

> and the change in CO2 does nothing to how much it cools at night!

I made this argument some decades ago (also with latent heat of dewfall as a consideration), but it is lacking.

Humid nights are more likely to include cloud cover, which closes the ‘window’ through which most radiative inversions cool. Further, net radiance does increase into the surface at the tails of the CO2 spectrum with increased CO2.

But most importantly, RF is a consideration at (take your pick) either the tropopause or the TOA. There’s a good reason for that, relevant to the original post, which is: the higher in the atmosphere one goes, the less significant motion becomes and the more important radiance becomes. (With respect to chaos, it means one can’t predict ‘climate’ but may be able to predict the increase in thermal energy of the earth as a whole.)

The IR thermometer measures the transparent window; there should be nothing there. It turns out there are 2 strong water lines, and NASA has a paper telling you how to calibrate it for total precipitable water vapor, but minus water, when you point it up under clear skies it is cold. If it’s low humidity it can be over 100F colder than the surface. That is the temperature that half of the BB spectrum at room temperature cools to, 100% of the time, to this surface. Yes there is a tail; it’s like a bucket with half a bottom.

But all of that doesn’t even matter, it’s regulated by dew point temperatures, and rel humidity. Because even early in the morning right before Sunrise, the sky is still 100F colder.

This is why my grass will be 5 or 10F colder than air temp in the morning if it was a clear night, and my patio brick is 5F warmer than air temp.

Ultimately, TOA, whether it’s the top or the bottom, is the only way for energy to leave. But it is never in balance; over most of the planet, tomorrow is either a longer or a shorter day all year, and the land masses are not distributed evenly. So their TOA balance calculations: what are the error bands on a year of trying to measure the entire planet 24×7? Give me a break.

I won’t comment on posts motivated by policies, behaviour strategies or other soft non-scientific disciplines.

It has been a pleasant surprise that this kind of post is very limited here, even if it would probably be a long shot to hope that it would disappear altogether.

.

> Nick, I must respectfully disagree with your assertion that “It isn’t particularly useful to ask if it converges to an exact N-S solution.” It is very useful because without this convergence, you are left with a parameter dependent solution, particularly dependent on grid parameters.

David Young, I agree with that and go farther. If one wants a model giving solutions that respect energy and momentum conservation, then for fluids this is given by N-S. In this case a convergence to N-S solutions is not only important, but its absence is enough to kill the model. Indeed, if a model didn’t converge to the N-S solution then (assuming unicity) the trivial conclusion is that the results are wrong, and the worst is that you don’t even know by how much they are wrong.

.

> The only argument seems to be that it happened, so the probability was 100%. Well, that is indeed trivial. The issue is whether you can assign a probability in advance.

Nick, I am glad that you now see as trivial what you considered impossible to know a few posts ago.

There is still one point you don’t get. The issue is NOT whether you can assign a probability because, as I already said, you can NOT assign any probabilities with a numerical model. It is also quite easy to understand why: if you select a discrete finite sample in an infinite dimensional Hilbert space, which is what multiple runs of a numerical model do, then the measure of the computed final states is 0. As every probability is a measure defined on the Hilbert space, you cannot attach a value other than 0 to a finite set of isolated points.

To avoid further misunderstandings, this doesn’t mean that you can’t define probabilities in principle, it only means that you cannot do so with a finite set of points.

Personally, I would be interested in your idea of how “probabilities” in chaotic systems should be defined, their existence proven, and computed.

> How can the attractor give us the empirical evidence to define and calibrate the damage function (GHG emissions may actually be a net benefit, not net damage)?

Peter Lang, the answer is already partly in the question. If you can write your damage function in terms of wind velocities, precipitations, temperatures and perhaps other relevant physical parameters, then you already did a big part of the work.

What you would then like to know is in what intervals and with what frequency those parameters will lie. The knowledge of the attractor would help you because it could give you the frequencies at which different parts of the attractor will be visited (this is equivalent to probabilities), so that you could say of your damage function things like: there is a probability of 15% over a century that the damage function will be +10 and 25% that it will be −20, etc.

It seems to me that the real difficulty is to write the damage function, because it is necessarily local (e.g. depending on local human population, density of buildings, agricultural productivity, etc.) and then it is also hard to predict spatio-temporal chaos on small local spatial scales.
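For a system where the attractor and its invariant measure are actually known, the “frequency of visits equals probability” idea can be checked directly. A sketch with the logistic map, whose invariant density is ρ(x) = 1/(π√(x(1−x))) (my toy example, not anything from the comment):

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

# Count how often a long orbit visits the sub-interval [0.4, 0.6].
x, hits, n = 0.2, 0, 200_000
for _ in range(n):
    x = logistic(x)
    if 0.4 <= x <= 0.6:
        hits += 1

empirical = hits / n
# Exact measure of [0.4, 0.6] under the invariant (arcsine) density.
exact = (math.asin(2 * 0.6 - 1) - math.asin(2 * 0.4 - 1)) / math.pi
print(empirical, exact)  # both close to 0.128
```

Here the visitation frequency converges to the measure of the interval, which is exactly the sense in which knowing the attractor would yield probabilities for a damage function.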


“However, it occurs to me that the top-of-the-atmosphere radiative imbalance is probably a more predictable aspect.”

Turbulent Eddie, this is possible. Actually it depends on how you define it. If it is local then it is necessarily chaotic and therefore unpredictable (if only because of the clouds). If you take a spatial integral over the whole sphere then it is also chaotic (an integral of a chaotic solution is chaotic), but you would decrease the variability. So if, and only if, you had a good reason to think that there are no oscillations with long periods (dozens or hundreds of years), then you might have something with relatively low variability and hence a lower divergence of orbits.

“What if the real answer to global average temperature variation is ‘merely’ a matter of applying Fourier analysis to the long term global average temperature record?”

John Robertson, if you generalize this idea you get to the heart of my post. Indeed, Fourier series are an example of a countably infinite set which forms a basis in a space of functions with certain special properties. Such a basis is then equivalent to a countably infinite dimensional attractor where each basis function is a degree of freedom. Considering the Earth system, this applies to every dynamical variable (velocity, pressure, density, and so on), not only temperature.
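As a concrete, purely illustrative example of the countable-basis idea, the sketch below projects a toy time series onto the Fourier basis with NumPy, showing both that the expansion is exact and that truncating it to a handful of modes gives a low-dimensional approximation (the toy signal is invented; it is not a temperature record):

```python
# Illustrative sketch: a time series expanded in a countable Fourier basis,
# where each coefficient is one "degree of freedom" in the sense above.
import numpy as np

n = 1024
t = np.linspace(0.0, 1.0, n, endpoint=False)
# A toy "record": two oscillations plus a slow trend (made-up data).
signal = 0.5 * np.sin(2 * np.pi * 3 * t) + 0.2 * np.cos(2 * np.pi * 11 * t) + 0.1 * t

coeffs = np.fft.rfft(signal)             # projection onto the Fourier basis
reconstructed = np.fft.irfft(coeffs, n)  # resumming the basis functions

# The expansion is exact up to floating-point error.
assert np.allclose(signal, reconstructed)

# Keeping only the k largest coefficients gives a low-dimensional
# approximation -- the analogue of a low-dimensional attractor.
k = 5
smallest = np.argsort(np.abs(coeffs))[:-k]
truncated = coeffs.copy()
truncated[smallest] = 0.0
approx = np.fft.irfft(truncated, n)
print("RMS error of the 5-mode approximation:",
      np.sqrt(np.mean((signal - approx) ** 2)))
```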

As this is difficult to do for an infinite dimensional attractor, research was done to find a LOW dimensional attractor by using the Ruelle embedding method. I read several such papers but the results were not convincing, so it is now fairly certain that if an attractor exists, it has a large dimension. Another difficulty is that this kind of analysis has to be local, so the transition to “global averages” is neither easy nor certain to be valid.
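For readers unfamiliar with the embedding method mentioned above (delay-coordinate reconstruction, associated with Ruelle, Takens and Packard), here is a minimal sketch of the delay embedding itself; the hard part, estimating the attractor’s dimension from the embedded point cloud, is omitted, and the toy observable is invented for illustration:

```python
# Minimal delay-coordinate embedding (the reconstruction step only).
import numpy as np

def delay_embed(series, dim, tau):
    """Embed a scalar series: each point is
    (x[i], x[i + tau], ..., x[i + (dim - 1) * tau])."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Toy observable: a single scalar time series (made up for illustration).
t = np.arange(0.0, 50.0, 0.05)          # 1000 samples
x = np.sin(t) + 0.5 * np.sin(2.3 * t)

embedded = delay_embed(x, dim=3, tau=10)
print(embedded.shape)  # (980, 3)
```

The embedded cloud is what dimension estimators (e.g. correlation-dimension algorithms) are then run on; for a real high-dimensional attractor those estimates are where the published results proved unconvincing.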


“It has been well known for more than a century that no real fluid is a true continuum.”

John Reid, yes. But there have been numerical experiments at a scale of some 100 molecules, using explicit intermolecular potentials, which have shown that N-S is still valid at these scales. I don’t have the reference handy, but in writing your paper you have probably come across it.

Going below these scales makes the use of quantum mechanics a necessity (especially for inelastic collisions), and then, indeed, it makes sense to expect some stochasticity because QM is intrinsically stochastic. However, I am not sure that this fundamentally probabilistic behaviour at very small scales would change the conclusions about the behaviour of fluids at scales where they can be considered a continuum.

It is like the grey boundary between the quantum world and the classical world. It can’t be uniquely defined, but it is clear that the classical limit of probabilistic quantum laws is classical physics (e.g. the quantum correlations decay very fast).


“I also don’t understand the unusual lengths to which climate activists go to defend GCM’s.”

David Young, I share this lack of understanding. If GCMs were the right way to go, then I would expect that in 40 years the multitude of models would have converged to a single model outperforming any competition. This is how science has always worked: among a multitude of theories, eventually only one survives and kills all the others.

Yet what happens here is exactly the opposite: not only is everybody doing his own model, on a continuous scale from very simple to very complex, but there are now concepts like averaging N models known to be wrong, which should offend everybody’s common sense.

The only explanation I see is the classical defence of one’s stock in trade: too many people make their living without fear of having to deliver results, and they want it to last as long as possible.

But perhaps I am wrong and there is something we don’t see .

Tomas Milanovic, thank you for your essay, and for your responses to some of the comments.

“Indeed if a model didn’t converge to the N-S solution then (assuming unicity) the trivial conclusion is that the results are wrong and the worst is that you don’t even know by how much they are wrong.”

But the argument here is not that they *don’t* converge to an N-S solution. It is that you can’t show that they do, which is very different. And the reasons for that are structural: you can’t refine the grid beyond a certain degree (Courant constraint and computer time), and you don’t have a continuum N-S solution to compare with anyway. What you can do is:

1. Check that the conservation principles expressed by the N-S equations are satisfied at the grid element level.

2. Check for grid invariance, i.e. that the results you get don’t vary significantly with grid size/shape in the feasible range.

> As every probability is a measure defined on the Hilbert space, you cannot attach another value than 0 to a finite set of isolated points

Attaching a value other than 0 to intervals works just fine, and I hope the whole argument does not rest on that point.

Must be a vocabulary thing.

Yes Tomas, the ratio of meaningful comments here is higher than usual. There are still a lot of meaningless ones, however.

I’m trying to closely moderate comments on guest posts that are technical

You are doing a good job.

Thank you for your work.

Dave Fair

Tomas, Thanks. I would greatly appreciate that reference if you can recall it. However I am not talking about quantum effects. Chaos and fractals are purely mathematical constructs. Fractals cannot exist in a world made up of atoms and molecules because there must be some scale below which fractal self-similarity breaks down. At best natural phenomena can only be “fractal-like” over a finite range of scales.

More importantly, a deterministic description of the real world can only lead to the sort of issues that you describe. In a deterministic system all variables can (theoretically) be expressed as single-valued functions of time (forever, to infinity!), whereas in a stochastic description the state at any time is only probabilistically related to the state or states at previous times. A stochastic description is more powerful because it enables us to distinguish empirically between random fluctuations and evolving trends. For example, a stochastic analysis of global average temperature shows it to be a centrally biased random walk. As such, the observed variations in this quantity over the last century and a half are no more than we would expect. There is no significant trend in global average temperature; the apparent trend is an artifact of the deterministic paradigm. See http://blackjay.net/wp-content/uploads/2016/08/MS.pdf
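For readers who want to see what a “centrally biased random walk” looks like, here is a minimal sketch (an AR(1)-style mean-reverting walk; the parameters are arbitrary and are not taken from the linked paper): the noise makes each step random, while the central bias keeps excursions bounded in distribution, so trend-like wandering appears with no underlying forcing at all.

```python
# Illustrative sketch of a centrally biased (mean-reverting) random walk.
# Parameters are arbitrary; this is NOT the analysis in the linked paper.
import numpy as np

def centrally_biased_walk(n, bias=0.05, noise=1.0, seed=0):
    """Each step is white noise plus a pull of size -bias * x back
    toward zero (an AR(1) process with coefficient 1 - bias)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = (1.0 - bias) * x[i - 1] + noise * rng.standard_normal()
    return x

walk = centrally_biased_walk(10_000)
# The stationary standard deviation is noise / sqrt(1 - (1 - bias)**2),
# about 3.2 here, so long wandering excursions of several units are
# expected even though the process has no trend whatsoever.
print(round(float(np.std(walk)), 1))
```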

In my view the Navier-Stokes equations can be reconciled with the stochastic description by considering them as constraints, i.e. any fluid dynamical system evolves so that the (Shannon) entropy is maximized in accordance with these constraints. Unfortunately I am not a good enough mathematician to follow this through but it might be possible to determine the critical Reynolds Number theoretically in this way.

So, what things *are* predictable?

✔ Global average temperature increase?

Could be, because GAT is not as dependent on chaotic fluid flow. Somewhat less than past models predicted, but within a general range.

✔ Stratospheric cooling trend?

Also not as dependent on chaotic fluid flow.

✔ Enhanced Arctic winter time warming?

Also not as dependent on chaotic fluid flow – but obeys parameterizations of albedo and latent heat of ice, and remarkably well predicted a third of a century ago, using a much simpler GCM.

❓ High latitude increase in precipitation?

Precipitation is more difficult to measure than temperature ( which has problems enough ). Any signals don’t appear to exceed the noise, but bears watching.

✖ Hot Spot?

At least not since 1979, and not much since 1958. Not particularly surprising, since the GCMs create a double ITCZ as a matter of course, instead of only for El Niño years in the Eastern Pacific, which is what is observed.

✖ Regional temperature and precipitation? Extreme weather?

Numerous variances from models. This is not surprising, since these aspects are very dependent on the general circulation and the variations which naturally occur.

Turbulent Eddie ==> “chaotic fluid flow” is not the limiting factor on predictability. There are many chaotic systems involved in climate. Non-equilibrium heat transfer, the dynamical flow of unevenly heated fluids, convective cooling, the formation of clouds, phase shifts for water vapor, are a few examples.

Another unknown is the effect these non-linearities have on one another when “coupled” — when inter-dependent.

Here is a link to David Young’s list of references

https://curryja.files.wordpress.com/2016/10/david-young-ref-list.pdf

Thank you, Prof. Curry.

Thanks Judith for posting the reference list. Particularly relevant to topics raised in this thread are 10,11,14,15,17,18,20,33, and 34.

This was a great article, Nick “Navier” Stokes’ largely disingenuous trolling to the contrary notwithstanding. I’ve been wanting to write a similar piece myself for some time, though with a somewhat different slant.

Where I do take issue with it is in the assumption of equivalence between the Navier-Stokes equations and the behavior of fluids. I have a pet peeve, which began at Purdue when I studied CFD under Joe Hoffman: and that is the use of the term “governing equations.” Though most people think I’m making too much of my objection to it, the main part of this article begins with “The N-S equations govern the fluid mechanics.”

No, they don’t. At the very, very most, they (like every other mathematical expression in physics) *describe* fluid mechanics in quantitative terms. Equations don’t “govern” nature. And Mr. Milanovic may not consciously think they do, but many of the article’s conclusions indicate that he does. One in particular is the conclusion that the storm that only 20% of models predicted was in fact 100% certain, given that the equations are deterministic. Again, the Navier-Stokes equations are a mathematical model that we *think*, but don’t *know*, describes fluid flow. And none of the GCMs solve them anyway; they all solve a set of algebraic equations which modelers would have you believe are approximations to the N-S equations, even though they are not.

In order to capture turbulence, the GCMs use the “Reynolds averaged N-S equations,” which are not the N-S equations. This is done by Reynolds decomposition of the velocity terms, dividing each into a mean and a fluctuating term. These then produce a Reynolds stress tensor, and more variables than equations. In order to close the system, non-physical variables are introduced which amount to curve-fitting parameters. When CFD models are used to predict turbulent flow in, say, a pipe elbow or an airplane, the model is set up and run with a set of initial parameters. Then actual measurements are made. The results are invariably not even close, and the curve-fitting parameters are adjusted until the CFD “matches” measurements. After that is done, the CFD model will produce results that are good to engineering approximation even when run beyond the original test conditions. That level of approximation is in the low single-digit percent range. When the same model is used to simulate a different pipe elbow or airplane, the model will have to be completely readjusted. One situation doesn’t transfer to another.
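The Reynolds decomposition and the resulting closure problem described above can be shown in a few lines. The sketch below is a toy statistical illustration (made-up velocity samples, not a flow solver): splitting u and v into mean plus fluctuation leaves a nonzero averaged product ⟨u′v′⟩, the Reynolds stress, for which no independent equation exists.

```python
# Toy illustration of Reynolds decomposition (made-up samples, not CFD).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# "Turbulent" velocity samples at one point: mean flow plus fluctuations,
# with the u and v fluctuations deliberately correlated.
u = 10.0 + rng.normal(0.0, 2.0, n)
v = 0.3 * (u - 10.0) + rng.normal(0.0, 1.0, n)

# Reynolds decomposition: u = u_mean + u', v = v_mean + v'.
u_prime = u - u.mean()
v_prime = v - v.mean()

# The averages of the fluctuations vanish by construction...
print(round(float(np.mean(u_prime)), 6), round(float(np.mean(v_prime)), 6))

# ...but the averaged product <u'v'> (the Reynolds stress) does not:
# here it is about 0.3 * var(u') = 1.2. This extra unknown is what
# forces the closure models and their tunable constants.
print(round(float(np.mean(u_prime * v_prime)), 2))
```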

What is missing from GCMs is the test case that adjusts the turbulence parameters. It can’t be done, because there is not enough data on the level of climate to do it. As Mr. Milanovic accurately notes, the N-S solution requires initialization with data at every grid point, and real data of that nature doesn’t exist. Further, since the solutions are all extremely sensitive to initial conditions (that is a reality) putting in made-up data is pointless.

Personally, I think climate models are a huge waste of time and money. Because they are trying to model climate over a century in 20 minute time steps, they of necessity must use far fewer grid points than the average airplane designer has at his disposal, but applied to a physical scale 5 orders of magnitude larger. And remember, the available solution techniques have error terms no better than second order in h. h may be a millimeter on an aircraft wind tunnel model, but it’s 100 km or more in a GCM. So the GCM will have grid spacing such that h is huge compared to the important flow phenomena, and consequently very poor spatial accuracy. The highest resolution GCM running today would have devoted no more than 170 grid points to hurricane Matthew.
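The grid-point arithmetic behind that last claim is easy to reproduce. The sketch below uses assumed, illustrative numbers (a storm wind field roughly 1,000 km across, grid spacings of 100 km and 25 km); it is not a statement about any particular GCM or about hurricane Matthew specifically:

```python
# Back-of-envelope grid arithmetic (assumed, illustrative figures only).
import math

def grid_points_covering(diameter_km, grid_spacing_km):
    """Approximate number of horizontal grid cells covering a circular
    storm of the given diameter."""
    area = math.pi * (diameter_km / 2.0) ** 2
    return area / grid_spacing_km ** 2

# A ~1000 km wide storm on a typical 100 km grid versus a 25 km grid.
print(round(grid_points_covering(1000.0, 100.0)))  # ~79 cells
print(round(grid_points_covering(1000.0, 25.0)))   # ~1257 cells
```

Even the finer grid leaves each cell far larger than the eyewall and rain-band structures that determine a storm’s intensity.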

The finely meshed aircraft CFD analyses may involve complex geometries, but at least the geometries are very accurately specified. Further, the flow is single-phase, with no chemistry or heat transfer. Even those models never match such gross parameters as drag coefficient to better than 5% at high angle of attack, very turbulent flow in a run that reaches “steady state” in a few seconds. How could a hugely more complex, lower spatial resolution model produce better results over a century?

There’s one final reason I don’t believe GCMs, now or in the future. Remember, the GCM temperature is backed out of the equation of state of air. In the models, it is absolute temperature. The global average temperature is around 15.3 C, so one might be tempted to think that a change of 2 C in a century is a 13% change. But it is actually a 0.7% change in absolute temperature. There is no unsteady CFD model that has such accuracy over any extended period of time, and never will be.
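That percentage claim is simple arithmetic, reproduced below for concreteness (the 15.3 C mean is the figure used in the comment above):

```python
# Relative size of a 2 C change against the Celsius mean vs the absolute
# (Kelvin) temperature that appears in the equation of state.
mean_celsius = 15.3
mean_kelvin = mean_celsius + 273.15   # 288.45 K
delta = 2.0

print(round(100.0 * delta / mean_celsius, 1))  # 13.1 (% of the Celsius mean)
print(round(100.0 * delta / mean_kelvin, 2))   # 0.69 (% of absolute temperature)
```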

Well written comment, and more understandable to non-scientists such as me. Intuitively, I have always sensed that there is insufficient reliable data, temporally and spatially, for any coherent conclusions to be drawn about climate trends.

Thank you Tomas Milanovic, David Young and Michael Kelly for your excellent contributions to this thread and for taking the time to debunk the trolls.

Also thanks to Mike Flynn for doing yeoman’s duty in handling Mr. Mosher.

I second that. Mike Flynn never fails to put up where other more timid souls would wither on the vine.

“….. and for taking the time to debunk the trolls.”

I would like to comment on this.

Nick Stokes (for one) is about as far from a “Troll” as you will find on any climate blog he posts at.

FYI: a Troll adds nothing to the *discussion* by way of substance and is there merely for his own amusement, “winding up” others posting there.

That Nick is knowledgeable *should* be quite obvious – if one weren’t overwhelmed by confirmation bias.

In other words, a “Troll” is not someone with whom you disagree on the broader aspects of AGW.

To boot, I have never seen him retaliate against any of this kind of attack from *sceptics* (the asterisks are there for a reason).

You may say I’m trolling.

Me – I say I am merely pointing out what should be obvious to a fair minded and rational person.

Now Flynn, I consider a Troll… And I suspect he has finally been found out on at least one “contrarian” blog (along with another).

There are other Dragon-slayers here too.

They are the Trolls.

“Now Flynn, I consider a Troll…”

Heh, what was it you said about Trolls earlier, Tony? Ah, yes…

“In other words a ‘Troll’ is not someone with whom you disagree on the broader aspects of AGW.”

Tony Banton,

You wrote –

“Now Flynn, I consider a Troll… And I suspect he has finally been found out on at least one ‘contrarian’ blog (along with another). There are other Dragon-slayers here too. They are the Trolls.”

Your considerations and suspicions have no doubt been taken into account. By whom, I cannot say – and I must admit, I care even less.

And yet, you still cannot provide anything remotely resembling a falsifiable hypothesis relating to the alleged planet heating properties of CO2 (or any other gas).

But go ahead. Blame my existence for the last four and a half billion years of the Earth cooling. Blame me for the inability of anyone at all to demonstrate the GHE, if it makes you feel better.

Have you ever considered that facts don’t care what you or I think? Do you blame gravity if you drop a hammer and hurt your foot? I care as little for your sensibilities as gravity does for your sore foot!

If you believe that the GHE depends on what people think, good for you! Count me amongst the unbelievers. A troll if you wish, or maybe just a rational believer in facts and the scientific method.

Cheers.

> You still cannot provide anything remotely resembling a falsifiable hypothesis relating to the alleged planet heating properties of CO2 (or any other gas).

Not the virtus warmista again, Yeoman MikeF.

“As Mr. Milanovic accurately notes, the N-S solution requires initialization with data at every grid point, and real data of that nature doesn’t exist. Further, since the solutions are all extremely sensitive to initial conditions (that is a reality) putting in made-up data is pointless.”

That is true in CFD too. For flow over a wing, you don’t have meaningful initial data, so you put in made-up data, and wait till any perturbations from that have decayed to measure lift etc. But even in wind tunnel tests, you don’t have initial data either.

This isn’t a bug; it’s a feature. With GCMs and CFD, the results you want are independent of the initial conditions (chaos). They have to be, because in the situations you want to apply them to, you don’t know the initial conditions either. Suppose you did have a CFD analysis that gave a lift on the wing that varied with initial conditions. How could you relate that to real flight? What can you say about the initial conditions there? What would it even mean?

Nick, what you say here is proven wrong by recent work on multiple solutions showing that in fact the final solution is a strong function of the initial conditions. It’s in my reference list: “Numerical Evidence for Multiple Solutions for the Reynolds’ Averaged Navier Stokes Equations.” That’s just the tip of the iceberg. There is much more we didn’t publish.

It is well known that even in real life, there are multiple solutions, even though not as many as RANS shows. This is really very old work with wind tunnels.

David,

“Nick, What you say here is proven wrong by recent work on multiple solutions showing that in fact the final solution is a strong function of the initial conditions.”

That is way too broad a claim. This from the abstract of the paper you recommended earlier:

“The observed appearance of the multiple solutions seems to be closely related to smooth body separation (sometimes massive) routinely observed in flows over high-lift configurations, especially near stall angles of attack.”

Difficulty of modelling near-stall conditions simply mimics the difficulty of the conditions in real flight. But it has nothing to do with anything that might be found in GCMs.

But what I said is not proven wrong:

1. CFD is used for a vast range of real engineering analysis

2. In almost none are initial conditions properly known in reality or important in analysis

3. The results that are actually used are unaffected by this.

Nick, We have found many more multiple solutions than discussed in the paper. Some of them are nowhere near stall and don’t involve smooth surface separation. For example, even for mild flows solutions for a symmetric geometry are not symmetric. By definition there are multiple solutions that are mirror images of each other. Another one emerged for a simple subsonic vortex street separation case.

It is untrue that the results from CFD that are used are nowhere near conditions where multiple solutions appear. This is the impression you might have gotten from the flawed literature but it is not true. What we found is that this issue is systematically ignored by most people in the field. It will have to be addressed if we want to do more than very small changes to very benign flows.

You are just echoing the “conventional wisdom” that has grown up because these issues (sensitivity to parameters, grid, and initial conditions) are almost never reported. That’s a problem science in general suffers from. It’s been amply documented recently, and only the apologists for science, or those who are ignorant, seem unwilling to recognize it.

“2. In almost none are initial conditions properly known in reality or important in analysis”

That’s why aircraft analysis is not relevant to the discussion at hand.

Flow over an aircraft wing is determined by the flow of the environment compounded by the thrust of the aircraft. The controlled thrust dominates ( not too many 600mph winds in the atmosphere ), and as I understand it, is described by equations which are not chaotic.

This is not comparable to the atmospheric motions of Rossby waves and other smaller scale phenomena which are dominated by chaotic terms.

Now, that does go to the point I raised above: the given state and motion and even the given statistics of motion for a year/decade/century are not predictable. But some aspects of the atmosphere may well be predictable.

Convection is chaotic, but radiance is not. And since the higher one goes in the atmosphere, the less significant convection becomes and the more significant radiance becomes, the TOA RF is not dominated by chaos, so global surface temperature increase may well be predictable on the basis of GHGs. But weather and climate are not.

Michael, an interesting comment. With regard to tuning model constants: what you say is true of many of the less scrupulous in CFD. There are now at least 20 turbulence model versions to choose from. The more honest do not like this methodology and insist that one set of constants should suffice for all model runs. As in any system where “good” results are rewarded, they are often not the dominant voices.

Your point about turbulence models is exactly correct. This has nothing to do with the “laws of physics” and other faux justifications of these models. In my list of references, the Spalart paper thoroughly debunks some of the myths and unscientific dogmas that the advocates of “just physics” mistakenly promote.

You are also correct that the temperature anomaly allegedly sought by GCMs is very small compared to the overall energy level and thus is smaller than the numerical truncation error. That is a warning signal for most competent numerical analysts and mathematicians.

I would be more cautious about aerodynamic CFD. We have recently found that even attached, near-cruise flows can give codes problems. In particular, strong pressure gradients stress the models. And, as you say, in strongly separated flows errors can be as large as 50% in lift force. Drag is just completely wrong in these cases. We have been documenting the positive-results bias exhibited by the literature and are making some progress in getting people to acknowledge it. And then there is the multiple solution problem, bifurcations, etc. These are ignored in the literature. Most people just run the code, varying the parameters until they get the “right” answer. That’s not science; it’s little better than technical marketing.

Steven Mosher,

You wrote –

“Simple challenge Flynn.

Using only the watts from the sun, and watts from GHGs

Calculate the temperature for every 200km grid of the planet.

Simple. just one height above terrain. 1.5 meters.

You cant get within 10%”

May I point out that you are parroting specious nonsense while trying to sound sciency.

I might just as well pose you this simple challenge Mosher – demonstrate to my satisfaction that you have a clue. Pointless, I know. It’s an impossible demand.

“. . . watts from the Sun . . . ” is nonsense. Average? By frequency? Intercepted by the atmosphere, the surface, the aquaphere?

” . . . watts from GHGs . . ” is even more nonsensical. There is no GHE, and so called GHGs have no more “watts” than any other gas.

As to 200 km grid of the planet, what is the temperature at 1.5 meters above the rocky top of Mt Everest? At what time? Or 1.5 meters above the crust at the bottom of the Marianas Trench?

You can’t even pose a well defined question, can you?

Setting all your nonsensical demands aside, isn’t it easier to just use a thermometer if you want to know the temperature? You claim I can’t calculate a temperature to within 10% of that which is measured. My calculation would be to read the temperature in Celsius, and write it down. Adding about 273.15 would give me Kelvins. Calculation done. Accurate to within better than 10%! Where there is no thermometer, guess. You can’t dispute my guess – it will be reasonable, and I’ll revise the calculation if you’ve got a sneaky hidden thermometer in the vicinity.

As someone pointed out on another post, every GCM run provides different answers, the atmosphere being chaotic and all. I can only assume that your challenge related to the future, rather than to current reality.

Typical GHE adherent behaviour: demand answers to nonsensical, ill-posed questions. Sorry, but I have no intention of dancing to your tune. An offer of payment might change my opinion. How much are you offering for my predictions? (If you pay, they are predictions, obviously.)

Foolish chap. The Earth has cooled for four and a half billion years. There is no GHE. More CO2 is good. No CO2 is very, very, bad.

Calculate that, and let me know the answer if you wish. Show your workings, if you think I need convincing. I await the results.

Cheers.

Mike, you are correct and Mosher is not. It is that plain and that simple.

The models are useless when it comes to climate predictions and this will really be brought out to light in the coming years.

My prediction is that the data over the next few years will show AGW theory and the models to be 100% incorrect.

Salvatore del Prete:

“My prediction is the data over the next few years will completely show AGW theory and the models will be proven to be 100% incorrect.”

Not if Mosher and his Warmist cronies get to Mannipulate it with their AlGoreithms first, it won’t.

Fortunately, they are rapidly losing their credibility and becoming recognised for the laughing stock they have in fact been all along.

Which is why they are becoming increasingly hysterical, of course.

Tony Banton, another believer in AGW: the blind leading the blind.

If you say so Salvatore.

Just as in your Dragon-slayer physics.

Some here seem to argue that GCMs have some worth and that the ‘one’ value output – the global average temperature – is ‘good enough’ for policy decisions.

If, say, the USA experienced a temperature rise of 0.5 degrees in one year, and, say, Australia experienced a 0.5 degree fall, leaving the overall global average temperature unchanged (made-up figures, I know; just go with the concept, please), then:

The globe has experienced no temperature change for the year, but with radically different hemispheric temperatures.

How can the GCMs be ‘good enough’ for policy use?