Global climate models and the laws of physics

by Dan Hughes

We frequently see the simple statement, “The Laws of Physics”, invoked as the canonical summary of the status of the theoretical basis of GCMs.

We also see statements like, “The GCM models are based on the fundamental laws of conservation of mass, momentum, and energy.” Or, “GCM computer models are based on physics.” I recently ran across this summary:

How many hours have been spent verifying the Planck Law? The spectra of atmospheric gases? The laws of thermodynamics? Fluid mechanics? They make up climate models just as the equations of aerodynamics make up the airplane models.

And here’s another version:

Climate models are only flawed only if the basic principles of physics are, but they can be improved. Many components of the climate system could be better quantified and therefore allow for greater parameterisation in the models to make the models more accurate. Additionally increasing the resolution of models to allow them to model processes at a finer scale, again increasing the accuracy of the results. However, advances in computing technologies would be needed to perform all the necessary calculations. However, although the accuracy of predictions could be improved, the underlying processes of the models are accurate.

These statements convey no actual information. The only possible information content is implicit, and that implicit information is at best a massive mischaracterization of GCMs, and at worst disingenuous (misleading, even deceptive).

There are so many self-contradictions in the last quoted paragraph, both within individual sentences and between sentences, that it’s hard to know where to begin. The first sentence is especially self-contradictory (assuming there are degrees of self-contradiction). A very large number of procedures and processes are applied to the model equations between the continuous equations and the coded solution methods in GCMs. It is critical that the actual coding be shown to be exactly what was intended, as guided by theoretical analyses of the discrete approximations and numerical solution methods.

The articles from the public press that contain such statements sometimes allude to other aspects of the complete picture, such as the parameterizations that are necessarily a part of the models. But such public statements generally present an overly simplistic picture relative to the actual character and status of climate-change modeling.

It appears to me that the climate-change community is in a unique position relative to presenting such informal kinds of information. In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.

The fundamental, unaltered Laws of Physics that would be needed for calculating the responses of Earth’s climate systems are never solved by GCMs, and certainly never will be. That is a totally intractable calculation problem, both analytically and numerically. Additionally, and very importantly, the continuous equations are never solved directly to obtain the numbers presented as calculated results. Numerical solution methods applied to discrete approximations of the continuous equations are the actual source of the presented numbers. Importantly, the numerical solution methods used in GCM computer codes have not yet been shown to converge to the solutions of the continuous equations.

In order to gain insight from the numbers calculated by GCMs, a deep understanding of the actual source of the numbers is of paramount importance. The actual source is far removed from the source implied by such statements. The ultimate source of the calculated results is the numerical solutions produced by computer software. The numerical solutions arise from the discrete equations that approximate the continuous equations on which the model is based. Thus a bottom-up approach to understanding GCM reported results requires that the nitty-gritty details of what is actually in the computer codes be available for inspection and study.

The Actual Source of the Numbers

The ultimate source of the numbers calculated by GCMs is the following chain of processes and procedures:

(1) Application of assumptions and judgments to the basic fundamental “Laws of Physics” in order to formulate a calculation problem that is both (a) tractable, and (b) that captures the essence of the physical phenomena and processes important for the intended applications.

(2) Development of discrete approximations to the tractable equation system. The discrete approximations must maintain the requirements of the Laws of Physics (conservation principles, for example).

(3) Development of stable and consistent numerical solution methods for the discrete approximations. Stability plus consistency imply convergence (the Lax equivalence theorem). Yes, that result holds for well-posed problems, but some of the ODEs and PDEs used in GCMs do represent well-posed problems (heat conduction, for example). A minimal stability sketch is given after this list.

(4) Coding of the numerical solution methods.

(5) Ensuring that the solution methods, and all other aspects of the software, are correctly coded.

(6) Validation of the model equations by comparisons of calculated results with data from the physical domain.

(7) Development of application procedures and user training for each of the intended applications.
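As a minimal illustration of the stability analysis referred to in item (3), the sketch below runs a textbook von Neumann check on the forward-time, centered-space (FTCS) discretization of the one-dimensional heat equation. It is a toy example, not anything taken from a GCM code; the stability limit r = alpha*dt/dx^2 <= 1/2 is the standard result.

# Sketch: von Neumann stability of the FTCS scheme for u_t = alpha * u_xx.
# The discrete update is u_j^{n+1} = u_j^n + r*(u_{j+1}^n - 2*u_j^n + u_{j-1}^n),
# with r = alpha*dt/dx**2.  Substituting a Fourier mode u_j^n = G^n * exp(i*k*j*dx)
# gives the amplification factor G(k) = 1 - 4*r*sin(k*dx/2)**2.
# The scheme is stable (|G| <= 1 for every resolvable k) only when r <= 1/2.

import numpy as np

def ftcs_amplification(r, n_modes=100):
    """Amplification factor G over the range of resolvable wavenumbers."""
    theta = np.linspace(0.0, np.pi, n_modes)   # theta = k*dx
    return 1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2

for r in (0.25, 0.5, 0.75):
    worst = np.abs(ftcs_amplification(r)).max()
    print(f"r = {r:4.2f}  max|G| = {worst:.3f}  "
          f"{'stable' if worst <= 1.0 + 1e-12 else 'UNSTABLE'}")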

Validation, demonstrating the fidelity of the resulting whole ball of wax to the physical domain, is a continuing process over the lifetime of the models, methods, and software.

The long, difficult, iterative path through the processes and procedures outlined above, from the fundamental Laws of Physics in their continuous form to the calculated numbers, is critically affected by a number of factors that are seldom mentioned whenever GCMs are the subject. Among the more significant of these is the well-known user effect, item (7) above. Complex software built around complex physical-domain problems requires very careful attention to the qualifications of the users for each application.

In these notes the real-world nature and characteristics of such complex software and physical domain problems are examined in the light of the extremely simplified public face of GCMs. That public face will be shown to be a largely false characterization of these models and codes.

The critically important issues are those associated with (1) the modifications and limitations of the continuous formulations of the model equation systems used in GCMs (for example, the fluid-flow equations are not the complete fundamental form of the Navier-Stokes equations, and the radiative-energy-transport equations are not the fundamental formulation for an interacting medium), (2) the exact transformation of all the continuous equation formulations into discrete approximations, (3) the critically important properties and characteristics of the numerical solution methods used to solve the discrete approximations, (4) the limitations introduced at run time for each type of application and their effects on the response functions of interest for that application, and (5) the expertise and experience of the users of the GCMs in each application area.

These matters are discussed in the following paragraphs.

Background

Such statements as those mentioned above provide, at the very best, only a starting point relative to where the presented numbers actually come from. It is generally not possible to present an accurate and complete description of what constitutes the complete model in communications intended primarily to be informal presentations of a model and a few results. However, the overly simplistic summary that is usually presented should be tempered to more nearly reflect the reality of GCM models and methods and software.

Here are four examples of where GCM model equations depart from the fundamental Laws of Physics.

(1) In almost no practical applications of the Navier-Stokes equations are they solved to the degree of resolution necessary for accurate representation of fluid flows near and adjacent to stationary, or moving, surfaces. Two such surfaces of interest in modeling Earth’s climate systems are the air-water interface between the atmosphere and the oceans, and the interface between the atmosphere and the land. When considering the entirety of the interactions between sub-systems, including, for example, biological, chemical, hydrodynamic, and thermodynamic interactions, the number of such interfaces is quite large.

The gradients, which appear in the fundamental formulations at these interfaces, are all replaced by algebraic approximations. The replacement occurs at the continuous equation level, even prior to making discrete approximations. These algebraic models and correlations are used to represent mass, momentum, and energy exchanges between the materials that make up the interfaces.

(2) The assumption of hydrostatic equilibrium normal to Earth’s surface is exactly that; an assumption. The fundamental Law of Physics, the complete momentum balance equation for the vertical direction, is not used. (A schematic comparison of the two forms is given after this list.)

(3) Likewise, the popular description of the effects of CO2 in Earth’s atmosphere rests on an assumption of nearly steady-state balance between in-coming and out-going radiative energy exchange. This is sometimes attributed to the Laws of Physics and conservation of energy. However, conservation of energy holds for all time and everywhere. The balance between in-coming and out-going radiative energy exchange for a system that is open to energy exchange is solely an assumption and is not related to conservation of energy.

(4) There is a singular, critically important difference between the proven, fundamental Laws of Physics and the basic model equations used in GCMs. The fundamental Laws of Physics are based solely on descriptions of materials. The parameterizations that are used in GCMs are instead approximate descriptions of previous states that the materials have attained. The proven fundamental laws never incorporate descriptions of states that the materials have previously attained. Whenever descriptions of states that materials have experienced appear in equations, the results are models of the basic fundamental laws, not the laws as originally formulated.
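To make example (2) above concrete, here is the standard textbook contrast, written schematically in LaTeX notation. The symbols are generic (w is vertical velocity, F_z collects friction and other residual body-force terms per unit mass), and the exact form used in any particular GCM dynamical core will differ.

% Full vertical momentum balance, per unit mass (schematic):
\frac{Dw}{Dt} = -\frac{1}{\rho}\frac{\partial p}{\partial z} - g + F_z

% Hydrostatic approximation used in its place:
\frac{\partial p}{\partial z} = -\rho g
% i.e., the vertical acceleration Dw/Dt and the residual terms F_z are taken to be negligible.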

A more nearly complete description of exactly what constitutes computer software developed for analyses of inherently complex physical phenomena and processes is given in the following discussions.

Characterization of the Software

Models and associated computer software intended for analyses of real-world complex phenomena and processes are generally composed of the following models, methods, software, and user components:

1. Basic Equations Models The basic equations are generally from continuum mechanics, such as the Navier-Stokes-Fourier model for mass, momentum and energy conservation in fluids, heat conduction in solids, radiative energy transport, chemical-reaction laws, the Boltzmann equation, and many others. The fundamental equations also include the constitutive equations for the behavior and properties of the associated materials: the equation of state, thermo-physical and transport properties, and basic material properties. Generally, the basic equations refer to the behavior and properties of the materials of interest.
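For orientation, the kind of unapproximated continuum statements being referred to here are, schematically, the following (single-phase fluid; \tau is the viscous stress tensor, \Phi the viscous dissipation, e the specific internal energy). This is a generic textbook rendering, not the equation set of any particular GCM.

% Conservation of mass (continuity):
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0

% Conservation of momentum (Navier-Stokes, schematic):
\rho \frac{D\mathbf{u}}{Dt} = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{g}

% Conservation of energy (internal energy form, Fourier heat conduction):
\rho \frac{De}{Dt} = -p\, \nabla \cdot \mathbf{u} + \nabla \cdot (k \nabla T) + \Phi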

Even though the fundamental equations of mass, momentum, and energy conservation are taken as the starting point for modeling the physical phenomena and processes of importance, several assumptions and approximations are generally needed in order to make the problem tractable, even with the tremendous computing power available today. The exact radiative transfer equations for an interacting medium, for example, are not solved; instead, approximations are introduced to make the problem tractable.

With almost no exceptions, the basic, fundamental laws in the form of continuous algebraic equations, ODEs, and PDEs from which the models are built are not the equations that are ultimately programmed into the computer codes. Assumptions and approximations, appropriate for the intended application areas, are applied to the fundamental original form of the equations to obtain the continuous equations that will be used in the model. The approximations that are made are to greater and lesser degrees important relative to the nature of the physical phenomena and processes of interest. A few examples are given in the following paragraphs.

The fluid motions of the mixtures in both the atmosphere and oceans are turbulent and there is no attempt at all to use the fundamental laws of turbulent fluid motions in GCM models/codes. For the case of two- or multi-phase turbulent flows, liquid droplets in a gaseous mixture for example, the fundamental laws are not yet known.

The exchanges of mass, momentum, and energy at the interfaces between the systems that make up the climate (atmosphere, oceans, land, biological, chemical, etc.) are, at the fundamental-law level, expressed as a coefficient multiplying the gradient of a driving potential. These gradient formulations are never used in the GCM models/codes because the spatial resolution used in the numerical solution methods does not allow the gradients to be resolved. The gradients of the driving potentials are not calculated in the codes. Instead, algebraic correlations of empirical data, based on a bulk-state-to-bulk-state average potential, are used. These correlations are almost always algebraic equations.
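To illustrate the difference, compare a flux-gradient (Fourier) formulation with a typical bulk-transfer correlation for sensible heat at an air-surface interface. These are generic textbook forms, not code taken from any GCM, and the transfer coefficient C_H below is an illustrative, empirically tuned number.

# Sketch: gradient law vs. bulk-transfer correlation for sensible heat flux.
# Generic textbook forms only; C_H is an illustrative, empirically tuned value,
# not a coefficient from any particular GCM.

RHO_AIR = 1.2     # nominal air density, kg/m^3
CP_AIR = 1005.0   # specific heat of air, J/(kg K)
K_AIR = 0.025     # molecular thermal conductivity of air, W/(m K)

def flux_from_gradient(dT_dz_at_surface):
    """Fourier's law: requires resolving the temperature gradient at the surface."""
    return -K_AIR * dT_dz_at_surface                                   # W/m^2

def flux_from_bulk_correlation(T_surface, T_air, wind_speed, C_H=1.2e-3):
    """Bulk aerodynamic formula: an algebraic correlation between bulk states."""
    return RHO_AIR * CP_AIR * C_H * wind_speed * (T_surface - T_air)   # W/m^2

# Example: 2 K air-sea temperature difference, 8 m/s wind -> roughly 20 W/m^2.
print(flux_from_bulk_correlation(T_surface=300.0, T_air=298.0, wind_speed=8.0))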

The modeling of radiative energy transport in an interacting medium does not use the fundamental laws of radiative transport. Assumptions are applied to the fundamental law so that a reasonable and tractable approximation to the physical phenomena for the intended application is obtained.

While the fundamental equations are usually written in conservation form, not all numerical solution methods exactly conserve the physical quantities. Actually, a test of numerical methods might be that conserved quantities in the continuous partial differential equations are in fact conserved in actual calculations.
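A minimal sketch of the kind of conservation test suggested above: advect a scalar with a flux-form (finite-volume) upwind scheme on a periodic domain and confirm that the discrete integral of the scalar does not drift. This is a toy problem, not a GCM test case.

# Sketch: check that a flux-form upwind scheme conserves the domain integral of
# an advected scalar on a periodic 1-D grid.  Toy problem only.

import numpy as np

nx = 200
dx, c = 1.0 / nx, 1.0                     # cell size, advection speed
dt = 0.4 * dx / c                         # timestep satisfying the CFL condition
x = (np.arange(nx) + 0.5) * dx
q = np.exp(-200.0 * (x - 0.3) ** 2)       # initial scalar field

total0 = q.sum() * dx                     # discrete integral at t = 0
for _ in range(500):
    flux = c * q                          # upwind flux for c > 0
    q = q - dt / dx * (flux - np.roll(flux, 1))   # conservative (flux-form) update

print(f"conservation drift after 500 steps: {abs(q.sum() * dx - total0):.3e}")
# The drift sits at round-off level; a non-conservative discretization would not pass.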

This comment should not be interpreted to mean that the basic model equations are incorrect. They are, however, incomplete representations of the fundamental Laws of Physics. Additionally, as next discussed, the algebraic equations of empirical data are often far from based on fundamental laws.

2. Engineering Models and Correlations of Empirical Data These equations generally arise from experimental data and are needed to close the basic model equations; turbulent fluid flow, heat transfer and friction factor correlations, mass exchange coefficients, for examples. Generally the engineering models and empirical correlations refer to specific states of the materials of interest, not the materials themselves, and are thus usually of much less than a fundamental nature. Many times these are basically interpolation methods for experimental data.

Models and correlations that represent states of materials and processes do not represent properties of the materials and are thus of much less of a fundamental nature than the basic conservation laws.
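A familiar example of such a correlation, taken from single-phase pipe flow rather than from any GCM, is the Blasius friction-factor fit for turbulent flow in smooth pipes. It interpolates experimental data over a limited Reynolds-number range and says nothing fundamental about the fluid as a material.

# Sketch: the Blasius correlation for the Darcy friction factor in smooth pipes,
# f = 0.316 * Re**(-1/4), an empirical fit roughly valid for 4e3 < Re < 1e5.
# It is a correlation of measured flow states, not a fundamental law.

def blasius_friction_factor(reynolds):
    if not 4.0e3 < reynolds < 1.0e5:
        raise ValueError("Blasius fit applied outside its experimental range")
    return 0.316 * reynolds ** -0.25

print(blasius_friction_factor(5.0e4))   # ~0.021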

3. Special Purpose Models Special purpose models are used for phenomena and processes that are too complex or insufficiently understood to model from basic principles, or that would require excessive computing resources if modeled from basic principles.

The apparently all-encompassing parameterizations used in almost all GCM models and codes fall under items 2 and 3. Many physical phenomena and processes important to climate-change modeling are treated by parameterization. Some of the parameterizations are of a heuristic and ad hoc nature.

Special purpose models can also include calculations of quantities that assist users, post-processing of calculated data, and calculation of quality-control quantities. Calculation of solution functionals, and other aspects that do not feed back to the main calculations, are examples.

4. Important Sources from Engineered Equipment Models are needed for phenomena and processes occurring in complex engineering equipment whenever the physical system of interest includes hardware. In the case of the large general GCMs, the processes involved in converting materials of one form and composition into other forms and compositions involve engineered equipment.

Summary of the Continuous Equations Domain The final continuous equations that are used to model the physical phenomena and processes usually arise from these first four items. The continuous equations always form a large system of coupled, non-linear partial and/or ordinary differential equations (PDEs and ODEs) plus a very large number of algebraic equations.

For the class of models of interest here, and for models of inherently-complex, real-world problems in general, the projective/predictive/extrapolative capabilities are maintained in the modeling under Items 1, 2, 3, and 4 listed above.

5. The Discrete Approximation Domain Moving to the discrete-approximation domain introduces a host of additional issues, and the ‘art’ aspects of ‘scientific and engineering’ computations complicate these. Within the linearized domain, the theoretical ramifications can be checked computationally by use of idealized flows that correspond to the assumptions of the theoretical analyses.

The adverse theoretical ramifications do not always prevent successful applications of the model equations, in part because the critical frequencies are not resolved in the applications, and in part because the discrete approximations usually have inherent, implicit representations of dissipative-like terms. Such dissipative terms are also sometimes explicitly added into the discrete approximations, and sometimes these added-in terms do not have counterparts in the original fundamental equations.

Reconciliation of the theoretical results with computed results is also complicated by the basic properties of the selected solution method for the discrete approximations. The methods themselves can introduce aphysical perturbations into the calculated flows. And these are further complicated whenever the discrete approximations contain discontinuous algebraic correlations (for mass, momentum, and energy exchanges, for examples) and switches that are intended to prevent aphysical calculated results. In the physical domain any discontinuity (in pressure, velocity, temperature, EoS, thermophysical, and transport properties, for examples) has a potential to lead to growth of perturbations. In the physical domain, however, physical phenomena and processes act to limit growth of physical perturbations.

6. Numerical Solution Methods Numerical solution methods for all the equations that comprise the models are necessary. These processes are the actual source of the numbers that are usually presented as results.

Almost all complex physical phenomena are non-linear, with a multitude of temporal and spatial scales, interactions, and feedbacks. Universally, numerical solution methods via finite-difference, finite-element, spectral, and other discrete-approximation approaches are essentially the only alternative for solving the system of equations. When applied to the continuous PDEs and ODEs and the algebraic equations of the model, these approximations give systems of coupled, nonlinear algebraic equations that are enormous in size.

Almost all of the important physical processes occur at spatial scales smaller than the discrete spatial resolution employed in the calculations. Additionally, the temporal scales of the phenomena and processes encountered in applications range from those associated with chemical reactions to time spans on the order of a century. In the GCM solution methods almost none of these temporal scales are resolved.

It is a fact that numerical solution methods are the dominant aspect of almost all modeling and calculation of inherently complex physical phenomena and processes in inherently complex geometries. The spatial and temporal scales of the application area of GCMs are enormous, maybe unsurpassed in all of modeling and calculations. The tremendous spatial scale of the atmosphere and oceans has so far proven to be a very limiting aspect relative to computing requirements, especially when coupled with the large temporal scale of interest; centuries of time, for example.
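A rough back-of-the-envelope sketch of why these scales are limiting is given below. The numbers (100 km versus 1 km horizontal resolution, 30 vertical levels, an acoustic CFL limit for an explicit scheme, a 100-year run) are nominal and illustrative only, not the settings of any actual model; production codes use semi-implicit and other devices to relax the acoustic constraint.

# Back-of-the-envelope sketch of GCM grid and timestep counts.
# All numbers are nominal and for illustration only.

EARTH_AREA = 5.1e14              # m^2
LEVELS = 30                      # nominal number of vertical levels
SOUND_SPEED = 340.0              # m/s, sets an acoustic CFL limit for explicit schemes
SECONDS = 100 * 365.25 * 86400   # one century

for dx_km in (100.0, 1.0):
    dx = dx_km * 1000.0
    cells = EARTH_AREA / dx ** 2 * LEVELS
    dt = dx / SOUND_SPEED                 # CFL-limited explicit timestep
    steps = SECONDS / dt
    print(f"dx = {dx_km:6.1f} km: ~{cells:.1e} cells, dt ~ {dt:7.1f} s, "
          f"~{steps:.1e} timesteps per century")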

In GCM codes and applications, the algebraic approximations to the original continuous equations are only approximately solved. Grid independence has never been demonstrated, for example. The lack of demonstrated grid independence is proof that the algebraic equations have been only approximately solved. Independent Verification of (1) the coding and (2) the actual accuracy achieved by the numerical solution methods has also never been demonstrated.
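For contrast, here is a sketch of what a grid-independence demonstration looks like in practice: solve the same problem on three systematically refined grids and estimate the observed order of accuracy from the differences between successive solutions (Richardson-style). The toy problem below integrates dy/dt = -y with the explicit Euler method and is illustrative only.

# Sketch: estimating the observed order of accuracy from three grids, as in a
# grid-independence (grid-convergence) study.  Toy problem: dy/dt = -y, y(0) = 1,
# integrated to t = 1 with explicit Euler; the observed order should approach 1.

import math

def euler_solution(n_steps):
    dt, y = 1.0 / n_steps, 1.0
    for _ in range(n_steps):
        y += dt * (-y)
    return y

coarse, medium, fine = (euler_solution(n) for n in (100, 200, 400))
# With a constant refinement ratio r = 2, the observed order p satisfies
#   (coarse - medium) / (medium - fine) = r**p
p = math.log(abs(coarse - medium) / abs(medium - fine)) / math.log(2.0)
print(f"observed order of accuracy ~ {p:.2f}")   # ~1.0 for explicit Euler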

Because numerical solutions are the source of the numbers, one of the primary focuses of discussions of GCM models and codes must be the properties and characteristics of the numerical solution methods. Some of the issues that have not been sufficiently addressed are briefly summarized here.

Summary of the Discrete Approximation Domain The discrete approximation domain ultimately determines the correspondence between the properties of the fundamental Laws of Physics and the actual numbers from GCMs. The overriding principles of conservation of mass and energy, for example, can be destroyed in this domain. One cannot simply state that the Laws of Physics ensure that fundamental conservation principles are obtained.

7. Auxiliary Functional Methods Auxiliary functional methods include instructions for installation on the users’ computer system, pre- and post-processing, code input and output formats, analyses of calculated results, and other user-aids such as training for users.

Accurate understanding and presentation of calculations of inherently complex models and equally complex computer codes demands that the qualifications of the users be determined and enhanced by training. The model/code developers are generally most qualified to provide the required training.

8. Non-functional Methods Non-functional aspects of the software include its understandability, maintainability, extensibility, and portability. Large complex codes have generally evolved, usually over decades, rather than being built from scratch, and thus include a variety of potential sources of problems in these areas.

9. User Qualifications For real-world models of inherently complex physical phenomena and processes the software itself will generally be complex and somewhat difficult to accurately apply and the calculated results somewhat difficult to understand. Users of such software must usually receive training in applications of the software.

Summary

I think all of the above characterizations, properties, procedures, and processes, presented from a bottom-up focus, constitute a more nearly complete and correct characterization of GCM computer codes. The models and methods summarized above are incorporated into computer software for applications to the analyses for which the models and methods were designed.

Documentation of all the above characteristics, in sufficient detail to allow independent replication of the software and its applications, is generally a very important aspect of development and use of production-grade software.

Unlike “pure” science problems, for which the unchanged fundamental Laws of Physics are solved, the simplifications and assumptions made at the fundamental-equation level, the correlations and parameterizations, and, especially, the finite-difference aspects of GCMs are the overriding concerns.

Spatial discontinuities in all fluid-state properties (density, velocity, temperature, pressure, etc.) introduce the potential for instabilities, as do discontinuities in the discrete representation of the geometry of the solution domain. Physical instabilities captured by the equations in GCMs, and the behavior of the numerical solution methods when these are resolved, become vitally important. The solutions must be demonstrated to be correct and not artifacts of the numerical approximations and solution methods.

GCMs are Process Models Here’s a zeroth-order cut at differentiating a computational physics problem for The Laws of Physics from working with a process model of the same physical phenomena and processes.

A computational physics problem will have no numerical values for coefficients appearing in the continuous equations other than those that describe the material of interest.

Process models can be identified by the fact that given the same material and physical phenomena and processes, there is more than one specification for the continuous equations and more than one model.

Some process models are based on more nearly complete usage of the fundamental equations, and fewer parameterizations, than others.

The necessary degree of completeness for the continuous equations, and the level of fidelity for the parameterizations, in process models is determined by the dominant controlling physical phenomena and processes.

The sole issue for computational physics is Verification of the solution.

Process models will involve many calculations in which variations of the parameters in the model are the focus. None of these parameters will be associated with properties of the material. Instead they will all be associated with configurations that the material has experienced, or nearly so, at some time in the past.
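One way to see the distinction being drawn here, sketched with illustrative numbers: a material-property correlation such as Sutherland’s law for the viscosity of air depends only on the state of the material, whereas a tuned process parameter such as an autoconversion threshold or a precipitation efficiency stands in for unresolved states a modeled system has previously attained. The Sutherland constants below are the commonly tabulated ones for air; the process parameter is a placeholder.

# Sketch: a material-property correlation vs. a tuned process-model parameter.

def air_viscosity_sutherland(T_kelvin):
    """Sutherland's law for air: a property of the material, a function of its state."""
    mu_ref, T_ref, S = 1.716e-5, 273.15, 110.4     # Pa*s, K, K (tabulated for air)
    return mu_ref * (T_kelvin / T_ref) ** 1.5 * (T_ref + S) / (T_kelvin + S)

print(air_viscosity_sutherland(300.0))             # ~1.85e-5 Pa*s

# By contrast, a process parameter is not a property of air or water at all: it is
# a tuned number standing in for unresolved, previously attained states, and it can
# legitimately take different values at different resolutions.
AUTOCONVERSION_THRESHOLD = 2.0e-4   # kg/kg, illustrative placeholder only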

Moderation note:  As with all guest posts, please keep your comments civil and relevant.  Moderation on this post will be done with a heavy hand, please keep your comments substantive.

978 responses to “Global climate models and the laws of physics”


  2. Dan Hughes,

    In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.

    Here, knock yourself out.

    Process models will involve many calculations in which variations of the parameters in the model are the focus. None of these parameters will be associated with properties of the material. Instead they will all be associated with configurations that the material has experienced, or nearly so, at some time in the past.

    Wonderful. When can we expect to see your model?

    • Laws of physics are exaggerated opinions of infallibility.

    • “Laws of physics are exaggerated opinions of infallibility.”

      Tell that to the electricity coming into your house.
      Or your car when you try to start it.
      Or your brakes when you want to stop.

    • Looks like Dan nailed part of it. Why do you think Dan should produce a climate model? :

      The following adjustable parameters differ between the various finite volume resolutions in the CAM 5.0. Refer to the model code for parameters relevant to alternative dynamical cores.

      Table C.1: Resolution-dependent parameters

      Parameter     FV 1 deg   FV 2 deg   Description
      qic,warm      2.e-4      2.e-4      threshold for autoconversion of warm ice
      qic,cold      18.e-6     9.5e-6     threshold for autoconversion of cold ice
      ke,strat      5.e-6      5.e-6      stratiform precipitation evaporation efficiency parameter
      RHlow,min     .92        .91        minimum RH threshold for low stable clouds
      RHhigh,min    .77        .80        minimum RH threshold for high stable clouds
      k1,deep       0.10       0.10       parameter for deep convection cloud fraction
      pmid          750.e2     750.e2     top of area defined to be mid-level cloud
      c0,shallow    1.0e-4     1.0e-4     shallow convection precip production efficiency parameter
      c0,deep       3.5E-3     3.5E-3     deep convection precipitation production efficiency parameter
      ke,conv       1.0E-6     1.0E-6     convective precipitation evaporation efficiency parameter
      vi            1.0        0.5        Stokes ice sedimentation fall speed (m/s)

      • brandon, “How many angels can dance on the head of a pin?”

        yawn, typical progressive BS. “Scientific” press releases are getting more and more hyped in self-promotion efforts, which creates less “faith” in scientists. If scientists do not have the gonads to correct the record, the distrust will just get “progressively” worse.

      • Dallas,

        yawn, typical progressive BS. “Scientific” press releases are getting more and more hyped in self-promotion efforts, which creates less “faith” in scientists. If scientists do not have the gonads to correct the record, the distrust will just get “progressively” worse.

        Yawn, typical contrarian disconnect from reality:

        http://content.gallup.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/mv4nnuxuy0-t17h_w0su9g.png

        http://content.gallup.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/kk6qz4ey2kqlojjpj6wnna.png

        Cue: “See? The propaganda is working”.

        Here, does this comport to your arbitrary and vague definition of “accurate picture of the science”? http://phys.org/news/2012-11-limitations-climate.html

        How accurate is the latest generation of climate models? Climate physicist Reto Knutti from ETH Zurich has compared them with old models and draws a differentiated conclusion: while climate modelling has made substantial progress in recent years, we also need to be aware of its limitations.
        We know that scientists simulate the climate on the computer. A large proportion of their work, however, is devoted to improving and refining the simulations: they include recent research results into their computer models and test them with increasingly extensive sets of measurement data. Consequently, the climate models used today are not the same as those that were used five years ago when the Intergovernmental Panel on Climate Change (IPCC) published its last report. But is the evidence from the new, more complex and more detailed models still the same? Or have five years of climate research turned the old projections upside down?

        It is questions like these that hundreds of climate researchers have been pursuing in recent years, joining forces to calculate the climate of the future with all thirty-five existing models. Together with his team, Reto Knutti, a professor of climate physics, analysed the data and compared it with that of the old models. In doing so, the ETH-Zurich researchers reached the conclusion: hardly anything has changed in the projections. From today’s perspective, predictions five years ago were already remarkably good. “That’s great news from scientist’s point of view,” says Knutti. Apparently, however, it is not all good: the uncertainties in the old projections still exist. “We’re still convinced that the climate is changing because of the high levels of greenhouse gas emissions. However, the information on how much warmer or drier it’s getting is still uncertain in many places,” says Knutti. One is thus inclined to complain that the last five years of climate research have led nowhere – at least as far as the citizens or decision makers who rely on accurate projections are concerned.

        Still too apologetic? Try this:

        Once again this brings us back to the thorny question of whether a GCM is a suitable tool to inform public policy.
        Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy.

        Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas.
        Everyone* agrees that CO2 rise is anthropogenic
        Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either.

        So climate mitigation policy is a political judgement based on what policymakers think carries the greater risk in the future – decarbonising or not decarbonising.

        A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, year and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then futher refining the models). Clearly, contingency planning and adaptation need to be done in the face of large uncertainty.

        *OK so not quite everyone, but everyone who has thought about it to any reasonable extent
        **Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence

        Aug 22, 2014 at 5:38 PM | Registered Commenter Richard Betts

        Are the bits I bolded ballsy enough for you? Does Dr. Betts suitably anti-hype things to your discerning and exacting specifications? Do I need to assign even more climatologists to the role of media nanny at the expense of them doing, I don’t know, their real jobs?

        I’m just trying to make sure all your concerns are being properly addressed in a timely fashion. The customer is always right, you know.

        ***

        At what point does it occur to you that if we don’t know f%#kall about how the real system works that we probably shouldn’t be making changes to it?

        Sweet weepin’ Reason on the cross.

    • Brandon – Specifically which of Dan’s assertions do you dispute? You just tossed off a throw-away comment as far as I can tell.

      • jim2,

        Specifically which of Dan’s assertions do you dispute?

        I quoted it: Instead, the actual equations that constitute the model are presented.

        Is the popular press he’s critiquing supposed to do that every time they write about the physical underpinnings of climate models?

        Why do you think Dan should produce a climate model?

        Don’t tell me, show me. Engineering is *applied* science. Instead, he’s giving the climate modeling community homework assignments by way of attacking what’s written about GCMs by journalists.

      • brandon, “Instead, he’s giving the climate modeling community homework assignments by way of attacking what’s written about GCMs by journalists.”

        Right, the climate modeling community should be communicating with the churnalists so the public gets an accurate picture of the science. How many churnalists “know” that the models run hot so they are providing an upper limit instead of likely possibility? How many churnalists know that RCP 8.5 isn’t really the business as usual estimate but one of a number of business as usual estimates?

      • brandon, and just for fun, the Kimoto equation is a simple model that produces a low, but reasonable estimate of response to CO2 based on “current” energy balance information. Everyone should know that it is a lower end estimate, though can be adapted for more detail, but there is huge opposition to anyone producing a low end estimate, with reasonable caveats vs acceptance of known high end estimates.

        It is almost like a high bias is the most useful for some reason.

      • the Kimoto equation is a simple model that produces a low, but reasonable

        That’s mutually exclusive.

      • Micro, “That’s mutually exclusive.”

        No it isn’t; it is a boundary value problem, so having a reasonable lower bound estimate is desirable. If you expand Kimoto you have a fairly complex partial differential equation set. Not much different than NS except it considers a combination of energy fluxes.

      • /sarc sorry lol

        No you’re right, I draw boxes around solutions all the time, very useful in getting your point across.

  3. “Colorless green ideas sleep furiously” obeys all laws of grammar.

  4. Hard to believe people won’t accept the scientific evidence that science has completely F@@ked up the environment in complete clueless like fashion pretending all this time to understand nature! Maybe they cling to the outdated notion that science has a clue!!!!

  5. All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time. No law of physics is complete [yet].

    So we develop “parameters” to dance between the bumps and holes of the laws we have ratified. You can think of a double black diamond ski run or a class 4 river rapid. Lots of bumps and holes, but many different “parameterized” ways down.

    • “All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time.”

      Talk about pseudoscientific babble speak.

      The Planck law holds everywhere, not only in some “sweet spot,” in, as far as we know, all of spacetime, not as a Bell Curve, but as a fundamental law of nature.

      • Not true.
        eg definition
        “The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. ”
        is very suggestive of a Bells Curve.

      • You call that a kna-ife, angech? That is a kna-ife:

        Within metaphysics, there are two competing theories of Laws of Nature. On one account, the Regularity Theory, Laws of Nature are statements of the uniformities or regularities in the world; they are mere descriptions of the way the world is. On the other account, the Necessitarian Theory, Laws of Nature are the “principles” which govern the natural phenomena of the world. That is, the natural world “obeys” the Laws of Nature. This seemingly innocuous difference marks one of the most profound gulfs within contemporary philosophy, and has quite unexpected, and wide-ranging, implications.

        http://www.iep.utm.edu/lawofnat/#SH2a

        Clifford was probly a regularist, BTW.

      • “The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. ”
        is very suggestive of a Bells Curve.

        No — you can describe things in other ways without it fitting a normal distribution. The S-B Law is very much not a normal distribution (not a “Bell Curve”).

        Are you using “Bell Curve” to just mean any kind of statistical distribution with some spread?

      • “The S-B Law is very much not a normal distribution (not a “Bell Curve”).”

        I didn’t say it was. Nor did the OP. He said, if I understood correctly, that F=ma gives, for a given a, a range of values of F that are Gaussian distributed.

        Which isn’t true. Nor, for a given temperature T, does the S-B Law give a range of emission intensities. There is a 1-1 relationship.

      • Willard | September 14, 2016 at 9:27 am |
        You call that a kna-ife, angech? That is a kna-ife:
        True.
        Thanks. The death of a thousand cuts.
        One has to be careful talking the planck in case one falls off.

      • angech wrote:
        “Not true.
        eg definition
        “The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. ”
        is very suggestive of a Bells Curve.”

        That is not my understanding of what the original commenter wrote, which was “All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time. No law of physics is complete”

        This is vague, but my interpretation of it is that F does not equal ma, but that, given an F, a is not exact but is represented as a probability distribution that’s a Bell Curve. Which is wacky.

      • Appell, ““All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time.”

        Talk about pseudoscientific babble speak.”

        More of an engineering truism. The limits of most models are approximations. Newtonian physics need corrections as you near the speed of light, the ideal gas laws need various corrections as you approach limits. S-B is an ideal approximation that nearly always needs some adjustment. So if you use exclusively ideal approximations your expectations will always be unrealized to some degree.

        Engineers normally start by looking at how unrealistic expectations are then adjust accordingly. Which makes it humorous when “philosopher” types start telling engineers, what engineers always do :)

      • The laws of physics aren’t the problem, it’s the application of them. Of course they pertain to idealized situations. But if you’re trying to use Newton’s F=ma law at relativistic speeds, you’re not using the right law, and that’s your fault, not the law’s. Instead you should be using the laws of general relativity.

        The real world is complex. But that complexity is the world’s, and not the laws. The laws are useful precisely because they aren’t complex.

      • Appell, “Instead you should be using the laws of general relativity.”

        Well, it is pretty obvious that a number of climate model parameterizations need an updated version of whatever they were using. That is kind of the point, revisiting simplifying assumptions instead of defending flawed results.

        “Well, it is pretty obvious that a number of climate model parameterizations need an updated version of whatever they were using.”

        Why is that obvious?

        “That is kind of the point, revisiting simplifying assumptions….”

        Climate modelers do this constantly.

      • “Why is that obvious?”

        Perhaps because clouds and aerosols, the same big uncertainties that have always been the big uncertainties, are still big uncertainties? Clouds are pretty funny, really, since convective triggering is a parameterization based on a real surface temperature, and the models miss the temperatures that are being parameterized.

        Aerosols have been revised since the last set of runs, but what should be “normal” natural aerosol optical depth is a bit of a guess, so the current period of low volcanic activity could be contributing to warming currently attributed to just about everything else.

        Unless models can more accurately “get” real surface temperatures, not many of the parameterizations based on thermodynamics are very “robust”.

      • “Assuming that models now run warm means assuming surface temperature models have no errors.”

        No. It means that the models register the surface and the surface does not represent the “planet”. Ignoring for the moment dubious adjustments to the oceanic parts of the surface record, even the surface temperatures after Nino equilibrium is reestablished, fall short of model predictions.

        That is just the beginning of the story. Secular temperature increase decreases continuously to the tropopause. The tropopause is about flat, and above it, temperature decreases with time. Very sharply in the middle stratosphere, the highest altitude our Nimbus style satellites can see.

        Co2 continues to radiate well into the mesosphere, cooling the “planet” above the view of common satellites.

        Tell me. Which altitude referenced above represents the “planet”.

    • dallas wrote:
      “Perhaps because clouds and aerosols, the same big uncertainties that have always been the big uncertainties, are still big uncertainties?”

      And the carbon cycle — huge uncertainties there.

      But why is uncertainty a reason for inaction? Are you assuming uncertainty about a factor means its influence is zero until the uncertainty is reduced to…what?

      The ironic thing about all this is that the radiative part of models regarding GHGs like CO2 is the *least* uncertain part of climate models, because it’s the most approachable via fundamental physics.

      Still pseudoskeptics complain about CO2, when the real uncertainty in models is in the carbon cycle, clouds, and aerosols. They have it all upside down.

      • “But why is uncertainty a reason for inaction?””
        Try
        Look before you leap?
        Out of the frying pan into the fire?
        Not terribly original
        Even a stitch in time means you have known insight into the problem before you attempt to fix it
        Which leads to if it ain’t broke don’t fix it.
        The road to hell is paved with good intentions.

      • angech: Cliches, and not very applicable at that.

        Uncertainty does not mean nothing is changing.

      • No. The radiative forcing is the MOST uncertain part. The fluid dynamic parts of the models are actually pretty good. Otherwise they would not produce indices for ENSO, however incorrect in space and time.

        Just go to MODTRAN. Look up and down. It “sees” NO radiation in the atmosphere below a kilometer. At a kilometer it begins to pick up signal from the CO2 bands as a lessening of the outbound blackbody radiation. The altitudes for water and ozone are about 4 and 5 km respectively.

        The problem here is complex. If MODTRAN is correct, there is NO radiation in the first km of the atmosphere to transfer. The transfer must all be accomplished by conduction and convection…

      • “Just go to MODTRAN. Look up and down. It “sees” NO radiation in the atmosphere below a kilometer”

        Again, what does that even mean, “sees?”

      • “If MODTRAN is correct, there is NO radiation in the first km of the atmosphere to transfer. The transfer must all be accomplished by conduction and convection…”

        That sounds ridiculous, and I’m sure it’s incorrect. Or rather, your interpretation is. There is lots of IR in the bottom km of the atmosphere….

  6. Yeah, why is it that when it’s garbage in, garbage out, it’s always the computer’s fault?

    • Where is the “garbage” here?

      “Description of the NCAR Community Atmosphere Model (CAM 3.0),” NCAR Technical Note NCAR/TN–464+STR, June 2004.
      http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

      • Know it well, have studied it cover to cover. Two basic sections plus appendices. First section describes the dynamical core. Second section describes parameterizations – all the values given to all those PDEs. Therein lies the validation problem, because hindcasting cannot validate them given the attribution problem. So CAM runs objectively hot compared to balloon and satellite observations. If anything, invalidation.

        So CAM runs objectively hot compared to balloon and satellite observations.

        Which gets to the kluge I mentioned, the history of the models I remember reading was that they all ran cold, until they changed how they “conserved” water vapor, after which they all run warm.

      • Assuming that models now run warm means assuming surface temperature models have no errors. Which may well not be true:

        “Historical Records Miss a Fifth of Global Warming: NASA,” NASA.gov, 7/21/16
        http://www.nasa.gov/feature/jpl/historical-records-miss-a-fifth-of-global-warming-nasa

      • David, you know a lot of people, would you please ask around for a manifest or table of contents, even a few index pages, of just what data it was that Phil Jones ‘dumped’? I know the original data is no longer available but it would be helpful if you would provide a link that would be wonderful. Thank you for any help you might be able to bring to this discussion.

      • The point is that the equations in the model description *are* specific applications of the laws of physics, which Dan said did not happen.

  7. Thanks for addressing this.

    Regarding: “Validation, demonstrating the fidelity of the resulting whole ball of wax to the physical domain, is a continuing process over the lifetime of the models, methods, and software.”

    I think that one thing which can never be overstated is the need for independent testing. That’s the whole idea behind the use of independent laboratories accredited in accordance with ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories: “In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited.”

    The requirement to use accredited laboratories is imposed by authorities for measurement of CO2 emissions, even on small businesses with the tiniest CO2 emissions.

    Meanwhile the models – the reason why authorities are imposing all this on mankind – are nowhere near being tested or validated by the same standards.

  8. There is a lot of general harrumphing here that isn’t attached to an examination of what GCMs actually do. But almost all of what is said could be applied to computational fluid dynamics (CFD). And that is established engineering.

    On the specific issues:
    “(1) In almost no practical applications of the Navier-Stokes equations are they solved to the degree of resolution necessary for accurate representation of fluid flows near and adjacent to stationary, or moving, surfaces. “
    True in all CFD. Wall modelling works.

    “(2) The assumption of hydrostatic equilibrium normal to Earth’s surface is exactly that; an assumption. The fundamental Law of Physics, the complete momentum balance equation for the vertical direction, is not used.”
    Hydrostatic equilibrium is the vertical momentum balance equation. The forces associated with vertical acceleration and viscous shear are assumed negligible. That is perfectly testable.

    “The balance between in-coming and out-going radiative energy exchange for a system that is open to energy exchange is solely an assumption “
    It isn’t assumed in the equations – indeed, such a global constraint can’t be. It is usually achieved by tuning, usually with cloud parameters.

    “The fundamental Laws of Physics are based solely on descriptions of materials. The parameterizations that are used in GCMs are instead approximate descriptions of previous states that the materials have attained.”
    There is a well developed mathematics of material with history, in non-newtonian flow. But I’m not aware of that being used in GCM’s. In fact, I frankly don’t know what you are talking about here.

    • Nick,
      I’m surprised at you suggesting that CFD can be treated as a mature science. Without experimental data specific to the problem, there are still major problems with turbulence modeling.

      You wrote: “Hydrostatic equilibrium is the vertical momentum balance equation. The forces associated with vertical acceleration and viscous shear are assumed negligible. That is perfectly testable.”

      You are correct that it is testable. It has already been tested. The Richardson assumption gives useless (arbitrary) answers after a period of less than three weeks. ( See The Impact of Rough Forcing on Systems with Multiple Time Scales: Browning and Kreiss, 1994; and Diagnosing summertime mesoscale vertical motion: implications for atmospheric data assimilation: Page, Fillion, and Zwack, 2007.) It leads to nonphysical high energy content at high wavenumbers, sometimes called “rough forcing”. It then becomes necessary to fudge the model by introducing a nonphysical dissipation to avoid the model blowing up. The GCMs all seem to be locked into this problem because of the legacy of large investment in adaptation of short term meteorological modeling. I read recently of some moves afoot to fix this problem but it requires a radical change-out of the numerical scheme for atmospheric (and shallow ocean) modeling and it hasn’t happened yet.

      What we have now is a heavily parameterised, non-physical solution which does not converge on grid refinement and which shows negative correlation with regional observations in most of the key variables. Do you want to buy a bridge?

      • “It leads to nonphysical high energy content at high wavenumbers, sometimes called “rough forcing”.”
        The corollary to testing, where instability is found, is updraft modelling, which emulates the meteorological processes. Checking your reference Pagé et al 2007:

        Using a state of the art NWP forecasting model at 2.5 km horizontal resolution, these issues are examined. Results show that for phenomena of length scales 15-100 km, over convective regions, an accurate form of diagnostic omega equation can be obtained. It is also shown that this diagnostic omega equation produces more accurate results compared to digital filters results.

        People can solve problems, if they try.

      • Nick,
        Your reference to updraft modeling takes me to an empirical subgrid model of deep convection. Was that your intent?
        Please do read the Page et al reference. There are two omega grids generated for comparison, one is “an accurate diagnostic form of omega equation” and the other (derived from the Richardson equation) is a high resolution version of what is used in climate models. Note the separation after 45 minutes in the modeling with a little wind divergence. Note also the pains taken to eliminate the high energy variation at high wavenumbers in the diagnostic version.

      • “empirical subgrid model of deep convection. Was that your intent?”
        The section heading of 4.1 is “Deep Convection”. The first subsection, 4.1.1 is the updraft ensemble.

      • Nick,
        There are a number of steps in converting a set of continuous equations into a numerical model:-
        1) Choice of spatial discretisation ( which controls spatial error order and grid orientation errors)
        2) How to evaluate time differentials, specifically whether time differentials are evaluated using information available at the beginning of the timestep (explicit scheme), at the end of the timestep (fully implicit scheme) or using a combination of end-timestep estimates and beginning-timestep information (semi-implicit) scheme
        3) Control of timestep
        4) Choice of SOLVER. This is the mathematical algorithm which solves the set of simultaneous equations which are developed as a result of the above choices.
        The original question was about whether it was possible to relate a numerical solution to the continuous equations – the governing equations from which the numerical formulation is derived. At most, your proposed test will tell you whether your solver is working to correctly solve your discretised equations. It can tell you nothing about whether your numerical formulation is working to satisfy the original continuous equations.
        David Appell,
        I say with no disrespect that it is evident that you have never developed numerical code for a complex problem. You wrote:
        “But in the real world, numerical solutions are calculated because there is no possibility of an analytic solution, or anything close to it. That’s certainly the case for climate. And if you are close to an idealized, textbook situation you’d first try perturbation theory anyway.”

        Your first statement is almost true. (A number of years ago, I was asked to develop a high resolution numerical code to test the validity of a solution which was analytic in a non-physical mathematical space, but which then had to be numerically inverted back to physical space.) Of necessity, this means that the best we can ever do to test a numerical formulation is run a series of NECESSARY tests. There is no fully sufficient test of the validity of the solution for the actual problem to be solved. For most physical problems, however, there is an analytic solution available for a simplified version of the problem. You may wish to solve a problem (acoustic propagation, fluid displacement, tracer dispersion, whatever) across naturally heterogeneous media. You can create analytic solutions for the same problems but applied to layered homogeneous media (for example), and this then allows one (necessary) test of the numerical formulation, which directly relates the discretised numerical solution to the continuous equations. Alternatively, you may have a real problem involving arbitrarily varying boundary conditions. You would then always at least ensure that the solution is valid for a fixed functional variation of boundary conditions with known solution. Finite element model of stress-strain? You test the formulation against a simple mechanical problem with known solution before applying it to the real problem you are trying to solve.

        Even with N-S, as I have mentioned, there are analytic solutions available for certain steady-state problems. There are also valid analytic bounds that can be put on N-S solutions at certain length and time scales. Research labs and universities might be happy to use untested code, or even worse, to use results from code which is known to be in error when such tests are applied. Engineers cannot afford to be quite so imprudent.

      • “There are a number of steps in converting a set of continuous equations into a numerical model”
        Yes, of course. And they have been developing them for essentially the same geometry for over forty years. The GCMs are outgrowths of, and often use the same code as, numerical weather forecasting programs. These are used of course by met offices, military etc. They are not programs tapped out by climate scientists.

        The programs use a regular grid topology, elements typically hexahedra, 100-200 km in the horizontal, and 30-40 layers in the vertical, so they are long and thin. They are mapped to follow terrain. Grid and timestep size is highly constrained by the Courant stability condition, because of the need to resolve in time whatever sound waves can be represented on the grid (horizontally). For solution, they focus on the pressure and velocity fields (the dynamical core), which have the fastest timescales. In CAM3, they use finite differences in the vertical, and a spectral method in the horizontal, for speed. The spectral method requires inversion of a matrix with rank equal to the number of harmonics used.

        I think that if you verify the solution using a different differentiation procedure, you are finding out whether the processes are solving the PDE, because that is what they have in common. But in any case, in CFD at least, that is not commonly what is done. Instead, various local diagnostics are examined – flow at sharp corners, boundary layer detachment, vortex shedding, stagnation points etc. These are not analytic solutions, but they are circumstances under which some subset of terms is dominant. I’m not sure what the corresponding diagnostics in GCMs would be, but I’m sure they have some.
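        For a flavour of the two differentiation procedures mentioned above (finite differences versus a spectral method), here is a toy comparison of my own, not anything from CAM3: differentiate a smooth periodic function both ways and check each against the known derivative.

        import numpy as np

        # Periodic test function on [0, 2*pi): f(x) = sin(3x), exact derivative 3*cos(3x).
        n = 64
        x = 2 * np.pi * np.arange(n) / n
        f = np.sin(3 * x)
        exact = 3 * np.cos(3 * x)

        # Centred second-order finite difference (periodic wrap-around).
        dx = x[1] - x[0]
        fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

        # Spectral derivative: multiply the Fourier coefficients by i*k.
        k = np.fft.fftfreq(n, d=dx) * 2 * np.pi
        spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

        print("finite-difference max error:", np.abs(fd - exact).max())
        print("spectral max error         :", np.abs(spec - exact).max())

        On a smooth, well-resolved field the two procedures agree closely; where they disagree, the field is not being resolved by the grid.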

    • Wall modeling, ‘laws of the wall’, works if the present flow of interest corresponds to both the fluid and the flow structure on which the empirical description is based. The requirement that the fluid corresponds can be eased off a bit if the empirical description has correctly accounted for fluid effects; i.e., the description uses only fluid properties and the controlling dissipative phenomena have been correctly captured. Surface curvature (concave vs. convex), microstructure of the interface of the surface bounding the flow (wall roughness), large-scale flow-channel geometry effects (abrupt expansion, abrupt contraction, smoothly converging, smoothly diverging, inter-connections between embedded solid structures), solid stationary vs. compliant fluid interface, forced convection vs. natural convection vs. mixed convection as in natural circulation, among others, all require different ‘laws’. That holds even for very high Reynolds number flows; see the Super Pipe experiments.

      Such ‘laws’ are not laws.

      The case of single-phase flow is in much better shape than even that of two- or multi-phase flows in which phase change is not an issue. Then we have flows that involve phase change, both in the bulk mixture and at a wall. There are no ‘laws’ for such flows, only EWAG correlations for specific fluids, specific flow-channel geometries, specific types of mixtures, and specific wall heating, or cooling, states.

      • “Wall modeling, ‘laws of the wall’ work if the present flow of interest corresponds to both the fluid and the flow structure on which the empirical description is based.”
        Yes. And the kinds of surfaces encountered in atmospheric modelling are not infinite, and have been studied for decades. Particularly ocean, with evap and heat and gas exchange.

    • 1. Hydrostatic equilibrium is the vertical momentum balance equation with the forces associated with vertical acceleration and viscous shear assumed negligible. That is perfectly testable.
      2. kribaez: it is testable, and it is not negligible.
      3. Where it is shown not to be negligible, use updraft modelling, as in a state-of-the-art NWP forecasting model at 2.5 km horizontal resolution.
      4. Of course this was not used; as per point 1, the forces are assumed negligible.

    • “The forces associated with vertical acceleration and viscous shear are assumed negligible.”

      This sounds like a world without cold fronts or thunderstorms.

      I like cold fronts and thunderstorms.

      • I like SSW’s. They certainly shake things up, as do changes in the jet stream

        tonyb

      • “This sounds like a world without cold fronts or thunderstorms.”
        They have specific models for “deep convection” – updrafts etc.

      • Nick

        I am not sure that SSWs come into that category. Thunderstorms etc. also differ in their effect, as do tornadoes etc.

        How do you include random numbers of each type, each of which are themselves of differing intensity?

        Tonyb

      • Nick

        Speaking of tornadoes, the UK has more for its land area than any other country, although they are of course not as severe as those in Tornado Alley.

        http://www.manchester.ac.uk/discover/news/new-map-of-uk-tornadoes-produced

        I do not recall seeing these mentioned in the recent output from the Met Office, which is now modelling regional impacts.

        Tonyb

      • Tony,
        From that Sec 4.1:
        “The scheme is based on a plume ensemble approach where it is assumed that an ensemble of convective scale updrafts (and the associated saturated downdrafts) may exist whenever the atmosphere is conditionally unstable in the lower troposphere.”
        They don’t model individual plumes. They extract the average effects that matter on a GCM scale – entrainment etc. And yes, GCMs won’t model individual tornadoes.

      • “This sounds like a world without cold fronts or thunderstorms.”
        They have specific models for “deep convection” – updrafts etc.

        Yes, they have to, because for cold fronts or thunderstorms, the hydrostatic approximation is invalid.

        As I understand it, most GCMs currently are based on hydrostatic assumptions but most future plans are for non-hydrostatic models.

        Maybe that would help some of the failings but not the limits to prediction because the basic chaotic equations still remain.

      • “most future plans are for non-hydrostatic models”
        They won’t reinstate the acceleration term. The main reason for the hydrostatic assumption is that it overcomes the limit (Courant) on time step vs horizontal grid – if the equation allows sound waves, it has to resolve them in time, else the distorted version will gain energy and blow up.
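        Rough numbers, purely my own illustration of that Courant limit (dt <= dx / wave speed), with assumed values rather than anything from a model:

        # Illustrative explicit-scheme Courant limits, dt <= dx / c.
        dx = 100e3  # assumed horizontal grid spacing in metres, order of the 100-200 km quoted above
        for name, c in [("sound waves", 340.0), ("jet-level winds", 50.0)]:
            dt = dx / c
            print(f"{name:15s}: dt <= {dt:6.0f} s (~{dt/60:.0f} min)")

        Resolving sound waves caps the step at a few minutes on a 100 km grid; remove them from the system and the permissible step is set by much slower motions, which is roughly the trade-off being described above.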

  9. Dan Hughes wrote:
    “In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.”

    Dan, you don’t know what you’re talking about. Read:

    “Description of the NCAR Community Atmosphere Model (CAM 3.0),” NCAR Technical Note NCAR/TN–464+STR, June 2004.
    http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

  10. @author
    ‘It is important to note that it is known that the numerical solution methods used in GCM computer codes have not yet been shown to converge to the solutions of the continuous equations.’

    Interesting… is there any reference to peer-reviewed papers demonstrating this?
    Thanks.

    • Lower order effects related to the discrete approximations are known to be present in current-generation GCMs.

      One of the more egregious aspects that explicitly flies in the face of the Laws of Physics is the parameterizations that are explicit functions of the size of the discrete spatial increments. That’ll never be encountered in fundamental laws of material behaviors.

      Importantly, such terms introduce lower-order effects into the numerical solution methods. As the size of the discrete increments changes, these terms change.

      As the spatial grids are refined, for example, the topology of the various interfaces within the solution domain changes, along with various spatial extrapolations associated with parameterizations.

      The theoretical order of the numerical solution methods cannot be attained so long as these lower-order effects are present.

      • Dan Hughes wrote:
        “One of the more egregious aspects that explicitly flies in the face of the Laws of Physics is the parameterizations that are explicit functions of the size of the discrete spatial increments. That’ll never be encountered in fundamental laws of material behaviors.”

        Ohm’s law is a parametrization. So is Young’s modulus.

        GCM parametrizations aren’t ideal, of course. But they are necessary to make the problem tractable. There simply isn’t enough computing power, by far, to handle all the scales of the problem necessary for a full physics treatment, from microscopic to macroscopic.

        So modelers make approximations. All modelers do this, of course, even engineering modelers. But modeling climate is a far harder problem than any engineering problem; it’s the most difficult calculation science has ever attempted.

        As Stephen Schneider used to say, there is no escaping this uncertainty on the time scale necessary if we are to avoid climate change, so we will have to make decisions about what to do, or not do, about the CO2 problem in the face of considerable uncertainty. We make lots of decisions with uncertain or incomplete information, but this one has the most riding on it.

      • There simply isn’t enough computing power, by far, to handle all the scales of the problem necessary for a full physics treatment, from microscopic to macroscopic.

        This is another misconception – the problem isn’t compute power or resolution. The problem is the non-linearity of the equations.

      • the problem isn’t compute power or resolution

        It is at budget review time.

        As Stephen Schneider used to say, there is no escaping this uncertainty on the time scale necessary if we are to avoid climate change, so we will have to make decisions about what to do, or not do, about the CO2 problem in the face of considerable uncertainty. We make lots of decisions with uncertain or incomplete information, but this one has the most riding on it.

        The problem, David, is that when you dig, all of the evidence falls back to the hypothesized effect of increasing CO2.
        What we lack is a quantification of its actualized change in temp. Because I can go out and point an IR thermometer at my asphalt driveway, my roof, and point out more forcing than the surface CO2 over my yard has.
        I have also calculated how much the temperature goes up and down as the length of day changes, and that sensitivity is less than 0.02 F per W/m^2 outside the tropics.

        “The problem, David, is that when you dig, all of the evidence falls back to the hypothesized effect of increasing CO2.
        What we lack is a quantification of its actualized change in temp.”

        Um, have you ever read a climate textbook?

      • David Appell said Ohm’s law is a parametrization. So is Young’s modulus.

        No, Young’s modulus is a property of materials.

        Ohm’s law is one of the basic equations used in the analysis of electrical circuits. I assume you are referring to the resistance as being a parameter. Nope again: the resistance is a property of materials.

        Ohm’s law is a model that is applicable to materials that exhibit a linear relationship between current and voltage. There are, however, components of electrical circuits which do not obey Ohm’s law; that is, their relationship between current and voltage (their I–V curve) is nonlinear (or non-ohmic).

      • The problem is the non-linearity of the equations.

        This is a misconception. “Non-linearity” isn’t a problem; there are many easily solvable non-linear equations. Here: find the root of x^2 = 0. You can do it analytically or numerically, but either way it’s quite easy.

        No, Young’s modulus is a property of materials.

        Young’s Modulus is a parameterization of an isotropic material property. Go deeper, and you get the anisotropic properties. Then you get them as a function of time. Then you look at how elasticity also varies microscopically, at grain boundaries, in high-stress regimes, etc. Young’s Modulus is a simplification of all of these nitty-gritty details.

        The Ideal Gas Law is another parameterization, coming from treating gas particles as undergoing only elastic collisions (which is why it breaks down as you get close to phase transitions).

        And, yes, Ohm’s Law is another one, a simplification of the laws surrounding electron scattering at quantum mechanical levels. You can get there from the Drude Model, if I recall right, which is itself a simplification.

      • Dan, yes, Young’s Modulus is about material (obviously), and it’s a parametrization, as Benjamin just explained very well. And so is the ideal gas law, Ohm’s law, and the law of supply and demand — simple summaries of the complex collective interactions of large systems.

      • Dan,

        True believers will attempt to deny, divert, and confuse by any means possible.

        They cannot bring themselves to accept the IPCC statement that prediction of future climate states is impossible.

        Cheers.

      • And look at the very next sentence the IPCC wrote:
        “Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.”

        https://www.ipcc.ch/ipccreports/tar/wg1/501.htm

        Yes, there is the possibility of abrupt change — and scientists warn about that. (There are two National Academy of Sciences reports on that in the last two decades.) But if you look at the climate of the Holocene, it looks quite smooth and predictable. If you look at the ice ages of the Quaternary, there are clear patterns that dominate the progression of climate. Milankovitch cycles don’t explain them exactly, but that’s where you start, and you get a useful fit to the actual climate history.

        We may never be able to predict something like the Younger Dryas (but maybe), or a rapid change due to, say, melting clathrates. But with big enough ensembles you get a good sense of what final climate states are most probable, and that may be the best climate models can offer. Then human intelligence has to take all the projections in and think and decide and plan. Maybe you plan for the worst case scenario. Maybe you choose to take a risk and not plan for it but for the maximum of the probability distribution. But there’s still uncertainty involved — that’s just reality. That doesn’t mean you simply and smugly reject computer models — instead you use them with their strengths and limitations in mind.

      • David Appell,

        The main limitation of climate models seems to be that they cannot predict future climate states. You seem to be implying that they have some other unspecified utility.

        You tell me I can’t simply reject the climate models just because they have no demonstrated utility. OK, I won’t reject them. I’ll just ignore them. They still cannot predict the future climate.

        Saying you can get a good sense of the future by averaging more and more incorrect outputs is akin to saying that future climate states can be usefully predicted.

        You seem to agree that sudden changes can’t be usefully predicted. The logical course of action would seem to be to assume that things will go on more or less as usual, with occasional moments of terror and panic when confronted by sudden unpredictable adverse changes.

        This seems to be what history shows. I assume it will continue.

        Cheers.

      • “Saying you can get a good sense of the future by averaging more and more incorrect outputs is akin to saying that future climate states can be usefully predicted.”

        The outputs aren’t “incorrect.” They’re the outputs of the model. If you knew which were “correct” and which were “incorrect,” you wouldn’t need a model, would you?

        The probability distribution of future climate states can be reasonably calculated. One can then decide how to interpret that distribution and what to do about it. Sure, maybe there is a methane bomb and temperatures skyrocket in a few decades. If you dislike that possibility, do your best to minimize the chance of a methane bomb.

      • The point is, you left out an important part of the IPCC’s statement.

      • David Appell,

        If you have 130 models producing 130 different results, only 1 can be correct, at most. If running each model millions of times produces a plethora of different results (and it does – the atmosphere behaves chaotically), then again, only one can be correct.

        Averaging all the demonstrably incorrect results helps you not at all. It is not possible to forecast future climate states. The IPCC says so, and so do I.

        As to not quoting the IPCC in full, I assume most people can read it for themselves. It seems likely that many did not see where the IPCC stated that it is not possible to predict future climate states.

        The IPCC states –

        “Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential.”

        Unfortunately, the “new methods of model diagnosis” are required because present methods don’t actually work. Even more unfortunately, attempts at new methods produce results just as dismal as the old methods. Maybe you can come up with a new method which will diagnose the problem with models attempting to predict the unpredictable future, (according to the IPCC – and me), but it seems the diagnosis is simple. The models produce many wrong answers. If there is a correct answer, it is indistinguishable from the garbage surrounding it.

        Apologies if I failed to quote precisely the matter you wanted me to. You could have quoted it yourself, and saved me the trouble, I suppose.

        Maybe the answer to climate state predictive computer models is to examine chicken entrails instead – or consult a psychic, like the White House did.

        Cheers.

      • *ALL* model results will be incorrect. Because all models have errors, and besides, no one knows the future time series of GHG emissions, aerosol emissions, volcanic emissions and solar irradiance. You can’t predict future climate even in principle.

        However, you don’t need a “correct” prediction for models to be useful. Knowing the probability distribution of an ensemble is useful information, especially since essentially none of them give a reassuring result for any realistic RCP. It’s not necessary to know if warming in the year 2100 will be 3.0 C, or 3.1 C – no policy decisions hinge on it.

      • How can an ensemble of results, all of which you admit are wrong, provide any useful results? Do two wrongs make a right? Do 100 wrongs make a right?

      • Allen:

        1) Would you please specify the daily CO2, CH4, N2O etc human emissions for the next 100 years? Also, all the SO2 emissions of all volcanic eruptions, their latitudes, and the daily solar irradiance for the same time period. Thanks.

        2) George Box: “All models are wrong, but some are useful.” (He wasn’t just talking about climate models.)

      • How can an ensemble of results, all of which you admit are wrong, provide any useful results?

        Ahhh, man, these engineers said that this bridge could support 20 tons, when in reality, it can support 20.00812751234 tons.

        That model was completely useless! It totally didn’t get us in the ballpark of what we needed to know.

      • This is what scientists mean when they say “all models are wrong”.

        Every model, from the Law of Gravity to the Ideal Gas Law, is an approximation of reality. None of them are exactly correct. (And by “exact”, we mean exact. As in math, to an infinite number of decimal places). Every model is wrong.

        But many of them are still extremely useful while not being exactly correct. We don’t usually need a hundred billion billion digits of accuracy.

      • “That model was completely useless! It totally didn’t get us in the ballpark of what we needed to know.”

        +1

      • (Actually it was +1.07544856, but +1 is good enough for a blog comment.)

  11. @author
    ‘It is important to note that it is known that the numerical solution methods used in GCM computer codes have not yet been shown to converge to the solutions of the continuous equations.’

    When has this *ever* been shown for numerical solutions to PDEs?

    • You have to be joking. With the vast majority of engineering applications, it is a simple matter to apply the numerical solution to a problem which has a known analytic solution. This then allows a direct test of the process of conversion of the governing equation into its numerical form. As an example, you can consider the diffusivity equation for single-phase flow much loved by hydrologists. (This is a PDE involving differentials in space and time.) This can be solved analytically for a variety of flow conditions. So you solve it numerically with differing numerical schemes or different levels of grid refinement and compare the results.

      Alternatively, to avoid a white-mouse test, many industries have set up more complex benchmark problems which do not have an analytic solution, but which are used by multiple code developers. Over a period of time, as different approaches yield the same answers, the true solution becomes well known and is available for comparative testing.

      Lastly, if there is no analytic solution available and no benchmark problem for comparison, you can always run an L2 convergence test. This involves running the same numerical scheme on successively finer grids to test that the solution is converging. This last is a necessary but not sufficient condition since it does not necessarily mean that you are converging to the correct solution. On the very few convergence tests of GCMs which appear in the literature, the GCMs fail even this weak test because of the need to make arbitrary adjustments to the parameters used to define subgrid scale behaviour. This should be telling us something.
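      By way of illustration only, here is a toy version of that test in Python (the 1-D diffusivity equation with a known analytic solution, my own sketch, not a GCM benchmark): solve on successively refined grids, measure the L2 error against the analytic solution, and check the observed order of convergence against the scheme's theoretical second order.

      import numpy as np

      def solve_diffusion(nx, t_end=0.1, D=1.0):
          # Explicit FTCS solution of u_t = D*u_xx on [0,1], u(0)=u(1)=0, u(x,0)=sin(pi*x).
          x = np.linspace(0.0, 1.0, nx)
          dx = x[1] - x[0]
          dt = 0.25 * dx**2 / D                 # keep the explicit scheme stable
          nsteps = int(round(t_end / dt))
          dt = t_end / nsteps                   # land exactly on t_end
          u = np.sin(np.pi * x)
          for _ in range(nsteps):
              u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
          exact = np.exp(-np.pi**2 * D * t_end) * np.sin(np.pi * x)
          return np.sqrt(np.sum((u - exact) ** 2) * dx)   # discrete L2 error

      errors = {nx: solve_diffusion(nx) for nx in (21, 41, 81, 161)}
      grids = sorted(errors)
      for coarse, fine in zip(grids, grids[1:]):
          p = np.log(errors[coarse] / errors[fine]) / np.log(2.0)
          print(f"nx = {coarse:3d} -> {fine:3d}: observed order ~ {p:.2f}")

      As the comment above says, passing this kind of test is necessary, not sufficient: the observed order should approach the theoretical order, but that alone does not prove the scheme is converging to the right solution of the full problem.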

      • “So you solve it numerically with differing numerical schemes or different levels of grid refinement and compare the results.”

        You can do that. But simpler and more direct tests are:
        1. Check conservation of mass, momentum and energy. Analytic solutions are just obtained by applying those principles, but you can check directly.
        2. As with any equations, you can just check by substitution that the solution actually satisfies the equations.
        3. There are standard tests of an advective flow, like just let it carry along a structure (a cone, say) and see what it does with it.

        Again, GCMs are just a kind of CFD. People have worked out this stuff.

      • Nick,
        My response was intended to address the comment of David Appell wherein he implied that numerical solutions of PDE’s were never, or hardly ever, tested against the continuous equations. I was pointing out au contraire that in the majority of physical problems, numerical solutions are indeed tested against the governing equations using an analytic solution for predefined boundary conditions – normally a simplified version of the actual problem to be solved. No engineer developing new code would dream of NOT testing that the code was solving correctly if an analytic solution is available (and it normally is for a simplified version of the problem to be solved). Even in the case of the N-S equations, there are a number of exact solutions for steady-state assumptions; there are also exact solutions for incompressible flow assumptions. As yet there does not exist an analytic solution for non-steady-state flow of compressible fluids.

        Your response to me appears to be confused on several issues:-

        Analytic solutions are not derived by the application of conservation principles. They are derived by solving the continuous equations for specified initial and boundary conditions. If the continuous equations expressly conserve something then the analytic solution will conserve the same property, but not otherwise. Whether the numerical solution conserves those properties is a function of the numerical formulation. I think that you are talking (instead) about testing whether aggregate properties hold in the model.

        “As with any equations, you can just check by substitution that the solution actually satisfies the equations.” You can only do this if there is an analytic solution available.

      • “Analytic solutions are not derived by the application of conservation principles. They are derived by solving the continuous equations for specified initial and boundary conditions. “
        And what equations do you have in mind that are not expressing conservation principles?

        “You can only do this if there is an analytic solution available.”
        No, you have a differential equation (discretised), and a postulated solution. Differentiate the solution (discretely) as required and see if it satisfies the de.

      • Nick, The differential equations governing the eddy viscosity don’t express any “law of physics” but are based on assumed relationships and fitting data. They are leaps of faith in a very real sense and their developers say so.

      • dpy6629,

        They are leaps of faith in a very real sense and their developers say so.

        One wonders where that leaves making changes to the radiative properties of the atmosphere in a very real sense.

      • Nick,
        You suggested:-“No, you have a differential equation (discretised), and a postulated solution. Differentiate the solution (discretely) as required and see if it satisfies the de.”

        The reality is that you don’t have a single differential equation. You have a set of equations, the number of which depends on your spatial discretization. So, what do you want to do? Pick a few grid cells at random and pick a few timesteps at random? You plug in the local variables to see whether they are a valid solution to the discretized equations? Well you can do this, I guess, but if you use the same quadrature as in the numerical scheme, what do you think you get back and what do you think it tells you? There are easier ways to test whether your solver is working, if that is your aim here. Your test, however, tells you nothing about whether your numerical formulation offers a valid solution to the continuous equations.

      • David,
        “The differential equations governing the eddy viscosity don’t express any “law of physics””
        We were talking about systems with analytical solutions that might be tested.

        kri,
        “but if you use the same quadrature as in the numerical scheme, what do you think you get back and what do you think it tells you”
        Well, use a different one. That will tell you something. And if you’re handling data on the scale of solving a pde, it’s no real challenge to put the solution back into a system of equations and get a sum-of-squares error on the grid for some set of times.
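        For what that kind of substitution test looks like in miniature (a toy heat-equation example of my own, not a claim about GCM practice): solve with backward Euler, then plug the stored field into an independently discretised form of the equation and sum the squared residuals over the grid and the timesteps.

        import numpy as np

        # Solve u_t = D*u_xx with backward Euler, then check the stored field by
        # substituting it into a *different* discretisation (centred in time and space).
        D, nx, nt, t_end = 1.0, 41, 200, 0.1
        x = np.linspace(0.0, 1.0, nx)
        dx, dt = x[1] - x[0], t_end / nt

        A = np.zeros((nx, nx))
        for i in range(1, nx - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
        A *= D / dx**2

        U = np.zeros((nt + 1, nx))
        U[0] = np.sin(np.pi * x)
        M = np.eye(nx) - dt * A                  # backward-Euler matrix
        for n in range(nt):
            U[n + 1] = np.linalg.solve(M, U[n])

        # Residual of u_t - D*u_xx at interior points and times, using a quadrature
        # that is not the solver's own.
        u_t = (U[2:, 1:-1] - U[:-2, 1:-1]) / (2 * dt)
        u_xx = (U[1:-1, 2:] - 2 * U[1:-1, 1:-1] + U[1:-1, :-2]) / dx**2
        residual = u_t - D * u_xx
        print("sum-of-squares residual:", np.sum(residual**2))

        The residual is not machine zero; its size reflects the truncation error of the solver's scheme as seen by the second discretisation, and it shrinks as the grid and timestep are refined.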

      • “No engineer developing new code would dream of NOT testing that the code was solving correctly if an analytic solution is available”

        Obviously.

        But in the real world, numerical solutions are calculated because there is no possibility of an analytic solution, or anything close to it. That’s certainly the case for climate. And if you are close to an idealized, textbook situation you’d first try perturbation theory anyway.

    • I believe kribaez to be right. A steady flat plate boundary layer has an analytical solution. Maxwell’s equations are linear and you can prove numerical solutions converge to the analytic solution.

      For turbulent CFD the story is different. Recent results call into question the grid convergence of large eddy simulation time accurate solutions. Without grid convergence it gets pretty tough to define what an “accurate” solution is.

      • Heat flow is another pretty easy example. Yes, there are analytical solutions for many simple PDE systems.

        Navier-Stokes is a particularly difficult problem, which is why there are huge prizes associated with making progress on it. I’m not sure that we need a completely convergent solution for NS for GCMs, though, so this is kinda a moot point.

  12. On my website “uClimate.com”, the logo is a butterfly. And the reason for this is simple, whereas the climate obeys all the laws of physics, because of the butterfly effect, it does so in a way that cannot be predicted.

    Or to be more precise, it can be predicted – but only that it is chaotic in its behaviour and as such in many aspects will behave in a way that appears to mean IT DOES NOT OBEY THE RULES OF “PHYSICS”.

    This is fundamentally why the academic culture fails when trying to understand climate. Because academia is taught a “deconstructional” approach whereby it is believed that a system can be totally described by the behaviour of small parts.

    In contrast, in the real world, most of the time, whilst it helps to know how small parts of the system work, the total system’s behaviour usually has to be described using parameters that are different from those of a small section. So, e.g., the behaviour of a patient in a hospital cannot be described by Newton’s equations … even if Newton’s equations dictate what we do … the complexity is such that they are irrelevant almost all the time for any doctor seeing a patient.

    Likewise, the climate – yes, its behaviour is determined by “physics” – but we need to treat it at a system level using the tools and techniques taught to real-world practitioners and not the theoretical claptrap taught in universities.

    • So you believe that a small perturbation could move the climate from its current state (and similar to what we’ve had for a few thousand years) to one that is radically different. But because we cannot predict the impact we should perturb away and not worry?

      • Steve

        In what way is today’s climate state similar to the one we had for a few thousand years? That assumes the Roman Warm Period, followed by the Dark Ages cold era, followed by the MWP, followed by the LIA never happened. Climate changes quite dramatically, but proxies are unable to pick up the changes due to their sampling methods and smoothing.

        tonyb

      • “small perturbation could move the climate from its current state (and similar to what we’ve had for a few thousand years) to one that is radically different”

        I find statements like this interesting.
        Can we judge which of our ‘perturbs’ would be good and which would be bad?
        I perturb little universes every time I trod through my yard.
        Perhaps one day when the models are perfected we will be able to produce only good perturbs and achieve a perpetual sustained climate based on recent acceptable history.
        That will be a sad day and our true end.

      • “But because we cannot predict the impact we should perturb away and not worry?”

        There is an asymmetry in your argument. There are two possible actions:
        1) Continue to perturb (keep emitting GHGs)
        2) Discontinue perturbing (stop emitting GHGs)

        If climate is unpredictable then doing either 1) OR 2) could lead to horrendous consequences, benign consequences or no consequences. For all we know our current perturbation may be the only thing preventing a catastrophic ice age and in this case doing 2) is not good.

        Your unstated assumption is that human non-interference is optimal. But nature as far as we know has no law that says that human interventions are bad things. In fact nature has no concept of bad/good…it just is.

      • Trevor Andrade wrote:
        “Your unstated assumption is that human non-interference is optimal.”

        The science — not an assumption — shows that organisms have difficulty adjusting to rapid climate change.

        From there it’s a matter of values: do you value cheap electricity more than human and other species’ well-being? If so, burn all the coal you want. OK if the present poor and future generations have to pay for what you’re creating because you only want cheap electricity? Burn away.

        It’s ultimately a moral question.

    • And the reason for this is simple, whereas the climate obeys all the laws of physics, because of the butterfly effect, it does so in a way that cannot be predicted.

      It can’t be predicted deterministically, but it can be predicted statistically. It’s like trying to predict the motion of a single molecule of gas in a parcel of air, versus trying to predict the motion of the whole parcel. The latter is much, much easier.
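      A standard toy illustration of that distinction is the Lorenz-63 system (this is an analogy, not a climate model; the integration below is a simple RK4 sketch of my own): two runs started a hair apart end up in completely different states, yet their long-run statistics come out very close.

      import numpy as np

      def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      def rk4_step(s, dt=0.01):
          k1 = lorenz_rhs(s)
          k2 = lorenz_rhs(s + 0.5 * dt * k1)
          k3 = lorenz_rhs(s + 0.5 * dt * k2)
          k4 = lorenz_rhs(s + dt * k3)
          return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      def run(z0, nsteps=100_000):
          s = np.array([1.0, 1.0, z0])
          zs = np.empty(nsteps)
          for i in range(nsteps):
              s = rk4_step(s)
              zs[i] = s[2]
          return s, zs

      end_a, z_a = run(1.0)
      end_b, z_b = run(1.0 + 1e-9)   # initial condition perturbed by one part in a billion

      print("final states differ by:", np.abs(end_a - end_b))   # order of the attractor size
      print("time-mean z, run a    :", z_a[5000:].mean())        # spin-up discarded
      print("time-mean z, run b    :", z_b[5000:].mean())        # typically very close to run a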

      as such in many aspects will behave in a way that appears to mean IT DOES NOT OBEY THE RULES OF “PHYSICS”.

      Sorry; this is wrong. Butterflies, weather, and individual particles of air all appear to obey the rules of physics. Physics is sometimes chaotic.

      Because academia is taught a “deconstructional” approach whereby it is believed that a system can be totally described by the behaviour of small parts.

      By small parts and their interactions, yes. We are all made up of very large numbers of small parts (atoms) and their interactions. Physics has correctly nailed this one.

      • Benjamin,

        Physics is sometimes chaotic.

        A small semantic nit if I may. Some physical *systems* exhibit chaotic behavior:

        https://upload.wikimedia.org/wikipedia/commons/4/45/Double-compound-pendulum.gif

        … and some do not:

        https://en.wikipedia.org/wiki/Pendulum#/media/File:Oscillating_pendulum.gif

      • brandonrgates,

        You said it better. :-p

        If I were to rephrase my comment, I’d say “the physics of some systems is chaotic”.

      • I think your rephrasing is much less “misleading”. :-)

      • “It can’t be predicted deterministically, but it can be predicted statistically.”

        When someday I see a proof of this common, yet nonsensical, statement, I will quit laughing.

        It’s time for science in general to admit that the de-constructionist approach to explaining the world is, essentially useless.

        We keep waiting for that computer that…..

        Time to bring back metaphysics.

      • It’s time for science in general to admit that the de-constructionist approach to explaining the world is, essentially useless.

        …yeah. I’m hoping I misunderstand you, as it appears that you’re criticizing how science has been conducted for centuries. And science has been extremely useful during the last few centuries, so…

      • “And science has been extremely useful during the last few centuries, so…”
        Medicine maybe.
        Other than that science’s main legacy seems to be Malthusian darwinism and the cut throat capitalistic exploitation of the masses that we call modern society.
        On the other hand it completely destroyed the notion of teleology and the art of human living in comparability with the natural law and the environment.
        I just saw a magazine on the newsstand today that purports to explain human relationships by means of science.
        Disgusting.

        Pretty good for a right winger, eh?

        http://www.assisi-with-ingrid.com/art/landscape.jpg

      • Obvious model in action. Not anything natural exhibited. You must be presenting the view of someone who also supports AGW.

      • Medicine maybe.
        Other than that science’s main legacy seems to be Malthusian darwinism and the cut throat capitalistic exploitation of the masses that we call modern society.

        Medicine, electricity, the internal combustion engine, nuclear power, satellites, computers, refrigeration… yeah, science has been pretty useless the last few centuries. *cough*.

        There’s incredible irony in someone using a computer to tell me that science has been pretty useless.

      • “There’s incredible irony in someone using a computer to tell me that science has been pretty useless.”

        Believe me, if I had never seen a computer and, certainly, never ended up having to sit behind one for 8 hours a day, my life would be infinitely better.

        The categories we are speaking in are diverging.

        What I am saying is that the rise of scientific naturalism was a foul moment in history. As a philosophy (actually a religion) it stated that the entire universe could be comprehended by taking matter apart and reassembling it, studying the physical forces involved.

        The disaster is that this ideology became a monopoly in the public sphere. It drove out competing ways of understanding our world and helped bring about the brutal economic disasters of English capitalism, the communism that reacted to it, etc. It destroyed metaphysics and teleology. It destroyed the notion of living in harmony with the natural order and nature itself.

        To a large extent the climate crowd is in a similar throwback mode to what I am discussing. They also realize that man has lost his way in living harmoniously with his environment, economically, etc….
        But they base their solutions on the lie of computer modelling, AGW, etc… rather than getting to the heart of the matter, which is to see that the drift towards a strictly materialistic view of the world was a disaster.

      • As a philosophy (actually a religion) it stated that the entire universe could be comprehended by taking matter apart and reassembling it, studying the physical forces involved.

        Ehh, scientists don’t normally follow that view, strictly speaking. There are two distinctions to be made:

        1) Matter is not the only thing that matters (heh). Energy does, too! Or, more properly, bosons and fermions.

        2) Many scientists don’t necessarily believe that the entire universe can be comprehended by methodological naturalism, only that it’s the best tool we currently have for studying the natural world.
        In other words, there are no other good tools for studying the universe right now… but we can’t discount finding new tools in the future. Though, probably “science” would just come to encompass those tools as well.

        With those points in mind, yes, science and the scientific method have been absolutely fundamental in improving standards of living over the last few centuries.

      • That’s pretty good, Mosher.
        Its the first time anyone has attempted to address the issue for me, personally.

        I could still care less about global warming but it seems like there might be a basis for attempting a theory around computing statistics.

        Numerical errors and their highly correlated structure are another matter, but, thanks again.

  13. Even if you had a really perfect model there would be a problem that arises especially from it being deterministic and perfect.

    The climate system being somewhat chaotic, to make a forecast you have to calculate the results for a representative sample of micro states compatible with your initial conditions. Forecasting only works if you get a distribution with a sharp maximum. If the distribution is spread out over a somewhat broad attractor (as is to be expected) you are as clever as before, as this is not about forecasting a distribution but about forecasting a single event.

    If your “probable” outcomes differ by, say, 2 K, what policy would you want to recommend? Build this or that kind of mega big infrastructure at this or that place (taken for granted for the moment that the technology (still under development) will actually work, resources be available, risk of political hiccups like WW III not considered)? Start building now at maximum speed or better slowly as better technology and better information about the future climate becomes available? What to do when forecasts/technology change over time make previous efforts obsolete?

  14. A fundamental part of computationally modelling is understanding the system that’s being modelled. It’s clear that GCMs are trying to solve a set of equations that describe a physical system; our climate. We use GCMs because we can’t easily probe how this system responds to changes without such tools. Of course, one could argue about whether or not it would be better to use simpler models with higher resolutions, or more detailed models with lower resolution, or some other type of model, but that’s slightly beside the point. What’s important, though, is that those who use GCMs, and those who critique them, have a good understanding of the basics of the system being studied. Maybe the author of this post could illustrate their understanding by describing the basics of the enhanced greenhouse effect.

    • (a) CO2 is a GHG. (b) As the concentration of CO2 increases in Earth’s atmosphere, assuming all other physical phenomena and processes remain constant at the initial states, the planet will warm. (c) Eventually over time, a balance between the incoming and outgoing radiative energy at the TOA will obtain.

      The hypothesis (b) contains an assumption that is a priori known to be an incorrect characterization of the physical domain to which the hypothesis is applied. All other physical phenomena and processes never remain constant at the initial states.

      The hypothesis (c) assumes that all the phenomena and processes occurring within the physical domain, many of which directly and indirectly affect radiative energy transport, will likewise attain balance. So long as changes that affect radiative energy transport occur within and between Earth’s climate systems, the state at the TOA will be affected.

      Earth’s climate systems are open with respect to energy. Additionally, physical phenomena and processes, driven by (1) the net energy that reaches the surface, (2) redistribution of energy content already within the systems, and (3) activities of human kind, directly affect the radiative energy balance from which the hypothesis was developed.

      I’m a potentialist.

    • Started reasonably well, and then went off the rails somewhat. I don’t think (b) necessarily has the assumption of “all else being equal”. Alternatively, you can add a third point: that “as the temperature changes, this will initiate feedbacks that will either enhance, or diminish, the overall response”. I don’t really have any idea what a potentialist is.

      To be fair, it wasn’t an awful response, and the reason I asked the question was because a critique of a scientific tool – like a GCM – does require a good understanding of the system being studied. As others have pointed out, your critique appears to be partly a strawman (you’re criticising what’s being presented to the public, which – by design – is much simpler than what is being discussed amongst experts), and you appear to be applying conditions that may be appropriate for areas in which you might have expertise, but may not be appropriate in this case.

    • This response by ATTP contains a grain of truth and also some arrogance. While not explicit, the implication is that simplistic “explanations”, often called “physical understanding”, have real scientific value. If I had a nickel for every time I’ve heard this invocation of specialists’ intuition or understanding that turned out to be wrong, I’d be a wealthy man. Of course physicists invoke physical understanding. Engineers invoke engineering judgment. Doctors invoke medical experience or their selective memory of past cases. If it can’t be replicated or quantified, it’s not really science.

    • Ken R, The only real oversight in Dan’s post is the lack of attention to subgrid models. But I agree he’s dealing at a somewhat superficial level that science communicators have chosen for their misleading apologetics. There are many deeper levels that deserve attention too.

  15. Physical, mundane engineering is based on the laws of physics. Complicated, but used in everyday life. It’s a much, much more mature science than climate modeling.

    However, when an engineer designs a bridge, they “overengineer” it, because they really aren’t sure what the loads and initial conditions are, or what they’re going to be in twenty years. If the bridge is supposed to support a hundred tons, the actual design load is often several times that – and sometimes, the bridge still breaks.

    On the other hand, a number of pseudosciences have been “based on the laws of physics.” They just weren’t based on the correct selection of the laws of physics.

    • In engineering Hammurabi rules -sleeping
      under a bridge of yr own making, so to speak.
      In pseudo-science, models, words diffused,
      adjustments, explanations, post hoc, ad hoc
      machinations.

    • Physical, mundane engineering is based on the laws of physics. Complicated, but used in everyday life. It’s a much, much more mature science than climate modeling.

      Depends on the branch of engineering. I’ve worked on materials engineering projects that were considerably less mature than climate modeling.

  16. Interesting write-up but I think your premise is a little misguided. As others have noted, in the technical literature these things are discussed, as you want. I think you’re confusing how these things are presented in public-facing broadbrush interactions with how discussions take place between experts in the field.

    In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.

    I’ve seen numerous public-facing discussions with engineers and have never seen any presentation of equations. Typically they have indeed talked about ‘The Laws of Physics’. Perhaps a case where the depth of information you seek out in your own discipline is not the same as that you’re exposed to in other disciplines with which you are less familiar?

  17. Dan Hughes points out in his article “The articles from the public press that contain such statements sometimes allude to other aspects of the complete picture such as the parameterizations that are necessarily a part of the models. But generally such public statements always present an overly simplistic picture relative to the actual characterizations and status of climate-change modeling”.
    Example: https://www.sciencedaily.com/releases/2016/09/160907160628.htm
    The Rensselaer Polytechnic Institute researchers go on to claim “To tackle that challenge, the project will forecast future weather conditions for 2,000 lakes over the next 90 years using high-resolution weather forecasting models and projections of climate change provided by the Intergovernmental Panel on Climate Change.”
    So a model for lakes, a model for weather, and a model for climate will predict the future. We (the public) hope so. That’s why we hire experts. The public doesn’t create models or publish peer review papers but that doesn’t disqualify them from questioning how their money is spent.

    • I agree with much of what you say JMH and your above comment IMO hits the nail on the head. Demagogy is no substitute for genuine scientific endeavour.

      • Hi Peter. Dan Hughes opinion in his article “These statements present no actual information. The only possible information content is implicit, and that implicit information is at best a massive mis-characterization of GCMs, and at worst disingenuous (dishonest, insincere, deceitful, misleading, devious).”seemed to point in the same direction. Some ‘form’ of the science held near and dear at CE, is then paraded before the public as ‘proof’, by the demagogues, that their goals must be supported. (I haven’t seen Leonardo DiCaprio’s new movie. I’m waiting for it to come to cable.)

  18. My checkered career goes through the Navier-Stokes equations, for which “stability plus consistency imply convergence” is not true in 3 dimensions.

    In 3d, flows go to shorter and shorter scales, meaning that no resolution is adequate to express the flow. You need the short scale flows though because they act back on large scale flows as a sort of ersatz viscosity.

    And in particular reducing the time step and space resolution does not converge to the true solution, because the true solution always has finer scale structure.

    The models have a knob called “effective viscosity” which is not a fundamental law of physics, or any law of physics. It’s a knob used in curve fitting, called tuning the model.

    In 2d, flows do not go to shorter scales and numerical solutions work. Indeed conservation of vorticity is one method of solution of these flows.

    The difference is that in 3d vortices can kink and break up, and in 2d they can’t.

    Anyway the absence of awareness of this feature of the Navier Stokes equations was a very early indicator that there’s no adult peer review in climate science.

    The other early indicator was violation of a mathematical law, that you can’t distinguish a trend from a long cycle with data short compared to the cycle. I think climate science still doesn’t recognize this one either.
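    A two-minute illustration of that last point, with synthetic numbers of my own rather than any climate series: fit a straight line to a record that is short compared with a pure cycle and you recover a "trend" that is entirely an artifact of the short record.

    import numpy as np

    period, window = 1000.0, 60.0             # long cycle vs short record, arbitrary units
    t = np.arange(0.0, window, 1.0)
    signal = np.sin(2 * np.pi * t / period)   # a pure cycle, no trend at all

    slope, intercept = np.polyfit(t, signal, 1)
    print(f"fitted 'trend' over the {window:.0f}-unit record: {slope:.5f} per unit time")
    print(f"true long-term trend: 0 (the signal is a pure {period:.0f}-unit cycle)")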

    • You have commented on some of the properties of NS equations before and to my knowledge no-one from the AGW team has ever responded. IMO the underlying assumptions behind the GCMs need to be discussed more openly and any resulting uncertainties to be explained in conjunction with any conclusions that are to be made.

      • You have commented on some of the properties of NS equations before and to my knowledge no-one from the AGW team has ever responded.

        That’s cause the discussion happens in the scientific literature and at conferences. 99.99% of climate scientists will never bother to comment on one of these threads.

        If you want to genuinely be part of the debate, you have to get educated, do research, and publish.

      • Peter,

        The IPCC certainly accepts that non-linearity precludes the possibility of forecasting future climate states.

        IPCC –

        “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”

        “Climate scientists” refuse to accept the IPCC consensus. I don’t know why. I suppose you have to keep believing that “you can do what’s never been done, and you can win what’s never been won”, if it keeps the grant funds flowing.

        The AGW true believers just keep trying to change the subject. They’ve been quite successful – how many people realise that the IPCC said that climate prediction is impossible?

        Cheers.

      • That’s cause the discussion happens in the scientific literature and at conferences. 99.99% of climate scientists will never bother to comment on one of these threads.

        If you want to genuinely be part of the debate, you have to get educated, do research, and publish.

        I think, rather, it’s that to a physicist studying the Navier Stokes equation and numerical methods, the Navier Stokes equation is interesting.

        To a climate scientist, the Navier Stokes equation is an obstacle. Whatever gets past it is okay.

        That goes for a lot of the physics and statistics in climate science.

        I think it also bespeaks an absence of curiosity.

    • In addition, your last comment resonates with me because IMO many people from both sides of the AGW debate place too much store in short term data movements.

    • > Anyway the absence of awareness of this feature of the Navier Stokes equations was a very early indicator that there’s no adult peer review in climate science.

      Searching for “navier-stokes” and “climate” gives me 23K hits.

      Have you checked before making these wild allegations, RH?

    • rhhardin ==> Thank you for your commentary. I think that you are right that N-S is a stumbling block for CliSci modelers, which they struggle to “circumvent” rather than acknowledge the problems it presents. I touch on this in my series at WUWT on Chaos and Climate.

  19. “Important Sources from Engineered Equipment Models ”

    Do you mean by this analog circuits modelling a subsystem?

    • I’m thinking that as the GCMs mature the sources of important GHGs will be modeled in increasing detail. Including modeling of the availability and consumption rates of the natural-resource sources. This is of course already underway for some sources. Modeling of the source from electricity production, for example, would require representation of the various kinds of fossil-fueled plants. The same applies to transportation. I think these will initially be generally algebraic models. It is my understanding that the RCPs are presently used, instead.

  20. There is a distinction to make.

    Fluid flow in the atmosphere is governed by the differential equations of motion, which are mostly* non-linear and unpredictable. Fluid flow determines winds, clouds, storms, precipitation, and local temperature.

    OTOH, Radiative Forcing from increasing CO2, in the global mean, is mostly stable and predictable. Radiance is determined by the arrangement of clouds and temperature but the effect of changing RF from additional CO2 is mostly of a positive sign and similar scale regardless of the weather below.

    ( Here is a crude calculation of RF change from 2xCO2 for 12 monthly snapshots of CFS profiles using the CRM radiative model. The absolute value may be off because of surface albedo choices, but in the global mean even the effect of seasonal variation does not change RF very much – GHG RF change appears stable )
    https://turbulenteddies.files.wordpress.com/2015/03/rf_figure2.png

    So, global average temperature increase appears to be predictable.
    Changes in winds, clouds, storms, precipitation appear to be unpredictable.

    The IPCC wants to have it both ways with this, rightly asserting on one hand:
    https://wattsupwiththat.files.wordpress.com/2015/02/ipcc-models-predict-future.png
    but persisting with predictions about precipitation, droughts, storms, tropical cyclones, heat waves, etc.

    * Certain aspects of fluid flow are predictable. The pressure exerted by the earth’s surface, namely mountains and ocean basins, would seem unlikely to change much, so the channeling of ridges and troughs to their statistically preferred locations would seem likely to continue. The gradient of net radiance which determines the thermal wind is established by the shape and orbit of the earth, which is also unlikely to change much (for centuries, anyway). So the general features (presence of jet streams in each hemisphere and effects of orography on waves) are likely to persist.

    • Presumably you are not able to find the full quote using Google???

      “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. ”

      https://www.ipcc.ch/ipccreports/tar/wg1/501.htm

      • I don’t understand. How can one predict the probability distribution of the system’s future possible states when one isn’t able to predict future climate states in the first place?

      • Simple Allen. They take an average of the guesses. This is how we can confidently predict that every roll of the dice will result in 3.

      • This is how we can confidently predict that every roll of the dice will result in 3.

        No – they take the ensemble of the results.

        This is like rolling a fair die 1,000 times, then showing that you have an equal chance to roll any of 1-6.
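        In code, that analogy looks something like this (a toy simulation, obviously not GCM output):

        import numpy as np

        rng = np.random.default_rng(0)
        rolls = rng.integers(1, 7, size=1000)           # 1,000 fair-die rolls

        print("average roll:", rolls.mean())             # ~3.5, never an actual outcome
        faces, counts = np.unique(rolls, return_counts=True)
        for face, count in zip(faces, counts):
            print(f"P(roll {face}) ~ {count / rolls.size:.3f}")   # each ~1/6 = 0.167

        The average predicts no single roll; the empirical distribution is the statistical statement.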

      • Problem being that the
        “probability distribution of the system’s future possible states”
        isn’t predictable either.

        But to take one example, ENSOs vary.
        By the record, the decadal frequency of ENSO events also varies.
        It’s not as clear, but I’m betting that the centennial and millennial frequencies of ENSO events also vary.

      • “No – they take the ensemble of the results.”

        Here’s what that looks like:
        https://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2014/FigBox14.2-1.jpg

        Blocking is pretty significant, especially for prolonged floods in one region and droughts in another, and cold in one region, heat in another.

        This alone should be enough to convince you that climate ( these are thirty years of events ) is not predictable.

      • TE

        You may remember my article here a year or two ago where I graphed annual, decadal and 50 year averages for extended CET

        https://wattsupwiththat.com/2013/08/16/historic-variations-in-temperature-number-four-the-hockey-stick/

        See figure 1 in particular. A little further down it is overlaid against the known glacier advances and retreats over the last thousand years.

        Climate is not predictable and swings considerably from one state to another with surprising frequency.

        Blocking events that make for prolonged floods or droughts etc are well described in John Kington’s recent book ‘climate and weather’. Kington is from CRU and a contemporary of Phil Jones

        tonyb

      • This alone should be enough to convince you that climate ( these are thirty years of events ) is not predictable.

        …why?

        If you show me a chart showing that current models do a mediocre job at handling blocking patterns, then I’m going to conclude that current models do a mediocre job of handling blocking patterns.

        It would be fallacious to extend that to how models handle climate in general or to how future models will do at blocking patterns.

        I can agree that regional precipitation needs a fair bit of work.

      • If you show me a chart showing that current models do a mediocre job at handling blocking patterns, then I’m going to conclude that current models do a mediocre job of handling blocking patterns.

        It would be fallacious to extend that to how models handle climate in general or to how future models will do at blocking patterns.

        Indeed. It would be also interesting to know how the blocking frequency changes when climate models are perturbed. Even if they don’t get the absolute value right, they may still all indicate a similar response to some kind of perturbation – or, maybe not; can’t tell from TE’s figure.

      • If you show me a chart showing that current models do a mediocre job at handling blocking patterns, then I’m going to conclude that current models do a mediocre job of handling blocking patterns.

        If only they could do a mediocre job. The results – of a hindcast, no less – are crap. These models all have the same physics, right? But subtle infidelities magnify into huge divergence, not just from reality, but between the models themselves.

        The same principle applies to the future.

        If you examine the blocking above you’ll find the peaks of observed blocking ( the line in black ) correspond with the Eastern edges of the ocean basins. This is a basic feature of the general circulation: the ocean basins are as low as air masses can sink ( minimal potential energy ) and they tend to coagulate like so many bumper cars when they then encounter the higher terrain of the continents. This circulation then largely determines precipitation:
        http://www.physicalgeography.net/fundamentals/images/GPCP_ave_annual_1980_2004.gif

        One is left with the distinct impression that climate models can’t even predict the existing climate.

      • Anders,

        I was going to bring this up at your place, but TE left the building just as his peddling was getting interesting.

        Here’s the 55-year NCEP/NCAR reanalysis view after Barriopedro and García-Herrera (2006):

        https://4.bp.blogspot.com/-2Sh-7Aovf8E/V9mfAngUYuI/AAAAAAAABFI/mxuFz7A4inAZyZOhL_n1ZYs-CgdXOeEtACLcB/s1600/barriopedro2006etalFig6.png

        How much exactly do these 1-sigma envelopes need to overlap before we can say that Teh Modulz Ensemblez resemblez reality well enough to be considered useful?

        ***

        Reminds me of a joke we have on this side of The Pond:

        Q: How many tourists can you put on a bus in Mexico?
        A: Uno mas. ( One more. )

        Siempre uno mas, todos los días. ( Always one more, every day. )

      • The averaging of different models with different physics is statistically meaningless. The consideration of multiple realizations of the most correct model could be used. Similar methods are used in other fields.

        What is the assumed PDF? What is the basis for its selection? Are the results useful for making policy decisions?

      • Because ENSO events change a lot of features ( fewer Atlantic Hurricanes with El Nino, more CA flooding with El Nino, more Texas and CA drought with La Nina, etc. etc. ) insurance considerations alone mean you could make a huge amount of money if you could accurately forecast not even the individual years of ENSO events, but just whether there will be more Ninos, Ninas, or Nadas.

        The fact that no such forecasts are out there should tell you something about the limits of forecasting fluid flow fluctuations.

      • dougbadgero,

        The consideration of multiple realizations of the most correct model could be used.

        Which is the “most correct model” in this instance? How do you know that the one which gives the best fit to *reanalysis* data for blocking also “most correctly” represents all the other real processes in the real system?

        Similar methods are used in other fields.

        Yah, like weather forecasting. 100-member ensembles where the initial conditions are randomly perturbed to produce a probability distribution are routine, and based on Lorenz’s original works on the topic.
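
        A minimal sketch of that idea, using the toy Lorenz-63 system rather than a weather model: perturb the initial state slightly, run each member forward, and look at the spread of outcomes rather than at any single trajectory.

            # Perturbed-initial-condition ensemble on the Lorenz-63 system (illustrative only).
            import random

            def step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
                return (x + dt * s * (y - x),
                        y + dt * (x * (r - z) - y),
                        z + dt * (x * y - b * z))

            def run(x, y, z, steps=2000):
                for _ in range(steps):
                    x, y, z = step(x, y, z)
                return x

            # 100 members, each starting from a slightly perturbed initial state.
            members = [run(1.0 + random.gauss(0.0, 1e-3), 1.0, 1.0) for _ in range(100)]
            print(min(members), max(members))   # individual outcomes scatter widely
            print(sum(members) / len(members))  # the ensemble characterizes a distribution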

        Here’s the caption to the IPCC figure provided by TE above:

        Box 14.2, Figure 1 | Annual mean blocking frequency in the NH (expressed in % of time, that is, 1% means about 4 days per year) as simulated by a set of CMIP5 models (colour lines) for the 1961–1990 period of one run of the historical simulation. Grey shading shows the mean model result plus/minus one standard deviation. Black thick line indicates the observed blocking frequency derived from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. Only CMIP5 models with available 500 hPa geopotential height daily data at http://pcmdi3.llnl.gov/esgcet/home.htm have been used. Blocking is defined as in Barriopedro et al. (2006), which uses a modified version of the (Tibaldi and Molteni, 1990) index. Daily data was interpolated to a common regular 2.5° × 2.5° longitude–latitude grid before detecting blocking.
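
        For readers unfamiliar with the index, below is a rough sketch of the classic Tibaldi-Molteni criterion applied to one day and one longitude of 500 hPa height; the Barriopedro et al. (2006) modification used for the figure adds more latitudes plus persistence and spatial-extent requirements, so this is illustrative only.

            # Sketch of the Tibaldi-Molteni (1990) blocking test at one longitude, one day.
            # z500 is a callable returning 500 hPa geopotential height (m) at a latitude (deg N).
            def is_blocked(z500, deltas=(-4.0, 0.0, 4.0)):
                for d in deltas:
                    phi_n, phi_0, phi_s = 80.0 + d, 60.0 + d, 40.0 + d
                    ghgs = (z500(phi_0) - z500(phi_s)) / (phi_0 - phi_s)  # southern gradient
                    ghgn = (z500(phi_n) - z500(phi_0)) / (phi_n - phi_0)  # northern gradient
                    # Reversed (easterly) flow to the south, strong westerlies to the north.
                    if ghgs > 0.0 and ghgn < -10.0:
                        return True
                return False

            def example_profile(lat):
                # Contrived profile: heights rise toward a ridge near 60N, then drop steeply.
                if lat <= 60.0:
                    return 5600.0 + 4.0 * (lat - 40.0)
                return 5680.0 - 15.0 * (lat - 60.0)

            print(is_blocked(example_profile))  # True for this made-up blocked profile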

        Any guesses how much CPU time it takes to do *one* thirty-year run at daily resolution on a state-of-the-art AOGCM? The reanalysis data come at 4x daily resolution, you know.

        The IPCC uses “ensembles of opportunity” because they don’t have any alternative. There simply isn’t enough computing horsepower available to do it the “right” way. It’s a resource constraint, not “durrrrp, we don’t know what we’re doing because … climastrologists [drool drool drool].”

      • If only they could do a mediocre job. The results – of a hindcast, no less – are crap

        “Crap” is an emotional, subjective term you use to say “I don’t like it”. It’s your feelings, not fact.

        Looking at the chart, the observations are generally within the 1-sigma range of the models, and as brandon shows, there’s quite a bit of overlap between the error bars of observations and models. Obviously the models need some work, but I don’t think your attack is justified. And the models are improving every year.

        These models all have the same physics, right?

        No, they do not.

      • Looking at the chart, the observations are generally within the 1-sigma range of the models

        The fact remains none of the models could accurately predict what actually happened over thirty years. We expect that, because the solutions of the equations of motion are unstable.

        And the models are improving every year.

        How can you possibly say that?

        Model predictions for 100 years out are not tested.
        And atmospheric motion stops being predictable beyond 10 days.

        What model is being validated somewhere beyond ten days but still testable within a human lifetime ( say, ten years )?

        If such a thing were accurate and possible don’t you think you’d see filthy rich climatologists?

        Instead, like economists, they don’t ever seem to strike it rich – unless you include Hansen’s quarter of million dollar payola from Teresa Heinz.

      • BG,
        I am not sure what the point of your response was…
        Conceptually even the best model has limited utility. I have no idea which GCM is best. Even the best may not be good enough to make policy. Averaging outputs of multiple models is still statistical nonsense.

        The other field I was referring to was nuclear safety analysis…my field.

      • “The fact remains none of the models could accurately predict what actually happened over thirty years.”

        Models cannot predict.

        To do that, they’d need to know the future — the next 30 years of all human greenhouse gas emissions, changes in solar intensity, and volcanic eruptions.

        The best they can do is to take the known forcings over that time and run.

        Even that is iffy, because climate models aren’t initialized into the actual initial state. Because no one knows the exact initial state, especially ocean currents.

      • Now David, “Even that is iffy, because climate models aren’t initialized into the actual initial state. Because no one knows the exact initial state, especially ocean currents.”
        you aren’t supposed to say that, the standard line is that it is a boundary value problem, natural variability is only +/- 0.1 C and zeroes out in less than 60 years and ocean currents would fall into the “unforced variability” unicorn pigeonhole.

      • dougbadgero,

        I am not sure what the point of your response was…

        Neighborhood of “resource constraints” on computational cycles.

        Conceptually even the best model has limited utility.

        Sure. It’s *crucial* to not ask a climate model to make long-term weather predictions, which is where Turbulent Eddie’s appeals to initial conditions and the divergence problem lead. Implicitly demanding perfection for a hindcast like he does is another no-no. It simply never happens.

        Ya takes the error metrics for what they’re worth and assume the forecast/projection isn’t going to be better than that. Plan accordingly.

        I have no idea which GCM is best.

        It might be fair to say that nobody does. For one thing, they’re not all designed to be “good” at the same thing.

        Even the best may not be good enough to make policy.

        Dual-edged sword you’re wielding there. That would make even the “best” model not good enough to *not* make policy. You dig?

        Averaging outputs of multiple models is still statistical nonsense.

        You’re in luck, the IPCC doesn’t exactly say it makes statistical sense. A lot of discussion about what would be ideal vs. what can be reasonably done.

        The other field I was referring too was nuclear safety analysis…my field.

        Lotsa talk downthread about how comparable these two fields are in terms of scale when it comes to model V&V.

      • BG,

        I “dig”, IMO we would be better off ignoring the models when making policy. They are not much more than political tools.

      • “But one example, ENSOs vary.”

        ENSOs redistribute planetary heat; they don’t add or subtract from it, so don’t contribute to the long-term equilibrium state.

      • dougbadgero,

        I “dig”, IMO we would be better off ignoring the models when making policy.

        Yes, I gathered. I’m not sure you dig though because you’ve offered no alternative for making policy decisions.

        They are not much more than political tools.

        Any information used to influence a policy outcome is a political tool.

      • “But one example, ENSOs vary.”

        ENSOs redistribute planetary heat; they don’t add or subtract from it, so don’t contribute to the long-term equilibrium state.

        ENSO events change regional temperature, precipitation, drought, storms, etc. etc.

        The fact that no one can tell you whether there will be more El Ninos or La Ninas over the next ten years should tell you something.

        Atmospheric forecasts beyond ten days are not useful.

        The misconception is that at some time between ten days and a century, they become useful again. There is no basis for this, and the non-linear nature of the equations of motion dictates why.

        Not everything is unpredictable. Global average temperature would appear to be predictable. The general circulation occurs within constraints which would also appear to be predictable.

        But events which are mostly determined by the equations of motion, such as floods, droughts, cold waves, heat waves, storms, tropical cyclones, etc. are not predictable.

      • I love this thread. Because some of the people are actually doing the work, not criticising it (not me though ;-) ). Also the way climate models are discussed actually suggests understanding, occasionally, and provides real insight as to their complexity and challenges. Sort of string theory for climatologists. Of course climate models are not proven science, nor can they ever be, and to misrepresent them this way is doing them a disservice. They have uses, as neural nets do (a wind up here). And the best and most expert contributors seem to eschew claiming hard scientific laws and reliable predictions arising from their models. Which is good. Wisely, because they cannot be independently validated in a repeatable controlled experiment.

        Modellers are not doing science as physiucs understands it, they are creating computer models based on a set of hypotheses regarding linear and non linear relationships they then approximate to in attempting to fit the models to a very multivariate, under susbscribed data set, with multiple interelated non linear responses and nowhere inadequate coverage across the whole planet, and not possible to compute at satisfactory resolution with the available computing resources if there was enough data, so crude guesses, but tracking reasonably well, after 4×2 adjustment, for their grants. Is that about fair?

        Right or wrong, how are you ever gonna prove that? BTW Feynman DID describe the inability of pure science to understand the interrelated complexities of nature rather elegantly, as far as they could be defined by true physical laws, and become accepted science – per the record. Sic:

        “What do we mean by “understanding” something? We can imagine that this complicated array of moving things which constitutes “the world” is something like a great chess game being played by the gods, and we are observers of the game. We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules. The rules of the game are what we mean by fundamental physics. Even if we knew every rule, however, we might not be able to understand why a particular move is made in the game, merely because it is too complicated and our minds are limited. If you play chess you must know that it is easy to learn all the rules, and yet it is often very hard to select the best move or to understand why a player moves as he does. So it is in nature, only much more so.

        volume I; lecture 2, “Basic Physics”; section 2-1, “Introduction”; p. 2-1

        I think he nailed it right there. As he was so good at. Climate science is not rocket science, or they would rarely get off the pad. I said that. Your climate may vary…

      • Modellers are not doing science as physiucs understands it, they are creating computer models based on a set of hypotheses regarding linear and non linear relationships they then approximate to in attempting to fit the models to a very multivariate, under <subscribed data set, with multiple <interrelated non linear responses and nowhere inadequate coverage across the whole planet, and not possible to compute at satisfactory resolution with the available computing resources if there was enough data, so crude guesses, but tracking reasonably well, after 2×4 adjustment, for their grants. Is that about fair?

        You hit people and things with a 2×4,

        And I fixed ( I hope) two spelling errors.

      • “Correcting the data”, NASA like, to suit a prejudice, Huh? In the UK we hit things with a 4×2. So leave that alone. 4×2 clearly has more impact than leading with the lesser measurement.. The US obviously has this bassackwards, as ever with measurements, as well as not being metric enough. Ask NASA. etc. No spell check on HTML windows, is there?

      • lol, I surrender for changing the data to suit my bias.
        But, bassackwards, you say! Ha, my ancestor left your puny island because the pantries were all empty, and had to cross the pond to create good take out to get a bite to eat.

      • Besides, the 2″ leading edge will have twice the force over half the impact area, no wonder you lost the war ;)

      • You distract from Feynman’s crucial point. You can’t model nature to the level of detail, effect, or reliability that a physical law demands, and he made that clear. You are clearly correct regarding the impact force per unit area of the 4×2 = 8, but the overall applied force is the same, as long as contact is made.

        Depends if you want to make a dent in the model, or move the whole thing.

        BTW you want to communicate with the public, you have to use what impresses best on emotional intelligence, something lifetime techies rarely grasp, but Matt Ridley does, and Colin McInnes, and Douglas Lightfoot, and Lamar Alexander, etc. Your climate may vary.

      • BTW you want to communicate with the public, you have to use what impresses best on emotional intelligence, something lifetime techies rarely grasp, but Matt Ridley does, and Colin McInnes, and Douglas Lightfoot, and Lamar Alexander, etc.

        I was the fool who got to go explain to perfectly happy people that they had to do more/different work during their daily work time they normally spent trying to not do anything.
        But I was full of evangelistic fervor for what truly were good tools, and you get the typical bell curve of adoption. I worked with the customer deployment teams; usually they had a boss who told them this will be done, and it was pretty easy to get the younger early adopters to buy in. I always suggested they pick a few of the more renowned early adopters as proto users, to become their evangelists, and then I always offered a solution for the laggards, the really smart old sob’s who were too valued to be fired, but were a huge PIA.
        In the design tool days, my customers were EE’s, typically smart folks, I’d tell them, take the biggest PIA, out in front of the windows for everyone to see, and shoot him.
        It’d only take a couple.

        Now, I just tell them a taser is just as much fun, maybe more; you get to watch them flop around on the ground and wet themselves lol.

        4×2’s are just so thuggish :)

      • The misconception is that at some time between ten days and a century, they become useful again. There is no basis for this, and the non-linear nature of the equations of motion dictates why.

        Actually, I think you’re wrong on this.
        We should be able to get to a point where we can estimate the general rate of, say, El Nino’s for the next 100 years, is it 5 or 10? How many Atlantic hurricane seasons vs gulf hurricanes? Not necessarily which year, just projected rates and probabilities. We did timing analysis of synchronous digital circuits like this, where you didn’t define a 1 or 0 pattern to test timing; you specified a period in which all the signals could change, and when they stopped, and then it found the changing window of the outputs, which would then go to the next stage as the input if there was a next stage, and so on. Now, if the inputs for the last stage aren’t stable when the clock triggers the sampling of the output values, your circuit doesn’t work. But we should be able to learn the basic PDO/AMO cycles, how the El Nino’s and La Nina’s cycle, and what these global variables cause to happen.

        So I see a report that says: “We expect an El Nino in the next 6 to 9 years, and this will transition to an Atlantic hurricane season 6 of the next 9 seasons”, something like that; someone probably already does this, but the goal is to extend this out for a few hundred years and improve our skill at projecting when, not if.

        Now I don’t expect us to be able to do this for 50 or even 100 years, but we should be able to collect enough good data to at least catch the 30 or 40 year cycles and we will get better with models.
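
        A minimal sketch of that “rates and probabilities, not which year” idea, with a made-up event count and a simple Poisson assumption (purely illustrative):

            # Turn a hypothetical decadal-scale El Nino count into a probability of seeing
            # at least one event in the next N years (Poisson assumption, made-up numbers).
            import math

            events_per_30yr = 8                    # hypothetical count from a 30-year record
            rate_per_year = events_per_30yr / 30.0

            def prob_at_least_one(years):
                return 1.0 - math.exp(-rate_per_year * years)

            print(round(prob_at_least_one(6), 2))  # chance of at least one event in 6 years
            print(round(prob_at_least_one(9), 2))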

      • “They take an average of the guesses. This is how we can confidently predict that every roll of the dice will result in 3.”

        The average is 3.5

      • “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. ”

        Translation:

        We are able to predict the highly biased average of model runs by averaging the highly correlated model and numerical errors of the climate system.

      • “And the models are improving every year.”

        How can you possibly say that?

        It’s pretty easy. You compare the models to the real world.

        There’s an immense amount of scientific literature covering just this. And yes, the models are demonstrably and objectively improving every year.

        The graph that *you* posted (about blocking events) comes from an IPCC section discussing the models, how they’ve done, and how they’re improving. I mean.. you did read the section before using their figure, right?

      • It’s pretty easy. You compare the models to the real world.

        The actual CMIP5 runs of the blocking above indicate otherwise.

        The IPCC can cheerlead all they want but they can’t change reality.

      • The actual CMIP5 runs of the blocking above indicate otherwise.

        Heh. So comparing CMIP5 to observations shows that the models aren’t getting better?

        I don’t know if I have to point this out, but that doesn’t logically follow.

      • Blocking frequency is discussed in Box 14.2, Chapter 14 of AR5 WGI (which is where you will also find TE’s figure). It says

        The AR4 (Section 8.4.5) reported a tendency for General Circulation Models (GCMs) to underestimate NH blocking frequency and persistence,
        although most models were able to capture the preferred locations for blocking occurrence and their seasonal distributions. Several intercomparison studies based on a set of CMIP3 models (Scaife et al., 2010; Vial and Osborn, 2012) revealed some progress in the simulation of NH blocking activity, mainly in the North Pacific, but only modest improvements in the North Atlantic. In the SH, blocking frequency and duration was also underestimated, particularly over the Australia–New Zealand sector (Matsueda et al., 2010). CMIP5 models still show a general blocking frequency underestimation over the Euro-Atlantic sector, and some tendency to overestimate North Pacific blocking (Section 9.5.2.2), with considerable inter-model spread (Box 14.2, Figure 1).

      • “The average is 3.5”

        It’s not possible to roll a “3.5” on a die. Basic physics. And don’t forget, all models are wrong, but they are useful!

      • Nevertheless, 3.5 is the average roll of a fair six-sided die.

      • Nevertheless, 3.5 is the average roll of a fair six-sided die

        So imagine a contraption that rolls, say, 10 dice at a time, but you can only see the results of 5, though sometimes it’ll be a different 5. How do you verify they are all fair dice?

        BTW, this is my take on reporting a temp for someplace without a surface station within 200 miles.

      • This little subtopic isn’t worth continuing, IMO.

      • If NASA had hired a historian, they would know that their new AGW subtopic about our planet Earth getting hit by something from outer ‘space’…

        http://www.space.com/34070-earth-vulnerable-to-major-asteroid-strike.html

        has not changed a bit since the day the Dinosaurs died. And nobody that counted then cared anyway.

      • “I’m not sure you dig though because you’ve offered no alternative for making policy decisions.”

        I would use what we know from first principles. CO2 is a ghg and should cause some warming. The system is complex and knowledge of first principles is of limited use. Now make policy.

        Using the wrong information is worse than operating from almost complete uncertainty.

      • Steve Milesworthy ==> The idea that one can predict “the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.” itself is seriously challenged in the study of complex dynamical systems.

        I personally would challenge the idea that we can “predict” PAST probability distributions of the climate system’s states — as it would require being able to define a “system state” and then find how often it had occurred. I don’t think we can do that other than for the broad-brush “Ice Age” and “Interglacial” (and maybe, further back, the occasional Jungle World scenario).

        My perspective is that the Climate System, behaving like a chaotic dynamical system, may have states that behave as “attractors” — and if so, we can expect to visit them again — but have no possible way of knowing when they will arrive next or what changes in climate factors might cause a change to or from any particular “attractor”.

        The only really likely suspects are the Earth’s orbital eccentricities associated with Ice Ages, as far as I know.

      • Yes Kip, I think this issue of the attractor’s properties is critical. It could be a very high dimensional manifold in which case the “climate of the attractor” may take a very long time to simulate. But the real problem I think is there is no reason to expect standard numerical methods to be meaningful on the really coarse grids used with all the turbulence and other subgrid models.

        Recent work on large eddy simulations casts doubt on whether the simulations even converge in the fine grid limit.

        The justifications given here by the apologists of GCMs have really nothing to back them up except words. To really look at this issue is a huge long term effort and current theoretical understanding is inadequate to really address it.

        Some kind of asymptotic result for fine grids would at least give some understanding of the issues. I am tempted to say that GCMs solve the laws of pseudo-physics and can only be credible to those already predisposed to believe in them.

    • Turbulent Eddie:

      You state: the IPCC wants to have it both ways with this, “Rightly” asserting, on one hand: “in climate research and modelling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that LONG TERM PREDICTION OF FUTURE CLIMATE STATES IS NOT POSSIBLE”

      On the contrary, it is quite possible.

      As I have repeatedly pointed out, projections of future average global temperatures, between 1975 and 2011, based solely upon the amount of reduction in anthropogenic SO2 emissions, are accurate to within less than a tenth of a degree centigrade, for any year for which the net amount of global SO2 emissions is known.

      This accuracy, of course, eliminates the possibility of any warming due to greenhouse gases, which is why GCMs are an exercise in futility.

      • Hans, why does your SO2/temperature graph stop in 2000?

        That’s always suspicious.

      • Hi David, Because this is a graph I made in 2002.

      • Hans, should be trivial to update

      • No, it’s not trivial. I don’t have the Excel file anymore and I wouldn’t know where I found the data in the first place; this was just a reaction to a post above and “something I had prepared earlier”. If you think the graph is now superseded, feel free to convince me by updating it yourself. Btw, the temperature history of the USA has been altered since 2002.

      • Hans: Now I see that your y-axis is upside down. In that case the data up to 2000 look similar.

      • Hans, because there is an approximate correlation between SO2 and USA48 temperatures — which may not be too surprising — doesn’t mean SO2 accounts for all the temperature change.

      • David Appell:

        You wrote to Hans: “because there is a correlation between SO2 and USA48 temperatures-which may not be too surprising-doesn’t mean SO2 accounts for all of the temperature change”

        Here, you are admitting that the removal of SO2 aerosols will cause temperatures to increase, but that all of the warming may not be due to their removal.

        I would remind you that the IPCC diagram of radiative forcings has NO component for any warming due to the removal of SO2 aerosols.

        Would you agree that until the amount of forcing due to their removal is established, and included in the diagram, the diagram is essentially useless?

      • “I would remind you that the IPCC diagram of radiative forcings has NO component for any warming due to the removal of SO2 aerosols.”

        I doubt it. Everyone knows aerosols are causing cooling.

      • David Appell:

        You said “everyone knows aerosols cause cooling”.

        But when they are removed, warming results. And that is what Clean Air efforts are doing.

      • “You said ‘everyone knows aerosols cause cooling’”
        “But when they are removed, warming results. And that is what Clean Air efforts are doing.”

        That’s what I said — aerosols cause cooling. So their absence doesn’t cause cooling.

      • David:

        You wrote: “That’s what I said-aerosols cause cooling. So their absence doesn’t cause cooling”

        Agreed. But if they ARE present in the atmosphere, causing cooling, their removal will cause warming. Surely you can understand that.

        In 1975, anthropogenic aerosol emissions totaled approx. 131 Megatonnes. In 2011, they totaled 101 Megatonnes, a reduction of 30 Megatonnes.

        Their removal is responsible for all of the warming that has occurred!

      • “But if they ARE present in the atmosphere, causing cooling, their removal will cause warming. Surely you can understand that.”

        I’ve said exactly that twice now. This is the third time.

        “Their removal is responsible for all of the warming that has occurred!”

        You’ve never offered any evidence of this, despite claiming it many times.

      • David:

        I have provided evidence many times, although you may have missed it.

        Consider the 1991 eruptions of Mount Pinatubo and Mount Hudson. They injected 23 Megatonnes of SO2 into the stratosphere, cooling the earth’s climate by 0.55 deg. C. As they settled out, the earth’s temperature returned to pre-eruption levels, due to the cleaner air, a rise of 0.55 deg. C. This represented a warming of 0.02 deg. C. for each Megatonne of SO2 aerosols removed from the atmosphere.

        Between 1975 and 2011, net global anthropogenic SO2 aerosol emissions dropped from 131 Megatonnes to 101 Megatonnes, a reduction of 30 Megatonnes. The average global temperature anomaly in 2011 (per NASA) was 0.59 deg. C. This also represents a warming of 0.02 deg. C. for each net Megatonne of reduction in global SO2 aerosol emissions.

        Using the 0.02 deg. C. “climate sensitivity factor”, simply multiplying it by the amount of reduction in SO2 aerosol emissions between 1975 and any later year (where the amount of SO2 emissions is known) will give the average global temperature for that year to within less than a tenth of a degree C (when natural variations due to El Ninos and La Ninas are accounted for). This PRECISE agreement leaves no room for any additional warming due to greenhouse gases.
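
        For transparency, the arithmetic being claimed here is a simple linear scaling; the sketch below just reproduces the commenter’s numbers and is not an attribution analysis:

            # Reproduce the claimed linear SO2 scaling (the commenter's numbers, not an endorsement).
            pinatubo_sensitivity = 0.55 / 23.0   # ~0.024 C per Megatonne, rounded by the claim to 0.02
            claimed_sensitivity = 0.02
            reduction_mt = 131.0 - 101.0         # claimed 1975-2011 drop in SO2 emissions, Megatonnes
            print(round(claimed_sensitivity * reduction_mt, 2))  # 0.6 C, compared with the quoted 0.59 C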

        CO2, therefore, has no climatic effect.

      • Burl, this isn’t evidence.

        It’s just a bunch of numbers. Where did they come from? How were they derived? It’s all no better than gibberish.

        I assume you didn’t publish your claims anywhere?

      • David:

        Google “it’s SO2, not CO2” for an earlier version of my thesis. The sources for the data are given there. None of the data is gibberish, it is all referenced.

        A later, updated version with additional supportive information is being submitted for publication.

      • For Burl’s idea to work, he needs Man not only to have removed their own aerosols, but also to have removed natural aerosols that existed in the pre-industrial era, simply because it is a degree warmer now than then. How that happened, he doesn’t elaborate. Compare now with pre-industrial. What has changed? More GHGs and more aerosols. The GHGs dominate.

      • Jim D.

        You said that it is now a degree warmer than it was in pre-industrial times. It is now a degree warmer than it was in 1970.

        Is 1970 really pre-industrial?

      • More like 1770. Keep up.

      • Jim D.

        I am confused. Since average global temperatures have risen 1 deg. C. since 1970, what was the pre-industrial temperature that you are referring to?

        Are you saying that it was the same as in 1970 (14.0 deg. C)?

      • Global temperatures now are 1 degree warmer than in pre-industrial times, which are usually taken to be before major emissions started in the 19th century. There were fewer aerosols then than now, because these mostly come with the emissions growth. Yet, it was a degree C colder then than now. Do you see how your logic of more aerosols, more cooling, fails when comparing the 18th century to now, or should I explain further?

      • Jim D.

        You keep saying that temperatures now are one degree C. warmer than in pre-industrial times.

        But they are now one degree C. warmer than they were in 1970 (per NASA’s Land-Ocean temperature index).

        And 1.2 deg. C. warmer than they were in 1880.

        Where does your “one degree C. warmer” statement come from? It is obviously incorrect.

      • It’s from the temperature datasets such as HADCRUT and GISTEMP. Here’s an example. Ignore the wiggly CO2 line.
        http://www.woodfortrees.org/plot/gistemp/from:1900/mean:12/plot/esrl-co2/scale:0.01/offset:-3.2/plot/gistemp/mean:120/mean:240/from:1900/plot/gistemp/from:1985/trend
        How did it warm so much since the 1800’s? Aerosols are higher since then, so your logic would suggest it should be cooler today than the 1800’s, yes?

      • Jim D.

        The data sets that you show only go back to 1900. There is nothing in them that would support your “one degree C. of warming above pre-industrial times” statement.

        You must have some other reference.

      • You are missing the point. Take 1850. Do we have more aerosols now than then? Why is it a degree warmer now? 70% of the GHG forcing increase has been since 1950, so this is why it looks like the warming has been faster more recently, but there was some 30% spread over the century or two before that, as you can see.
        http://www.woodfortrees.org/plot/hadcrut4gl/mean:12/plot/esrl-co2/scale:0.01/offset:-3.35/plot/hadcrut4gl/mean:120/mean:240/plot/hadcrut4gl/from:1985/trend

      • Jim D.

        Yes, we do have more SO2 aerosols in the atmosphere than in 1850.

        However, there are other warming sources that can offset the difference.

        For example, increased solar radiance (A possibility, although I have no data on it at this time.)

        Population growth. There were 1.2 billion people in 1850. Now there are more than 7.1 billion, with most all of them inputting far more heat into the atmosphere through the use of energy than in 1850.

        Infrastructure warming: Cities, parking lots, paved roads, more roofs, etc.

        Industrial heat emissions

        All of the above would go a long way toward offsetting the cooling due to more SO2 aerosols in the present.

      • Yes, CO2 forcing especially because that has added 2 W/m2. The sun is weaker now than in most of the last two centuries, so we can count that one out, and the fastest warming areas are away from cities, so that counts out your other supposition.

      • Jim D.:

        You wrote “the fastest warming areas are away from cities, so that counts out your other supposition”

        No, it does not.

        It does not matter where the heat is generated, it adds to the warming of the atmosphere. Far more large “urban heat islands” now than in 1850.

        I do want to thank you for the link to the Woodfortrees graph that goes back to 1850. It shows a tremendous temperature spike of about 0.59 deg. C that coincides with the “Long Depression” of 1873-1879.

        This, of course, is due to the reduction in SO2 emissions due to reduced industrial activity during the depression (18,000 businesses failed, per Wikipedia)

        Interestingly, the peak is essentially identical in height to that of the 1930’s, where SO2 emissions fell by about 29.5 Megatonnes – and the 0.59 deg. C. temp rise is also what would be expected for a decrease of 29.5 Megatonnes in SO2 emissions.

        Thus, SO2 emissions in the atmosphere prior to 1879 were at least 29.5 Megatonnes, which significantly narrows the gap between then and now.


      • Global SO2 emissions are at least ten times larger today than in 1850, and the IPCC would say much more, so you have to reconcile that with warming that has occurred in that period. Clearly SO2 is not the main factor in that.

      • Jim D.

        I suspect that SO2 emission levels for the 1850 era are seriously under-reported. For example, there would have been large amounts of SO2 introduced into the atmosphere from the widespread use of coal for heating, which may not have been included.

        I have sent a query to Dr. Klimont regarding this.

      • Coal would have been the main source they started their estimations with. Was coal use in 1850 anything like today’s levels? No. The global population was 1 billion, most of which was not developed.

      • So you don’t have a link, Burl, let alone a peer reviewed journal paper.

        Surprise surprise.

      • David:

        Google the reference. You will be surprised.

        Time to retire.

      • > you don’t have a link

        Burl offered you a way to find one. Here:

        https://wattsupwiththat.com/2015/05/26/the-role-of-sulfur-dioxide-aerosols-in-climate-change/

        You’re welcome, sea lion.

      • Burl Henry commented:
        “For example, increased solar radiance (A possibility, although I have no data on it at this time.)”

        ftp://ftp.pmodwrc.ch/pub/data/irradiance/composite/DataPlots
        http://www.acrim.com/Data%20Products.htm
        http://lasp.colorado.edu/data/sorce/tsi_data/daily/sorce_tsi_L3_c24h_latest.txt
        http://spot.colorado.edu/~koppg/TSI/
        http://www1.ncdc.noaa.gov/pub/data/paleo/climate_forcing/solar_variability/lean2000_irradiance.txt

        “Population growth. There were 1.2 billion people in 1850. Now there are more than 7.1 billion, with most all of them inputting far more heat into the atmosphere through the use of energy than in 1850.”

        Civilization runs on about 20 terawatts. Humans themselves emit about 100 Watts each, or collectively only 0.7 terawatts. Spread over the Earth’s surface, the 20 terawatts comes to only about 0.04 W/m2, about the additional forcing from manmade GHGs added every year.
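
        The arithmetic behind those figures, spread over the Earth’s surface, as a back-of-the-envelope check:

            # Back-of-the-envelope check of the waste-heat numbers above.
            EARTH_SURFACE_M2 = 5.1e14           # ~5.1e8 km^2
            civilization_w = 20e12              # ~20 TW of primary energy use
            metabolic_w = 7.1e9 * 100.0         # ~7 billion people at ~100 W each -> ~0.7 TW

            print(civilization_w / EARTH_SURFACE_M2)  # ~0.04 W/m2
            print(metabolic_w / EARTH_SURFACE_M2)     # ~0.0014 W/m2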

      • David:

        You indicated that humans emit about 100 watts. Interesting information!
        However, my intended comment was that most of those extra people are using far more heat-emitting energy now than they were back in 1850, for transportation, lighting, appliances and so on. Our “footprint” is much greater than 100 watts.

      • David:

        Not convinced that your “100 watts person” estimate is anywhere near being correct.

        I just turned on a 100 watt lamp, thus doubling my “footprint”

      • Burl wrote:
        “Our “footprint” is much greater than 100 watts.”

        That’s why I gave the 20 TW number, and calculated with it.

      • Two thirds of the world does not have near the footprint you have David. If you are keeping track…

      • Burl wrote:
        “Not convinced that your “100 watts person” estimate is anywhere near being correct.”

        You emit about as much energy as you get from food. 2400 Cal/day (1 Cal = 1 kcal = 1000 cal). Do the math.
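
        Doing that math with rough numbers:

            # ~2400 kcal/day of food energy, dissipated as heat over a day:
            kcal_per_day = 2400
            watts = kcal_per_day * 4184 / 86400   # joules per day over seconds per day
            print(round(watts))                   # ~116 W, i.e. roughly 100 W per person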

        “I just turned on a 100 watt lamp, thus doubling my “footprint””

        That is already included in the 20 TW number I used.

      • Jim D | September 18, 2016 at 12:29 am |
        Yes, CO2 forcing especially because that has added 2 W/m2. The sun is weaker now than in most of the last two centuries, so we can count that one out, and the fastest warming areas are away from cities, so that counts out your other supposition.

        Whoever told you this is wrong. Please do not repeat this claim again.

        http://spot.colorado.edu/~koppg/TSI/TIM_TSI_Reconstruction.jpg

        The current TSI is higher than in the pre-cycle-18 era. The average for this cycle is approximately 1361. It is the lowest in about 72 years (the start of cycle 18 was 1944).

        This makes me a little suspicious of the future cooling claims because the TSI would have to drop significantly if CO2 has any forcing value whatsoever. The late 30s and 40s were pretty warm and there are many 30s and 40s maximum temperature records.

    • “Yes Kip, I think this issue of the attractor’s properties is critical. It could be a very high dimensional manifold in which case the “climate of the attractor” may take a very long time to simulate. But the real problem I think is there is no reason to expect standard numerical methods to be meaningful on the really coarse grids used with all the turbulence and other subgrid models.”

      The proof is in the pudding. All climate models give reasonable numbers for ECS. Not exactly the same, but reasonable.

      Look at the Quaternary. Its climate looks fairly predictable from Milankovitch factors. Not exactly so — but the uncertainty is in the carbon cycle response, not the radiative forcing.

      So where are all the strange attractors in the Quaternary?

      Maybe one is out there somewhere in our future. Why is it likely there’s one in the next 100 years? Or the next million? The history of the Quaternary suggests they are very, very rare.

      • David, given selection bias and tuning I would regard GCM simulations as provisional pending sensitivity studies for the thousands of parameters.

        Nonlinear systems have three possible asymptotic attractors: fixed points, stable orbits, and strange attractors. Turbulent systems probably only have strange attractors. The attractor is the only faint hope for these simulations to be meaningful. As I mentioned above, we really know very little about its properties and how discrete approximations change them. Lack of grid convergence for LES is a real problem requiring further research. Without that the models would be little more than parameter curve fits to the training figures of merit.

      • dpy6629: Where are the attractors during the 2.6 Myrs of the Quaternary?

        If none, why should I expect that one is imminent?

      • David, Rossby waves are chaotic and evidence of a strange attractor. As I said above, you should embrace the attractor as it’s our only chance to show these simulations mean something.

      • David Appell,

        You may be confused about the difference between a point attractor and a chaotic strange attractor. Asking “Where are all the strange attractors in the Quaternary?” is an example of your misunderstanding, unless you misspoke.

        Chaotic systems are chaotic – weird, if you prefer. For some initial values, the system rapidly converges to zero – stable but meaningless. For other values, outputs become infinite. For the simple Lorenz equations, certain values produce a wide variety of three dimensional toroidal knots – stability of a kind.

        As far as I am aware, it is still impossible to predict initial values which will produce certain outcomes, mathematically. There are ranges of values which are seen to produce certain outcomes, although it cannot be shown that chaos may not occur within any assumed range.

        Lorenz said – “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.”
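
        A minimal numerical sketch of that statement, using the toy Lorenz-63 system (illustrative only): two initial states differing by one part in a million soon end up on completely different parts of the attractor.

            # Sensitive dependence on initial conditions in the Lorenz-63 system.
            def step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
                x, y, z = state
                return (x + dt * s * (y - x),
                        y + dt * (x * (r - z) - y),
                        z + dt * (x * y - b * z))

            a = (1.0, 1.0, 1.0)
            b_state = (1.000001, 1.0, 1.0)   # perturbed by one part in a million
            for _ in range(3000):
                a, b_state = step(a), step(b_state)
            print(a[0], b_state[0])          # the two trajectories have long since decorrelated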

        Both weather and climate appear to be examples of deterministic chaotic systems.

        Predicting future outcomes in any useful sense remains impossible.

        Cheers.

      • “David, given selection bias and tuning I would regard GCM simulations as provisional pending sensitivity studies for the thousands of parameters.”

        So all parameters are as important as atmospheric CO2 concentration, and should be treated as such?

      • Turbulence model parameters can make a very big difference and affect the boundary layer distribution of energy. Mostly, as the recent commendable paper on model tuning admitted, we just don’t really know. Let’s get busy and find out.

      • Mike Flynn:

        Where is *any* attractor in the Quaternary, strange, quantum, weird or otherwise?

        In other words, where is all the chaos? The Quaternary climate looks to have discernable patterns, not large chaotic jumps hither and yon.

        Merely referring to Lorenz won’t do it here.

      • dpy6629 wrote:
        “Turbulence model parameters can make a very big difference and affect the boundary layer distribution of energy.”

        And AGAIN: where is all this crazy chaos over the 2.6 Myrs of the Quaternary?

      • “Rossby waves are chaotic and evidence of a strange attractor.”

        And where is the evidence it’s mattered over the Quaternary?

      • David Appell,

        You don’t seem to appreciate that the rules of physics are the same now as when the Earth was created. Electrons and photons act in the same fashion. The properties of matter are the same. Assumptions, I know, but they’ll do me in the absence of evidence to the contrary.

        The fact that you don’t understand deterministic chaotic systems will not make them vanish. If you are claiming that the atmosphere obeyed different physical principles in the past, I might beg to disagree.

        Just because you cannot understand something, does not mean it doesn’t exist. You can see an attractor just as clearly as you can see 285 K, or 25W/m2.

        How many Kelvins, or W/m2 can you see in the Quaternary? Do they not exist, just because you can’t see them?

        In any case, as the IPCC said, the prediction of future climate states is not possible. I agree, but you may not.

        Cheers.

      • Mike Flynn: If you think attractors and/or chaos were important in Earth’s past climate, then simply point to when that was.

        I’ve asked several times now. Clearly none of you can do it.

      • David Appell,

        You’re just being silly now.

        According to the IPCC, the climate is a chaotic system. You may not agree. You are free to believe anything you wish.

        You have asked that I point out a specific time when chaos was important in Earth’s past climate. As you have not provided a definition of important, and do not seem to understand the importance of chaos in physical processes at all levels, I hope you won’t mind if I first ask you to provide a specific time when the Earth’s past climate was not, as the IPCC states, a chaotic system.

        Playing with words doesn’t change facts. If you can provide new relevant facts, I’ll change my views, obviously.

        Cheers.

      • When did chaos last play an important role in Earth’s climate?

      • David Appell,

        When did it not?

        I note you choose not to define “important”. Not unexpected, really.

        Cheers.

      • “When did it not?”

        Throughout the very regular Quaternary.

        You’ve finally admitted you can’t point to any chaos. That was my point all along.

      • David Appell,

        I believe the “laws of physics” applied through pre history. Therefore, the climate operated, as the IPCC states, in a chaotic manner, the same as now.

        Not important to you, maybe.

        As the Quaternary period covers the present, adverse weather effects due to the chaotic nature of the atmosphere have been important to me. Cyclones, floods, hurricanes, blizzards, extreme cold and heat have all affected me.

        Maybe you deny the existence of chaos, the laws of thermodynamics, and similar things. That’s fine, but their existence or no, does not rest on what you think. Many scientists believed in things later discovered to be wrong or non-existent, or refused to accept things later found to be true.

        So far, I haven’t had to change many ideas based on new information. A few, but not many. Just lucky, I guess.

        Cheers.

      • Mike Flynn commented:
        “Therefore, the climate operated, as the IPCC states, in a chaotic manner, the same as now.”

        So, again, what’s the best example of chaos during the Quaternary?

        Because the last million years look fairly regular:

        https://seaandskyny.files.wordpress.com/2011/05/figure11.jpg

      • DavidA, you have just proved Kip’s point and mine. Your graph is exactly like Ulam’s first nonlinear waves paper, which predates Lorenz by at least a decade, and is the signature of a strange attractor.

        “DavidA, you have just proved Kip’s point and mine. Your graph is exactly like Ulam’s first nonlinear waves paper, which predates Lorenz by at least a decade, and is the signature of a strange attractor.”

        How is that a “nonlinear wave”? It’s close to a periodic function, modulated mostly by Milankovitch cycles.

      • It looks exactly like fluctuations in a turbulent boundary layer. I don’t have it downloaded but the nonlinear waves paper was by Stanislaw Ulam and it’s mentioned in his autobiography. Look at any flow visualization of a separated flow and you will see the same sort of thing.

        What is interesting about ice ages is that total forcing changes don’t cause them, but small changes in the distribution of forcing do. It’s a very subtle and nonlinear effect that GCMs can’t really capture, at least as of the last time I checked.

      • “It looks exactly like fluctuations in a turbulent boundary layer.”

        It “looks” more like the sum of a few simple sinusoids.
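
        For what it is worth, here is a quick way to see what “a sum of a few simple sinusoids” at roughly Milankovitch-like periods looks like (arbitrary amplitudes, not a reconstruction):

            # Sum of sinusoids at roughly Milankovitch-like periods (kyr); amplitudes are arbitrary.
            import math

            def proxy(t_kyr):
                return (1.0 * math.sin(2 * math.pi * t_kyr / 100.0)    # ~100 kyr eccentricity
                        + 0.6 * math.sin(2 * math.pi * t_kyr / 41.0)   # ~41 kyr obliquity
                        + 0.4 * math.sin(2 * math.pi * t_kyr / 23.0))  # ~23 kyr precession

            series = [proxy(t) for t in range(0, 800, 5)]  # last ~800 kyr in 5 kyr steps
            print(min(series), max(series))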

      • David, now you are starting to resort to the curve fit method and Fourier analysis? We already know climate and weather are chaotic according to the IPCC and Palmer and Slingo.
        You should embrace the attractor. It’s your only chance that climate models are meaningful.

        All turbulent flows are chaotic and the atmosphere is no exception. Whether they are predictable is unresolved and the likely answer is some are and some aren’t.

      • Milankovitch forcing cycles are predictable maybe up to millions of years into the future. That is not chaos. You have to distinguish this from Lorenz-style chaos or turbulence that are not predictable because they only depend on previous states, not on a predictable future driver like the orbital properties.

      • JimD, just because you have a name for the cycles means nothing. The cycle itself is chaotic as orbital mechanics is well known to be on long time scales.

        Embrace the attractor, it’s your only hope GCMs are more than complicated energy balance methods.

      • You have to learn to distinguish true chaos from predictable combined cycles.

      • dpy6629 commented:
        “JimD, just because you have a name for the cycles means nothing. The cycle itself is chaotic as orbital mechanics is well known to be on long time scales.”

        Prove it! Instead of repeatedly asserting it.

        A simple pendulum also has an attractor. That doesn’t mean its motion is chaotic.
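
        A minimal sketch of that contrast, using a damped pendulum (the idealized frictionless pendulum conserves energy and has no attractor at all): the trajectories below spiral into the fixed point (theta = 0, omega = 0), predictably and without chaos.

            # Damped pendulum: trajectories decay to the fixed-point attractor at (0, 0).
            import math

            def simulate(theta, omega, damping=0.3, g_over_l=1.0, dt=0.01, steps=5000):
                for _ in range(steps):
                    omega += (-damping * omega - g_over_l * math.sin(theta)) * dt
                    theta += omega * dt
                return theta, omega

            print(simulate(1.0, 0.0))    # ends up essentially at (0.0, 0.0)
            print(simulate(-2.0, 0.5))   # different start, same attractor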

      • dpy wrote: “All turbulent flows are chaotic and the atmosphere is no exception. Whether they are predictable is unresolved and the likely answer is some are and some aren’t.”

        I haven’t seen anyone here explain how turbulence matters for long-term climate change, which is mostly about energy conservation.

      • I haven’t seen anyone here explain how turbulence matters for long-term climate change, which is mostly about energy conservation.

        What’s the difference in energy balance, between having 10 hurricanes/cyclones vs 20 hurricanes/cyclones per year?

      • Well DavidA the issue is the details of the turbulence could change. Ice ages which are big changes are simply not about average forcing but the details of its distribution. GCMs right now pretty much don’t resolve turbulence and don’t model it either. Do they get much right that simple energy balance methods miss?

        In any case, you seem to be laboring under a simplified understanding of fluid dynamics. It’s fundamentally different from electromagnetics or structural analysis.

        If you want to increase your understanding, I can send you privately a layman’s intro to the subject.

      • The simple pendulum is a stable orbit not a strange attractor. It’s very well accepted that orbital mechanics and turbulent flows are strange attractors even when the naive “see” cyclical patterns.

      • dpy: I’d prefer something above layman level regarding fluid mechanics. My email address is on my Web site, davidappell.com

        You wrote:
        “Well DavidA the issue is the details of the turbulence could change. Ice ages which are big changes are simply not about average forcing but the details of its distribution. GCMs right now pretty much don’t resolve turbulence and don’t model it either. Do they get much right that simple energy balance methods miss?”

        I still don’t see how ANY of that implies chaos. It looks to be about the distribution of sunlight, ice-albedo feedbacks, and the subtleties of the carbon cycle.

      • dpy wrote:
        “The simple pendulum is a stable orbit not a strange attractor.”

        It’s not strange, but it has an attractor in phase space at (theta=0, v=v_max=v(theta=0)). And it’s not chaotic.

        I still have yet to see a proof that the ice ages are evidence of chaos.

  21. Thanks, Dan for this essay.
    So hard to point out all the different factors and not get bits taken the wrong way.
    GIGO does not sum it up.
    It is more Garbage in and predicated result out.
    The result, like a stopped watch, might be right sometimes, close sometimes, but impractical for use if you have to get somewhere on time.
    I still think Climate models are useful and must be used.
    Blind adherence to desired algorithms when they produce wrong results is a worry.
    If all it takes is a lower climate sensitivity for instance, I am probably wrong, but why not put it in and use the model if it works and worry about why we are missing something here later.
    Fluid dynamics and chaos, yes, but Nick is probably right that in most situations we will have some useful predictive power. Furthermore, if it trends back to average it will probably take off in the same direction again. We just have to be aware it is “probably”, not “definitely always”.

  22. Not even Garbage in, Data in, Predicated result out, wrong assumptions for algorithms

  23. A model is an approximation of reality.

    The question is whether the model is a good enough approximation that it responds in a similar way to reality if you perturb it with a change of some sort.

    For example, if adding CO2 radically changes the basis of some key assumptions such as the way the atmosphere and ocean exchange energy, then the model may respond incorrectly.

    The above article does not address the model approximations in these terms, so is merely a worthless restating of what model developers already know.

    • By the same token, the simplistic and incomplete characterization as ‘Laws of Physics’ does not address the model approximations of reality. Model developers who already know that should not be repeating it.

      • You are arguing with descriptions of models for people not familiar with numerical analysis and physics-based parameterizations.

        And even then, your second example from Prof Tim Palmer is qualified clearly enough that a “flaw” in his terms means a departure from the laws of physics rather than an approximation. We know what he means, but you choose to pretend not to.

        Approximations can be based on the laws of physics and they can be validated against either experimental results or more expensive models with greater fidelity.

      • I was not aware that I had used an example from Tim Palmer.

  24. > It is critical that the actual coding be shown to be exactly what was intended as guided by theoretical analyses of the discrete approximations and numerical solution methods.

    I don’t always V&V discrete approximations, but when I do, I V&V their exactness.

    Why V&V is so crucial is simply asserted.

    There would not be any need to invoke V&V to refute a self-contradictory statement, more so if it is especially self-contradictory.

  25. Let me be explicitly clear on a couple of points.

    (1) I did not characterize GCMs as a case of GIGO.

    (2) I did not say that GCMs are not useful.

    I do not, and will not ever, apply those characterizations.

    • Hi Dan. Thanks for the article and the thought you put into it. BTW, went to your web site and want to thank you also for your article of January 14, 2015. Wish I had found it sooner.

    • Dan, it would have been good if you had looked at the countless verifications and validations of GCMs that have been published, and commented on those, instead of the disconnected speculation that appears here. That way you might have found something to back up your arguments, but based on the evidence of this article I think you have not looked at them. Or you have, and found nothing to criticize in their outputs, because we don’t see anything specific here at all. An article on GCMs should at least illustrate something about their results, I think.

  26. Dan Hughes, thank you for this terrific and informative post.

    In finance, models are prepared (by financial consultants, banks, etc.) to provide a framework for understanding what might happen under a base case and multiple sensitivity cases representing different economic conditions, market assumptions, and so on. Such models can correctly be described as scenario models. The forecasts for the independent inputs on the economy and markets are not known with great precision, but running scenario cases basically gives a “what if” picture of what “might” or “could” happen.

    Two factors are important. First, the structures of the models and whether they represent our best knowledge of the process, as you put it. Importantly, that is not to suggest at all that the base case model is a good one; it is simply the best we can have at the time. For investment in a multibillion-dollar petrochemical plant with bank project financing this is relatively straightforward: the chemistry and engineering define the process, and the financial terms are based on what kind of financing deal is negotiated with the banks. The “cases” run are selected to cover the range of volumes, prices and margins based on historical data. To test the cases, a backcast case is run to see how well the model predicts actual historical results. Climate models need to run backcast tests to see how well the models actually perform versus actual historical data. However, the real test comes after the investment is made: whether, and how well, the base case predicts actual performance in the future. Often not that well. The input assumptions on the economy and markets can’t be predicted or tested a priori with certainty because the systems are overspecified. This is also the case with climate forecasts, and here the challenge is much more difficult since the climate models are sorely lacking in such things as clouds, moisture, solar variability, etc.

    Someone above gave a link from a google search illustrating a bunch of examples of 10 color charts produced by climate scientists. I assume this was to suggest how wonderful climate models are at generating beautiful colorful contour plots illustrating such things as temperature prediction, sea level rise, movement of the short-tailed marmots in the Canadian Rockies during mating season, etc. These are reminiscent of the bank models that we would run based on various sets of assumptions. Putting aside the beautiful and impressive gradient plots, the outputs are only as good as the basic models and scenario assumptions they are based on, which may be good but may also be GIGO: garbage in, garbage out. And today people are very skilled at googling and finding pages and pages of such beautiful and impressive charts.

    The lesson here is that glossy colorful contour plots, which abound in articles appearing in professional journals such as Science and Nature, are only as good as the models and assumptions upon which they are based. However, government departments and mainstream media then memorialize the beautiful glossy colorful charts to push their own policy agendas, failing to discuss or even mention the limitations in the models and data on which they are based.

  27. Dan, I may have missed it, but I think your post gives far too little attention to subgrid models such as turbulence models. All turbulent flows are time dependent and chaotic. The issues you raise are indeed issues, but it’s the subgrid models that are really a fundamental unsolved problem. There are also interesting issues surrounding time accurate calculations but I don’t have time to go into them now.

    I have a layman’s introduction to CFD I may send to Judith soon. Your bottom line, Dan, is correct. Appeals to the “laws of physics” are misleading and may deceive laymen into thinking GCMs are just like structural analysis. That’s a dangerous falsehood.

  28. F.W. Aston received the 1922 Nobel Prize in Chemistry for measuring the masses of atoms and reporting them as nuclear “packing fractions.”

    Drs. Carl von Weizsacker and Hans Bethe did not understand nuclear “packing fractions” and proposed the seriously flawed concept of nuclear “binding energies” instead.

  29. Dan,
    I’ve found this in model doc’s a few times, but I think it would be a good addition to your article.
    There is a mass conservation hack at air-water boundaries: it allows a supersaturation of water vapor, otherwise the models don’t warm enough. This hack actually warms too much, so they monkeyed with aerosols to tune the output.
    Actually I saved this one. It’ll probably disappear now :)
    http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000
    Now this math is beyond me, and maybe it doesn’t do what I think, but I’ve seen earlier variants of this back in one of Hansen’s Model D(?) TOE’s.

    • Approximations that gain or lose energy or mass can be tested in a control model to ensure that the effect with the somewhat arbitrary correction doesn’t introduce warming or cooling in the absence of a change in forcing.

      “Monkeying with aerosols” is an entirely separate issue.

  30. Brunch break. I’m not on the clock any more, so I can take long-ish brunches. Back later.

    Thanks for the great comments.

  31. There is a separate fundamental problem with GCMs. The finest feasible grid scale is presently 110 km x 110 km at the equator. Most CMIP5 models are 250 km. Yet we know from weather models that to properly resolve convection cells (thunderstorms and precipitation) a grid of 4 km or less is required. The NCAR rule of thumb is that doubling resolution by halving grid size increases the computational burden 10x (the time step has to more than halve also). The computation needed to minimally model key climate processes like convection, precipitation, and clouds is 6-7 orders of magnitude more than presently feasible with the biggest, fastest supercomputers.
    So all GCMs have to be parameterized to get around the computational intractability of global climate. Those parameters are tuned to best hindcast; for CMIP5 the tuned hindcast period was YE2005 back to 1975, three decades. And that automatically creates the attribution problem. The rise in temp from ~1920-1945 is essentially indistinguishable from the rise from ~1975-2000. Yet the IPCC itself says the former rise was mostly natural; there simply was not a sufficient increase in CO2. Yet the IPCC attribution in the latter period is to GHE, mainly CO2. Pretending natural variation has ceased is a fundamental error, which the growing divergence between modeled and observed (balloon and satellite) temperatures is revealing.
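
    To make the rule-of-thumb arithmetic explicit, here is a quick sketch using only the numbers quoted above (illustrative only; the grid spacings and the 10x-per-halving factor are the ones stated in this comment):

```python
# Back-of-envelope version of the NCAR rule of thumb quoted above:
# each halving of the grid spacing costs roughly 10x more computation.
from math import log2

current_km = 250.0       # typical CMIP5 grid spacing
target_km = 4.0          # spacing needed to resolve convection explicitly
cost_per_halving = 10.0

halvings = log2(current_km / target_km)        # ~6 halvings of grid spacing
cost_factor = cost_per_halving ** halvings
print(f"{halvings:.1f} halvings -> about {cost_factor:.0e}x more computation")
```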

  32. “The art and science of climate model tuning”
    http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-15-00135.1

    Page 37 shows that, of the climate modelers surveyed, 96% answered “yes” to the question

    “is your model being tuned by adjusting model parameters to obtain certain desired properties, e.g. radiation balance?”

    • Just imagine all the models we could develop if we could explore a world with no radiative balance.

      The scales are being tipped. Institutions are being bought. Wake up.

      Let’s fund research on models that don’t preserve radiative balance!

      • Just imagine all the models we could develop if we could explore a world with no radiative balance.

        Exactly when is it in balance? What day and time, exactly? And by balance do you mean quiescence, incoming=outgoing?

      • Planet is 4.543 billion years old, and though it’s cooler than when it first coalesced, it’s neither reached the temperature of the CMB, nor that of the bright yellowish orb which heats it.

        Steady state is your huckleberry, MiCro. Relax, even engineers use it.

      • Relax, even engineers use it.

        Then the wind really starts blowing, the bridge starts a’oscillating, then it all falls down.

      • Varying galactic cosmic ray radiation changes aerosol and cloud development. Solar cycles vary total solar irradiance.
        So earth’s incoming and outgoing radiation are NOT in balance but cause variations in surface heating/cooling.
        Burning biomass and coal both increases soot and aerosols (aka “brown” or “black” “carbon”).
        Indeed we should fund models that model, quantify, and predict the consequences of such natural variations.

      • Exactly when is it in balance? What day and time, exactly? And by balance do you mean quiescence, incoming=outgoing?

        They’re talking about the long-term energy balance, that energy in == energy out. (Read the paper, it’s good, and it explains what they mean).

      • > Exactly when is it in balance? What day and time, exactly? And by balance do you mean quiescence, incoming=outgoing?

        Only three questions, micro? Are you sure you can’t do better than that? Do you have any idea how important it is just to ask questions around here?

        Is the truth out there, or what?

      • Only three questions, micro? Are you sure you can’t do better than that? Do you have any idea how important it is just to ask questions around here?
        Is the truth out there, or what?

        But if you want to avoid (or not) funding models that don’t preserve radiative balance, surely you must be able to define what you wish to avoid!
        How will you avoid such a model?

      • To avoid such a model, do as Dan does and do none!

        But what if I have two? How would I pick the one Willard would choose as better?

  33. The sole issue for computational physics is Verification of the solution.

    How do we do that?

    • There are books on the subjects of Verification and Validation of models, methods, software, and applications. The two that I turn to are:

      This book

      And this book

      There’s a bunch of reports, and associated journal papers, from Sandia National Laboratory. The Web site will have a link to reports about their research results.

      A Google, either plain or Scholar, will produce many, many hits. Here’s an example:

      This Google search

      Papers are now appearing frequently in journals devoted to numerical methods and various science and engineering disciplines.

      GCMs contain models and methods for numerous aspects of the physical domain, and have a wide variety of application objectives. Direct application of accepted Verification procedures to the whole ball of wax is very likely not possible. That does not prevent the various pieces from being individually investigated.

      The Method of Exact Solutions (MES) is usually a good starting point because that method requires that the model equations be extremely simplified, thus allowing focus. That is also a downfall of the method in the sense that the simplifications throw out the terms that are most difficult to correctly handle in numerical methods.

      The Method of Manufactured Solutions (MMS), on the other hand, has proven to be an excellent way to determine the actual order of numerical solution methods (a minimal sketch is given at the end of this comment). Manufactured Solutions have been, and are being, developed all the time now. And MMS has also been used to locate bugs in coding. Again, a Google will find such reports and papers.

      The properties and characteristics of candidate numerical solution methods can also be directly investigated prior to coding them. Richtmyer and Morton is the standard classic introduction. Computer algebra and symbolic computing have greatly enhanced what is possible to learn by looking directly at the methods.
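
      For concreteness, here is a minimal MMS sketch (my own illustration, not taken from any GCM or from the books above): it manufactures u(x,t) = sin(pi x) exp(-t) for the 1-D heat equation u_t = alpha u_xx, adds the corresponding source term, and checks that a second-order central-difference scheme actually shows second-order convergence. The equation, coefficient, and grid sizes are arbitrary choices for illustration.

```python
# Minimal Method of Manufactured Solutions (MMS) sketch -- not from any GCM.
import numpy as np

ALPHA = 0.1

def exact(x, t):
    return np.sin(np.pi * x) * np.exp(-t)

def source(x, t):
    # S = u_t - ALPHA*u_xx for the manufactured solution above
    return (-1.0 + ALPHA * np.pi**2) * np.sin(np.pi * x) * np.exp(-t)

def max_error(nx, t_end=0.1):
    x = np.linspace(0.0, 1.0, nx + 1)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2 / ALPHA            # keeps the explicit scheme stable and
    u = exact(x, 0.0)                    # the time-stepping error subordinate
    t = 0.0
    while t < t_end - 1e-12:
        step = min(dt, t_end - t)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + step * (ALPHA * lap + source(x, t))
        t += step
        u[0], u[-1] = exact(0.0, t), exact(1.0, t)   # boundary values from u_m
    return np.max(np.abs(u - exact(x, t)))

errs = [max_error(nx) for nx in (20, 40, 80)]
for coarse, fine in zip(errs, errs[1:]):
    print("observed order ~", np.log(coarse / fine) / np.log(2.0))   # expect ~2
```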

      • I can vouch that the Patrick Roache book is excellent (I purchased it on Dan’s recommendation).

        Note, several years ago, I had some posts at CE on climate model verification and validation
        https://judithcurry.com/?s=verification+and+validation

      • What single legitimate reason could possibly exist that explains why Western academia would steadfastly refuse to insist on robust model verification and validation in climate science?

        “What single legitimate reason could possibly exist that explains why Western academia would steadfastly refuse to insist on robust model verification and validation in climate science?”

        The longest chapter in the IPCC 5AR WG1 is Chapter 9: “Evaluation of Climate Models.”

      • For all their self-aggrandizing, the data manipulators of the global warming movement have become the Brian Williams’ of science. But, a betrayal of the public trust is their biggest crime. Roger Pielke, Jr. asked for a copy of the raw data back in August 2009 to conduct his research. He could hardly overlook the degree of scientific sloppiness and ineptitude demonstrated by CRU after being informed that only quality controlled, homogenized data, i.e., adjusted data, was still available as all of the original raw data prior to 2009 had been lost (forever making the duplication, verification or reevaluation of the homogenized data impossible). We’re talking about research practices that David Oliver (see, “It is indicative of a lack of understanding of the scientific method among many scientists”) would describe as, shoddy at best and fraudulent at worst.
         

        In the business and trading world, people go to jail for such manipulations of data.

        ~Anthony Watts

  34. As a lukewarmer who also doesn’t have much confidence in the current verifiability of climate modeling software codes, I’ve been challenged by climate activists in my own organization to produce my own climate model as an alternative to current GCMs.

    Recognizing the valuable contribution such a model could make towards gaining public acceptance of the pressing need for adopting fully comprehensive government regulation of America’s carbon emissions, I’ve accepted the necessity of producing my own climate model as an alternative to what’s out there now.

    But as someone whose normal job is working down in the nitty-gritty trenches of nuclear plant construction and operations, I don’t have millions of dollars of my own to spend on computer processing time and on gaining access to the services of the legions of climate scientists, climate software coders, and climate software QA specialists that would be needed to produce my own version of a software-driven climate model.

    Great galloping gamma rays, what am I to do!?!?

    This is my solution: Graphical analysis to the rescue! As I’ve previously posted on Climate Etc., here once again is my own graphical climate model of where GMT might go between now and 2100:

    http://i1301.photobucket.com/albums/ag108/Beta-Blocker/GMT/BBs-Parallel-Offset-Universe-Climate-Model–2100ppx_zps7iczicmy.png

    Three foundational assumptions are made in the Parallel Offset Universe Climate Model: (1) Trends in HadCRUT4 surface temperature anomaly can be used as a usefully-approximate measurement parameter in predicting future temperature trends in the earth’s climate system as a whole; (2) Past history will repeat itself for another hundred years with the qualification that if the earth’s climate system is somewhat more sensitive to the presence of carbon dioxide, there will be more warming; if it is somewhat less sensitive, there will be less warming; and (3) Upward trends in HadCRUT4 surface temperature anomaly will roughly parallel current upward trends in atmospheric CO2 concentration.

    That’s it, that’s the whole model. There is nothing more to it than what you can read directly from the graph or what you can directly infer from its three alternative GMT trend scenarios. There is no physics per se employed in the construction of this model, parameterized or otherwise. There are no physics-based equations and no numerical analysis simulations of physics-based equations carrying artificially-imposed boundary constraints. There is no software coding involved. There is no software QA because there is no software coding; and there is no model validation process other than to follow HadCRUT4’s trends in surface temperature anomalies year by year as those trends actually occur.

    As a service to all humanity, I, Beta Blocker, mild mannered radiological control engineer, hereby relinquish all personal rights to the Parallel Offset Universe Climate Model in the hope that dedicated climate activists such as David Appell, Bill McKibben, Leonard DeCaprio, and Hillary Clinton — supported by dedicated environmental activist groups such as 350.org, the Children’s Litigation Trust, the Natural Resources Defense Council, and the Sierra Club, etc. etc. — will move decisively forward with pressing the EPA to strongly regulate all sources of America’s carbon emissions, not just the coal-fired power plants.

    • Brave move. These guys welcome all opinions, except when they don’t. Thank you curryja. Nuclear power? I’m there. http://www.eia.gov/state/?sid=MD.

    • Outstanding! One thing we all know for sure is that the near future looks like the recent past, except for when it doesn’t.

    • John and Justin, would either of you care to speculate as to what the debate concerning the long-term impacts of ever-increasing concentrations of CO2 in the earth’s atmosphere might look like a hundred years from now if my Scenario 3 is the one that actually occurs; i.e., atmospheric CO2 concentration as measured by the Keeling Curve reaches approximately 650 ppm by 2100, and the earth’s climate system warms at roughly +0.1 degree C per decade on average between 2016 and 2100? If that’s what happens, what will our descendants be saying a hundred years from now concerning the predictions that were being made here in the year 2016?

  35. It appears that GCMs attempt to address climate by treating the atmosphere as a continuum. That should be valid on a macro level.

    An oversight is not addressing action at the level of gas molecules. Thermalization takes place at the level of gas molecules. Thermalization explains why CO2 (or any other ghg which does not condense in the atmosphere) has no significant effect on climate. (Thermalization results from interaction of atmospheric gas molecules according to the well understood Kinetic theory of gases. A smidgen of quantum mechanics helps in understanding that ghg molecules absorb only specific wavelengths of terrestrial electromagnetic radiation)
    http://globalclimatedrivers2.blogspot.com

  36. Willis Eschenbach

    Thanks for a good read, Dan. I liked this part:

    While the fundamental equations are usually written in conservation form, not all numerical solution methods exactly conserve the physical quantities. Actually, a test of numerical methods might be that conserved quantities in the continuous partial differential equations are in fact conserved in actual calculations.

    Looking at the GISS Model E code more than a decade ago, I noticed that at the end of each time step the total excess (or deficit) of energy from various small errors in all areas of the globe was simply gathered up and distributed evenly around the planet. I asked Gavin Schmidt if I understood the code correctly. He said yes. I asked how large the distributed energy (or lack of energy) was on average, and what the peak was. He said he didn’t know, they didn’t monitor it …
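
    For anyone unfamiliar with what such a bookkeeping step looks like, here is a hypothetical sketch of the technique (my own illustration of gathering a global residual and spreading it uniformly by area; it is not the actual Model E code):

```python
# Hypothetical illustration of a "gather the global residual and spread it
# evenly" step -- NOT the actual GISS Model E code, just the idea as described.
import numpy as np

def redistribute_residual(cell_energy, expected_total, cell_area):
    """Spread the global imbalance uniformly per unit area over all cells."""
    residual = expected_total - cell_energy.sum()   # energy gained/lost to numerics
    correction = residual * cell_area / cell_area.sum()
    return cell_energy + correction, residual

# toy usage: 1 J has gone missing somewhere on a four-cell "globe"
energy = np.array([10.0, 20.0, 30.0, 39.0])
fixed, residual = redistribute_residual(energy, expected_total=100.0, cell_area=np.ones(4))
print(residual, fixed.sum())   # -> 1.0 100.0
```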

    w.

  37. A Full Scope Replica Type Simulator has been built and commissioned by IGCAR, for imparting plant oriented training to PFBR (Prototype Fast Breeder Reactor) operators. The PFBR Operator Training Simulator is a training tool designed to imitate the operating states of a Nuclear Reactor under various conditions and generate response equivalent to reference plant to operator actions. Basically, the models representing the plant components are expressed by mathematical equations with the associated control logics built into the system as per the actual plant, which helps in replicating the plant dynamics with an acceptable degree of closeness. The operator carries out plant operations on the simulator and observes the response, similar to the actual plant.

    http://waset.org/publications/10000558/verification-and-validation-of-simulated-process-models-of-kalbr-sim-training-simulator

    • It has been long recognized that the world-wide nuclear power industry has been the leader in establishing V&V and other quality procedures and processes for models, methods, software, and applications. And within the industry, the USA has been the lead, and within the USA the United States Nuclear Regulatory Agency (USNRC) has been the major driver. Pat Roache, starting in the 1980s, was the pioneer in getting the attention of other scientific and engineering applications and associated disciplines, primarily through professional societies. Again, The Google is your friend.

      • The atmosphere is not the same size as a nuclear power plant, Dan. Teh stoopid modulz are not mission critical.

        Besides, V&V cost money.

      • The C in USNRC stands for Commission, of course, and I wrote Agency.

      • The size of the physical domain does not introduce any limitations relative to fundamental quality. It may well impact applications, but the fundamentals require verification no matter what the application limitations.

        Yep, quality costs money. Lack of quality, on the other hand, costs very much more.

      • > The size of the physical domain does not introduce any limitations relative to fundamental quality.

        Of course size matters in V&V.

        Maybe it’s a vocabulary thing.

      • Dan Hughes,

        The size of the physical domain does not introduce any limitations relative to fundamental quality.

        The unintentional humor exhibited by VSPs is always the best sort.

        How many sensors per unit volume in the average nuke plant? Extrapolate to the volume encompassed by all the fluids below the tropopause.

        Hang cost, let’s discuss the *physical* feasibility of that answer.

        Or you know what? We could continue pushing the real planet toward limits not seen for over a million years and just find out what happens. Who needs stinkin’ models, validated to impossible standards or not, when *hard data* can tell us all we need to know? Sure some smart engineers somewhere will be able to fix whatever might break just fine. It’s what they do.

      • Willard, I do not see that the size of the physical domain is addressed in that paper. In fact, I do not see a single reference about any physical domain. The sole focus of the paper is software.

        Kindly point me to what I have not seen.

      • brandonrgates, How many sensors per unit volume in the average nuke plant?

        On what theoretical basis is sensor density per unit volume in an engineered electricity-production facility system a scaling factor for what is needed in any other systems? Especially considering that the reference systems have safety-critical aspects.

      • > I do not see that the size of the physical domain is addressed in that paper.

        It’s right next to where the author admits he’s beating his dead horse with a stick, Dan.

        The first sentence ought to be enough:

        It is becoming increasingly difficult to deliver quality hardware under the constraints of resource and time to market.

        The bit where IBM admits using verification to find bugs more than to check for the correctness of their hardware may also be of interest. Logic gates are a bit less complex than watery processes and all that jazz.

        That said, it’s not as if modulz were never V&Ved. It still has a cost. It still is quite coarse compared to nuclear stations or motherboards.

      • brandonrgates, it seems that the instruments already applied to Earth’s climate systems produce an extremely large number of data points. Maybe that’s related to being able to get volumetric information from single instruments?

      • Dan Hughes,

        On what theoretical basis is sensor density per unit volume in an engineered electricity-production facility system a scaling factor for what is needed in any other systems? Especially considering that the reference systems have safety-critical aspects.

        Yes, those are excellent questions. They’re the sort of things I’d be thinking about if I were an engineer writing an article about best coding and validation practices for planet simulators.

        First thing I’d do is get a sense for the scale of the thing. Let’s generously assume that the critical systems of your average nuke occupy the volume of an average 2-story home in the United States … I make it about 600 m^3. The atmosphere below the tropopause alone is a 9 km thick shell wrapped around a spheroid with a volumetric radius of about 6,371 km … works out to a volume of 4.60E+18 m^3. What is that … sixteen orders of magnitude difference in volume.
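
        The arithmetic is easy to check; a quick sketch using the same rough assumptions as above:

```python
# Back-of-envelope check of the volumes quoted above (rough numbers only).
from math import pi

r_earth = 6_371e3                  # volumetric radius of the Earth [m]
shell = 9e3                        # assumed depth of the sub-tropopause shell [m]
v_atmos = 4.0 / 3.0 * pi * ((r_earth + shell)**3 - r_earth**3)
v_plant = 600.0                    # assumed "2-story home" volume [m^3]

print(f"{v_atmos:.2e} m^3")        # ~4.6e18
print(f"{v_atmos / v_plant:.0e}")  # ~8e15, roughly sixteen orders of magnitude
```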

        If we had better models and a bazillion times more computing power, I could give you a better outline of the safety-critical aspects. That might give a better clue as to what the sensor density and time resolution needs to be to do a proper validation.

        Gets circular real quick, dunnit.

        One thing I can say with some confidence … there’s gonna be some slop for the foreseeable future. On behalf of the sheer physical scale and complexity of the entire freaking planet, I offer my most sincere apologies about that.

        I’d think engineers would know better than to go monkeying around with machinery *they themselves* are telling us we don’t yet properly understand. But that’s just me.

      • What Dan isn’t telling you guys is that Validation does not refer to “reflects reality”.

        Validation means meets the Specification.

        if climate sceintists wanted to be sneaky all they would have to do is specify that the models shall be within 100% of observed values, and the spec would be met and they would be validated.

      • Steven Mosher,

        You wrote –

        “if climate sceintists wanted to be sneaky all they would have to do is specify that the models shall be within 100% of observed values, and the spec would be met and they would be validated.”

        On the other hand, the “sceintists”, having mastered the elements of spelling, could specify that their models are completely useless.

        Voilà! Specification met!

        Just as a matter of interest, “. . . within 100% of observed values . . . ” appears to be a foolish way of specifying anything, without also specifying what you are measuring. If you observe a value of 1C, you appear to be limiting yourself to plus or minus 1 C (100% of 1). You might be referring to Kelvins, I suppose, in which case your model is completely pointless.

        Playing with words cannot disguise the fact that climate models have so far demonstrated no utility whatsoever.

        Cheers.

      • Steven Mosher,

        Validation means meets the Specification.

        lol. Well I must admit, that one did get by me.

        The consolation is a supreme irony: contrarians rarely specify a threshold for utility. The rallying cry is, “The models are WRONG,” which is about as illuminating as calling water wet.

      • http://www.dtic.mil/ndia/2012systemtutorial/14604.pdf

        “The purpose of Validation (VAL) is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment. In other words, validation ensures that ‘you built the right thing.’”

        “Extreme IV&V can be as expensive and time consuming as the development effort. An example of this is Nuclear Safety Cross Check Analysis (NSCCA), conducted by an organization independent of the development organization (usually a different contractor). Its purpose is to identify and eliminate defects related to nuclear vulnerabilities: the reentry vehicle (RV), with a nuclear warhead, shall hit the intended target, not New York or Washington D.C.”

      • Or you know what? We could continue pushing the real planet toward limits not seen for over a million years and just find out what happens. Who needs stinkin’ models, validated to impossible standards or not, when *hard data* can tell us all we need to know?

        Outstanding idea. I’m fine with this and wish I had thought of it.

        The global whiners are going to keep complaining about fossil fuel use and CO2 emissions until we prove them wrong.

        The solution is to prove them wrong. We should subsidize fossil fuel producers and encourage fossil fuel consumption as part of an effort to deliberately meet or exceed the atmospheric CO2 level that the global whiners deem to be the “level of harm”.

        At that point we can declare the global whiners wrong and proceed to the next eco-environmental challenge.

        The only harm from 500, 600, 700, or even 940 PPM is that you have to mow your grass more and farm prices will be depressed a little by overproduction.

  38. Excellent summary!

  39. “1. Basic Equations Models The basic equations are generally from continuum mechanics such as the Navier-Stokes-Fourier model for mass, momentum and energy conservation in fluids…”

    I can’t verify this, but I understand there is more non-linear coupling in those models than in a whole field of bunnies.

  40. As a point of passing interest, how many hands believe energy is conserved in the Navier-Stokes equations?
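
    For anyone who wants to see where the question bites: for the incompressible equations with no-slip or periodic boundaries and no external forcing, the standard kinetic-energy identity is

```latex
\frac{d}{dt}\int_\Omega \tfrac{1}{2}\,\lvert \mathbf{u}\rvert^{2}\,dV
  \;=\; -\,\nu \int_\Omega \lvert \nabla \mathbf{u}\rvert^{2}\,dV \;\le\; 0,
```

    so kinetic energy is dissipated, not conserved, whenever the viscosity is nonzero; total energy is conserved only if the viscous dissipation is returned as heat through a coupled internal-energy equation, which the momentum equations alone do not carry.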

  41. Almost always whenever models, methods, and software issues are the subjects of blog posts, we see calls for individuals to pony up with their own models, methods, and software. No individual can have expert/guru knowledge, experience, and expertise in all of the important physical phenomena and processes of the physical domain. Typically for cases in which the physical domain has a range of important phenomena, the modeling effort will have an expert/guru for each one. Together, they will generally provide the initial efforts to formulate a tractable problem.

    From that point onwards, the number of people that are required to successfully complete a project will only increase. Such is the nature of inherently complex physical domains and associated complex software.

    Hundreds of millions of dollars over decades of time, have been spent on development of GCMs by, maybe, thirty organizations.

    Such efforts are somewhat beyond what an individual can accomplish.

    • “Hundreds of millions of dollars over decades of time, have been spent on development of GCMs by, maybe, thirty organizations.”
      And did they all get it wrong? In the same way?

      • And did they all get it wrong? In the same way?

        They better all be wrong in the same way.
        Because what’s wrong is the numerical solutions to the physics.

      • Turbulent Eddie:

        You wrote “They better all be wrong in the same way. Because what is wrong is the numerical solution to the physics”

        No, what is wrong is their inclusion of greenhouse gasses in their models.

        I have proof, from several directions, including one quite unexpected, that all of the warming that has occurred has been due to the reduction of SO2 aerosol emissions into the atmosphere. Greenhouse gasses can have had zero climatic effect.

      • NS, best as I can tell most did. Exception maybe the Russian INM-CM4. Previously discussed elsewhere. Most produce a non-existent tropical troposphere hot spot. Most produce an ECS ~2x observed by EBM or other methods. AR4 black box 8.1 justifies the emergent equivalent of following Clausius-Clapeyron across the altitude humidity lapse rate; that was proven wrong at the time, and since then by more studies. Yes, specific humidity does increase with delta T. But not enough to keep upper-troposphere relative humidity roughly constant.
        The fundamental unidirectional flaw was to tune CMIP5 parameters to best hindcast YE 2005 back to 1975 (the second required ‘experimental design’ CMIP5 submission). That inherently sweeps in the attribution problem (see my comment elsewhere in this thread). So, most got it wrong for the same basic reason, in the same basic ‘overheated’ direction. QED.

      • Rud,
        “Most produce a non-existent tropical troposphere hot spot.”
        And recent results (here and here) say they are probably right.

        “Most produce an ECS ~2x observed by EBM or other methods.”
        So who’s right? Heating is at an early (transient) stage.

        “The fundamental unidirectional flaw was to tune CMIP5 parameters”
        If that’s a flaw (big if), it is model usage. It has nothing to do with the structural issues claimed in this guest post.

      • > I have proof, from several directions, including one quite unexpected, that all of the warming that has occurred has been due to the reduction of SO2 aerosol emissions into the atmosphere.

        Citation needed.

      • I have proof, from several directions, including one quite unexpected, that all of the warming that has occurred has been due to the reduction of SO2 aerosol emissions into the atmosphere.

        SO2 and CO2 are not mutually exclusive. The effect of SO2 would appear uncertain, however, since cloud droplets are quite transient and the scattering from them occurs in all directions (and so they’re not readily observed by moving satellites that sample from only one direction at a time).

        Greenhouse gasses can have had zero climatic effect.

        Below is a scatter plot of monthly CERES satellite estimated Outgoing Longwave Radiation, OLR (which is much more isotropic than SW), versus monthly global average surface temperature. The monthly data is dominated by the NH seasonal cycle. With this cycle, water vapor (the greatest greenhouse gas) varies with temperature.

        I have applied the Stefan-Boltzmann equation to convert the OLR to the effective radiating temperature (Te); a one-line version of the conversion is sketched at the end of this comment.

        http://climatewatcher.webs.com/TE_SFCT.png

        1. The thick black line represents the Unity line.
        For this line, Te = Tsfc. For an earth with no atmosphere (or with no greenhouse gasses) Tsfc would equal Te. The blue dots represent what actually happens on earth.

        2. The slope of the blue dots is less than 1. This indicates positive feedback. There are other factors, but the increase in water vapor with temperature can account for this.

        3. The distance from a blue dot to the Unity line represents the Greenhouse Effect. The average Tsfc is around 288K. The average Te is around 255K.

      • TE, what is the order of the blue dots? Are they a sequence, or sorted by value?

        Because you could be seeing just the seasonal slope (which you mention). But that doesn’t mean feedback, just a strong seasonal signal

        As for the seasonal data, I use the change in temp to get the rate of change, both warming and cooling, to see if the rate has changed. It has, slightly, but it could be just past an inflection point; during warm years (or months) the cooling rates will be higher than in cool years.

      • Turbulent Eddie:

        You wrote “SO2 and CO2 are not mutually exclusive”

        They are, in the sense that warming from the removal of SO2 aerosols is so large that there is simply no room for any additional warming from CO2.

        (Surface temperature projections based solely upon the amount of warming expected from the reduction in SO2 aerosol emissions are accurate to within less than a tenth of a degree C, over decades).

      • TE, what is the order of the blue dots? Are they a sequence, or sorted by value?

        Because you could be seeing just the seasonal slope (which you mention). But that doesn’t mean feedback, just a strong seasonal signal

        The points are all the monthly data from 2001 through 2009.

        The feedback occurs because water vapor also increases with the NH seasonal cycle. The “shape” of the warming is different than what might occur with increased CO2, but the relationship is global and still pertains.

      • The points are all the monthly data from 2001 through 2009

        I figured that, but since they are ordered by Tsfc, what is the order of the months?
        I would expect all the Dec and Jan points at the extreme left, all the July and Aug points all the way to the right, and the rest sorted by temp between them.
        Is this how the months are ordered?

    • DH, an excellent post. You have posted before on this topic here, and it’s always enlightening. There are models and models. The 32 CMIP5 GCMs are enormously complex ‘finite element’ equivalents, and all doomed by the computational intractability of the small grid scales necessary to even begin to get important climate features like convection cells, clouds, and precipitation right from first principles. See, for example, AR5 WG1 chapter 7 on clouds. In terms of verification and validation, most CMIP5 GCMs have already invalidated themselves by producing a tropical troposphere hot spot that does not exist, by producing an ECS ~2x that observed, and by predicting polar amplification that is not happening in Antarctica.

      There are other much simpler models that still yield useful information bounding AGW. The EBMs that estimate sensitivity are one class (Lewis and Curry 2014). Monckton’s irreducibly simple equation, further reduced and properly parameterized, is another example (guest post at the time). Properly done paleoproxy reconstructions that give a sense of centennial-scale natural variation (e.g. Loehle northern hemisphere). Guy Callendar’s 1938 paper on sensitivity. Lindzen’s Bode version of feedbacks and sensitivity. These are simple logic, easy math, quick to check, and good enough for basic understanding and directional policy decisions. Not massive opaque numerical simulations of ‘physics’ that have already gone wrong because they weren’t numerically simulating the necessary real ‘physics’ on the proper scales in the first place.

    • > Such efforts are somewhat beyond what an individual can accomplish.

      I concur, Dan. It’s team work. Here’s one:

      Large, complex codes such as earth system models are in a constant state of development, requiring frequent software quality assurance. The recently developed Community Earth System Model (CESM) Ensemble Consistency Test (CESM-ECT) provides an objective measure of statistical consistency for new CESM simulation runs, which has greatly facilitated error detection and rapid feedback for model users and developers. CESM-ECT determines consistency based on an ensemble of simulations that represent the same earth system model. Its statistical distribution embodies the natural variability of the model. Clearly the composition of the employed ensemble is critical to CESM-ECT’s effectiveness. In this work we examine whether the composition of the CESM-ECT ensemble is adequate for characterizing the variability of a consistent climate. To this end, we introduce minimal code changes into CESM that should pass the CESM-ECT, and we evaluate the composition of the CESM-ECT ensemble in this context. We suggest an improved ensemble composition that better captures the accepted variability induced by code changes, compiler changes, and optimizations, thus more precisely facilitating the detection of errors in the CESM hardware or software stack as well as enabling more in-depth code optimization and the adoption of new technologies.

      I think your conclusion also extends to criticism of teh stoopid modulz too. Auditing them is not a single man feat. A string of posts on V&V can only do so much.

      • Willard, a suggestion. Rather than read modeler bragging rights, go to KNMI Climate Explorer. You do know how to do that, right? Grab the CESM CMIP5 official archived results. Now compare them to the 4 balloon and 3 satellite temperature observations from 1979. You will notice that not only did CESM do a lousy job AFTER parameter tuning for hindcasts, it did an even worse job of ‘projecting’ from 2006 to now.
        Or, you can just grab Christy’s Feb 2016 comparison spaghetti chart and sort out the CESM line. You are just wrong, and it is easily provable with archived facts. If you want to play here, up your data game.

      • > You are just wrong […]

        About what, Sir Rud?

        My turn to suggest a pro-tip: when you want people to go somewhere else, provide a link. Adding a quote also helps.

        Like this:

        Researchers have proved that extracting dynamical equations from data is in general a computationally hard problem.

        https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.108.120503

        I do hope that econometric gurus like you know what “hard” means in that context.

      • Willard

        You are right. There is a veritable smorgasbord of opinion on every thread. If someone wants us to partake of the morsel they offer they need to tempt us. The best way is to provide a link and a quote from it with perhaps a short comment as to its relevance and interest.

        tonyb

  42. Oops, this was for Nick

  43. Global climate models and the laws of physics. Thanks Dan but I found my answer in a cartoon.
    http://www.slate.com/blogs/bad_astronomy/2016/09/13/xkcd_takes_on_global_warming.html

    By that world renowned scientist. Can’t believe I wasted so much of my time reading you guys.

    • Don’t forget the sarc on a paleoproxy cartoon comment. Else we might end up in a ‘discussion’.

      • Heavy sigh. ristvan my pal. You caught me out. You are more learn-ed and eloquent than I. You comment and I’ll learn. I’ve come to realize I have nothing of substance to add to these ‘discussions’. ‘One less clown in the circus’. (timg56).

  44. Climate models are only flawed only if the basic principles of physics are,

    You are kidding, BIG TIME, right?

    Model output does not match measured data. That proves they are flawed, and proves that they don’t understand climate and that they have not properly programmed the correct basic principles of physics.

  45. “The uncertainty principle states that the position and velocity cannot both be measured, exactly, at the same time (actually pairs of position, energy and time)” – requires expansion, but is true enough.

    A chaotic system may be fully deterministic, following the known laws of physics, yet may produce completely unpredictable divergent outcomes resulting from arbitrarily small differences in initial conditions. There is no minimum numerical quantity below which different inputs will result in known outcomes.

    For any non-believers, try to provide a minimum value which will result in either predictably chaotic or non-chaotic output from the logistic difference equation. You can’t do it. Chaos exists, and rules.
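
    For anyone who wants to watch it happen, here is a minimal sketch (r = 3.9 and the starting values are arbitrary illustrative choices):

```python
# Sensitivity of the logistic difference equation x_{n+1} = r*x_n*(1 - x_n):
# two starting values differing by 1e-12 end up bearing no resemblance.
r = 3.9                      # a value of r in the chaotic regime (illustrative)
x, y = 0.4, 0.4 + 1e-12

for _ in range(60):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)

print(x, y, abs(x - y))      # after ~60 steps the trajectories have decorrelated
```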

    It should be apparent that Heisenberg’s uncertainty principle shows that inputs to a deterministic chaotic system such as the atmosphere cannot be precisely determined in any case.

    Lorenz’s butterfly effect taken to the limit of present understanding.

    The laws of physics appear to preclude the measurement of position and velocity simultaneously. Predictions based on what cannot even be measured appear to be breaking the laws of physics.

    Should offenders be prosecuted, and sentenced to write multiple times ” I must have regard to the laws of physics when pretending to predict the future”?

    Cheers.

    • “It should be apparent that Heisenberg’s uncertainty principle shows that inputs to a deterministic chaotic system such as the atmosphere cannot be precisely determined in any case.”

      Baloney. The Heisenberg uncertainty principle is about the limitations of measurements on a quantum scale. It is irrelevant for macroscopic measurements, where measurement uncertainties are far, far above the Heisenberg limits, and where continuum equations do a great job of describing the physics. (You depend on that every time you get on an airplane designed via continuum equations.)
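
      A quick back-of-envelope number makes the point (the parcel mass and the position uncertainty are arbitrary illustrative assumptions):

```python
# Order-of-magnitude check: the Heisenberg bound is absurdly small for a
# macroscopic air parcel (parcel mass and position uncertainty are assumptions).
HBAR = 1.054571817e-34      # J s

m = 1.0e-3                  # a 1-gram parcel of air [kg]
dx = 1.0e-6                 # suppose its position were pinned down to a micron [m]

dv = HBAR / (2.0 * m * dx)  # minimum velocity uncertainty from dx*dp >= hbar/2
print(dv)                   # ~5e-26 m/s, vastly below any instrument error
```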

      • David Appell,

        With respect, I believe you are wrong,

        Feynman said –

        “The simplest form on the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. you will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.”

        I believe there is a million dollar prize from the Clay Institute (as yet unclaimed) which you can pick up if you can spare the time.

        You are not alone. Many, if not most, physicists still refuse to accept that chaos can result merely from changing the input value to an equation as simple as the logistic difference equation. Even worse for some, there is no minimum value which distinguishes chaos from non-chaos.

        As far as the atmosphere is concerned, simplistic assumptions that tomorrow will be much the same as today, or that “Red sky in the morning, shepherd’s forewarning”, suffice in most cases. Obviously, a satellite picture of a giant cyclone is helpful in deciding where not to be, but history shows that government warnings are iffy at best. Numerical prediction methods don’t seem to be useful in terms of accuracy.

        With regard to the airplane red herring, maybe you might care to specify an aircraft “designed via continuum equations”? Sounds sciencey, impressive even, but conveys no useful information. Now is your opportunity to lambaste me for quoting Feynman, a deceased physicist!

        Cheers.

      • Feynman isn’t saying the problem requires quantum considerations or the Heisenbert principle.

      • (You depend on that every time you get on an airplane designed via continuum equations.)

        If you depend on a weather forecast for that plane to avoid thunderstorms, you are more likely to fly into one than around one.

  46. David Appell,

    I have not heard of the Heisenbert principle, so I will accept your assertion.

    However, your assertion about what Feynman is or isn’t saying is moot. He’s dead.

    Feynman did write that he was unable to solve the chaos inherent in supposedly simple turbulent flow, and he applied his not inconsiderable knowledge of quantum physics to the problem for several years.

    Feynman merely stated that an apparently simple problem was incapable of solution by calculation and knowledge of physics.

    Feynman was aware of Heisenberg’s principle, and stated –

    Heisenberg proposed his uncertainty principle which, stated in terms of our own experiment, is the following. (He stated it in another way, but they are exactly equivalent, and you can get from one to the other.) ‘It is impossible to design any apparatus whatsoever to determine through which hole the electron passes that will not at the same time disturb the electron enough to destroy the interference pattern’. No one has found a way around this.

    In a deterministic chaotic system, even a difference in position or velocity of just one electron may result in entirely unpredictable outcomes.

    You may not like it, but that’s the way it is (or seems to be – maybe the laws of the Universe may be different in the future).

    On a final note, Feynman also said –

    “For instance we could cook up — we’d better not, but we could — a scheme by which we set up a photo cell, and one electron to go through, and if we see it behind hole No. 1 we set off the atomic bomb and start World War III, whereas if we see it behind hole No. 2 we make peace feelers and delay the war a little longer. Then the future of man would be dependent on something which no amount of science can predict. The future is unpredictable.”

    “The future is unpredictable.” Seems clear enough to me.

    Cheers.

    • Mike: Feynman’s thoughts about water in a pipe have nothing to do with the uncertainty principle.

      I wish I had a nickel for every time some so-called “skeptic” quoted Feynman while misinterpreting him.

      • This seems like the typical overly anal interpretation of what someone “believes” instead of trying to understand what is being said.

        If you have a system you consider to be “chaotic”, it just means there isn’t an exact solution. Instead you have a probability range. The more precise you want the answer, the less information you are likely to get. You can call it whatever you like, but the best answer is a range of probability that is as unbiased as possible.

        Most of the issues “skeptics” have are due to the obvious bias and the incredibly moronic memes like, “uncertainty is not your friend.”

      • Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can – if you know anything at all wrong, or possibly wrong – to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition. (Richard Feynman)

      • You owe me $0.05

      • David,

        I would certainly pay you a buck next Tuesday for a link to PJ’s Table of Contents to the original data that he ‘dumped’, today. Any luck?

      • David,
        Do you find it strange that Phil Jones did not even save a copy of just what it was that he ‘dumped’ after he decided on his own to save ‘space’? No records, David. Science! Unbelievable.

  47. “(1) Application of assumptions and judgments to the basic fundamental “Laws of Physics” in order to formulate a calculation problem that is both (a) tractable, and (b) that captures the essence of the physical phenomena and processes important for the intended applications.”

    The dominant physical phenomena and processes reside unknown as shadows on the wall of a cave named ‘Internal Variability’, leaving room only for the myopic solipsism which plays cuckoo and adopts any warming that it can lay its hands on.

  48. Dan, this article is WAAAY too long and misses the central point: it is not the basic physics that is the problem, it’s all the poorly constrained “parameters” for the bits for which we don’t know the basic physics.

    GCMs do NOT have basic physics for evaporation, condensation / cloud formation, precipitation and how infra-red radiation interacts with a well ventilated ocean surface.

    i.e. the key parts of the climate are basically unknown as “basic physics” and summarised as “parameters” that are guesstimated frig factors.

    The whole basic physics theme is a lie because the key processes of climate are not modelled as basic physics. END OF.

    • Bingo.
      Politicians and pundits can claim to be “pro AGW,” “lukewarmer,” or “skeptic.” Science is advanced by opinion, comment, and analysis, but since “the key parts of the climate are basically unknown,” I would expect the technical people to list themselves in the “I don’t know” group.

  49. Chaos is not necessary to make a computation unstable.
    Any unstable manifold is enough to make answers diverge in the long term.
    But I guess instability + periodicity will probably imply some form of stretching/folding and hence chaos…
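
    A minimal illustration of that first point (a toy example of my own, not tied to any climate code):

```python
# Divergence without chaos: the plain unstable linear recurrence x_{n+1} = 2*x_n.
# Two nearby starting points separate exponentially, yet there is no folding,
# no bounded attractor, and nothing "strange" about the dynamics.
x, y = 1.0, 1.0 + 1e-12
for _ in range(50):
    x, y = 2.0 * x, 2.0 * y
print(abs(x - y))   # ~1e3: the initial 1e-12 gap has grown by a factor of 2**50
```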

  50. What an excellent post and set of comments. I congratulate those of you possessing the physics chops required to make an intelligent contribution to the conversation. Not having these chops, I’ll try making my contribution elsewhere.

    In my opinion, Physics (emphasis on the “P”) explains everything. If you don’t believe me, ask God. Unfortunately, all we mortals have at our disposal is physics (emphasis on the “p”). Little-p physics is what Newton had, what Einstein had, and what Planck had. In fact, it’s what all today’s physicists, including you climate scientists, have. Now what all you little-“p”ers need is a bigger pot of humility so you won’t make a mess when you do your business.

  51. Pingback: Engineering the software for understanding climate change | …and Then There's Physics

  52. Watch this before you comment

    • Or maybe not.

      British Met Office, Hadley Centre.

      Would you buy a computer program from them?

      Cheers.

      • 1/3 the defect rate of NASA (53:58)? (“0.1 failures/KLOC” vs. “0.03 faults/KLOC”)

      • AK,

        In climate science computer programs, it seems a “bug” becomes a “minor imperfection”, and over time can turn into a “feature”.

        Apparently, computer science cannot be of further use to climate scientists, because the science is “done”, and the behaviour of the “climate” is well understood.

        Amongst other things, “games” need to be produced to enable decision makers to understand how smart the climate scientists are.

        I thought the current GCMs fulfil that role admirably.

        Cheers.

      • “Would you buy a computer program from them?”

        Ask Judith.

      • “Ask Judith.”

        Another two word meaningless command from Steven Mosher. Why should I? It seems that the BBC gave up believing that the output from the Met Office programs was worth anything at all. Maybe Judith disagrees for all I know. Does it make a difference?

        Many people purchase programs claiming to predict stock market movements, horse racing results, and similar things.

        “A California astrologer said she had been consulted by the Reagans regarding key White House decisions . . . ” I suppose if the White House pays for astrological advice, it must be reliable. Only joking!

        That’s about as silly as believing car manufacturers’ computer programs in relation to fuel consumption, emission levels and so on. People can spend their money any way they like. Toy computer games are an example.

        Cheers.

      • I think UKMO was charging them too much and not providing the service they want (nothing to do with their models). The private sector group that the BBC hired (MeteoGroup) is much better suited to provide the BBC with what they want.

      • curryja,

        From meteogroup –

        “The reason the forecast varies so much between the sources is because different companies look at different weather models. At MeteoGroup we have access to a number of models such as those shown below and then the forecasters analyse these models at the start of a shift;
        ECMWF
        EURO4
        UKMO Global
        KNMI (HiRLAM)
        GFS
        WRF
        It is likely the other companies have access to these or at least some of these models as well, but they are likely to weight their forecast on a specific model. For example, the Met Office spends a lot of money developing their own models, such as EURO4 and the UKMO Global model, so they are likely to use that model on a more regular basis. However, by only looking at one or two models you decrease how accurate your forecast will be as because the weather is a chaotic system; if the starting conditions are wrong then it is likely the forecast will be wrong.”

        Once again, hoping that the miracle of averaging will help. If any individual model was demonstrably superior, the others would be unnecessary.

        I cannot easily find accuracy claims for meteogroup (most commercial forecasters are remarkably coy – surprise, surprise!)

        However,

        “Below are the accuracy percentages of the ‘one to five day’ graphical and text forecasts from the top-10 Google ranked weather forecast providers in Essex.”

        Overall measured meteogroup accuracy? 76.12%. Next best? 75.95%.

        Another location or time? Who knows?

        From ForecastWatch (commercial provider) –

        “Because if you never predict rain, in most parts of the country you will have an accuracy of 70% or so.”

        Naive persistence forecasts do far better than this, of course. In temperate regions, tomorrow=today gives around 85% – depending on acceptable tolerances. It’s a matter of minutes to set up a spreadsheet, download a year’s worth of data for a given locality, and check for yourself.
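        For anyone who would rather not fire up a spreadsheet, here is a minimal sketch of that check; the AR(1) series below is only a placeholder for a downloaded year of observed daily maxima, and the 2-degree tolerance is an assumption.

        program persistence_check
          implicit none
          ! score the naive "tomorrow = today" forecast against a daily series
          integer, parameter :: ndays = 366
          real, parameter :: tol = 2.0      ! acceptable error in degrees C (assumed)
          real :: tmax(ndays), eps
          integer :: i, hits
          ! placeholder data: an AR(1) series mimicking day-to-day persistence;
          ! in practice, read a year of observed maxima for your locality instead
          tmax(1) = 20.0
          do i = 2, ndays
             call random_number(eps)
             tmax(i) = 20.0 + 0.9*(tmax(i-1) - 20.0) + 4.0*(eps - 0.5)
          end do
          hits = 0
          do i = 2, ndays
             if (abs(tmax(i) - tmax(i-1)) <= tol) hits = hits + 1
          end do
          print '(a,f6.1,a)', 'persistence hit rate: ', &
               100.0*real(hits)/real(ndays-1), ' %'
        end program persistence_check

        Swap the synthetic series for real station data and the printed hit rate is the number to put beside the published accuracy percentages.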

        Maybe things will improve in the future.

        Cheers.

      • I have learnt much here. Firstly about modelling non-linear chaotic systems from the early exchanges, and then that Climate Scientists have far too little serious work to do from the later exchanges. I had forgotten how easy life is in academe.

      • curryja,

        I cannot say if this is true or not, but some journalists are obviously of the opinion that the “huge computer model” is a factor in the spectacular forecasting failures by the Met Office.

        “But the chief reason why the Met Office has been getting so many forecasts spectacularly wrong, as reported here ad nauseam, is that all its short, medium and long-term forecasts ultimately derive from the same huge computer model, which is programmed to believe in manmade global warming.”

        As an aside, if there’s not much difference between forecasting products, why not just go for the cheapest? If the web site is sufficiently flashy and impressive, replete with the requisite jargon, nobody will care whether you’re just guessing that tomorrow will be just the same as today.

        Maybe Weatherzone has the answer –

        “Australia’s most accurate weather forecasts whenever and wherever you are.” And it’s FREE! Or you can pay $1.99, and get hi-res icons and dynamic backgrounds!

        But wait – there’s more!

        “AccuWeather is Most Accurate Source of Weather Forecasts and Warnings in the World, Recognized in New Proof of Performance Results”

        And – their “dramatic operations centre” has a 21 foot ceiling! Imagine that! 21 feet of space between the floor and the ceiling.

        I’m quite baffled why organisations such as the BBC don’t use the most accurate source of weather forecasts in the World. Why would anyone settle for second best?

        Please excuse my poor attempt at sarcasm, but it looks as though there are no end of organisations taking advantage of people’s willingness to suspend disbelief at the behest of any itinerant huckster.

        Feel free to delete this. I’m sure some true believers will be shaking with rage, or on the verge of apoplexy, at this point. I find the subject quite amusing, demonstrating yet again the human passion to believe that the future can be reliably ascertained, by consulting the appropriate deities, or their earthly representatives.

        Cheers.

      • Mike,

        My observation as a non-meteorologist looking from the inside is that comments like this one from Christopher Booker could be classified as not even wrong.

        “But the chief reason why the Met Office has been getting so many forecasts spectacularly wrong, as reported here ad nauseam, is that all its short, medium and long-term forecasts ultimately derive from the same huge computer model, which is programmed to believe in manmade global warming.”

        The tuning process of a weather and climate model requires that it provides a stable climate when no forcings are applied. And there are no forcings applied when running it as a weather model.

        The weather model is based on much more recent versions of the underlying Unified Model science and is trialled in many different weather scenarios (i.e. direct comparison with detailed observations taken over a period of a few days).

        The climate model configurations typically take several years to come to fruition due to the need to couple to other components, and are trialled against climatology statistics.

      • Steve,

        You may well be right. I am not sure what “not even wrong” means. I assume that an assertion is right, wrong, or indeterminate.

        I assume you are implying that the journalist in question is wrong, but I don’t know for sure.

        Your assumption that climate (that is, the average of weather) should be “stable” doesn’t seem to be supported by fact. The weather, and hence the climate, seems to be always changing. Chaotic, in fact, as stated by the IPCC.

        Have you any documents relating to the BBC’s reasons for dumping the Met Office? I’m not a fan of conspiracy theories, but you may have evidence to the contrary. Maybe you could provide it, if it wouldn’t be too much of an imposition.

        Cheers.

      • Steve

        I live just a few miles from the Met Office in Exeter, visit there regularly to use their archives and library, and have had meetings with a variety of their scientists. Yes, as you know, they do get things very wrong and seem to fail especially on microclimates, which is what interests most of us.

        That they do get things wrong so frequently is confirmed by checking forecasts against reality, and also by the fact that after 70 years the BBC is ditching their forecasts and using a European group.

        I have a lot of time for the Met Office but they do need to improve their forecasting skills and not rely on the models they have developed.

        tonyb

        “That they do get things wrong so frequently is confirmed by checking forecasts against reality, and also by the fact that after 70 years the BBC is ditching their forecasts and using a European group.”

        As an ex UKMO forecaster, and a regular watcher of forecasts, I am unaware that they do “get things wrong so frequently” – any more than any other Met organisation does, anyway. Their NWP model is second only to ECMWF’s and is the basis of their mesoscale models, and the senior forecasters who review the models before issuing modified fields have a wealth of on-the-bench experience – certainly far more than any other foreign Met organisation.
        MeteoGroup will only review those same models (because the MetO will still sell them to MeteoGroup).
        What the BBC will get is new graphics and webpage design.
        However, there is no doubt in my mind that the decision comes down to money.
        I have a friend who has been a MetO BBC TV forecaster for over 10 years – very popular in his region (actually, country) – and he will be forced to swap organisations to continue in the same job. The BBC is of course assuming that most will. They are probably right.

        “But they do need to improve their forecasting skills and not rely on the models they have developed.”
        I’m sorry Tony, but you betray your ignorance of on-the-bench operational weather forecasting with that comment.
        Models are King. Humans increasingly find it difficult to gainsay them. Yes, there are certain inherent traits to each model that can be corrected by human input, but in the real world it is often very difficult to go against them. They can be astoundingly accurate. I regularly plan my day by noting the arrival of rain to within an hour at my home in Lincs (from a forecast the day before).
        PS: although retired, I still have access to the UKMO’s media briefing pages via their intranet, so I read the senior forecasters’ explanations and thoughts on things, along with graphics and details the public do not see.
        I must also say that I still come across the “you’re always wrong” attitude. The answer is usually that the person never properly “clocks” a forecast in the first place – added to the human fallibility of not understanding it when they do, always remembering the odd bad one from long ago (Fish’s “Hurricane”, anyone?), and never acknowledging the vast majority of good ones.
        PPS: I talk only of weather forecasts.

      • Tony Banton

        I am sure you will have realised that I have a soft spot for the Met office and often defend them here.

        However, we are fooling ourselves if we believe they achieve the degree of accuracy that might be expected from the millions of pounds invested in them over the years. The Met Office have a blind spot for micro climates which we all live in. In my area, on the coast, tourism is vital, and I lose count of the number of days a dire forecast keeps tourists away only for it to turn out nice after all. It also works the other way round, of course, where tourists are lured here on the promise of good weather but the weather then forces them off the beach and into the cafes (fortunately!)

        I also worked with the Environment Agency, and as a specific result of their failure to predict the Boscastle deluge they were asked to go away and develop a model that more accurately predicted these types of events. Such events are especially worrying here in the West Country, where catchment areas may be focused on tight valleys leading down to the sea, past villages and towns where water can back up if the tide is in or there are obstructions on the river.

        We have their app and constantly marvel at how often it is updated and rain becomes sun and vice versa. We would observe that their first forecast is often the best one.

        So, I hold no ill will towards the Met Office at all and appreciate the difficulties associated with our type of climate, but in view of the enormous resources given to them I think it reasonable that their three-day forecasts at least should have a high degree of accuracy.

        tonyb

      • Mike said: “Your assumption that climate (that is, the average of weather) should be “stable” doesn’t seem to be supported by fact. The weather, and hence the climate, seems to be always changing. Chaotic, in fact, as stated by the IPCC.”

        Climate models are designed to produce a plausible but stable climate. That is because one can then estimate the impact of a perturbation to the model. The not unreasonable expectation is that the Earth’s climate would be more stable if we didn’t have random volcanic eruptions, natural emissions and so forth fouling the temperature record.

        Absolutely agree that at the detailed level we cannot really say that the variability in a climate model is good enough to be very confident about detailed small-scale changes in weather under a warming scenario.

        Mike said “Have you any documents relating to the BBC’s reasons for dumping the Met Office? I’m not a fan of conspiracy theories, but you may have evidence to the contrary. ”

        I don’t have any inside knowledge on this at all. The Met Office management say they were dumped at an early stage – before money was discussed in detail.

        Rumours are that given that the BBC were under attack from the government they didn’t fancy giving money to a government organisation.

        The Met Office forecasters are no different from MeteoGroup in that they don’t just rely on the Met Office models. The Met Office model is objectively almost as good as the ECMWF model. Normally we get a lot of letters from senior emergency/police workers to tell us how great we were after a period of severe weather. And there are a lot of high profile commercial customers paying the Met Office several tens of millions per year for forecasts.

        But nobody is perfect.

      • “Once again, hoping that the miracle of averaging will help.”

        You seem to think that all models that start at a time t0 should be in the same state for all times t > t0. But there are good reasons why this doesn’t happen and why averaging is useful.

        One is imprecise knowledge of the initial state. There simply aren’t all the observations that a modeler ideally wants. So they have to make choices about how to handle that — do you interpolate between points where there is observational data, and if so how, etc.

        Second is parametrizations. Models don’t all use the same parametrizations (applications of the laws of physics), and it’s not clear which are better. How should the carbon cycle be described? Aerosols? Both are very complicated, and observational data about both is incomplete.

        In fact, when I’ve talked to modelers I often find they’re not as interested in projecting final states — sure, the public is — as they are in using models as experiments to understand the effect of different assumptions and parametrizations.

        So models aren’t going to end up in the same final state, even in the absence of equations with chaotic results. Averaging is a decent way to capture the spread in models due to their different inputs and assumptions.
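        A toy illustration of where the spread comes from and what the average summarises – each “member” below gets a slightly different initial state and a slightly different parameter value (here just the coefficient of a logistic map, purely for illustration):

        program ensemble_mean
          implicit none
          ! ten members with perturbed initial conditions and perturbed
          ! parameter values; after many steps they disagree, and the mean
          ! and spread summarise the disagreement
          integer, parameter :: nmem = 10, nsteps = 1000
          real(8) :: x(nmem), r(nmem), xbar, spread
          integer :: m, k
          do m = 1, nmem
             x(m) = 0.4d0 + 1.0d-6*m       ! imprecise knowledge of the initial state
             r(m) = 3.70d0 + 0.01d0*m      ! a different "parametrization" per member
          end do
          do k = 1, nsteps
             x = r*x*(1.0d0 - x)           ! advance every member one step
          end do
          xbar   = sum(x)/nmem
          spread = sqrt(sum((x - xbar)**2)/nmem)
          print '(a,2f8.4)', 'ensemble mean and spread: ', xbar, spread
        end program ensemble_mean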

      • “We have their app and constantly marvel at how often it is updated and rain becomes sun and vice versa”

        Tony: You obviously don’t understand how that is generated.
        It is a grid point taken at the closest point to you and it just squirts out what that is saying straight from the model. It may even be from the unmodified fields. If it’s a showery set-up then obviously (?) it will oscillate between sun and rain !!

        “However, we are fooling ourselves if we believe they achieve the degree of accuracy that might be expected from the millions of pounds invested in them over the years. The Met Office have a blind spot for micro climates which we all live in.”

        Let’s just agree to disagree on that, Tony.
        I think they do better than any national Met service with the money they get, especially considering the wages that scientists are paid there.
        Micro-regimes are dealt with via their meso models. Have you ever seen the output of surface wind streamlines?
        Let me tell you, THAT is all you need, along with knowledge of the local topography, to forecast for microclimates.
        Unfortunately, again it comes down to human input. Before I retired (because of the closure of Birmingham WC) I looked after the English Midlands and knew its intricacies. It closed, and the MetO promised its customers and the Government that it could do the job just as well centrally from Exeter. We told MetO management they couldn’t. What happened? They lost customers. Nothing they could do – the Government forced them into it by not funding adequately.
        Same thing with IT. Met IT does not have the staff to properly service customers. They don’t pay enough.
        No, the private peeps like MeteoGroup can afford to pay for the best IT. Why? Because they don’t have to fund a new supercomputer every 6 or 8 years or whatever it is in order to improve NWP – not to mention run an observing and data collection service and be one of the two World Area Forecast Centres (WAFC).
        No, there is no doubt in my mind that the MetO do a bl**dy good job, considering.

    • Right, they “know” the answer so they need to figure out how to sell the “solution.” Two years with soil hydrology off by a couple hundred percent.

      • The ATF program (eventually the F-22 and F-35) ran for 10 years with a fundamental flaw in the ESA radar simulation code. Really big error, but when it came down to it the system effect wasn’t that great.

        Toy example: I have a model of a bank account where I project future balances. For years, the module that projects income from lottery winnings has been horribly off. But in the grand scheme of things it made no sense to correct it, as it was not a big driver of anything.

        Its like this in any major simulation. some knobs have small gains.

        Objectively their code defect density is good.

      • “Its like this in any major simulation. some knobs have small gains.”

        And some don’t. Let’s see: models pretty much uniformly underestimate 30–60 north land amplification but estimate that land use is a negative forcing. One major land-use change since the first beaver pelt hit the market is land hydrology. The Aral Sea is now a desert thanks to poorly planned water use, and how many acres have been overgrazed?

        Thanks to the Dakota pipeline protests I read up on tribe migration, etc. Some speculate that the Comanche left the Dakotas in the early 1700s due to the Little Ice Age, which would pose a problem for hunter-gatherers. Pity they didn’t have a written language. In any case, they made room for the Dakota tribe, which got along with the Comanche very well. The Comanche were by then busy building an empire on the southern plains and didn’t get along with anyone.

      • Why didn’t it pose a problem for the hunter gatherer tribes that remained in the Dakotas throughout the putative Little Ice Age?

      • Most likely it didn’t pose any problem for any hunter-gatherers. The Comanche probably left to conquer an empire after they learned to use horses.

      • Also, the dominant South Dakota tribe, the Arikara, were being pushed out by Sioux from the east. The Sioux had rifles and horses, and had been exposed to western military tactics, which they employed. A large, fortified Arikara village was discovered not far from our ranch. It was the scene of an apparent massacre. The Arikara, a farming and fishing culture, fled north… an odd direction to go in an ice age.

      • “Why didn’t it pose a problem for the hunter gatherer tribes that remained in the Dakotas throughout the putative Little Ice Age?”

        Don’t know that it didn’t.

      • JCH, The main Sioux migration was in the 1800s and the Comanche supposedly left in the early 1700s. The Canadian tribes migrated south so the Comanche were likely pushed out leaving room for tribes that tended to stay put longer.

    • Tony Banton

      The Met Office really need to do better but I agree that others are much worse than them. I go to Austria a lot and look at the Meteogroup forecasts every day. The forecasts are often so far divorced from actual reality that I have to check they aren’t giving me the weather for Australia!

      This business about micro climates is crucial. Whether that can be done well under the current set up is debatable.

      Perhaps the Met Office were given the chop by the BBC because someone there, just as I do, gets irritated by the generalities of the temperature ranges given in forecasts… ‘temperatures today in a range from 14 to 26C’ isn’t really being helpful!

      tonyb

    • Thanks for that link, Mosh’. So far I have got 13 minutes in and realise that this guy has no idea about climate science, but is there telling us what he has been fed by climate scientists.

      Full of the usual crap about “it’s all basic physics”. He clearly has NOT looked at how the code works and does not realise that they do not have “basic physics” equations for the key processes. He also claims that model output is not what our knowledge of climate is based on. Maybe he should read some of the IPCC reports.

  53. Below is a recent measure of dynamical forecast skill.
    The score is the “anomaly correlation” between the forecast and the observed 500 millibar height field.
    “Anomaly correlation” is a forecast verification measure, not to be confused with autocorrelation.
    An anomaly correlation of less than 60 is considered unusable.
    There has been some remarkable improvement since 1981.
    However, forecasts at all durations have plateaued recently, including the ten-day forecast, which is still not useful.

    https://i.guim.co.uk/img/static/sys-images/Guardian/Pix/pictures/2015/1/7/1420635756055/b585fc35-4707-45b4-bf6f-03fa649d17c3-1020×612.jpeg
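    For reference, the anomaly correlation plotted above is just the centred correlation between forecast and observed height anomalies (departures from climatology). A minimal sketch, with random fields standing in for real 500 mb data and no area weighting:

    program anomaly_correlation
      implicit none
      ! synthetic forecast, observed and climatological "height fields";
      ! the forecast is built to capture only part of the observed anomaly
      integer, parameter :: n = 10000
      real(8) :: clim(n), signal(n), err(n), f(n), o(n), fa(n), oa(n), acc
      call random_number(clim)
      call random_number(signal)
      call random_number(err)
      o  = clim + signal                      ! "observed" field
      f  = clim + 0.6d0*signal + 0.4d0*err    ! "forecast" field
      fa = f - clim - sum(f - clim)/n         ! forecast anomaly, mean removed
      oa = o - clim - sum(o - clim)/n         ! observed anomaly, mean removed
      acc = sum(fa*oa)/sqrt(sum(fa*fa)*sum(oa*oa))
      print '(a,f6.3)', 'anomaly correlation: ', acc
    end program anomaly_correlation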

    At what duration beyond 7 days would one expect forecasts to improve?

    If one believes that variations ‘average out’, how does one account then for the fact that one year varies from the next?
    And if one believes that years ‘average out’, how does one account for the fact that one decade varies from the next?
    And if one believes that decades ‘average out’, how does one account for the fact that one century varies from the next?

    • At what duration beyond 7 days would one expect forecasts to improve?

      It doesn’t. It’s not supposed to.

      If one believes that variations ‘average out’, how does one account then for the fact that one year varies from the next?

      They don’t “average out”, any more than one rainy day and one sunny day average out to make two half-rainy days. It’s just a turn of phrase. When we talk about climate as the “average weather”, we mean that we’re talking about the *statistics* of weather, like how many rainy days and sunny days you’ll normally get in a year, and how many super-rainy days, and how often droughts come, etc.

      That’s what distinguishes weather from climate.
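      A toy version of the distinction: in the sketch below each day’s weather is a coin flip (rain with an assumed probability of 0.3), so no individual day is predictable, yet the yearly count of rainy days is quite stable.

      program weather_statistics
        implicit none
        ! unpredictable days, stable statistics: count rainy days per year
        ! when each day rains independently with probability 0.3
        integer, parameter :: ndays = 365, nyears = 30
        integer :: y, d, rainy(nyears)
        real :: u
        do y = 1, nyears
           rainy(y) = 0
           do d = 1, ndays
              call random_number(u)
              if (u < 0.3) rainy(y) = rainy(y) + 1
           end do
        end do
        print '(a,f6.1)', 'mean rainy days per year: ', real(sum(rainy))/nyears
        print '(a,2i5)',  'min / max over 30 years : ', minval(rainy), maxval(rainy)
      end program weather_statistics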

      • Yes, this goes to my point: the statistics of weather aren’t predictable either.

        Below is precipitation by latitude.
        For a given latitude band, days vary – fine.
        But years also vary.
        Decades vary.
        Centuries vary.
        These variations are representative of the chaotic fluctuations of circulation which are not predictable.

        https://www.e-education.psu.edu/meteo469/sites/www.e-education.psu.edu.meteo469/files/lesson02/IPCCfigure3-15-l.gif

      • Yes, this goes to my point: the statistics of weather aren’t predictable either.

        And yet, farmers know to plant in the spring and harvest in the fall. If the statistics of weather wasn’t predictable, that would be impossible.

        January is pretty reliably cooler than July in the northern hemisphere. Some areas are pretty reliably desert, and others are pretty reliably jungle. And the Earth, as a whole, stays within a relatively narrow band of temperature.

        …But the statistics of weather are completely unpredictable?

      • That graph shows that annual precipitation is predictable within +/- 7% for almost the entire globe.

      • That graph shows that annual precipitation is predictable within +/- 7% for almost the entire globe.

        Right – so if the IPCC says where you live, precipitation will either increase, decrease, or be about the same, I’m down with it.

      • Right – so if the IPCC says where you live, precipitation will either increase, decrease, or be about the same, I’m down with it.

        Sounds pretty hard to get it wrong at all.

      • And yet, farmers know to plant in the spring and harvest in the fall. If the statistics of weather wasn’t predictable, that would be impossible.

        January is pretty reliably cooler than July in the northern hemisphere. Some areas are pretty reliably desert, and others are pretty reliably jungle. And the Earth, as a whole, stays within a relatively narrow band of temperature.

        …But the statistics of weather are completely unpredictable?

        Glad you mentioned this. You may not have read above where I tried to distinguish between aspects that are predictable and aspects that are not, and the bounds on each. I don’t think I used the word “completely”, and certainly above I laid out a difference.

        Seasons, as you reference, are determined by some fairly stable phenomena – specifically astronomical orbits. This determines not only the net radiance but also the pole to equator gradient which gives us jet streams.

        Temperature is determined by local change plus advection terms.
        So temperature is partly determined by unpredictable phenomena ( whether there will be more ridges or troughs over a given area ) but also determined by predictable phenomena, in this case seasonal change in incoming solar radiation.

        Precipitation, on the other hand is much more a function of atmospheric motion than global average temperature. Correspondingly, precipitation is much more unpredictable because, within the bounds of fluctuation, atmospheric motion is unpredictable.

      • Benjamin: My friend is a farmer. A real one. If he could find a forecast (better than Farmer’s almanac) that told him merely if it was going to be a wet or dry summer (not even how much) he would pay $10,000 for a forecast. But his money has been safe (though not his crops) because no one can do it yet.

  54. So, if the climate isn’t changing, can someone please explain how the Midwest is now getting monsoon type rainfall?
    50 years ago, a 3″ rain was virtually unheard of. That is my observation gained from living on a stock and grain farm, where we literally lived and died by the weather.
    Nowadays, torrential rains of 3–7+ inches are happening 3–4 times a year, with most of the rain falling in a couple of hours.

    I also remember winters being colder back then, with some nights the temps falling to 15–20 degrees below zero, but not anymore. Now we have Januarys so warm the trees start budding out.

    Last December, mid-Missouri got close to 7″ of rain the week of Christmas, when the weather should have been cold enough to preclude that kind of moisture forming. Instead of snow we got record-breaking floods.

    Something is going on, and neither Obama’s plan to set up a money-grabbing carbon trading scheme nor people arguing back and forth over charts is the answer.

    • Greg

      You will find reading ‘The US Weather Review’ interesting. It started around 1830 and became more formalised around 1850. It lists weather and extreme events by state and often by county. Some of the weather in the 19th century was extraordinary.

      tonyb

    • Greg

      You may be interested in this snippet I took from the US Weather Review when I was researching climate at the UK Met Office library. It is just a snippet in time, of course. I don’t know your geography, but this one specifically mentions Missouri, as you did.

      ‘Feb 1888 . In the gulf states and Missouri valley, Rocky mountain and Pacific coast districts-except Southern California where the temperature was nearly normal-the month was decidedly warmer than the average, the excess over normal temperatures amounting to more than 4degrees f over the greater part of the area embraced by the districts named, and ranging from 6 to 10f in the northeast and central Rocky mountains region, Helena mountain being 11f above normal.’

      tonyb

    • can someone please explain how the Midwest is now getting monsoon type rainfall?

      The ocean cycles push the jet stream around, altering the surface track of all the tropical water vapor as it’s thrown off towards the poles to cool.

    • Last December, mid-Missouri got close to 7″ of rain the week of Christmas, when the weather should have been cold enough to preclude that kind of moisture forming.

      Of course, the moisture didn’t form – it was advected (most likely from the Gulf of Mexico) as part of the storm system which also provided the lift for the precipitation. There is always plenty of moisture available over the Gulf to soak the US if the appropriate circulation exists. That goes to the topic of the post – unpredictable fluid flow gives rise to variations in weather.

      Now, December 2015 was the peak of an El Nino – a fluctuation of circulation. See if you can spot another El Nino in this plot of yearly daily maximum precipitation (averaged over all reporting stations in the US):
      https://turbulenteddies.files.wordpress.com/2016/07/ghcn_conus_extreme_precipitation.png

      The 82/83 El Nino provided the all-time CONUS flooding rains, though there is a spike for most of the El Nino years. There is a trend, though still shy of significance, in 10 cm daily rains. Trends in higher daily amounts are smaller and not significant at all.

  55. Previously David Appell said, as part of a sub-thread above:

    “The S-B Law is very much not a normal distribution (not a “Bell Curve”).”

    I didn’t say it was. Nor did the OP. He said, if I understood correctly, that F=ma gives, for a given a, a range of values of F that are Gaussian distributed.

    Which isn’t true. Nor, for a given temperature T, does the S-B Law give a range of emission intensities. There is a 1-1 relationship.
    [ Bold mine ]

    Now he says he agrees with Benjamin Winchester

    here.

    Young’s Modulus is a paramaterization of an isotropic materials property. Go deeper, and you get the anisotropic properties. Then you get them as a function of time. Then you look at how elasticity also varies microscopically, at grain boundaries, at high-stress regimes, etc. Young’s Modulus is a simplification of all of these nitty-gritty details.

    The Ideal Gas Law is another paramaterization, coming from treating gas particles under conditions of elastic collisions. (Which is why it breaks down as you get close to phase transitions).

    And, yes, Ohm’s Law is another one, a simplification of the laws surrounding electron scattering at quantum mechanical levels. You can get there from the Drude Model, if I recall right, which is itself a simplification.

    In the case of Young’s modulus, this interpretation seems to indicate that, given the strain, one can vary Young’s modulus over a range of values to get whatever stress you need.

    In the case of Ohm’s Law and the Ideal Gas Law, it appears that the entire law/model is a parameterization. Apparently, you can vary any of the quantities appearing in the law/model – including the actual physical properties: gas constant, electrical resistance, temperature, pressure, voltage, current, density – to get whatever other quantities you need.

    So, how can F=ma not be a parameterization? Why can’t we go the whole nine yards and declare that conservation of mass, conservation of energy, and every single material property, including all thermodynamic state, thermo-physical and transport properties, are all parameterizations?

    These simple examples of Young’s modulus, Ohm’s law, and the Ideal Gas equation of state present an opportunity to briefly mention some aspects of parameterizations.

    In the case of these simple example models, one would never consider varying material properties in order to determine the state of the material. We look up the material properties, plug them into the equations along with other known quantities to determine an unknown state property.

    Some parameterizations introduced when developing tractable science or engineering problems do in fact change the fundamental laws. Replacing gradients in driving potentials with bulk-to-bulk algebraic empirical correlations, for instance, trades a material property, and the associated gradients, for coefficients that represent states the materials have attained. In the case of energy exchange, the thermal conductivity and temperature gradient are replaced by a heat transfer coefficient and a bulk-to-bulk temperature difference, along with some macro-scale representation of the geometry.
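    Schematically, for the energy-exchange example (a sketch of the general idea, not any particular code’s formulation):

    $$ q'' = -k\,\frac{\partial T}{\partial y}\bigg|_{\mathrm{wall}} \quad\longrightarrow\quad q'' \approx h\,\big(T_{\mathrm{wall}} - T_{\mathrm{bulk}}\big), \qquad h = \frac{\mathrm{Nu}\,k}{L}, $$

    where the Nusselt number Nu comes from an empirical correlation and L is a macro-scale length; the local conductivity and gradient on the left have been traded for a bulk coefficient on the right.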

    Turbulence is another example. The simplest turbulence models replace a material property, the viscosity, with a model of what is usually called the turbulent viscosity. Some of these are rough, mechanistic models developed from idealizations of what turbulence is. Others consider in more detail the micro-scale physical phenomena and processes that are taken to characterize turbulence.
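    In the same spirit, the simplest eddy-viscosity closures swap the molecular viscosity for a modelled turbulent viscosity, e.g. the mixing-length form

    $$ \tau_{\mathrm{turb}} = \rho\,\nu_t\,\frac{\partial \bar{u}}{\partial y}, \qquad \nu_t = \ell_m^2\left|\frac{\partial \bar{u}}{\partial y}\right|, $$

    with the mixing length prescribed empirically for each class of flow.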

    GCMs do not use The Laws of Physics. Instead, models of The Laws of Physics are used. Actually, discrete approximations to The Laws are used, and these are approximately “solved” by numerical methods.

    • Dan Hughes:

      Thanks for mentioning turbulence. Viscosity is another parametrization.

      You wrote: “So, how can F=ma not be a parameterization? Why can’t we go the whole nine yards and declare that conservation of mass, conservation of energy, and every single material property, including all thermodynamic state, thermo-physical and transport properties, are all parameterizations?”

      If you are trying to say that the laws of physics are themselves models, I’m fine with that. In fact, that’s often what physicists call them: “the Standard Model,” “the quark model,” etc., often before the model gets established with a more formal name, like “quantum chromodynamics.” One of Steven Weinberg’s most famous and useful papers was titled “A Model of Leptons.”

      You wrote: “In the case of these simple example models, one would never consider varying material properties in order to determine the state of the material. We look up the material properties, plug them into the equations along with other known quantities to determine an unknown state property.”

      I’m not talking about “varying material properties.” I’m talking about “looking up material properties.” Those “properties” that you look up – resistance, or the Young’s modulus, or the viscosity – are *parametrizations.* If you want to find the current I flowing through a wire with a potential difference V, you don’t solve the 10^N electron scattering equations of quantum mechanics to determine R, you (often) use a parametrization like I(V)=V/R, taking R as a constant and looking up a measured value. Often that suffices. Often it does not. But that’s what parametrizations are – simplified expressions of complex interactions.
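      To make that concrete, here is a sketch comparing the tabulated (parametrized) conductivity of copper with a Drude-type estimate built from microscopic inputs; the relaxation time below is an assumed textbook-level value, and in practice it is itself usually backed out from the measured conductivity, which is rather the point.

      program drude_check
        implicit none
        ! Drude estimate of copper's conductivity, sigma = n*e**2*tau/m,
        ! next to the handbook value one would normally just look up
        real(8), parameter :: n   = 8.5d28      ! conduction-electron density (1/m**3)
        real(8), parameter :: e   = 1.602d-19   ! electron charge (C)
        real(8), parameter :: m_e = 9.109d-31   ! electron mass (kg)
        real(8), parameter :: tau = 2.5d-14     ! assumed relaxation time (s)
        real(8), parameter :: sigma_handbook = 5.96d7   ! tabulated value (S/m)
        real(8) :: sigma_drude
        sigma_drude = n*e*e*tau/m_e
        print '(a,es10.3)', 'Drude estimate (S/m): ', sigma_drude
        print '(a,es10.3)', 'handbook value (S/m): ', sigma_handbook
      end program drude_check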

      You wrote: “GCMs do not use The Laws of Physics. Instead, models of The Laws of Physics are used. Actually, discrete approximations to The Laws are used, and these are approximately “solved” by numerical methods.”

      You’re talking about how to solve the equations that express the laws of physics. I’m talking about the laws of physics, which are taken as the starting points.

      • You’re talking about how to solve the equations that express the laws of physics.

        Yes, that’s the problem!

        The physics and equations which describe the physics are accurate and valid ( neglecting scale and parameterizations ).

        Problem: the solutions to the equations mandate unpredictability.

        Now, that applies less to heat content of the atmosphere
        ( aka global temperature )
        Why?
        Because the heat content of the atmosphere as a whole is mostly INdependent of fluctuations in atmospheric motion, being a reasonably soluble input–output problem.

        But global average temperature isn’t very meaningful.
        Extremes of temperature, changes in precipitation, storms, etc. are much more dependent on motion, and are much more unpredictable.
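        A sketch of that “input minus output” view, with an assumed effective emissivity and heat capacity standing in for everything a real model would parameterize:

        program energy_balance
          implicit none
          ! zero-dimensional global energy budget: absorbed solar in,
          ! infrared out, stepped forward until the temperature settles
          real(8), parameter :: sigma  = 5.67d-8    ! Stefan-Boltzmann constant (W m-2 K-4)
          real(8), parameter :: s0     = 1361.0d0   ! solar constant (W m-2)
          real(8), parameter :: albedo = 0.30d0
          real(8), parameter :: emiss  = 0.61d0     ! assumed effective emissivity
          real(8), parameter :: cap    = 4.0d8      ! assumed heat capacity (J m-2 K-1)
          real(8), parameter :: dt     = 86400.0d0  ! one-day time step (s)
          real(8) :: t
          integer :: i
          t = 270.0d0                               ! arbitrary starting temperature
          do i = 1, 365*50                          ! integrate 50 years
             t = t + dt*( s0*(1.0d0 - albedo)/4.0d0 - emiss*sigma*t**4 )/cap
          end do
          print '(a,f7.2,a)', 'settled temperature: ', t, ' K'
        end program energy_balance

        Nothing in that budget says anything about circulation, which is the point: the bulk heat content is the comparatively tractable part of the problem.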

      • “Extremes of temperature, changes in precipitation, storms, etc. are much more dependent on motion, and are much more unpredictable.”

        But not the averages.

        Consider a swimming pool. It’s very difficult to calculate T(x,y,z,t) — there is sunshine and wind and fluctuations and cool spots and warm spots and fluid motion.

        But it’s much easier to calculate the average temperature, given sunshine and wind.

        Or, look at Maxwell-Boltzmann’s description of a gas. Difficult-to-impossible to calculate the velocity v(t) of each particle. But straightforward to calculate the average velocity.
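        For the record, that is exactly what the Maxwell–Boltzmann distribution delivers: no hope for any single molecule, but a clean closed form for the mean, e.g.

        $$ \langle v \rangle = \sqrt{\frac{8\,k_B T}{\pi m}}, $$

        so the average speed follows from the temperature alone, while the individual particle velocities remain hopeless.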

      David and Dan, turbulence is of course where simulations are made or broken. Turbulence models are not really based on the laws of physics but on assumed relationships and fitting test data. It is amazing that small viscous forces can change the global level of forces by a factor of 2 but it does. This, it seems to me, is where the laws of physics can’t be solved directly, and that way of explaining it can be misleading.

      • “It is amazing that small viscous forces can change the global level of forces by a factor of 2 but it does.”

        Can you explain more about this? What do you mean by “global level of forces?” Can you point to something I can read? Thanks.

      • But not the averages.

        Perhaps for well mixed phenomena.

        So, perhaps global average temperature is predictable ( with a lesser unpredictable portion from circulation variation ).

        But most events within the atmosphere, including extreme temperatures, precipitation, storms, winds, clouds, etc., are not well mixed but are rather mostly the result of the fluctuations of the circulation.

        This leaves the IPCC with global average temperature rise.
        But that’s not very scary, so they gravitate toward extreme scenarios and predicting phenomena they know are not predictable.

        Well, global average temperature rise may not be very scary because it’s not very harmful.

      • “This leaves the IPCC with global average temperature rise.
        But that’s not very scary, so they gravitate toward extreme scenarios and predicting phenomena they know are not predictable.”

        Such as?

        “Well, global average temperature rise may not be very scary because it’s not very harmful.”

        No? Says what science?

      • “This leaves the IPCC with global average temperature rise.
        But that’s not very scary, so they gravitate toward extreme scenarios and predicting phenomena they know are not predictable.”

        Such as?

        Extreme temperatures.
        Precipitation/Drought
        Storms generally ( tropical cyclones more specifically ).
        Windiness
        Cloudiness

        “Well, global average temperature rise may not be very scary because it’s not very harmful.”

        No? Says what science?

        The magnitude of natural variation for any given locale being so much greater than the global trend for one.

      • Eddie wrote:”The magnitude of natural variation for any given locale being so much greater than the global trend for one.”

        The interglacial-glacial difference of the recent ice ages is about 8 C (14 F).

        Yesterday the high-low difference where I live in Oregon was 38 F.

        So you think that what happened in my backyard last night is more significant than the changes during the ice ages?

      • Eddie wrote:
        “Extreme temperatures.
        Precipitation/Drought
        Storms generally ( tropical cyclones more specifically ).
        Windiness
        Cloudiness”

        Where exactly are the IPCC’s predictions for these?

  56. GCMs do not use The Laws of Physics. Instead, models of The Laws of Physics are used.

    The Laws of Physics are models of the actual physics.

    Stefan-Boltzmann Law? A model. Ideal Gas Law? A model. The Law of Gravity? A model.

    And any model that isn’t exactly first principles is a parameterization. At this point, that’s anything above fundamental particles and their interactions.

    • Benjamin: Perhaps it would be better to say the GCMs use simplified versions of the laws of physics and, in many cases, very approximate numerical methods for the dynamic aspects. Is that better?

  57. Dan Hughes:

    As I recall it was you who pointed out that GISS/E had the specific heat
    of water vapour set to zero, so you have creds for attempting
    to get to the source of the problems. Thank you for your work.

    What comes to my mind is that the GCMs are vast exercises in
    “semi-empirical physics”, analogous to the semi-empirical quantum
    mechanics used successfully in chemistry.

    “Semi-empirical” means that complicated relations which cannot
    be evaluated exactly get approximated by an assumed value or
    relation with arbitrary parameters chosen to best match whatever experimental data is available. While more general than pure curve fitting, semi-empirical methods are still only as good as their domain of verification.

    Unfortunately practitioners in semi-empiricism can come to believe that their model is in fact reality, which seems to have happened in the case of the GCM’s. And to the extent that the model has been tuned to produce the available data, it does accurately reflect the available data, whence the belief in its reality. But like any statistical exercise the model may or may not be predictive.

    Semi-empirical methods can be terribly wrong, or near perfect approximations, depending on the available math and the skill of the implementer. In the end, they are still statistics, but with a dimensionality guided by physics.
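    As a toy illustration of the procedure (nothing to do with any particular GCM), here is a one-parameter “semi-empirical” model fitted to assumed measurements by least squares; inside the fitted range it looks excellent, which by itself says nothing about how it behaves outside that range.

    program semi_empirical_fit
      implicit none
      ! fit the single free coefficient c in the assumed model y = c*x**2
      ! to "experimental" data, using the closed-form least-squares solution
      integer, parameter :: n = 5
      real(8) :: x(n), y(n), c
      x = (/ 1.0d0, 2.0d0, 3.0d0, 4.0d0, 5.0d0 /)
      y = (/ 2.1d0, 8.3d0, 17.8d0, 32.5d0, 49.6d0 /)   ! assumed measurements
      c = sum(y*x**2) / sum(x**4)
      print '(a,f7.3)',  'fitted coefficient c: ', c
      print '(a,5f8.2)', 'model at the data   : ', c*x**2
    end program semi_empirical_fit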

    • > Unfortunately practitioners in semi-empiricism can come to believe that their model is in fact reality, which seems to have happened in the case of the GCM’s.

      A quote would be nice to substantiate that semi-empirical mind probing.

    • 4kx3, that was back in 2009-2010. It’s still there.

      The source for the version of the GISS ModelE code used for AR5 simulations in MODULE CONSTANT still contains the same statement:

      !@param shv specific heat of water vapour (const. pres.) (J/kg C)
      c**** shv is currently assumed to be zero to aid energy conservation in
      c**** the atmosphere. Once the heat content associated with water
      c**** vapour is included, this can be set to the standard value
      c**** Literature values are 1911 (Arakawa), 1952 (Wallace and Hobbs)
      c**** Smithsonian Met Tables = 4*rvap + delta = 1858--1869 ????
      c     real*8,parameter :: shv = 4.*rvap  ????
            real*8,parameter :: shv = 0.
      

      The file linked above contains a directory/folder named Model. A global search of that directory/folder for the word ‘conservation’ or ‘energy conservation’ will give several hits related to mass and energy conservation. It appears that the numerical methods used in ModelE do not inherently conserve mass and energy. There are statements that will stop execution of the code if diagnostic checks on conservation fall outside specific ranges. There are also statements that attempt to ‘correct’ or ‘force’ conservation.

      My experience with numerical methods that inherently conserve mass and energy has been that such checks and ‘corrections’ are not necessary; we do not even bother making such diagnostic checks. How to make ‘corrections’ to ‘force’ conservation is, of course, somewhat ad hoc. For example, in the trcadv.f routine:

      4    continue
      
            call esmf_bcast(ogrid, bfore)
            call esmf_bcast(ogrid, after)
      c
            if (bfore.ne.0.)
           . write (lp,'(a,1p,3e14.6,e11.1)') 'fct3d conservation:',
           .  bfore,after,after-bfore,(after-bfore)/bfore
            q=1.
            if (after.ne.0.) q=bfore/after
            write (lp,'(a,f11.6)') 'fct3d: multiply tracer field by',q
      ccc   if (q.gt.1.1 .or. q.lt..9) stop '(excessive nonconservation)'
            if (q.gt.2.0 .or. q.lt..5) stop '(excessive nonconservation)'
      c
      

      Corrections to my interpretation will be appreciated.

      • Dan, you are correct that all these flux corrections to “fix” the lack of discrete conservation are very problematic. Each correction has parameters to adjust, of course. I actually feel sorry for those who build, maintain and “validate” GCMs. Doing meaningful parameter studies is a Herculean task. There is a veritable mountain of work to do. There is far too much “running” of the codes by climate scientists to “study” various effects when the computer time could easily be spent on actual validation. Most of these climate-effect studies are in my view a huge waste of resources. But “running the code” is easier than trying to advance theoretical understanding or working on better data.

      • dpy: Most GCMs no longer use flux corrections.

    • oops, the lines got automatically formatted to a shorter length. Really messes up the coding.

  58. David Appell admits that the models make approximations and that they are still working to improve their codes (well, good for them). The point is not that all physical relations (Ohm’s law for example) are approximations, but that many such approximations (laws of physics) when applied to simple problems give highly accurate results compared to highly accurate measurements. But in complex settings, many things can go wrong with our approximations, discretizations, parameter estimates, numerical methods, surface data characterizations, and forcing data (to make an incomplete list). As a simple example, it is possible to get quite good predictability for fracturing of a uniform material under strain, but no-one can yet predict earthquakes. We cannot characterize the materials or their spatial makeup at all scales sufficiently to do the calculations. The climate system is like the earthquake problem: you cannot assume that just because you start out with known physics that you can get a useful result. Newton’s laws are pretty good but you still can’t predict the path of a feather dropped off a roof.

    • you cannot assume that just because you start out with known physics that you can get a useful result

      And after 15 years supporting simulators one of the most difficult tasks was figuring out what the simulator was really telling you and why.

    • > The climate system is like the earthquake problem: […]

      What’s the earthquake problem?

    • Craig wrote:
      “The climate system is like the earthquake problem: you cannot assume that just because you start out with known physics that you can get a useful result.”

      You can’t assume that, sure, but you can compare GCMs outputs to what’s happened, such as

      http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

      or to paleoclimate information, such as

      Hansen, J., M. Sato, P. Hearty, R. Ruedy, et al., 2016: Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2 C global warming could be dangerous. Atmos. Chem. Phys., 16, 3761-3812. doi:10.5194/acp-16-3761-2016.

      • My reading of such tests is that it is an eye-of-the-beholder problem. Some outputs of GCMs don’t look too bad. Others, pretty bad, such as precipitation, the ITCZ, Antarctic snow, the jet stream, the mid-trop tropical hot spot. Do these “not matter”? They matter to me.
        And by the way, Hansen’s understanding of paleoclimates sucks and he is quick to make excuses for why the models don’t do paleoclimate very well.

    • Craig wrote:
      “As a simple example, it is possible to get quite good predictability for fracturing of a uniform material under strain, but no-one can yet predict earthquakes.”

        We have significantly better information about recent climate parameters than we do about the geologic parameters in the deep Earth that are relevant to earthquakes.

      In fact we have essentially *no* data on the latter, let alone real-time or recent data.

      • The problems are similar in that known laws do not guarantee that you can solve the problem. The climate models use very poor input for ocean temperature distribution, an important initial condition.

      • Craig wrote:
        “The problems are similar in that known laws do not guarantee that you can solve the problem. The climate models use very poor input for ocean temperature distribution, an important initial condition.”

        Climate models don’t solve an initial value problem. You should know that.

        But there is essentially NO information about subsurface geologic and tectonic conditions.

  59. It seems to me that the defenders of the models on this thread (brandon gates, Nick Stokes, ATTP, etc) are not defending as nicely as they intend. They keep mentioning how the models do not use the same physics, cannot make predictions, the modelers are still working on the numerical methods, ENSO is not predictable, etc. Is this supposed to give the public confidence when told to shut down their coal plants? Just because a billion $ went into the models and they are doing the best they can does not mean I must believe the models. Where is the sort of testing that Dan mentions (numerical convergence for ideal problems, etc)? I have drawers full of papers showing GCM test results and these results are mostly equivocal or Rorschach test-like. Sometimes just awful. Doesn’t that bother you?

    • does not mean I must believe the models.

      Of course, you can believe whatever you like (this is obvious, right?).

    • “It seems to me that the defenders of the models on this thread (brandon gates, Nick Stokes, ATTP, etc) are not defending as nicely as they intend. They keep mentioning how the models do not use the same physics, cannot make predictions, the modelers are still working on the numerical methods, ENSO is not predictable, etc. Is this supposed to give the public confidence when told to shut down their coal plants? Just because a billion $ went into the models and they are doing the best they can does not mean I must believe the models. Where is the sort of testing that Dan mentions (numerical convergence for ideal problems, etc)? I have drawers full of papers showing GCM test results and these results are mostly equivocal or Rorschach test-like. Sometimes just awful. Doesn’t that bother you?

      #######################

      Does not bother me in the least. For the most part Policy has NO NEED WHATSOEVER for results from GCMS.

      Very simply: The best science tells us.

      A) doubling c02 will increase temps by 3C . This is NOT from GCMs
      but rather from Paleo and Observational studies, GCMs merely
      confirm this or are at best consistent with it.
      B) the estimates that the temperature will increase by 3C, is REASON enough, to take policy decisions that put an early end to the use of coal
      as a fuel of choice. On top of the climate risk, we have the risk
      to health ( from Pollution, namely pm25). Those two risks ALONE
      can justify a policy that puts an end to coal sooner rather than later
      and justifes policies that favor low risk ( warming risk) technologies such
      as Nuclear.

      In short, we knew enough about the risks, without considering ANY input from a GCM, to justify policies that favor non-polluting technologies like nuclear over coal. You don’t need a GCM to tell you that switching from coal to nuclear and NG is a lower-risk path. The sooner this happens, the better.

      • Mosh: you cannot get a 3deg C warming from doubling without using the models to calculate sensitivity. Show a citation. Papers that use data (not GCMs) get a much lower sensitivity. I’ve published on this personally.

      • Does not bother me in the least. For the most part Policy has NO NEED WHATSOEVER for results from GCMS.

        Very simply: The best science tells us.

        A) doubling c02 will increase temps by 3C . This is NOT from GCMs
        but rather from Paleo and Observational studies, GCMs merely
        confirm this or are at best consistent with it.

        Let’s go with a little more defensible 2C ( early 1D Manabe ) and also realize that much of that is buffered for centuries by the oceans never to be realized all at once.

        But what does a global 1.5C rise mean about actual climate?

        Global mean temperature doesn’t tell us much about the things that matter. And it’s possible that temperature rise coincides with less extreme climate.

      • The global average daily temperature rise is 17.8F, and the average solar input for flat ground at the stations whose numbers were measured, for an average Sun (average of 1979–2014 TSI), is 3740.4 Wh/m^2, which works out to 0.0047F per Wh/m^2.
        And the seasonal change for the continental US is ~0.0002F per Wh/m^2.

      • “Let’s go with a little more defensible 2C ( early 1D Manabe )”

        A model from 1980 is more defensible than a model from today? I’d like to see that argument.

        ECS = 2 C = 3.6 F is already bad enough.

      • Craig wrote:
        “you cannot get a 3deg C warming from doubling without using the models to calculate sensitivity. Show a citation. Papers that use data (not GCMs) get a much lower sensitivity.”

        You cannot calculate climate sensitivity using 1850-2015 data because the information on manmade aerosols is not nearly good enough.

      • A model from 1980 is more defensible than a model from today?

        Absolutely – but try from the 1960s instead.

        Manabe was constrained by compute resources, and modeling was still in its infancy, but he might have thought more about this than those rushing off to make runs.

        For the 1D model, Manabe used a reasonable global approximation. Not much has changed with estimates of either CO2 forcing or the water vapour feedback since then.

        In fact, the large range of global mean temperature estimates that the IPCC pronounces, and the fact that Manabe’s estimates lie within that range, prove that GCMs have largely been a waste of time and money.

        They’ve been a waste because the thing they were employed to do beyond a 1D model is provide prediction of how atmospheric motion might change things. But since motion is not predictable, applying GCMs to the problem obscures the relative certainty of radiative forcing with the uncertainty of circulation. But that uncertainty is there whether or not the CO2 changes.

      • page 8. There’s also this.

        “Paleo estimates” are not valid – for one thing they’re not observed, but for another, they don’t compare.

        The LGM had
        1.) Mountains of kilometer deep ice which changed the atmospheric circulation
        2.) The ice albedo of such times
        3.) The orbital distortions of solar radiance falling differently across the Earth.
        4.) lower sea levels/higher land changing circulation

        The HCO had quite different solar radiance also.

        Further, they promote a misunderstanding of climate.

        The ice ages didn’t occur because of changes in global temperature.
        The ice ages occurred because of regional insolation changes over the ice accumulation zones.
        It is more accurate to say the ice ages caused global temperature change.

        I applied a radiative model to a year’s monthly snapshots of atmospheric data for given scenarios (Eemian, LGM, HCO, PI, 2010, and 2xCO2). The atmospheres are the same, but with snow, ice, land, and orbit appropriate for each scenario.

        Below is how they compare.
        The difference between BLACK (2010) and GREEN (hypothetical 800 ppm CO2, with reduced global sea ice), compared to the difference between BLACK and the other scenarios, is instructive.

        Paleo events were quite dynamic across seasons, and generally of much greater range. It was the ice accumulation( and subsequent ablation ) that mattered, not so much the global average temperature.

        https://turbulenteddies.files.wordpress.com/2016/05/pc_net_rad_all_months2.gif

      • Turbulent Eddie wrote:
        “They’ve been a waste because the thing they were employed to do beyond a 1D model is provide prediction of how atmospheric motion might change things”

        Where is the evidence any of that matters in the big picture?

        If it does, why are the climate patterns of the Quaternary so regular?

      • Where is the evidence any of that matters in the big picture?

        The GCMs have made the big picture blurry.

      • “The GCMs have made the big picture blurry.”

        Another nice, meaningless, utterly useless claim.

        Congratulations.

      • “Very simply: The best science tells us.”

        “The best science tells us”, that’s hilarious. Also you probably mean PM2.5 not PM25.

        The current states of pollution and GHG studies are so overrun with bias they don’t tell you anything.

        1. The argument that the CO2 level will exceed 500 ppm is so funny it is almost absurd.

        2. The cost of PM2.5 from coal in the US is trivial. It is an invented problem. There are a few problem legacy plants. If 1/100th of the money wasted on renewables and global-warming scares were dedicated to upgrading those plants, the problem would be solved. Warmunists aren't interested in fixing problems; they are interested in getting their own way and will use whatever scare tactics work.

        Found this:
        http://www.health.utah.gov/utahair/pollutants/PM/PM2.5_sources.png

        http://www.health.utah.gov/utahair/pollutants/PM/PM10_sources.png

        Couldn’t find a pie chart of all US PM2.5 sources… But coal might not even be in the top 3.

        The claim of big gains from reducing a tiny fraction of PM2.5 (which is mostly dust from various sources) indicates politically motivated study writing. And it is based on bioscience studies, which are mostly wrong (AMGEN 89%) anyway.

        Let's look at the claims:
        1. “doubling CO2 will increase temps by 3C. This is NOT from GCMs.”
        Since it isn't true, they probably made it up. This would make Mosher correct. “rather from Paleo and Observational studies” – more humor from Mosher. Field measurement says 0.64°C for a doubling, and direct CO2 forcing is estimated (probably from lab studies) at 1°C.

        2. the estimates that the temperature will increase by 3C, is REASON enough, to take policy decisions that put an early end to the use of coal
        More humor gold from Mosher. This is like warning people to stay indoors because the sun will come up tomorrow. 3°C? So what? It isn’t going to hurt crops (Ag studies show heat tolerance increasing and soybean yield increasing fastest at the equator). It isn’t clear that 3°C would be a problem today and it certainly won’t be in a couple of decades.

        3. On top of the climate risk,
        And then we do the bait and switch. The warmunists just keep throwing things against the wall and hoping something will stick. All that money wasted on warmunism must have some benefit, eh?

    • > are not defending as nicely as they intend.

      The intention-probing problem may be more like the earthquake problem than the climate system, Craig.

    • Craig, you don’t have to be able to predict ENSOs to project the long-term climate state. Because ENSOs redistribute heat, they don’t create it. GHGs “create” new heat in the climate system.

      Calculating the final state of climate is mostly a matter of figuring out how much energy is added to the system (viz. conservation of energy), and how much of it is distributed to near the surface.
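
      As a rough sketch of that energy-budget arithmetic: equilibrium warming is roughly the added forcing divided by a net feedback parameter. The feedback value below is purely illustrative, not a claim about its true magnitude.

```python
# Toy energy-budget calculation: forcing from a CO2 increase divided by an
# assumed net feedback parameter gives an equilibrium temperature change.
import math

def co2_forcing(c, c0=280.0):
    """Approximate radiative forcing (W/m^2) for CO2 at c ppm relative to c0 ppm."""
    return 5.35 * math.log(c / c0)   # widely used simplified expression

lam = 1.2                    # assumed net feedback parameter, W/m^2 per K (illustrative)
dF = co2_forcing(560.0)      # doubled CO2: about 3.7 W/m^2

print(f"2xCO2 forcing: {dF:.2f} W/m^2")
print(f"Equilibrium warming: {dF / lam:.2f} K for lambda = {lam} W/m^2/K")
```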

      • Craig, you don’t have to be able to predict ENSOs to project the long-term climate state. Because ENSOs redistribute heat, they don’t create it. GHGs “create” new heat in the climate system.

        You do need to predict ENSO events, and the statistics of ENSO events, if you want to have any basis for predicting whether California precipitation will be higher, lower, or about the same as the long-term average.

        You do need to predict El Ninos if you want to predict flooding in the US.

        You do need to predict ENSO events if you want to predict Atlantic hurricane frequency.

        You do need to predict ENSO events if you want to predict fire and drought in Australia.

        You do need to predict La Nina events if you want to predict Dust Bowl type events.

        But you can’t predict ENSO events, so you also can’t predict any of these other phenomena.

      • The Earth is a heat engine. It redistributes heat from the tropics to the poles via ocean and air circulation and the heat is lost more readily that way. Does ENSO “create” heat? Of course not, but by changing how it is distributed in the ocean, an el Nino can create a spike (like last year, remember?). Long term effect? Not sure.
        My point about ENSO was that it was something the models don’t do. How many things can they fail to do and still be “just physics”?

      • Eddie, GCMs don’t predict any of the things you mentioned. Maybe some downscaled regional models are now trying.

        Is anyone even claiming it’s now possible to “predict flooding in the US?”

      • Craig: how is ENSO relevant to a calculation of ECS?

        No modelers are trying to predict the exact average global surface temperature in 2100, 2100.5, 2101, 2101.5 etc. They are trying to project (not predict) the long-term average of global surface temperatures.

      • Eddie, GCMs don’t predict any of the things you mentioned. Maybe some downscaled regional models are now trying.

        Is anyone even claiming it’s now possible to “predict flooding in the US?”

        Gosh yes.

        You won’t have to look far to find all sorts of people attributing the Louisiana floods, or the Hawaii hurricanes, or Sandy, or…
        to global warming.

        There is no basis for this.

      • Eddie wrote:
        “You won’t have to look far to find all sorts of people attributing the Louisiana floods, or the Hawaii hurricanes….”

        You are misreading — the attributions are probabilistic. The Guardian just had a good article on this:

        https://www.theguardian.com/environment/planet-oz/2016/sep/15/was-that-climate-change-scientists-are-getting-faster-at-linking-extreme-weather-to-warming

      • The IPCC is maddening because they’ll coyly admit:
        “there is large spatial variability [ in precip extremes]”
        “There is limited evidence of changes in extremes…”
        “no significant observed trends in global tropical cyclone frequency”
        “No robust trends in annual numbers of tropical storms, hurricanes and major hurricanes counts have been identified over the past 100 years in the North Atlantic basin”
        “lack of evidence and thus low confidence regarding the sign of trend in the magnitude and/or frequency of floods on a global scale”
        “In summary, there is low confidence in observed trends in small-scale severe weather phenomena”
        “not enough evidence at present to suggest more than low confidence in a global-scale observed trend in drought or dryness (lack of rainfall)”
        “Based on updated studies, AR4 conclusions regarding global increasing trends in drought since the 1970s were probably overstated.”
        “In summary, confidence in large scale changes in the intensity of extreme extratropical cyclones since 1900 is low”
        (from RPJr).

        But they then go on to intimate that extreme weather is increasing.

      • You are misreading — the attributions are probabilistic.
        If that means made up because there’s no basis, then fine.

      • Eddie, and your comments on my backyard vs the ice age?

      • David: no one is trying to predict local effects like those el Nino produces? The forecasts of doom are based on drought in Africa causing crop failure, more tornadoes, more hurricanes, more floods, heat waves killing people in Europe, all local things. Maybe YOU are only thinking about long-term global averages, but the IPCC impacts reports are based on regional forecasts.

      • Craig: Who is predicting how many people will be killed in Europe from the next ENSO?

        The ENSO “forecasts” are mostly based on historical precedent. I'm sure there are modelers trying to do regional modeling. Do you expect them to be perfect too?

      • Eddie, and your comments on my backyard vs the ice age?

        Well, global average temperature did not cause the glacial cycles.
        It is more accurate to say that the glacial cycles caused changes in global average temperature.

        But I’d also observe that your temperature range ( you must be in Eastern Oregon or the mountains for 38F ) is much larger than 2 or 3F from global warming. I don’t think your day, even in the backyard, would be very different if your temps were 52F to 90F instead of 50F to 88F.

      • Eddie: I’m in western Oregon, in Salem, west of the Cascades.

        So how is my 38 F daily range so much worse than 2 miles of ice above Chicago?

      • Eddie wrote:
        “I don’t think your day, even in the backyard, would be very different if your temps were 52F to 90F instead of 50F to 88F.”

        What science supports that view?

        Saturation vapour pressure, and with it the potential for evaporation, increases roughly exponentially with temperature, by about 7% per 1 deg C of warming. Was that a factor in the very difficult droughts being experienced in southern and southeastern Oregon? In the California drought?
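
        The roughly 7% per deg C figure usually comes from the Clausius-Clapeyron scaling of saturation vapour pressure. A minimal sketch of that calculation, using the standard Magnus approximation (the 15 C reference temperature is just an example):

```python
# Clausius-Clapeyron scaling behind the "~7% per deg C" figure. It describes
# saturation vapour pressure; actual evaporation also depends on wind, the
# humidity deficit, and available energy.
import math

def e_sat(t_c):
    """Saturation vapour pressure (hPa) via the Magnus approximation, t_c in deg C."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

t = 15.0                                        # illustrative surface temperature
increase = e_sat(t + 1.0) / e_sat(t) - 1.0
print(f"Increase per 1 C at {t} C: {increase * 100:.1f}%")   # roughly 6-7%
```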

      • So how is my 38 F daily range so much worse than 2 miles of ice above Chicago?

        I'm saying that raising the annual average temperature in Salem by 3C is something you would expect naturally, and that Salem didn't end when it happened in the past.

        In fact, it seems to have done okay.

      • Who says Salem has done OK?

        (Your chart only shows about 1.5 C of warming.)

        How much money was lost by Oregonian farmers due to the multi-year drought they’re still dealing with?

      • Eddie, that Salem/McNary data only shows a temperature rise of 1.2 C, not 3 C.

      • Eddie, that Salem/McNary data only shows a temperature rise of 1.2 C, not 3 C.
        The lowest annual temp to the highest annual temp ( about the range you can expect for any year ) is about 3C.

      • How much money was lost by Oregonian farmers due to the multi-year drought they’re still dealing with?

        And why do you bring up drought?

        I thought you agreed that weather is not predictable.

      • “The lowest annual temp to the highest annual temp ( about the range you can expect for any year ) is about 3C.”

        That's not how you calculate trends – a trend is a fitted slope over time, not the spread between the coldest and warmest years (see the sketch after this comment).

        So you don’t know how much was lost by Oregon farmers in the recent drought. Why haven’t you taken that into account when considering impacts?
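
        To make the point about trends concrete, a trend is the least-squares slope of the series over time; the min-to-max spread of annual values is just year-to-year variability. A minimal sketch with a synthetic series (not the Salem/McNary record):

```python
# Least-squares trend versus min-to-max spread on a synthetic annual series.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2016)
temps = 11.0 + 0.012 * (years - 1950) + rng.normal(0.0, 0.4, years.size)  # deg C, synthetic

slope = np.polyfit(years, temps, 1)[0]       # deg C per year
print(f"Linear trend: {slope * 100:.2f} C per century")
print(f"Min-to-max spread: {temps.max() - temps.min():.2f} C (not a trend)")
```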

        “I thought you agreed that weather is not predictable.”

        I haven’t written a word here about weather.

      • Sorry David, but the distribution of extra energy is critical. If GCMs are just very complex energy-balance methods, then they will assuredly be very wrong. You can get almost any answer you want for an airfoil with coarse grids that don't resolve details of the boundary layer. Give me the value of lift you want and I will run a CFD code and get that answer (at least for positive values between 0.5 and 1.5).
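
        To illustrate the grid-sensitivity point, here is a minimal grid-convergence check of the standard Richardson-extrapolation kind; the lift coefficients below are hypothetical, not results from any particular CFD run.

```python
# Richardson extrapolation on lift coefficients from three systematically
# refined grids: estimate the observed order of accuracy and the
# grid-converged value, then the remaining error on the finest grid.
import math

r = 2.0                                            # constant grid refinement ratio
cl_coarse, cl_medium, cl_fine = 0.92, 0.81, 0.78   # hypothetical lift coefficients

p = math.log((cl_coarse - cl_medium) / (cl_medium - cl_fine)) / math.log(r)
cl_extrap = cl_fine + (cl_fine - cl_medium) / (r**p - 1.0)

print(f"Observed order of accuracy: {p:.2f}")
print(f"Richardson-extrapolated C_L: {cl_extrap:.3f}")
print(f"Estimated fine-grid error: {abs(cl_fine - cl_extrap):.3f}")
```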

      • We aren’t talking about airfoils or how you would manipulate their equations, we’re talking about climate. Let’s stick to the subject.

      • Well, we have a much better understanding of simple turbulent flows than of climate. There is at least real data, and you can say something meaningful. All I've ever heard invoked for the climate problem is “physical understanding”. That's just entirely subjective.

        The point is that if simple flow modeling is sensitive to parameters, one would expect more complex turbulent flows to be sensitive as well. Weather models contain turbulence models and primitive boundary-layer models.

      • Steven Mosher