Global climate models and the laws of physics

by Dan Hughes

We frequently see the simple statement, “The Laws of Physics”, invoked as the canonical summary of the theoretical basis of GCMs.

We also see statements like, “The GCM models are based on the fundamental laws of conservation of mass, momentum, and energy.” Or, “GCM computer models are based on physics.” I recently ran across this summary:

How many hours have been spent verifying the Planck Law? The spectra of atmospheric gases? The laws of thermodynamics? Fluid mechanics? They make up climate models just as the equations of aerodynamics make up the airplane models.

And here’s another version:

Climate models are only flawed only if the basic principles of physics are, but they can be improved. Many components of the climate system could be better quantified and therefore allow for greater parameterisation in the models to make the models more accurate. Additionally increasing the resolution of models to allow them to model processes at a finer scale, again increasing the accuracy of the results. However, advances in computing technologies would be needed to perform all the necessary calculations. However, although the accuracy of predictions could be improved, the underlying processes of the models are accurate.

These statements present no actual information. The only possible information content is implicit, and that implicit information is at best a massive mischaracterization of GCMs and at worst disingenuous (dishonest, insincere, deceitful, misleading, devious).

There are so many self-contradictions in the last quoted paragraph, both within a given sentence and between sentences, that it’s hard to know where to begin. The first sentence is especially self-contradictory (assuming there are degrees of self-contradiction). A very large number of procedures and processes are applied to the model equations on the way from the continuous equations to the coded solution methods in GCMs. It is critical that the actual coding be shown to be exactly what was intended, as guided by theoretical analyses of the discrete approximations and numerical solution methods.

The articles from the public press that contain such statements sometimes allude to other aspects of the complete picture, such as the parameterizations that are necessarily a part of the models. Generally, however, such public statements present an overly simplistic picture relative to the actual characterizations and status of climate-change modeling.

It appears to me that the climate-change community is in a unique position relative to presenting such informal kinds of information. In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.

The fundamental, unaltered Laws of Physics that would be needed for calculating the responses of Earth’s climate systems are never solved by GCMs, and certainly will never be solved by GCMs. That is a totally intractable calculation problem, both analytically and numerically. Additionally, and very importantly, the continuous equations are never solved directly for the numbers presented as calculated results. Numerical solution methods applied to discrete approximations of the continuous equations are the actual source of the presented numbers. Importantly, the numerical solution methods used in GCM computer codes have not yet been shown to converge to the solutions of the continuous equations.

In order to gain insight from the numbers calculated by GCMs, a deep understanding of the actual source of the numbers is of paramount importance. The actual source is far removed from the implied source conveyed by the statements quoted above. The ultimate source of the calculated results is the numerical solutions produced by computer software. The numerical solutions arise from the discrete equations that approximate the continuous equations on which the model is based. Thus a bottom-up approach to understanding GCM-reported results requires that the nitty-gritty details of what is actually in the computer codes be available for inspection and study.
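To make the distinction concrete, here is a minimal sketch (mine, not taken from any GCM) of how a continuous equation is replaced by an algebraic recurrence. The numbers such a code reports are properties of the recurrence, not of the PDE itself; all values below are illustrative.

```python
# Minimal sketch (not from any GCM): the 1-D heat equation u_t = alpha*u_xx
# is replaced by an explicit finite-difference recurrence on a coarse grid.
# The reported numbers are produced by this algebraic update, not by the PDE.

import numpy as np

alpha = 1.0e-2                                  # diffusivity (illustrative)
nx, dx, dt = 21, 0.05, 0.01                     # grid and timestep (illustrative)
u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))   # initial condition, u = 0 at the ends

r = alpha * dt / dx**2   # the discrete parameter that actually controls the numbers
for _ in range(100):
    # This recurrence is the actual source of the "results".
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(u.max())   # a number produced by the discrete system, not by the PDE
```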

The Actual Source of the Numbers

The numbers calculated by GCMs are the result of the following processes and procedures:

(1) Application of assumptions and judgments to the basic, fundamental “Laws of Physics” in order to formulate a calculation problem that (a) is tractable and (b) captures the essence of the physical phenomena and processes important for the intended applications.

(2) Development of discrete approximations to the tractable equation system. The discrete approximations must maintain the requirements of the Laws of Physics (conservation principles, for example).

(3) Development of stable and consistent numerical solution methods for the discrete approximations. By the Lax equivalence theorem, stability plus consistency imply convergence. Yes, that result holds for well-posed problems, but some of the ODEs and PDEs used in GCMs do represent well-posed problems (heat conduction, for example). A minimal demonstration appears after this list.

(4) Coding of the numerical solution methods.

(5) Ensuring that the solution methods, and all other aspects of the software, are correctly coded.

(6) Validation of the model equations by comparisons of calculated results with data from the physical domain.

(7) Development of application procedures and user training for each of the intended applications.
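As promised under item (3), here is a minimal demonstration under idealized assumptions (my sketch: explicit differencing of the 1-D heat equation, a well-posed problem, using the same scheme as the earlier sketch). The same consistent scheme converges or blows up depending only on the discrete parameter r.

```python
# Minimal sketch of item (3): von Neumann analysis of the explicit scheme for
# u_t = alpha*u_xx gives stability iff r = alpha*dt/dx**2 <= 1/2; combined
# with consistency, the Lax equivalence theorem then gives convergence for
# this well-posed problem. Illustrative code, not from any GCM.

import numpy as np

def advance(r, steps=500, nx=21):
    """March the explicit heat-equation recurrence and return max |u|."""
    u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return np.abs(u).max()

print(advance(r=0.45))   # stable: decays smoothly, tracking the true solution
print(advance(r=0.55))   # unstable: the highest grid mode grows without bound
```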

Validation, demonstrating the fidelity of the resulting whole ball of wax to the physical domain, is a continuing process over the lifetime of the models, methods, and software.

The long, difficult, iterative path through the processes and procedures outlined above, from the fundamental Laws of Physics in their continuous-equation form to the calculated numbers, is critically affected by a number of factors that are seldom mentioned whenever GCMs are the subject. Among the more significant of these is the well-known user effect, item (7) above. Complex software built around complex physical-domain problems requires very careful attention to the qualifications of the users for each application.

In these notes the real-world nature and characteristics of such complex software and physical domain problems are examined in the light of the extremely simplified public face of GCMs. That public face will be shown to be a largely false characterization of these models and codes.

The critically important issues are those associated with (1) the modifications and limitations of the continuous formulations of the model equation systems used in GCMs (generally, the fluid-flow model equations are not the complete fundamental form of the Navier-Stokes equations, and the radiative-energy-transport model is not the fundamental formulation for an interacting medium, for example), (2) the exact transformation of all the continuous equation formulations into discrete approximations, (3) the critically important properties and characteristics of the numerical solution methods used to solve the discrete approximations, (4) the limitations introduced at run time for each type of application and their effects on the response functions of interest, and (5) the expertise and experience of the users of the GCMs for each application area.

These matters are discussed in the following paragraphs.

Background

Such statements as those quoted above provide, at the very best, only a starting point for understanding where the presented numbers actually come from. It is generally not possible to present an accurate and complete description of what constitutes the complete model in communications intended primarily as informal presentations of a model and a few results. However, the overly simplistic summary that is usually presented should be tempered to more nearly reflect the reality of GCM models, methods, and software.

Here are four examples of where GCM model equations depart from the fundamental Laws of Physics.

(1) In almost no practical applications of the Navier-Stokes equations are they solved to the degree of resolution necessary for accurate representation of fluid flows near and adjacent to stationary, or moving, surfaces. Two such surfaces of interest in modeling Earth’s climate systems are (a) the air-water interface between the atmosphere and the oceans and (b) the interface between the atmosphere and the land. When considering the entirety of the interactions between sub-systems, including, for example, biological, chemical, hydrodynamic, and thermodynamic interactions, the number of such interfaces is quite large.

The gradients that appear in the fundamental formulations at these interfaces are all replaced by algebraic approximations. The replacement occurs at the continuous-equation level, even prior to making discrete approximations. These algebraic models and correlations are used to represent mass, momentum, and energy exchanges between the materials that make up the interfaces. A sketch of such a correlation follows.
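As an illustration (my sketch, not code from any GCM), a typical bulk-exchange correlation of this kind is the bulk aerodynamic formula for the sensible heat flux at the air-sea interface; the transfer coefficient is empirical, and no near-surface gradient is resolved.

```python
# Minimal sketch: a bulk-exchange correlation of the kind described above,
# replacing the resolved near-surface gradient with a bulk-state difference.
# The transfer coefficient c_h is empirical; all values are illustrative.

RHO_AIR = 1.2     # near-surface air density, kg/m^3
CP_AIR = 1005.0   # specific heat of air at constant pressure, J/(kg K)

def sensible_heat_flux(wind_speed, t_surface, t_air, c_h=1.2e-3):
    """Bulk aerodynamic formula: flux from bulk-to-bulk differences,
    not from the gradient that appears in the fundamental formulation."""
    return RHO_AIR * CP_AIR * c_h * wind_speed * (t_surface - t_air)

# Example: 8 m/s wind over water 2 K warmer than the overlying air.
print(sensible_heat_flux(8.0, 288.0, 286.0))   # ~23 W/m^2
```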

(2) The assumption of hydrostatic equilibrium normal to Earth’s surface is exactly that: an assumption. The fundamental Law of Physics, the complete momentum balance equation for the vertical direction, is not used.
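Written out, the contrast is (a schematic, inviscid form for illustration):

```latex
% Full vertical momentum balance (schematic, inviscid) versus the
% hydrostatic approximation that replaces it:
\rho \frac{Dw}{Dt} = -\frac{\partial p}{\partial z} - \rho g
\quad\longrightarrow\quad
0 = -\frac{\partial p}{\partial z} - \rho g ,
% i.e., the vertical acceleration Dw/Dt is assumed negligible.
```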

(3) Likewise, the popular description of the effects of CO2 in Earth’s atmosphere is based on an assumption of a nearly steady-state balance between incoming and outgoing radiative energy exchange. This is sometimes attributed to the Laws of Physics and conservation of energy. However, conservation of energy holds for all time and everywhere. The balance between incoming and outgoing radiative energy exchange for a system that is open to energy exchange is solely an assumption and is not related to conservation of energy.
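For illustration, the assumed balance can be written as the familiar steady-state relation (a sketch of the assumption, not a conservation law):

```latex
% Assumed planetary energy balance: absorbed solar input equals outgoing
% longwave emission,
\frac{(1-\alpha)\, S_0}{4} = \sigma T_e^4 ,
% where S_0 is the solar constant, \alpha the planetary albedo, \sigma the
% Stefan-Boltzmann constant, and T_e the effective emission temperature.
% Conservation of energy holds always and everywhere; this balance holds
% only if the storage terms are assumed to vanish.
```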

(4) There is a singular, critically important difference between the proven fundamental Laws of Physics and the basic model equations used in GCMs. The fundamental Laws of Physics are based solely on descriptions of materials. The parameterizations that are used in GCMs are instead approximate descriptions of previous states that the materials have attained. The proven fundamental laws never incorporate descriptions of states that the materials have previously attained. Whenever descriptions of states that materials have experienced appear in equations, the results are models of the basic fundamental laws, not the laws as originally formulated.

A more nearly complete description of exactly what constitutes computer software developed for analyses of inherently complex physical phenomena and processes is given in the following discussions.

Characterization of the Software

Models and associated computer software intended for analyses of real-world complex phenomena and processes generally comprise the following model, method, software, and user components:

1. Basic Equations Models The basic equations are generally from continuum mechanics: the Navier-Stokes-Fourier model for mass, momentum, and energy conservation in fluids, heat conduction in solids, radiative energy transport, chemical-reaction laws, the Boltzmann equation, and many others. The fundamental equations also include the constitutive equations for the behavior and properties of the associated materials: the equation of state, thermo-physical and transport properties, and basic material properties. Generally the basic equations refer to the behavior and properties of the materials of interest.
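For reference, the mass and momentum members of such a system can be written schematically in conservation form (the energy balance is analogous):

```latex
% Schematic conservation (divergence) form of the compressible mass and
% momentum balances referred to above:
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0 ,
\qquad
\frac{\partial (\rho \mathbf{u})}{\partial t}
  + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u})
  = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{g} ,
% where the viscous stress tensor \tau is supplied by a constitutive
% (material) relation.
```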

Even though the fundamental basic equations of mass, momentum, and energy conservation are taken as the starting point for modeling the physical phenomena and processes of importance, several assumptions and approximations are generally needed in order to make the problem tractable, even with the tremendous computing power available today. The exact radiative transfer equations for an interacting medium, for example, are not solved; instead, approximations are introduced to make the problem tractable.

With almost no exceptions, the basic, fundamental laws in the form of continuous algebraic equations, ODEs, and PDEs from which the models are built are not the equations that are ultimately programmed into the computer codes. Assumptions and approximations, appropriate for the intended application areas, are applied to the original form of the equations to obtain the continuous equations that will be used in the model. The approximations that are made are to greater and lesser degrees important relative to the nature of the physical phenomena and processes of interest. A few examples are given in the following paragraphs.

The fluid motions of the mixtures in both the atmosphere and oceans are turbulent and there is no attempt at all to use the fundamental laws of turbulent fluid motions in GCM models/codes. For the case of two- or multi-phase turbulent flows, liquid droplets in a gaseous mixture for example, the fundamental laws are not yet known.

The exchanges of mass, momentum, and energy at the interfaces between the systems that make up the climate (atmosphere, oceans, land, biological, chemical, etc.) are, at the fundamental-law level, expressed as a coefficient multiplying the gradient of a driving potential. These are never used in the GCM models/codes because the spatial resolution used in the numerical solution methods does not allow the gradients to be resolved. The gradients of the driving potentials are not calculated in the codes. Instead, algebraic correlations of empirical data, based on a bulk-state-to-bulk-state average potential, are used.

The modeling of radiative energy transport in an interacting medium does not use the fundamental laws of radiative transport. Assumptions are applied to the fundamental law so that a reasonable and tractable approximation to the physical phenomena for the intended application is obtained.

While the fundamental equations are usually written in conservation form, not all numerical solution methods exactly conserve the physical quantities. Indeed, one test of a numerical method is whether quantities conserved by the continuous partial differential equations are in fact conserved in actual calculations, as sketched below.
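A minimal sketch of such a test (illustrative, not from any GCM): a flux-form (finite-volume) update conserves the discrete total exactly, because every flux that leaves one cell enters its neighbor.

```python
# Conservation test sketched above: a flux-form upwind advection update on a
# periodic grid conserves the discrete total to roundoff. Illustrative only.

import numpy as np

nx, c = 50, 0.4                     # number of cells and Courant number
q = np.zeros(nx); q[10:20] = 1.0    # conserved quantity per cell

total_before = q.sum()
for _ in range(100):
    flux = c * q                    # upwind flux through each cell face
    q += np.roll(flux, 1) - flux    # what leaves cell i enters cell i+1
total_after = q.sum()

print(total_before, total_after)    # identical up to roundoff
```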

The preceding comments should not be interpreted to mean that the basic model equations are incorrect. They are, however, incomplete representations of the fundamental Laws of Physics. Additionally, as discussed next, the algebraic correlations of empirical data are often far from fundamental.

2. Engineering Models and Correlations of Empirical Data These equations generally arise from experimental data and are needed to close the basic model equations; turbulent fluid flow, heat-transfer and friction-factor correlations, and mass-exchange coefficients are examples. Generally the engineering models and empirical correlations refer to specific states of the materials of interest, not the materials themselves, and are thus usually of much less than a fundamental nature. Many times they are basically interpolation methods for experimental data.

Models and correlations that represent states of materials and processes do not represent properties of the materials and are thus of much less of a fundamental nature than the basic conservation laws.

3. Special Purpose Models These are models for phenomena and processes that are too complex or insufficiently understood to model from basic principles, or that would require excessive computing resources if modeled from basic principles.

The apparently all-encompassing parameterizations used in almost all GCM models and codes fall under items 2 and 3. Many physical phenomena and processes important to climate-change modeling are treated by parameterization. Some of the parameterizations are of a heuristic and ad hoc nature.

Special purpose models can also include calculations of quantities that assist users, post-processing of calculated data, and calculation of quality-control quantities. Calculations of solution functionals, and other aspects that do not feed back to the main calculations, are examples.

4. Important Sources from Engineered Equipment Models for phenomena and processes occurring in complex engineered equipment are needed if a physical system of interest includes hardware. In the case of the large general GCMs, the conversion of materials of one form and composition into other forms and compositions involves engineered equipment and processes.

Summary of the Continuous Equations Domain The final continuous equations that are used to model the physical phenomena and processes usually arise from these first four items. The continuous equations always form a large system of coupled, non-linear partial and/or ordinary differential equations (PDEs and ODEs) plus a very large number of algebraic equations.

For the class of models of interest here, and for models of inherently complex, real-world problems in general, the projective/predictive/extrapolative capabilities are established by the modeling under Items 1, 2, 3, and 4 listed above.

5. The Discrete Approximation Domain Moving to the discrete-approximation domain introduces a host of additional issues, and the ‘art’ aspects of ‘scientific and engineering’ computations complicate these. Within the linearized domain the theoretical ramifications can be computationally realized by use of idealized flows that correspond to the theoretical aspects of the analyses.

The adverse theoretical ramifications do not always prevent successful applications of the model equations, in part because the critical frequencies are not resolved in the applications, and in part because the discrete approximations usually have inherent, implicit representations of dissipative-like terms. Such dissipative terms are also sometimes explicitly added into the discrete approximations, and sometimes these added terms have no counterparts in the original fundamental equations.

Reconciliation of the theoretical results with computed results is also complicated by the basic properties of the selected solution method for the discrete approximations. The methods themselves can introduce aphysical perturbations into the calculated flows. Matters are further complicated whenever the discrete approximations contain discontinuous algebraic correlations (for mass, momentum, and energy exchanges, for example) and switches that are intended to prevent aphysical calculated results. In the physical domain any discontinuity (in pressure, velocity, temperature, EoS, thermophysical, and transport properties, for example) has the potential to lead to growth of perturbations. In the physical domain, however, physical phenomena and processes act to limit the growth of physical perturbations.

6. Numerical Solution Methods Numerical solution methods for all the equations that comprise the models are necessary. These processes are the actual source of the numbers that are usually presented as results.

Almost all complex physical phenomena are non-linear, with a multitude of temporal and spatial scales, interactions, and feedbacks. Universally, numerical solution methods via finite-difference, finite-element, spectral, and other discrete-approximation approaches are about the only alternative for solving the system of equations. When applied to the continuous PDEs and ODEs and the algebraic equations of the model, these approximations give systems of coupled, nonlinear algebraic equations that are enormous in size.

Almost all of the important physical processes occur at spatial scales smaller than the discrete spatial resolution employed in the calculations. Additionally, the temporal scales of the phenomena and processes encountered in applications range from those associated with chemical reactions to time spans on the order of a century. In the GCM solution methods almost none of these temporal scales are resolved. A back-of-envelope illustration follows.
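As promised above, a back-of-envelope illustration (all numbers are my illustrative assumptions, not taken from any particular GCM):

```python
# Back-of-envelope sketch of the range of temporal scales described above.

century_s = 100 * 365.25 * 24 * 3600   # ~3.16e9 seconds in a century
gcm_dt_s = 1800.0                      # a typical ~30-minute dynamics timestep
chem_dt_s = 1.0e-3                     # a fast chemical-reaction timescale

print(century_s / gcm_dt_s)    # ~1.8e6 steps for a century at the GCM timestep
print(century_s / chem_dt_s)   # ~3.2e12 steps if the fast scales were resolved
```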

Numerical solution methods are the dominant aspect of almost all modeling and calculation of inherently complex physical phenomena and processes in inherently complex geometries. The spatial and temporal scales of the application area of GCMs are enormous, maybe unsurpassed in all of modeling and calculation. The tremendous spatial scale of the atmosphere and oceans has so far proven to be a very limiting aspect relative to computing requirements, especially when coupled with the large temporal scales of interest; centuries, for example.

In GCM codes and applications, the algebraic approximations to the original continuous equations are only approximately solved. Grid independence has never been demonstrated, for example. The lack of demonstrated grid independence is proof that the algebraic equations have been only approximately solved. Independent Verification of (1) the coding and (2) the actual achieved accuracy of the numerical solution methods has also never been demonstrated.
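For reference, a minimal sketch of what a grid-independence demonstration involves: a Richardson-style estimate of the observed order of accuracy from three systematically refined grids. The numerical values below are hypothetical, chosen only to illustrate the procedure.

```python
# Grid-convergence sketch: estimate the observed order of accuracy p from a
# response function computed on three grids refined by a constant factor rr.

import math

def observed_order(f_fine, f_medium, f_coarse, rr=2.0):
    """Richardson-style estimate: p = log(e32/e21) / log(rr)."""
    e21 = abs(f_medium - f_fine)
    e32 = abs(f_coarse - f_medium)
    return math.log(e32 / e21) / math.log(rr)

# Hypothetical values from successive refinements of some response function:
print(observed_order(1.0010, 1.0040, 1.0160))   # 2.0: consistent with a
                                                # formally 2nd-order method
```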

Because numerical solutions are the source of the numbers, one of the primary focuses of discussions of GCM models and codes must be the properties and characteristics of the numerical solution methods. Some of the issues that have not been sufficiently addressed are briefly summarized here.

Summary of the Discrete Approximation Domain The discrete-approximation domain ultimately determines the correspondence between the properties of the fundamental Laws of Physics and the actual numbers from GCMs. The overriding principles of conservation of mass and energy, for example, can be destroyed in this domain. One cannot simply state that the Laws of Physics ensure that fundamental conservation principles are obtained.

7. Auxiliary Functional Methods Auxiliary functional methods include instructions for installation on the users’ computer system, pre- and post-processing, code input and output formats, analyses of calculated results, and other user-aids such as training for users.

Accurate understanding and presentation of calculations from inherently complex models and equally complex computer codes demands that the qualifications of the users be determined and enhanced by training. The model/code developers are generally the most qualified to provide the required training.

8. Non-functional Methods Non-functional aspects of the software include its understandability, maintainability, extensibility, and portability. Large complex codes have generally evolved over decades, in contrast to being built from scratch, and thus include a variety of potential sources of problems in these areas.

9. User Qualifications For real-world models of inherently complex physical phenomena and processes the software itself will generally be complex and somewhat difficult to accurately apply and the calculated results somewhat difficult to understand. Users of such software must usually receive training in applications of the software.

Summary

I think all of the above characterizations, properties, procedures, and processes, presented from a bottom-up focus, constitute a more nearly complete and correct characterization of GCM computer codes. The models and methods summarized above are incorporated into computer software for applications to the analyses for which the models and methods were designed.

Documentation of all the above characteristics, in sufficient detail to allow independent replication of the software and its applications, is generally a very important aspect of development and use of production-grade software.

Unlike “pure” science problems, for which the unchanged fundamental Laws of Physics are solved, the simplifications and assumptions made at the fundamental-equation level, the correlations and parameterizations, and, especially, the finite-difference aspects of GCMs are the overriding concerns.

Spatial discontinuities in all fluid-state properties (density, velocity, temperature, pressure, etc.) introduce the potential for instabilities, as do discontinuities in the discrete representation of the geometry of the solution domain. Physical instabilities captured by the equations in GCMs, and the behavior of the numerical solution methods when these are resolved, become vitally important. The solutions must be demonstrated to be correct and not artifacts of the numerical approximations and solution methods.

GCMs are Process Models Here’s a zeroth-order cut at differentiating a computational physics problem for The Laws of Physics from working with a process model of the same physical phenomena and processes.

A computational physics problem will have no numerical values for coefficients appearing in the continuous equations other than those that describe the material of interest.

Process models can be identified by the fact that given the same material and physical phenomena and processes, there is more than one specification for the continuous equations and more than one model.

Some process models are based on more nearly complete usage of fundamental equations, and fewer parameterizations, than others.

The necessary degree of completeness for the continuous equations, and the level of fidelity for the parameterizations, in process models is determined by the dominant controlling physical phenomena and processes.

The sole issue for computational physics is Verification of the solution.

Process models will involve many calculations in which variations of the parameters in the model are the focus. None of these parameters will be associated with properties of the material. Instead they will all be associated with configurations that the material has experienced, or nearly so, at some time in the past.
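To illustrate the distinction (my example, not the author’s): in a first-principles result the only fluid input is a material property, while an empirical correlation’s constants encode flow states previously observed in experiments.

```python
# Illustrative contrast between a computational-physics result and a
# process-model correlation, per the distinction drawn above.

import math

def stokes_drag(mu, diameter, velocity):
    """First-principles creeping-flow drag: the only fluid input is the
    viscosity mu, a material property."""
    return 3.0 * math.pi * mu * diameter * velocity

def drag_coefficient(re):
    """Empirical Schiller-Naumann fit: the constants 0.15 and 0.687 encode
    states the material has previously attained in experiments."""
    return 24.0 / re * (1.0 + 0.15 * re**0.687)

print(stokes_drag(1.8e-5, 1.0e-4, 0.01))   # drag force, N (creeping flow)
print(drag_coefficient(100.0))             # dimensionless C_d at Re = 100
```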

Moderation note:  As with all guest posts, please keep your comments civil and relevant.  Moderation on this post will be done with a heavy hand, please keep your comments substantive.

978 responses to “Global climate models and the laws of physics”


  2. Dan Hughes,

    In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.

    Here, knock yourself out.

    Process models will involve many calculations in which variations of the parameters in the model are the focus. None of these parameters will be associated with properties of the material. Instead they will all be associated with configurations that the material has experienced, or nearly so, at some time in the past.

    Wonderful. When can we expect to see your model?

    • Laws of physics are exaggerated opinions of infallibility.

    • “Laws of physics are exaggerated opinions of infallibility.”

      Tell that to the electricity coming into your house.
      Or your car when you try to start it.
      Or your brakes when you want to stop.

    • Looks like Dan nailed part of it. Why do you think Dan should produce a climate model? :

      The following adjustable parameters differ between the various finite volume resolutions in CAM 5.0. Refer to the model code for parameters relevant to alternative dynamical cores.

      Table C.1: Resolution-dependent parameters

      Parameter    FV 1 deg   FV 2 deg   Description
      qic,warm     2.e-4      2.e-4      threshold for autoconversion of warm ice
      qic,cold     18.e-6     9.5e-6     threshold for autoconversion of cold ice
      ke,strat     5.e-6      5.e-6      stratiform precipitation evaporation efficiency parameter
      RHlow,min    .92        .91        minimum RH threshold for low stable clouds
      RHhigh,min   .77        .80        minimum RH threshold for high stable clouds
      k1,deep      0.10       0.10       parameter for deep convection cloud fraction
      pmid         750.e2     750.e2     top of area defined to be mid-level cloud
      c0,shallow   1.0e-4     1.0e-4     shallow convection precip production efficiency parameter
      c0,deep      3.5E-3     3.5E-3     deep convection precipitation production efficiency parameter
      ke,conv      1.0E-6     1.0E-6     convective precipitation evaporation efficiency parameter
      vi           1.0        0.5        Stokes ice sedimentation fall speed (m/s)

      • brandon, “How many angels can dance on the head of a pin?”

        yawn, typical progressive BS. “Scientific” press releases are getting more and more hyped in self-promotion efforts, which creates less “faith” in scientists. If scientists do not have the gonads to correct the record, the distrust will just get “progressively” worse.

      • Dallas,

        yawn, typical progressive BS. “Scientific” press releases are getting more and more hyped in self-promotion efforts, which creates less “faith” in scientists. If scientists do not have the gonads to correct the record, the distrust will just get “progressively” worse.

        Yawn, typical contrarian disconnect from reality:

        http://content.gallup.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/mv4nnuxuy0-t17h_w0su9g.png

        http://content.gallup.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/kk6qz4ey2kqlojjpj6wnna.png

        Cue: “See? The propaganda is working”.

        Here, does this comport to your arbitrary and vague definition of “accurate picture of the science”? http://phys.org/news/2012-11-limitations-climate.html

        How accurate is the latest generation of climate models? Climate physicist Reto Knutti from ETH Zurich has compared them with old models and draws a differentiated conclusion: while climate modelling has made substantial progress in recent years, we also need to be aware of its limitations.
        We know that scientists simulate the climate on the computer. A large proportion of their work, however, is devoted to improving and refining the simulations: they include recent research results into their computer models and test them with increasingly extensive sets of measurement data. Consequently, the climate models used today are not the same as those that were used five years ago when the Intergovernmental Panel on Climate Change (IPCC) published its last report. But is the evidence from the new, more complex and more detailed models still the same? Or have five years of climate research turned the old projections upside down?

        It is questions like these that hundreds of climate researchers have been pursuing in recent years, joining forces to calculate the climate of the future with all thirty-five existing models. Together with his team, Reto Knutti, a professor of climate physics, analysed the data and compared it with that of the old models. In doing so, the ETH-Zurich researchers reached the conclusion: hardly anything has changed in the projections. From today’s perspective, predictions five years ago were already remarkably good. “That’s great news from scientist’s point of view,” says Knutti. Apparently, however, it is not all good: the uncertainties in the old projections still exist. “We’re still convinced that the climate is changing because of the high levels of greenhouse gas emissions. However, the information on how much warmer or drier it’s getting is still uncertain in many places,” says Knutti. One is thus inclined to complain that the last five years of climate research have led nowhere – at least as far as the citizens or decision makers who rely on accurate projections are concerned.

        Still too apologetic? Try this:

        Once again this brings us back to the thorny question of whether a GCM is a suitable tool to inform public policy.
        Bish, as always I am slightly bemused over why you think GCMs are so central to climate policy.

        Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas.
        Everyone* agrees that CO2 rise is anthropogenic
        Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either.

        So climate mitigation policy is a political judgement based on what policymakers think carries the greater risk in the future – decarbonising or not decarbonising.

        A primary aim of developing GCMs these days is to improve forecasts of regional climate on nearer-term timescales (seasons, year and a couple of decades) in order to inform contingency planning and adaptation (and also simply to increase understanding of the climate system by seeing how well forecasts based on current understanding stack up against observations, and then futher refining the models). Clearly, contingency planning and adaptation need to be done in the face of large uncertainty.

        *OK so not quite everyone, but everyone who has thought about it to any reasonable extent
        **Apart from a few who think that observations of a decade or three of small forcing can be extrapolated to indicate the response to long-term larger forcing with confidence

        Aug 22, 2014 at 5:38 PM | Registered Commenter Richard Betts

        Are the bits I bolded ballsy enough for you? Does Dr. Betts suitably anti-hype things to your discerning and exacting specifications? Do I need to assign even more climatologists to the role of media nanny at the expense of them doing, I don’t know, their real jobs?

        I’m just trying to make sure all your concerns are being properly addressed in a timely fashion. The customer is always right, you know.

        ***

        At what point does it occur to you that if we don’t know f%#kall about how the real system works that we probably shouldn’t be making changes to it?

        Sweet weepin’ Reason on the cross.

    • Brandon – Specifically which of Dan’s assertions do you dispute? You just tossed off a throw-away comment as far as I can tell.

      • jim2,

        Specifically which of Dan’s assertions do you dispute?

        I quoted it: Instead, the actual equations that constitute the model are presented.

        Is the popular press he’s critiquing supposed to do that every time they write about the physical underpinnings of climate models?

        Why do you think Dan should produce a climate model?

        Don’t tell me, show me. Engineering is *applied* science. Instead, he’s giving the climate modeling community homework assignments by way of attacking what’s written about GCMs by journalists.

      • brandon, “Instead, he’s giving the climate modeling community homework assignments by way of attacking what’s written about GCMs by journalists.”

        Right, the climate modeling community should be communicating with the churnalists so the public gets an accurate picture of the science. How many churnalists “know” that the models run hot so they are providing an upper limit instead of likely possibility? How many churnalists know that RCP 8.5 isn’t really the business as usual estimate but one of a number of business as usual estimates?

      • brandon, and just for fun, the Kimoto equation is a simple model that produces a low but reasonable estimate of the response to CO2 based on “current” energy balance information. Everyone should know that it is a lower-end estimate, though it can be adapted for more detail, but there is huge opposition to anyone producing a low-end estimate with reasonable caveats, versus acceptance of known high-end estimates.

        It is almost like a high bias is the most useful for some reason.

      • the Kimoto equation is a simple model that produces a low, but reasonable

        That’s mutually exclusive.

      • Micro, “That’s mutually exclusive.”

        No it isn’t; it is a boundary value problem, so having a reasonable lower-bound estimate is desirable. If you expand Kimoto you have a fairly complex partial differential equation set. Not much different than N-S except it considers a combination of energy fluxes.

      • /sarc sorry lol

        No you’re right, I draw boxes around solutions all the time, very useful in getting your point across.

  3. “Colorless green ideas sleep furiously” obeys all laws of grammar.

  4. Hard to believe people won’t accept the scientific evidence that science has completely F@@ked up the environment in complete clueless like fashion pretending all this time to understand nature! Maybe they cling to the outdated notion that science has a clue!!!!

  5. All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time. No law of physics is complete [yet].

    So we develop “parameters” to dance between the bumps and holes of the laws we have ratified. You can think of a double black diamond ski run or a class 4 river rapid. Lots of bumps and holes, but many different “parameterized” ways down.

    • “All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time.”

      Talk about pseudoscientific babble speak.

      The Planck law holds everywhere, not only in some “sweet spot,” in, as far as we know, all of spacetime, not as a Bell Curve, but as a fundamental law of nature.

      • Not true.
        eg definition
        “The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. ”
        is very suggestive of a Bells Curve.

      • You call that a kna-ife, angech? That is a kna-ife:

        Within metaphysics, there are two competing theories of Laws of Nature. On one account, the Regularity Theory, Laws of Nature are statements of the uniformities or regularities in the world; they are mere descriptions of the way the world is. On the other account, the Necessitarian Theory, Laws of Nature are the “principles” which govern the natural phenomena of the world. That is, the natural world “obeys” the Laws of Nature. This seemingly innocuous difference marks one of the most profound gulfs within contemporary philosophy, and has quite unexpected, and wide-ranging, implications.

        http://www.iep.utm.edu/lawofnat/#SH2a

        Clifford was probly a regularist, BTW.

      • “The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. ”
        is very suggestive of a Bells Curve.

        No — you can describe things in other ways without it fitting a normal distribution. The S-B Law is very much not a normal distribution (not a “Bell Curve”).

        Are you using “Bell Curve” to just mean any kind of statistical distribution with some spread?

      • “The S-B Law is very much not a normal distribution (not a “Bell Curve”).”

        I didn’t say it was. Nor did the OP. He said, if I understood correctly, that F=ma gives, for a given a, a range of values of F that are Gaussian distributed.

        Which isn’t true. Nor, for a given temperature T, does the S-B Law give a range of emission intensities. There is a 1-1 relationship.

      • Willard | September 14, 2016 at 9:27 am |
        You call that a kna-ife, angech? That is a kna-ife:
        True.
        Thanks. The death of a thousand cuts.
        One has to be careful talking the planck in case one falls off.

      • angech wrote:
        “Not true.
        eg definition
        “The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. ”
        is very suggestive of a Bells Curve.”

        That is not my understanding of what the original commenter wrote, which was “All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time. No law of physics is complete”

        This is vague, but my interpretation of it is that F does not equal ma, but that, given an F, a is not exact but is represented as a probability distribution that’s a Bell Curve. Which is wacky.

      • Appell, ““All laws of physics are appropriate in some bell curve sweet spot, but tail off in all dimensions of space time.”

        Talk about pseudoscientific babble speak.”

        More of an engineering truism. The limits of most models are approximations. Newtonian physics needs corrections as you near the speed of light; the ideal gas laws need various corrections as you approach limits. S-B is an ideal approximation that nearly always needs some adjustment. So if you use exclusively ideal approximations, your expectations will always be unrealized to some degree.

        Engineers normally start by looking at how unrealistic expectations are then adjust accordingly. Which makes it humorous when “philosopher” types start telling engineers, what engineers always do :)

      • The laws of physics aren’t the problem, it’s the application of them. Of course they pertain to idealized situations. But if you’re trying to use Newton’s F=ma law at relativistic speeds, you’re not using the right law, and that’s your fault, not the law’s. Instead you should be using the laws of general relativity.

        The real world is complex. But that complexity is the world’s, and not the laws’. The laws are useful precisely because they aren’t complex.

      • Appell, “Instead you should be using the laws of general relativity.”

        Well, it is pretty obvious that a number of climate model parameterizations need an updated version of whatever they were using. That is kind of the point, revisiting simplifying assumptions instead of defending flawed results.

        “Well, it is pretty obvious that a number of climate model parameterizations need an updated version of whatever they were using.”

        Why is that obvious?

        “That is kind of the point, revisiting simplifying assumptions….”

        Climate modelers do this constantly.

      • “Why is that obvious?”

        Perhaps because clouds and aerosols, the same big uncertainties that have always been the big uncertainties, are still big uncertainties? Clouds are pretty funny really, since convective triggering is a parameterization based on a real surface temperature, and the models miss the temperatures that are being parameterized.

        Aerosols have been revised since the last set of runs, but what should be “normal” natural aerosol optical depth is a bit of a guess, so the current period of low volcanic activity could be contributing to warming currently attributed to just about everything else.

        Unless models can more accurately “get” real surface temperatures, not many of the parameterizations based on thermodynamics are very “robust”.

      • “Assuming that models now run warm means assuming surface temperature models have no errors.”

        No. It means that the models register the surface and the surface does not represent the “planet”. Ignoring for the moment dubious adjustments to the oceanic parts of the surface record, even the surface temperatures after Nino equilibrium is reestablished, fall short of model predictions.

        That is just the beginning of the story. Secular temperature increase decreases continuously to the tropopause. The tropopause is about flat, and above it, temperature decreases with time. Very sharply in the middle stratosphere, the highest altitude our Nimbus style satellites can see.

        Co2 continues to radiate well into the mesosphere, cooling the “planet” above the view of common satellites.

        Tell me: which altitude referenced above represents the “planet”?

    • dallas wrote:
      “Perhaps because clouds and aerosols, the same big uncertainties that have always been the big uncertainties, are still big uncertainties?”

      And the carbon cycle — huge uncertainties there.

      But why is uncertainty a reason for inaction? Are you assuming uncertainty about a factor means its influence is zero until the uncertainty is reduced to…what?

      The ironic thing about all this is that the radiative part of models regarding GHGs like CO2 is the *least* uncertain part of climate models, because it’s the most approachable via fundamental physics.

      Still pseudoskeptics complain about CO2, when the real uncertainty in models is in the carbon cycle, clouds, and aerosols. They have it all upside down.

      • “But why is uncertainty a reason for inaction?”
        Try
        Look before you leap?
        Out of the frying pan into the fire?
        Not terribly original
        Even a stitch in time means you have known insight into the problem before you attempt to fix it
        Which leads to if it ain’t broke don’t fix it.
        The road to hell is paved with good intentions.

      • angech: Cliches, and not very applicable at that.

        Uncertainty does not mean nothing is changing.

      • No. The radiative forcing is the MOST uncertain part. The fluid-dynamic parts of the models are actually pretty good. Otherwise they would not produce an index for ENSO, however incorrect in space and time.

        Just go to MODTRAN. Look up and down. It “sees” NO radiation in the atmosphere below a kilometer. At a kilometer it begins to pick up signal from the CO2 bands as a lessening of the outbound blackbody radiation. The altitudes for water and ozone are about 4 and 5 km respectively.

        The problem here is complex. If MODTRAN is correct, there is NO radiation in the first km of the atmosphere to transfer. The transfer must all be accomplished by conduction and convection…

      • “Just go to MODTRAN. Look up and down. It “sees” NO radiation in the atmosphere below a kilometer”

        Again, what does that even mean, “sees?”

      • “If MODTRAN is correct, there is NO radiation in the first km of the atmosphere to transfer. The transfer must all be accomplished by conduction and convection…”

        That sounds ridiculous, and I’m sure it’s incorrect. Or rather, your interpretation is. There is lots of IR in the bottom km of the atmosphere….

  6. Yeah, why is it that when it’s garbage in, garbage out, it’s always the computer’s fault?

    • Where is the “garbage” here?

      “Description of the NCAR Community Atmosphere Model (CAM 3.0),” NCAR Technical Note NCAR/TN–464+STR, June 2004.
      http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

      • Know it well, have studied it cover to cover. Two basic sections plus appendices. First section describes the dynamical core. Second section describes parameterizations: all the values given to all those PDEs. Therein lies the validation problem, because hindcasting cannot validate them given the attribution problem. So CAM runs objectively hot compared to balloon and satellite observations. If anything, invalidation.

        So CAM runs objectively hot compared to balloon and satellite observations.

        Which gets to the kluge I mentioned, the history of the models I remember reading was that they all ran cold, until they changed how they “conserved” water vapor, after which they all run warm.

      • Assuming that models now run warm means assuming surface temperature models have no errors. Which may well not be true:

        “Historical Records Miss a Fifth of Global Warming: NASA,” NASA.gov, 7/21/16
        http://www.nasa.gov/feature/jpl/historical-records-miss-a-fifth-of-global-warming-nasa

      • David, you know a lot of people; would you please ask around for a manifest or table of contents, even a few index pages, of just what data it was that Phil Jones ‘dumped’? I know the original data is no longer available, but if you could provide a link, that would be wonderful. Thank you for any help you might be able to bring to this discussion.

      • The point is that the equations in the model description *are* specific applications of the laws of physics, which Dan said did not happen.

  7. Thanks for addressing this.

    Regarding: “Validation, demonstrating the fidelity of the resulting whole ball of wax to the physical domain, is a continuing process over the lifetime of the models, methods, and software.”

    I think that one thing which can never be overstated is the need for independent testing. That’s the whole idea behind use of independent laboratories which are accredited in accordance with ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories: “In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited.”

    The requirement for use of accredited laboratories is imposed by authorities, for measurement of CO2 emissions, on even small businesses with the tiniest CO2 emissions.

    Meanwhile the models, the reason why authorities are imposing all this on mankind, are nowhere near being tested or validated by the same standards.

  8. There is a lot of general harrumphing here that isn’t attached to an examination of what GCMs actually do. But almost all of what is said could be applied to computational fluid dynamics (CFD). And that is established engineering.

    On the specific issues:
    “(1) In almost no practical applications of the Navier-Stokes equations are they solved to the degree of resolution necessary for accurate representation of fluid flows near and adjacent to stationary, or moving, surfaces. “
    True in all CFD. Wall modelling works.

    “(2) The assumption of hydrostatic equilibrium normal to Earth’s surface is exactly that: an assumption. The fundamental Law of Physics, the complete momentum balance equation for the vertical direction, is not used.”
    Hydrostatic equilibrium is the vertical momentum balance equation. The forces associated with vertical acceleration and viscous shear are assumed negligible. That is perfectly testable.

    “The balance between incoming and outgoing radiative energy exchange for a system that is open to energy exchange is solely an assumption.”
    It isn’t assumed in the equations – indeed, such a global constraint can’t be. It is usually achieved by tuning, usually with cloud parameters.

    “The fundamental Laws of Physics are based solely on descriptions of materials. The parameterizations that are used in GCMs are instead approximate descriptions of previous states that the materials have attained.”
    There is a well-developed mathematics of materials with history, in non-Newtonian flow. But I’m not aware of that being used in GCMs. In fact, I frankly don’t know what you are talking about here.

    • Nick,
      I’m surprised at you suggesting that CFD can be treated as a mature science. Without experimental data specific to the problem, there are still major problems with turbulence modeling.

      You wrote: “Hydrostatic equilibrium is the vertical momentum balance equation. The forces associated with vertical acceleration and viscous shear are assumed negligible. That is perfectly testable.”

      You are correct that it is testable. It has already been tested. The Richardson assumption gives useless (arbitrary) answers after a period of less than three weeks. (See The Impact of Rough Forcing on Systems with Multiple Time Scales: Browning and Kreiss, 1994; and Diagnosing summertime mesoscale vertical motion: implications for atmospheric data assimilation: Page, Fillion, and Zwack, 2007.) It leads to nonphysical high energy content at high wavenumbers, sometimes called “rough forcing”. It then becomes necessary to fudge the model by introducing a nonphysical dissipation to avoid the model blowing up. The GCMs all seem to be locked into this problem because of the legacy of large investment in adaptation of short-term meteorological modeling. I read recently of some moves afoot to fix this problem, but it requires a radical change-out of the numerical scheme for atmospheric (and shallow ocean) modeling and it hasn’t happened yet.

      What we have now is a heavily parameterised, non-physical solution which does not converge on grid refinement and which shows negative correlation with regional observations in most of the key variables. Do you want to buy a bridge?

      • “It leads to nonphysical high energy content at high wavenumbers, sometimes called “rough forcing”.”
        The corollary to testing, where instability is found, is updraft modelling, which emulates the meteorological processes. Checking your reference Pagé et al 2007:

        Using a state of the art NWP forecasting model at 2.5 km horizontal resolution, these issues are examined. Results show that for phenomena of length scales 15-100 km, over convective regions, an accurate form of diagnostic omega equation can be obtained. It is also shown that this diagnostic omega equation produces more accurate results compared to digital filters results.

        People can solve problems, if they try.

      • Nick,
        Your reference to updraft modeling takes me to an empirical subgrid model of deep convection. Was that your intent?
        Please do read the Page et al reference. There are two omega grids generated for comparison: one is “an accurate diagnostic form of omega equation” and the other (derived from the Richardson equation) is a high-resolution version of what is used in climate models. Note the separation after 45 minutes in the modeling with a little wind divergence. Note also the pains taken to eliminate the high-energy variation at high wavenumbers in the diagnostic version.

      • “empirical subgrid model of deep convection. Was that your intent?”
        The section heading of 4.1 is “Deep Convection”. The first subsection, 4.1.1 is the updraft ensemble.

      • Nick,
        There are a number of steps in converting a set of continuous equations into a numerical model:-
        1) Choice of spatial discretisation ( which controls spatial error order and grid orientation errors)
        2) How to evaluate time differentials, specifically whether time differentials are evaluated using information available at the beginning of the timestep (explicit scheme), at the end of the timestep (fully implicit scheme) or using a combination of end-timestep estimates and beginning-timestep information (semi-implicit) scheme
        3) Control of timestep
        4) Choice of SOLVER. This is the mathematical algorithm which solves the set of simultaneous equations which are developed as a result of the above choices.
        The original question was about whether it was possible to relate a numerical solution to the continuous equations – the governing equations from which the numerical formulation is derived. At most, your proposed test will tell you whether your solver is working to correctly solve your discretised equations. It can tell you nothing about whether your numerical formulation is working to satisfy the original continuous equations.
        David Appell,
        I say with no disrespect that it is evident that you have never developed numerical code for a complex problem. You wrote:
        “But in the real world, numerical solutions are calculated because there is no possibility of an analytic solution, or anything close to it. That’s certainly the case for climate. And if you are close to an idealized, textbook situation you’d first try perturbation theory anyway.”

        Your first statement is almost true. (A number of years ago, I was asked to develop a high resolution numerical code to test the validity of a solution which was analytic in a non-physical mathematical space, but which then had to be numerically inverted back to physical space.) Of necessity, this means that the best we can ever do to test a numerical formulation is run a series of NECESSARY tests. There is no fully sufficient test of the validity of the solution for the actual problem to be solved. For most physical problems, however, there is an analytic solution available for a simplified version of the problem. You may wish to solve a problem (acoustic propagation, fluid displacement, tracer dispersion, whatever) across naturally heterogeneous media. You can create analytic solutions for the same problems but applied to layered homogeneous media (for example), and this then allows one (necessary) test of the numerical formulation, which directly relates the discretised numerical solution to the continuous equations. Alternatively, you may have a real problem involving arbitrarily varying boundary conditions. You would then always at least ensure that the solution is valid for a fixed functional variation of boundary conditions with known solution. Finite element model of stress-strain? You test the formulation against a simple mechanical problem with known solution before applying it to the real problem you are trying to solve.

        Even with N-S, as I have mentioned, there are analytic solutions available for certain steady-state problems. There are also valid analytic bounds that can be put on N-S solutions at certain length and time scales. Research labs and universities might be happy to use untested code, or even worse, to use results from code which is known to be in error when such tests are applied. Engineers cannot afford to be quite so imprudent.

      • “There are a number of steps in converting a set of continuous equations into a numerical model”
        Yes, of course. And they have been developing them for essentially the same geometry for over forty years. The GCMs are outgrowths of, and often use the same code as, numerical weather forecasting programs. These are used of course by met offices, military etc. They are not programs tapped out by climate scientists.

        The programs use a regular grid topology, with elements typically hexahedra, 100–200 km in the horizontal and 30–40 layers in the vertical, so the cells are long and thin. They are mapped to follow terrain. Grid and timestep size are highly constrained by the Courant stability condition, because of the need to resolve in time whatever sound waves can be represented on the grid (horizontally). For solution, they focus on the pressure and velocity fields (the dynamical core), which have the fastest timescales. In CAM3, they use finite differences in the vertical and a spectral method in the horizontal, for speed. The spectral method requires inversion of a matrix with rank equal to the number of harmonics used.
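
        As a rough back-of-envelope illustration of how hard that Courant condition bites (numbers of my choosing, not any particular model's):

            # Rough CFL limit for horizontally propagating sound waves on a 100 km grid.
            # Illustrative numbers only; real model constraints are more involved.
            dx = 100e3          # horizontal grid spacing, m
            c = 340.0           # sound speed, m/s
            dt_max = dx / c     # explicit limit for Courant number <= 1
            print(f"dt_max ~ {dt_max:.0f} s (~{dt_max / 60:.1f} minutes)")

        So even a 100 km grid caps an explicit scheme that carries sound waves at roughly five-minute timesteps, which is one reason hydrostatic and semi-implicit formulations are attractive.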

        I think that if you verify the solution using a different differentiation procedure, you are testing whether both procedures are solving the PDE, because that is what they have in common. But in any case, in CFD at least, that is not commonly what is done. Instead, various local diagnostics are examined – flow at sharp corners, boundary-layer detachment, vortex shedding, stagnation points, etc. These are not analytic solutions, but they are circumstances under which some subset of terms is dominant. I'm not sure what the corresponding diagnostics in GCMs would be, but I'm sure they have some.

    • Wall modeling, ‘laws of the wall’ work if the present flow of interest corresponds to both the fluid and the flow structure on which the empirical description is based. The requirement that the fluid correspond can be eased off a bit if the empirical description has correctly accounted for fluid effects; i.e., the description uses only fluid properties, and the controlling dissipative phenomena have been correctly captured. Surface curvature (concave vs. convex), the microstructure of the interface of the surface bounding the flow (wall roughness), large-scale flow-channel geometry effects (abrupt expansion, abrupt contraction, smoothly converging, smoothly diverging, inter-connections between embedded solid structures), solid stationary vs. compliant fluid interfaces, and forced vs. natural vs. mixed convection (as in natural circulation), among others, all require different ‘laws’. This holds even for very high Reynolds number flows; see the Superpipe experiments.

      Such ‘laws’ are not laws.

      The case of single-phase flow is in much better shape than that of two- or multi-phase flows, even when phase change is not an issue. Then there are flows that involve phase change, both in the bulk mixture and at a wall. There are no ‘laws’ for such flows, only EWAG correlations for specific fluids, specific flow-channel geometries, specific types of mixtures, and specific wall heating, or cooling, states.

      • “Wall modeling, ‘laws of the wall’ work if the present flow of interest corresponds to both the fluid and the flow structure on which the empirical description is based.”
        Yes. And the kinds of surfaces encountered in atmospheric modelling are not infinite, and they have been studied for decades – particularly the ocean, with evaporation, heat, and gas exchange.

    • 1. Hydrostatic equilibrium is the vertical momentum balance equation. The forces associated with vertical acceleration and viscous shear are assumed negligible. That is perfectly testable.
      2. Per kribaez: it is testable, and it is not negligible.
      3. When proven not negligible, use updraft modelling, as state-of-the-art NWP forecasting models do at 2.5 km horizontal resolution.
      4. Of course this was not done here – see point 1: the forces are simply assumed negligible.

    • “The forces associated with vertical acceleration and viscous shear are assumed negligible.”

      This sounds like a world without cold fronts or thunderstorms.

      I like cold fronts and thunderstorms.

      • I like SSW’s. They certainly shake things up, as do changes in the jet stream

        tonyb

      • “This sounds like a world without cold fronts or thunderstorms.”
        They have specific models for “deep convection” – updrafts etc.

      • Nick

        I am not sure that SSWs come into that category. Thunderstorms also differ in their effects, as do tornadoes, and so on.

        How do you include random numbers of each type, when each is itself of differing intensity?

        Tonyb

      • Nick

        Speaking of tornadoes, the UK has more for its land area than any other country, although of course they are not as severe as those in Tornado Alley.

        http://www.manchester.ac.uk/discover/news/new-map-of-uk-tornadoes-produced

        I do not recall seeing these mentioned in the recent output from the Met Office, which is now modelling regional impacts.

        Tonyb

      • Tony,
        From that Sec 4.1:
        “The scheme is based on a plume ensemble approach where it is assumed that an ensemble of convective scale updrafts (and the associated saturated downdrafts) may exist whenever the atmosphere is conditionally unstable in the lower troposphere.”
        They don’t model individual plumes. They extract the average effects that matter on a GCM scale – entrainment etc. And yes, GCMs won’t model individual tornadoes.

      • “This sounds like a world without cold fronts or thunderstorms.”
        They have specific models for “deep convection” – updrafts etc.

        Yes, they have to, because for cold fronts or thunderstorms, the hydrostatic approximation is invalid.

        As I understand it, most GCMs currently are based on hydrostatic assumptions but most future plans are for non-hydrostatic models.

        Maybe that would help some of the failings but not the limits to prediction because the basic chaotic equations still remain.

      • “most future plans are for non-hydrostatic models”
        They won’t reinstate the acceleration term. The main reason for the hydrostatic assumption is that it overcomes the limit (Courant) on time step vs horizontal grid – if the equation allows sound waves, it has to resolve them in time, else the distorted version will gain energy and blow up.

  9. Dan Hughes wrote:
    “In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.”

    Dan, you don’t know what you’re talking about. Read:

    “Description of the NCAR Community Atmosphere Model (CAM 3.0),” NCAR Technical Note NCAR/TN–464+STR, June 2004.
    http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

  10. @author
    ‘It is important to note that it is known that the numerical solution methods used in GCM computer codes have not yet been shown to converge to the solutions of the continuous equations.’

    Interesting… is there any reference to peer-reviewed papers demonstrating this?
    Thanks.

    • Lower order effects related to the discrete approximations are known to be present in current-generation GCMs.

      One of the more egregious aspects, one that explicitly flies in the face of the Laws of Physics, is the parameterizations that are explicit functions of the size of the discrete spatial increments. Such dependence will never be encountered in fundamental laws of material behavior.

      Importantly, such terms introduce lower-order effects into the numerical solution methods. As the size of the discrete increments change, these terms change.

      As the spatial grids are refined, for example, the topology of the various interfaces within the solution domain changes, along with various spatial extrapolations associated with parameterizations.

      The theoretical order of the numerical solution methods cannot be attained so long as these lower-order effects are present.

      • Dan Hughes wrote:
        “One of the more egregious aspects, one that explicitly flies in the face of the Laws of Physics, is the parameterizations that are explicit functions of the size of the discrete spatial increments. Such dependence will never be encountered in fundamental laws of material behavior.”

        Ohm’s law is a parametrization. So is Young’s modulus.

        GCM parametrizations aren’t ideal, of course. But they are necessary to make the problem tractable. There simply isn’t enough computing power, by far, to handle all the scales of the problem necessary for a full physics treatment, from microscopic to macroscopic.

        So modelers make approximations. All modelers do this, of course, even engineering modelers. But modeling climate is a far harder problem than any engineering problem; it’s the most difficult calculation science has ever attempted.

        As Stephen Schneider used to say, there is no escaping this uncertainty on the time scale necessary if we are to avoid climate change, so we will have to make decisions about what to do, or not do, about the CO2 problem in the face of considerable uncertainty. We make lots of decisions with uncertain or incomplete information, but this one has the most riding on it.

      • There simply isn’t enough computing power, by far, to handle all the scales of the problem necessary for a full physics treatment, from microscopic to macroscopic.

        This is another misconception – the problem isn’t compute power or resolution. The problem is the non-linearity of the equations.

      • the problem isn’t compute power or resolution

        It is at budget review time.

        As Stephen Schneider used to say, there is no escaping this uncertainty on the time scale necessary if we are to avoid climate change, so we will have to make decisions about what to do, or not do, about the CO2 problem in the face of considerable uncertainty. We make lots of decisions with uncertain or incomplete information, but this one has the most riding on it.

        The problem, David, is that when you dig, all of the evidence falls back to the hypothesized effect of increasing CO2.
        What we lack is a quantification of its actualized change in temperature. I can go out and point an IR thermometer at my asphalt driveway and my roof, and point out more forcing than the surface CO2 over my yard produces.
        I have also calculated how much the temperature goes up and down as the length of day changes, and that sensitivity is less than 0.02 °F per W/m² outside the tropics.

        “The problem, David, is that when you dig, all of the evidence falls back to the hypothesized effect of increasing CO2.
        What we lack is a quantification of its actualized change in temperature.”

        Um, have you ever read a climate textbook?

      • David Appell said Ohm’s law is a parametrization. So is Young’s modulus.

        No, Young's modulus is a property of materials.

        Ohm's law is one of the basic equations used in the analysis of electrical circuits. I assume you are referring to the resistance as being a parameter. Nope again: the resistance is a property of materials.

        Ohm’s law is a model that is applicable to materials that exhibit a linear relationship between current and voltage. There are, however, components of electrical circuits which do not obey Ohm’s law; that is, their relationship between current and voltage (their I–V curve) is nonlinear (or non-ohmic).

      • The problem is the non-linearity of the equations.

        This is a misconception. “Non-linearity” isn’t a problem; there are many easily solvable non-linear equations. Here: find the root of x^2 = 0. You can do it analytically or numerically, but either way it’s quite easy.
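
        For what it's worth, here is that example done numerically (a trivial sketch; Newton's method converges even though the root is degenerate):

            # Newton's method on the non-linear equation x**2 = 0.
            # Convergence is only linear here because the root is a double root,
            # but it converges all the same: non-linearity per se is not the obstacle.
            x = 1.0
            for _ in range(50):
                x -= (x * x) / (2.0 * x)   # Newton step: x - f(x)/f'(x), i.e. x/2
            print(x)                       # ~9e-16 after 50 halvings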

        No, Young's modulus is a property of materials.

        Young's Modulus is a parameterization of an isotropic material property. Go deeper, and you get the anisotropic properties. Then you get them as a function of time. Then you look at how elasticity also varies microscopically: at grain boundaries, in high-stress regimes, and so on. Young's Modulus is a simplification of all of these nitty-gritty details.

        The Ideal Gas Law is another parameterization, coming from treating gas particles as undergoing elastic collisions. (Which is why it breaks down as you get close to phase transitions.)

        And, yes, Ohm’s Law is another one, a simplification of the laws surrounding electron scattering at quantum mechanical levels. You can get there from the Drude Model, if I recall right, which is itself a simplification.

        Dan, yes, Young's Modulus is about a material (obviously), and it's a parametrization, as Benjamin just explained very well. And so are the ideal gas law, Ohm's law, and the law of supply and demand – simple summaries of the complex collective interactions of large systems.

      • Dan,

        True believers will attempt to deny, divert, and confuse by any means possible.

        They cannot bring themselves to accept the IPCC statement that prediction of future climate states is impossible.

        Cheers.

      • And look at the very next sentence the IPCC wrote:
        “Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.”

        https://www.ipcc.ch/ipccreports/tar/wg1/501.htm

        Yes, there is the possibility of abrupt change – and scientists warn about that. (There have been two National Academy of Sciences reports on it in the last two decades.) But if you look at the climate of the Holocene, it looks quite smooth and predictable. If you look at the ice ages of the Quaternary, there are clear patterns that dominate the progression of climate. Milankovitch cycles don't explain them exactly, but that's where you start, and you get a useful fit to the actual climate history.

        We may never be able to predict something like the Younger Dryas (but maybe), or a rapid change due to, say, melting clathrates. But with big enough ensembles you get a good sense of which final climate states are most probable, and that may be the best climate models can offer. Then human intelligence has to take all the projections in and think and decide and plan. Maybe you plan for the worst-case scenario. Maybe you choose to take a risk and plan not for it but for the maximum of the probability distribution. But there's still uncertainty involved – that's just reality. That doesn't mean you simply and smugly reject computer models – instead you use them with their strengths and limitations in mind.

      • David Appell,

        The main limitation of climate models seems to be that they cannot predict future climate states. You seem to be implying that they have some other unspecified utility.

        You tell me I can’t simply reject the climate models just because they have no demonstrated utility. OK, I won’t reject them. I’ll just ignore them. They still cannot predict the future climate.

        Saying you can get a good sense of the future by averaging more and more incorrect outputs is akin to saying that future climate states can be usefully predicted.

        You seem to agree that sudden changes can’t be usefully predicted. The logical course of action would seem to be to assume that things will go on more or less as usual, with occasional moments of terror and panic when confronted by sudden unpredictable adverse changes.

        This seems to be what history shows. I assume it will continue.

        Cheers.

      • “Saying you can get a good sense of the future by averaging more and more incorrect outputs is akin to saying that future climate states can be usefully predicted.”

        The outputs aren’t “incorrect.” They’re the outputs of the model. If you knew which were “correct” and which were “incorrect,” you wouldn’t need a model, would you?

        The probability distribution of future climate states can be reasonably calculated. One can then decide how to interpret that distribution and what to do about it. Sure, maybe there is a methane bomb and temperatures skyrocket in a few decades. If you dislike that possibility, do your best to minimize the chance of a methane bomb.

      • The point is, you left out an important part of the IPCC’s statement.

      • David Appell,

        If you have 130 models producing 130 different results, only 1 can be correct, at most. If running each model millions of times produces a plethora of different results (and it does – the atmosphere behaves chaotically), then again, only one can be correct.

        Averaging all the demonstrably incorrect results helps you not at all. It is not possible to forecast future climate states. The IPCC says so, and so do I.

        As to not quoting the IPCC in full, I assume most people can read it for themselves. It seems likely that many did not see where the IPCC stated that it is not possible to predict future climate states.

        The IPCC states –

        “Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential.”

        Unfortunately, the “new methods of model diagnosis” are required because the present methods don't actually work. Even more unfortunately, attempts at new methods produce results just as dismal as the old ones. Maybe you can come up with a new method which will diagnose the problem with models attempting to predict the unpredictable future (according to the IPCC – and me), but the diagnosis seems simple: the models produce many wrong answers, and if there is a correct answer, it is indistinguishable from the garbage surrounding it.

        Apologies if I failed to quote precisely the matter you wanted me to. You could have quoted it yourself, and saved me the trouble, I suppose.

        Maybe the answer to climate state predictive computer models is to examine chicken entrails instead – or consult a psychic, like the White House did.

        Cheers.

      • *ALL* model results will be incorrect. Because all models have errors, and besides, no one knows the future time series of GHG emissions, aerosol emissions, volcanic emissions and solar irradiance. You can’t predict future climate even in principle.

        However, you don’t need a “correct” prediction for models to be useful. Knowing the probability distribution of an ensemble is useful information, especially since essentially none of them give a reassuring result for any realistic RCP. It’s not necessary to know if warming in the year 2100 will be 3.0 C, or 3.1 C – no policy decisions hinge on it.

      • How can an ensemble of results, all of which you admit are wrong, provide any useful results? Do two wrongs make a right? Do 100 wrongs make a right?

      • Allen:

        1) Would you please specify the daily CO2, CH4, N2O etc human emissions for the next 100 years? Also, all the SO2 emissions of all volcanic eruptions, their latitudes, and the daily solar irradiance for the same time period. Thanks.

        2) George Box: “All models are wrong, but some are useful.” (He wasn’t just talking about climate models.)

      • How can an ensemble of results, all of which you admit are wrong, provide any useful results?

        Ahhh, man, these engineers said that this bridge could support 20 tons, when in reality, it can support 20.00812751234 tons.

        That model was completely useless! It totally didn’t get us in the ballpark of what we needed to know.

      • This is what scientists mean when they say “all models are wrong”.

        Every model, from the Law of Gravity to the Ideal Gas Law, is an approximation of reality. None of them are exactly correct. (And by “exact”, we mean exact. As in math, to an infinite number of decimal places). Every model is wrong.

        But many of them are still extremely useful while not being exactly correct. We don’t usually need a hundred billion billion digits of accuracy.

      • “That model was completely useless! It totally didn’t get us in the ballpark of what we needed to know.”

        +1

      • (Actually it was +1.07544856, but +1 is good enough for a blog comment.)

  11. @author
    ‘It is important to note that it is known that the numerical solution methods used in GCM computer codes have not yet been shown to converge to the solutions of the continuous equations.’

    When has this *ever* been shown for numerical solutions to PDEs?

    • You have to be joking. With the vast majority of engineering applications, it is a simple matter to apply the numerical solution to a problem which has a known analytic solution. This then allows a direct test of the process of conversion of the governing equation into its numerical form. As an example, you can consider the diffusivity equation for single-phase flow much loved by hydrologists. (This is a PDE involving differentials in space and time.) This can be solved analytically for a variety of flow conditions. So you solve it numerically with differing numerical schemes or different levels of grid refinement and compare the results.
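
      As a toy sketch of that kind of test (my own construction, illustrative only): the 1-D diffusivity equation u_t = D u_xx on [0,1], with u = 0 at both ends and u(x,0) = sin(pi*x), has the exact solution u = sin(pi*x) * exp(-pi^2 D t), so a scheme can be checked against it at several resolutions:

          import numpy as np

          # Verify an explicit scheme for u_t = D*u_xx against the exact solution
          # u(x,t) = sin(pi x) * exp(-pi^2 D t) on [0,1] with u = 0 at both ends.
          def solve(nx, D=1.0, t_end=0.1):
              x = np.linspace(0.0, 1.0, nx)
              dx = x[1] - x[0]
              dt = 0.25 * dx**2 / D              # safely inside the explicit stability limit
              nt = int(np.ceil(t_end / dt))
              dt = t_end / nt                    # land exactly on t_end
              u = np.sin(np.pi * x)
              for _ in range(nt):
                  u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
              return x, u

          for nx in (11, 21, 41):
              x, u = solve(nx)
              exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 0.1)
              print(nx, np.max(np.abs(u - exact)))   # error shrinks ~4x per grid halving

      If the error does not fall at the expected rate as the grid is refined, something between the continuous equation and the code is wrong.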

      Alternatively, to avoid a white-mouse test, many industries have set up more complex benchmark problems which do not have an analytic solution, but which are used by multiple code developers. Over a period of time, as different approaches yield the same answers, the true solution is well known and is available for comparative testing.

      Lastly, if there is no analytic solution available and no benchmark problem for comparison, you can always run an L2 convergence test. This involves running the same numerical scheme on successively finer grids to test that the solution is converging. This last is a necessary but not sufficient condition, since convergence does not necessarily mean you are converging to the correct solution. In the very few convergence tests of GCMs which appear in the literature, the GCMs fail even this weak test, because of the need to make arbitrary adjustments to the parameters used to define subgrid-scale behaviour. This should be telling us something.
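
      A minimal sketch of such an L2 self-convergence test, on the same toy diffusion problem as above (again illustrative; no analytic solution is consulted):

          import numpy as np

          def solve(nx, D=1.0, t_end=0.1):
              # Explicit scheme for u_t = D*u_xx on [0,1], u = 0 at ends, u0 = sin(pi x).
              x = np.linspace(0.0, 1.0, nx)
              dx = x[1] - x[0]
              dt = 0.25 * dx**2 / D
              nt = int(np.ceil(t_end / dt))
              dt = t_end / nt
              u = np.sin(np.pi * x)
              for _ in range(nt):
                  u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
              return u

          # L2 difference between successive grid levels, sampled at the coarse points
          # (each coarser grid's points are every 2nd point of the next finer grid).
          grids = [11, 21, 41, 81]
          sols = [solve(nx) for nx in grids]
          errs = [np.sqrt(np.mean((sols[i + 1][::2] - sols[i])**2)) for i in range(3)]
          orders = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
          print(errs)     # should shrink ~4x per refinement
          print(orders)   # observed order ~2 for this second-order scheme

      An observed order well below the theoretical one – or parameters that must be re-tuned at each grid level to keep the answer sensible – is precisely the warning sign described above.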

      • “So you solve it numerically with differing numerical schemes or different levels of grid refinement and compare the results.”

        You can do that. But simpler and more direct tests are:
        1. Check conservation of mass, momentum and energy. Analytic solutions are just got from applying those principles, but you can check directly.
        2. As with any equations, you can just check by substitution that the solution actually satisfies the equations.
        3. There are standard tests of an advective flow, like just letting it carry along a structure (a cone, say) and seeing what it does with it (a sketch follows below).

        Again, GCMs are just a kind of CFD. People have worked out this stuff.
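
        A sketch of the advection test in item 3, under a toy setup of my own (1-D, first-order upwind, periodic domain): carry a cone once around the domain and see what the scheme has done to it.

            import numpy as np

            # Standard advection test: advect a cone one full circuit around a
            # periodic domain with first-order upwind and inspect the damage.
            nx, c, L = 200, 1.0, 1.0
            dx = L / nx
            dt = 0.5 * dx / c                    # Courant number 0.5
            x = (np.arange(nx) + 0.5) * dx
            u = np.maximum(0.0, 1.0 - np.abs(x - 0.3) / 0.1)   # cone of width 0.2
            nsteps = int(round(L / (c * dt)))    # one full circuit
            for _ in range(nsteps):
                u = u - c * dt / dx * (u - np.roll(u, 1))      # upwind difference
            print(u.max())   # well below 1: upwind's numerical diffusion has flattened the cone

        The exact solution returns the cone unchanged, so whatever flattening or wiggling survives the circuit is pure scheme artefact – which is the point of the test.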

      • Nick,
        My response was intended to address the comment of David Appell wherein he implied that numerical solutions of PDEs were never, or hardly ever, tested against the continuous equations. I was pointing out, au contraire, that for the majority of physical problems, numerical solutions are indeed tested against the governing equations using an analytic solution for predefined boundary conditions – normally a simplified version of the actual problem to be solved. No engineer developing new code would dream of NOT testing that the code was solving correctly if an analytic solution is available (and it normally is for a simplified version of the problem to be solved). Even in the case of the N-S equations, there are a number of exact solutions for steady-state assumptions; there are also exact solutions under incompressible-flow assumptions. As yet there does not exist an analytic solution for non-steady-state flow of compressible fluids.

        Your response to me appears to be confused on several issues:-

        Analytic solutions are not derived by the application of conservation principles. They are derived by solving the continuous equations for specified initial and boundary conditions. If the continuous equations expressly conserve something then the analytic solution will conserve the same property, but not otherwise. Whether the numerical solution conserves those properties is a function of the numerical formulation. I think that you are talking (instead) about testing whether aggregate properties hold in the model.

        “As with any equations, you can just check by substitution that the solution actually satisfies the equations.” You can only do this if there is an analytic solution available.

      • “Analytic solutions are not derived by the application of conservation principles. They are derived by solving the continuous equations for specified initial and boundary conditions. “
        And what equations do you have in mind that are not expressing conservation principles?

        “You can only do this if there is an analytic solution available.”
        No, you have a differential equation (discretised), and a postulated solution. Differentiate the solution (discretely) as required and see if it satisfies the de.

      • Nick, The differential equations governing the eddy viscosity don’t express any “law of physics” but are based on assumed relationships and fitting data. They are leaps of faith in a very real sense and their developers say so.

      • dpy6629,

        They are leaps of faith in a very real sense and their developers say so.

        One wonders where that leaves making changes to the radiative properties of the atmosphere in a very real sense.

      • Nick,
        You suggested:-“No, you have a differential equation (discretised), and a postulated solution. Differentiate the solution (discretely) as required and see if it satisfies the de.”

        The reality is that you don't have a single differential equation. You have a set of equations, the number of which depends on your spatial discretization. So, what do you want to do? Pick a few grid cells at random and a few timesteps at random? Plug in the local variables to see whether they are a valid solution to the discretized equations? Well, you can do this, I guess, but if you use the same quadrature as in the numerical scheme, what do you think you get back and what do you think it tells you? There are easier ways to test whether your solver is working, if that is your aim here. Your test, however, tells you nothing about whether your numerical formulation offers a valid solution to the continuous equations.

      • David,
        “The differential equations governing the eddy viscosity don’t express any “law of physics””
        We were talking about systems with analytical solutions that might be tested.

        kri,
        “but if you use the same quadrature as in the numerical scheme, what do you think you get back and what do you think it tells you”
        Well, use a different one. That will tell you something. And if you're handling data on the scale of solving a PDE, it's no real challenge to put the solution back into the system of equations and get a sum-of-squares error on the grid for some set of times.
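
        As a toy illustration of both halves of this exchange (my own construction – a forward-Euler/central-space heat-equation solver, nothing from a GCM): substituting the stored solution back with the scheme's own differencing returns only round-off, while a different (centred) time difference returns a truncation-level residual, which is closer to the quantity that says something about the continuous equation.

            import numpy as np

            # Solve u_t = D*u_xx with forward-Euler/central-space, storing every step.
            D, nx = 1.0, 41
            x = np.linspace(0.0, 1.0, nx)
            dx = x[1] - x[0]
            dt = 0.25 * dx**2 / D
            nt = 400
            U = np.zeros((nt + 1, nx))
            U[0] = np.sin(np.pi * x)
            for n in range(nt):
                U[n + 1] = U[n]
                U[n + 1, 1:-1] += D * dt / dx**2 * (U[n, 2:] - 2 * U[n, 1:-1] + U[n, :-2])

            uxx = (U[:, 2:] - 2 * U[:, 1:-1] + U[:, :-2]) / dx**2

            # (a) Same differencing as the scheme: residual is round-off --
            #     this only confirms the solver solved its own discrete equations.
            res_fwd = (U[1:, 1:-1] - U[:-1, 1:-1]) / dt - D * uxx[:-1]
            # (b) A different (centred) time difference: the residual now reflects
            #     truncation error, i.e. the gap to the continuous equation.
            res_ctr = (U[2:, 1:-1] - U[:-2, 1:-1]) / (2 * dt) - D * uxx[1:-1]

            print(np.sqrt(np.mean(res_fwd**2)))   # round-off level
            print(np.sqrt(np.mean(res_ctr**2)))   # truncation level -- small, but not zero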

        “No engineer developing new code would dream of NOT testing that the code was solving correctly if an analytic solution is available”

        Obviously.

        But in the real world, numerical solutions are calculated because there is no possibility of an analytic solution, or anything close to it. That’s certainly the case for climate. And if you are close to an idealized, textbook situation you’d first try perturbation theory anyway.

    • I believe kribaez to be right. A steady flat-plate boundary layer has an analytical solution. Maxwell's equations are linear, and you can prove numerical solutions converge to the analytic solution.

      For turbulent CFD the story is different. Recent results call into question the grid convergence of time-accurate large-eddy-simulation solutions. Without grid convergence, it gets pretty tough to define what an “accurate” solution is.

      • Heat flow is another pretty easy example. Yes, there are analytical solutions for many simple PDE systems.

        Navier-Stokes is a particularly difficult problem, which is why there are huge prizes associated with making progress on it. I'm not sure that we need a completely convergent solution of N-S for GCMs, though, so this is kind of a moot point.

  12. On my website “uClimate.com”, the logo is a butterfly. And the reason for this is simple, whereas the climate obeys all the laws of physics, because of the butterfly effect, it does so in a way that cannot be predicted.

    Or to be more precise, it can be predicted – but only in the sense that it is chaotic in its behaviour, and as such in many aspects it will behave in a way that appears to mean IT DOES NOT OBEY THE RULES OF “PHYSICS”.

    This is fundamentally why the academic culture fails when trying to understand climate. Because academia is taught a “deconstructional” approach whereby it is believed that a system can be totally described by the behaviour of small parts.

    In contrast, in the real world, most of the time, whilst it helps to know how small parts of the system work, the total system's behaviour usually has to be described using parameters that are different from those of a small section. So, for example, the behaviour of a patient in a hospital cannot be described by Newton's equations; even if Newton's equations dictate what we do, the complexity is such that they are irrelevant almost all of the time for any doctor seeing a patient.

    Likewise, the climate – yes, its behaviour is determined by “physics” – but we need to treat it at a system level, using the tools and techniques taught to real-world practitioners and not the theoretical claptrap taught in universities.

    • So you believe that a small perturbation could move the climate from its current state (and similar to what we’ve had for a few thousand years) to one that is radically different. But because we cannot predict the impact we should perturb away and not worry?

      • Steve

        In what way is today's climate state similar to the one we had for a few thousand years? That assumes the Roman Warm Period, followed by the Dark Ages cold era, followed by the MWP, followed by the LIA, never happened. Climate changes quite dramatically, but proxies are unable to pick up the changes because of their sampling methods and smoothing.

        tonyb

      • “small perturbation could move the climate from its current state (and similar to what we’ve had for a few thousand years) to one that is radically different”

        I find statements like this interesting.
        Can we judge which of our ‘perturbs’ would be good and which would be bad?
        I perturb little universes every time I tread through my yard.
        Perhaps one day when the models are perfected we will be able to produce only good perturbs and achieve a perpetual sustained climate based on recent acceptable history.
        That will be a sad day and our true end.

      • “But because we cannot predict the impact we should perturb away and not worry?”

        There is an asymmetry in your argument. There are two possible actions:
        1) Continue to perturb (keep emitting GHGs)
        2) Discontinue perturbing (stop emitting GHGs)

        If climate is unpredictable, then doing either 1) or 2) could lead to horrendous consequences, benign consequences, or no consequences. For all we know, our current perturbation may be the only thing preventing a catastrophic ice age, in which case doing 2) is not good.

        Your unstated assumption is that human non-interference is optimal. But nature as far as we know has no law that says that human interventions are bad things. In fact nature has no concept of bad/good…it just is.

      • Trevor Andrade wrote:
        “Your unstated assumption is that human non-interference is optimal.”

        The science — not an assumption — shows that organisms have difficulty adjusting to rapid climate change.

        From there it’s a matter of values: do you value cheap electricity more than human and other species’ well-being? If so, burn all the coal you want. OK if the present poor and future generations have to pay for what you’re creating because you only want cheap electricity? Burn away.

        It’s ultimately a moral question.

    • And the reason for this is simple, whereas the climate obeys all the laws of physics, because of the butterfly effect, it does so in a way that cannot be predicted.

      It can’t be predicted deterministically, but it can be predicted statistically. It’s like trying to predict the motion of a single molecule of gas in a parcel of air, versus trying to predict the motion of the whole parcel. The latter is much, much easier.
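
      A minimal sketch of that distinction, using the logistic map at r = 4 as a stand-in for a chaotic system (my choice of toy; nothing climate-specific):

          import numpy as np

          # Deterministic chaos: individual trajectories diverge from arbitrarily
          # close starting points, yet the long-run statistics agree.
          def orbit(x0, n=100000, burn=100):
              x = x0
              out = np.empty(n)
              for i in range(burn + n):
                  x = 4.0 * x * (1.0 - x)
                  if i >= burn:
                      out[i - burn] = x
              return out

          a = orbit(0.2)
          b = orbit(0.2 + 1e-12)         # nearly identical start
          print(abs(a[50] - b[50]))      # typically O(1): pointwise prediction is hopeless
          print(a.mean(), b.mean())      # both ~0.5: the statistics are reproducible
          print(a.std(), b.std())        # both ~0.354: same invariant distribution

      Pointwise the two orbits decorrelate completely, yet their long-run means and standard deviations agree to several digits, because both sample the same invariant distribution.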

      as such in many aspects will behave in a way that appears to mean IT DOES NOT OBEY THE RULES OF “PHYSICS”.

      Sorry; this is wrong. Butterflies, weather, and individual particles of air all appear to obey the rules of physics. Physics is sometimes chaotic.

      Because academia is taught a “deconstructional” approach whereby it is believed that a system can be totally described by the behaviour of small parts.

      By small parts and their interactions, yes. We are all made up of very large amounts of small parts (atoms) and their interactions. Physics has correctly nailed this one.

      • Benjamin,

        Physics is sometimes chaotic.

        A small semantic nit if I may. Some physical *systems* exhibit chaotic behavior:

        https://upload.wikimedia.org/wikipedia/commons/4/45/Double-compound-pendulum.gif

        … and some do not:

        https://en.wikipedia.org/wiki/Pendulum#/media/File:Oscillating_pendulum.gif

      • brandonrgates,

        You said it better. :-p

        If I were to rephrase my comment, I’d say “the physics of some systems is chaotic”.

      • I think your rephrasing is much less “misleading”. :-)

      • “It can’t be predicted deterministically, but it can be predicted statistically.”

        When someday I see a proof of this common, yet nonsensical, statement, I will quit laughing.

        It's time for science in general to admit that the de-constructionist approach to explaining the world is essentially useless.

        We keep waiting for that computer that…..

        Time to bring back metaphysics.

        It's time for science in general to admit that the de-constructionist approach to explaining the world is essentially useless.

        …yeah. I’m hoping I misunderstand you, as it appears that you’re criticizing how science has been conducted for centuries. And science has been extremely useful during the last few centuries, so…

      • “And science has been extremely useful during the last few centuries, so…”
        Medicine maybe.
        Other than that science’s main legacy seems to be Malthusian darwinism and the cut throat capitalistic exploitation of the masses that we call modern society.
        On the other hand it completely destroyed the notion of teleology and the art of human living in compatibility with natural law and the environment.
        I just saw a magazine on the newsstand today that purports to explain human relationships by means of science.
        Disgusting.

        Pretty good for a right winger, eh?

        http://www.assisi-with-ingrid.com/art/landscape.jpg

      • Obvious model in action. Not anything natural exhibited. You must be presenting the view of someone who also supports AGW.

      • Medicine maybe.
        Other than that science’s main legacy seems to be Malthusian darwinism and the cut throat capitalistic exploitation of the masses that we call modern society.

        Medicine, electricity, the internal combustion engine, nuclear power, satellites, computers, refrigeration… yeah, science has been pretty useless the last few centuries. *cough*.

        There’s incredible irony in someone using a computer to tell me that science has been pretty useless.

      • “There’s incredible irony in someone using a computer to tell me that science has been pretty useless.”

        Believe me, if I had never seen a computer and, certainly, had never ended up having to sit behind one for 8 hours a day, my life would be infinitely better.

        The categories we are speaking in are diverging.

        What I am saying is that the rise of scientific naturalism was a foul moment in history. As a philosophy (actually a religion) it stated that the entire universe could be comprehended by taking matter apart and reassembling it, studying the physical forces involved.

        The disaster is that this ideology became a monopoly in the public sphere. It drove out competing ways of understanding our world and helped bring about the brutal economic disasters of English capitalism, the communism that arose in reaction to it, and so on. It destroyed metaphysics and teleology. It destroyed the notion of living in harmony with the natural order and nature itself.

        To a large extent the climate crowd is in a similar throwback mode to what I am discussing. They also realize that man has lost his way in living harmoniously with his environment, economically, etc….
        But they base their solutions on the lie of computer modelling, AGW, etc… rather than getting to the heart of the matter, which is to see that the drift towards a strictly materialistic view of the world was a disaster.

      • As a philosophy (actually a religion) it stated that the entire universe could be comprehended by taking matter apart and reassembling it, studying the physical forces involved.

        Ehh, scientists don’t normally follow that view, strictly speaking. There are two distinctions to be made:

        1) Matter is not the only thing that matters (heh). Energy does, too! Or, more properly, bosons and fermions.

        2) Many scientists don’t necessarily believe that the entire universe can be comprehended by methodological naturalism, only that it’s the best tool we currently have for studying the natural world.
        In other words, there are no other good tools for studying the universe right now… but we can’t discount finding new tools in the future. Though, probably “science” would just come to encompass those tools as well.

        With those points in mind, yes, science and the scientific method have been absolutely fundamental in improving standards of living over the last few centuries.

      • That’s pretty good, Mosher.
        Its the first time anyone has attempted to address the issue for me, personally.

        I could still care less about global warming but it seems like there might be a basis for attempting a theory around computing statistics.

        Numerical errors and their highly correlated structure are another matter, but, thanks again.

  13. Even if you had a really perfect model there would be a problem that arises especially from it being deterministic and perfect.

    The climate system being somewhat chaotic, to make a forecast you have to calculate the results for a representative sample of micro-states compatible with your initial conditions. Forecasting only works if you get a distribution with a sharp maximum. If the distribution is spread out over a somewhat broad attractor (as is to be expected), you are as clever as before, as this is not about forecasting a distribution but about forecasting a single event.

    If your “probable” outcomes differ by, say, 2 K, what policy would you want to recommend? Build this or that kind of mega-scale infrastructure at this or that place (taking for granted for the moment that the technology, still under development, will actually work, that resources will be available, and setting aside the risk of political hiccups like WW III)? Start building now at maximum speed, or better slowly, as better technology and better information about the future climate become available? What do you do when changes in forecasts or technology over time make previous efforts obsolete?

  14. A fundamental part of computational modelling is understanding the system that's being modelled. It's clear that GCMs are trying to solve a set of equations that describe a physical system: our climate. We use GCMs because we can't easily probe how this system responds to changes without such tools. Of course, one could argue about whether it would be better to use simpler models with higher resolutions, or more detailed models with lower resolution, or some other type of model, but that's slightly beside the point. What's important, though, is that those who use GCMs, and those who critique them, have a good understanding of the basics of the system being studied. Maybe the author of this post could illustrate their understanding by describing the basics of the enhanced greenhouse effect.

    • (a) CO2 is a GHG. (b) As the concentration of CO2 increases in Earth’s atmosphere, assuming all other physical phenomena and processes remain constant at the initial states, the planet will warm. (c) Eventually over time, a balance between the incoming and outgoing radiative energy at the TOA will obtain.

      The hypothesis (b) contains an assumption that is a priori known to be an incorrect characterization of the physical domain to which the hypothesis is applied. All other physical phenomena and processes never remain constant at the initial states.

      The hypothesis (c) assumes that all the phenomena and processes occurring within the physical domain, many of which directly and indirectly affect radiative energy transport, will likewise attain balance. So long as changes that affect radiative energy transport occur within and between Earth’s climate systems, the state at the TOA will be affected.

      Earth’s climate systems are open with respect to energy. Additionally, physical phenomena and processes, driven by (1) the net energy that reaches the surface, (2) redistribution of energy content already within the systems, and (3) activities of human kind, directly affect the radiative energy balance from which the hypothesis was developed.

      I’m a potentialist.

    • Started reasonably well, and then went off the rails somewhat. I don't think (b) necessarily carries the assumption of “all else being equal”. Alternatively, you can add a third point: that as the temperature changes, this will initiate feedbacks that either enhance or diminish the overall response. I don't really have any idea what a potentialist is.

      To be fair, it wasn't an awful response, and the reason I asked the question is that a critique of a scientific tool – like a GCM – does require a good understanding of the system being studied. As others have pointed out, your critique appears to be partly a strawman (you're criticising what's presented to the public, which – by design – is much simpler than what is discussed amongst experts), and you appear to be applying conditions that may be appropriate for areas in which you have expertise, but may not be appropriate in this case.

    • This response by ATTP contains a grain of truth and also some arrogance. While not explicit, the implication is that simplistic “explanations”, often called “physical understanding”, have real scientific value. If I had a nickel for every time I've heard an invocation of specialists' intuition or understanding that turned out to be wrong, I'd be a wealthy man. Of course physicists invoke physical understanding. Engineers invoke engineering judgment. Doctors invoke medical experience, or their selective memory of past cases. If it can't be replicated or quantified, it's not really science.

    • Ken R, the only real oversight in Dan's post is the lack of attention to subgrid models. But I agree he's dealing at the somewhat superficial level that science communicators have chosen for their misleading apologetics. There are many deeper levels that deserve attention too.

  15. Physical, mundane engineering is based on the laws of physics. Complicated, but used in everyday life. It’s a much, much more mature science than climate modeling.

    However, when an engineer designs a bridge, they “overengineer” it, because they really aren’t sure what the loads and initial conditions are, or what they’re going to be in twenty years. If the bridge is supposed to support a hundred tons, the actual design load is often several times that – and sometimes, the bridge still breaks.

    On the other hand, a number of pseudosciences have been “based on the laws of physics”. They just weren't based on the correct selection of the laws of physics.

    • In engineering Hammurabi rules -sleeping
      under a bridge of yr own making, so to speak.
      In pseudo-science, models, words diffused,
      adjustments, explanations, post hoc, ad hoc
      machinations.

    • Physical, mundane engineering is based on the laws of physics. Complicated, but used in everyday life. It’s a much, much more mature science than climate modeling.

      Depends on the branch of engineering. I’ve worked on materials engineering projects that were considerably less mature than climate modeling.

  16. Interesting write-up but I think your premise is a little misguided. As others have noted, in the technical literature these things are discussed, as you want. I think you’re confusing how these things are presented in public-facing broadbrush interactions with how discussions take place between experts in the field.

    In my experience, the sole false/incomplete focus on The Laws of Physics is not encountered in engineering. Instead, the actual equations that constitute the model are presented.

    I’ve seen numerous public-facing discussions with engineers and have never seen any presentation of equations. Typically they have indeed talked about ‘The Laws of Physics’. Perhaps a case where the depth of information you seek out in your own discipline is not the same as that you’re exposed to in other disciplines with which you are less familiar?

  17. Dan Hughes points out in his article “The articles from the public press that contain such statements sometimes allude to other aspects of the complete picture such as the parameterizations that are necessarily a part of the models. But generally such public statements always present an overly simplistic picture relative to the actual characterizations and status of climate-change modeling”.
    Example: https://www.sciencedaily.com/releases/2016/09/160907160628.htm
    The Rensselaer Polytechnic Institute researchers go on to claim “To tackle that challenge, the project will forecast future weather conditions for 2,000 lakes over the next 90 years using high-resolution weather forecasting models and projections of climate change provided by the Intergovernmental Panel on Climate Change.”
    So a model for lakes, a model for weather, and a model for climate will predict the future. We (the public) hope so. That’s why we hire experts. The public doesn’t create models or publish peer review papers but that doesn’t disqualify them from questioning how their money is spent.

    • I agree with much of what you say JMH and your above comment IMO hits the nail on the head. Demagogy is no substitute for genuine scientific endeavour.

        Hi Peter. Dan Hughes's opinion in his article – “These statements present no actual information. The only possible information content is implicit, and that implicit information is at best a massive mis-characterization of GCMs, and at worst disingenuous (dishonest, insincere, deceitful, misleading, devious).” – seemed to point in the same direction. Some ‘form’ of the science held near and dear at CE is then paraded before the public as ‘proof’, by the demagogues, that their goals must be supported. (I haven't seen Leonardo DiCaprio's new movie. I'm waiting for it to come to cable.)

  18. My checkered career goes through the Navier-Stokes equations, for which “stability plus consistency imply convergence” is not true in 3 dimensions.

    In 3-D, flows go to shorter and shorter scales, meaning that no resolution is adequate to express the flow. You need the short-scale flows, though, because they act back on the large-scale flows as a sort of ersatz viscosity.

    And in particular, reducing the time step and the space resolution does not converge to the true solution, because the true solution always has finer-scale structure.

    The models have a knob called “effective viscosity” which is not a fundamental law of physics, or any law of physics. It’s a knob used in curve fitting, called tuning the model.

    In 2-D, flows do not go to shorter scales, and numerical solutions work. Indeed, conservation of vorticity is one method of solution for these flows.

    The difference is that in 3-D vortices can kink and break up, and in 2-D they can't.

    Anyway the absence of awareness of this feature of the Navier Stokes equations was a very early indicator that there’s no adult peer review in climate science.

    The other early indicator was the violation of a mathematical law: you can't distinguish a trend from a long cycle with data short compared to the cycle. I think climate science still doesn't recognize this one either.

    • You have commented on some of the properties of NS equations before and to my knowledge no-one from the AGW team has ever responded. IMO the underlying assumptions behind the GCMs need to be discussed more openly, and any resulting uncertainties explained in conjunction with any conclusions that are to be made.

      • You have commented on some of the properties of NS equations before and to my knowledge no-one from the AGW team has ever responded.

        That’s cause the discussion happens in the scientific literature and at conferences. 99.99% of climate scientists will never bother to comment on one of these threads.

        If you want to genuinely be part of the debate, you have to get educated, do research, and publish.

      • Peter,

        The IPCC certainly accepts that non-linearity precludes the possibility of forecasting future climate states.

        IPCC –

        “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”

        “Climate scientists” refuse to accept the IPCC consensus. I don’t know why. I suppose you have to keep believing that “you can do what’s never been done, and you can win what’s never been won”, if it keeps the grant funds flowing.

        The AGW true believers just keep trying to change the subject. They’ve been quite successful – how many people realise that the IPCC said that climate prediction is impossible?

        Cheers.

      • That’s cause the discussion happens in the scientific literature and at conferences. 99.99% of climate scientists will never bother to comment on one of these threads.

        If you want to genuinely be part of the debate, you have to get educated, do research, and publish.

        I think, rather, it’s that to a physicist studying the Navier Stokes equation and numerical methods, the Navier Stokes equation is interesting.

        To a climate scientist, the Navier Stokes equation is an obstacle. Whatever gets past it is okay.

        That goes for a lot of the physics and statistics in climate science.

        I think it also bespeaks an absence of curiosity.

    • In addition, your last comment resonates with me because IMO many people from both sides of the AGW debate place too much store in short term data movements.

    • > Anyway the absence of awareness of this feature of the Navier Stokes equations was a very early indicator that there’s no adult peer review in climate science.

      Searching for “navier-stokes” and “climate” gives me 23K hits.

      Have you checked before making these wild allegations, RH?

    • rhhardin ==> Thank you for your commentary. I think that you are right that N-S is a stumbling block for CliSci modelers, which they struggle to “circumvent” rather than acknowledge the problems it presents. I touch on this in my series at WUWT on Chaos and Climate.

  19. “Important Sources from Engineered Equipment Models ”

    Do you mean by this analog circuits modelling a subsystem?

    • I'm thinking that as the GCMs mature, the sources of important GHGs will be modeled in increasing detail, including modeling of the availability and consumption rates of the natural-resource sources. This is of course already underway for some sources. Modeling of the source from electricity production, for example, would require representation of the various kinds of fossil-fueled plants. The same applies to transportation. I think these will initially be generally algebraic models. It is my understanding that the RCPs are presently used instead.

  20. There is a distinction to make.

    Fluid flow in the atmosphere is governed by the differential equations of motion, which are mostly* non-linear and unpredictable. Fluid flow determines winds, clouds, storms, and precipitation, and so determines local temperature.

    OTOH, radiative forcing from increasing CO2 is, in the global mean, mostly stable and predictable. Radiance is determined by the arrangement of clouds and temperature, but the effect of changing RF from additional CO2 is mostly of a positive sign and of similar scale regardless of the weather below.

    ( Here is a crude calculation of RF change from 2xCO2 for 12 monthly snapshots of CFS profiles using the CRM radiative model. The absolute value may be off because of surface albedo choices, but in the global mean even the effect of seasonal variation does not change RF very much – GHG RF change appears stable )
    https://turbulenteddies.files.wordpress.com/2015/03/rf_figure2.png

    So, global average temperature increase appears to be predictable.
    Changes in winds, clouds, storms, precipitation appear to be unpredictable.

    The IPCC wants to have it both ways with this, rightly asserting on one hand:
    https://wattsupwiththat.files.wordpress.com/2015/02/ipcc-models-predict-future.png
    but persisting with predictions about precipitation, droughts, storms, tropical cyclones, heat waves, etc.

    * Certain aspects of fluid flow are predictable. The pressure exerted by the earth's surface, namely mountains and ocean basins, seems unlikely to change much, so the channeling of ridges and troughs to their statistically preferred locations would seem likely to continue. The gradient of net radiance which determines the thermal wind is established by the shape and orbit of the earth, which is also unlikely to change much (for centuries, anyway). So the general features (the presence of jet streams in each hemisphere and the effects of orography on waves) are likely to persist.

    • Presumably you are not able to find the full quote using Google???

      “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. ”

      https://www.ipcc.ch/ipccreports/tar/wg1/501.htm

      • I don't understand. How can one predict the probability distribution of the system's future possible states when one isn't able to predict future climate states in the first place?

      • Simple Allen. They take an average of the guesses. This is how we can confidently predict that every roll of the dice will result in 3.

      • This is how we can confidently predict that every roll of the dice will result in 3.

        No – they take the ensemble of the results.

        This is like rolling a fair die 1,000 times, then showing that you have an equal chance of rolling any of 1–6.
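
        In code, the die version of the argument is a trivial sketch:

            import numpy as np

            # Any single roll is unpredictable; the distribution over many rolls is not.
            rng = np.random.default_rng(0)
            rolls = rng.integers(1, 7, size=1000)
            counts = np.bincount(rolls, minlength=7)[1:]
            print(counts / 1000.0)   # each face ~1/6, though no individual roll was predictable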

      • Problem being that the
        “probability distribution of the system’s future possible states”
        isn’t predictable either.

        As but one example, ENSO events vary.
        By the record, the decadal frequency of ENSO events also varies.
        It's not as clear, but I'm betting that the centennial and millennial frequencies of ENSO events vary as well.

      • “No – they take the ensemble of the results.”

        Here’s what that looks like:
        https://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2014/FigBox14.2-1.jpg

        Blocking is pretty significant, especially for prolonged floods in one region and droughts in another, or cold in one region and heat in another.

        This alone should be enough to convince you that climate ( these are thirty years of events ) is not predictable.

      • TE

        You may remember my article here a year or two ago where I graphed annual, decadal and 50-year averages for extended CET.

        https://wattsupwiththat.com/2013/08/16/historic-variations-in-temperature-number-four-the-hockey-stick/

        See figure 1 in particular. A little further down it is overlaid against the known glacier advances and retreats over the last thousand years.

        Climate is not predictable and swings considerably from one state to another with surprising frequency.

        Blocking events that make for prolonged floods or droughts etc. are well described in John Kington’s recent book ‘Climate and Weather’. Kington is from CRU and a contemporary of Phil Jones.

        tonyb

      • This alone should be enough to convince you that climate ( these are thirty years of events ) is not predictable.

        …why?

        If you show me a chart showing that current models do a mediocre job at handling blocking patterns, then I’m going to conclude that current models do a mediocre job of handling blocking patterns.

        It would be fallacious to extend that to how models handle climate in general or to how future models will do at blocking patterns.

        I can agree that regional precipitation needs a fair bit of work.

      • If you show me a chart showing that current models do a mediocre job at handling blocking patterns, then I’m going to conclude that current models do a mediocre job of handling blocking patterns.

        It would be fallacious to extend that to how models handle climate in general or to how future models will do at blocking patterns.

        Indeed. It would be also interesting to know how the blocking frequency changes when climate models are perturbed. Even if they don’t get the absolute value right, they may still all indicate a similar response to some kind of perturbation – or, maybe not; can’t tell from TE’s figure.

      • If you show me a chart showing that current models do a mediocre job at handling blocking patterns, then I’m going to conclude that current models do a mediocre job of handling blocking patterns.

        If only they could do a mediocre job. The results – of a hindcast, no less – are crap. These models all have the same physics, right? But subtle infidelities magnify into huge divergence, not just from reality, but of the models from one another.

        The same principle applies to the future.

        If you examine the blocking above you’ll find the peaks of observed blocking ( the line in black ) correspond with the Eastern edges of the ocean basins. This is a basic feature of the general circulation: the ocean basins are as low as air masses can sink ( minimal potential energy ), and they tend to coagulate like so many bumper cars when they then encounter the higher terrain of the continents. This circulation then largely determines precipitation:
        http://www.physicalgeography.net/fundamentals/images/GPCP_ave_annual_1980_2004.gif

        One is left with the distinct impression that climate models can’t even predict the existing climate.

      • Anders,

        I was going to bring this up at your place, but TE left the building just as his peddling was getting interesting.

        Here’s the 55-year NCEP/NCAR reanalysis view after Barriopedro and García-Herrera (2006):

        https://4.bp.blogspot.com/-2Sh-7Aovf8E/V9mfAngUYuI/AAAAAAAABFI/mxuFz7A4inAZyZOhL_n1ZYs-CgdXOeEtACLcB/s1600/barriopedro2006etalFig6.png

        How much exactly do these 1-sigma envelopes need to overlap before we can say that Teh Modulz Ensemblez resemblez reality well enough to be considered useful?

        ***

        Reminds me of a joke we have on this side of The Pond:

        Q: How many tourists can you put on a bus in Mexico?
        A: Uno mas.

        Siempre uno mas, todos los días.

      • The averaging of different models with different physics is statistically meaningless. The consideration of multiple realizations of the most correct model could be used. Similar methods are used in other fields.

        What is the assumed PDF? What is the basis for its selection? Are the results useful for making policy decisions?

      • Because ENSO events change a lot of features ( fewer Atlantic Hurricanes with El Nino, more CA flooding with El Nino, more Texas and CA drought with La Nina, etc. etc. ) insurance considerations alone mean you could make a huge amount of money if you could accurately forecast not even the individual years of ENSO events, but just whether there will be more Ninos, Ninas, or Nadas.

        The fact that no such forecasts are out there should tell you something about the limits of forecasting fluid flow fluctuations.

      • dougbadgero,

        The consideration of multiple realizations of the most correct model could be used.

        Which is the “most correct model” in this instance? How do you know that the one which gives the best fit to *reanalysis* data for blocking also “most correctly” represents all the other real processes in the real system?

        Similar methods are used in other fields.

        Yah, like weather forecasting. 100-member ensembles, where the initial conditions are randomly perturbed to produce a probability distribution, are routine, and based on Lorenz’s original work on the topic. (A toy sketch of the procedure follows at the end of this comment.)

        Here’s the caption to the IPCC figure provided by TE above:

        Box 14.2, Figure 1 | Annual mean blocking frequency in the NH (expressed in % of time, that is, 1% means about 4 days per year) as simulated by a set of CMIP5 models (colour lines) for the 1961–1990 period of one run of the historical simulation. Grey shading shows the mean model result plus/minus one standard deviation. Black thick line indicates the observed blocking frequency derived from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. Only CMIP5 models with available 500 hPa geopotential height daily data at http://pcmdi3.llnl.gov/esgcet/home.htm have been used. Blocking is defined as in Barriopedro et al. (2006), which uses a modified version of the (Tibaldi and Molteni, 1990) index. Daily data was interpolated to a common regular 2.5° × 2.5° longitude–latitude grid before detecting blocking.

        Any guesses how much CPU time it takes to do *one* thirty-year run at daily resolution on a state-of-the-art AOGCM? The reanalysis data come at 4x daily resolution, you know.

        The IPCC uses “ensembles of opportunity” because they don’t have any alternative. There simply isn’t enough computing horsepower available to do it the “right” way. It’s a resource constraint, not “durrrrp, we don’t know what we’re doing because … climastrologists [drool drool drool].”
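
        For what the perturbed-ensemble procedure mentioned above actually looks like, here is a toy version on the Lorenz (1963) system; the parameter values are the classic published ones, while the ensemble size, perturbation scale, and step count are arbitrary illustrative choices:

        ```python
        import numpy as np

        def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
            """One forward-Euler step of the classic Lorenz (1963) system."""
            x, y, z = s
            return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        rng = np.random.default_rng(0)
        base = np.array([1.0, 1.0, 1.0])
        ensemble = base + 1e-6 * rng.standard_normal((100, 3))  # 100 perturbed members

        for _ in range(2000):  # integrate every member forward in time
            ensemble = np.array([lorenz_step(s) for s in ensemble])

        # Individual members have long since diverged from one another, but the
        # ensemble as a whole yields a distribution of possible states.
        print(ensemble[:, 0].mean(), ensemble[:, 0].std())
        ```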

      • If only they could do a mediocre job. The results – of a hindcast, no less – are crap

        “Crap” is an emotional, subjective term you use to say “I don’t like it”. It’s your feelings, not fact.

        Looking at the chart, the observations are generally within the 1-sigma range of the models, and as brandon shows, there’s quite a bit of overlap between the error bars of observations and models. Obviously the models need some work, but I don’t think your attack is justified. And the models are improving every year.

        These models all have the same physics, right?

        No, they do not.

      • Looking at the chart, the observations are generally within the 1-sigma range of the models

        The fact remains none of the models could accurately predict what actually happened over thirty years. We expect that, because the solutions of the equations of motion are unstable.

        And the models are improving every year.

        How can you possibly say that?

        Model predictions for 100 years out are not tested.
        And atmospheric motion stops being predictable beyond 10 days.

        What model is being validated somewhere beyond ten days but still testable in a human lifetime (say, ten years)?

        If such a thing were accurate and possible don’t you think you’d see filthy rich climatologists?

        Instead, like economists, they don’t ever seem to strike it rich – unless you include Hansen’s quarter-of-a-million-dollar payola from Teresa Heinz.

      • BG,
        I am not sure what the point of your response was…
        Conceptually even the best model has limited utility. I have no idea which GCM is best. Even the best may not be good enough to make policy. Averaging outputs of multiple models is still statistical nonsense.

        The other field I was referring to was nuclear safety analysis…my field.

      • “The fact remains none of the models could accurately predict what actually happened over thirty years.”

        Models cannot predict.

        To do that, they’d need to know the future — the next 30 years of all human greenhouse gas emissions, changes in solar intensity, and volcanic eruptions.

        The best they can do is to take the known forcings over the time and run.

        Even that is iffy: climate models aren’t initialized to the actual initial state, because no one knows the exact initial state, especially ocean currents.

      • Now David, “Even that is iffy, because climate models aren’t initialized into the actual initial state. Because no one knows the exact initial state, especially ocean currents.”
        you aren’t supposed to say that; the standard line is that it is a boundary value problem, natural variability is only +/- 0.1 C and zeroes out in less than 60 years, and ocean currents fall into the “unforced variability” unicorn pigeonhole.

      • dougbadgero,

        I am not sure what the point of your response was…

        Neighborhood of “resource constraints” on computational cycles.

        Conceptually even the best model has limited utility.

        Sure. It’s *crucial* to not ask a climate model to make long-term weather predictions, which is where Turbulent Eddie’s appeals to initial conditions and the divergence problem lead. Implicitly demanding perfection for a hindcast like he does is another no-no. It simply never happens.

        Ya takes the error metrics for what they’re worth and assume the forecast/projection isn’t going to be better than that. Plan accordingly.

        I have no idea which GCM is best.

        It might be fair to say that nobody does. For one thing, they’re not all designed to be “good” at the same thing.

        Even the best may not be good enough to make policy.

        Dual-edged sword you’re wielding there. That would make even the “best” model not good enough to *not* make policy. You dig?

        Averaging outputs of multiple models is still statistical nonsense.

        You’re in luck: the IPCC doesn’t exactly say it makes statistical sense. There’s a lot of discussion about what would be ideal vs. what can reasonably be done.

        The other field I was referring too was nuclear safety analysis…my field.

        Lotsa talk downthread about how comparable these two fields are in terms of scale when it comes to model V&V.

      • BG,

        I “dig”, IMO we would be better off ignoring the models when making policy. They are not much more than political tools.

      • “For one example: ENSOs vary.”

        ENSOs redistribute planetary heat; they don’t add to or subtract from it, so they don’t contribute to the long-term equilibrium state.

      • dougbadgero,

        I “dig”, IMO we would be better off ignoring the models when making policy.

        Yes, I gathered. I’m not sure you dig though because you’ve offered no alternative for making policy decisions.

        They are not much more than political tools.

        Any information used to influence a policy outcome is a political tool.

        “For one example: ENSOs vary.”

        ENSOs redistribute planetary heat; they don’t add to or subtract from it, so they don’t contribute to the long-term equilibrium state.

        ENSO events change regional temperature, precipitation, drought, storms, etc. etc.

        The fact that no one can tell you whether there will be more El Ninos or La Ninas over the next ten years should tell you something.

        Atmospheric forecasts beyond ten days are not useful.

        The misconception is that at some time between ten days and a century, they become useful again. There is no basis for this, and the non-linear nature of the equations of motion dictates why.

        Not everything is unpredictable. Global average temperature would appear to be predictable. The general circulation occurs within constraints which would also appear to be predictable.

        But events which are mostly determined by the equations of motion, such as floods, droughts, cold waves, heat waves, storms, tropical cyclones, etc. are not predictable.

      • I love this thread. Because some of the people are actually doing the work, not criticising it (not me though ;-) ). Also the way climate models are discussed actually suggests understanding, occasionally, and provides real insight as to their complexity and challenges. Sort of string theory for climatologists. Of course climate models are not proven science, nor can they ever be, and to misrepresent them this way is doing them a disservice. They have uses, as neural nets do (a wind up here). And the best and most expert contributors seem to eschew claiming hard scientific laws and reliable predictions arising from their models. Which is good. Wisely, because they cannot be independently validated in a repeatable controlled experiment.

        Modellers are not doing science as physics understands it; they are creating computer models based on a set of hypotheses regarding linear and non-linear relationships, which they then approximate in attempting to fit the models to a very multivariate, under-subscribed data set with multiple interrelated non-linear responses and nowhere near adequate coverage across the whole planet, and which are not possible to compute at satisfactory resolution with the available computing resources even if there were enough data. So: crude guesses, but tracking reasonably well, after 4×2 adjustment, for their grants. Is that about fair?

        Right or wrong, how are you ever gonna prove that? BTW, Feynman DID describe the inability of pure science to understand the interrelated complexities of nature rather elegantly, as far as they could be defined by true physical laws, and become accepted science – per the record. Sic:

        “What do we mean by “understanding” something? We can imagine that this complicated array of moving things which constitutes “the world” is something like a great chess game being played by the gods, and we are observers of the game. We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules. The rules of the game are what we mean by fundamental physics. Even if we knew every rule, however, we might not be able to understand why a particular move is made in the game, merely because it is too complicated and our minds are limited. If you play chess you must know that it is easy to learn all the rules, and yet it is often very hard to select the best move or to understand why a player moves as he does. So it is in nature, only much more so.”

        volume I; lecture 2, “Basic Physics”; section 2-1, “Introduction”; p. 2-1

        I think he nailed it right there. As he was so good at. Climate science is not rocket science, or they would rarely get off the pad. I said that. Your climate may vary…

      • Modellers are not doing science as physics understands it; they are creating computer models based on a set of hypotheses regarding linear and non-linear relationships, which they then approximate in attempting to fit the models to a very multivariate, under-subscribed data set with multiple interrelated non-linear responses and nowhere near adequate coverage across the whole planet, and which are not possible to compute at satisfactory resolution with the available computing resources even if there were enough data. So: crude guesses, but tracking reasonably well, after 2×4 adjustment, for their grants. Is that about fair?

        You hit people and things with a 2×4,

        And I fixed (I hope) two spelling errors.

      • “Correcting the data”, NASA-like, to suit a prejudice, huh? In the UK we hit things with a 4×2, so leave that alone. 4×2 clearly has more impact than leading with the lesser measurement. The US obviously has this bassackwards, as ever with measurements, as well as not being metric enough. Ask NASA, etc. No spell check on HTML windows, is there?

      • lol, I surrender for changing the data to suit my bias.
        But, bassackwards, you say! Ha, my ancestor left your puny island because the pantries were all empty, and had to cross the pond to create good take out to get a bite to eat.

      • Besides, the 2″ leading edge will have twice the force over half the impact area; no wonder you lost the war ;)

      • You distract from Feynman’s crucial point. You can’t model nature to the level of detail, effect, or reliability that a physical law demands, and he made that clear. You are clearly correct regarding impact force per unit area of the 4×2 (= 8), but the overall applied force is the same, as long as contact is made.

        Depends if you want to make a dent in the model, or move the whole thing.

        BTW, if you want to communicate with the public, you have to use what impresses best on emotional intelligence, something lifetime techies rarely grasp, but Matt Ridley does, and Colin McInnes, and Douglas Lightfoot, and Lamar Alexander, etc. Your climate may vary.

      • BTW you want to communicate with the public, you have to use what impresses best on emotional intelligence, something lifetime techies rarely grasp, but Matt Ridley does, and Colin McInnes, and Douglas Lightfoot, and Lamar Alexander, etc.

      • I was the fool who got to go explain to perfectly happy people that they had to do more/different work during the daily work time they normally spent trying not to do anything.
        But I was full of evangelistic fervor for what truly were good tools, and you get the typical bell curve of adoption. I worked with the customer deployment teams; usually they had a boss who told them this will be done, and it was pretty easy to get the younger early adopters to buy in. I always suggested they pick a few of the more renowned early adopters as proto users, to become their evangelists, and then I always offered a solution for the laggards: the really smart old SOBs who were too valued to be fired but were a huge PIA.
        In the design-tool days, my customers were EEs, typically smart folks. I’d tell them: take the biggest PIA, out in front of the windows for everyone to see, and shoot him.
        It’d only take a couple.

        Now, I just tell them a taser is just as much, maybe more, fun; you get to watch them flop around on the ground, wet themselves lol.

        4×2’s are just so thuggish :)

      • The misconception is that at some time between ten days and a century, they become useful again. There is no basis for this, and the non-linear nature of the equations of motion dictates why.

        Actually, I think you’re wrong on this.
        We should be able to get to a point where we can estimate the general rate of, say, El Ninos for the next 100 years: is it 5 or 10? How many Atlantic hurricane seasons vs. Gulf hurricane seasons? Not necessarily which year, just projected rates and probabilities. We did timing analysis of synchronous digital circuits like this, where you didn’t define a 1-or-0 pattern to test timing; you specified a period during which all the signals could change, and when they stopped, and the tool then found the changing window of the outputs, which would then go to the next stage as the input, if there was a next stage, and so on. Now, if the inputs for the last stage aren’t stable when the clock triggers the sampling of the output values, your circuit doesn’t work. But we should be able to learn the basic PDO/AMO cycles, how the El Ninos and La Ninas cycle, and what these global variables cause to happen.

        So I see a report that says: “We expect an El Nino in the next 6 to 9 years, and this will transition to an Atlantic hurricane season 6 of the next 9 seasons”, something like that. Someone probably already does this, but the goal would be to extend this out for a few hundred years and improve our skill at projecting when, not if.

        Now I don’t expect us to be able to do this for 50 or even 100 years, but we should be able to collect enough good data to at least catch the 30- or 40-year cycles, and we will get better with the models.
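
        A minimal sketch of the kind of rate-and-probability statement being proposed, assuming (purely for illustration, and exactly the assumption disputed upthread) that events follow a stationary Poisson process; the 7-per-decade rate is hypothetical:

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        rate_per_decade = 7.0  # hypothetical El Nino rate, events per decade

        # Monte Carlo over the next decade under the stationary-rate assumption:
        sims = rng.poisson(rate_per_decade, size=100_000)
        print("P(5 or fewer events):", np.mean(sims <= 5))
        print("P(10 or more events):", np.mean(sims >= 10))
        ```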

      • “They take an average of the guesses. This is how we can confidently predict that every roll of the dice will result in 3.”

        The average is 3.5

      • “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. ”

        Translation:

        We are able to predict the highly biased average of model runs by averaging the highly correlated model and numerical errors of the climate system.

      • “And the models are improving every year.”

        How can you possibly say that?

        It’s pretty easy. You compare the models to the real world.

        There’s an immense amount of scientific literature covering just this. And yes, the models are demonstrably and objectively improving every year.

        The graph that *you* posted (about blocking events) comes from an IPCC section discussing the models, how they’ve done, and how they’re improving. I mean… you did read the section before using their figure, right?

      • It’s pretty easy. You compare the models to the real world.

        The actual CMIP5 runs of the blocking above indicate otherwise.

        The IPCC can cheerlead all they want but they can’t change reality.

      • The actual CMIP5 runs of the blocking above indicate otherwise.

        Heh. So comparing CMIP5 to observations shows that the models aren’t getting better?

        I don’t know if I have to point this out, but that doesn’t logically follow.

      • Blocking frequency is discussed in Box 14.2, Chapter 14 of AR5 WGI (which is where you will also find TE’s figure). It says

        The AR4 (Section 8.4.5) reported a tendency for General Circulation Models (GCMs) to underestimate NH blocking frequency and persistence, although most models were able to capture the preferred locations for blocking occurrence and their seasonal distributions. Several intercomparison studies based on a set of CMIP3 models (Scaife et al., 2010; Vial and Osborn, 2012) revealed some progress in the simulation of NH blocking activity, mainly in the North Pacific, but only modest improvements in the North Atlantic. In the SH, blocking frequency and duration was also underestimated, particularly over the Australia–New Zealand sector (Matsueda et al., 2010). CMIP5 models still show a general blocking frequency underestimation over the Euro-Atlantic sector, and some tendency to overestimate North Pacific blocking (Section 9.5.2.2), with considerable inter-model spread (Box 14.2, Figure 1).

      • “The average is 3.5”

        It’s not possible to roll a “3.5” on a die. Basic physics. And don’t forget, all models are wrong, but they are useful!

      • Nevertheless, 3.5 is the average roll of a fair six-sided die.

      • Nevertheless, 3.5 is the average roll of a fair six-sided die

        So imagine a contraption that rolls, say, 10 dice at a time, but you can only see the results of 5, though sometimes it’ll be a different 5. How do you verify they are all fair dice?

        BTW, this is my take on reporting a temp for someplace without a surface station within 200 miles.
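
        For the five dice you can see, the standard tool would be a chi-square goodness-of-fit test; a minimal sketch (the simulated rolls stand in for real observations, and note it says nothing about the hidden five unless which dice you see is itself random):

        ```python
        import numpy as np
        from scipy.stats import chisquare

        rng = np.random.default_rng(2)
        visible = rng.integers(1, 7, size=5000)  # stand-in for the dice we can see

        observed = np.bincount(visible, minlength=7)[1:]  # counts of faces 1..6
        stat, p = chisquare(observed)  # null hypothesis: all six faces equally likely
        print(stat, p)  # a small p-value would suggest the visible dice are unfair
        ```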

      • This little subtopic isn’t worth continuing, IMO.

      • If NASA had hired a historian, they would know that their new AGW subtopic about our planet Earth getting hit by something from outer ‘space’…

        http://www.space.com/34070-earth-vulnerable-to-major-asteroid-strike.html

        has not changed a bit since the day the dinosaurs died. And nobody that counted then cared anyway.

      • “I’m not sure you dig though because you’ve offered no alternative for making policy decisions.”

        I would use what we know from first principles. CO2 is a ghg and should cause some warming. The system is complex and knowledge of first principles is of limited use. Now make policy.

        Using the wrong information is worse than operating from almost complete uncertainty.

      • Steve Milesworthy ==> The idea that one can predict “the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.” itself is seriously challenged in the study of complex dynamical systems.

        I personally would challenge the idea that we can “predict” PAST probability distributions of the climate system’s states, as it would require being able to define a “system state” and then find how often it had occurred. I don’t think we can do that other than with the broad-brush “Ice Age” and “Interglacial” (and maybe, further back, the occasional Jungle World scenario).

        My perspective is that the Climate System, behaving like a chaotic dynamical system, may have states that behave as “attractors” — and if so, we can expect to visit them again — but have no possible way of knowing when they will arrive next or what changes in climate factors might cause a change to or from any particular “attractor”.

        The only really likely suspects are the Earth’s orbital eccentricities associated with Ice Ages, as far as I know.

      • Yes Kip, I think this issue of the attractor’s properties is critical. It could be a very high dimensional manifold, in which case the “climate of the attractor” may take a very long time to simulate. But the real problem, I think, is that there is no reason to expect standard numerical methods to be meaningful on the really coarse grids used with all the turbulence and other subgrid models.

        Recent work on large eddy simulations casts doubt on whether the simulations even converge in the fine grid limit.

        The justifications given here by the apologists of GCMs have really nothing to back them up except words. To really look at this issue is a huge long term effort and current theoretical understanding is inadequate to really address it.

        Some kind of asymptotic result for fine grids would at least give some understanding of the issues. I am tempted to say that GCMs solve the laws of pseudo-physics and can only be credible to those already predisposed to believe in them.
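
        For readers wondering what even a basic convergence check looks like, the observed-order-of-accuracy calculation standard in the V&V literature is a start; a minimal sketch with made-up solution values on three systematically refined grids:

        ```python
        import math

        def observed_order(f_coarse, f_medium, f_fine, r):
            """Observed order of accuracy from solutions on three grids with a
            constant refinement ratio r (standard Richardson-type estimate)."""
            return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

        # Hypothetical solution functional (say, a mean flux) on three grid levels:
        print(observed_order(1.9700, 1.9925, 1.9981, r=2.0))  # ~2, i.e., second order
        # For a turbulent simulation with subgrid models, this ratio may never
        # settle down as the grid is refined -- which is exactly the concern above.
        ```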

    • Turbulent Eddie:

      You state: the IPCC wants to have it both ways with this, “rightly” asserting, on one hand: “in climate research and modelling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that LONG TERM PREDICTION OF FUTURE CLIMATE STATES IS NOT POSSIBLE”

      On the contrary, it is quite possible.

      As I have repeatedly pointed out, projections of future average global temperatures, between 1975 and 2011, based solely upon the amount of reduction in anthropogenic SO2 emissions, are accurate to within less than a tenth of a degree centigrade, for any year for which the net amount of global SO2 emissions is known.

      This accuracy, of course, eliminates the possibility of any warming due to greenhouse gasses, which is why GCMs are an exercise in futility.

      • Hans, why does your SO2/temperature graph stop in 2000?

        That’s always suspicious.

      • Hi David. Because this is a graph I made in 2002.

      • Hans, should be trivial to update

      • No, it’s not trivial. I don’t have the Excel file anymore and I wouldn’t know where I found the data in the first place; this was just a reaction to a post above and “something I had prepared earlier”. If you think the graph is now superseded, feel free to convince me by updating it yourself. BTW, the temperature history of the USA has been altered since 2002.

      • Hans: Now I see that your y-axis is upside down. In that case the data up to 2000 look similar.

      • Hans, the fact that there is an approximate correlation between SO2 and USA48 temperatures — which may not be too surprising — doesn’t mean SO2 accounts for all the temperature change.

      • David Appell:

        You wrote to Hans: “because there is a correlation between SO2 and USA48 temperatures-which may not be too surprising-doesn’t mean SO2 accounts for all of the temperature change”

        Here, you are admitting that the removal of SO2 aerosols will cause temperatures to increase, but that all of the warming may not be due to their removal.

        I would remind you that the IPCC diagram of radiative forcings has NO component for any warming due to the removal of SO2 aerosols.

        Would you agree that until the amount of forcing due to their removal is established, and included in the diagram, the diagram is essentially useless?

      • “I would remind you that the IPCC diagram of radiative forcings has NO component for any warming due to the removal of SO2 aerosols.”

        I doubt it. Everyone knows aerosols are causing cooling.

      • David Appell:

        You said: “everyone knows aerosols cause cooling”

        But when they are removed, warming results. And that is what Clean Air efforts are doing.

        “You said: ‘everyone knows aerosols cause cooling’”
        “But when they are removed, warming results. And that is what Clean Air efforts are doing.”

        That’s what I said — aerosols cause cooling. So their absence doesn’t cause cooling.

      • David:

        You wrote: “That’s what I said-aerosols cause cooling. So their absence doesn’t cause cooling”

        Agreed. But if they ARE present in the atmosphere, causing cooling, their removal will cause warming. Surely you can understand that.

        In 1975, anthropogenic aerosol emissions totaled approx. 131 Megatonnes. In 2011, they totaled 101 Megatonnes, a reduction of 30 Megatonnes.

        Their removal is responsible for all of the warming that has occurred!

        “But if they ARE present in the atmosphere, causing cooling, their removal will cause warming. Surely you can understand that.”

        I’ve said exactly that twice now. This is the third time.

        “Their removal is responsible for all of the warming that has occurred!”

        You’ve never offered any evidence of this, despite claiming it many times.

      • David:

        I have provided evidence many times, although you may have missed it.

        Consider the 1991 eruptions of Mount Pinatubo and Mount Hudson. They injected 23 Megatonnes of SO2 into the stratosphere, cooling the earth’s climate by 0.55 deg. C. As the aerosols settled out, the earth’s temperature returned to pre-eruption levels due to the cleaner air, a rise of 0.55 deg. C. This represented a warming of 0.02 deg. C. for each Megatonne of SO2 aerosols removed from the atmosphere.

        Between 1975 and 2011, net global anthropogenic SO2 aerosol emissions dropped from 131 Megatonnes to 101 Megatonnes, a reduction of 30 Megatonnes. The average global temperature in 2011 (per NASA) was 0.59 deg. C. This also represents a warming of 0.02 deg. C. for each net Megatonne of reduction in global SO2 aerosol emissions.

        Using the 0.02 deg. C. “climate sensitivity factor”, simply multiplying it by the amount of reduction in SO2 aerosol emissions between 1975 and any later year (where the amount of SO2 emissions is known) will give the average global temperature for that year with an accuracy of less than a tenth of a degree C (when natural variations due to El Ninos and La Ninas are accounted for). This PRECISE agreement leaves no room for any additional warming due to greenhouse gasses.

        CO2, therefore, has no climatic effect.
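
        Purely to make the arithmetic of the claim explicit (this reproduces the commenter’s stated numbers; it is not an endorsement, and the replies dispute the premise):

        ```python
        sensitivity = 0.02       # claimed: deg C of warming per Megatonne of SO2 removed
        reduction = 131 - 101    # claimed: Megatonnes of net reduction, 1975 to 2011
        print(sensitivity * reduction)  # 0.6 deg C, vs. the ~0.59 deg C figure cited
        ```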

      • Burl, this isn’t evidence.

        It’s just a bunch of numbers. Where did they come from? How were they derived? It’s all no better than gibberish.

        I assume you didn’t publish your claims anywhere?

      • David:

        Google “it’s SO2, not CO2” for an earlier version of my thesis. The sources for the data are given there. None of the data is gibberish; it is all referenced.

        A later, updated version with additional supporting information is being submitted for publication.

      • For Burl’s idea to work, he needs Man not only to have removed their own aerosols, but also to have removed natural aerosols that existed in the pre-industrial era, simply because it is a degree warmer now than then. How that happened, he doesn’t elaborate. Compare now with pre-industrial. What has changed? More GHGs and more aerosols. The GHGs dominate.

      • Jim D.

        You said that it is now a degree warmer than it was in pre-industrial times. It is now a degree warmer than it was in 1970.

        Is 1970 really pre-industrial?

      • More like 1770. Keep up.

      • Jim D.

        I am confused. Since average global temperatures have risen 1 deg. C. since 1970, what was the pre-industrial temperature that you are referring to?

        Are you saying that it was the same as in 1970 (14.0 deg. C)?

        Global temperatures now are 1 degree warmer than in pre-industrial times, which are usually taken to be before major emissions started in the 19th century. There were fewer aerosols then than now, because these mostly come with the emissions growth. Yet, it was a degree C colder then than now. Do you see how your logic of more aerosols, more cooling, fails when comparing the 18th century to now, or should I explain further?

      • Jim D.

        You keep saying that temperatures now are one degree C. warmer than in pre-industrial times.

        But they are now one degree C. warmer than they were in 1970 (per NASA’s Land-Ocean Temperature Index).

        And 1.2 deg. C. warmer than they were in 1880.

        Where does your “one degree C. warmer” statement come from? It is obviously incorrect.

      • It’s from the temperature datasets such as HADCRUT and GISTEMP. Here’s an example. Ignore the wiggly CO2 line.
        http://www.woodfortrees.org/plot/gistemp/from:1900/mean:12/plot/esrl-co2/scale:0.01/offset:-3.2/plot/gistemp/mean:120/mean:240/from:1900/plot/gistemp/from:1985/trend
        How did it warm so much since the 1800’s? Aerosols are higher since then, so your logic would suggest it should be cooler today than the 1800’s, yes?

      • Jim D.

        The data sets that you show only go back to 1900. There is nothing in them that would support your “one degree C. of warming above pre-industrial times” statement.

        You must have some other reference.

        You are missing the point. Take 1850. Do we have more aerosols now than then? Why is it a degree warmer now? 70% of the GHG forcing increase has come since 1950, which is why it looks like the warming has been faster more recently, but there was some 30% spread over the century or two before that, as you can see.
        http://www.woodfortrees.org/plot/hadcrut4gl/mean:12/plot/esrl-co2/scale:0.01/offset:-3.35/plot/hadcrut4gl/mean:120/mean:240/plot/hadcrut4gl/from:1985/trend

      • Jim D.

        Yes, we do have more SO2 aerosols in the atmosphere than in 1850.

        However, there are other warming sources that can offset the difference.

        For example, increased solar radiance (A possibility, although I have no data on it at this time.)

        Population growth. There were 1.2 billion people in 1850. Now there are more than 7.1 billion, with most all of them inputting far more heat into the atmosphere through the use of energy than in 1850.

        Infrastructure warming: Cities, parking lots, paved roads, more roofs, etc.

        Industrial heat emissions

        All of the above would go a long way toward offsetting the cooling due to more SO2 aerosols in the present.

      • Yes: CO2 forcing especially, because that has added 2 W/m2. The sun is weaker now than in most of the last two centuries, so we can count that one out, and the fastest warming areas are away from cities, so that counts out your other supposition.

      • Jim D.:

        You wrote “the fastest warming areas are away from cities, so that counts out your other supposition”

        No, it does not.

        It does not matter where the heat is generated; it adds to the warming of the atmosphere. There are far more large “urban heat islands” now than in 1850.

        I do want to thank you for the link to the Woodfortrees graph that goes back to 1850. It shows a tremendous temperature spike of about 0.59 deg. C that coincides with the “Long Depression” of 1873-1879.

        This, of course, is due to the reduction in SO2 emissions from reduced industrial activity during the depression (18,000 businesses failed, per Wikipedia).

        Interestingly, the peak is essentially identical in height to that of the 1930s, when SO2 emissions fell by about 29.5 Megatonnes, and the 0.59 deg. C. temp rise is also what would be expected for a decrease of 29.5 Megatonnes in SO2 emissions.

        Thus, SO2 emissions in the atmosphere prior to 1879 were at least 29.5 Megatonnes, which significantly narrows the gap between then and now.


      • Global SO2 emissions are at least ten times larger today than in 1850, and the IPCC would say much more, so you have to reconcile that with warming that has occurred in that period. Clearly SO2 is not the main factor in that.

      • Jim D.

        I suspect that SO2 emission levels for the 1850 era are seriously under-reported. For example, there would have been large amounts of SO2 introduced into the atmosphere from the widespread use of coal for heating, which may not have been included.

        I have sent a query to Dr. Klimont regarding this.

      • Coal would have been the main source they started their estimations with. Was coal use in 1850 anything like today’s levels? No. The global population was 1 billion, most of which was not developed.

      • So you don’t have a link, Burl, let alone a peer reviewed journal paper.

        Surprise surprise.

      • David:

        Google the reference; you will be surprised.

        Time to retire.

      • > you don’t have a link

        Burl offered you a way to find one. Here:

        https://wattsupwiththat.com/2015/05/26/the-role-of-sulfur-dioxide-aerosols-in-climate-change/

        You’re welcome, sea lion.

      • Burl Henry commented:
        “For example, increased solar radiance (A possibility, although I have no data on it at this time.)”

        ftp://ftp.pmodwrc.ch/pub/data/irradiance/composite/DataPlots
        http://www.acrim.com/Data%20Products.htm
        http://lasp.colorado.edu/data/sorce/tsi_data/daily/sorce_tsi_L3_c24h_latest.txt
        http://spot.colorado.edu/~koppg/TSI/
        http://www1.ncdc.noaa.gov/pub/data/paleo/climate_forcing/solar_variability/lean2000_irradiance.txt

        “Population growth. There were 1.2 billion people in 1850. Now there are more than 7.1 billion, with most all of them inputting far more heat into the atmosphere through the use of energy than in 1850.”

        Civilization runs on about 20 terawatts; spread over the Earth’s surface, that comes to only about 0.04 W/m2, about the additional forcing from manmade GHGs added every year. Humans themselves emit about 100 Watts each, or collectively only 0.7 terawatts.
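
        The unit conversions behind those figures, for anyone checking (the Earth’s surface area is the only added input; the 2,400 kcal/day metabolic figure appears further downthread):

        ```python
        EARTH_AREA = 5.1e14  # m^2, total surface area of the Earth

        # Metabolic output of one person, from ~2400 kcal/day of food energy:
        print(2400 * 4184 / 86400)    # ~116 W, i.e., "about 100 Watts"

        print(7.1e9 * 100 / 1e12)     # ~0.7 TW for all humans' metabolism combined
        print(20e12 / EARTH_AREA)     # ~0.04 W/m2 from civilization's 20 TW
        print(0.7e12 / EARTH_AREA)    # ~0.001 W/m2 from metabolism alone
        ```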

      • David:

        You indicated that humans emit about 100 watts. Interesting information!
        However, my intended comment was that most of those extra people are using far more heat-emitting energy now than they were back in 1850, for transportation, lighting, appliances and so on. Our “footprint” is much greater than 100 watts.

      • David:

        Not convinced that your “100 watts person” estimate is anywhere near being correct.

        I just turned on a 100 watt lamp, thus doubling my “footprint”.

      • Burl wrote:
        “Our “footprint” is much greater than 100 watts.”

        That’s why I gave the 20 TW number, and calculated with it.

      • Two-thirds of the world does not have near the footprint you have, David. If you are keeping track…

      • Burl wrote:
        “Not convinced that your “100 watts person” estimate is anywhere near being correct.”

        You emit about as much energy as you get from food. 2400 Cal/day (1 Cal = 1 kcal = 1000 cal). Do the math.

        “I just turned on a 100 watt lamp, thus doubling my “footprint””

        That is already included in the 20 TW number I used.

      • Jim D | September 18, 2016 at 12:29 am |
        Yes: CO2 forcing especially, because that has added 2 W/m2. The sun is weaker now than in most of the last two centuries, so we can count that one out, and the fastest warming areas are away from cities, so that counts out your other supposition.

        Whoever told you this is wrong. Please do not repeat this claim again.

        http://spot.colorado.edu/~koppg/TSI/TIM_TSI_Reconstruction.jpg

        The current TSI is higher than in the pre-cycle 18 era. The average for this cycle is approximately 1361. It is the lowest in about 72 years (the start of cycle 18 was 1944).

        This makes me a little suspicious of the future cooling claims, because the TSI would have to drop significantly if CO2 has any forcing value whatsoever. The late 30s and 40s were pretty warm and there are many 30s and 40s maximum temperature records.

    • “Yes Kip, I think this issue of the attractor’s properties is critical. It could be a very high dimensional manifold, in which case the “climate of the attractor” may take a very long time to simulate. But the real problem, I think, is that there is no reason to expect standard numerical methods to be meaningful on the really coarse grids used with all the turbulence and other subgrid models.”

      The proof is in the pudding. All climate models give reasonable numbers for ECS. Not exactly the same, but reasonable.

      Look at the Quaternary. Its climate looks fairly predictable from Milankovitch factors. Not exactly so — but the uncertainty is in the carbon cycle response, not the radiative forcing.

      So where are all the strange attractors in the Quaternary?

      Maybe one is out there somewhere in our future. Why is it likely there’s one in the next 100 years? Or the next million? The history of the Quaternary suggests they are very, very rare.

      • David, given selection bias and tuning I would regard GCM simulations as provisional pending sensitivity studies for the thousands of parameters.

        Nonlinear systems have three possible asymptotic attractors: fixed points, stable orbits, and strange attractors. Turbulent systems probably have only strange attractors. The attractor is the only faint hope for these simulations to be meaningful. As I mentioned above, we really know very little about its properties and how discrete approximations change them. Lack of grid convergence for LES is a real problem requiring further research. Without that, the models would be little more than parameter curve fits to the training figures of merit.

      • dpy6629: Where are the attractors during the 2.6 Myrs of the Quaternary?

        If none, why should I expect that one is imminent?

      • David, Rossby waves are chaotic and evidence of a strange attractor. As I said above, you should embrace the attractor, as it’s our only chance to show these simulations mean something.

      • David Appell,

        You may be confused about the difference between a point attractor and a chaotic strange attractor. Asking “Where are all the strange attractors in the Quaternary?” is an example of your misunderstanding, unless you misspoke.

        Chaotic systems are chaotic – weird, if you prefer. For some initial values, the system rapidly converges to zero – stable but meaningless. For other values, outputs become infinite. For the simple Lorenz equations, certain values produce a wide variety of three dimensional toroidal knots – stability of a kind.

        As far as I am aware, it is still impossible to predict initial values which will produce certain outcomes, mathematically. There are ranges of values which are seen to produce certain outcomes, although it cannot be shown that chaos may not occur within any assumed range.

        Lorenz said – “Chaos: When the present determines the future, but the approximate present does not approximately determine the future.”

        Both weather and climate appear to be examples of deterministic chaotic systems.

        Predicting future outcomes in any useful sense remains impossible.

        Cheers.
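
        Lorenz’s dictum quoted above is easy to demonstrate numerically; a minimal sketch with his 1963 system and the classic parameters, where the 1e-9 offset plays the role of the “approximate present”:

        ```python
        import numpy as np

        def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
            """One forward-Euler step of the Lorenz (1963) system."""
            x, y, z = s
            return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        a = np.array([1.0, 1.0, 1.0])
        b = a + np.array([1e-9, 0.0, 0.0])  # the "approximate present"

        for _ in range(3000):  # 30 model time units
            a, b = step(a), step(b)

        # The two trajectories are now completely decorrelated:
        print(np.abs(a - b))
        ```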

      • “David, given selection bias and tuning I would regard GCM simulations as provisional pending sensitivity studies for the thousands of parameters.”

        So all parameters are as important as atmospheric CO2 concentration, and should be treated as such?

      • Turbulence model parameters can make a very big difference and affect the boundary layer distribution of energy. Mostly, as the recent commendable paper on model tuning admitted, we just don’t really know. Let’s get busy and find out.

      • Mike Flynn:

        Where is *any* attractor in the Quaternary, strange, quantum, weird or otherwise?

        In other words, where is all the chaos? The Quaternary climate looks to have discernible patterns, not large chaotic jumps hither and yon.

        Merely referring to Lorenz won’t do it here.

      • dpy6629 wrote:
        “Turbulence model parameters can make a very big difference and affect the boundary layer distribution of energy.”

        And AGAIN: where is all this crazy chaos over the 2.6 Myrs of the Quaternary?

      • “Rossby waves are chaotic and evidence of a strange attractor.”

        And where is the evidence it’s mattered over the Quaternary?

      • David Appell,

        You don’t seem to appreciate that the rules of physics are the same now as when the Earth was created. Electrons and photons act in the same fashion. The properties of matter are the same. Assumptions, I know, but they’ll do me in the absence of evidence to the contrary.

        The fact that you don’t understand deterministic chaotic systems will not make them vanish. If you are claiming that the atmosphere obeyed different physical principles in the past, I might beg to disagree.

        Just because you cannot understand something does not mean it doesn’t exist. You can see an attractor just as clearly as you can see 285 K, or 25 W/m2.

        How many Kelvins, or W/m2 can you see in the Quaternary? Do they not exist, just because you can’t see them?

        In any case, as the IPCC said, the prediction of future climate states is not possible. I agree, but you may not.

        Cheers.

      • Mike Flynn: If you think attractors and/or chaos were important in Earth’s past climate, then simply point to when that was.

        I’ve asked several times now. Clearly none of you can do it.

      • David Appell,

        You’re just being silly now.

        According to the IPCC, the climate is a chaotic system. You may not agree. You are free to believe anything you wish.

        You have asked that I point out a specific time when chaos was important in Earth’s past climate. As you have not provided a definition of important, and do not seem to understand the importance of chaos in physical processes at all levels, I hope you won’t mind if I first ask you to provide a specific time when the Earth’s past climate was not, as the IPCC states, a chaotic system.

        Playing with words doesn’t change facts. If you can provide new relevant facts, I’ll change my views, obviously.

        Cheers.

      • When did chaos last play an important role in Earth’s climate?

      • David Appell,

        When did it not?

        I note you choose not to define “important”. Not unexpected, really.

        Cheers.

      • “When did it not?”

        Throughout the very regular Quaternary.

        You’ve finally admitted you can’t point to any chaos. That was my point all along.

      • David Appell,

        I believe the “laws of physics” applied throughout prehistory. Therefore, the climate operated, as the IPCC states, in a chaotic manner, the same as now.

        Not important to you, maybe.

        As the Quaternary period covers the present, adverse weather effects due to the chaotic nature of the atmosphere have been important to me. Cyclones, floods, hurricanes, blizzards, extreme cold and heat have all affected me.

        Maybe you deny the existence of chaos, the laws of thermodynamics, and similar things. That’s fine, but their existence or no, does not rest on what you think. Many scientists believed in things later discovered to be wrong or non-existent, or refused to accept things later found to be true.

        So far, I haven’t had to change many ideas based on new information. A few, but not many. Just lucky, I guess.

        Cheers.

      • Mike Flynn commented:
        “Therefore, the climate operated, as the IPCC states, in a chaotic manner, the same as now.”

        So, again, what’s the best example of chaos during the Quaternary?

        Because the last million years look fairly regular:

        https://seaandskyny.files.wordpress.com/2011/05/figure11.jpg

      • DavidA, you have just proved Kip’s and my point. Your graph is exactly like Ulam’s first nonlinear-waves paper, which predates Lorenz by at least a decade, and is the signature of a strange attractor.

        “DavidA, you have just proved Kip’s and my point. Your graph is exactly like Ulam’s first nonlinear-waves paper, which predates Lorenz by at least a decade, and is the signature of a strange attractor.”

        How is that a “nonlinear wave”? It’s close to a periodic function, modulated mostly by Milankovitch cycles.

      • It looks exactly like fluctuations in a turbulent boundary layer. I don’t have it downloaded, but the nonlinear-waves paper was by Stanislaw Ulam and it’s mentioned in his autobiography. Look at any flow visualization of a separated flow and you will see the same sort of thing.

        What is interesting about ice ages is that total forcing changes don’t cause them; small changes in the distribution of forcing do. It’s a very subtle and nonlinear effect that GCMs can’t really capture, at least as of the last time I checked.

      • “It looks exactly like fluctuations in a turbulent boundary layer.”

        It “looks” more like the sum of a few simple sinusoids.
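
        Concretely, “a sum of a few simple sinusoids” with roughly Milankovitch-like periods (~100 kyr eccentricity, ~41 kyr obliquity, ~23 kyr precession; the amplitudes here are arbitrary) already looks irregular while being perfectly periodic and predictable:

        ```python
        import numpy as np

        t = np.linspace(0, 1000, 2001)  # time in kyr
        signal = (1.0 * np.sin(2 * np.pi * t / 100)    # eccentricity-like, ~100 kyr
                  + 0.6 * np.sin(2 * np.pi * t / 41)   # obliquity, ~41 kyr
                  + 0.4 * np.sin(2 * np.pi * t / 23))  # precession, ~23 kyr

        # Wiggly to the eye, yet fully deterministic and predictable: no chaos
        # is required to produce an irregular-looking record.
        print(signal[:5])
        ```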

      • David, now you are starting to resort to the curve-fit method and Fourier analysis? We already know climate and weather are chaotic according to the IPCC and Palmer and Slingo.
        You should embrace the attractor. It’s your only chance that climate models are meaningful.

        All turbulent flows are chaotic and the atmosphere is no exception. Whether they are predictable is unresolved and the likely answer is some are and some aren’t.

      • Milankovitch forcing cycles are predictable maybe up to millions of years into the future. That is not chaos. You have to distinguish this from Lorenz-style chaos or turbulence, which are not predictable because they depend only on previous states, not on a predictable future driver like the orbital properties.

      • JimD, just because you have a name for the cycles means nothing. The cycle itself is chaotic as orbital mechanics is well known to be on long time scales.

        Embrace the attractor; it’s your only hope that GCMs are more than complicated energy balance methods.

      • You have to learn to distinguish true chaos from predictable combined cycles.

      • dpy6629 commented:
        “JimD, just because you have a name for the cycles means nothing. The cycle itself is chaotic as orbital mechanics is well known to be on long time scales.”

        Prove it! Instead of repeatedly asserting it.

        A simple pendulum also has an attractor. That doesn’t mean its motion is chaotic.

      • dpy wrote: “All turbulent flows are chaotic and the atmosphere is no exception. Whether they are predictable is unresolved and the likely answer is some are and some aren’t.”

        I haven’t seen anyone here explain how turbulence matters for long-term climate change, which is mostly about energy conservation.

      • I haven’t seen anyone here explain how turbulence matters for long-term climate change, which is mostly about energy conservation.

        What’s the difference in energy balance between having 10 hurricanes/cyclones vs 20 hurricanes/cyclones per year?

      • Well, DavidA, the issue is that the details of the turbulence could change. Ice ages, which are big changes, are simply not about average forcing but about the details of its distribution. GCMs right now pretty much don’t resolve turbulence and don’t model it either. Do they get much right that simple energy balance methods miss?

        In any case, you seem to be laboring under a simplified understanding of fluid dynamics. It’s fundamentally different from electromagnetics or structural analysis.

        If you want to increase your understanding, I can send you privately a layman’s intro to the subject.

      • The simple pendulum is a stable orbit not a strange attractor. It’s very well accepted that orbital mechanics and turbulent flows exhibit strange attractors, even when the naive “see” cyclical patterns.

      • dpy: I’d prefer something above layman level regarding fluid mechanics. My email address is on my Web site, davidappell.com

        You wrote:
        “Well, DavidA, the issue is that the details of the turbulence could change. Ice ages, which are big changes, are simply not about average forcing but about the details of its distribution. GCMs right now pretty much don’t resolve turbulence and don’t model it either. Do they get much right that simple energy balance methods miss?”

        I still don’t see how ANY of that implies chaos. It looks to be about the distribution of sunlight, ice-albedo feedbacks and the subtleties of the carbon cycle.

      • dpy wrote:
        “The simple pendulum is a stable orbit not a strange attractor.”

        It’s not strange, but it has an attractor in phase space at (theta=0, v=v_max=v(theta=0)). And it’s not chaotic.

        I still have yet to see a proof that the ice ages are evidence of chaos.

  21. Thanks, Dan, for this essay.
    It is so hard to point out all the different factors and not get bits taken the wrong way.
    GIGO does not sum it up.
    It is more “garbage in, predicated result out”.
    The result, like a stopped watch, might be right sometimes, close sometimes, but impractical for use if you have to get somewhere on time.
    I still think climate models are useful and must be used.
    Blind adherence to desired algorithms when they produce wrong results is a worry.
    If all it takes is a lower climate sensitivity, for instance (I am probably wrong), why not put it in and use the model if it works, and worry about why we are missing something later?
    Fluid dynamics and chaos, yes, but Nick is probably right that in most situations we will have some useful predictive power. Furthermore, if it trends back to average it will probably take off in the same direction again. We just have to be aware it is “probably”, not definitely, always.

  22. Not even garbage in: data in, predicated result out, wrong assumptions for the algorithms.

  23. A model is an approximation of reality.

    The question is whether the model is a good enough approximation that it responds in a similar way to reality if you perturb it with a change of some sort.

    For example, if adding CO2 radically changes the basis of some key assumptions such as the way the atmosphere and ocean exchange energy, then the model may respond incorrectly.

    The above article does not address the model approximations in these terms, so it is merely a worthless restatement of what model developers already know.

    • By the same token, the simplistic, incomplete mis-characterization of the models as “Laws of Physics” does not address the model approximations of reality. Model developers who already know that should not be repeating it.

      • You are arguing with descriptions of models for people not familiar with numerical analysis and physics-based parameterizations.

        And even then, your second example from Prof Tim Palmer is reasonably clearly qualified to make it clear that a “flaw” in his terms is a departure from the laws of physics rather than an approximation. We know what he means, but you choose to pretend not to.

        Approximations can be based on the laws of physics and they can be validated against either experimental results or more expensive models with greater fidelity.

      • I was not aware that I had used an example from Tim Palmer.

  24. > It is critical that the actual coding be shown to be exactly what was intended as guided by theoretical analyses of the discrete approximations and numerical solution methods.

    I don’t always V&V discrete approximations, but when I do, I V&V their exactness.

    Why V&V is so crucial is simply asserted.

    There would not be any need to invoke V&V to refute a self-contradictory statement, more so if it is especially self-contradictory.

  25. Let me be explicitly clear on a couple of points.

    (1) I did not characterize GCMs as a case of GIGO.

    (2) I did not say that GCMs are not useful.

    I do not, and will not ever, apply those characterizations.

    • Hi Dan. Thanks for the article and the thought you put into it. BTW, went to your web site and want to thank you also for your article of January 14, 2015. Wish I had found it sooner.

    • Dan, it would have been good if you had looked at the countless verifications and validations of GCMs that have been published and commented on those, instead of the pure disconnected speculation that appears here. That way you might possibly have found something to back up your arguments, but based on the evidence of this article I think you have not looked at these. Or you have, and not found anything to criticize in their outputs, because we don’t see anything specific here at all. An article on GCMs should at least illustrate something about their results, I think.

  26. Dan Hughes, thank you for this terrific and informative post. In finance, models are prepared (by financial consultants, banks, etc.) to provide a framework to understand what might happen for a base case and multiple sensitivity cases representing different economic conditions, market assumptions, and so on. Such models can correctly be described as scenario models. The forecasts for the independent inputs on the economy and markets are not known with great precision, but running scenario cases basically gives a “what if” picture of what “might” or “could” happen.

    Two factors are important. First is the structure of the models and whether they represent our best knowledge of the process, as you put it. Importantly, that is not to suggest at all that the base case model is a good one; it is simply the best we can have at the time. For investment in a multibillion dollar petrochemical plant with bank project financing this is relatively straightforward: the chemistry and engineering define the process, and the financial terms are based on what kind of financing deal is negotiated with the banks. The “cases” run are selected to cover the range of volumes, prices and margins based on historical data. Second, to test the cases a backcast case is run to see how well the model predicts actual historical results. Climate models likewise need to run backcast tests to see how well they perform versus actual historical data.

    However, the real test comes after the fact: whether, and how well, the base case predicts actual performance in the future. Often not that well. The input assumptions on the economy and markets can’t be predicted or tested a priori with certainty because the systems are overspecified. This is also the case with climate forecasts, and here the challenge is much more difficult since the climate models are sorely lacking in such things as clouds, moisture, solar variability, etc.

    Someone above gave a link from a Google search illustrating a bunch of examples of color charts produced by climate scientists. I assume this was to suggest how wonderful climate models are at generating beautiful colorful contour plots illustrating such things as temperature prediction, sea level rise, movement of the short-tailed marmots in the Canadian Rockies during mating season, etc. These are reminiscent of the bank models that we would run based on various sets of assumptions. Putting aside the beautiful and impressive gradient plots, the outputs are only as good as the basic models and scenario assumptions they are based on, which may be good but may also be GIGO: garbage in, garbage out. And today people are very skilled at googling and finding pages and pages of such beautiful and impressive charts.

    The lesson here is that glossy colorful contour plots, which abound in articles appearing in professional journals such as Science and Nature, are only as good as the models and assumptions upon which they are based. However, government departments and mainstream media then memorialize the beautiful glossy colorful charts to push their own policy agendas, failing to discuss or even mention the limitations of the models and data on which they are based.

  27. Dan, I may have missed it, but I think your post gives far too little attention to subgrid models such as turbulence models. All turbulent flows are time dependent and chaotic. The issues you raise are indeed issues, but it’s the subgrid models that are really a fundamental unsolved problem. There are also interesting issues surrounding time-accurate calculations, but I don’t have time to go into them now.

    I have a layman’s introduction to CFD I may send to Judith soon. Your bottom line, Dan, is correct. Appeals to the “laws of physics” are misleading and may deceive laymen into thinking GCMs are just like structural analysis. That’s a dangerous falsehood.

  28. F.W. Aston received the 1922 Nobel Prize in Chemistry for measuring the masses of atoms and reporting them as nuclear “packing fractions.”

    Drs. Carl von Weizsacker and Hans Bethe did not understand nuclear “packing fractions” and proposed the seriously flawed concept of nuclear “binding energies” instead.

  29. Dan,
    I’ve found this in model doc’s a few times, but I think it would be a good addition to your article.
    There is a mass-conservation hack at air-water boundaries: it allows a supersaturation of water vapor, because otherwise the models don’t warm enough. This hack actually warms too much, so they monkeyed with aerosols to tune the output.
    Actually I saved this one. It’ll probably disappear now :)
    http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000
    Now this math is beyond me, and maybe it doesn’t do what I think, but I’ve seen earlier variants of this back in one of Hansen’s Model D(?) TOEs.

    • Approximations that gain or lose energy or mass can be tested in a control model to ensure that the somewhat arbitrary correction doesn’t introduce warming or cooling in the absence of a change in forcing.

      “Monkeying with aerosols” is an entirely separate issue.
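
      The control-run test described here is straightforward to sketch in code. Below is a minimal, purely illustrative toy, not taken from CESM or any GCM; every function, name, and number in it is hypothetical. It steps a trivial relaxation model with and without a small arbitrary “fixer” term, under constant forcing, and checks for a spurious trend.

```python
# Hypothetical sketch: does an ad hoc energy/mass "fixer" term introduce
# drift in a control run with constant forcing? Toy relaxation model only.
import numpy as np

def step(T, forcing, fix=0.0):
    # toy relaxation toward equilibrium plus an arbitrary correction term
    return T + 0.1 * (forcing - T) + fix

def control_run(n_steps=5000, forcing=1.0, fix=0.0):
    T = np.empty(n_steps)
    T[0] = forcing            # start at equilibrium
    for i in range(1, n_steps):
        T[i] = step(T[i - 1], forcing, fix)
    return T

baseline = control_run()
with_fix = control_run(fix=1e-5)   # the "somewhat arbitrary correction"

# Any linear trend in (with_fix - baseline) flags spurious warming/cooling
# introduced by the correction despite the forcing never changing.
drift = np.polyfit(np.arange(5000), with_fix - baseline, 1)[0]
print(f"spurious drift per step from the fixer: {drift:.2e}")
```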

  30. Brunch break. I’m not on the clock any more, so I can take long-ish brunches. Back later.

    Thanks for the great comments.

  31. There is a separate fundamental problem with GCMs. The finest feasible grid scale is presently 110 km x 110 km at the equator. Most CMIP5 models are 250 km. Yet we know from weather models that to properly resolve convection cells (thunderstorms and precipitation) a grid of 4 km or less is required. The NCAR rule of thumb is that doubling resolution by halving grid size increases the computational burden 10x (the time step has to more than halve also). The computational burden at the scale needed to minimally model key climate processes like convection, precipitation, and clouds is 6-7 orders of magnitude more than presently feasible with the biggest, fastest supercomputers.
    So all GCMs have to be parameterized to get around the computational intractability of global climate. Those parameters are tuned to best hindcast; for CMIP5 the tuned hindcast period was YE2005 back to 1975, three decades. And that automatically creates the attribution problem. The rise in temp from ~1920-1945 is essentially indistinguishable from the rise from ~1975-2000. Yet the IPCC itself says the former rise was mostly natural; there simply was not a sufficient increase in CO2. Yet the IPCC attribution in the later period is to the GHE, mainly CO2. Pretending natural variation has ceased is a fundamental error, which the growing divergence between modeled and observed (balloon and satellite) temperatures is revealing.
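
    The orders-of-magnitude claim above is easy to check with back-of-envelope arithmetic. A minimal sketch, assuming only the NCAR rule of thumb quoted in the comment (roughly 10x computational cost per halving of grid size):

```python
# Back-of-envelope cost of refining a GCM grid to convection-resolving scale,
# assuming the quoted rule of thumb: ~10x cost per halving of grid size.
import math

for coarse_km in (110.0, 250.0):       # current grid scales cited above
    target_km = 4.0                     # scale needed for convection cells
    halvings = math.log2(coarse_km / target_km)
    cost_factor = 10.0 ** halvings
    print(f"{coarse_km:5.0f} km -> {target_km} km: {halvings:.1f} halvings, "
          f"~{cost_factor:.0e}x the computation")
# ~5 orders of magnitude from 110 km grids and ~6 from 250 km grids, in line
# with the 6-7 orders quoted above once storage and I/O overheads are added.
```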

  32. “The art and science of climate model tuning”
    http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-15-00135.1

    Page 37 shows of the climate modelers surveyed, 96% answered “yes” to the question

    “is your model being tuned by adjusting model parameters to obtain certain desired properties, e.g. radiation balance”

    • Just imagine all the models we could develop if we could explore a world with no radiative balance.

      The scales are being tipped. Institutions are being bought. Wake up.

      Let’s fund research on models that don’t preserve radiative balance!

      • Just imagine all the models we could develop if we could explore a world with no radiative balance.

        Exactly when is it in balance? What day and time, exactly? And by balance do you mean quiescence, incoming=outgoing?

      • Planet is 4.543 billion years old, and though it’s cooler than when it first coalesced, it’s neither reached the temperature of the CMB, nor that of the bright yellowish orb which heats it.

        Steady state is your huckleberry, MiCro. Relax, even engineers use it.

      • Relax, even engineers use it.

        Then the wind really starts blowing, the bridge starts a’oscillating, then it all falls down.

      • Varying galactic cosmic ray radiation changes aerosol and cloud development. Solar cycles vary Total Solar Irradiance.
        So earth’s incoming and outgoing radiation are NOT in balance but cause variations in surface heating/cooling.
        Burning biomass and coal both increases soot and aerosols (aka “brown” or “black” “carbon”).
        Indeed we should fund models that model, quantify, and predict consequences of such natural variations.

      • Exactly when is it in balance? What day and time, exactly? And by balance do you mean quiescence, incoming=outgoing?

        They’re talking about the long-term energy balance, that energy in == energy out. (Read the paper, it’s good, and it explains what they mean).

      • > Exactly when is it in balance? What day and time, exactly? And by balance do you mean quiescence, incoming=outgoing?

        Only three questions, micro? Are you sure you can’t do better than that? Do you have any idea how important it is just to ask questions around here?

        Is the truth out there, or what?

      • Only three questions, micro? Are you sure you can’t do better than that? Do you have any idea how important it is just to ask questions around here?
        Is the truth out there, or what?

        But if you want to avoid (or not avoid) funding models that don’t preserve radiative balance, surely you must be able to define what you wish to avoid!
        How will you avoid such a model?

      • To avoid such a model, do as Dan does and do none!

        But what if I have two? How would I pick the one Willard would choose as better?

  33. The sole issue for computational physics is Verification of the solution.

    How do we do that?

    • There are books on the subjects of Verification and Validation of models, methods, software, and applications. The two that I turn to are:

      This book

      And this book

      There are a bunch of reports, and associated journal papers, from Sandia National Laboratory. Their Web site has links to reports about their research results.

      A Google, either plain or Scholar, will produce many, many hits. Here’s an example:

      This Google search

      Papers are now appearing frequently in journals devoted to numerical methods and various science and engineering disciplines.

      GCMs contain models and methods for numerous aspects of the physical domain, and have a wide variety of application objectives. Direct application of accepted Verification procedures to the whole ball of wax is very likely not possible. That does not prevent the various pieces from being individually investigated.

      The Method of Exact Solutions (MES) is usually a good starting point because that method requires that the model equations be extremely simplified, thus allowing focus. That is also a downfall of the method in the sense that the simplifications throw out the terms that are most difficult to correctly handle in numerical methods.

      The Method of Manufactured Solutions (MMS), on the other hand, has proven to be an excellent way to determine the actual order of numerical solution methods. Manufactured Solutions have been, and are being, developed all the time now. And MMS has also been used to locate bugs in coding. Again, a Google will find such reports and papers.

      The properties and characteristics of candidate numerical solution methods can also be directly investigated prior to coding them. Richtmyer and Morton is the standard classic introduction. Computer algebra and symbolic computing have greatly enhanced what is possible to learn by looking directly at the methods.
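
      To make the MMS idea concrete, here is a minimal, self-contained sketch. The equation, manufactured solution, and scheme are chosen purely for illustration and come from no particular code: manufacture u(x,t) = sin(pi x) exp(-t) as the exact solution of a 1-D heat equation with a derived source term, then confirm that the observed order of accuracy under grid refinement matches the theoretical second order.

```python
# Minimal MMS sketch: manufacture u(x,t) = sin(pi x) exp(-t) as the exact
# solution of u_t = u_xx + q by choosing q = u_t - u_xx, then measure the
# observed order of accuracy of a forward-in-time, centered-in-space scheme.
import numpy as np

def u_exact(x, t):
    return np.sin(np.pi * x) * np.exp(-t)

def q_source(x, t):
    # q = u_t - u_xx = (pi^2 - 1) sin(pi x) exp(-t)
    return (np.pi**2 - 1.0) * np.sin(np.pi * x) * np.exp(-t)

def solve(nx, t_end=0.1):
    x = np.linspace(0.0, 1.0, nx + 1)
    dx = x[1] - x[0]
    dt = 0.25 * dx**2              # stable explicit diffusion step
    u, t = u_exact(x, 0.0), 0.0    # exact initial condition
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)
        un = u.copy()
        un[1:-1] = (u[1:-1] + h / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
                    + h * q_source(x[1:-1], t))
        # Dirichlet boundaries taken from the manufactured solution
        un[0], un[-1] = u_exact(x[0], t + h), u_exact(x[-1], t + h)
        u, t = un, t + h
    return x, u, t

errors = {}
for nx in (16, 32, 64, 128):
    x, u, t = solve(nx)
    errors[nx] = np.sqrt(np.mean((u - u_exact(x, t))**2))

# With dt tied to dx^2 the scheme is formally second order in dx, so the
# error should fall ~4x per grid halving: observed order ~ 2.
grids = sorted(errors)
for coarse, fine in zip(grids, grids[1:]):
    print(f"nx={fine:4d}: observed order ~ {np.log2(errors[coarse]/errors[fine]):.2f}")
```

      If a bug corrupted the discretization, the observed order would drop below two; that is precisely how MMS locates coding errors.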

      • I can vouch that the Patrick Roache book is excellent (I purchased it on Dan’s recommendation).

        Note, several years ago, I had some posts at CE on climate model verification and validation
        https://judithcurry.com/?s=verification+and+validation

      • What single legitimate reason could possibly exist that explains why Western academia would steadfastly refuse to insist on robust model verification and validation in climate science?

        “What single legitimate reason could possibly exist that explains why Western academia would steadfastly refuse to insist on robust model verification and validation in climate science?”

        The longest chapter in the IPCC 5AR WG1 is Chapter 9: “Evaluation of Climate Models.”

      • For all their self-aggrandizing, the data manipulators of the global warming movement have become the Brian Williamses of science. But a betrayal of the public trust is their biggest crime. Roger Pielke, Jr. asked for a copy of the raw data back in August 2009 to conduct his research. He could hardly overlook the degree of scientific sloppiness and ineptitude demonstrated by CRU after being informed that only quality-controlled, homogenized data, i.e., adjusted data, was still available, as all of the original raw data prior to 2009 had been lost (forever making the duplication, verification or reevaluation of the homogenized data impossible). We’re talking about research practices that David Oliver (see “It is indicative of a lack of understanding of the scientific method among many scientists”) would describe as shoddy at best and fraudulent at worst.

        In the business and trading world, people go to jail for such manipulations of data.

        ~Anthony Watts

  34. As a lukewarmer who also doesn’t have much confidence in the current verifiability of climate modeling software codes, I’ve been challenged by climate activists in my own organization to produce my own climate model as an alternative to current GCMs.

    Recognizing the valuable contribution such a model could make towards gaining public acceptance of the pressing need for adopting fully comprehensive government regulation of America’s carbon emissions, I’ve accepted the necessity of producing my own climate model as an alternative to what’s out there now.

    But as someone whose normal job is working down in the nitty-gritty trenches of nuclear plant construction and operations, I don’t have millions of dollars of my own to spend on computer processing time and on gaining access to the services of the legions of climate scientists, climate software coders, and climate software QA specialists that would be needed to produce my own version of a software-driven climate model.

    Great galloping gamma rays, what am I to do!?!?

    This is my solution: Graphical analysis to the rescue! As I’ve previously posted on Climate Etc., here once again is my own graphical climate model of where GMT might go between now and 2100:

    http://i1301.photobucket.com/albums/ag108/Beta-Blocker/GMT/BBs-Parallel-Offset-Universe-Climate-Model–2100ppx_zps7iczicmy.png

    Three foundational assumptions are made in the Parallel Offset Universe Climate Model: (1) Trends in HadCRUT4 surface temperature anomaly can be used as a usefully-approximate measurement parameter in predicting future temperature trends in the earth’s climate system as a whole; (2) Past history will repeat itself for another hundred years with the qualification that if the earth’s climate system is somewhat more sensitive to the presence of carbon dioxide, there will be more warming; if it is somewhat less sensitive, there will be less warming; and (3) Upward trends in HadCRUT4 surface temperature anomaly will roughly parallel current upward trends in atmospheric CO2 concentration.

    That’s it, that’s the whole model. There is nothing more to it than what you can read directly from the graph or what you can directly infer from its three alternative GMT trend scenarios. There is no physics per se employed in the construction of this model, parameterized or otherwise. There are no physics-based equations and no numerical analysis simulations of physics-based equations carrying artificially-imposed boundary constraints. There is no software coding involved. There is no software QA because there is no software coding; and there is no model validation process other than to follow HadCRUT4’s trends in surface temperature anomalies year by year as those trends actually occur.

    As a service to all humanity, I, Beta Blocker, mild mannered radiological control engineer, hereby relinquish all personal rights to the Parallel Offset Universe Climate Model in the hope that dedicated climate activists such as David Appell, Bill McKibben, Leonardo DiCaprio, and Hillary Clinton — supported by dedicated environmental activist groups such as 350.org, the Children’s Litigation Trust, the Natural Resources Defense Council, and the Sierra Club, etc. etc. — will move decisively forward with pressing the EPA to strongly regulate all sources of America’s carbon emissions, not just the coal-fired power plants.

    • Brave move. These guys welcome all opinions, except when they don’t. Thank you curryja. Nuclear power? I’m there. http://www.eia.gov/state/?sid=MD.

    • Outstanding! One thing we all know for sure is that the near future looks like the recent past, except for when it doesn’t.

    • John and Justin, would either of you care to speculate as to what the debate concerning the long-term impacts of ever-increasing concentrations of CO2 in the earth’s atmosphere might look like a hundred years from now if my Scenario 3 is the one that actually occurs; i.e., atmospheric CO2 concentration as measured by the Keeling Curve reaches approximately 650 ppm by 2100, and the earth’s climate system warms at roughly +0.1 degree C per decade on average between 2016 and 2100? If that’s what happens, what will our descendants be saying a hundred years from now concerning the predictions that were being made here in the year 2016?

  35. It appears that GCMs attempt to address climate treating the atmosphere as a continuum. That should be valid on a macro level.

    An oversight is not addressing action at the level of gas molecules. Thermalization takes place at the level of gas molecules. Thermalization explains why CO2 (or any other ghg which does not condense in the atmosphere) has no significant effect on climate. (Thermalization results from the interaction of atmospheric gas molecules according to the well-understood kinetic theory of gases. A smidgen of quantum mechanics helps in understanding that ghg molecules absorb only specific wavelengths of terrestrial electromagnetic radiation.)
    http://globalclimatedrivers2.blogspot.com

  36. Willis Eschenbach

    Thanks for a good read, Dan. I liked this part:

    While the fundamental equations are usually written in conservation form, not all numerical solution methods exactly conserve the physical quantities. Actually, a test of numerical methods might be that conserved quantities in the continuous partial differential equations are in fact conserved in actual calculations.

    Looking at the GISS Model E code more than a decade ago now I noticed that at the end of each time step the total excess (or lack) of energy from various small errors in all areas of the globe was simply gathered up and distributed evenly around the planet. I asked Gavin Schmidt if I understood the code correctly. He said yes. I asked how large the distributed energy (or lack of energy) was on average, and what the peak was. He said he didn’t know, they didn’t monitor it …

    w.
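
    The kind of monitoring Willis describes is cheap to implement. A minimal sketch, assuming a toy 1-D flux-form advection scheme on a periodic grid (nothing here comes from Model E): keep a running ledger of the global sum every step and report the worst-case conservation error, rather than silently redistributing it.

```python
# Conservation ledger for a toy flux-form (conservative) advection scheme.
import numpy as np

def upwind_step(q, c=0.5):
    # conservative upwind advection on a periodic grid (Courant number c)
    return q - c * (q - np.roll(q, 1))

q = np.exp(-((np.linspace(0, 1, 200) - 0.3) / 0.05) ** 2)  # initial blob
total0, worst = q.sum(), 0.0
for _ in range(1000):
    q = upwind_step(q)
    worst = max(worst, abs(q.sum() - total0))   # the conservation ledger
print(f"max drift in conserved total over 1000 steps: {worst:.3e}")
# A flux-form scheme conserves to round-off; a non-conservative variant
# would show the ledger growing, which is what such monitoring would catch.
```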

  37. A Full Scope Replica Type Simulator has been built and commissioned by IGCAR for imparting plant-oriented training to PFBR (Prototype Fast Breeder Reactor) operators. The PFBR Operator Training Simulator is a training tool designed to imitate the operating states of a nuclear reactor under various conditions and to generate responses to operator actions equivalent to those of the reference plant. Basically, the models representing the plant components are expressed by mathematical equations, with the associated control logics built into the system as per the actual plant, which helps in replicating the plant dynamics with an acceptable degree of closeness. The operator carries out plant operations on the simulator and observes the response, similar to the actual plant.

    http://waset.org/publications/10000558/verification-and-validation-of-simulated-process-models-of-kalbr-sim-training-simulator

    • It has been long recognized that the world-wide nuclear power industry has been the leader in establishing V&V, and other model, methods, software, and application quality procedures and processes. And within the industry, the USA has been the lead, and within the USA the United States Nuclear Regulatory Agency (USNRC) has been the major driver. Pat Roache, starting in the 1980s, was the pioneer in getting the attention of other scientific and engineering applications and associated disciplines, primarily through professional societies. Again, The Google is your friend.

      • The atmosphere is not the same size as a nuclear powerplant, Dan. Teh stoopid modulz are not mission critical.

        Besides, V&V cost money.

      • The C in USNRC stands for Commission, of course, and I wrote Agency.

      • The size of the physical domain does not introduce any limitations relative to fundamental quality. It may well impact applications, but the fundamentals require verification no matter what the application limitations.

        Yep, quality costs money. Lack of quality, on the other hand, costs very much more.

      • > The size of the physical domain does not introduce any limitations relative to fundamental quality.

        Of course size matters in V&V.

        Maybe it’s a vocabulary thing.

      • Dan Hughes,

        The size of the physical domain does not introduce any limitations relative to fundamental quality.

        The unintentional humor exhibited by VSPs is always the best sort.

        How many sensors per unit volume in the average nuke plant? Extrapolate to the volume encompassed by all the fluids below the tropopause.

        Hang the cost, let’s discuss the *physical* feasibility of that answer.

        Or you know what? We could continue pushing the real planet toward limits not seen for over a million years and just find out what happens. Who needs stinkin’ models, validated to impossible standards or not, when *hard data* can tell us all we need to know? Sure some smart engineers somewhere will be able to fix whatever might break just fine. It’s what they do.

      • Willard, I do not see that the size of the physical domain is addressed in that paper. In fact, I do not see a single reference about any physical domain. The sole focus of the paper is software.

        Kindly point me to what I have not seen.

      • brandonrgates, How many sensors per unit volume in the average nuke plant?

        On what theoretical basis is sensor density per unit volume in an engineered electricity-production facility system a scaling factor for what is needed in any other systems? Especially considering that the reference systems have safety-critical aspects.

      • > I do not see that the size of the physical domain is addressed in that paper.

        It’s right next to where the author admits he’s beating his dead horse with a stick, Dan.

        The first sentence ought to be enough:

        It is becoming increasingly difficult to deliver quality hardware under the constraints of resource and time to market.

        The bit where IBM admits using verification to find bugs more than to check for the correctness of their hardware may also be of interest. Logic gates are a bit less complex than watery processes and all that jazz.

        That said, it’s not as if modulz were never V&Ved. It still has a cost. It still is quite coarse compared to nuclear stations or motherboards.

      • brandonrgates, it seems that the instruments already applied to Earth’s climate systems produce an extremely large number of data points. Maybe that’s related to being able to get volumetric information from single instruments?

      • Dan Hughes,

        On what theoretical basis is sensor density per unit volume in an engineered electricity-production facility system a scaling factor for what is needed in any other systems? Especially considering that the reference systems have safety-critical aspects.

        Yes, those are excellent questions. They’re the sort of things I’d be thinking about if I were an engineer writing an article about best coding and validation practices for planet simulators.

        First thing I’d do is get a sense for the scale of the thing. Let’s generously assume that the critical systems of your average nuke occupy the volume of an average 2-story home in the United States … I make it about 600 m^3. The atmosphere below the tropopause alone is a 9 km thick shell wrapped around a spheroid with a volumetric radius of about 6,371 km … works out to a volume of 4.60E+18 m^3. What is that … sixteen orders of magnitude difference in volume.

        If we had better models and a bazillion times more computing power, I could give you a better outline of the safety-critical aspects. That might give a better clue as to what the sensor density and time resolution needs to be to do a proper validation.

        Gets circular real quick, dunnit.

        One thing I can say with some confidence … there’s gonna be some slop for the foreseeable future. On behalf of the sheer physical scale and complexity of the entire freaking planet, I offer my most sincere apologies about that.

        I’d think engineers would know better than to go monkeying around with machinery *they themselves* are telling us we don’t yet properly understand. But that’s just me.

      • What Dan isn’t telling you guys is that Validation does not refer to “reflects reality”

        Validation means meets the Specification.

        if climate sceintists wanted to be sneaky all they would have to do is specify that the models shall be within 100% of observed values, and the spec would be met and they would be validated.

      • Steven Mosher,

        You wrote –

        “if climate sceintists wanted to be sneaky all they would have to do is specify that the models shall be within 100% of observed values, and the spec would be met and they would be validated.”

        On the other hand, the “sceintists”, having mastered the elements of spelling, could specify that their models are completely useless.

        Voilà! Specification met!

        Just as a matter of interest, “. . . within 100% of observed values . . . ” appears to be a foolish way of specifying anything, without also specifying what you are measuring. If you observe a value of 1C, you appear to be limiting yourself to plus or minus 1 C (100% of 1). You might be referring to Kelvins, I suppose, in which case your model is completely pointless.

        Playing with words cannot disguise the fact that climate models have so far demonstrated no utility whatsoever.

        Cheers.

      • Steven Mosher,

        Validation means meets the Specification.

        lol. Well I must admit, that one did get by me.

        The consolation is a supreme irony: contrarians rarely specify a threshold for utility. The rallying cry is, “The models are WRONG,” which is about as illuminating as calling water wet.

      • http://www.dtic.mil/ndia/2012systemtutorial/14604.pdf

        “The purpose of Validation (VAL) is to demonstrate that a product or product component fulfills its intended use when placed in its intended environment. In other words, validation ensures that ‘you built the right thing.’”

        “Extreme IV&V can be as expensive and time consuming as the development effort. An example of this is Nuclear Safety Cross Check Analysis (NSCCA), conducted by an organization independent of the development organization (usually a different contractor). Its purpose is to identify and eliminate defects related to nuclear vulnerabilities: the reentry vehicle (RV), with a nuclear warhead, shall hit the intended target, not New York or Washington D.C.”

      • Or you know what? We could continue pushing the real planet toward limits not seen for over a million years and just find out what happens. Who needs stinkin’ models, validated to impossible standards or not, when *hard data* can tell us all we need to know?

        Outstanding idea. I’m fine with this and wish I had thought of it.

        The global whiners are going to keep complaining about fossil fuel use and CO2 emissions until we prove them wrong.

        The solution is to prove them wrong. We should subsidize fossil fuel producers and encourage fossil fuel consumption as part of an effort to deliberately meet or exceed the atmospheric CO2 level that the global whiners deem to be the “level of harm”.

        At that point we can declare the global whiners wrong and proceed to the next eco-environmental challenge.

        The only harm from 500, 600, 700, or even 940 PPM is that you have to mow your grass more and farm prices will be depressed a little by overproduction.

  38. Excellent summary!

  39. “1. Basic Equations Models The basic equations are generally from continuum mechanics such as the Navier-Stokes-Fourier model for mass, momentum and energy conservation in fluids…”

    I can’t verify this, but I understand there is more non-linear coupling in those models than in a whole field of bunnies.

  40. As a point of passing interest, how many hands believe energy is conserved in the Navier-Stokes equations?

  41. Almost always whenever models, methods, and software issues are the subjects of blog posts, we see calls for individuals to pony up with their own models, methods, and software. No individual can have expert/guru knowledge, experience, and expertise in all of the important physical phenomena and processes of the physical domain. Typically for cases in which the physical domain has a range of important phenomena, the modeling effort will have an expert/guru for each one. Together, they will generally provide the initial efforts to formulate a tractable problem.

    From that point onwards, the number of people that are required to successfully complete a project will only increase. Such is the nature of inherently complex physical domains and associated complex software.

    Hundreds of millions of dollars, over decades of time, have been spent on development of GCMs by maybe thirty organizations.

    Such efforts are somewhat beyond what an individual can accomplish.

    • “Hundreds of millions of dollars over decades of time, have been spent on development of GCMs by, maybe, thirty organizations.”
      And did they all get it wrong? In the same way?

      • And did they all get it wrong? In the same way?

        They better all be wrong in the same way.
        Because what’s wrong is the numerical solutions to the physics.

      • Turbulent Eddie:

        You wrote “They better all be wrong in the same way. Because what is wrong is the numerical solution to the physics”

        No, what is wrong is their inclusion of greenhouse gasses in their models.

        I have proof, from several directions, including one quite unexpected, that all of the warming that has occurred has been due to the reduction of SO2 aerosol emissions into the atmosphere. Greenhouse gasses can have had zero climatic effect.

      • NS, best as I can tell most did. Exception maybe the Russian INM-CM4. Previously discussed elsewhere. Most produce a non-existent tropical troposphere hot spot. Most produce an ECS ~2x that observed by EBM or other methods. AR4 black box 8.1 justifies the emergent equivalent of following Clausius-Clapeyron across the altitude humidity lapse rate; that was proven wrong at the time, and since then by more studies. Yes, specific humidity does increase with delta T. But not enough to keep rUTH roughly constant.
        The fundamental unidirectional flaw was to tune CMIP5 parameters to best hindcast YE2005 back to 1975 (the second required ‘experimental design’ CMIP5 submission). That inherently sweeps in the attribution problem (comment elsewhere this thread). So most got it wrong for the same basic reason, in the same basic ‘overheated’ direction. QED.

      • Rud,
        “Most produce a non-existant tropical troposphere hot spot. “
        And recent results (here and here) say they are probably right.

        “Most produce an ECS ~2x observed by EBM or other methods.”
        So who’s right? Heating is at an early (transient) stage.

        “The fundamental unidirectional flaw was to tune CMIP5 parameters”
        If that’s a flaw (big if), it is model usage. It has nothing to do with the structural issues claimed in this guest post.

      • > I have proof, from several directions, including one quite unexpected, that all of the warming that has occurred has been due to the reduction of SO2 aerosol emissions into the atmosphere.

        Citation needed.

        I have proof, from several directions, including one quite unexpected, that all of the warming that has occurred has been due to the reduction of SO2 aerosol emissions into the atmosphere.

        SO2 and CO2 are not mutually exclusive. The effect of SO2 would appear uncertain, however, since cloud droplets are quite transient and the scattering from them occurs in all directions (and so they’re not readily observed by moving satellites that sample from only one direction at a time).

        Greenhouse gasses can have had zero climatic effect.

        Below is a scatter plot of monthly CERES satellite-estimated Outgoing Longwave Radiation, OLR (which is much more isotropic than SW), versus monthly global average surface temperature. The monthly data is dominated by the NH seasonal cycle. With this cycle, water vapor (the greatest greenhouse gas) varies with temperature.

        I have applied the Stefan-Boltzmann equation to convert the OLR to the effective radiating temperature (Te).

        http://climatewatcher.webs.com/TE_SFCT.png

        1. The thick black line represents the Unity line.
        For this line, Te = Tsfc. For an earth with no atmosphere (or with no greenhouse gasses) Tsfc would equal Te. The blue dots represent what actually happens on earth.

        2. The slope of the blue dots is less than 1. This indicates positive feedback. There are other factors, but the increase in water vapor with temperature can account for this.

        3. The distance from a blue dot to the Unity line represents the Greenhouse Effect. The average Tsfc is around 288K. The average Te is around 255K.
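
        For anyone wanting to reproduce the Te axis, the conversion is a one-line inversion of the Stefan-Boltzmann law. A minimal sketch with made-up illustrative numbers (not the CERES data behind the linked plot):

```python
# Convert OLR to effective radiating temperature Te = (OLR / sigma)^(1/4).
import numpy as np

SIGMA = 5.670374419e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
olr = np.array([235.0, 239.5, 244.0])   # hypothetical monthly-mean OLR, W m^-2
tsfc = np.array([286.5, 288.0, 289.5])  # hypothetical surface temperatures, K

te = (olr / SIGMA) ** 0.25              # effective radiating temperature, K
for o, t, ts in zip(olr, te, tsfc):
    print(f"OLR {o:6.1f} W/m^2 -> Te {t:6.1f} K, greenhouse effect {ts - t:5.1f} K")
# ~239 W/m^2 gives Te ~ 255 K; with Tsfc ~ 288 K the gap is the ~33 K
# greenhouse effect described in the comment above.
```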

      • TE, what is the order of the blue dots? Are they a sequence, or sorted by value?

        Because you could be seeing just the seasonal slope (which you mention). But that doesn’t mean feedback, just a strong seasonal signal.

        As for the seasonal data, I use the change in temp to get the rate of change, both warming and cooling, to see if the rate has changed. It has, slightly, but it could be just past an inflection point; during warm years (or months) the cooling rates will be higher than in cool years.

      • Turbulent Eddie:

        You wrote “SO2 and CO2 are not mutually exclusive”

        They are, in the sense that warming from the removal of SO2 aerosols is so large that there is simply no room for any additional warming from CO2.

        (Surface temperature projections based solely upon the amount of warming expected from the reduction in SO2 aerosol emissions are accurate to within less than a tenth of a degree C, over decades).

      • TE, what is the order of the blue dots? Are they a sequence, or sorted by value?

        Because you could be seeing just the seasonal slope (which you mention). But that doesn’t mean feedback, just a strong seasonal signal

        The points are all the monthly data from 2001 through 2009.

        The feedback occurs because water vapor also increases with the NH seasonal cycle. The “shape” of the warming is different from what might occur with increased CO2, but the relationship is global and still pertains.

      • The points are all the monthly data from 2001 through 2009

        I figured that, but since they are ordered by Tsfc, what is the order of the months?
        I would expect all the Decembers and Januaries at the extreme left, all the Julys and Augusts all the way to the right, and the rest sorted by temp between them.
        Is this how the months are ordered?

    • DH, an excellent post. You have posted before on this topic here, and it’s always enlightening. There are models and models. The 32 CMIP5 GCMs are enormously complex ‘finite element’ equivalents, and all doomed by the computational intractability of the small gridscales necessary to even begin to get important climate features like convection cells, clouds, and precipitation right from first principles. See, for example, AR5 WG1 chapter 7 on clouds. In terms of verification and validation, most CMIP5 GCMs have already invalidated themselves by producing a tropical troposphere hot spot that does not exist, by producing an ECS ~2x that observed, and by predicting polar amplification that is not happening in Antarctica.

      There are other much simpler models that still yield useful information bounding AGW. The EBMs that estimate sensitivity are one class (Lewis and Curry 2014). Monckton’s irreducibly simple equation, further reduced and properly parameterized, is another example (guest post at the time). Properly done paleoproxy reconstructions that give a sense of centennial-scale natural variation (e.g. Loehle, northern hemisphere). Guy Callendar’s 1938 paper on sensitivity. Lindzen’s Bode version of feedbacks and sensitivity. These are simple logic, easy math, quick to check, and good enough for basic understanding and directional policy decisions. Not massive opaque numerical simulations of ‘physics’ that have already gone wrong because they weren’t numerically simulating the necessary real ‘physics’ on the proper scales in the first place.

    • > Such efforts are somewhat beyond what an individual can accomplish.

      I concur, Dan. It’s team work. Here’s one:

      Large, complex codes such as earth system models are in a constant state of development, requiring frequent software quality assurance. The recently developed Community Earth System Model (CESM) Ensemble Consistency Test (CESM-ECT) provides an objective measure of statistical consistency for new CESM simulation runs, which has greatly facilitated error detection and rapid feedback for model users and developers. CESM-ECT determines consistency based on an ensemble of simulations that represent the same earth system model. Its statistical distribution embodies the natural variability of the model. Clearly the composition of the employed ensemble is critical to CESM-ECT’s effectiveness. In this work we examine whether the composition of the CESM-ECT ensemble is adequate for characterizing the variability of a consistent climate. To this end, we introduce minimal code changes into CESM that should pass the CESM-ECT, and we evaluate the composition of the CESM-ECT ensemble in this context. We suggest an improved ensemble composition that better captures the accepted variability induced by code changes, compiler changes, and optimizations, thus more precisely facilitating the detection of errors in the CESM hardware or software stack as well as enabling more in-depth code optimization and the adoption of new technologies.

      I think your conclusion also extends to criticism of teh stoopid modulz too. Auditing them is not a single man feat. A string of posts on V&V can only do so much.

      • Willard, a suggestion. Rather than read modeler bragging rights, go to KNMI climate explorer. You do know how to do that, right? Grab the CESM CMIP5 official archived results. Now compare them to the 4 balloon and 3 satellite temp observations from 1979. You will notice that not only did CESM do a lousy job AFTER parameter tuning for hindcasts, it did an even worse job of ‘projecting’ from 2006 to now.
        Or, you can just grab Christy’s Feb 2016 comparison spaghetti chart and sort out the CESM line. You are just wrong, and it is easily provable with archived facts. If you want to play here, up your data game.

      • > You are just wrong […]

        About what, Sir Rud?

        My turn to suggest a pro-tip: when you want people to go somewhere else, provide a link. Adding a quote also helps.

        Like this:

        Researchers have proved that extracting dynamical equations from data is in general a computationally hard problem.

        https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.108.120503

        I do hope that econometric gurus like you know what “hard” means in that context.

      • Willard

        You are right. There is a veritable smorgasbord of opinion on every thread. If someone wants us to partake of the morsel they offer they need to tempt us. The best way is to provide a link and a quote from it with perhaps a short comment as to its relevance and interest.

        tonyb

  42. Oops, this was for Nick

  43. Global climate models and the laws of physics. Thanks Dan but I found my answer in a cartoon.
    http://www.slate.com/blogs/bad_astronomy/2016/09/13/xkcd_takes_on_global_warming.html

    By that world renowned scientist. Can’t believe I wasted so much of my time reading you guys.

    • Don’t forget the sarc on a paleoproxy cartoon comment. Else we might end up in a ‘discussion’.

      • Heavy sigh. ristvan my pal. You caught me out. You are more learn-ed and eloquent than I. You comment and I’ll learn. I’ve come to realize I have nothing of substance to add to these ‘discussions’. ‘One less clown in the circus’. (timg56).

  44. Climate models are only flawed only if the basic principles of physics are,

    You are kidding, BIG TIME, right?

    Model output does not match measured data. That proves the models are flawed, proves that they don’t understand climate, and proves that they have not properly programmed the correct basic principles of physics.

  45. “The uncertainty principle states that position and velocity cannot both be measured, exactly, at the same time (actually conjugate pairs such as position-momentum and energy-time)” – requires expansion, but is true enough.

    A chaotic system may be fully deterministic, following the known laws of physics, yet may produce completely unpredictable divergent outcomes resulting from arbitrarily small differences in initial conditions. There is no minimum numerical quantity below which different inputs will result in known outcomes.

    For any non-believers, try to provide a minimum value which will result in either predictably chaotic or non-chaotic output from the logistic difference equation. You can’t do it. Chaos exists, and rules. (A sketch after this comment illustrates the divergence.)

    It should be apparent that Heisenberg’s uncertainty principle shows that inputs to a deterministic chaotic system such as the atmosphere cannot be precisely determined in any case.

    Lorenz’s butterfly effect taken to the limit of present understanding.

    The laws of physics appear to preclude the measurement of position and velocity simultaneously. Predictions based on what cannot even be measured appear to be breaking the laws of physics.

    Should offenders be prosecuted, and sentenced to write multiple times ” I must have regard to the laws of physics when pretending to predict the future”?

    Cheers.
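
    The logistic-map challenge above is easy to demonstrate numerically. A minimal sketch: two trajectories of x -> r x (1 - x) in the chaotic regime (r = 4), started 10^-9 apart, reach order-one separation within a few dozen iterations, and shrinking the initial gap only delays the divergence logarithmically.

```python
# Sensitivity to initial conditions in the logistic map x <- r*x*(1-x), r = 4.
r, x_a, x_b = 4.0, 0.400000000, 0.400000001   # initial states differ by 1e-9
for n in range(60):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if abs(x_a - x_b) > 0.5:
        print(f"order-one divergence after {n + 1} iterations")
        break
# With a positive Lyapunov exponent (~ln 2 here) the gap roughly doubles each
# step, so even a 1e-12 starting difference only buys ~10 more iterations.
```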

    • “It should be apparent that Heisenberg’s uncertainty principle shows that inputs to a deterministic chaotic system such as the atmosphere cannot be precisely determined in any case.”

      Baloney. The Heisenberg uncertainty principle is about the limitations of measurements on a quantum scale. It is irrelevant for macroscopic measurements, where measurement uncertainties are far, far above the Heisenberg limits, and where continuum equations do a great job of describing the physics. (You depend on that every time you get on an airplane designed via continuum equations.)
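
      The scale separation can be made quantitative. A minimal sketch, assuming a hypothetical 1-gram air parcel whose position is pinned down to one micron: the Heisenberg-limited velocity uncertainty lands some twenty orders of magnitude below anything an anemometer or thermometer can resolve.

```python
# Heisenberg limit for a macroscopic air parcel: dx * dp >= hbar / 2.
HBAR = 1.054571817e-34    # reduced Planck constant, J s
m = 1e-3                  # hypothetical 1 g parcel of air, kg
dx = 1e-6                 # position known to 1 micron, m
dv = HBAR / (2 * m * dx)  # minimum velocity uncertainty, m/s
print(f"Heisenberg-limited velocity uncertainty: {dv:.1e} m/s")
# ~5e-26 m/s: utterly negligible next to instrument noise, which is why
# continuum mechanics can ignore the quantum limit at these scales.
```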

      • David Appell,

        With respect, I believe you are wrong,

        Feynman said –

        “The simplest form on the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. you will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not.”

        I believe there is a million dollar prize from the Clay Institute (as yet unclaimed) which you can pick up if you can spare the time.

        You are not alone. Many, if not most, physicists still refuse to accept that chaos can result merely by changing the input value to an equation as simple as the logistic difference equation. Even worse for some, there is no minimum value which distinguishes chaos from non-chaos.

        As far as the atmosphere is concerned, simplistic assumptions that tomorrow will be much the same as today, or that “Red sky in the morning, shepherd’s warning”, suffice in most cases. Obviously, a satellite picture of a giant cyclone is helpful in deciding where not to be, but history shows that government warnings are iffy at best. Numerical prediction methods don’t seem to be useful in terms of accuracy.

        With regard to the airplane red herring, maybe you might care to specify an aircraft “designed via continuum equations”? Sounds sciencey, impressive even, but conveys no useful information. Now is your opportunity to lambaste me for quoting Feynman, a deceased physicist!

        Cheers.

      • Feynman isn’t saying the problem requires quantum considerations or the Heisenbert principle.

      • (You depend on that ever time you get on an airplane designed via continuum equations.)

        If you depend on a weather forecast for that plane to avoid thunderstorms, you are more likely to fly into one than around one.

  46. David Appell,

    I have not heard of the Heisenbert principle, so I will accept your assertion.

    However, your assertion about what Feynman is or isn’t saying is moot. He’s dead.

    Feynman did write that he was unable to solve the chaos inherent in supposedly simple turbulent flow, and he applied his not inconsiderable knowledge of quantum physics to the problem for several years.

    Feynman merely stated that an apparently simple problem was incapable of solution by calculation and knowledge of physics.

    Feynman was aware of Heisenberg’s principle, and stated –

    Heisenberg proposed his uncertainty principle which, stated in terms of our own experiment, is the following. (He stated it in another way, but they are exactly equivalent, and you can get from one to the other.) ‘It is impossible to design any apparatus whatsoever to determine through which hole the electron passes that will not at the same time disturb the electron enough to destroy the interference pattern’. No one has found a way around this.

    In a deterministic chaotic system, even a difference in position or velocity of just one electron may result in entirely unpredictable outcomes.

    You may not like it, but that’s the way it is (or seems to be – maybe the laws of the Universe may be different in the future).

    On a final note, Feynman also said –

    “For instance we could cook up — we’d better not, but we could — a scheme by which we set up a photo cell, and one electron to go through, and if we see it behind hole No. 1 we set off the atomic bomb and start World War III, whereas if we see it behind hole No. 2 we make peace feelers and delay the war a little longer. Then the future of man would be dependent on something which no amount of science can predict. The future is unpredictable.”

    “The future is unpredictable.” Seems clear enough to me.

    Cheers.

    • Mike: Feynman’s thoughts about water in a pipe have nothing to do with the uncertainty principle.

      I wish I had a nickel for every time some so-called “skeptic” quoted Feynman while misinterpreting him.

      • This seems like the typical overly anal interpretation of what someone “believes” instead of trying to understand what is being said.

        If you have a system you consider to be “chaotic” it just means there isn’t an exact solution. Instead you have a probability range. The more precise you want an answer, the less information you are likely to get. You can call it whatever you like, but the best answer is an unbiased as possible range of probability.

        Most of the issues “skeptics” have are due to the obvious bias and the incredibly moronic memes like, “uncertainty is not your friend.”

      • Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can – if you know anything at all wrong, or possibly wrong – to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition. (Richard Feynman)

      • You owe me $0.05

      • David,

        I would certainly pay you a buck next Tuesday for a link to PJ’s table of contents for the original data that he ‘dumped’, today. Any luck?

      • David,
        Do you find it strange that Phil Jones did not even save a copy of just what it was that he ‘dumped’ after he decided on his own to save ‘space’? No records, David. Science! UNbelievable.

  47. “(1) Application of assumptions and judgments to the basic fundamental “Laws of Physics” in order to formulate a calculation problem that is both (a) tractable, and (b) that captures the essence of the physical phenomena and processes important for the intended applications.”

    The dominant physical phenomena and processes reside unknown, as shadows on the wall of a cave named ‘Internal Variability’, leaving room only for the myopic solipsism which plays cuckoo and adopts any warming that it can lay its hands on.

  48. Dan, this article is WAAAY too long and misses the central point: it is not the basic physics that is the problem, it’s all the poorly constrained “parameters” for the bits for which we don’t know the basic physics.

    GCMs do NOT have basic physics for evaporation, condensation / cloud formation, precipitation, or how infra-red radiation interacts with a well-ventilated ocean surface.

    i.e. the key parts of the climate are basically unknown as “basic physics” and summarised as “parameters” that are guesstimated fudge factors.

    The whole basic physics theme is a lie because the key processes of climate are not modelled as basic physics. END OF.

    • Bingo.
      Politicians and pundits can claim to be “pro AGW,” “lukewarmer,” or “skeptic.” Science is advanced by opinion, comment, and analysis, but since “the key parts of the climate are basically unknown,” I would expect the technical people to list themselves in the “I don’t know” group.

  49. Chaos is not necessary to make a computation unstable.
    Any unstable manifold is enough to make answers diverge in the long term.
    But I guess instability + periodicity will probably imply some form of stretching/folding and hence chaos…

  50. What an excellent post and set of comments. I congratulate those of you possessing the physics chops required to make an intelligent contribution to the conversation. Not having these chops, I’ll try making my contribution elsewhere.

    In my opinion, Physics (emphasis on the “P”) explains everything. If you don’t believe me, ask God. Unfortunately, all we mortals have at our disposal is physics (emphasis on the “p”). Little-p physics is what Newton had, what Einstein had, and what Planck had. In fact, it’s what all today’s physicists, including you climate scientists, have. Now what all you little-“p”ers need is a bigger pot of humility so you won’t make a mess when you do your business.

  51. Pingback: Engineering the software for understanding climate change | …and Then There's Physics

  52. Watch this before you comment

    • Or maybe not.

      British Met Office, Hadley Centre.

      Would you buy a computer program from them?

      Cheers.

      • 1/3 the defect rate of NASA (53:58)? (“0.1 failures/KLOC” vs. “0.03 faults/KLOC”)

      • AK,

        In climate science computer programs, it seems a “bug” becomes a “minor imperfection”, and over time can turn into a “feature”.

        Apparently, computer science cannot be of further use to climate scientists, because the science is “done”, and the behaviour of the “climate” is well understood.

        Amongst other things, “games” need to be produced to enable decision makers to understand how smart the climate scientists are.

        I thought the current GCMs fulfil that role admirably.

        Cheers.

      • “Would you buy a computer program from them?”

        Ask Judith.

      • “Ask Judith.”

        Another two-word meaningless command from Steven Mosher. Why should I? It seems that the BBC gave up believing that the output from the Met Office programs was worth anything at all. Maybe Judith disagrees for all I know. Does it make a difference?

        Many people purchase programs claiming to predict stock market movements, horse racing results, and similar things.

        “A California astrologer said she had been consulted by the Reagans regarding key White House decisions . . . ” I suppose if the White House pays for astrological advice, it must be reliable. Only joking!

        That’s about as silly as believing car manufacturers’ computer programs in relation to fuel consumption, emission levels and so on. People can spend their money any way they like. Toy computer games are an example.

        Cheers.

      • I think UKMO was charging them too much and not providing the service they wanted (nothing to do with their models). The private-sector group that the BBC hired (MeteoGroup) is much better suited to provide the BBC with what they want.

      • curryja,

        From meteogroup –

        “The reason the forecast varies so much between the sources is because different companies look at different weather models. At MeteoGroup we have access to a number of models such as those shown below and then the forecasters analyse these models at the start of a shift;
        ECMWF
        EURO4
        UKMO Global
        KNMI (HiRLAM)
        GFS
        WRF
        It is likely the other companies have access to these or at least some of these models as well, but they are likely to weight their forecast on a specific model. For example, the Met Office spends a lot of money developing their own models, such as EURO4 and the UKMO Global model, so they are likely to use that model on a more regular basis. However, by only looking at one or two models you decrease how accurate your forecast will be, because the weather is a chaotic system; if the starting conditions are wrong then it is likely the forecast will be wrong.”

        Once again, hoping that the miracle of averaging will help. If any individual model was demonstrably superior, the others would be unnecessary.

        I cannot easily find accuracy claims for meteogroup (most commercial forecasters are remarkably coy – surprise, surprise!)

        However,

        “Below are the accuracy percentages of the ‘one to five day’ graphical and text forecasts from the top-10 Google ranked weather forecast providers in Essex.”

        Overall measured meteogroup accuracy? 76.12%. Next best? 75.95%.

        Another location or time? Who knows?

        From ForecastWatch (commercial provider) –

        “Because if you never predict rain, in most parts of the country you will have an accuracy of 70% or so.”

        Naive persistence forecasts do far better than this, of course. In temperate regions, tomorrow=today gives around 85% – depending on acceptable tolerances. It’s a matter of minutes to set up a spreadsheet, download a year’s worth of data for a given locality, and check for yourself, as in the sketch below.
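
        A minimal version of that check (Python rather than a spreadsheet; the temperatures below are random stand-ins just to make the snippet runnable – real station data, with its strong day-to-day persistence, is what produces the ~85%):

          # Persistence skill check: forecast "tomorrow = today" and score it.
          import random

          random.seed(1)
          # Stand-in data; replace with a year of real daily max temperatures.
          temps = [15 + 8 * random.random() for _ in range(365)]

          tolerance = 2.0  # a "hit" is a forecast within +/- 2 deg C
          hits = sum(abs(temps[i + 1] - temps[i]) <= tolerance
                     for i in range(len(temps) - 1))
          print(f"persistence accuracy: {100 * hits / (len(temps) - 1):.1f}%")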

        Maybe things will improve in the future.

        Cheers.

      • I have learnt much here: firstly, about modelling non-linear chaotic systems, from the early exchanges; and then, from the later exchanges, that climate scientists have far too little serious work to do. I had forgotten how easy life is in academe.

      • curryja,

        I cannot say if this is true or not, but some journalists are obviously of the opinion that the “huge computer model” is a factor in the spectacular forecasting failures by the Met Office.

        “But the chief reason why the Met Office has been getting so many forecasts spectacularly wrong, as reported here ad nauseam, is that all its short, medium and long-term forecasts ultimately derive from the same huge computer model, which is programmed to believe in manmade global warming.”

        As an aside, if there’s not much difference between forecasting products, why not just go for the cheapest? If the web site is sufficiently flashy and impressive, replete with the requisite jargon, nobody will care whether you’re just guessing that tomorrow will be just the same as today.

        Maybe Weatherzone has the answer –

        “Australia’s most accurate weather forecasts whenever and wherever you are.” And it’s FREE! Or you can pay $1.99, and get hi-res icons and dynamic backgrounds!

        But wait – there’s more!

        “AccuWeather is Most Accurate Source of Weather Forecasts and Warnings in the World, Recognized in New Proof of Performance Results”

        And – their “dramatic operations centre” has a 21 foot ceiling! Imagine that! 21 feet of space between the floor and the ceiling.

        I’m quite baffled why organisations such as the BBC don’t use the most accurate source of weather forecasts in the World. Why would anyone settle for second best?

        Please excuse my poor attempt at sarcasm, but it looks as though there are no end of organisations taking advantage of people’s willingness to suspend disbelief at the behest of any itinerant huckster.

        Feel free to delete this. I’m sure some true believers will be shaking with rage, or on the verge of apoplexy, at this point. I find the subject quite amusing, demonstrating yet again the human passion to believe that the future can be reliably ascertained, by consulting the appropriate deities, or their earthly representatives.

        Cheers.

      • Mike,

        My observation as a non-meteorologist looking from the inside is that comments like this one from Christopher Booker could be classified as not even wrong.

        “But the chief reason why the Met Office has been getting so many forecasts spectacularly wrong, as reported here ad nauseam, is that all its short, medium and long-term forecasts ultimately derive from the same huge computer model, which is programmed to believe in manmade global warming.”

        The tuning process of a weather and climate model requires that it provides a stable climate when no forcings are applied. And there are no forcings applied when running it as a weather model.

        The weather model is based on much more recent versions of the underlying Unified Model science and is trialled in many different weather scenarios (i.e. direct comparison with detailed observations taken over a period of a few days).

        The climate model configurations typically take several years to come to fruition due to the need to couple to other components, and are trialled against climatology statistics.

      • Steve,

        You may well be right. I am not sure what “not even wrong” means. I assume that an assertion is right, wrong, or indeterminate.

        I assume you are implying that the journalist in question is wrong, but I don’t know for sure.

        Your assumption that climate (that is, the average of weather) should be “stable” doesn’t seem to be supported by fact. The weather, and hence the climate, seems to be always changing. Chaotic, in fact, as stated by the IPCC.

        Have you any documents relating to the BBC’s reasons for dumping the Met Office? I’m not a fan of conspiracy theories, but you may have evidence to the contrary. Maybe you could provide it, if it wouldn’t be too much of an imposition.

        Cheers.

      • Steve

        I live just a few miles from the Met Office in Exeter, visit there regularly to use their archives and library, and have had meetings with a variety of their scientists. Yes, as you know, they do get things very wrong, and seem to fail especially on micro climates, which is what interests most of us.

        That they get things wrong so frequently is confirmed by comparing forecasts against reality, and also by the fact that, after 70 years, the BBC is ditching their forecasts and using a European group.

        I have a lot of time for the Met Office but they do need to improve their forecasting skills and not rely on the models they have developed.

        tonyb

      • “That they do get things wrong so frequently is confirmed by observations of forecasts over reality and also that after 70 years the BBC is ditching their forecasts and are using a European group.”

        As an ex-UKMO forecaster, and a regular watcher of forecasts, I am unaware that they “get things wrong so frequently” – any more than any other Met organisation does, anyway. Their NWP model is second only to ECMWF’s and is the basis of their mesoscale models, and their senior forecasters, who review the models before issuing modified fields, have a wealth of on-the-bench experience – certainly far more than any other foreign Met organisation.
        Meteogroup will only review those same models (because the MetO will still sell them to Meteogroup).
        What the BBC will get is new graphics and webpage design.
        However, it is undoubted in my mind that the decision rests with money.
        I have a friend who is a MetO BBC TV forecaster (for over 10 years, and very popular in his region – actually, country) and he will be forced to swap organisations to continue in the same job. The BBC is of course assuming that most will. They are probably right.

        “But they do need to improve their forecasting skills and not rely on the models they have developed.”
        I’m sorry Tony, but you betray your ignorance of on-the-bench operational weather forecasting with that comment.
        Models are king. Humans increasingly find it difficult to gainsay them. Yes, there are certain inherent traits to each model that can be corrected by human input, but in the real world it is often very difficult to go against them. They can be astoundingly accurate. I regularly plan my day by noting the arrival of rain to within an hour at my home in Lincs (from a forecast the day before).
        PS: although retired, I still have access to the UKMO’s media briefing pages via their intranet. I therefore read the senior forecasters’ explanations and thoughts, along with graphics and details the public do not see.
        I also must say that I still come across the “you’re always wrong” attitude. And the answer is usually that the person never properly “clocks” a forecast in the first place – added to the human fallibility of not recognising when they do, remembering the odd bad one from long ago (Fish’s “Hurricane”, anyone?), and never acknowledging the vast majority of good ones.
        PPS: I talk only of weather forecasts.

      • Tony Banton

        I am sure you will have realised that I have a soft spot for the Met office and often defend them here.

        However, we are fooling ourselves if we believe they achieve the degree of accuracy that might be expected from the millions of pounds invested in them over the years. The Met Office have a blind spot for the micro climates we all live in. In my area, on the coast, tourism is vital, and I lose count of the number of days a dire forecast keeps tourists away, only for it to turn out nice after all. It also works the other way round, of course, where tourists are lured here on the promise of good weather but the weather then forces them off the beach and into the cafes (fortunately!)

        I also worked with the Environment Agency, and as a specific result of their failure to predict the Boscastle deluge they were asked to go away and develop a model that more accurately predicted these types of events. Such events are especially worrying here in the west country, where catchment areas may be focused on tight valleys leading down to the sea, past villages and towns where water can back up if the tide is in or there are obstructions on the river.

        We have their app and constantly marvel at how often it is updated and rain becomes sun and vice versa. We would observe that their first forecast is often the best one.

        So, I hold no ill will toward the Met Office at all, and appreciate the difficulties associated with our type of climate, but in view of the enormous resources given to them I think it reasonable that their three-day forecasts, at least, should have a high degree of accuracy.

        tonyb

      • Mike said: “Your assumption that climate (that is, the average of weather) should be “stable” doesn’t seem to be supported by fact. The weather, and hence the climate, seems to be always changing. Chaotic, in fact, as stated by the IPCC.”

        Climate models are designed to produce a plausible but stable climate. That is because one can then estimate the impact of a perturbation to the model. The not-unreasonable expectation is that the Earth’s climate would be more stable if we didn’t have random volcanic eruptions, natural emissions and so forth to foul the temperature record.

        Absolutely agree that at the detailed level we cannot really say that the variability in a climate model is good enough to be very confident about detailed small-scale changes in weather under a warming scenario.

        Mike said “Have you any documents relating to the BBC’s reasons for dumping the Met Office? I’m not a fan of conspiracy theories, but you may have evidence to the contrary. ”

        I don’t have any inside knowledge on this at all. The Met Office management say they were dumped at an early stage – before money was discussed in detail.

        Rumours are that given that the BBC were under attack from the government they didn’t fancy giving money to a government organisation.

        The Met Office forecasters are no different from MeteoGroup in that they don’t just rely on the Met Office models. The Met Office model is objectively almost as good as the ECMWF model. Normally we get a lot of letters from senior emergency/police workers to tell us how great we were after a period of severe weather. And there are a lot of high profile commercial customers paying the Met Office several tens of millions per year for forecasts.

        But nobody is perfect.

      • “Once again, hoping that the miracle of averaging will help.”

        You seem to think that all models that start at a time t0 should be in the same state for all times t > t0. But there are good reasons why this doesn’t happen and why averaging is useful.

        One is imprecise knowledge of the initial state. There simply aren’t all the observations that a modeler ideally wants. So they have to make choices about how to handle that – do you interpolate between points where there is observational data, and if so how, etc.

        Second is parametrizations. Models don’t all use the same parametrizations (applications of the laws of physics), and it’s not clear which are better. How should the carbon cycle be described? Aerosols? Both are very complicated, and observational data about both is incomplete.

        In fact, when I’ve talked to modelers I often find they’re not as interested in projecting final states – sure, the public is – as they are in using models as experiments to understand the effect of different assumptions and parametrizations.

        So models aren’t going to end up in the same final state, even in the absence of equations with chaotic results. Averaging is a decent way to capture the spread in models due to their different inputs and assumptions.
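
        A minimal sketch of that spread (Python; the Lorenz-63 system with a crude Euler integrator stands in for an ensemble run from imperfectly known initial states – purely illustrative, not any operational system):

          # Twenty members launched from slightly perturbed initial conditions.
          import random

          def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
              x, y, z = s  # one forward-Euler step of Lorenz-63
              return (x + dt * sigma * (y - x),
                      y + dt * (x * (rho - z) - y),
                      z + dt * (x * y - beta * z))

          random.seed(0)
          members = [(1.0 + 1e-4 * random.gauss(0, 1), 1.0, 20.0) for _ in range(20)]
          for _ in range(2000):  # integrate 20 time units
              members = [step(s) for s in members]

          xs = [s[0] for s in members]
          mean = sum(xs) / len(xs)
          spread = (sum((v - mean) ** 2 for v in xs) / len(xs)) ** 0.5
          # Individual members have long since decorrelated; the mean and
          # spread summarize what the ensemble can still usefully say.
          print(f"ensemble mean x = {mean:+.2f}, spread = {spread:.2f}")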

      • “We have their app and constantly marvel at how often it is updated and rain becomes sun and vice versa”

        Tony: You obviously don’t understand how that is generated.
        It is a grid point taken at the closest point to you, and it just squirts out what the model is saying, perhaps even from the unmodified fields. If it’s a showery set-up then obviously (?) it will oscillate between sun and rain!!

        “However, we are fooling ourselves if we believe they achieve the degree of accuracy that might be expected from the millions of pounds invested in them over the years. The Met Office have a blind spot for micro climates which we all live in.”

        Let’s just agree to disagree on that Tony.
        I think they do better than any National Met service with the money they get. Especially considering the wages that scientists are paid there.
        Micro-regimes are dealt with via their meso models. Have you ever seen the output of surface wind streamlines?
        Let me tell you, THAT is all you need, along with knowledge of the local topography, to forecast for micro climates.
        Unfortunately, again it comes down to human input. Before I retired (because of the closure of Birmingham WC) I looked after the English Midlands and knew its intricacies. It closed, and the MetO promised its customers and the Government that it could do the job just as well centrally from Exeter. We told MetO management they couldn’t. What happened? They lost customers. Nothing they could do: the Government forced them into it by not funding adequately.
        Same thing with IT. Met IT does not have the staff to properly service customers. They don’t pay enough.
        No, the private peeps like Meteogroup can afford to pay for the best IT. Why? Because they don’t have to fund a new supercomputer every 6 or 8 years, or whatever it is, in order to improve NWP. Not to mention run an observing and data collection service and be one of the two World Area Forecast Centres (WAFC).
        No, there is no doubt in my mind that the MetO do a bl**dy good job considering.

    • Right, they “know” the answer so they need to figure out how to sell the “solution.” Two years with soil hydrology off by a couple hundred percent.

      • The ATF program (eventually the F-22 and F-35) ran for 10 years with a fundamental flaw in the ESA radar simulation code. A really big error, but when it came down to it, the system effect wasn’t that great.

        Toy example: I have a model of a bank account where I project future balances. For years, the module that projects income from lottery winnings has been horribly off. But in the grand scheme of things it made no sense to correct it, as it was not a grand driver of anything.

        It’s like this in any major simulation: some knobs have small gains.

        Objectively their code defect density is good.

      • “Its like this in any major simulation. some knobs have small gains.”

        And some don’t. Let’s see: models pretty much uniformly underestimate 30–60 north land amplification, yet estimate that land use is a negative forcing. One major land-use change since the first beaver pelt hit the market is land hydrology. The Aral Sea is now a desert thanks to poorly planned water use, and how many acres have been overgrazed?

      • Thanks to the Dakota Pipeline protests I read up on tribe migration, etc. Some speculate that the Comanche left the Dakotas in the early 1700s due to the Little Ice Age, which would pose a problem for hunter-gatherers. Pity they didn’t have a written language. In any case, they made room for the Dakota tribe, which did get along with the Comanche very well. The Comanche, though, were busy building an empire on the southern plains at the time and didn’t get along with anyone.

      • Why didn’t it pose a problem for the hunter gatherer tribes that remained in the Dakotas throughout the putative Little Ice Age?

      • Most likely it didn’t pose any problem for any hunter-gatherers. The Comanche probably left to conquer an empire after they learned to use horses.

      • Also, the dominant South Dakota tribe, the Arikara, were being pushed out by Sioux from the east. The Sioux had rifles and horses, and had been exposed to western military tactics, which they employed. A large, fortified Arikara village was discovered not far from our ranch. It was the scene of an apparent massacre. The Arikara, a farming and fishing culture, fled north… an odd direction to go in an ice age.

      • “Why didn’t it pose a problem for the hunter gatherer tribes that remained in the Dakotas throughout the putative Little Ice Age?”

        Don’t know that it didn’t.

      • JCH, The main Sioux migration was in the 1800s and the Comanche supposedly left in the early 1700s. The Canadian tribes migrated south so the Comanche were likely pushed out leaving room for tribes that tended to stay put longer.

    • Tony Banton

      The Met Office really need to do better but I agree that others are much worse than them. I go to Austria a lot and look at the Meteogroup forecasts every day. The forecasts are often so far divorced from actual reality that I have to check they aren’t giving me the weather for Australia!

      This business about micro climates is crucial. Whether that can be done well under the current set up is debatable.

      Perhaps the Met Office were given the chop by the BBC because someone there, just as I do, gets irritated by the generalities of the temperature range given in forecasts… ‘temperatures in a range today from 14 to 26C’ isn’t really being helpful!

      tonyb

    • Thanks for that link, Mosh’. So far I have got 13 minutes in, and realise that this guy has no idea about climate science but is there telling us what he has been fed by climate scientists.

      Full of the usual crap about “it’s all basic physics”. He clearly has NOT looked at how the code works and does not realise that they do not have “basic physics” equations for the key processes. He also claims that model output is not what our knowledge of climate is based on. Maybe he should read some of the IPCC reports.

  53. Below is a recent measure of dynamic forecasting.
    The forecasts are of the 500-millibar height field, scored against the verifying field.
    The measure, “Anomaly Correlation”, is a forecast-verification measure, not to be confused with “auto-correlation”.
    An “Anomaly Correlation” of less than 60 is considered unusable.
    There has been some remarkable improvement since 1981.
    However, all duration forecasts have plateaued recently, including the ten day forecast which is still not useful.

    https://i.guim.co.uk/img/static/sys-images/Guardian/Pix/pictures/2015/1/7/1420635756055/b585fc35-4707-45b4-bf6f-03fa649d17c3-1020×612.jpeg
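
    For reference, a minimal sketch of how an anomaly correlation is computed (Python with numpy; the arrays are hypothetical stand-ins for gridded 500 mb height forecasts, verifying analyses, and climatology):

      import numpy as np

      rng = np.random.default_rng(0)
      climatology = 5500 + 100 * rng.standard_normal(1000)     # per-point normals
      analysis = climatology + 50 * rng.standard_normal(1000)  # verifying field
      forecast = analysis + 40 * rng.standard_normal(1000)     # imperfect forecast

      fa = forecast - climatology   # forecast anomaly
      aa = analysis - climatology   # analysis anomaly
      acc = 100 * np.sum(fa * aa) / np.sqrt(np.sum(fa**2) * np.sum(aa**2))
      print(f"anomaly correlation = {acc:.1f} (below 60 is considered unusable)")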

    At what duration beyond 7 days would one expect forecasts to improve?

    If one believes that variations ‘average out’, how does one account then for the fact that one year varies from the next?
    And if one believes that years ‘average out’, how does one account for the fact that one decade varies from the next?
    And if one believes that decades ‘average out’, how does one account for the fact that one century varies from the next?

    • At what duration beyond 7 days, would one think that forecast improve?

      It doesn’t. It’s not supposed to.

      If one believes that variations ‘average out’, how does one account then for the fact that one year varies from the next?

      They don’t “average out”, any more than one rainy day and one sunny day average out to make two half-rainy days. It’s just a turn of phrase. When we talk about climate as the “average weather”, we mean that we’re talking about the *statistics* of weather, like how many rainy days and sunny days you’ll normally get in a year, and how many super-rainy days, and how often droughts come, etc.

      That’s what distinguishes weather from climate.
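
      A minimal illustration of that distinction (Python; the rain probability is made up): two runs of the same process disagree day by day – the “weather” – while their annual statistics – the “climate” – nearly coincide.

        import random

        def one_year(seed, p_rain=0.3):
            # One synthetic year: True means a rainy day.
            rng = random.Random(seed)
            return [rng.random() < p_rain for _ in range(365)]

        run_a, run_b = one_year(1), one_year(2)
        same_days = sum(a == b for a, b in zip(run_a, run_b))
        print(f"rainy days: run A = {sum(run_a)}, run B = {sum(run_b)}")
        print(f"days agreeing on rain/no rain: {same_days}/365")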

      • Yes, this goes to my point: the statistics of weather aren’t predictable either.

        Below is precipitation by latitude.
        For a given latitude band, days vary – fine.
        But years also vary.
        Decades vary.
        Centuries vary.
        These variations are representative of the chaotic fluctuations of circulation which are not predictable.

        https://www.e-education.psu.edu/meteo469/sites/www.e-education.psu.edu.meteo469/files/lesson02/IPCCfigure3-15-l.gif

      • Yes, this goes to my point: the statistics of weather aren’t predictable either.

        And yet, farmers know to plant in the spring and harvest in the fall. If the statistics of weather weren’t predictable, that would be impossible.

        January is pretty reliably cooler than July in the northern hemisphere. Some areas are pretty reliably desert, and others are pretty reliably jungle. And the Earth, as a whole, stays within a relatively narrow band of temperature.

        …But the statistics of weather are completely unpredictable?

      • That graph shows that annual precipitation is predictable within +/- 7% for almost the entire globe.

      • That graph shows that annual precipitation is predictable within +/- 7% for almost the entire globe.

        Right – so if the IPCC says where you live, precipitation will either increase, decrease, or be about the same, I’m down with it.

      • Right – so if the IPCC says where you live, precipitation will either increase, decrease, or be about the same, I’m down with it.

        Sounds pretty hard to get it wrong at all.

      • And yet, farmers know to plant in the spring and harvest in the fall. If the statistics of weather weren’t predictable, that would be impossible.

        January is pretty reliably cooler than July in the northern hemisphere. Some areas are pretty reliably desert, and others are pretty reliably jungle. And the Earth, as a whole, stays within a relatively narrow band of temperature.

        …But the statistics of weather are completely unpredictable?

        Glad you mentioned this. You may not have read above, where I tried to distinguish between predictable and unpredictable aspects, and their bounds. I don’t think I used the word “completely”, and certainly above I laid out a difference.

        Seasons, as you reference, are determined by some fairly stable phenomena – specifically astronomical orbits. This determines not only the net radiance but also the pole to equator gradient which gives us jet streams.

        Local temperature change is determined by local heating plus advection terms.
        So temperature is partly determined by unpredictable phenomena (whether there will be more ridges or troughs over a given area) but also by predictable phenomena, in this case the seasonal change in incoming solar radiation.

        Precipitation, on the other hand is much more a function of atmospheric motion than global average temperature. Correspondingly, precipitation is much more unpredictable because, within the bounds of fluctuation, atmospheric motion is unpredictable.
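
        A toy sketch of that split (Python; every number is illustrative): a predictable, orbit-driven seasonal cycle plus an unpredictable, circulation-driven term, with a crude red-noise process standing in for wandering ridges and troughs.

          import math, random

          random.seed(3)
          advect = 0.0
          seasonal_var = advect_var = 0.0
          n = 365
          for day in range(n):
              seasonal = 10.0 * math.sin(2 * math.pi * day / n)  # orbit-driven
              advect = 0.9 * advect + random.gauss(0, 1.5)       # circulation-driven
              seasonal_var += seasonal ** 2 / n
              advect_var += advect ** 2 / n
          # The seasonal part repeats every year; the advective part does not.
          print(f"seasonal variance ~ {seasonal_var:.1f}; advective variance ~ {advect_var:.1f}")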

      • Benjamin: My friend is a farmer. A real one. If he could find a forecast (better than the Farmer’s Almanac) that told him merely whether it was going to be a wet or dry summer (not even how much), he would pay $10,000 for it. But his money has been safe (though not his crops), because no one can do it yet.

  54. So, if the climate isn’t changing, can someone please explain how the Midwest is now getting monsoon type rainfall?
    50 years ago, a 3″ rain was virtually unheard of. That is my observation gained from living on a stock and grain farm, where we literally lived and died by the weather.
    Nowadays, torrential rains of 3–7+ inches are happening 3–4 times a year, with most of the rain falling in a couple of hours.

    I also remember winters being colder back then, with the temps on some nights falling to 15–20 degrees below zero, but not anymore. Now we have Januarys so warm the trees start budding out.

    Last December, mid-Missouri got close to 7″ of rain the week of Christmas, when the weather should have been cold enough to preclude that kind of moisture forming. Instead of snow we got record-breaking floods.

    Something is going on, and neither Obama’s plan to set up a money-grabbing carbon trading scheme nor people arguing back and forth over charts is the answer.

    • Greg

      You will find reading ‘The US Weather Review’ interesting. It started around 1830 and became more formalised around 1850. It lists weather and extreme events by state and often by county. Some of the weather in the 19th century was extraordinary.

      tonyb

    • Greg

      You may be interested in this snippet I took from the US Weather Review when I was researching climate at the UK Met Office library. It is just a snippet in time, of course. I don’t know your geography, but this one specifically mentions Missouri, as you did.

      ‘Feb 1888 . In the gulf states and Missouri valley, Rocky mountain and Pacific coast districts-except Southern California where the temperature was nearly normal-the month was decidedly warmer than the average, the excess over normal temperatures amounting to more than 4degrees f over the greater part of the area embraced by the districts named, and ranging from 6 to 10f in the northeast and central Rocky mountains region, Helena mountain being 11f above normal.’

      tonyb

    • can someone please explain how the Midwest is now getting monsoon type rainfall?

      The ocean cycles push the jet stream around, altering the surface track of all the tropical water vapor as it’s thrown off towards the poles to cool.

    • Last December, mid-Missouri got close to 7″ of rain the week of Christmas, when the weather should of been cold enough to preclude that kind of moisture forming.

      Of course, the moisture didn’t form there – it was advected (most likely from the Gulf of Mexico) as part of the storm system which also provided the lift for the precipitation. There is always plenty of moisture available over the Gulf to soak the US if the appropriate circulation exists. That goes to the topic of the post – unpredictable fluid flow gives rise to variations in weather.

      Now, December 2015 was the peak of an El Nino – a fluctuation of circulation. See if you can spot another El Nino in this plot of yearly daily-maximum precipitation (averaged over all reporting stations in the US):
      https://turbulenteddies.files.wordpress.com/2016/07/ghcn_conus_extreme_precipitation.png

      The 82/83 El Nino provided the all-time CONUS flooding rains, though there is a spike for most of the El Nino years. There is a trend, though still shy of significance, in 10 cm daily rains. Trends for higher rain amounts are smaller and not significant at all.

  55. Previously, David Appell said, as part of a sub-thread above:

    “The S-B Law is very much not a normal distribution (not a “Bell Curve”).”

    I didn’t say it was. Nor did the OP. He said, if I understood correctly, that F=ma gives, for a given a, a range of values of F that are Gaussian distributed.

    Which isn’t true. Nor, for a given temperature T, does the S-B Law give a range of emission intensities. There is a 1-1 relationship.
    [ Bold mine ]

    Now he says he agrees with Benjamin Winchester here.

    Young’s Modulus is a parameterization of an isotropic material property. Go deeper, and you get the anisotropic properties. Then you get them as a function of time. Then you look at how elasticity also varies microscopically – at grain boundaries, in high-stress regimes, etc. Young’s Modulus is a simplification of all of these nitty-gritty details.

    The Ideal Gas Law is another parameterization, coming from treating gas particles under conditions of elastic collisions. (Which is why it breaks down as you get close to phase transitions.)

    And, yes, Ohm’s Law is another one, a simplification of the laws surrounding electron scattering at quantum mechanical levels. You can get there from the Drude Model, if I recall right, which is itself a simplification.

    In the case of Young’s modulus, this interpretation seems to indicate that, given the strain, one can vary Young’s modulus over a range of values to get whatever stress you need.

    In the case of Ohm’s Law and the Ideal Gas Law, it appears that the entire law/model is a parameterization. Apparently, you can vary any of the quantities appearing in the law/model – including the actual physical properties: gas constant, electrical resistance, temperature, pressure, voltage, current, density – to get whatever other quantities you need.

    So how can F=ma not be a parameterization? Why can’t we go for the whole nine yards and declare that conservation of mass, conservation of energy, and every single material property – including all thermodynamic state, thermo-physical and transport properties – are all parameterizations?

    These simple examples of Young’s modulus, Ohm’s law, and the Ideal Gas equation of state present an opportunity to briefly mention some aspects of parameterizations.

    In the case of these simple example models, one would never consider varying material properties in order to determine the state of the material. We look up the material properties, plug them into the equations along with other known quantities to determine an unknown state property.

    Some parameterizations introduced when developing tractable science or engineering problems do in fact change the fundamental laws. Replacing gradients in driving potentials with bulk-to-bulk algebraic empirical correlations generally replaces a material property, and the associated gradients, with coefficients that represent states the materials have attained. In the case of energy exchange, for example, the thermal conductivity and temperature gradient are replaced by a heat transfer coefficient and a bulk-to-bulk temperature difference, along with some macro-scale representation of the geometry, as sketched below.
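
    A minimal sketch of such a bulk-to-bulk correlation (Python; the Dittus-Boelter correlation for turbulent pipe flow, with illustrative property values roughly like water – an engineering example, not any particular GCM parameterization):

      # Replace the local Fourier law q = -k dT/dy with q = h * (T_wall - T_bulk),
      # where h comes from an empirical fit to data (a parameterization).
      Re, Pr = 5.0e4, 6.0     # Reynolds and Prandtl numbers
      k, D = 0.6, 0.05        # conductivity (W/m/K), pipe diameter (m)

      Nu = 0.023 * Re**0.8 * Pr**0.4   # Dittus-Boelter: an empirical fit
      h = Nu * k / D                   # heat transfer coefficient (W/m^2/K)
      q = h * (350.0 - 300.0)          # bulk-to-bulk flux; no gradient resolved
      print(f"Nu = {Nu:.0f}, h = {h:.0f} W/m^2/K, q = {q:.0f} W/m^2")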

    Turbulence is another example. The simplest turbulence models replace a material property, the viscosity, with a model of what is usually called the turbulent viscosity. Some of these are rough, mechanistic models developed from idealizations of what turbulence is. Others consider in more detail the micro-scale physical phenomena and processes that are considered to characterize turbulence.

    GCMs do not use The Laws of Physics. Instead, models of The Laws of Physics are used. Actually, discrete approximations to The Laws are used, and these are approximately “solved” by numerical methods.

    • Dan Hughes:

      Thanks for mentioning turbulence. Viscosity is another parametrization.

      You wrote: “So, how can F=ma not be a parameterization. Why can’t we go for the whole nine yards and declare that conservation of mass, conservation of energy, and every single material property, including all thermodynamic state, thermo-physical and transort properties, all be parameterizations?”

      If you are trying to say that the laws of physics are themselves models, I’m fine with that. In fact, that’s often what physicists call them: “the Standard Model,” “the quark model,” etc., often before the model gets established with a more formal name, like “quantum chromodynamics.” One of Steven Weinberg’s most famous and useful papers was titled “A Model of Leptons.”

      You wrote: “In the case of these simple example models, one would never consider varying material properties in order to determine the state of the material. We look up the material properties, plug them into the equations along with other known quantities to determine an unknown state property.”

      I’m not talking about “varying material properties.” I’m talking about “looking up material properties.” Those “properties” that you look up – the resistance, or the Young’s modulus, or the viscosity – are *parametrizations.* If you want to find the current I flowing through a wire with a potential difference V, you don’t solve the 10^N electron-scattering equations of quantum mechanics to determine R; you (often) use a parametrization like I(V)=V/R, taking R as a constant and looking up a measured value. Often that suffices. Often it does not. But that’s what parametrizations are – simplified expressions of complex interactions.
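
      To make that concrete, a minimal sketch (Python; the temperature coefficient is a handbook value for copper, everything else is illustrative):

        # All the quantum-mechanical electron scattering is folded into one
        # measured number, R, here with a standard linear temperature correction.
        R0, alpha, T0 = 100.0, 0.00393, 20.0   # ohms at 20 C, 1/C, reference C

        def current(V, T):
            R = R0 * (1.0 + alpha * (T - T0))  # looked up, not derived from QM
            return V / R

        print(f"I at 20 C: {current(12.0, 20.0) * 1000:.1f} mA")
        print(f"I at 80 C: {current(12.0, 80.0) * 1000:.1f} mA")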

      You wrote: “GCMs do not use The Laws of Physics. Instead, models of The Laws of Physics are used. Actually, discrete approximations to The Laws are used, and these are approximately “solved” by numerical methods.”

      You’re talking about how to solve the equations that express the laws of physics. I’m talking about the laws of physics, which are taken as the starting points.

      • You’re talking about how to solve the equations that express the laws of physics.

        Yes, that’s the problem!

        The physics, and the equations which describe the physics, are accurate and valid (neglecting scale and parameterizations).

        Problem: the solutions to the equations mandate unpredictability.

        Now, that applies less to the heat content of the atmosphere (aka global temperature). Why? Because the heat content of the atmosphere as a whole is mostly INdependent of the motion fluctuations of the atmosphere, being a nice soluble input-output problem.

        But global average temperature isn’t very meaningful.
        Extremes of temperature, changes in precipitation, storms, etc. are much more dependent on motion, and are much more unpredictable.

      • “Extremes of temperature, changes in precipitation, storms, etc. are much more dependent on motion, and are much more unpredictable.”

        But not the averages.

        Consider a swimming pool. It’s very difficult to calculate T(x,y,z,t) — there is sunshine and wind and fluctuations and cool spots and warm spots and fluid motion.

        But it’s much easier to calculate the average, given sunshine and wind.

        Or, look at Maxwell-Boltzmann’s description of a gas. Difficult-to-impossible to calculate the velocity v(t) of each particle. But straightforward to calculate the average velocity.
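
        A minimal sketch of that contrast (Python with numpy; nitrogen-like molecules at 300 K): tracking each v(t) is hopeless, but the sampled average speed lands on the analytic Maxwell-Boltzmann value.

          import numpy as np

          kB, m, T = 1.380649e-23, 4.65e-26, 300.0   # J/K, ~N2 mass (kg), K
          rng = np.random.default_rng(0)

          # Each velocity component is Gaussian with variance kB*T/m.
          v = rng.normal(0.0, np.sqrt(kB * T / m), size=(1_000_000, 3))
          speeds = np.linalg.norm(v, axis=1)

          mean_theory = np.sqrt(8 * kB * T / (np.pi * m))   # <v> = sqrt(8kT/(pi*m))
          print(f"sampled mean speed:  {speeds.mean():.1f} m/s")
          print(f"analytic mean speed: {mean_theory:.1f} m/s")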

      • David and Dan, turbulence is of course where simulations are made or broken. Turbulence models are not really based on the laws of physics but on assumed relationships and fits to test data. It is amazing that small viscous forces can change the global level of forces by a factor of 2, but they do. This, it seems to me, is where the laws of physics can’t be solved directly, and that way of explaining it can be misleading.

      • “It is amazing that small viscous forces can change the global level of forces by a factor of 2 but it does.”

        Can you explain more about this? What do you mean by “global level of forces?” Can you point to something I can read? Thanks.

      • But not the averages.

        Perhaps for well mixed phenomena.

        So, perhaps global average temperature is predictable ( with a lesser unpredictable portion from circulation variation ).

        But most events within the atmosphere, including extreme temperatures, precipitation, storms, winds, clouds, etc., are not well mixed but are rather mostly the result of the fluctuations of the circulation.

        This leaves the IPCC with global average temperature rise.
        But that’s not very scary, so they gravitate toward extreme scenarios and predicting phenomena they know are not predictable.

        Well, global average temperature rise may not be very scary because it’s not very harmful.

      • “This leaves the IPCC with global average temperature rise.
        But that’s not very scary, so they gravitate toward extreme scenarios and predicting phenomena they know are not predictable.”

        Such as?

        “Well, global average temperature rise may not be very scary because it’s not very harmful.”

        No? Says what science?

      • “This leaves the IPCC with global average temperature rise.
        But that’s not very scary, so they gravitate toward extreme scenarios and predicting phenomena they know are not predictable.”

        Such as?

        Extreme temperatures.
        Precipitation/Drought
        Storms generally ( tropical cyclones more specifically ).
        Windiness
        Cloudiness

        “Well, global average temperature rise may not be very scary because it’s not very harmful.”

        No? Says what science?

        The magnitude of natural variation for any given locale being so much greater than the global trend, for one.

      • Eddie wrote:”The magnitude of natural variation for any given locale being so much greater than the global trend for one.”

        The interglacial-glacial difference of the recent ice ages is about 8 C (14 F).

        Yesterday the high-low difference where I live in Oregon was 38 F.

        So you think that what happened in my backyard last night is more significant than the changes during the ice ages?

      • Eddie wrote:
        “Extreme temperatures.
        Precipitation/Drought
        Storms generally ( tropical cyclones more specifically ).
        Windiness
        Cloudiness”

        Where exactly are the IPCC’s predictions for these?

  56. GCMs do not use The Laws of Physics. Instead, models of The Laws of Physics are used.

    The Laws of Physics are models of the actual physics.

    Stefan-Boltzmann Law? A model. Ideal Gas Law? A model. The Law of Gravity? A model.

    And any model that isn’t exactly first principles is a parameterization. At this point, that’s anything above fundamental particles and their interactions.

    • Benjamin: Perhaps it would be better to say that the GCMs use simplified versions of the laws of physics and, in many cases, very approximate numerical methods for the dynamic aspects. Is that better?

  57. Dan Hughes:

    As I recall, it was you who pointed out that GISS/E had zero as the heat of vaporization of water, so you have creds for attempting to get to the source of the problems. Thank you for your work.

    What comes to my mind is that the GCMs are vast exercises in “semi-empirical physics”, analogous to the semi-empirical quantum mechanics used successfully in chemistry.

    “Semi-empirical” means that complicated relations which cannot be evaluated exactly get approximated by an assumed value or relation, with arbitrary parameters chosen to best match whatever experimental data are available. While more general than pure curve fitting, semi-empirical methods are still only as good as their domain of verification.

    Unfortunately, practitioners of semi-empiricism can come to believe that their model is in fact reality, which seems to have happened in the case of the GCMs. And to the extent that the model has been tuned to reproduce the available data, it does accurately reflect the available data – whence the belief in its reality. But like any statistical exercise, the model may or may not be predictive.

    Semi-empirical methods can be terribly wrong, or near-perfect approximations, depending on the available math and the skill of the implementer. In the end, they are still statistics, but with a dimensionality guided by physics.
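
    A minimal illustration of that domain-of-verification caveat (Python with numpy; the “experimental data” are synthetic): a quadratic fitted on [0, 1] looks excellent inside its domain and degrades badly outside it.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 30)
      y = np.exp(x) + 0.01 * rng.standard_normal(x.size)  # "measurements"

      coeffs = np.polyfit(x, y, 2)   # the semi-empirical model: a fitted quadratic
      for xq in (0.5, 3.0):          # inside, then far outside, the fit domain
          print(f"x = {xq}: fit {np.polyval(coeffs, xq):7.3f}  true {np.exp(xq):7.3f}")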

    • > Unfortunately practitioners in semi-empiricism can come to believe that their model is in fact reality, which seems to have happened in the case of the GCM’s.

      A quote would be nice to substantiate that semi-empirical mind probing.

    • 4kx3, that was back in 2009-2010. It’s still there.

      The source for the version of the GISS ModelE code used for the AR5 simulations, in MODULE CONSTANT, still contains the same statement:

      !@param shv specific heat of water vapour (const. pres.) (J/kg C)
      c**** shv is currently assumed to be zero to aid energy conservation in
      c**** the atmosphere. Once the heat content associated with water
      c**** vapour is included, this can be set to the standard value
      c**** Literature values are 1911 (Arakawa), 1952 (Wallace and Hobbs)
      c**** Smithsonian Met Tables = 4*rvap + delta = 1858--1869 ????
      c     real*8,parameter :: shv = 4.*rvap  ????
            real*8,parameter :: shv = 0.
      

      The file linked above contains a directory/folder named Model. A global search of that directory/folder for the word ‘conservation’ or ‘energy conservation’ will give several hits related to mass and energy conservation. It appears that the numerical methods used in ModelE do not inherently conserve mass and energy. There are statements that will stop execution of the code if diagnostic checks on conservation fall outside specific ranges. There are also statements that attempt to ‘correct’ or ‘force’ conservation.

      My experience with numerical methods that inherently conserve mass and energy has been that such checks and ‘corrections’ are not necessary. We do not even bother making such diagnostic checks. How to make ‘corrections’ to ‘force conservation’ is, of course, somewhat ad hoc. For example, in the trcadv.f routine:

      4    continue
      
            call esmf_bcast(ogrid, bfore)
            call esmf_bcast(ogrid, after)
      c
            if (bfore.ne.0.)
           . write (lp,'(a,1p,3e14.6,e11.1)') 'fct3d conservation:',
           .  bfore,after,after-bfore,(after-bfore)/bfore
            q=1.
            if (after.ne.0.) q=bfore/after
            write (lp,'(a,f11.6)') 'fct3d: multiply tracer field by',q
      ccc   if (q.gt.1.1 .or. q.lt..9) stop '(excessive nonconservation)'
            if (q.gt.2.0 .or. q.lt..5) stop '(excessive nonconservation)'
      c
      

      Corrections to my interpretation will be appreciated.
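
      For contrast, a sketch of what an inherently conservative update looks like – a minimal 1-D flux-form (finite-volume) advection step in Python, not the ModelE scheme. Whatever leaves one cell enters its neighbour, so the tracer total is conserved to round-off by construction and no multiplicative ‘fixer’ is ever needed:

        import numpy as np

        n, c = 100, 0.4                    # cells; Courant number u*dt/dx
        q = np.exp(-0.5 * ((np.arange(n) - 30) / 5.0) ** 2)   # initial blob
        total0 = q.sum()

        for _ in range(500):
            flux = c * q                     # upwind flux through each right face
            q = q - flux + np.roll(flux, 1)  # leaves cell i, enters cell i+1 (periodic)

        print(f"relative change in total tracer: {(q.sum() - total0) / total0:.2e}")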

      • Dan, you are correct that all these flux corrections to “fix” the lack of discrete conservation are very problematic. Each correction has parameters to adjust, of course. I actually feel sorry for those who build, maintain and “validate” GCMs. Doing meaningful parameter studies is a Herculean task. There is a veritable mountain of work to do. There is far too much “running” of the codes by climate scientists to “study” various effects when all the computer time could easily be spent on actual validation. Most of these climate-effect studies are in my view a huge waste of resources. But “running the code” is easier than trying to advance theoretical understanding or working on better data.

      • dpy: Most GCMs no longer use flux corrections.

    • oops, the lines got automatically formatted to a shorter length. Really messes up the coding.

  58. David Appell admits that the models make approximations and that they are still working to improve their codes (well, good for them). The point is not that all physical relations (Ohm’s law for example) are approximations, but that many such approximations (laws of physics) when applied to simple problems give highly accurate results compared to highly accurate measurements. But in complex settings, many things can go wrong with our approximations, discretizations, parameter estimates, numerical methods, surface data characterizations, and forcing data (to make an incomplete list). As a simple example, it is possible to get quite good predictability for fracturing of a uniform material under strain, but no-one can yet predict earthquakes. We cannot characterize the materials or their spatial makeup at all scales sufficiently to do the calculations. The climate system is like the earthquake problem: you cannot assume that just because you start out with known physics that you can get a useful result. Newton’s laws are pretty good but you still can’t predict the path of a feather dropped off a roof.

    • you cannot assume that just because you start out with known physics that you can get a useful result

      And after 15 years supporting simulators one of the most difficult tasks was figuring out what the simulator was really telling you and why.

    • > The climate system is like the earthquake problem: […]

      What’s the earthquake problem?

    • Craig wrote:
      “The climate system is like the earthquake problem: you cannot assume that just because you start out with known physics that you can get a useful result.”

      You can’t assume that, sure, but you can compare GCMs outputs to what’s happened, such as

      http://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

      or to paleoclimate information, such as

      Hansen, J., M. Sato, P. Hearty, R. Ruedy, et al., 2016: Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2 C global warming could be dangerous. Atmos. Chem. Phys., 16, 3761-3812, doi:10.5194/acp-16-3761-2016.

      • My reading of such tests is that it is an eye-of-the-beholder problem. Some outputs of GCMs don’t look too bad. Others, pretty bad, such as precipitation, the ITCZ, Antarctic snow, the jet stream, the mid-trop tropical hot spot. Do these “not matter”? They matter to me.
        And by the way, Hansen’s understanding of paleoclimates sucks and he is quick to make excuses for why the models don’t do paleoclimate very well.

    • Craig wrote:
      “As a simple example, it is possible to get quite good predictability for fracturing of a uniform material under strain, but no-one can yet predict earthquakes.”

      We have significantly better information about recent climate parameters than we do about the geologic parameters in the deep Earth that are relevant to earthquakes.

      In fact we have essentially *no* data on the latter, let alone real-time or recent data.

      • The problems are similar in that known laws do not guarantee that you can solve the problem. The climate models use very poor input for ocean temperature distribution, an important initial condition.

      • Craig wrote:
        “The problems are similar in that known laws do not guarantee that you can solve the problem. The climate models use very poor input for ocean temperature distribution, an important initial condition.”

        Climate models don’t solve an initial value problem. You should know that.

        But there is essentially NO information about subsurface geologic and tectonic conditions.

  59. It seems to me that the defenders of the models on this thread (brandon gates, Nick Stokes, ATTP, etc) are not defending as nicely as they intend. They keep mentioning how the models do not use the same physics, cannot make predictions, the modelers are still working on the numerical methods, ENSO is not predictable, etc. Is this supposed to give the public confidence when told to shut down their coal plants? Just because a billion $ went into the models and they are doing the best they can does not mean I must believe the models. Where is the sort of testing that Dan mentions (numerical convergence for ideal problems, etc)? I have drawers full of papers showing GCM test results and these results are mostly equivocal or Rorschach test-like. Sometimes just awful. Doesn’t that bother you?

    • does not mean I must believe the models.

      Of course, you can believe whatever you like (this is obvious, right?).

    • “It seems to me that the defenders of the models on this thread (brandon gates, Nick Stokes, ATTP, etc) are not defending as nicely as they intend. They keep mentioning how the models do not use the same physics, cannot make predictions, the modelers are still working on the numerical methods, ENSO is not predictable, etc. Is this supposed to give the public confidence when told to shut down their coal plants? Just because a billion $ went into the models and they are doing the best they can does not mean I must believe the models. Where is the sort of testing that Dan mentions (numerical convergence for ideal problems, etc)? I have drawers full of papers showing GCM test results and these results are mostly equivocal or Rorschach test-like. Sometimes just awful. Doesn’t that bother you?”

      #######################

      Does not bother me in the least. For the most part Policy has NO NEED WHATSOEVER for results from GCMS.

      Very simply: The best science tells us.

      A) Doubling CO2 will increase temps by 3C. This is NOT from GCMs but rather from Paleo and Observational studies; GCMs merely confirm this, or are at best consistent with it.
      B) The estimate that the temperature will increase by 3C is REASON enough to take policy decisions that put an early end to the use of coal as a fuel of choice. On top of the climate risk, we have the risk to health (from Pollution, namely pm25). Those two risks ALONE can justify a policy that puts an end to coal sooner rather than later, and justifies policies that favor low-risk (warming risk) technologies such as Nuclear.

      In short, we knew enough about the risks without considering ANY input from a GCM to justify policies that favor non-polluting technologies like Nuclear over coal. You don’t need a GCM to tell you that switching from Coal to Nuclear and NG is a lower-risk path. The sooner this happens, the better.

      • Mosh: you cannot get a 3deg C warming from doubling without using the models to calculate sensitivity. Show a citation. Papers that use data (not GCMs) get a much lower sensitivity. I’ve published on this personally.

      • Does not bother me in the least. For the most part Policy has NO NEED WHATSOEVER for results from GCMS.

        Very simply: The best science tells us.

        A) Doubling CO2 will increase temps by 3C. This is NOT from GCMs but rather from Paleo and Observational studies; GCMs merely confirm this, or are at best consistent with it.

        Let’s go with the slightly more defensible 2C (early 1D Manabe), and also realize that much of that is buffered for centuries by the oceans, never to be realized all at once.

        But what does a global 1.5C rise mean about actual climate?

        Global mean temperature doesn’t tell us much about the things that matter. And it’s possible that temperature rise coincides with less extreme climate.

      • The global average daily rising temp is 17.8F; the average solar for flat ground at the stations whose numbers were measured, for an average Sun (avg of 1979–2014 TSI), is 3740.4 Whr/m^2, which works out to 0.0047F/W.
        And the seasonal change for the continental US is ~0.0002F/W.

      • “Let’s go with a little more defensible 2C ( early 1D Manabe )”

        A model from 1980 is more defensible than a model from today? I’d like to see that argument.

        ECS = 2 C = 3.6 F is already bad enough.

      • Craig wrote:
        “you cannot get a 3deg C warming from doubling without using the models to calculate sensitivity. Show a citation. Papers that use data (not GCMs) get a much lower sensitivity.”

        You cannot calculate climate sensitivity using 1850-2015 data because the information on manmade aerosols is not nearly good enough.

      • A model from 1980 is more defensible than a model from today?

        Absolutely – but try from the 1960s instead.

        Manabe was constrained by compute resources, and modeling was still in its infancy, but he might have thought more about this than those rushing off to make runs.

        For the 1D, Manabe used a reasonable global approximation. Not much has changed with estimates of either CO2 forcing or a water vapour feedback since then.

        In fact, the large range of global mean temperature estimates that the IPCC pronounces, and the fact that Manabe’s estimates lie within that range, prove that GCMs have largely been a waste of time and money.

        They’ve been a waste because the thing they were employed to do, beyond a 1D model, is to provide predictions of how atmospheric motion might change things. But since motion is not predictable, applying GCMs to the problem obscures the relative certainty of radiative forcing with the uncertainty of circulation. And that uncertainty is there whether or not the CO2 changes.

      • page 8. There’s also this.

        “Paleo estimates” are not valid – for one thing they’re not observed; for another, they don’t compare.

        The LGM had
        1.) Mountains of kilometer-deep ice, which changed the atmospheric circulation
        2.) The ice albedo of such times
        3.) The orbital distortions of solar radiance falling differently across the Earth
        4.) Lower sea levels/higher land changing circulation

        The HCO had quite different solar radiance also.

        Further, they promote a misunderstanding of climate.

        The ice ages didn’t occur because of changes in global temperature.
        The ice ages occurred because of regional insolation changes over the ice accumulation zones.
        It is more accurate to say the ice ages caused global temperature change.

        I applied a radiative model to a year’s monthly snapshots of atmospheric data for given scenarios (Eemian, LGM, HCO, PI, 2010, and 2xCO2). The atmospheres are the same, but with the Snow, Ice, Land, and orbit appropriate for each scenario.

        Below is how they compare.
        The difference between BLACK (2010) and GREEN (a hypothetical 800 ppm CO2, with reduced global sea ice), compared to the difference between BLACK and the other scenarios, is instructive.

        Paleo events were quite dynamic across seasons, and generally of much greater range. It was the ice accumulation (and subsequent ablation) that mattered, not so much the global average temperature.

        https://turbulenteddies.files.wordpress.com/2016/05/pc_net_rad_all_months2.gif

      • Turbulent Eddie wrote:
        “They’ve been a waste because the thing they were employed to do beyond a 1D model is provide prediction of how atmospheric motion might change things”

        Where is the evidence any of that matters in the big picture?

        If it does, why are the climate patterns of the Quaternary so regular?

      • Where is the evidence any of that matters in the big picture?

        The GCMs have made the big picture blurry.

      • “The GCMs have made the big picture blurry.”

        Another nice, meaningless, utterly useless claim.

        Congratulations.

      • “Very simply: The best science tells us.”

        “The best science tells us” – that’s hilarious. Also, you probably mean PM2.5, not PM25.

        The current states of pollution and GHG studies are so overrun with bias they don’t tell you anything.

        1. The argument that the CO2 level will exceed 500 ppm is so funny it is almost absurd.

        2. The cost of PM2.5 from coal in the US is trivial. It is an invented problem. There are a few problem legacy plants. If 1/100th of the money wasted on renewables and global warming scares were dedicated to upgrading those plants, the problem would be solved. Warmunists aren't interested in fixing problems; they are interested in getting their own way and will use whatever scare tactics work.

        Found this:
        http://www.health.utah.gov/utahair/pollutants/PM/PM2.5_sources.png

        http://www.health.utah.gov/utahair/pollutants/PM/PM10_sources.png

        Couldn’t find a pie chart of all US PM2.5 sources… But coal might not even be in the top 3.

        The claim of big gains from reducing a tiny minority of PM2.5 (which is mostly dust from various sources) indicates politically motivated study writing. And it is based on bioscience studies, which are mostly wrong (AMGEN 89%) anyway.

        Let's look at the claims:
        1. "doubling CO2 will increase temps by 3C. This is NOT from GCMs".
        Since it isn't true, they probably made it up. This would make Mosher correct. "rather from Paleo and Observational studies" – more humor from Mosher. Field measurement says 0.64°C for a doubling, and direct CO2 forcing is estimated (probably from lab studies) at 1°C.

        2. the estimates that the temperature will increase by 3C, is REASON enough, to take policy decisions that put an early end to the use of coal
        More humor gold from Mosher. This is like warning people to stay indoors because the sun will come up tomorrow. 3°C? So what? It isn’t going to hurt crops (Ag studies show heat tolerance increasing and soybean yield increasing fastest at the equator). It isn’t clear that 3°C would be a problem today and it certainly won’t be in a couple of decades.

        3. On top of the climate risk,
        And then we do the bait and switch. The warmunists just keep throwing things against the wall and hoping something will stick. All that money wasted on warmunism must have some benefit, eh?

    • > are not defending as nicely as they intend.

      The intention probing problem may be more like the earthquake problem than the climate system, Craig

    • Craig, you don’t have to be able to predict ENSOs to project the long-term climate state. Because ENSOs redistribute heat, they don’t create it. GHGs “create” new heat in the climate system.

      Calculating the final state of climate is mostly a matter of figuring out how much energy is added to the system (viz. conservation of energy), and how much of it is distributed to near the surface.
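
      As a rough illustration of that energy-accounting view, here is a minimal zero-dimensional energy-balance sketch in Python (every parameter value is an illustrative assumption, not the output of any GCM):

        # Minimal 0-D energy balance: C * dT/dt = F - lambda_ * T
        # F: imposed forcing (W/m^2); lambda_: feedback parameter
        # (W/m^2/K); C: effective heat capacity (J/m^2/K).
        # All three values below are assumptions for illustration.
        F = 3.7          # commonly cited 2xCO2 forcing, W/m^2
        lambda_ = 1.2    # assumed feedback parameter, W/m^2/K
        C = 4.2e8        # roughly a 100 m ocean mixed layer, J/m^2/K

        dt = 86400.0     # one-day step, s
        T = 0.0          # temperature anomaly, K
        for _ in range(365 * 200):             # integrate 200 years
            T += dt * (F - lambda_ * T) / C    # explicit Euler step

        print("equilibrium response F/lambda: %.2f K" % (F / lambda_))
        print("T after 200 years:             %.2f K" % T)

      Its only point is the one argued above: once the added energy (the forcing) and the feedback are specified, conservation of energy pins down the long-run mean state without any circulation detail.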

      • Craig, you don’t have to be able to predict ENSOs to project the long-term climate state. Because ENSOs redistribute heat, they don’t create it. GHGs “create” new heat in the climate system.

        You do need to predict ENSO events and the statistics of ENSO events if you want to have any basis for predicting whether California precipitation will be higher, lower or about the same as long-term averages.

        You do need to predict El Ninos if you want to predict flooding in the US.

        You do need to predict ENSO events if you want to predict Atlantic hurricane frequency.

        You do need to predict ENSO events if you want to predict fire and drought in Australia.

        You do need to predict La Nina events if you want to predict Dust Bowl type events.

        But you can’t predict ENSO events, so you also can’t predict any of these other phenomena.

      • The Earth is a heat engine. It redistributes heat from the tropics to the poles via ocean and air circulation and the heat is lost more readily that way. Does ENSO “create” heat? Of course not, but by changing how it is distributed in the ocean, an el Nino can create a spike (like last year, remember?). Long term effect? Not sure.
        My point about ENSO was that it was something the models don’t do. How many things can they fail to do and still be “just physics”?

      • Eddie, GCMs don’t predict any of the things you mentioned. Maybe some downscaled regional models are now trying.

        Is anyone even claiming it’s now possible to “predict flooding in the US?”

      • Craig: how is ENSO relevant to a calculation of ECS?

        No modelers are trying to predict the exact average global surface temperature in 2100, 2100.5, 2101, 2101.5 etc. They are trying to project (not predict) the long-term average of global surface temperatures.

      • Eddie, GCMs don’t predict any of the things you mentioned. Maybe some downscaled regional models are now trying.

        Is anyone even claiming it’s now possible to “predict flooding in the US?”

        Gosh yes.

        You won’t have to look far to find all sorts of people attributing the Louisiana floods, or the Hawaii hurricanes, or Sandy, or…
        to global warming.

        There is no basis for this.

      • Eddie wrote:
        “You won’t have to look far to find all sorts of people attributing the Louisiana floods, or the Hawaii hurricanes….”

        You are misreading — the attributions are probabilistic. The Guardian just had a good article on this:

        https://www.theguardian.com/environment/planet-oz/2016/sep/15/was-that-climate-change-scientists-are-getting-faster-at-linking-extreme-weather-to-warming

      • The IPCC is maddening because they’ll coyly admit:
        “there is large spatial variability [ in precip extremes]”
        “There is limited evidence of changes in extremes…”
        “no significant observed trends in global tropical cyclone frequency”
        “No robust trends in annual numbers of tropical storms, hurricanes and major hurricanes counts have been identified over the past 100 years in the North Atlantic basin”
        “lack of evidence and thus low confidence regarding the sign of trend in the magnitude and/or frequency of floods on a global scale”
        “In summary, there is low confidence in observed trends in small-scale severe weather phenomena”
        “not enough evidence at present to suggest more than low confidence in a global-scale observed trend in drought or dryness (lack of rainfall)”
        “Based on updated studies, AR4 conclusions regarding global increasing trends in drought since the 1970s were probably overstated.”
        “In summary, confidence in large scale changes in the intensity of extreme extratropical cyclones since 1900 is low”
        (from RPJr).

        But then go on to intimate that extreme weather is increasing.

      • You are misreading — the attributions are probabilistic.
        If that means made up because there’s no basis, then fine.

      • Eddie, and your comments on my backyard vs the ice age?

      • David: no one is trying to predict local effects like el Nino produces? The forecasts of doom are based on drought in Africa giving crop failure, more tornados, more hurricanes, more floods, heat waves killing people in Europe, all local things. Maybe YOU are only thinking about long term global averages, but the IPCC impacts reports are based on regional forecasts.

      • Craig: Who is predicting how many people will be killed in Europe from the next ENSO?

        The ENSO “forecasts” are mostly based on historical precedent. I’m sure there are modelers trying to do regional modeling. Do you expect them to be perfect too?

      • Eddie, and your comments on my backyard vs the ice age?

        Well, global average temperature did not cause the glacial cycles.
        It is more accurate to say that the glacial cycles caused changes in global average temperature.

        But I’d also observe that your temperature range (you must be in Eastern Oregon or the mountains for 38F) is much larger than 2 or 3F from global warming. I don’t think your day, even in the backyard, would be very different if your temps were 52F to 90F instead of 50F to 88F.

      • Eddie: I’m in western Oregon, in Salem, west of the Cascades.

        So how is my 38 F daily range so much worse than 2 miles of ice above Chicago?

      • Eddie wrote:
        “I don’t think your day, even in the backyard, would be very different if your temps were 52F to 90F instead of 50F to 88F.”

        What science supports that view?

        Evaporation rates increase exponentially, by about 7% per 1 deg C of warming. Was that a factor in the very difficult droughts being experienced in southern and southeastern Oregon? In the California drought?
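
        For reference, the roughly 7% per deg C figure is the Clausius–Clapeyron scaling of saturation vapour pressure (strictly an upper bound on evaporation rather than evaporation itself); a quick numerical check in Python using the Magnus approximation, a standard empirical fit:

          import math

          def e_sat(t_c):
              # Saturation vapour pressure (hPa), Magnus approximation.
              return 6.1094 * math.exp(17.625 * t_c / (t_c + 243.04))

          for t in (0.0, 10.0, 20.0, 30.0):
              growth = e_sat(t + 1.0) / e_sat(t) - 1.0
              print("%5.1f C: +%.1f%% per degree" % (t, 100.0 * growth))
          # Prints ~7.5% per degree at 0 C, easing to ~6% at 30 C.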

      • So how is my 38 F daily range so much worse than 2 miles of ice above Chicago?

        I’m saying that raising the annual average temperature in Salem by 3C is something you would expect naturally, and that Salem didn’t end when it happened in the past.

        In fact, it seems to have done okay.

      • Who says Salem has done OK?

        (Your chart only shows about 1.5 C of warming.)

        How much money was lost by Oregonian farmers due to the multi-year drought they’re still dealing with?

      • Eddie, that Salem/McNary data only shows a temperature rise of 1.2 C, not 3 C.

      • Eddie, that Salem/McNary data only shows a temperature rise of 1.2 C, not 3 C.
        The lowest annual temp to the highest annual temp (about the range you can expect for any year) is about 3C.

      • How much money was lost by Oregonian farmers due to the multi-year drought they’re still dealing with?

        And why do you bring up drought?

        I thought you agreed that weather is not predictable.

      • “The lowest annual temp to the highest annual temp ( about the range you can expect for any year ) is about 3C.”

        That’s not how you calculate trends.

        So you don’t know how much was lost by Oregon farmers in the recent drought. Why haven’t you taken that into account when considering impacts?

        “I thought you agreed that weather is not predictable.”

        I haven’t written a word here about weather.

        Sorry David, but the distribution of extra energy is critical. If GCMs are just very complex energy balance methods then they will be assuredly very wrong. You can get almost any answer you want for an airfoil with coarse grids that don’t resolve details of the boundary layer. Give me the value of lift you want and I will run a CFD code and get that answer (at least for positive values between 0.5 and 1.5).

      • We aren’t talking about airfoils or how you would manipulate their equations, we’re talking about climate. Let’s stick to the subject.

        Well, we have a much better understanding of simple turbulent flows than of climate. There is at least real data and you can say something meaningful. All I’ve ever heard for the climate problem is “physical understanding” invoked. That’s just entirely subjective.

        The point is that if simple flow modeling is sensitive to parameters one would expect more complex turbulent flows to also be sensitive. Weather models contain turbulence models and primitive boundary layer models.

      • “If GCMs are just very complex energy balance methods then they will be assuredly very wrong.”

        Well, in fact, they are NOT very wrong.

        Take something as simple as the average temperature.

        If you, for example, tune the model to the GLOBAL average between 1850 and say 1880, and then let it run from 1850 to the present, you can see this:

        1. The global average temperature (around 15C) is captured pretty well, with most models producing figures within 10% of this.
        2. The trend is captured very well.
        3. The REGIONAL (continental) averages are captured very well.

        Let’s face it: if someone asked you whether you could model the temperature of the earth at every location (250 km grid) and get within 25%, you’d probably guess NO... but in fact they do better than this.

        Can they improve?

        of course,

        But given the complexity they do a fantastic job... heck, getting within 50% is astounding.
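
        A toy version of that tune-then-run test, with a deliberately trivial stand-in for a model (everything here is invented for illustration; real GCM tuning adjusts physical parameterizations, not a scalar baseline):

          import random

          random.seed(0)

          # Synthetic "observations": slow trend plus noise, 1850-2015.
          years = list(range(1850, 2016))
          obs = [14.0 + 0.005 * (y - 1850) + random.gauss(0.0, 0.1)
                 for y in years]

          # Stand-in "model": the same trend with an unknown baseline.
          def model(year, baseline):
              return baseline + 0.005 * (year - 1850)

          # Tune the baseline on 1850-1880 only...
          calib = [(y, t) for y, t in zip(years, obs) if y <= 1880]
          baseline = sum(t - 0.005 * (y - 1850) for y, t in calib) / len(calib)

          # ...then run freely and score the years never used for tuning.
          errs = [model(y, baseline) - t for y, t in zip(years, obs) if y > 1880]
          rmse = (sum(e * e for e in errs) / len(errs)) ** 0.5
          print("tuned baseline %.2f C, out-of-sample RMSE %.2f C"
                % (baseline, rmse))

        The design point is that any skill claimed for the post-1880 period means something only because those years were held out of the tuning.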

        I actually agree that it is surprising that GCMs are as good as they are. I also strongly suspect that selection bias in the literature, just as for CFD, gives a biased positive picture. We are working to correct that in CFD. Modelers must do it for GCMs, and the model tuning paper indicates a good start.

      • dpy: Again, where are all these attractors and chaos during the Quaternary?

        David, all past climates had chaotic weather, indicating that they were on a strange attractor. It’s pretty simple really. The “climate” is the statistics of the attractor. So the properties of the attractor will determine if it’s predictable and how good a numerical discretization you will need.

        dpy wrote:
        “David, all past climates had chaotic weather indicating that they were on a strange attractor”

        And where, specifically, are these attractors in the climate of the Quaternary?

        By now it’s clear that you have no idea. You can’t even point to one.

        The climate of the Quaternary was the climate of the part of the attractor the planet was on at the time. It’s hard to say more because we are ignorant about the details, profoundly ignorant I would say, but many are in denial about that lack of understanding. That ignorance is covered by vague references to “physical understanding” that are even more vague and unspecific.

      • Good — so you have no idea what attractors or chaos was in the Quaternary.

        So why should I expect any in the next 100 (or 300) years?

        dpy wrote:
        “That ignorance is covered by vague references to “physical understanding” that is even more vague and unspecific.”

        If you can’t point to any implications of attractors or chaos, why should anyone think it’s important?

      • David, it’s important because it defines the boundaries of our ignorance and where we need far better methods and understanding. Chaos and turbulence have strong influences on climate as anyone who has run a CFD simulation knows. The “climate” is the properties of the attractor about which we are rather ignorant.

      • dpy wrote:
        “Chaos and turbulence have strong influences on climate as anyone”

        Then prove it!!!

        All you’ve done so far is assert it.

        Prove it. I’m getting tired of asking.

      • David, I can’t prove it without a very long side trip into the fundamentals of fluid dynamics. The best I can do is send you privately my CFD for laymen write up. It has over 30 references to the literature where you can go to come up to speed.

    • Craig wrote:
      “Where is the sort of testing that Dan mentions (numerical convergence for ideal problems, etc)?”

      The climate problem is hardly an “ideal problem.” It is the most difficult problem that science has ever attempted to solve. The facts that lots of models give similar results, and that they are in accord with paleoclimate information about climate change, are very important indicators they are on the right track.

      • David: being on the right track is great, and I am impressed with how well they do. That does not mean they are “right” or adequate for policy purposes. My reading of hundreds of model testing papers does not give me the same confidence as it seems to give you, for some reason. GCMs are still exploratory, not operational engineering codes.

      • Craig: No model is ever “right.”

        For climate models it’s worse, because no one knows the socioeconomic conditions. Should we plan for RCP2.6 or RCP8.5?

        What information do you need, other than that ECS is about 3 C plus or minus, to make policy decisions now?

      • David: I don’t believe ECS is 3 deg C, more like 1.5 and that equilibrium could take hundreds of years due to ocean inertia. This leads to very different policy than 4 deg C rise by 2100, no?
        What I also need to know is that CO2 fertilization of plants seems to be real, and a good thing.
        Finally, I need to know that current alternatives to fossil fuels simply don’t work and making fossil fuels expensive hurts the poor (well, everyone really) and makes us less able to respond to whatever change happens later.
        If you are simply responding to ECS=3deg C, I would suggest you are being simplistic.

      • I don’t believe ECS is 3 deg C, more like 1.5

        Why?

      • Craig, maybe you think ECS = 1.5 C.

        But so what? You’re just one guy. The IPCC consensus is that it’s likely in the range 2-4.5 C, as of AR5.

        If you disagree, then convince people otherwise. (That won’t happen in comments on a blog.)

      • Craig wrote:
        “I don’t believe ECS is 3 deg C, more like 1.5 and that equilibrium could take hundreds of years due to ocean inertia. This leads to very different policy than 4 deg C rise by 2100, no?”

        How so?

      • Craig wrote:
        “What I also need to know is that CO2 fertilization of plants seems to be real, and a good thing.”

        How so?

        “Total protein and nitrogen concentrations in plants generally decline under elevated CO2 atmospheres…. Recently, several meta-analyses have indicated that CO2 inhibition of nitrate assimilation is the explanation most consistent with observations. Here, we present the first direct field test of this explanation….. In leaf tissue, the ratio of nitrate to total nitrogen concentration and the stable isotope ratios of organic nitrogen and free nitrate showed that nitrate assimilation was slower under elevated than ambient CO2. These findings imply that food quality will suffer under the CO2 levels anticipated during this century unless more sophisticated approaches to nitrogen fertilization are employed.”
        — “Nitrate assimilation is inhibited by elevated CO2 in field-grown wheat,” Arnold J. Bloom et al, Nature Climate Change, April 6 2014.
        http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate2183.html

        “With a 1 °C global temperature increase, global wheat yield is projected to decline between 4.1% and 6.4%. Projected relative temperature impacts from different methods were similar for major wheat-producing countries China, India, USA and France, but less so for Russia. Point-based and grid-based simulations, and to some extent the statistical regressions, were consistent in projecting that warmer regions are likely to suffer more yield loss with increasing temperature than cooler regions.”
        – B. Liu et al, “Similar estimates of temperature impacts on global wheat yields by three independent methods,” Nature Climate Change (2016), doi:10.1038/nclimate3115, http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate3115.html

      • Abstract: “Dietary deficiencies of zinc and iron are a substantial global public health problem. An estimated two billion people suffer these deficiencies1, causing a loss of 63 million life-years annually2, 3. Most of these people depend on C3 grains and legumes as their primary dietary source of zinc and iron. Here we report that C3 grains and legumes have lower concentrations of zinc and iron when grown under field conditions at the elevated atmospheric CO2 concentration predicted for the middle of this century. C3 crops other than legumes also have lower concentrations of protein, whereas C4 crops seem to be less affected. Differences between cultivars of a single crop suggest that breeding for decreased sensitivity to atmospheric CO2 concentration could partly address these new challenges to global health.”
        — “Increasing CO2 threatens human nutrition,” Samuel S. Myers et al, Nature 510, 139–142 (05 June 2014).
        http://www.nature.com/nature/journal/v510/n7503/full/nature13179.html

      • General Mills CEO Ken Powell told the Associated Press:

        “We think that human-caused greenhouse gas causes climate change and climate volatility, and that’s going to stress the agricultural supply chain, which is very important to us.”

        http://www.chicagotribune.com/business/ct-general-mills-greenhouse-gas-cuts-20150830-story.html

      • “Anthropogenic increase in carbon dioxide compromises plant defense against invasive insects,”
        Jorge A. Zavala et al, PNAS, 5129–5133, doi: 10.1073/pnas.0800568105
        http://www.pnas.org/content/105/13/5129.full

      • I guess what surprises me David is the lack of focus on the huge theoretical and practical issues that need very strong investment and our best minds. The defenses of models by apologists and their use of the propaganda tools of Colorful Fluid Dynamics help no one and give laymen a false faith that the models are a solved problem. We have argued that the same is true of CFD generally. Overselling and overconfidence are strongly retarding progress.

        There are many tactics of these dark arts of selling simulations that need to be exposed and shamed.

      • > The defenses of models by apologists and their use of the propaganda tools of Colorful Fluid Dynamics help no one and give laymen a false faith that the models are a solved problem.

        Another semi-empirical set of claims.

      • “David: I don’t believe ECS is 3 deg C, more like 1.5 and that equilibrium could take hundreds of years due to ocean inertia. This leads to very different policy than 4 deg C rise by 2100, no?”

        It doesn’t matter what you think.

        As a policy maker I can look at the science and say

        1. The science has been around 3C for some time.
        2. There are some estimates less than 3C, like 1.5C.
        3. Given that, if I want to be safe rather than sorry, I will just use 3C as a POR until I hear otherwise.

        Do you think policy is calculus?
        It’s not.
        There is a dream that both sides of this debate have: that more information will lead to a better policy.
        Forget that dream.

        Instead ask: will moving quickly to nuclear be good REGARDLESS of whether it’s 1.5C or 3C? Answer: yes. Resolve to do it.

        Instead ask: is coal really worth the health risk, regardless of the climate risk? Answer: nope. Kill it as mercifully as you can.

        Instead ask: is coal really worth the health risk, regardless of the climate risk? Answer: nope. Kill it as mercifully as you can.

        Not to me, but that doesn’t matter either – natural gas has been killing coal in the US for years now.

        But give ’em the coal and then they’ll come for your natural gas.

        Probably doesn’t matter much anyway because the ‘secular stagnation’ of the global economy is indicative of falling population and falling demand for energy and everything else.

      • Eddie wrote:
        “Probably doesn’t matter much anyway because the ‘secular stagnation’ of the global economy is indicative of falling population and falling demand for energy and everything else.”

        Whacked — population isn’t falling.

        Energy demand is falling slightly, in some sectors. Why is that a bad thing?

      • Steven Mosher,

        As a policy maker I can look at the science and say

        1. The science has been around 3C for some time.
        2. There are some estimates less than 3C, like 1.5C.
        3. Given that, if I want to be safe rather than sorry, I will just use 3C as a POR until I hear otherwise.

        You continually get this all wrong. You are not a policy maker, never have been and don’t understand rational policy analysis.

        For rational policy analysis you have to estimate the cost of the policy through time (e.g. effect on GDP), and balance the benefits of your policy against the forgone benefits of other policies that could have been done for the same cost. Michael Cunningham is highly experienced in this and has explained it in many comments in the past.

        So, we need to estimate the costs of the proposed policy, the benefits (e.g. reduced climate damages) and probabilities of the proposed policies achieving the benefits. And compare that against other polices that we will have to go without in order to go with your control-the-climate policy.

        Now for the key point (which you have continually dodged). We do not have credible empirical evidence that GHG emissions are damaging or likely to be. We do not have credible damage functions – even IPCC AR5 WG3 acknowledges that (repeatedly). Without a credible damage function we cannot do rational policy analysis and cannot make rational policy decisions.
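
        For what it is worth, the skeleton of the analysis being described here is an expected-net-benefit comparison; a minimal sketch, with every number invented purely to show the structure (the "damages avoided" column is exactly the damage function at issue):

          # Expected net benefit = sum(probability * damages avoided)
          #                        - policy cost. All figures invented.
          scenarios = [          # (probability, damages avoided in $bn)
              (0.2, 50.0),       # mild outcome (assumed)
              (0.6, 200.0),      # central outcome (assumed)
              (0.2, 800.0),      # severe outcome (assumed)
          ]
          policy_cost = 250.0    # $bn, assumed

          expected_avoided = sum(p * d for p, d in scenarios)
          print("expected damages avoided: %.0f $bn" % expected_avoided)
          print("expected net benefit:     %.0f $bn"
                % (expected_avoided - policy_cost))
          # Without credible damage estimates the middle column is
          # guesswork, which is the point being argued in this thread.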

        The public is smart enough to know when they are being hoodwinked. That is why they voted out (in a landslide) the Australian government that implemented a carbon pricing scheme in Australia. Treasury’s analysis tried to justify the enormous cost to GDP by innuendo and implication, not a rational analysis of the evidence of the benefits (climate damages forgone).

        It’s a pity you seem to keep ignoring or dodging this key point.

      • Peter Lang wrote:
        “Without a credible damage function we cannot do rational policy analysis and cannot do rational policy decisions.”

        Snarf. Do you think policy decisions are as easy as solving a differential equation? And that the only way they can happen is IF there is a differential equation? That’s incredibly naive.

      • Appell,

        Snarf! yourself. You have nothing constructive or rational to contribute. Your only expertise is in being a troll.

      • Peter, you avoided the question: can you only make policy decisions when you are given a detailed “damage function?”

        How often does that happen in real life, if ever?

        No, Appell. You have the belief in CAGW and think you know a lot about it; I’ll ask the questions; you answer them.

        Provide justification, supported by valid, relevant empirical evidence, that the costs of climate policies can be justified by the claimed climate damages avoided.

        I say you can’t, and nor can anyone else. That’s why ideologues like you are losing. Some extremist ideologues like yourself will hang on to your beliefs like flat-Earthers, forever. You just become irrelevant; then all you’ve got left is trolling.

      • Peter, I can avoid questions too.

        Can you only make policy decisions when you are given a detailed “damage function?” That seems ridiculous.

        How often does that happen in real life, if ever?

      • You’ve got nothing. Case closed. Give up.

      • Peter Lang,

        Provide justification, supported by valid, relevant empirical evidence, that the costs of climate policies can be justified by the claimed climate damages avoided.

        I say you can’t, and nor can anyone else.

        Of course nobody can — providing “valid, relevant empirical evidence” of the future is impossible by definition.

        You were saying about making rational policy decisions again?

      • Peter

        ‘You continually get this all wrong. You are not a policy maker, never have been and don’t understand rational policy analysis.

        For rational policy analysis you have to estimate the cost of the policy through time (e.g. effect on GDP), and balance the benefits of your policy against the forgone benefits of other policies that could have been done for the same cost.’

        Actually you don’t need any of that to decide on a rational policy.

        YOU think you need that for an OPTIMAL policy, but you don’t need any of it for a rational policy.

        You can rationally decide that because of the risk (which is impossible to estimate accurately) we should make nuclear a more viable option.
        We can decide, BECAUSE it’s uncertain, that we ought to avoid the risk,
        and like all policy we can decide to revisit it periodically as information improves.

        For example, in the US we decided to spend billions on defense systems built to counter threats that had ZERO CHANCE of ever materializing. We make decisions with minimal information and wrong information ALL THE TIME. These decisions are rational and defensible.

      • What I’ve got, Peter, is you avoiding questions and thinking policy analysis is about solving differential equations once someone gives you some simple “damage function” that describes the real, complex world.

      • Steven,

        You’ve got all that wrong. How do you think risk is estimated? You need to quantify the consequences of an event or condition and the probability of that event occurring. The alternative is a free-for-all where decisions are based on ideological beliefs, not relevant evidence. Then we end up making decisions on the basis of scaremongering. That’s irrational, not rational, decision making.

        And your assertion in your nuclear example is exactly wrong. We can quantify the risk of nuclear power plant failures and we’ve been doing it for 60 years. We have 16,500 reactor years of successful, safe operation accumulated so far and it is demonstrably the safest way to generate electricity.

        https://bravenewclimate.files.wordpress.com/2010/07/accidents_energy_chains.jpg?w=444
        Source: Peter Lang, ‘What is risk – a simple explanation’, https://bravenewclimate.com/2010/07/04/what-is-risk/

        Brandonrgates,

        All policy decisions are about the future and all important decisions rely on rational policy analysis. Dummy! Where on Earth have you been for your adult life?

        ‘You continually get this all wrong. You are not a policy maker, never have been and don’t understand rational policy analysis.’

        Too funny.
        How many rational people do you know?
        I’m up to zero.

        Policy means placating the body Politic.
        That either means assuaging fears or satisfying greed.

        Fear and greed – not particularly conducive to anything remotely rational.

      • Peter Lang,

        All policy decisions are about the future and all important decisions rely on rational policy analysis. Dummy! Where on Earth have you been for your adult life?

        Wondering whether statements like your first sentence are meant to be satirical … and thence wondering if I’m laughing for the correct reason.

        Brandonrgates,

        Your comment suggests that policy is intended to change the past. How ignorant?

      • Correction:

        “Your comment suggests YOU THINK policy is intended to change the past. How ignorant is that?”

    • Craig Loehle,

      Brandon gates: you seem to be taking the position that the urgency of the problem means that we can’t wait for the modelers to meet impossible testing standards, we MUST ACT.

      My position is that models will never meet impossible performance standards.

      And yet the urgency of the problem is only demonstrated by….GCMs.

      I disagree. You might start with the Richard Betts quote I provided upthread, his original comment is here.

      And the cost of some actions (immediately shut all power plants, according to James Hansen) could be immediately catastrophic (North Korea lifestyles anyone?).

      No could about it, that would be immediately catastrophic. The economy would tank and people would die.

      A direct citation to Jim Hansen advocating such an insane position would be nice.

      I do not believe it is ever a good idea to throw out scientific integrity just to “get an answer”.

      Me either. However, my definition of scientific integrity does not include expecting perfection.

      If we currently can’t answer the question with GCMs, it should be admitted.

      If you’re expecting perfection from GCMs, you should admit it.

      If the GCMs are tuned and full of estimated parameters and we can’t even do a V&V or sensitivity analysis, we should admit it.

      That GCMs are tuned and use parameterizations is well documented by the IPCC and the modeling groups themselves in refereed journals.

      If you’re conflating GCM V&V results not meeting impossible standards with an inability to do V&V, you should admit it.

      If you’re begging questions based on arbitrary and vague definitions you should definitely keep doing it.

      • Here is the citation to Hansen’s insane demand:
        https://www.theguardian.com/commentisfree/2009/feb/15/james-hansen-power-plants-coal

        Betts is making a qualitative argument but his argument is based on GCMs. He is admitting the problems with the models yet saying we should act anyway. Many people do not come to the same conclusions from the qualitative arguments he makes.
        Brandon: I am not talking about impossible standards. I am talking about reasonable standards. Many people see the global temperature output of the models for the 20th Century and it looks sort of ok so they say the models are good. But they don’t see that these outputs were put on an anomaly basis and actually differ from true global mean temperature. There are so many things the models get wrong (ITCZ, ENSO, jet stream, precip…on and on) and these things are not trivial especially for estimating risk to humans.
        For an engineering decision (evaluating an airplane design, nuclear safety, even approval of a pesticide by EPA) there is an evaluation of risk of failure or harm done. There is some criterion for how good the model has to be for it to be acceptable. Where is the IPCC statement that the GCMs are GOOD ENOUGH for the purpose, have sufficient accuracy? IPCC reports are full of statements like clouds are understood poorly, low confidence in x or y. They may be good enough for you but for many people it does not seem good enough to change their lifestyle. And indeed policies put in place in various countries have doubled or tripled electricity prices. That is a big problem for people. Already, international agencies are discouraging the building of power plants in the third world because of climate change.

        Yes Craig, an airplane design is validated by a very exhaustive flight-test program. Simulations help with reducing cruise fuel burn and with structural design. Devotees of scientism tend to believe the role of simulations is far bigger than it is. The traveling public probably prefers the flight-testing method. CFD is simply not ready for certification by the FAA. Colorful Fluid Dynamics doesn’t cut it here when public safety is the issue.

      • Craig,
        If our current scientific understanding is reasonable, then climate change is likely irreversible on human timescales. If your view prevails, then we have to hope (I think) that the changes are not as damaging as some think they could be.

      • ATTP: yes, that is correct.

      • The traveling public probably prefers the flight testing method.

        How exactly do we flight test AGW without any pax on board, dpy6629?

      • Craig,
        Sorry, put my response in the wrong place.

      • Craig Loehle,

        I am not talking about impossible standards. I am talking about reasonable standards.

        Which begs the question that current standards aren’t “reasonable”. Whatever that really means.

        Many people see the global temperature output of the models for the 20th Century and it looks sort of ok so they say the models are good.

        Many people say many things. Some people ask why they should listen to what many unnamed people are saying.

        But they don’t see that these outputs were put on an anomaly basis and actually differ from true global mean temperature. There are so many things the models get wrong (ITCZ, ENSO, jet stream, precip…on and on) and these things are not trivial especially for estimating risk to humans.

        Yup. It could be better than we thought, or worse than we thought. Which? Flip a coin? Don’t like the first answer? Best out of three?

        For an engineering decision (evaluating an airplane design, nuclear safety, even approval of a pesticide by EPA) there is an evaluation of risk of failure or harm done.

        None of which are ever perfect because predictions are hard, especially of the future. You ever read anything from the IPCC AR5 WGIII? Many of its graphics give all new meaning to the term “scatter plot”.

        There is some criterion for how good the model has to be for it to be acceptable.

        Which anyone and their dog can dispute as not good enough.

        Where is the IPCC statement that the GCMs are GOOD ENOUGH for the purpose, have sufficient accuracy?

        I know of no such broad statement about any of the models used in IPCC reports.

        IPCC reports are full of statements like clouds are understood poorly, low confidence in x or y.

        Indeed. It’s my opinion that the IPCC does as good a job or better of pointing out what Teh Modulz do poorly than the average Internet climate contrarian does. In between ARs, I see papers in the primary literature identifying problems all the time.

        I would expect nothing less from good scientists doing good science in their particular domain expertise.

        They may be good enough for you but for many people it does not seem good enough to change their lifestyle.

        That’s because many people don’t understand that I don’t think Teh Modulz are good enough. I’m not just talking about AOGCMs, which I consider the strongest link in the chain. I’m talking about Teh Modulz which rely on their output … the ones which purport to say something about impacts on the biosphere and economy.

        When many people implicitly assume that AGW will not be catastrophic because … Modulz, I question where they’re getting their information. My crystal ball is only for show, and I do my best to not assume that it actually works.

        The best way I can think of to kill the Uncertainty Monster is to stop making the changes to the real system which have created it.

  60. Brandon gates: you seem to be taking the position that the urgency of the problem means that we can’t wait for the modelers to meet impossible testing standards, we MUST ACT. And yet the urgency of the problem is only demonstrated by….GCMs. And the cost of some actions (immediately shut all power plants, according to James Hansen) could be immediately catastrophic (North Korea lifestyles anyone?).
    I do not believe it is ever a good idea to throw out scientific integrity just to “get an answer”. If we currently can’t answer the question with GCMs, it should be admitted. If the GCMs are tuned and full of estimated parameters and we can’t even do a V&V or sensitivity analysis, we should admit it.

  61. The average amount of time that passes between when a molecule of CO2 in the atmosphere absorbs a photon until it emits one (the relaxation time) is about 6 microseconds (values from 6 to 10 microseconds are reported) http://onlinelibrary.wiley.com/doi/10.1002/qj.49709540302/abstract . Heat is conducted in the atmosphere by elastic collisions between molecules. The average time between collisions of molecules in the atmosphere at sea level conditions is less than 0.0002 microseconds http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/frecol.html .

    Thus it is at least 30,000 times more likely that a collision will occur (thermal conduction) than a photon will be emitted. The process of a molecule absorbing the energy in a photon and conducting the energy to other molecules is thermalization. Thermalized energy carries no identity of the molecule that absorbed it.
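
    Those two timescales can be roughly reproduced from kinetic theory. A sketch in Python, taking the quoted 6-microsecond radiative lifetime as given and assuming standard sea-level values for everything else:

      import math

      # Mean time between molecular collisions at sea level:
      # tau = 1 / (sqrt(2) * n * sigma * v_mean), from kinetic theory.
      k_B = 1.380649e-23   # Boltzmann constant, J/K
      T = 288.0            # sea-level temperature, K (assumed)
      p = 101325.0         # sea-level pressure, Pa
      d = 3.7e-10          # effective molecular diameter, m (assumed)
      m = 4.8e-26          # mean molecular mass of air, kg

      n = p / (k_B * T)                        # number density, 1/m^3
      sigma = math.pi * d ** 2                 # collision cross-section, m^2
      v_mean = math.sqrt(8.0 * k_B * T / (math.pi * m))
      tau_coll = 1.0 / (math.sqrt(2.0) * n * sigma * v_mean)

      tau_rad = 6e-6                           # quoted relaxation time, s
      print("collision time: %.1e s" % tau_coll)
      print("ratio tau_rad/tau_coll: %.0f" % (tau_rad / tau_coll))
      # Gives ~1.4e-10 s and a ratio of ~40,000, consistent with the
      # "at least 30,000" figure above.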

    Thermalization explains why CO2 (or any other gas which does not condense in the atmosphere) has no significant effect on climate. Discover what does (98% match with measurements 1895-2015) at http://globalclimatedrivers2.blogspot.com

    • Dan

      Assuming all of the above is correct, what specific calculations, concepts or physics did Tyndall, Arrhenius and Callendar get wrong? For greater understanding by a financial guy. Thanks.

        ceres – I am unfamiliar with Callendar. I have Arrhenius’ paper on my computer, read it some time ago, and briefly scanned it just now. Arrhenius and Tyndall IMO did wonders considering that they did it before 1896. Their work predates the Kinetic Theory of Gases as we now understand it. Accurate assessments of the details of absorption wavelengths for water vapor and CO2 had not yet been made; they simply did not yet have the necessary instruments. They apparently were not aware that water vapor declines comparatively rapidly to near zero above about 10 km. They had no clue of the relative elapsed time of molecular relaxation compared to collision between molecules.

        “what specific calculations, concepts or physics did Tyndall, Arrhenius and Callendar get wrong?”

        Arrhenius did estimates by latitude band for ocean and land; his tropical calculations overestimated CO2 impact by quite a bit. Callendar left about 17% of the effect unexplained, which might be more a feature than a bug.

    • Dan,

      Just as a matter of interest, some GHG proponents claim that CO2, for example, can only absorb and emit photons of a very particular wavelength.

      This leads to the interesting situation where CO2 absorbs and subsequently emits, exactly the same energy. The laws of thermodynamics relate to energy neither being created nor destroyed.

      If a molecule absorbs and emits an identical amount of energy, the resultant effect on the molecule is precisely and absolutely zero.

      No change, no movement, no phase change – nothing. There is no net energy change – all absorbed energy has been emitted. Any changes to a molecule require energy change – or magic.

      The GHG crew refuse to accept that CO2 (or any gas) can be heated to quite high temperatures by the simple expedient of compressing it. No photons of the required wavelength magically leap from the compressor piston to heat CO2, while different photons are magically emitted to heat, say, O2.

      If you can’t find a compressor, friction works just as well as a heat source – no specific wavelengths needed.

      Cheers.

      • Mike Flynn commented:
        “Dan, If a molecule absorbs and emits an identical amount of energy, the resultant effect on the molecule is precisely and absolutely zero.”

        More Flynn idiocy, where, instead of trying to understand the science, he believes his misconception is right and that millions of scientists are wrong.

        THAT’S the phenomenon I’d like more insight on.

      • David Appell,

        You might care to quote just one of the “millions of scientists” who claim that a molecule which has absorbed and emitted an identically energetic photon can be distinguished from an identical molecule which has not interacted with a photon during the same period of time.

        That’s just a long winded way of saying what I said before.

        Now you might be so good as to provide a quote from a scientist who claims that the law of the conservation of energy does not apply to the situation where an emitted photon of exactly the same energy as one previously absorbed somehow creates additional energy which affects the molecule which absorbed and emitted the photons concerned.

        In a group of CO2 molecules, which ones have absorbed and emitted photons of identical wavelength? Have those molecules moved, perhaps? No, that requires energy, and precisely as much has been emitted as absorbed (at least according to the GHG proponents).

        Maybe the molecules changed shape? No, that would require energy.

        Got hotter? No, once again, requires energy, and none was left after the absorption and emission process.

        So, once again, name one of the millions of scientists who claims that the principle of conservation of energy doesn’t apply to CO2. A peer reviewed paper to this effect would help, I suppose.

        Cheers.

      • Mike Flynn: Have you ever tried to educate yourself, by, say, reading a textbook on climate science?

      • Mike Flynn wrote:
        “So, once again, name one of the millions of scientists who claims that the principle of conservation of energy doesn’t apply to CO2”

        Why don’t you go learn something before being so arrogant as to dismiss all known science from a position of ignorance?

      • David Appell,

        It seems that your millions of scientists resemble the Indian Rope Trick – everyone knows about them, but you can never see one.

        You may choose to believe that the principle of conservation of energy can be disregarded at will, but it remains unlikely that you can find millions of scientists who will support you.

        The fact remains that a molecule which emits a photon identical to one it previously absorbed cannot change in any way which requires energy left over from the absorption/emission pair, as there is none available. No free lunch, no perpetual motion.

        Keep on with the ad hominem attacks. Do you think it might indicate you can’t produce any facts to contradict me?

        Your appeal to authority of millions of scientists seems to have fallen on deaf ears. So sad.

        Cheers.

        Flynn: I haven’t done any ad hominem attacks. If I ever do one, you will be sure to recognize it.

        You are incapable of explaining where chaos has happened in Earth’s climate history. You are unable to read climate textbooks, and make very ignorant statements that ignore 150 years of science. And when challenged, you whine about it.

        You do this everywhere, not just here.

      • > I haven’t done any ad hominem attacks.

        Here, DavidA:

        Have you ever tried to educate yourself, by, say, reading a textbook on climate science?

        This presumes that MikeF is uneducated, and it conveys that his anticlimatic positions are based on his lack of climate science education.

        Not only does the rhetorical question carry a personal attack, but it functions as an ad hominem argument, since it attacks MikeF’s competence instead of his arguments.

        You’re welcome.

        Please chill. You have more than 100 comments right now on this thread. That’s more than 20%.

      • No williard, it’s not. I simply asked if Flynn has ever read a climate science textbook. It’s very clear that he has never bothered to do this, based on questions (and bragging) he’s done here and elsewhere.

        What to say about someone who dismisses climate change, when it’s clear he doesn’t understand its basics and has clearly made no effort to educate himself?

      • David Appell,

        If you can point to a single instance where I have indicated belief that the climate does not change, I’d be surprised, as I’ve never said such a thing.

        Maybe you could misquote me, or make something up, but that’s not the same thing. The weather – and the average of weather, climate – has been changing for as long as the atmosphere has existed.

        Your appeals to your own authority, and your reluctance to accept statements from the IPCC, Feynman, Lorenz, Tyndall, and a host of others as reasonable, seem to smack of fanaticism – just a little.

        I’m guessing that few care what you think of me, and possibly vice versa. I prefer facts (or a reasonable simulacrum – the truth is out there, of course). If you can produce new facts, I’ll change my mind.

        What about you?

        Cheers.

      • “If you can point to a single instance where I have indicated belief that the climate does not change,”

        Typical misdirection from you, like I’ve seen elsewhere.

        You’ve been harping about chaos. Point to an example of it in Earth’s climate history.

        Tell us whether you’ve read a climate science textbook, and tried to answer your own simplistic questions.

      • > No […] it’s not.

        That’s a powerful argument you got there, DavidA.

        Your ball is not MikeF. Your rhetorical question targets MikeF. Your argument is therefore invalid.

        Since you like to ask questions, pray tell: how much is 106/504?

        Mike – Energy is conserved. Have you noticed humid nights cool more slowly than when the air is dry? That is because the energy in the radiation from the surface is thermalized (mostly by water vapor), warming the air. There simply is not enough time, by a factor of about 30,000, for the molecule to emit a photon.

        The thermalized energy carries no identity of the molecule that absorbed it. At low altitudes, any photons that are emitted from the jostling molecules are thousands of times more likely to be absorbed by water vapor molecules than CO2 molecules. At extremely high altitudes, ca. 30 km, the situation reverses: there are very few water vapor molecules, and the time between collisions becomes comparable to or longer than the relaxation time, so CO2 radiant emission to space dominates.

      • “That is because the energy in the radiation from the surface is thermalized….”

        What does that even mean, thermalized?

        David Appell said something like: “Have you ever read a climate science textbook?”

        Tell-tale comment from someone with no actual arguments, reduced to ad hominems.

      • David Appell
        You ask Dan
        What does that even mean, thermalized?

        According to Dan’s comments just above you are supposedly reacting to:
        The process of a molecule absorbing the energy in a photon and conducting the energy to other molecules is thermalization.

        Do you dispute this, or did you just not bother reading what you responded to? (And is that maybe what the climate science textbooks you maybe read recommend?)

      • “You do this everywhere, not just here.”
        David:
        There’s a simple solution…..
        Don’t read and certainly don’t reply to him/her.
        I don’t. The rabbit-holes are deep enough on here and other places without engaging with his rude sky-dragon persona.
        If you’re reading MF then …. If you say so.

      • “Even if it is saturated, there could still be a problem couldn’t there? Heat could in theory be accumulating and hiding in the deep oceans where we still can’t measure it.”
        Punksta:
        Yes, OHC is undoubtedly rising but we are measuring it (never well/long/widely/consistently/accurately/etc enough to satisfy sceptics however).
        This is why ECS is so important, and not TCS.
        That heat is not hidden forever.
        PDO/ENSO shows us that it comes back after a period of storage.
        That extra heat is coming from AGW.
        Look at a graph of ENSO vs ave GMT.
        Rising for all events (Nino and Nina) – a modulation on TOP of the AGW signal.

      • Tony
        It’s not only sceptics that have little faith in the OHC data, it’s people like Trenberth too, although they try not to mention this in public.

        So on what do you base your faith in OHC data? Do you think it is as good as atmospheric data? If so, why is atmospheric data almost always the one that is cited, even though the thermal capacity of the oceans is 1000 times that of the atmosphere, so presumably 1000 times more significant?

        And why, if you think OHC data is good, do you say the heat can’t be hidden forever? Either the heat is measured, or it is hidden, surely ?

        The problem is that what Trenberth would do with his reservations about pre-ARGO OHC and what you would do with those reservations are worlds apart. He’s smart…

      • JCH
        So you think we have robust OHC data on the deep oceans ? Of similar quality to atmospheric data ?

      • Punksta commented:
        “So you think we have robust OHC data on the deep oceans ?”

        Johnson et al Nature 2016 found deep ocean (> 1800 m depth) heating of 0.07 ± 0.04 W/m2 over 2005-2015.

        The top ocean (< 1800 m) heating was 0.61 ± 0.09 W/m2 over the same time period.

        So only about 10% of ocean heat seems to be going into the deep ocean.

        NATURE CLIMATE CHANGE | VOL 6 | JULY 2016
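
        The roughly 10% figure is just the ratio of those two fluxes; for clarity:

          deep, top = 0.07, 0.61   # W/m^2, figures quoted above
          print("deep-ocean share: %.0f%%" % (100.0 * deep / (deep + top)))
          # Prints ~10%, the share stated above.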

    • Thermalization explains why CO2 (or any other gas which does not condense in the atmosphere) has no significant effect on climate.

      Quite the contrary, thermalization explains why CO2 *does* have an effect. Because it shows that any outgoing radiation absorbed by CO2 or other GHG will generally be passed back to the atmosphere before being re-radiated. So it takes that much longer for the energy to escape to space. *Much* longer.

      If you think your results are correct, you really should try to publish them. But… I’m sorry, you’re fighting against 150+ years of science. This is one of those cases where you’ve got it wrong, and the tens of thousands of other scientists have it right.

      • Ben – The key word is ‘significant’. Water vapor has thousands more ‘opportunities’ for absorption than CO2 and once absorbed and thermalized, the WV ‘hogs’ the radiation energy. So, yes CO2 probably has an effect, it’s just so tiny it is lost in the ‘noise’.

        The science I am using is part of that 150+ years of science that you assert I am “fighting”. Actually I am agreeing with the conclusion of tens of thousands of scientists and engineers. However, I don’t know of anyone else who has achieved a 98% match 1895-2015 as detailed at http://globalclimatedrivers2.blogspot.com.

      • “Water vapor has thousands more ‘opportunities’ for absorption than CO2 and once absorbed and thermalized, the WV ‘hogs’ the radiation energy. So, yes CO2 probably has an effect, it’s just so tiny it is lost in the ‘noise’.”

        No, it most certainly does not hog anything. How can you even say that? There is a proportion of LWIR that (in your terms) “gets thermalised” by CO2 IN ADDITION to that by WV. It matters not that WV has “more opportunities”, as those opportunities are an average constant for any given average global temp. What does matter is that CO2 concentration is increasing and adds to those “more opportunities”, with especial effect in the dry upper atmosphere and over deserts/poles. And please don’t come back with the CO2-is-saturated-in-the-lower-atmosphere myth.

      • deserts

        Deserts can cool near 40F in a night; CO2 isn’t causing any accumulation of heat in the deserts.

      • Tony
        And please don’t come back with CO2 is saturated in the lower atmosphere myth.
        Even if it is saturated, there could still be a problem couldn’t there? Heat could in theory be accumulating and hiding in the deep oceans where we still can’t measure it.

        Deserts can cool near 40F in a night; CO2 isn’t causing any accumulation of heat in the deserts.

        Well, that’s true – but only at the surface.

        A very long time ago, when global warming arguing took place on something called Usenet news, I made such a case – that dewfall meant CO2 was not a limiting factor, because dew releases latent heat of condensation, so dew, not CO2, constrained minimum temperatures.

        The reason this is not strictly applicable is that at the tropopause, RF does appear. Here is a calculation of 2xCO2 RF for months in 2012. Observe the Sahara Desert (or any of your favorite deserts). There is seasonal and spatial variation, but for the entirety of the troposphere, as measured at the tropopause, RF occurs most everywhere.

      • at the tropopause, RF does appear.

        So what are they complaining about the surface temp for then?

        “Deserts can cool near 40F in a night; CO2 isn’t causing any accumulation of heat in the deserts.”

        The answer as to why lies in Meteorology my friend.
        Not in AGW science.

      • Not in AGW science.

        Right.

    • “Thermalization explains why CO2 (or any other gas which does not condense in the atmosphere) has no significant effect on climate.”

      Observations disagree:

      “Observational determination of surface radiative forcing by CO2 from 2000 to 2010,” D. R. Feldman et al, Nature 519, 339–343 (19 March 2015)
      http://www.nature.com/nature/journal/v519/n7543/full/nature14240.html

      They found CO2’s forcing increasing about +0.2 W/m2/decade.

      • David Appell
        They found CO2’s forcing increasing about +0.2 W/m2/decade.

        How exactly did they “observe” the specific contribution of added CO2?

        Or to put it another way : how did they finally solve the attribution problem ?

      • Even the skeptics accept the attribution of forcing, at least the ones that know physics.

      • Jim D
        Ok, then maybe you can tell us how exactly they directly measured the specific contribution of added CO2 (aka the attribution problem).

      • They measured the change of radiation at different wavelengths, and those changes exactly matched what increased CO2 should do.

      • So they merely measured a change in radiation. They did not do anything remotely like measure the warming this caused, as wrongly implied above.

      • They are talking about forcing not warming. There is a difference. While it is obvious that an increase in forcing leads to warming, that is not what this study was about. It was only about how increasing CO2 increases the forcing.

      • Even the skeptics accept the attribution of forcing, at least the ones that know physics.

        Well, everything else being equal, CO2 imposes RF.

        But, we should drag along the fact that no one knows what albedo was in the past.

        And no one knows what albedo is to an accuracy or precision less than CO2 RF – it’s just assumed to be constant.

        There’s not a compelling case to believe it was different, but no one knows, because it wasn’t measured and isn’t measurable very well to this day.

      • We do know that less snow and ice means lower albedo, so that is another positive feedback there,

      • We do know that less snow and ice means lower albedo, so that is another positive feedback there,

        Yes.

        But you don’t know what albedo is today, or whether it is less than, greater than, or equal to what it was 100 yr. ago.

      • Aerosols have increased the albedo, but not enough to offset the warming due to GHG increases in the last century or so. Later estimates seem to even downgrade the net aerosol effect.

      • Jim D wrote:
        “They are talking about forcing not warming. There is a difference. While it is obvious that an increase in forcing leads to warming, that is not what this study was about. It was only about how increasing CO2 increases the forcing.”

        If an increase in forcing means warming, and CO2’s forcing is increasing, then CO2 is causing warming. No, we don’t know the rate of that warming from just the forcing, but the Feldman et al forcing increase is equal to what global climate models calculate.

      • Punksta wrote:
        “So they merely measured a change in radiation. They did not do anything remotely like measure the warming this caused, as wrongly implied above.”

        I didn’t say it caused warming. I said it showed CO2 has a significant effect on climate, which the OP denied.

        0.20 W/m2/decade is a good amount of energy, comparatively. Unless you think energy doesn’t cause heating, it does cause climate change.

      • Punksta wrote, in response to David Appell:
        “How exactly did they “observe” the specific contribution of added CO2 ?
        Or to put it another way : how did they finally solve the attribution problem?”

        Read the paper. That’s why I cited it.

      • Dav – It appears they did not consider the increase in water vapor over that decade. The global effect of increased WV (TPW) has an influence of about 0.1 +/- 0.04 W/m2 (according to my EXCEL model) which might have produced the effect that they assumed was due to CO2 at 0.2 +/- 0.06.

      • Dan wrote: “It appears they did not consider the increase in water vapor over that decade.”

        Of course they did — they’re not idiots.

      • “No, we don’t know the rate of that warming from just the forcing, but the Feldman et al forcing increases is equal to what global climate models calculate.”

        The Feldman et al paper is a very good paper on the face of it. I find it credible in demonstrating that there is indeed a positive clearsky surface forcing from CO2. No doubt the skydragons will eventually find some way to explain the results.
        The Feldman results translate into 2.5 W/m2 for a doubling of CO2, which seems reasonable. Using (the same) HITRAN code to reverse-engineer the surface forcing back to an instantaneous TOA forcing should yield about 4 W/m2 at TOA. I haven’t checked this, but again it seems quite plausible.
        What is not credible is your gratuitous additional comment that the forcing increase is equal to what global climate models calculate. Instantaneous forcing values for CO2 in the GCMs vary by a factor of 3. Indeed Forster has argued that this variation explains more of the difference in GCM results than the (between-model) variation in effective climate sensitivity. So when you say Feldman’s forcing is equal to what global climate models calculate, which particular climate model do you have in mind?
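
        A rough way to see how the per-decade figure maps to a per-doubling figure, assuming a simple logarithmic forcing law F = a*ln(C/C0) and rounded CO2 values for 2000-2010 (about 370 to 390 ppm; these are illustrative, not taken from the paper):

        import math

        c0, c1 = 370.0, 390.0              # ppm, approximate decade endpoints
        dF_decade = 0.2                    # W/m2 observed surface forcing increase

        a = dF_decade / math.log(c1 / c0)  # implied forcing coefficient
        f2x = a * math.log(2.0)            # implied forcing per CO2 doubling

        print(f"a ~ {a:.2f} W/m2, F_2x ~ {f2x:.2f} W/m2 per doubling")
        # -> roughly 2.6 W/m2 per doubling, consistent with the ~2.5 W/m2 above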

      • kribaez wrote:
        “What is not credible is your gratuitous additional comment that the forcing increase is equal to what global climate models calculate.”

        From the Feldman paper:
        “These results confirm theoretical predictions of the atmospheric greenhouse effect due to anthropogenic emissions, and provide empirical evidence of how rising CO2 levels, mediated by temporal variations due to photosynthesis and respiration, are affecting the surface energy balance.”

        See their references within.

  62. Dan,

    I agree with the thrust of your argument, even though I might attempt to reduce it to Feynman’s assertion that all non-nuclear processes can be described in terms of –

    “An electron moves from place to place.
    A photon moves from place to place.
    An electron absorbs and emits a photon” – or something very much like it.

    The GHG crew latch on to things which are true, but irrelevant, and point out things like –

    “The energy from the photon causes the CO2 molecule to vibrate. Shortly thereafter, the molecule gives up this extra energy by emitting another infrared photon. Once the extra energy has been removed by the emitted photon, the carbon dioxide stops vibrating.”

    More or less true. The key point here is that after the energy has been removed, the CO2 molecule is indistinguishable from any other. The number of photons is unchanged. The energy they carry is unchanged. In this case, the effect of the CO2 on the amount of energy reaching a target from the source is precisely and absolutely zero.

    Of course, there is more to the story than the simplistic, but misleading, quote. CO2 can be warmed – by compression, friction, or other methods. Once warmed, it cools down by radiating energy. Left alone, all the way to 0 K.

    Radiative transfer equations can quantify the energy lost as radiation is transmitted through gases. Chlorine, for example, appears greenish in white light because it absorbs other wavelengths and transmits those we perceive as greenish. The absorbed photons obviously don’t carry their energy through the gas.

    Non visible light reacts with matter such as gases in measurable ways, following the same physical rules. Just because we can’t see it doesn’t mean it doesn’t happen.

    You are probably aware of all this, but I am reasonably certain many others aren’t. Obviously, I appreciate correction if I have erred in fact. I am aware that I have probably over simplified some things.

    Cheers.

    • Mike Flynn wrote:
      “The key point here is that after the energy has been removed, the CO2 molecule is indistinguishable from any other. The number of photons is unchanged.”

      The FLUX of photons is changed. IR photons are emitted upward by the surface, and would escape to space if nothing absorbed them. CO2 and other GHGs absorb them, and then emit them in a random direction (since the molecules’ orientations are random). Hence some of them go down, or have a downward velocity component. Those warm the atmosphere below, and the surface.

      See now?
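
      A minimal sketch of that point, assuming nothing more than isotropic re-emission (an illustration, not a radiative transfer model): sample random emission directions and count the downward ones.

      import random

      n, down = 1_000_000, 0
      for _ in range(n):
          # for an isotropic emitter, cos(zenith) is uniform on [-1, 1]
          if random.uniform(-1.0, 1.0) < 0.0:
              down += 1

      print(f"fraction emitted downward: {down / n:.3f}")  # ~0.5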

  63. Validation.

    Internal checks are all very well, but can never tell you if there’s something you just plain missed. Why isn’t more funding given over to external checks, better measurements?
    – complete ocean heat content
    – toa energy budget in absolute terms

    Only when we have robust data on these, can we really say if / how much global heat tracks CO2 levels.

    • Sorry, you’re not going to get your absurd way.

    • JCH
      So the notion of validating models by real-world measurements is absurd?

      • No, the notion that V&V hasn’t been ongoing since the beginning is absurd.

      • Read the GISS Model E instructions –

        “Rerun that year with the new values and hope that the annual mean net heating at the ground averages out to less than .3 W/m2 in absolute value over the next few years.”

        In other words, just keep adjusting inputs until you get the desired result.

        Instant gratification, validation and verification! How convenient!

        Does away with all that bothersome scientific process nonsense.

        Cheers.

      • Mike Flynn,

        In other words, just keep adjusting inputs until you get the desired result.

        Which is radiative balance when ocean data are prescribed.

        Keep hunting, maybe some day you’ll find teh s00per $ekrit c0dez where they dial in ECS to “ZOMG most catastrophic!!!!!111111”.

      • Mike Flynn commented:
        “In other words, just keep adjusting inputs until you get the desired result.”

        When does that ever NOT happen in science? We don’t use Millikan’s oil drop value for the electron charge. Einstein went through several versions of his field equations for general relativity before getting them right. Planck *guessed* at E=h*nu.

      • Mike has a good point of course. It’s one thing to adjust a turbulence model to get a better boundary layer profile. You do that in the fine grid limit and for a simplified problem. If there are thousands of parameters, it is hard to call such adjustments science. The parameter space is too large for statistical methods. It is certainly not based on the “laws of physics.”

        It takes decades even for simple eddy viscosity models to settle down, and controversy over adjustments will probably never be fully settled. Everyone has their own tweaks and discretization methods.

    • Punksta wrote:
      “Why isn’t more funding given over to external checks, better measurements?
      – complete ocean heat content
      – toa energy budget in absolute terms”

      The first is already funded and being built now. The second is updated and published regularly.

  64. Tony Banton,

    /humour on

    I support your advice to just ignore anybody who disagrees with you. Facts are irrelevant. Faith is all.

    Keep believing! The luminiferous ether is real! The atom is indivisible! The continents don’t move! Stomach ulcers are caused by stress and spicy food! GHGs have magical warming properties!

    Ignoring inconvenient truth will make it go away.

    Or not.

    /humour off

    Cheers.

    • 100 Years of Phlogiston as PC Science?

    • Keep believing! The luminiferous ether is real! The atom is indivisible! The continents don’t move! Stomach ulcers are caused by stress and spicy food! GHGs have magical warming properties! …

      More abject tripe from the gimmick.

    • Mike Flynn wrote:
      “Keep believing! The luminiferous ether is real! The atom is indivisible! The continents don’t move! Stomach ulcers are caused by stress and spicy food!”

      You do know, right(?) that just referring to such things does not make AGW false — you have to actually do the work of disproving it. Lists like these seek to skip the work part and just declare whatever one doesn’t like to be false, because ulcers.

      “GHGs have magical warming properties!”

      This is funny, because Mike Flynn has written elsewhere that he knows perfectly well how the greenhouse effect works, by reducing the rate of heat loss:

      Mike Flynn wrote:
      “In cold conditions, I wear clothes to reduce the rate of heat loss.”

      http://www.drroyspencer.com/2016/08/uah-global-temperature-update-for-july-2016-0-39-deg-c/#comment-219326

  65. Back to the subject. Since when did numerical models, and other such computer approximations, become the laws of physics? Have the fundamental rules of hypothesis, experiment and accepted theory changed since I studied? The more I read on climate models, the less useful they look as reliable predictive tools, and the less certain their dependencies become. Not good enough for an engineer to build anything on that would work predictably. They seem more and more like empirical neural nets, or numerical models fuelled more by the availability of computers powerful enough to run them, and by pure mathematicians, than by absolute science; and less like the predictive rate-control transfer functions of chemical process modelling that I understand to be real science. And even those struggle, in highly controlled and repeatable environments.

    These same “scientists” seem desperate to find corroborating evidence for their models, AKA hypotheses, and even more desperate to deny contrary evidence and attack its publishers rather than address the criticisms. The opposite of real science.

    QUESTION: If you clever empirical manipulators of data sets decided to force the models to blame aerosols, cloud cover or whatever it is, could another variable be blamed for “climate change”, with the use of the modeller’s 2×4 (or 4×2 outside the USA)? I bet it could – sorry, that is my hypothesis.

    I also present a Nobel Prize-winning physicist from outside the climate-change industry giving his views to his fellows. I’m a bit thick and old-fashioned about science; my engineering personality expects me to be able to build it myself from the proven laws. So, it seems, he must be too, according to climate scientists. https://www.youtube.com/watch?v=fYpxBSV8Qqw But what does he know?

    • “These same “scientists” seem desperate to find corroborating evidence for their models, AKA hypotheses, and even more desperate to deny contrary evidence and attack its publishers rather than address the criticisms. The opposite of real science.”

      Desperate??
      No, just digging deeply into the truth of the science.
      It’s what they do.
      I don’t know about you my friend but I tend to think that it’s a bit pointless cheating on myself when I play Patience.

      “…could another variable be blamed for “climate change””
      No.
      You need to find the “variable” that is either inputting more energy, or that is keeping it in.
      We have – with greater than 95% prob.
      That’s the settled bit.
      What is unsettled is how much.

      Your Nobel Scientist:
      A quote from Ivar Giaever…..

      “I am not really terribly interested in global warming. Like most physicists I don’t think much about it. But in 2008 I was in a panel here about global warming and I had to learn something about it. And I spent a day or so – half a day maybe on Google, and I was horrified by what I learned. And I’m going to try to explain to you why that was the case.”

      Perhaps he Googled WUWT.
      That’ll be like most of the other D-K sceptic types then.
      Oh, and his opinion is just that.
      Why would you even consider it valid when you read his above statement?
      Does a Nobel in one field mean he is correct in any other, simply by “Googling it for half a day”?

      • stevenreincarnated

        You have found a source. That doesn’t mean it is the only source or even the dominant source. Therefore the correct answer to the question “…could another variable be blamed for climate change?’ is yes. A small change in poleward ocean heat transport could cause the amount of warming we have experienced.

      • ” Therefore the correct answer to the question “…could another variable be blamed for climate change?’ is yes. A small change in poleward ocean heat transport could cause the amount of warming we have experienced.”

        Well OHC is increasing, yes – but the Arctic is warming faster than anywhere else on the planet.
        Your suggestion would exhibit the opposite of that.

      • stevenreincarnated

        No it wouldn’t. There are many model studies on changes in OHT. Find a few and don’t rely on the garbage you read on web sites.

    • brianrlcatt wrote:
      “The more I read on climate models the less useful as reliable predictive tools the models become”

      That’s because (for the Nth time) climate models don’t make predictions about the future, and can’t. They can only make projections.

  66. Navier-Stokes seems an interesting case study for Laws of Physics vs GCMs. An expression of conservation principles, but in incompressible form just a convenient working approximation. Appeal to authority (e.g. WJ Rider at Sandia Labs) suggests that, while useful, incompressibility kills off important features of turbulence. Bit of a drawback for weather/climate, that. Among the many parameterizations in GCMs there must therefore be some for generating turbulence features. And parameterizations seem mostly somewhat subjective and empirical. Weather forecasting seems to confirm this, since the forecast models allegedly all start from the same hydrostatic approximation to NS, yet performance over 7-day horizons varies markedly. This is all before getting to the numerical solution difficulties well described in the post.

    • Makes you wonder just what the Global Mean Wind Speed is at 33 feet above ground level… slippery, I’d bet.

      • Yep, the state of the flow at the 10 meter height is used in empirical correlations to estimate heat, mass, and momentum exchanges at the surface. Possibly to estimate the temperature at the surface, too. That height is used because it is typically where the data were collected when the correlations were built. Basically, it’s an extrapolation down to the surface.

        An interesting question: how is the state of the flow at 10 meters estimated?
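
        A minimal sketch of the standard neutral-stability log law often used for such height adjustments (the roughness length z0 here is illustrative, not a value from any GCM):

        import math

        KAPPA = 0.4                        # von Karman constant

        def u_star_from_obs(u10, z0, z_obs=10.0):
            """Friction velocity implied by a wind speed observed at z_obs."""
            return KAPPA * u10 / math.log(z_obs / z0)

        def wind_at(z, u10, z0):
            """Log-law wind speed at height z, given the 10 m observation."""
            return u_star_from_obs(u10, z0) / KAPPA * math.log(z / z0)

        print(wind_at(2.0, u10=5.0, z0=0.03))  # ~3.6 m/s over grassland-like roughness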

  67. More Code from the source for the version of the GISS ModelE code used for AR5 simulations.

    Note that the comment material says, “Rather than requesting the user to supply molecular radius we specify here a generic value of 2.E-10 m for all molecules”.

    However, the PARAMETER statements set the radius of the diffusing gas to 1.5D-10 m and the radius of the air molecule to 1.2D-10 m. Neither is 2.E-10 m.

    A rule of thumb is that if the comment and the coding don’t agree, then both are usually wrong.

    (The code will very likely be messed up again. Paste in your fav text editor to get a better view)

    C=====================================================================
    C  This function calculates the molecular diffusivity (m2 s-1) in air
    C  for a gas X of molecular weight XM (kg) at temperature TK (K) and
    C  pressure PRESS (Pa).
    C  We specify the molecular weight of air (XMAIR) and the hard-sphere
    C  molecular radii of air (RADAIR) & of the diffusing gas (RADX). The
    C  molecular radius of air is given in a Table on p. 479 of Levine
    C  [1988].  The Table also gives radii for some other molecules. Rather
    C  than requesting the user to supply molecular radius we specify here
    C  a generic value of 2.E-10 m for all molecules, which is good enough
    C  in terms of calculating the diffusivity as long as molecule is not
    C  too big.
    C======================================================================
    !@param XMAIR air molecular weight (KG/mole)
    !@param AVOG Avogadro's number (molecules/mole)
    !@param RADX hard-sphere molecular radius of the diffusing gas
    !@param RADAIR hard-sphere molecular radius of air
    !@param PRESS pressure (kg/s2/m) used to calculate molec. diffusivities
    !@var TK passed local temperature in Kelvin
    !@var XM molecular weight of diffusing gas (KG/mole)
    !@var Z,DIAM ?
    !@var FRPATH mean free path of the gas
    !@var SPEED average speed of the gas molecules
    !@var AIRDEN local air density
          REAL*8, PARAMETER :: XMAIR = mair * 1.D-3,
         &                     RADX  = 1.5D-10,
         &                     RADAIR= 1.2D-10,
         &                     PRESS=  1.0d5
          REAL*8, INTENT(IN):: TK,XM
          REAL*8 :: Z,DIAM,FRPATH,SPEED,AIRDEN     
    
    C* Calculate air density AIRDEN:
          AIRDEN = PRESS*avog*bygasc/TK ! can't we get this from the GCM?
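
    For reference, here is a minimal kinetic-theory sketch in the same spirit (Python, not the ModelE Fortran), using the common estimate D ~ (1/3) * mean free path * mean speed, which shows why the choice of radii matters:

    import math

    KB, GASC = 1.380649e-23, 8.314462618   # Boltzmann and gas constants

    def diffusivity(tk, xm, radx, radair, press=1.0e5):
        """Hard-sphere molecular diffusivity (m2/s) of gas X in air."""
        airden = press / (KB * tk)                           # molecules per m3
        diam = radx + radair                                 # collision diameter (m)
        frpath = 1.0 / (math.sqrt(2.0) * math.pi * diam**2 * airden)
        speed = math.sqrt(8.0 * GASC * tk / (math.pi * xm))  # mean molecular speed (m/s)
        return frpath * speed / 3.0

    # TK = 288 K, XM = 0.048 kg/mole (an illustrative diffusing gas).
    # Radii as set in the PARAMETER statements vs the comment's "generic" 2.E-10 m:
    print(diffusivity(288.0, 0.048, radx=1.5e-10, radair=1.2e-10))
    print(diffusivity(288.0, 0.048, radx=2.0e-10, radair=2.0e-10))
    # The second is less than half the first: D scales as 1/diam**2, so the
    # comment/code discrepancy is not cosmetic.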
    

    • A wonderful toy, the GISS model E.

      Unfinished, and not to be relied upon for anything except Government work, by the look of it.

      “Model development is an ongoing task. As new physics is introduced, old bugs found and new applications developed, the code is almost continually undergoing minor, and sometimes major, reworking. Thus any fixed description of the model is liable to be out of date the day it is printed.”

      In other words, anything produced up to and including yesterday is likely to be out of date.

      From “How to tune a specific ocean run . . .”

      “Rerun that year with the new values and hope that the annual mean net heating at the ground averages out to less than .3 W/m2 in absolute value over the next few years.”

      You hope it gives you what you wanted – if not, use new values until you get the desired result. Tell any critics that model development is an ongoing task. New physics is continually being introduced. The science is neither settled nor unsettled, so we need more funding to learn the new physics, after we get around to properly understanding the old physics.

      Before anyone tears into me for being a little cynical, you might care to peruse the code and the appropriate documentation. Feel free to laugh, cry, or descend into a state of deep and persistent depression!

      Cheers.

  68. Craig,

    ATTP: yes, that is correct.

    Okay, thanks. So, you seem to be explicitly arguing against emission reductions on the basis of ECS probably being low and because of the possible damage it might do to developing nations (correct me if that is not your position). What if you’re wrong?

    • What if you’re wrong?

      Huh?

      You have no proof you are right. It is like claiming a 1 km asteroid will strike in the next 10 years. CAGW is possible but not likely.

      We have had a 1°C temperature rise since 1900. At least 1/2 (more likely 2/3rds or 3/4ths) is due to other causes than GHG.

      The measured IR forcing of 0.2 W/m2 per 22 ppm indicates a low forcing, about 28% of the post-1900 rise.

      The 3°C is really 2°C above current temperatures. It assumes 2-3 times the actual forcing and twice the likely CO2 rise.

      Further, warmunists are still grasping at straws to explain why 2°C (3.6F) above current temperatures is bad, let alone a fraction of that amount.

      • PA,

        You have no proof you are right.

        Neither do you. That’s kind of the point.

      • At least 1/2 (more likely 2/3rds or 3/4ths) is due to causes other than GHGs.

        Do you mean causes completely unrelated to GHGs?

      • attp, “Do you mean causes completely unrelated to GHGs?”

        I believe that he means unrelated to CO2, black carbon and other natural/manmade aerosols, longer than anticipated ocean response times to change and variations in land use caused by man and nature. So you have to reach for your “earth system sensitivity” to various “forcings” or something other than the standard definition of climate “sensitivity” to explain things.

      • capt,
        Well, then that brings me back to my original question. The evidence currently suggests that most (>50%, and maybe all of it) of our current warming is anthropogenic. Arguing that most is not anthropogenic then goes against what most of the evidence suggests. If one then argues against emission reductions on the basis of climate sensitivity probably being low, then one is basing this on a view that is at odds with most of the available evidence (it’s not impossible, but it is regarded as unlikely). So, what if you’re wrong?

      • then one is basing this on a view that is at odds with most of the available evidence (it’s not impossible, but it is regarded as unlikely).

        CS to solar is easy to measure, daily it’s about 0.02F/Wm^2, And seasonally it’s ~0.005F/Wm^2

      • attp, ” Arguing that most is not anthropogenic then goes against what most of the evidence suggests.”

        You are leaping too far again. If there was ~1 C of warming from circa 1700, then water vapor feedback, changes in “land use”, i.e. hydrology, plant types/desertification, and variations in CO2 sinks could be related to pre-industrial man, which is kind of ironic, don’t you think?

        Pre-industrial man was known to deforest, over-graze, slash and burn, over-populate regions then die out due to over-population, etc., which was likely less “earth friendly” than switching to fossil fuels and all of its ills.

        It really doesn’t matter if you “blame” pre-industrial man or not, you just cannot directly blame atmospheric CO2.

      • attp, What if you are wrong? There are a couple of papers that indicate that “Arab Spring” and tropical deforestation are related to green attempts to promote “sustainable” biofuels. Biofuels were not sustainable for pre-industrial man leading to wars, famines and forced migrations which are negative feedbacks to over-population and local “climate change.” So if your “solutions” driven by your sense of “urgency” lead to unintended population control, are you a hero or a villain?

      • attp, What if you are wrong?

        And don’t forget the lost opportunity costs wasting billions snipe hunting, that could go to other useful tasks.

      • So if your “solutions” driven by your sense of “urgency” lead to unintended population control, are you a hero or a villain?

        I haven’t suggested, or promoted, any solutions. I also fully agree that some of what might happen to “combat climate change” could do more harm than good. I’m asking those who seem to think that we shouldn’t reduce emissions because they think that CS will be low, to consider the implications of them being wrong.

      • attp, ” I’m asking those who seem to think that we shouldn’t reduce emissions because they think that CS will be low, to consider the implications of them being wrong.”

        I would say most don’t want to increase emission, they just expect a more reasonable approach to reducing emissions. Coal use in the US produces about 20% of all US GHG emissions and just focusing on efficiency and general emissions reductions would reduce that to about 10% and provide an efficiency/emission baseline or standard for developing nations which would have a far greater impact.

        Governments “picking” what they want to win stifles creativity and too much “urgency” leads to all sorts of poor choices.

      • Capt,

        Governments “picking” what they want to win stifles creativity

        That’s one of the motivations for a carbon tax.

        too much “urgency” leads to all sorts of poor choices.

        Sounds like it would be better to start sooner, rather than later, then.

      • attp, “Sounds like it would be better to start sooner, rather than later, then.”
        “It” started in the 1970s and has had an impact on the economy and wage inequality since the beginning while reducing “emissions”. So yes I have been a fan of starting early and not a fan of rushing things.

      • PA wrote:
        “We have had a 1°C temperature rise since 1900.”

        More to the point, we’ve had a 0.85°C temperature rise since 1970.

      • Capt,
        As far as actually reducing emissions, it hasn’t really started.

      • PA commented:
        “CAGW is possible but not likely.”

        It’s possible your house will burn down tomorrow, but not likely.

        But you still carry fire insurance.

        Unfortunately, no one is yet selling CAGW insurance.

      • micro6500 wrote:
        “CS to solar is easy to measure, daily it’s about 0.02F/Wm^2, And seasonally it’s ~0.005F/Wm^2.”

        Where did you get these numbers? Thanks.

      • Where did you get these numbers?

        I calculate them from NCDC global summary of days data set.

      • As far as actually reducing emissions, it hasn’t really started.

        Of course it has. Started decades ago.

      • Of course it has. Started decades ago.

        Reducing emissions mean emitting less than we once were.

      • Reducing emissions mean emitting less than we once were.

        Well, there’s certainly no hurry about that. What matters is the preparatory work.

        And that’s been in progress since the ’70’s.

      • attp, “Capt,
        As far as actually reducing emissions, it hasn’t really started.”

        I will cut you some slack since you are a Brit and a bit young, but the first CAFE standard in the US was in the mid 1970s following the oil embargo; higher fleet fuel efficiency reduces emissions. The first super and ultra-super-critical coal power plant designs started being taken seriously around the same time, 40+ percent efficiency versus 30 percent, with huge reductions in “other” emissions. The 1970s was the start of the “clean” nuclear power full-court press, which was derailed by many of the same whiners that are saying “emissions reductions haven’t started yet.” The derailing started before TMI in 1979 by anti-science activists like Fonda. When do you think combined cycle and co-generation started being “buzz” words?

        Just how “new” do you think some of the alternate energy research is?

      • Capt,

        I will cut you some slack since you are a Brit and a bit young

        Not only is that rather condescending, but I’m neither really a Brit, nor all that young. The beginning of this discussion related to reducing emissions. This specifically means emitting less, not emitting less than we might have. I’m not dismissing that we have indeed introduced things that have led to us emitting less than we might have, but that wasn’t the motivation behind my initial comment, which referred to actually reducing emissions.

      • attp, ” I’m not dismissing that we have indeed introduced things that have led to us emitting less than we might have, but that wasn’t the motivation behind my initial comment….which referred to actually reducing emissions.”

        Emitting less would be reducing emissions, and increasing efficiency is reducing emissions; what you appear to mean is reducing emissions in some way “you” find acceptable.

      • Emitting less would be reducing emissions, and increasing efficiency is reducing emissions

        Indeed, but globally we have yet to start reducing emissions. When I used the term “reducing emissions” I meant actually emitting less globally. The US is not the globe.

        what you appear to mean is reducing emissions in some way “you” find acceptable.

        No, this is not what I mean.

      • AK wrote:
        ATTP wrote: “As far as actually reducing emissions, it hasn’t really started.”
        “Of course it has. Started decades ago.”

        US CO2 emissions 1973: 4.7 Gt
        2015: 5.3 Gt

      • attp, “Indeed, but globally we have yet to start reducing emissions. When I used the term “reducing emissions” I meant actually emitting less globally. The US is not the globe.”

        Then you should appreciate setting higher efficiency AND emissions standards for any type of primary energy source or industrial activity should be a good thing. Because of the lack of focus and over-hyped sense of urgency, China and India have neglected that during their build out. China actually over built coal in order to have a bargaining position and generally avoided other environmental considerations to take advantage of outsourced “dirty” industry.

        Setting standards requires more than arm waving.

      • US and Japan peaked about 1 decade ago.
        Russia and Europe as a whole peaked about 2 decades ago:

        http://edgar.jrc.ec.europa.eu/img/part/co2_report_2015_009g_muc15.png

      • appell, “US CO2 emissions 1973: 4.7 Gt
        2015: 5.3 Gt.

        Right, basically a 10% increase while population increased by over 30%, which is a per capita reduction, while increasing agricultural output by nearly 200% and GDP by 300%.

      • T.E. That chart doesn’t appear to include the US net carbon sink which offsets close to 15% of emissions.

      • micro6500 wrote:
        “I calculate them from NCDC global summary of days data set.”

        How do you subtract out other factors relevant to temperature?

        The literature says the climate’s sensitivity to solar change is <~ 0.1 K/W/m2. A while back I found these papers, which find it at that order of magnitude:

        0.07 K/W/m2, Meehl et al, GRL 2014
        doi:10.1002/grl.50361

        0.03 K/W/m2, Feulner and Rahmstorf, GRL 2010, first example doi:10.1029/2010GL042710

        0.08 K/W/m2, Feulner and Rahmstorf, GRL 2010, second example doi:10.1029/2010GL042710

        0.07 K/W/m2, Song et al GRL 2010
        doi:10.1029/2009GL041290

        0.18 K/W/m2, Camp and Tung, GRL 2007 doi:10.1029/2007GL030207

      • It’s the effective sensitivity to solar, basically by surface station, though most reports contain dozens to hundreds of stations. But it’s all based on lat/lon to 3 decimal places.

      • “Then you should appreciate setting higher efficiency AND emissions standards for any type of primary energy source or industrial activity should be a good thing.”

        Maybe. Are you familiar with Jevons paradox? It’s only at about a present day living standard of Europe/US that per capita energy use starts to flatten out (though earlier in European countries that aren’t the energy hogs we are). US per capita energy use only started a meaningful decline in about 2000.

        https://qph.ec.quoracdn.net/main-qimg-c4af299e5b0b3c4de2aa6902a83cd75d?convert_to_webp=true

      • micro: But again, how did you correct for other factors that cause warming?

        The zero-dimensional model (1-albedo)S/4 = sigma*T^4 gives

        dT/dS = T/4S = 0.05 C/(W/m2)

        where S is the TOA solar irradiance.
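
        A quick numerical check of that result, using illustrative values:

        SIGMA, ALBEDO, S = 5.670374e-8, 0.3, 1361.0

        T = ((1.0 - ALBEDO) * S / 4.0 / SIGMA) ** 0.25  # ~255 K effective temperature
        dT_dS = T / (4.0 * S)

        print(f"T = {T:.1f} K, dT/dS = {dT_dS:.4f} K per W/m2")
        # ~0.047 K/(W/m2); using the 288 K surface temperature instead gives the ~0.05 quoted above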

      • But again, how did you correct for other factors that cause warming?

        What other factors cause warming?
        Technically the Sun causes warming; the only thing the atmosphere can do is block warming and/or slow cooling.
        So I make no corrections.

      • David:

        You had written in response to my SO2 “model”:

        “Burl, this isn’t evidence. It’s just a bunch of numbers. Where did they come from? It’s all no more than gibberish.”

        Let me enlighten you without using any numbers which you are either unable or unwilling to consider.

        At issue is whether climate change is due to increasing CO2 emissions or to decreasing SO2 emissions, and for the sake of the planet, we had better get it right!

        Between 1880 and the present there have been 28 business recessions. All are associated with temporary increases in average global temperatures.
        These increases can only be due to the reduction of SO2 aerosols emitted into the troposphere, owing to the reduced industrial activity during a recession. The resultant cleaner air allows sunshine to strike the earth with greater intensity, causing increased surface warming.

        Thus, UNINTENDED decreases in SO2 emissions will cause temporary increases in average global temperatures.

        Likewise, INTENDED decreases in SO2 emissions (due to Clean Air efforts) will also cause increases in average global temperatures. This is the cause of all of the warming that has occurred between 1975 and the present.

        And, as proven in my “model”, this warming is so large that it completely excludes the possibility of any additional warming due to greenhouse gasses.

        It is essential that the continued reduction of SO2 emissions be halted as soon as possible. At current rates of reduction (about 2 Megatonnes per year), the “2 deg. C. limit” established by the 2015 Paris Climate Conference will be reached within 25 years, or less.

        Temperature increases due to reduced SO2 emissions cannot be reversed, apart from re-polluting the air.

      • Dan, you only need to look at this TOA diagram to know that CO2 has a large effect on climate:

        http://www.giss.nasa.gov/research/briefs/schmidt_05/curve_s.gif

      • Dan, you only need to look at this TOA diagram to know that CO2 has a large effect on climate:

        That doesn’t mean anything.
        And in fact it doesn’t, because nightly cooling is nonlinear, depending on dew points.

      • Then you should appreciate setting higher efficiency AND emissions standards for any type of primary energy source or industrial activity should be a good thing.

        Seems reasonable. No idea why you think I wouldn’t.

      • appell, “Maybe. Are you familiar with Jevons paradox? It’s only at about a present day living standard of Europe/US that per capita energy use starts to flatten out (though earlier in European countries that aren’t the energy hogs we are). US per capita energy use only started a meaningful decline in about 2000.”

        Actually, per capita energy use peaked around 1978 in the US; there was a second drop around 2000. Most of that was due to CAFE standards and replacing smaller residential and commercial heating systems with “all electric” homes. So there was some Jevons paradox involved as far as energy usage, but the energy was cleaner.

      • “What other factors cause warming?
        Technically the Sun causes warming; the only thing the atmosphere can do is block warming and/or slow cooling.
        So I make no corrections.”

        You haven’t made at all clear what exactly you’re calculating, or how.

      • You haven’t made at all clear what exactly you’re calculating, or how.

        Oh. Daily solar watts/m^2 for each surface station I’ve included temps for, divided by the recorded difference between daily min and max temp (the rising temp), averaged by area, in this case all the land-based stations that record at least 360 records in a year.
        As well as the day to day average change in the extra tropics, from max warming to max cooling per day, and from max cooling to max warming per day. Then divide the day to day change in solar by the day to day change in temp.
        These I look at in lat bands.

      • Really wish I could edit posts.

        In the northern hemisphere the max day to day temp change happens in the spring, about the equinox. It changes from warming to cooling in early July, and then has the largest negative day to day change at about the fall equinox.
        Day to day change in watts/m^2, divided by the day to day 30 day running average change in temp.
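
        A minimal sketch of the ratios described above, assuming a daily station series in a pandas DataFrame with hypothetical columns “tmin”, “tmax” (deg F) and “solar” (that day’s accumulated W/m2); this illustrates the arithmetic only, not the actual code behind the quoted numbers:

        import pandas as pd

        def daily_ratio(df: pd.DataFrame) -> pd.Series:
            """Daily rising-temperature swing per unit of that day's solar input."""
            return (df["tmax"] - df["tmin"]) / df["solar"]

        def seasonal_ratio(df: pd.DataFrame, window: int = 30) -> pd.Series:
            """Day-to-day change in a 30-day running-mean temperature per
            day-to-day change in solar input (inverted from the literal wording
            above so the units come out in deg per W/m2, matching the quoted
            figures)."""
            dtemp = df["tmax"].rolling(window).mean().diff()
            dsolar = df["solar"].diff()
            return dtemp / dsolar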

      • micro6500 wrote:
        “That doesn’t mean anything.”

        Ha!

        It shows that the atmosphere is blocking a lot of infrared radiation, and at the absorption spectra of GHGs, including CO2.

        That energy has to go somewhere. It creates the greenhouse effect.

      • It shows that the atmosphere is blocking a lot of infrared radiation, and at the absorption spectra of GHGs, including CO2.

        That energy has to go somewhere. It creates the greenhouse effect.

        Low-humidity air after sunset cools 3 to 5 F per hour. The atm window is where the energy is radiating through, and this includes CO2 feedback. It will do this all night, until it gets near the dew point; then it slows to between 0.5 and 1 F per hour of cooling. All clear sky, no wind, all different times of year.
        So if it has an extra 1F from CO2, it just cools at the high cooling rate a little longer until it nears the dew point temperature, then it slows down.

        This process also dries the air.

      • “Actually, per capita energy use peaked around 1978 in the US, there was a second drop around 2000.”

        Barely: in 2001 it peaked at only 4% lower. No significant change.

        Since then it’s declined by about 15%.

      • Burl:

        What is your source for global SO2 emissions?

        Yes, of course SO2 causes some cooling. That doesn’t mean CO2 doesn’t cause warming.

      • David:

        Anthropogenic Sulfur Dioxide Emissions: 1850-2005 Smith, S.J., et al (2011). http://www.atmos-chem-phys.net/11-1101/2011/acp-11-1101.pdf

        and

        The Last Decade of Global Anthropogenic Sulfur Dioxide: 2000-2011 emissions, Z. Klimont, et al (2013). See Table S-1
        http://iopscience.iop.org/1748-9326/8/1/014003

        (you might have to Google the titles of the papers)

      • “Day to day change in watts/m^2, divided by the day to day 30 day running average change in temp.”

        Well that ignores lots of complicating factors (like clouds), and you’re not calculating at all what you claimed, which was climate’s sensitivity to solar irradiance changes. Which explains why your number is low by a factor of at least 10.

      • which was climate’s sensitivity to solar irradiance changes

        How is that not what it is?
        On 2 different scales, I divide the accumulated watts from the Sun today by the change in today’s min to today’s max temperature.
        And I also plot the day to day change in energy as the length of day changes, directly varying both how much energy the Earth receives during the day, and how long it gets to cool at night, by the day to day change in temperature. I do this for both Min and Max temp.

        To me, that seems like the experiment everyone wishes they had to study the Sun’s forcing.

      • Burl, neither of your SO2 links work.

        Obvious question: What do analyses of SO2 emissions vs temperature show for China and India?

      • David:

        Sorry about the links. They did work, but sometimes they get changed. Which is why I said that you might have to Google the titles of the papers.

      • Burl wrote:
        “Let me enlighten you without using any numbers which you are either unable or unwilling to consider.”

        Still only more words.

        What you need to prove is causality.

        And to explain why CO2 doesn’t cause warming, regardless of what SO2 does. The theoretical and observational basis for CO2 warming is very well established.

      • And to explain why CO2 doens’t cause warming

        I did yesterday.

      • David Appell:

        In an earlier blog you had stated that “climate science, like astronomy, is an observational science”, which is to a large part true. The role, then, of the climate scientist is, roughly, to determine what the observations mean, and to develop models with high predictive ability.

        Now, I have made an observation, which is easily verifiable, that all business recessions and depressions are periods with temporary increases in average global temperatures.

        This is easily explained by my “model”, whereby the reduction in SO2 aerosol emissions that occurs due to reduced industrial activity during a business slowdown results in cleaner air and greater insolation.

        If you cannot explain these observations with your greenhouse gas model, then, I would submit, it is fatally flawed and not worthy of further consideration.

      • micro wrote:
        “On 2 different scales, I divide the accumulated watts from the Sun today by the change in today’s min to today’s max temperature.”

        What are you using for “accumulated watts from the Sun today?”

      • Since I’m going back prior to this point, I average this tsi*, and use that as input to a flat-surface solar calculation based on altitude and latitude (the station coordinates), computed every hour, 24×7, from 1940 to present, for every station.

        * = http://www.pmodwrc.ch/pmod.php?topic=tsi/composite/SolarConstant
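
        A minimal sketch of that kind of calculation: clear-sky insolation on a horizontal surface from TSI, latitude, day of year and hour, using only solar geometry. The constant transmittance here is an illustrative placeholder, not the method described above.

        import math

        def declination(day_of_year: int) -> float:
            """Approximate solar declination (radians)."""
            return math.radians(-23.44) * math.cos(2.0 * math.pi * (day_of_year + 10) / 365.0)

        def insolation(tsi: float, lat_deg: float, day_of_year: int, hour: float,
                       transmittance: float = 0.75) -> float:
            """Clear-sky W/m2 on a horizontal surface at local solar `hour`."""
            lat = math.radians(lat_deg)
            dec = declination(day_of_year)
            ha = math.radians(15.0 * (hour - 12.0))   # hour angle
            cos_zen = (math.sin(lat) * math.sin(dec)
                       + math.cos(lat) * math.cos(dec) * math.cos(ha))
            return max(0.0, tsi * transmittance * cos_zen)

        print(insolation(1361.0, lat_deg=40.0, day_of_year=172, hour=12.0))  # ~980 W/m2 at June noon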

      • micro wrote:
        “Since I’m going back prior to this point, I average this tsi* => PMOD”

        That’s what I suspected, and that’s ridiculous. Has it occurred to you that clouds affect how much sunlight reaches the ground? That temperatures have a lot of autocorrelation? That they can be affected by storms, winds and humidity?

        Your calculation is junk. Which is why it gets the wrong answer.

      • That’s what I suspected, and that’s ridiculous.

        I just don’t know how to address such narrow mindedness, I’ll leave it at that.

      • TE wrote:
        “Global average temperature is a calculable but meaningless number.”

        Sure. Because the world today is the same as 25,000 yrs ago, when average global temperature was 8 C lower. Or the same as during the PETM.

        {eye roll}

      • micro wrote:
        “I just don’t know how to address such narrow mindedness, I’ll leave it at that.”

        Talk about taking the easy way out. Avoid all relevant criticism, and just declare yourself above it all when you have no real response.

      • Did you want to have a conversation? It wasn’t evident from your comment.

      • PA wrote:
        “4. Further, CO2 has about a 16 year lifetime in the atmosphere (from nuclear test C14 studies).”

        Raw ignorance.

        “The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate,” David Archer (University of Chicago), 2008.
        http://press.princeton.edu/titles/10727.html

      • David Appell wrote:
        “4. Further, CO2 has about a 16 year lifetime in the atmosphere (from nuclear test C14 studies).”

        Raw ignorance.

        “The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate,” David Archer (University of Chicago), 2008.
        http://press.princeton.edu/titles/10727.html

        Cooked and adjusted ignorance.

        The Himalayas reject more energy than the increase in CO2 retains. These “we are going to cook like grilled chicken” claims are just deluded.

        Until the Himalayas are sheared off and the Antarctic moved away from the pole it is going to stay cool.

      • PA wrote:
        “These “we are going to cook like grilled chicken” claims are just deluded.”

        Typical — make up a false claim, and then blame scientists for something they have never said. Typical.

      • Burl: And India and China?

      • “and China and India?”.

        Their high levels of SO2 emission from their coal-fired power plants, etc. were, for a while, offsetting the cleansing actions in the West, resulting in a slowdown in the rise of average global temperatures (The Hiatus).

        As they have begun to clean up their air, with nuclear plants, etc. –as they have every right to do–their decreased SO2 emissions will cause ever increasing temperatures–probably why 2016 temperatures are not falling, even though ENSO warming is decreasing.

      • David Appell wrote:

        Typical — make up a false claim, and then blame scientists for something they have never said. Typical.

        The great ice sheets in Antarctica and Greenland may take more than a century to melt, and the overall change in sea level will be one hundred times what is forecast for 2100.

        Another disastermonger. He is either ignorant or lying. I’m not familiar with his work so I won’t speculate.

        The moment someone makes a statement like “The great ice sheets in Antarctica and Greenland may take more than a century to melt” this marks their view as tainted naked advocacy.

        Given that Antarctica is gaining ice, this fellow has problems with his “real world interface” and is accidentally or deliberately misinterpreting the data he is getting.

      • Burl Henry wrote:
        ““and China and India?”.
        “Their high levels of SO2 emission from their coal-fired power plants, etc. were, for a while, offsetting the cleansing actions in the West, resulting in a slowdown in the rise of average global temperatures (The Hiatus).”

        a) Karl et al Science 2015 showed there was no hiatus
        b) The countries, Burl: you think you showed that US SO2 emissions held down US temperatures. Where is similar proof for China and for India?

        “As they have begun to clean up their air, with nuclear plants, etc. –as they have every right to do–their decreased SO2 emissions will cause ever increasing temperatures–probably why 2016 temperatures are not falling, even though ENSO warming is decreasing.”

        Were China’s and India’s temperatures increasing when their SO2 emissions were increasing?

      • PA: Typical — just throw down some words, with no indication of where they came from and no citation.

        If you think I’m going to take you at your word, you’re wrong.

      • David Appell | September 20, 2016 at 12:36 am |
        PA: Typical — just throw down some words, with no indication of where they came from and no citation.

        If you think I’m going to take you at your word, you’re wrong.

        That the Antarctic is gaining ice mass is old news and not really debatable at this point.

        “Mass gains of the Antarctic ice sheet exceed losses”
        http://www.ingentaconnect.com/content/igsoc/jog/2015/00000061/00000230/art00001
        https://wattsupwiththat.files.wordpress.com/2015/10/antarctica-ice-map.jpg

        The study is about 16 pages and examines all the factors in ice measurement. He concludes that the Antarctic is gaining about 82 Billion tons of ice per year.

        Who is this Zwally character?
        Dr. H. Jay Zwally is Chief Cryospheric Scientist at NASA’s Goddard Space Flight Center and Project Scientist for the Ice Cloud and Land Elevation Satellite (ICESat).

        http://tycho.usno.navy.mil/lod.1973-may2015.jpg

        The length of day should be increasing 0.17-0.23 ms per decade. The length of day should be about 0.74 milliseconds longer today than 1979. It isn’t.

        Between Zwally and the LOD studies (length of day/Munk’s Enigma) it is pretty conclusive that Antarctica isn’t melting.
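
        A quick arithmetic check of that figure, using only the numbers quoted above:

        lo, hi = 0.17 * 3.6, 0.23 * 3.6    # ms/decade over 1979-2015 (~3.6 decades)
        print(f"{lo:.2f} to {hi:.2f} ms")  # 0.61 to 0.83 ms, bracketing the 0.74 ms quoted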

    • cptdallas: the climate doesn’t care about how many people live in your country or on your planet. It reacts only about how much CO2 is emitted.

      • the climate doesn’t care about how many people live in your country or on your planet. It reacts only about how much CO2 is emitted.

        This is incorrect because:

        1. RF is determined by how much CO2 is present, not emitted.

        2. Climate ( the general circulation and distribution of weather ) depends largely on the gradients of radiance, not the global average of radiance. Climate doesn’t correspond to global average temperature.

        That’s why hysterics freak out about a change but can’t actually identify verifiable harm.

      • [T]he climate doesn’t care about how many people live in your country or on your planet. It reacts only about how much CO2 is emitted.

        It doesn’t care. Not about anything.

        As for emissions reduction, the solution is well in hand, and progressing nicely. As it has been since the ’70’s.

        You’re like the “skeptic” who, the day after the farmer planted his seed, says “I don’t see any crops. Produce some crops or I’ll cut your head off!”

      • TE, you had two wrong statements out of two there.
        1. The future climate cares very much how much CO2 is emitted.
        2. Global temperature is a measure of climate state (see the Ice Ages when they were that much colder). Climate is a global thing in this context, and temperature measures where we are on the scale of Ice Age to iceless hothouse, the range between these being only about 10 degrees C, and we are somewhere near the middle currently, but on a course for the top.

      • Turbulent Eddie:
        “1. RF is determined by how much CO2 is present, not emitted.”

        Obviously, but the average airborne fraction has remained close to 0.5. (Let’s hope that continues — scientists don’t know if it will.)

        “2. Climate ( the general circulation and distribution of weather ) depends largely on the gradients of radiance”

        Where did you ever get that notion?

      • “2. Climate ( the general circulation and distribution of weather ) depends largely on the gradients of radiance”

        Where did you ever get that notion?

        It must have been some of those silly text books.

      • “It must have been some of those silly text books.”

        So this isn’t an idea you’re able to defend here?

      • Dav – The mistaken perception that lots of folks have that CO2 has a significant effect on climate probably exists because they have no concept of the atmosphere at the molecule level (as described by the well known and demonstrated Kinetic Theory of Gases) and are unaware that at sea level it is about 30,000 times more likely that the energy of a photon absorbed by a CO2 molecule will be conducted to surrounding molecules (i.e. thermalized) than the CO2 molecule will emit a photon.

      • 1. The future climate cares very much how much CO2 is emitted.
        It is the accumulation of CO2 that imposes radiative forcing. Now, the uptake rate is less than 100%, so yes, emissions lead to accumulation and this is a quibble, but it is not the emission per se.

        2. Global temperature is a measure of climate state (see the Ice Ages when they were that much colder). Climate is a global thing in this context, and temperature measures where we are on the scale of Ice Age to iceless hothouse, the range between these being only about 10 degrees C, and we are somewhere near the middle currently, but on a course for the top.

        This is a widespread misconception that is worthy of debunking here.
        Hansen has spread this, so he bears some of the blame, and if he had an education in climate ( instead of Astronomy ) he might not have been so cavalier.

        Global average temperature did not cause the ice ages!

        In fact, on day one of the last glacial ( when there was no ice accumulation ), global net radiance was ‘normal’ and global average temperatures were above average.

        The difference was that net incoming solar radiance was lower, but only for the ice accumulation zone ( around 60N ). And net incoming solar radiance for the accumulation zone was lower in summer only.

        So, global incoming solar radiance and global average temperature were ‘normal’.

        Now, once ice began to accumulate, three factors led to both the continuance of ice and lower global average temperature.

        1. The orbit continued to impose lower summer sunshine over the ice ( though ice ages last through this into the next phase of increased summer sunshine because of factors 2 & 3 below ).

        2. The albedo of the area of ice accumulation was higher. This protected the ice, because less solar energy was absorbed, and it also raised the global albedo, reducing global temperature.

        3. The elevation of the ice meant the surface had lower temperatures ( temperatures fall with height by the lapse rate ). This meant ice could survive summers at high elevation where at lower elevations, melting would take place. That’s why Greenland’s ice persisted through the Eemian and the HCO.

        So, again, Global average temperature did not cause the ice ages!

        It is accurate, instead, to say, The ice ages caused global cooling.

      • “Climate ( the general circulation and distribution of weather ) depends largely on the gradients of radiance”

        This is a fairly basic concept of Atmospheric Science which you will find in any introductory Meteorology textbook.

        But you might consume this NASA assessment:

        “Averaged over the year, there is a net energy surplus at the equator and a net energy deficit at the poles. This equator-versus-pole energy imbalance is the fundamental driver of atmospheric and oceanic circulation.”

      • Eddie: Sure. But that’s not what’s causing climate change — for which, yes, temperature is an important measure, especially to surface-based life forms.

      • Eddie: Sure. But that’s not what’s causing climate change — for which, yes, temperature is an important measure, especially to surface-based life forms.

        Perhaps, and when the evidence is that biomass has increased with increasing temperature, it makes global warming panic look even more silly and ridiculous.

        http://www.nasa.gov/sites/default/files/thumbnails/image/change_in_leaf_area.jpg

      • TE, whoever said global average temperature causes climate change? It defines climate change. Your temperature didn’t cause your fever, but it defines it. When you said “climate doesn’t correspond to global average temperature”, yes, it does. That is where you were wrong. Temperature is one of the primary diagnostics, if not the diagnostic, of the global climate state.

      • “Perhaps and when the evidence is that biomass has increased with increasing temperature, it makes global warming panic even more silly and ridiculous.”

        Is civilization on Earth built for plants or for people?

        More biomass = more weeds => more insects => positive feedback on global warming. Decreased crop nutrition. Increased temperature causing decreased crop yields.

        Let’s ask someone with a major stake in the outcome:

        General Mills CEO Ken Powell told the Associated Press:
        “We think that human-caused greenhouse gas causes climate change and climate volatility, and that’s going to stress the agricultural supply chain, which is very important to us.”
        8/30/15
        http://www.chicagotribune.com/business/ct-general-mills-greenhouse-gas-cuts-20150830-story.html

      • Jim D,

        TE, whoever said global average temperature causes climate change? It defines climate change.

        I’m fond (as are others) of pointing out that temperatures at the LGM were 4-6 degrees cooler than pre-industrial, and thus a 2-3 C rise by the end of the century represents about half an Ice Age in the warming direction (at 20-30 times the rate). Last time I did that, TE whacked me with ice sheet retreat.

        Why CO2 forcing should be any less plausible (or predictable) than orbital forcing, I’ll never understand. “CO2 lags, not leads,” maybe.

      • TE, whoever said global average temperature causes climate change? It defines climate change.

        No, that’s wrong too.

        Global average temperature is about 5C greater during NH summer than NH winter ( much more than the measly 1.5C of AGW ). But this has nothing to do with what’s happening.

        Global average temperature is a calculable but meaningless number.

    • “As for emissions reduction, the solution is well in hand, and progressing nicely. As it has been since the ’70’s.”

      No “solution” is “well in hand,” especially as long as world population keeps increasing. Even if world annual CO2 emissions have peaked at about 40 Gt/yr, that’s a warming rate of about 0.15 C/decade, which is much too high.

      We have to get to net human emissions of essentially zero, and we need to do it in just a few decades, before long-term positive feedbacks get too strong.
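
      A rough cross-check of that rate, using the transient response to cumulative emissions (TCRE, roughly 0.8–2.5 C per 1000 GtC in AR5): 40 GtCO2/yr is about 11 GtC/yr, and at ~1.6 C per 1000 GtC that gives about 0.018 C/yr, on the order of 0.15–0.2 C per decade.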

      • No “solution” is “well in hand,” especially as long as world population keeps increasing.

        Population decline is already baked into the cake.

        Population is already falling in Germany, Japan, Russia, Spain, and Poland. Working-age population is falling in China.

        And every year, the list of nations with falling population grows.

        Global population excluding Africa is very close to falling.

        The ratio of working age population ( 15 to 65 ) to dependent ages ( less than 15 and greater than 65 ) is already falling worldwide.

        Children consume only what their parents give them and old farts on fixed income tend to consume less. There is great deflation of global demand for everything, including fossil fuels.

      • We have to get to net human emissions of essentially zero, and we need to do it in just a few decades, before long-term positive feedbacks get too strong.

        Not to worry.

        The solution is well in hand, and progressing nicely.

      • “Population decline is already baked into the cake.”

        Except population isn’t declining. Its rate of increase is positive. Its second derivative is (slowly) negative. People think it may top out at 10B or so, with billions more (than today) seeking a western lifestyle.

        https://fred.stlouisfed.org/graph/graph-landing.php?g=7ecJ&width=670&height=475

      • “We have to get to net human emissions of essentially zero, and we need to do it in just a few decades, before long-term positive feedbacks get too strong.”

        No we don’t.

        Warmunists just make this stuff up. They make phony scare claims with false deadlines to frighten the more herd-like humans into starving themselves of food and energy by reducing CO2.

        1. Warmunists can’t demonstrate any credible harm from warming; their projected 3°C increase is pure speculation.
        2. Warmunists can’t demonstrate anything more than pathetic CO2 forcing (about 64% of predicted CO2 direct forcing and certainly not the forcing needed for 3°C).
        3. Warmunists can’t demonstrate a likely CO2 future level that comes close to 2X. 460 PPM seems to be a likely peak.
        4. Further, CO2 has about a 16 year lifetime in the atmosphere (from nuclear test C14 studies).

        The producer estimates of peak CO2 are 460 PPM.

        Pre-2000, the “marginal propensity to accumulate” of CO2 in the atmosphere was 58-59%: for every 10 CO2 molecules emitted by man, roughly 6 stayed in the atmosphere.

        The marginal propensity to accumulate since 2000 is about 28%, and it is going down. Barely 1 in 4 of the CO2 molecules emitted above the 1998 emission level is staying in the atmosphere.

        In a couple of decades the CO2 level will stop rising if we do absolutely nothing.
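
        The standard “airborne fraction” bookkeeping that this resembles can be sketched in a few lines; the numbers below are illustrative, not PA’s marginal variant:

          # Airborne fraction: share of emitted CO2 remaining in the atmosphere.
          # 1 ppm of atmospheric CO2 is roughly 2.13 GtC, i.e. about 7.8 GtCO2.
          PPM_TO_GTCO2 = 7.8

          def airborne_fraction(delta_ppm_per_yr, emissions_gtco2_per_yr):
              """Fraction of a year's emissions that stays in the atmosphere."""
              return delta_ppm_per_yr * PPM_TO_GTCO2 / emissions_gtco2_per_yr

          # e.g. an observed rise of ~2.3 ppm/yr against ~40 GtCO2/yr emitted:
          print(airborne_fraction(2.3, 40.0))  # ~0.45, i.e. roughly 45%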

      • PA wrote:
        “1. Warmunists can’t demonstrate any credible harm from warming and their speculation for a 3°C increase.is pure speculation.”

        Clearly you don’t read much.

        So I don’t feel obligated to counter your assertions. Without caring about the evidence, there isn’t anything — anything — that would matter to you anyway.

      • “…before long-term positive feedbacks get too strong.”
        When I first read Hansen 1984, I didn’t understand why the positive feedback would stop. Here is how it looks:
        https://www.skepticalscience.com/positive-feedback-runaway-warming-advanced.htm
        Something like the T⁴ term from the blackbody radiation equation is always there. We might say sensitivity is agile, but T⁴ is massive. Fighting a battleship with a biplane. Yes, I read about the Bismarck.
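
        The stabilizer is the Planck response: outgoing flux goes as σT⁴, so its slope is 4σT³, roughly 3.8 W/m² per kelvin at the 255 K effective emission temperature, and it only grows as temperature rises. A runaway requires the feedbacks to outweigh that, which is a high bar.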

  69. Global climate models and the laws of physics

    Yes, there’s physics.
    And equally important, biology
    And then there’s this:

    “There are no inductive inferences”
    Karl Popper

  70. “…a generic value of 2.E-10 m for all molecules, which is good enough in terms of calculating the diffusivity as long as molecule is not too big.”

    In the invisible world described to us with the laws of physics, at what point does a big molecule become too fat?

  71. Dan Hughes (cc Mosher, Appell)

    The very interesting youtube noted by Mosher above
    https://www.youtube.com/watch?v=vIiW6ugLHL4
    discusses Professor Easterbrook’s understanding of GCM code from interviewing the programmers at the Hadley Center.

    There are many fascinating tidbits in the video.

    At 8:30 Easterbrook says that “quality” does not mean that the code simulates the real world; “quality” means that the code permits asking interesting questions — it is a good tool for checking the programmers’ understanding of how the processes work.

    This is quite profound, since it suggests that the models are being run for the benefit of the modelers, not the science underlying the processes. If so, it is a logical trap that can lead to scientific dead ends.

    At 12:10 Easterbrook says that “properties of greenhouse gasses were all known in the 1800’s”.

    My history is weak, but I believe that it was not until the Einstein radiation laws were published in 1916 that it was possible to model nonequilibrium radiative transfer. The pdf from Appell
    http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf
    references HITRAN, which does use the Einstein coefficients but I don’t know how the pieces fit together.

    Since the atmosphere is never in equilibrium, it is hard to see how non-quantum physics could be accurate, though it might be useful.

    • “My history is weak”
      Yes. The properties of greenhouse gases were all known in the 1800’s. Here is Tyndall’s Bakerian lecture from 1861, in which he describes the properties of many gases in absorbing “radiant heat”. He specifically showed a huge difference between the properties of ordinary air and air that has had CO2 and H2O removed. Among the many gases for which he gave detailed tables were “carbonic oxide” (CO), “carbonic acid” (CO2) and “olefiant gas” (C2H4).

      https://s3-us-west-1.amazonaws.com/www.moyhu.org/2016/09/tyndall.png

      • Nick Stokes,

        Indeed. Even your very short quote refers to “interception of the terrestrial rays . . . ”

        If you read Tyndall’s “Heat a Mode of Motion”, 6th Ed. 1905, Appleton & Co., you will note that Tyndall found that the more opaque to “heat rays” the gas (or other medium) was, the less electrical current was generated by his thermopile, i.e. its temperature dropped below reference.

        Maximum reduction in electrical current occurred when the rays were blanked out by a metal plate, i.e. total opacity.

        The greater the heat falling on the thermopile, the greater the electrical current generated, and the greater the deflection of the galvanometer in the positive direction.

        Anybody attempting to make a “GHE effect” meal from Tyndall’s work will end up supping on meagre gruel indeed.

        In other words, you’re wrong if you think Tyndall’s experiments support the greenhouse effect. If you care to read Tyndall’s work you will see that I am correct.

        Nobody – including Tyndall – has ever managed to increase the temperature of anything by reducing the amount of energy reaching it.

        Cheers.

      • Mike F, so when Tyndall says these have an “important influence on climate”, you just say ‘no’, right? Who to believe?

      • “Anybody attempting to make a “GHE effect” meal from Tyndall’s work will end up supping on meagre gruel indeed.”
        Here’s the preceding paragraph:
        https://s3-us-west-1.amazonaws.com/www.moyhu.org/2016/09/tyndall2.png

      • Nick,

        I am aware that Tyndall originally speculated as to the existence of a GHE, just as Arrhenius did later, and Fourier did before.

        His observations at altitude, backed up by his rather brilliant and exhaustive experiments, revealed the truth – no additional heat as a result of the presence of heat absorbing media between the Sun and the surface. Rather, the opposite, supported by observations and experiments. In the Tyndall publication to which I referred earlier, Tyndall explains in detail why this is so, and provides illustrations to reinforce his words.

        One might just as well regard Tyndall’s fairly extensive and well reasoned arguments regarding the role of the ether in solubilities as proof of the existence of the ether.

        If you can’t be bothered reading Tyndall’s work, fair enough.

        Richard Feynman argued “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

        The GHE theory might well be seductive, even appear logical, but it has no experimental proof. No more than Tyndall’s explanation of how matter and light interact with the ether.

        The Earth has cooled since its creation. A rough estimate of its continuing heat loss (based on measurement) shows a cooling rate of between one and three millionths of a Kelvin per annum. The rate is asymptotic, and the Earth cannot proceed to 0 K, given the heat input from the Sun.

        No CO2 heating. No heating of the Earth. As much as you may wish it otherwise, a body such as the Earth – essentially a large ball of white hot rock surrounded by a near vacuum – can only cool, given its physical environment.

        Cheers.

      • “If you can’t be bothered reading Tyndall’s work, fair enough.”
        Not only do I read his works, but I link to them, and quote explicitly. You, on the other hand, point without quotation to a large book, unlinked.

        However, that book says the same:

        https://s3-us-west-1.amazonaws.com/www.moyhu.org/2016/09/tyndall3.png

      • Nick,

        You’re confused, I believe – confusing normal physics with GHE physics.

        Tyndall is of course entirely correct. As he has pointed out, the removal of H2O from the atmosphere would result in temperatures below zero at night, as is demonstrated even in the tropics, with arid deserts. Lack of water vapour, combined with sufficient sunlight, produces extremes of both heating during the day, and cooling at night.

        His experiments supported his thoughts.

        You’ll notice he mentions the Moon (no atmosphere to speak of) as being likely uninhabitable, due to the extremes of temperature resulting from the lack of not only water vapour, but atmosphere in general.

        I commend to anyone the passages following those which you have provided –

        “From a multitude of desultory observations I conclude that, at 7,400 feet, 125.7°, or 67° above the temperature of the air, is the average effect of the sun’s rays on a black bulb thermometer. . . . These results, though greatly above those obtained at Calcutta, are not much, if at all, above what may be observed on the plains of India. The effect is much increased by elevation. At 10,000 feet, in December, at 9 a.m., I saw the mercury mount to 132°, while the temperature of shaded snow hard by was 22°. At 13,100 feet, in January, at 9 a.m., it has stood at 98°, with a difference of 68.2°, and at 10 a.m. at 114°, with a difference of 81.4°, whilst the radiating thermometer on the snow had fallen at sunrise to 0.7°.”

        Reading the whole work is much better, in my view.

        Tyndall points out, time and time again, that reducing the amount of energy absorbing media between source and target – say at altitude – increases the available energy, resulting in higher temperatures, as noted above.

        Most GHE proponents don’t seem to understand physics nearly as well as Tyndall did, more than 100 years ago.

        Selective quotes are all well and good, but neither you, nor anyone else of reasonable intelligence can seriously argue with Tyndall’s experimental and observational results.

        If you have read the Tyndall reference (which you apparently found easily enough), as you say, you will be unable to find any Tyndall observation or experimental result which rebuts anything I have said.

        If you wish to present Tyndall’s ruminations, or quoting of others’ opinions as fact, you will need to be very selective. As I have pointed out previously, Tyndall’s belief in the ether, the meteoric origin of the Sun’s heat, and a few other things were beliefs or hypotheses – unsupported by fact.

        There remains precisely zero experimental verification of the GHE. The proponents of this hypothesis cannot even state it in such a fashion as to be demonstrated by scientific experiment.

        Cheers.

      • “I commend to anyone the passages following”

        In the quote you have given, he merely says that the incoming radiation is higher at altitude. He doesn’t say that it is due to IR absorption. In fact, he estimates that “four-tenths” of solar radiation is absorbed in the atmosphere. Modern estimates are slightly higher; Trenberth says that of 340 W/m² arriving at TOA, 161 W/m² reaches the ground. However, Tyndall had no access to the TOA.

      • Nick,

        Maybe you’re floundering a wee bit.

        If Tyndall estimated 40% of the Sun’s total energy doesn’t penetrate the atmosphere, it wasn’t a bad estimate.

        Now, the composition of sunlight is roughly –

        “In terms of energy, sunlight at Earth’s surface is around 52 to 55 percent infrared (above 700 nm), 42 to 43 percent visible (400 to 700 nm), and 3 to 5 percent ultraviolet (below 400 nm).”

        So if you claim that reducing the energy from the Sun reaching the surface by, say, 30% increases the surface temperature, then it might follow that by allowing more energy to reach the surface, the temperature would drop.

        The extreme example might be that by using a gas 100% opaque to sunlight, the temperature would be raised to the maximum possible.

        The proponents of the GHE are full of it, and I don’t mean knowledge of physics.

        On a more interesting note, if you look at Trenberth’s figures, you might ask yourself what happens to the energy that doesn’t reach the ground. It’s not good form to ask to be spoon fed, and then complain that you don’t like the spoon. Tyndall provides the answer. If you don’t like it, blame him.

        You can’t even provide a falsifiable hypothesis explaining the GHE, but you complain if anybody says the GHE, like the luminiferous ether, doesn’t exist.

        No heating due to CO2. Creating CO2 creates heat. Creating lots and lots of CO2 creates lots and lots of heat. Maybe you claim it’s non-measurable heat – I don’t know. Burl Henry explains that improving the transparency of the atmosphere increases the amount of energy reaching the surface, which, not surprisingly, results in increased temperature. Put a thermometer in the shade, then move it into the sunlight. The temperature increases. Put it back in the shade, the temperature drops. This happens whether you believe or not.

        Cheers.

        “So if you claim that reducing the energy from the Sun reaching the surface by, say, 30% increases the surface temperature”
        Well, I don’t. The reduction is mainly due to scattering and albedo. Very little is due to absorption by GHG, and almost none by CO2. That mainly happens in the far infrared of thermal emission. And that is where Tyndall was working too; he used mainly boiling water as source. Here is the spectrum:

        https://upload.wikimedia.org/wikipedia/commons/thumb/e/e7/Solar_spectrum_en.svg/800px-Solar_spectrum_en.svg.png

        Yes, there is power in the near IR, but the first GHG dip comes at about 1000 nm. Dips are not mainly responsible for the discrepancy between red and yellow.

        GHE denial is eccentric even among deniers.

      • Nick,

        No, that doesn’t work.

        You haven’t mentioned where the missing energy goes, and why.

        It doesn’t reach the ground. That’s the point. It doesn’t eventually reach the ground. It doesn’t get there at all, resulting in lower temperatures.

        If, say, 40% of the energy from the Sun doesn’t make it through the atmosphere, what parts of the spectrum are involved? At least some UV makes it through, hence UV resistant paints and plastics, and UV warnings. Most visible light makes it through, as photos of the surface from space show. What’s left?

        You might be able to tell me what part of the spectrum the 40% comes from, in the main. As Feynman points out, light is light, regardless of wavelength.

        And where might the missing energy go? It doesn’t reach the surface, and it can’t vanish. It can’t hide in the sea, as it never even reaches the surface!

        You can run, but you can’t hide. The inconvenient truth will seek you out.

        Twist and turn as much as you like, but without even a disprovable hypothesis, you’re espousing Cargo Cult Scientism, rather than science.

        Cheers.

      • Mike Flynn wrote:
        “Richard Feynman argued “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.””

        You also owe me a nickel.

      • Mike Flynn wrote:
        “There remains precisely zero experimental verification of the GHE.”

        Here’s for you to ignore yet again:

        “Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect,” R. Philipona et al, Geo Res Letters, v31 L03202 (2004)
        http://onlinelibrary.wiley.com/doi/10.1029/2003GL018765/abstract

        “Observational determination of surface radiative forcing by CO2 from 2000 to 2010,” D. R. Feldman et al, Nature 519, 339–343 (19 March 2015)
        http://www.nature.com/nature/journal/v519/n7543/full/nature14240.html

        BTW, you can’t do experiments on climate.

      • Here’s for you to ignore yet again

        The reason he does is that it doesn’t mean anything.

        Here is a better visual. Consider that Earth’s heat is trapped in a bucket, and that bucket looks like your spectrum showing the forcing from CO2.
        What you don’t get is that the bucket has a big fracking hole in it, and it doesn’t matter that there’s a bottom where the CO2 is; all the water runs out the hole.

      • David Appell:

        I have read your cited paper “Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect”.

        What a bunch of garbage!

        They did not even consider the warming that occurs whenever SO2 aerosol emissions are decreased – they just assumed that it was due to greenhouse gas emissions.

      • And, Mike Flynn, you’ve never shown that the surface temperature of the Earth can be explained without the greenhouse effect.

      • No, David, but I have.

        “What you don’t get is that the bucket has a big fracking hole in it,”

        I don’t understand what that’s supposed to mean.

        More energy is measured to be striking the ground, at CO2 emissions wavelengths. Yet magically that extra energy has no effect? So where does it go?

      • More energy is measured to be striking the ground, at CO2 emissions wavelengths. Yet magically that extra energy has no effect? So where does it go?

        It’s going out the optical hole to space from 8–14 µm. That hole is open, I think, all the time except when air temps are near the dew point temp, but the rest of the time almost half (the higher-energy half) of the energy of the surface radiates straight to space.

      • micro wrote:
        “It’s going out the optical hole to space from 8–14 µm.”

        Lots of it is not.

        http://www.giss.nasa.gov/research/briefs/schmidt_05/curve_s.gif

        “That hole is open, I think, all the time except when air temps are near the dew point temp, but the rest of the time almost half (the higher-energy half) of the energy of the surface radiates straight to space.”
        Well, it isn’t open when it’s cloudy. And it’s nowhere near half. Trenberth has it at 40 W/m2. It’s limited; the surface is at 288K, say. GHGs act on the rest, which is a lot.

      • Well, it isn’t open when it’s cloudy. And it’s nowhere near half. Trenberth has it at 40 W/m2. It’s limited; the surface is at 288K, say. GHGs act on the rest, which is a lot.

        Yes, it is blocked when it’s cloudy.
        And the bottom of the bucket (not the hole) still acts to hold water in.
        But it also manages to dump exactly the same energy out, as recorded coming in by surface records.

      • David Appell, that diagram of yours tells me the following two things:

        1. The big dip in emission (in the CO2 band) implies that virtually no radiation across the absorption band of CO2 is currently travelling directly from the Earth’s surface to space. The current level of atmospheric CO2 is absorbing it all. Whether more CO2 in the atmosphere (beyond what is there now) can heat the surface is not a given and will depend on how that extra CO2 causes the captured heat from the surface to be distributed in the atmosphere (i.e. what happens to the lapse rate).

        2. The bottom of the emission dip looks somewhat flat and its center has a spike. I would interpret this to mean that the emissions to space from atmospheric CO2 are happening in the tropopause with the center of the band emitting in the stratosphere. This implies that raising the ERL (effective radiating level) for CO2 (by, say, instantaneously doubling its atmospheric concentration everywhere) will not cause CO2 to radiate from a cooler altitude. It may in fact cause it to radiate from a warmer altitude, which might result in a cooling forcing at the Earth’s surface.

      • willb, increasing CO2 has the effect of widening that CO2 notch which reduces outgoing longwave radiation represented by the integrated blue area. The atmosphere has to warm to restore the balance.
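
        The textbook scale of that adjustment: a doubling of CO2 widens the notch by a forcing of roughly ΔF ≈ 3.7 W/m², and dividing by the Planck response 4σT³ ≈ 3.8 W/m² per K at the 255 K emission temperature gives a no-feedback warming of about 1 K; feedbacks then scale that up or down.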

      • Jim D, just because the atmosphere warms it doesn’t necessarily mean the Earth’s surface warms. If the extra heat in the atmosphere distributes itself so as to reduce the lapse rate, the surface temperature might remain unchanged.

      • willb, the lapse rate does reduce as a result of the warming and water vapor feedback, but that isn’t enough to cancel the warming that caused it.

      • Jim D, how can you possibly know that? A small change in lapse rate will keep the surface temperature unchanged.

      • The lapse rate won’t change unless the surface temperature changes. That’s the way feedbacks work in the troposphere.

      • Jim D, are you contending that changing the thermodynamic characteristics of the atmosphere by doubling its CO2 concentration will not by itself change the lapse rate? To paraphrase Manabe and Wetherald: The basic shortcoming of your argument is that it is based upon the heat balance only of the earth’s surface, instead of that of the atmosphere as a whole.

        If CO2 doubles, this will result in an imbalance in incoming and outgoing radiative flux at the TOA. The imbalance is caused by the layer of atmosphere between the former ERL and the new ERL now absorbing all IR radiation in the CO2 absorption band. To restore the balance, it seems reasonable to assume the first thing that’s going to happen is that this atmospheric layer is going to warm. That warming of an upper layer of atmosphere is going to reduce the lapse rate.

        I also think it’s a bit of a cheat to calculate a no-feedback sensitivity to a doubling of CO2 by holding everything constant except surface temperature. Of course your calculation will show surface temperature increasing under those constraints. But if lapse rate is the first thing to change from the forcing caused by the radiative imbalance, then I think it makes more sense to calculate a no feedback result by holding everything constant, including surface temperature, and just let the lapse rate vary. Then see what lapse rate is required to restore the balance. And then speculate from that condition about what feedback forcings might occur to increase the lapse rate and warm the surface.

      • willb01 commented:
        David Appell, that diagram of yours tells me the following two things:
        “1. The big dip in emission (in the CO2 band) implies that virtually no radiation across the absorption band of CO2 is currently travelling directly from the Earth’s surface to space.”

        Yes, that’s correct. The IR emitted by the surface, if at CO2’s absorption bands, is captured in the first meter, I think. The IR that escapes to space are reemissions from the CO2 in the atmosphere.

      • willb, the atmosphere cannot warm by itself, especially not the troposphere. The tropospheric temperature is guided by the surface temperature. It is defined as the layer in convective balance with the surface. It won’t warm until the surface does, and then the lapse rate determines how the troposphere warms. Adding CO2 reduces the heat loss rate at the surface, and that warms the surface first, followed closely by the troposphere.

      • Adding CO2 reduces the heat loss rate at the surface,

        Not in any measurable way.

      • Observations measure

        I figured out warmists’ problem: they do not know what an observation is.

      • I figured out what a denialist is. They don’t like what the observations support.

      • I figured out what a denialist is. They don’t like what the observations support.

        That’ll be all of you too; all I use are observations, including 130 million surface station records.

      • Decadal averages are what matter, and trends over maybe six decades, like I showed.

      • Jim D, do you accept that, for a CO2 doubling:

        1. The resulting radiative imbalance would occur at the TOA, and
        2. The troposphere is already opaque to IR radiation from the surface in the CO2 absorption band?

        If so, then the imbalance at the TOA has to be corrected by first warming the upper layer of the atmosphere. The surface is not going to warm first because initially there is no imbalance there.

      • There is an imbalance both at the surface and top. The troposphere itself has a temperature governed by the surface temperature and a lapse rate that depends on the physics of convection. The only degree of freedom in a bulk sense is the surface temperature, and that is what changes, and it continues to change until the imbalance at the top is removed.

      • and it continues to change until the imbalance at the top is removed

        Then it’s never removed, because it is not in balance.
        In the summer in the northern hemisphere there’s about 15 hrs of day and 9 hrs of night, and while the southern hemisphere is doing the opposite, it has about twice as much ocean. Balance doesn’t happen except over longer-term averages.

      • Even if you average all that out with a decadal average, it is not in balance, and that is the part that matters for climate.

      • Even if you average all that out with a decadal average, it is not in balance, and that is the part that matters for climate.

        How do you know? There’s no way to measure the TOA completely, with the necessary accuracy, for the necessary length of time.

      • The scientists give you these numbers, but you choose not to trust them, or maybe you don’t know them in the first place. Hard to tell.

      • The scientists give you these numbers,

        And I’m pretty sure they don’t have satellite coverage of the entire planet for 12+ months with the required uncertainty, to actually measure the balance.

      • They can measure it via the change rate of the ocean heat content.

      • They can measure it via the change rate of the ocean heat content

        Yeah, right. You’re basing one bad measurement on another bad measurement.

      • You can choose not to believe that too, but the trend there means an imbalance is present.
        https://upload.wikimedia.org/wikipedia/commons/thumb/5/5c/Ocean_Heat_Content_(2012).png/400px-Ocean_Heat_Content_(2012).png

        but the trend there means an imbalance is present.

        Maybe, or maybe not, but you don’t know for a fact that’s why it’s going up. I and many others conclude they are doing such a poor job of measuring ocean temperatures that we don’t know what the temp is truly doing, while team warming fully endorses the trend even while knowing anywhere from about 1/2 to 3/4 of the ocean has never had its temperature measured. Much the same as they do about surface measurement.
        Btw, I took a second tack on surface temperature: what I present is what the average of the stations present in an area measured. No one knows what the entire area is actually doing, and it’s only when a single area has many stations that we can know that area. But there are still many, many places where there aren’t any stations in a 1×1 degree cell, and a lot with only 1 or 2 stations in an entire 1×1 cell.

      • Like I say, you can choose not to believe the OHC rise over the last few decades, and you have to not believe it in order to preserve your preconception that CO2 is not important. I understand that.

      • you have to not believe it in order to preserve your preconception that CO2 is not important.

        Nonsense. The attribution of any ocean warming, if there is any, is recovery from the LIA, whose cause we really have no good understanding of.
        And I also never said CO2 wasn’t important; it just doesn’t likely affect temperature.

      • “recovery from the LIA” is a long discredited meme. The LIA was a continuation of a multi-millennial cooling trend, and what we have now is twenty times faster and in the opposite direction to the millennial cooling rate.

      • Jim D, that’s a rather simplistic view. Tropospheric temperature and lapse rate are affected by convection, conduction, radiation, mass transfer, phase changes, and probably other influences as well. I am trying to isolate the initial radiative influence due to a doubling of CO2. If an upper layer of the troposphere suddenly becomes opaque to IR radiation, it is going to become warmer if the troposphere below it is warmer and radiating in the IR band (which it will be). The Earth’s surface initially will have no effect on the temperature of the upper troposphere because all of its IR flux is already being absorbed.

      • willb, no layer becomes opaque. The whole troposphere lets heat through less easily (like added insulation). The heat, as always, originates mainly from surface solar warming, and it is that heat that escapes less easily. The surface warms first because that is where the heating is.

      • Jim D, perhaps the term ‘opaque’ is confusing. When I say the atmosphere is opaque to IR I mean that no IR radiation passes through the atmosphere without first being absorbed and re-emitted by the atmosphere itself.

        So I take it your view is that, for a doubling of CO2, despite the fact that the upper troposphere is suddenly going to be absorbing IR radiation from the tropospheric layer immediately below it, no warming will occur until the surface warms and that warmth propagates upward.

        Just out of curiosity, by what mechanism do you see the surface warming as a result of a doubling of CO2, prior to the atmosphere warming?

      • The heat mostly gets to the troposphere via the surface, so the troposphere is transmitting the surface heat back out ultimately radiating it to space by IR. If you increase the CO2, it increases its insulating effect, or decreases its transmission efficiency, and it backs up at the surface where it originates from solar heating. It is best understood in terms of knowing where the troposphere gets its heat from, the surface, and where it loses it, to space.

      • Burl wrote:
        “The IPCC diagram of radiative forcings has no component for warming due to the reduction of SO2 aerosol emissions–even though it is as large as all of the other forcings combined.”

        Burl, you still haven’t shown that the global SO2 aerosol forcing is positive.

      • David:
        You wrote: “you still haven’t shown that SO2 forcing is positive”.

        It is negative until it is removed; then it becomes positive.

        The 1991 volcanic eruptions spewed approx. 23 Megatonnes of sulfurous gases into the atmosphere (along with a lot of fine particulates, which settled out within weeks).

        As the SO2 aerosols circled the planet, average global temperatures fell by 0.55 deg. C., a negative forcing.

        Then, as the aerosols settled out, temperatures rose to pre-eruption levels, a positive forcing.

        Likewise, the reduction in anthropogenic SO2 emissions will be experienced as a positive forcing, causing surface warming.

      • micro6500 wrote:
        >> Adding CO2 reduces the heat loss rate at the surface,<<
        "Not in any measurable way."

        What experimental evidence gives this result?

      • What experimental evidence gives this result?

        50 million surface station readings.

      • micro wrote:
        >> What experimental evidence gives this result? <<
        "50 million surface station readings."

        How limp. You're just bailing on my question, which I meant seriously. It's clear you don't have any evidence for your claims and are just trying to bluff your way through.

        Typical, I've found.

      • No, I’ve been working with the 130-million-record Global Summary of the Day dataset for about 8 years now.

      • micro wrote:
        “all I use are observations, including 130 million surface station records.”

        There are 130M surface station records? Says who?

        It’s not enough to just list an alleged number for stations — you have to actually look at their data too. I haven’t seen the slightest evidence that you’ve done that, or are even capable of doing it.

        More bluffing.

      • micro6500 wrote:
        “Yeah, right. You’re basing one bad measurement on another bad measurement.”

        What exactly is wrong with the ocean measurements?

        Can’t wait for your expert views on this.

      • “No, I’ve been working with the 130-million-record Global Summary of the Day dataset for about 8 years now.”

        Sure you are. I’m sure.

      • Sure you are. I’m sure.

        https://micro6500blog.wordpress.com/author/micro6500/

        I’ve been the data migration expert for the largest PLM vendor for 19 years, and you have things from some of my customers.

      • Jim D chooses to believe we have robust data on OHC. We understand why he needs to.

      • Not only me, but Lewis, Curry, and Lindzen have acknowledged a positive imbalance. Some “skeptics” need to just keep up with the data.

      • have acknowledged a positive imbalance

        TOA is never in balance; on a year’s average it is, but at any one point it is not.

        Over that same year, doing a balance measurement at the surface with the station data we have, at the end of the year there is no imbalance, and there is slightly more cooling than warming. As this shows, annual cooling at the stations is an even match for the incoming energy.
        https://micro6500blog.files.wordpress.com/2016/09/corrected-rise-and-fall1.png
        This is a plot of the daily rising temp (Tmax – Tmin) and the following night’s falling temp (Tmax – tomorrow’s Tmin).
        Notice in 1996 daily rising temps were less than 17F; 3 years later it was over 20F, and it cooled the same 20-some degrees every night. At least by the end of the year there is no imbalance at the surface. 50 million station samples; these are stations that collected 365 days of samples per year exclusively.
        That’s with data that isolates multiple points on the surface to test. I’m not sure how you do that at the TOA so you catch all of its outgoing radiation 24×7 for a full year. Surface data is sampled for min and max; that makes doing this with surface data relatively easy. Sampling from orbit isn’t that straightforward.
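
        The bookkeeping behind that plot is simple to sketch, assuming a date-ordered daily series of (Tmin, Tmax) pairs (GSOD parsing omitted):

          # Daily warming vs. the following night's cooling, per the plot above.
          def rise_and_fall(days):
              """days: list of (tmin, tmax) tuples in date order.
              Returns (daily_rise, nightly_fall) lists, same units as input."""
              rises, falls = [], []
              for today, tomorrow in zip(days, days[1:]):
                  rises.append(today[1] - today[0])     # Tmax - Tmin: daytime warming
                  falls.append(today[1] - tomorrow[0])  # Tmax - next Tmin: overnight cooling
              return rises, falls

          # Illustrative values in deg F; the real input is the GSOD station records.
          days = [(55.0, 72.0), (54.0, 71.0), (56.0, 73.0)]
          rises, falls = rise_and_fall(days)
          print(sum(rises) / len(rises), sum(falls) / len(falls))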

      • micro, I mentioned somewhere above that the rising ocean heat content is a measure of the imbalance, and gave you a graph to show it, but you chose not to believe it.

      • stevenreincarnated

        Jim D, if a recovery from the LIA was discredited then you should have no problem pointing out where that happened.

      • steven, it is pure pseudoscience. No one has made a scientific case for it in a paper. That doesn’t stop it from making the rounds in the blogs. The paleoclimate temperature reconstructions show that the anomaly is now, not the LIA, and by a large factor.

      • stevenreincarnated

        So in other words it is because you say so. Thought so.

      • I have only found Akasofu as a proponent, and this was where it was debunked. No one seems to have tried to defend it after that.
        https://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/sep/23/climate-science-magical-thinking-debunked-by-science

      • stevenreincarnated

        I’ve shown you reconstructions of the AMOC that showed OHT going down into the LIA and back up into the modern warming period. You can’t show what caused it so you don’t know it isn’t a recovery. It may even have been caused by the volcanoes some claim to have caused the LIA and which we would have recovered from. You don’t know so no point in pretending you do.

      • OHT could increase simply because the ocean temperature is increasing. Since 2004 the AMOC has reduced its mass transport by 30% according to this recent item posted by Judith.
        http://e360.yale.edu/feature/will_climate_change_jam_the_global_ocean_conveyor_belt/3030/
        It is very difficult to say that OHT has risen when the AMOC has slowed this much, so now you have to find a new reason for the ongoing sea-ice loss.

      • stevenreincarnated

        You remember the reconstructions I showed you. Why must we start every conversation from scratch? Do you wake up in a new world every day? I don’t.

      • steven, so how does the last decade of increased sea-ice loss and 30% AMOC slowdown fit with what you keep promoting? I see it as destroying what little is left of your idea, but maybe you have a different interpretation.

      • stevenreincarnated

        Jim, of course I do. The OHC of the N Atlantic reached its maximum in 2007. 2012 had a big Arctic storm and 2016 had an El Nino. So you have your minimum in ice in 2007 from OHT and then you have weather. You are pinning your hopes of being right on nothing but noise.

      • steven, the global OHC is still rising, probably especially in the Arctic Ocean, explaining why the sea ice is trending down.

      • stevenreincarnated

        Jim, it’s basic physics. It takes heat to melt ice and the heat has to be where the ice is.

      • stevenreincarnated

        Jim, I must be tired. I just read what you actually said. So ocean heat transport goes down, the OHC of the N Atlantic drops like a rock, and you think the Arctic is increasing in OHC? OK, whatever delusion you prefer to keep as long as you wish is fine with me just don’t offer me any of what you are taking.

      • steven, yes, the temperature rise is concentrated right there. It is called polar amplification. Northern Russia and Canada also show a lot of warming surrounding the Arctic Ocean on all sides. Nothing special about the Atlantic except for that cool blob off Greenland that is possibly the only part of the world that is cooler now than it was a century ago, which is interesting because it reflects the AMOC slowdown.
        http://data.giss.nasa.gov/tmp/gistemp/NMAPS/tmp_GHCN_GISS_ERSSTv4_1200km_Anom112_2015_2015_1901_1910_100__180_90_0__2_/amaps.png

      • stevenreincarnated

        Jim, what happens to a climate model when you reduce the poleward ocean heat transport by 30%?

    • Yes 4K, I saw that at KenR’s and watched it. Very little real information, except that GCM modelers build code the way everyone else does who is solving really hard problems. The “science” of software engineering, which has always seemed a very soft science to me, can’t help much. My experience is that it can, however, retard progress, sometimes dramatically.

    • “This is quite profound, since it suggests that the models are being run for the benefit of the modelers…”

      ‘The computed numbers are not only processed like data but they look like data, and a study of them may be no more enlightening than a study of real meteorological observations.’
      Edward Lorenz.

    • Mike Flynn wrote:
      “You haven’t mentioned where the missing energy goes, and why.
      It doesn’t reach the ground. That’s the point. It doesn’t eventually reach the ground.”

      Yes, some of it reaches the ground, and that amount has been measured:

      “Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect,” R. Philipona et al, Geo Res Letters, v31 L03202 (2004)
      http://onlinelibrary.wiley.com/doi/10.1029/2003GL018765/abstract

      “Observational determination of surface radiative forcing by CO2 from 2000 to 2010,” D. R. Feldman et al, Nature 519, 339–343 (19 March 2015)
      http://www.nature.com/nature/journal/v519/n7543/full/nature14240.html

    • Burl wrote:
      “What a bunch of garbage!
      They did not even consider the warming that occurs whenever SO2 aerosol emissions are decreased – they just assumed that it was due to greenhouse gas emissions.”

      Burl, you are misunderstanding the paper.

      Their objective was to measure CO2’s radiative forcing, ONLY. So they attempted to control for all other sources, including aerosols.

      • David Appell:

        Nowhere in the paper could I find anything that remotely indicated that they considered the warming due to the removal of SO2 aerosols, which was happening due to Clean Air efforts at that time.

        And why would they? The IPCC diagram of radiative forcings has no component for warming due to the reduction of SO2 aerosol emissions–even though it is as large as all of the other forcings combined.

      • The IPCC does consider aerosols and they have been relatively flat for the last few decades compared to the rising GHGs.
        http://www.bishop-hill.net/display/ShowImage?imageUrl=/storage/RCP45.8.18adj.aciObs1.png?__SQUARESPACE_CACHEVERSION=1355923595859

      • Jim D
        You wrote: “The IPCC does consider aerosols and they have been relatively flat over the past few decades compared to rising greenhouse gases.”

        1. The IPCC only considers aerosols as negative forcings, completely ignoring the positive forcing that occurs when they are removed from the atmosphere.

        2. Between 1972 and 2011, SO2 emissions fell by 30 Megatonnes, and the warming caused by their removal perfectly matches the rise in Land-Ocean average global temperatures.

      • The IPCC considers the falling level of aerosols; you can see that in the graph I provided. You have to look closely, because there has been a large offsetting rise in areas like China and India.

      • Burl: The measuring sites were in rural Oklahoma and rural Alaska.

    • micro wrote:
      “But it also manages to dump exactly the same energy out, as recorded coming in by surface records.”

      According to what observations?

      • According to what observations?

        A measure of daily min-to-max temp, and max to the following morning’s min temp, from NCDC surface station global summary records.

      • micro6500 wrote:
        “A measure of daily min-to-max temp, and max to the following morning’s min temp, from NCDC surface station global summary records.”

        Good god. You’re measuring temperature, not energy fluxes.

      • Good god. You’re measuring temperature, not energy fluxes.

        First, they can be expressed in either manner. Second, it cools off as much as it warmed.

    • willb01 wrote:
      “The bottom of the emission dip looks somewhat flat and its center has a spike. I would interpret this to mean that the emissions to space from atmospheric CO2 are happening in the tropopause with the center of the band emitting in the stratosphere.”

      Why? What’s the evidence for this interpretation?

      • The evidence is the shape of the emission curve itself. My line of reasoning goes like this:

        1. The temperature at which each wavelength radiates to space can be derived from the TOA spectral flux curve (see the sketch after this list).

        2. For a constant lapse rate, altitude maps linearly onto temperature.

        3. I would expect the ERL for each wavelength in the CO2 absorption band to be proportional to the ability of CO2 to absorb at that wavelength: the greater the ability to absorb, the higher the ERL.

        4. Assuming a constant lapse rate and based on the curve of the CO2 absorption characteristic, I would expect a similar, but inverted, shape to the dip in the TOA spectral flux with the maximum dip occurring at the center of the CO2 absorption band.

        5. Since that is not the case, I’m concluding that the lapse rate isn’t constant over the range of ERLs associated with the CO2 absorption spectrum.

        6. Because the center of the dip is expected to have the highest ERL, I conclude that the lapse rate must have reversed and temperature must be increasing with altitude now (the stratosphere).

        7. The flatness of the curve elsewhere implies that the temperature is remaining somewhat constant despite a changing ERL (the tropopause).
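
        A minimal sketch of the inversion in step 1, turning an observed spectral radiance into a brightness temperature via Planck’s law (the radiance value below is illustrative, not read off the diagram):

          import math

          H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, light speed, Boltzmann

          def brightness_temperature(wavelength_m, radiance):
              """Temperature whose blackbody spectral radiance
              (W m^-2 sr^-1 m^-1) equals the observed radiance."""
              a = H * C / (wavelength_m * KB)
              b = 2.0 * H * C**2 / (wavelength_m**5 * radiance)
              return a / math.log(1.0 + b)

          # At 15 um (the CO2 band center) a low radiance maps to a cold emission level:
          print(brightness_temperature(15e-6, 2.0e6))  # ~219 K, stratosphere-like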

      • willb01: temperature is the result of integrating over all wavelengths. There aren’t individual temperatures for each wavelength.

        The rest of your assumptions strike me as handwaving hokum.

      • David Appell, I’m sorry you feel this way. Do you not accept the existence of a non-zero temperature lapse rate in the atmosphere? I’m sure you’ve seen pictures of the Earth from space, in which the Earth’s surface is clearly visible between the clouds. If a similar picture of the same view were taken from the same vantage point with an IR camera operating in the CO2 absorption band, the Earth’s surface would be completely obscured by the presence of CO2 in the atmosphere.

        Clearly not all wavelengths are radiating to space from the same altitude.

    • willb01 wrote:
      “Just out of curiosity, by what mechanism do you see the surface warming as a result of a doubling of CO2, prior to the atmosphere warming?”

      Increased downwelling atmospheric radiation that strikes the surface.

      • David Appell wrote:
        “Increased downwelling atmospheric radiation that strikes the surface.”

        If this were to happen ahead of the atmosphere actually warming, would this not just suck heat out of the atmosphere, causing it to cool?

      • willb01 wrote:
        “If this were to happen ahead of the atmosphere actually warming, would this not just suck heat out of the atmosphere, causing it to cool?”

        It happens as the amount of CO2 in the atmosphere increases, increasing the downwelling IR.

  72. Reblogged this on Climate Collections.

  73. Pingback: Weekly Climate and Energy News Roundup #241 | Watts Up With That?

  74. Nick Stokes said “The properties of greenhouse gases were all known in the 1800’s”.

    Now perhaps I am missing something.

    Was Tyndall aware that εₘ − εₙ = hν prior to the photoelectric effect (1905) or the Bohr model (1913)? If all this was known, why did Einstein publish on “Emission and Absorption of Radiation” (1916)?
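
    For scale, the quantum behind CO2’s 15 µm bending mode is E = hc/λ ≈ 1.3×10⁻²⁰ J, about 0.08 eV, exactly the sort of level spacing the relation above describes.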

    While a lot was known about the interaction of radiation and matter by 1900, I think the details of energy flow could not have been worked out until quantum theory was invented. And even if a viable continuum model existed for energy-matter interactions (phlogiston, anyone?), the statistics would have been all wrong.

    The question posed is this: do current models rely on 19th-century physics for their radiation transfer calculations? Nick Stokes and Prof. Easterbrook seem to be saying they do.

  75. Pingback: Global Climate Models Depart from Laws of Physics | The Drinking Water Advisor

  76. More on the Mass and Energy Conservation Laws of Physics. The Description of the NCAR Community Atmosphere Model (CAM 3.0) contains several sections about mass and energy conservation. I have listed them below, along with quotations for some of them and comments for a couple of those.

    Generally, the term ‘fixer’ is not well defined for me. I take it to mean that these are methods to correct lack of mass and energy conservation by the discrete approximations and associated numerical solution methods. That this is the case is indicated by this sentence from Section 3.3.7:

    “The finite-volume dynamical core as implemented in CAM and described here conserves the dry air and all other tracer mass exactly without a “mass fixer”.”

    So, it is possible to devise methods that inherently conserve mass and energy.
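
    A minimal sketch of why flux-form finite-volume updates conserve mass to round-off, under the simplest possible assumptions (1-D, periodic, constant wind, first-order upwind): every interface flux is subtracted from one cell and added to its neighbor, so the total cannot change.

      import numpy as np

      def advect_mass(rho, u, dx, dt):
          """One flux-form upwind step (u > 0, periodic); rho holds cell masses."""
          flux = (u * dt / dx) * rho            # mass leaving each cell to the right
          return rho - flux + np.roll(flux, 1)  # what leaves one cell enters the next

      rho = np.random.rand(100)
      rho_next = advect_mass(rho, u=1.0, dx=1.0, dt=0.5)
      print(rho.sum() - rho_next.sum())  # ~0 (round-off only): no “mass fixer” needed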

    Part 3 Dynamics; 3.1 Eulerian Dynamical Core; 3.1.5 Energy conservation: “We shall impose a requirement on the vertical finite differences of the model that they conserve the global integral of total energy in the absence of sources and sinks. We need to derive equations for kinetic and internal energy in order to impose this constraint.” [emphasis in the original]

    I’m not sure what sources and sinks are omitted. For me, sources and sinks of mass and energy would include all those at the interfaces between sub-systems; evaporation, condensation, melting, and solidification of water within subsystems; viscous dissipation; and even the volumetric radiative energy within sub-systems. Digging deeper into the code might clear this up.

    Nevertheless, it is clear that some aspects of mass and energy conservation are not included in the fixer.

    3.1.19 Mass fixers; 3.1.20 Energy fixer

    3.2 Semi-Lagrangian Dynamical Core; 3.2.12 Mass and energy fixers and statistics calculations: “The semi-Lagrangian dynamical core applies the same mass and energy fixers and statistical calculations as the Eulerian dynamical core.”

    3.3 Finite Volume Core; 3.3.5 A mass, momentum, and total energy conserving mapping algorithm; 3.3.6 Adjustment of specific humidity to conserve water; 3.3.7 Further discussion: “There are still aspects of the numerical formulation in the finite volume dynamical core that can be further improved. For example, the choice of the horizontal grid, the computational efficiency of the split-explicit time marching scheme, the choice of the various monotonicity constraints, and how the conservation of total energy is achieved.

    The impact of the non-linear diffusion associated with the monotonicity constraint is difficult to assess. All discrete schemes must address the problem of subgrid-scale mixing. The finite-volume algorithm contains a non-linear diffusion that mixes strongly when monotonicity principles are locally violated. However, the effect of nonlinear diffusion due to the imposed monotonicity constraint diminishes quickly as the resolution matches better to the spatial structure of the flow. In other numerical schemes, however, an explicit (and tunable) linear diffusion is often added to the equations to provide the subgrid-scale mixing as well as to smooth and/or stabilize the time marching.

    The finite-volume dynamical core as implemented in CAM and described here conserves the dry air and all other tracer mass exactly without a “mass fixer”. The vertical Lagrangian discretization and the associated remapping conserves the total energy exactly. The only remaining issue regarding conservation of the total energy is the horizontal discretization and the use of the “diffusive” transport scheme with monotonicity constraint. To compensate for the loss of total energy due to horizontal discretization, we apply a global fixer to add the loss in kinetic energy due to “diffusion” back to the thermodynamic equation so that the total energy is conserved. However, it should be noted that even without the “energy fixer” the loss in total energy (in flux unit) is found to be less than 2 (W/m2) with the 2 degrees resolution, and much smaller with higher resolution. In the future, we may consider using the total energy as a transported prognostic variable so that the total energy could be automatically conserved.” [my emphasis]

    Less than 2 W/m² is not very reassuring, since that value is somewhat large relative to the expected effect of increasing concentrations of CO2 in the atmosphere.
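
    The “energy fixer” idea itself is simple to sketch. The following is a toy version (uniform redistribution by air mass, fixed heat capacity), not CAM’s actual scheme:

      import numpy as np

      CP = 1004.5  # specific heat of dry air at constant pressure, J/(kg K)

      def energy_fixer(T, air_mass, energy_before, energy_after):
          """Spread the diagnosed total-energy deficit (J) uniformly over the
          air mass (kg) as a heating of the temperature field T (K)."""
          deficit = energy_before - energy_after      # J lost to the numerics
          return T + deficit / (CP * air_mass.sum())  # one uniform increment

      # Toy use: a 1e15 J step deficit over ~5e18 kg of air is a ~2e-7 K tweak.
      T = np.full((4, 8), 250.0)
      air_mass = np.full((4, 8), 5e18 / 32)
      T_fixed = energy_fixer(T, air_mass, 1.0e22, 1.0e22 - 1.0e15)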

    Additional information can be found in Some conservation issues for the dynamical cores of NWP and climate models, J. Thuburn (2008), Journal of Computational Physics, Volume 227, Issue 7, Pages 3715-3730; the abstract reads:

    Abstract The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured. [ my bold ]

    And here:

    Ulrike Burkhardt and Erich Becker (2006), A Consistent Diffusion–Dissipation Parameterization in the ECHAM Climate Model, Monthly Weather Review, Vol. 134, pp. 1194-1204.

    ABSTRACT The diffusion–dissipation parameterizations usually adopted in GCMs are not physically consistent. Horizontal momentum diffusion, applied in the form of a hyperdiffusion, does not conserve angular momentum and the associated dissipative heating is commonly ignored. Dissipative heating associated with vertical momentum diffusion is often included, but in a way that is inconsistent with the second law of thermodynamics.

    New, physically consistent, dissipative heating schemes due to horizontal diffusion (Becker) and vertical diffusion (Becker, and Boville and Bretherton) have been developed and tested. These schemes have now been implemented in 19- and 39-level versions of the ECHAM4 climate model. The new horizontal scheme requires the replacement of the hyperdiffusion with a ∇² (edh edit) scheme.

    Dissipation due to horizontal momentum diffusion is found to have maximum values in the upper troposphere/lower stratosphere in midlatitudes and in the winter hemispheric sponge layer, resulting in a warming of the area around the tropopause and of the polar vortex in Northern Hemispheric winter. Dissipation associated with vertical momentum diffusion is largest in the boundary layer. The change in parameterization acts to strengthen the vertical diffusion and therefore the associated dissipative heating. Dissipation due to vertical momentum diffusion has an indirect effect on the upper-tropospheric/stratospheric temperature field in northern winter, which is to cool and strengthen the northern polar vortex. The warming in the area of the tropopause resulting from the change in both dissipation parameterizations is quite similar in both model versions, whereas the response in the temperature of the northern polar vortex depends on the model version.

    Erich Becker (2003) Frictional Heating in Global Climate Models, Monthly Weather Review, Vol. 131, pp. 508-520.

    ABSTRACT A new finite-difference formulation of the frictional heating associated with atmospheric vertical momentum diffusion is proposed. It is derived from the requirement that, according to the no-slip condition, the sum of internal and kinetic energy of a fluid column is not changed by surface friction. The present form is designed to be implemented along with the hybrid-coordinate differencing scheme of Simmons and Burridge. The effects of incorporating frictional heating in general circulation models (GCMs) of the atmosphere are assessed by analyzing representative long-term January simulations performed with an idealized GCM. The model employs the proposed discretization of vertical terms as well as recently derived horizontal diffusion and dissipation forms. For the conventional definition of a GCM with no frictional heating, the climatological global energy budget yields a spurious thermal forcing of about 2 W m-2. In the equivalent new model definition, this shortcoming is reduced by two orders of magnitude. Moreover, the long-term global mean of the simulated frictional heating yields approximately 1.9 W m-2. This value is in agreement with both the residuum in the conventional case as well as with existing estimates of the net dissipation owing to synoptic and planetary waves.

    Erich Becker (2001), Symmetric Stress Tensor Formulation of Horizontal Momentum Diffusion in Global Models of Atmospheric Circulation, Journal of Atmospheric Sciences, Vol. 58, pp. 269-282.

    ABSTRACT In climate and weather forecast models, small-scale turbulence in the free atmosphere is usually parameterized by horizontal diffusion of horizontal momentum. This study proposes a formulation that is based on a symmetric stress tensor. The advantage over conventional methods is twofold. First, the Eulerian law of angular momentum conservation is fulfilled. Second, a self-consistent formulation of the momentum and thermodynamic equations of motion becomes possible due to incorporation of the local frictional heating rate, that is, the proper dissipation. The importance of these issues is demonstrated by numerical experiments performed with a simple general circulation model. For example, the new scheme precisely accounts for the irreversible increase of total potential energy during the decay of a baroclinic life cycle. Also the stress generated by horizontal momentum diffusion is found to be significant in the angular momentum budget of multiple life cycle experiments.

    There are also discussions at a few blogs from ca. 2008-2010 about accounting for viscous dissipation in GCMs; they are probably somewhat out-dated at this time.

    Corrections of incorrectos will be appreciated.

  77. I was thinking about this comment in GISS ModelE

    c**** shv is currently assumed to be zero to aid energy conservation in
    c**** the atmosphere.

    as to why and how this works. Apparently, somehow, accounting for the energy content of water vapor in the atmosphere makes it more difficult to achieve energy conservation. Because there are post facto approaches to forcing energy conservation, this is apparently an indication that the lack of conservation is significantly increased if the water vapor energy content is accounted for.

    However, this seems to me to raise another issue, as follows. The energy to evaporate liquid to vapor was taken from some material somewhere, so how does accounting for that energy while at the same time neglecting that energy after the vapor enters the atmosphere increase the lack of conservation? It seems counter-intuitive to me.
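    For a sense of the magnitudes involved, here is a rough Python illustration of how the vapor’s specific heat enters the bookkeeping (toy numbers of my own; the exact formulation inside ModelE is not given in the code comment, so this is one plausible reading, not the model’s actual treatment):

      cp_dry = 1004.0  # J/(kg K), specific heat of dry air
      shv    = 1870.0  # J/(kg K), specific heat of water vapor; ModelE sets this to 0
      q      = 0.01    # kg vapor per kg moist air, a typical lower-troposphere value

      cp_full = (1.0 - q) * cp_dry + q * shv  # vapor heat capacity included
      cp_shv0 = (1.0 - q) * cp_dry            # vapor heat capacity neglected

      # Energy difference for a 1 K change of 1 kg of moist air:
      print(cp_full - cp_shv0)  # ~18.7 J/(kg K), roughly 2 percent of cp_dry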

    • I was thinking about this comment in GISS ModelE
      c**** shv is currently assumed to be zero to aid energy conservation in
      c**** the atmosphere.
      as to why and how this works. Apparently, somehow, accounting for the energy content of water vapor in the atmosphere makes it more difficult to achieve energy conservation. Because there are post facto approaches to forcing energy conservation, this is apparently an indication that the lack of conservation is significantly increased if the water vapor energy content is accounted for.
      However, this seems to me to raise another issue, as follows. The energy to evaporate liquid to vapor was taken from some material somewhere, so how does accounting for that energy while at the same time neglecting that energy after the vapor enters the atmosphere increase the lack of conservation? It seems counter-intuitive to me.

      I will tell you what I remember reading, and it looks like it’s a feature in both GISS and CMIP models. IIRC there was a problem where not enough water vapor was getting created; I think it was being limited by relative humidity, so it would saturate and water would precipitate out at the air/water interface. Without this additional water vapor, it was hard to get the models to warm enough to match observations while increasing CO2 as measured. It doesn’t surprise me that it might also show up in the energy conservation calculations.
      It wasn’t until they fixed this issue that the models warmed up. They then warmed up too much, but aerosols and other knobs and switches were used to tune them to observations.

  78. Looks like Dan hit the nail on the head… Kudos!

    Have a great day and a better tomorrow!

  79. Willis Eschenbach

    David Appell | September 23, 2016 at 5:00 pm |

    Mike Flynn wrote:

    “There remains precisely zero experimental verification of the GHE.”

    Here’s for you to ignore yet again:

    “Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect,” R. Philipona et al, Geo Res Letters, v31 L03202 (2004)
    http://onlinelibrary.wiley.com/doi/10.1029/2003GL018765/abstract

    “Observational determination of surface radiative forcing by CO2 from 2000 to 2010,” D. R. Feldman et al, Nature 519, 339–343 (19 March 2015)
    http://www.nature.com/nature/journal/v519/n7543/full/nature14240.html

    David, I think you may have misunderstood Mike. I think he was looking for evidence that current or historical changes in CO2 caused corresponding changes in temperature.

    Unfortunately, neither of your references touches that question. Instead, they look solely at whether increasing CO2 increases the FORCING … but very few people dispute that it increases the forcing. The question is whether the small change in forcing from say doubling CO2 affects the global temperature. And for that claim, like Mike I’ve never seen any evidence. Part of the problem is that the average global downwelling radiation at the surface is about half a kilowatt per square metre … which means that doubling CO2 (with a change in forcing of 3.7 W/m2) will change the downwelling radiation by less than one lousy percent …
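    The “less than one lousy percent” figure is easy to check with the round numbers just quoted:

      downwelling = 500.0  # W/m^2, the ~half a kilowatt figure above
      delta_F     = 3.7    # W/m^2, forcing change for a doubling of CO2

      print(100.0 * delta_F / downwelling)  # ~0.74 percent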

    And that amount of change is easily counteracted by small changes in albedo and changes in the daily emergence time of tropical clouds and thunderstorms and dust devils and the like.

    Yes, certainly the greenhouse effect is the reason that temperatures are well above the Stefan-Boltzmann value … but that doesn’t mean that at equilibrium, small changes in forcing will change the temperature.
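    The “Stefan-Boltzmann value” here is the effective emission temperature of a planet with Earth’s albedo and solar input, which can be computed directly from standard textbook inputs; the roughly 33 K difference from the observed mean surface temperature is the greenhouse effect being referred to:

      sigma  = 5.67e-8  # W/(m^2 K^4), Stefan-Boltzmann constant
      S0     = 1361.0   # W/m^2, solar constant
      albedo = 0.3      # planetary albedo

      T_eff = (S0 * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
      print(T_eff)  # ~255 K, versus a mean surface temperature near 288 K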

    It’s akin to the question about solar effects. We’re clear that the sun is the reason that the earth isn’t frozen solid … but that does NOT mean that the small 11-year variations in solar output will affect the temperature.

    BTW, you can’t do experiments on climate.

    No, but you can look at natural experiments. For example, the earth has been generally warming over the last few centuries. The Berkeley Earth folks say the land has warmed about two degrees in the last two hundred years or so … but as far as I know, there have been exactly zero warming-related catastrophes in that time. To the contrary, the natural experiment clearly demonstrates that warming in general has been good for man and beast alike—longer growing seasons, less inclement weather, less ice, less illness and mortality, the benefits are many.

    So we actually have done the experiment to see if warming is dangerous, and the evidence says it isn’t … which makes zero difference to the alarmists near as I can tell.

    We’ve also seen an interesting natural experiment regarding downwelling radiative forcing. As I showed in Precipitable Water and Precipitable Water Redux, observed changes in the precipitable water since 1980 have produced an increase in downwelling radiation of between three and four W/m2, about the same as a doubling of CO2.

    So the question is, where is the predicted corresponding change in temperature from that forcing? The IPCC has been saying for years that such a forcing change would result in a 3° change in the temperature, but we’ve seen nothing even remotely resembling that change.
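    For reference, the 3° figure follows from the IPCC’s central equilibrium sensitivity of roughly 3 °C per CO2 doubling (3.7 W/m2), i.e. about 0.8 °C per W/m2, applied to the 3-4 W/m2 range above (a sketch; these are equilibrium values, so the transient response would be somewhat smaller):

      lam = 3.0 / 3.7          # °C per (W/m^2), implied by ~3 °C per doubling
      for dF in (3.0, 4.0):    # the 3-4 W/m^2 water-vapor figure quoted above
          print(dF, lam * dF)  # ~2.4 to ~3.2 °C expected at equilibrium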

    This means that while I know of no evidence that small increases in forcing such as that from a doubling of CO2 affect the temperature, there is indeed evidence that those same small increases have NOT affected the temperature.

    And that’s how we can do experiments with the climate …

    Best regards,

    w.

    • The question is whether the small change in forcing from say doubling CO2 affects the global temperature.

      Given that a forcing is essentially the difference between the energy coming in and the energy coming out, it would be utterly remarkable if a change in forcing did not affect global temperatures. It would completely rewrite the laws of physics.

      • Willis Eschenbach

        …and Then There’s Physics | September 24, 2016 at 6:33 am

        The question is whether the small change in forcing from say doubling CO2 affects the global temperature.

        Given that a forcing is essentially the difference between the energy coming in and the energy coming out, it would be utterly remarkable if a change in forcing did not affect global temperatures. It would completely rewrite the laws of physics.

        I’m sorry, but what you’ve given is the definition of NET forcing, not of forcing in general. I was referring to individual forcings (in this case, CO2 forcing and water vapor forcing). They are NOT “the difference between the energy coming in and the energy coming out”. For example, CO2 forcing is calculated as 3.7 W/m2 per doubling of CO2 … nothing in there about energy coming in and going out.

        Finally, I fear that your underlying claim is not true. Net forcing and surface temperature are not rigidly tied together as you say. There can be a change in surface temperature without a corresponding change in net forcing. This is because the surface temperature is only reflective of one part of the complex climate system. Take dust devils as an example. When a dust devil forms it immediately starts transporting energy from the surface to the troposphere, cooling the surface … but there is no corresponding change in net forcing. All that’s happened is that the energy has been redistributed inside the system, which doesn’t require any change in net forcing.

        w.

      • Willis,

        They are NOT “the difference between the energy coming in and the energy coming out”. For example, CO2 forcing is calculated as 3.7 W/m2 per doubling of CO2 … nothing in there about energy coming in and going out.

        Yes, it is, by definition. If atmospheric CO2 doubles (and nothing else changes and the system was in equilibrium beforehand) then the difference between the incoming energy and outgoing energy would be 3.7 W/m^2. This is the definition. Of course, in reality, there is a response to a change in forcing, but that doesn’t change that the definition of a change in forcing is the change in the net TOA energy balance.

        Net forcing and surface temperature are not rigidly tied together as you say. There can be a change in surface temperature without a corresponding change in net forcing.

        You’ve reversed what I said. I said that if there was a change in forcing then there would be a change in the energy balance and it would be remarkable if this did not affect temperatures. Technically, forcings are normally defined in terms of external changes, so – by definition – a change in temperature cannot produce a change in forcing.

        What I presume you mean is that there can be a change in temperature that does not produce a change in net TOA flux. This is possible, but would only be the case if the response to that change in temperature somehow managed to produce no net change in TOA flux, or if the spatial distribution of the change were such that it didn’t produce a change in the net TOA flux. The reverse, however, is not true. Changing the net TOA flux will almost certainly affect global temperatures.
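        The bookkeeping in this exchange can be made explicit with the standard linearized energy-balance relation N = F - λΔT, where N is the net TOA imbalance, F the forcing, and λ the feedback parameter (a sketch; the value of λ is illustrative, not a measured constant):

          def toa_imbalance(F, dT, lam=1.2):
              # Net TOA imbalance (W/m^2) for a forcing F (W/m^2) and a
              # temperature response dT (K); lam is in W/(m^2 K).
              # N = 0 means the response has restored the energy balance.
              return F - lam * dT

          print(toa_imbalance(3.7, 0.0))   # 3.7 -> the instant the forcing is applied
          print(toa_imbalance(3.7, 3.08))  # ~0  -> equilibrium, where dT = F / lam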

      • attp, “Technically, forcings are normally defined in terms of external changes, so – by definition – a change in temperature cannot produce a change in forcing. ”

        Right, by definition, any response to a change in temperature would be a feedback to GHGs; except, of course, a change in temperature would produce a change in water vapor, a GHG, and clouds, produced by that GHG, so until you ferret out the “cause” or initial forcing, it is assumed to be anthropogenic, I believe by definition.

        Thank goodness y’all defined everything perfectly or things could get confusing :)

      • Technically, forcings are normally defined in terms of external changes, so – by definition – a change in temperature cannot produce a change in forcing. [my bold]

        If atmospheric CO2 doubles (and nothing else changes and the system was in equilibrium beforehand) then the difference between the incoming energy and outgoing energy would be 3.7 W/m^2.

        The physical domain, of course, completely resolves each and every aspect of all laws of physics. In the physical domain Conservation of Energy is always obtained everywhere and for all times. It makes no difference if energy-in is not equal to energy-out.

        I don’t understand this statement: forcings are normally defined in terms of external changes. There are no external changes. The only thing external to Earth’s climate systems is the source, the Sun. Changes in aerosols, water vapor, CO2, utilization of energy for all processes internal to the systems, and all other forcings are internal to the systems.

        And this one, either: – by definition – a change in temperature cannot produce a change in forcing. This definition is completely inconsistent with Conservation of Energy when applied to Earth’s climate systems. It is indisputable that changes internal to the sub-systems and at the interfaces between sub-systems change, to greater or lesser degrees, the radiative energy budget at the TOA. Again, the conservation of energy law of physics demands that it be so.

        The law of conservation of energy does not, and never will, require that (and nothing else changes and the system was in equilibrium beforehand). Equilibrium usually means lack of gradients in all driving potentials, both internal to a sub-system and between sub-systems. In this usual sense, especially as related to thermodynamics and the idealized processes that are typically employed to demonstrate applications of conservation of energy, equilibrium cannot be expected. In Earth’s climate systems, these gradients are life.

        Because mass and energy changes, and exchanges, are not in balance internal to and between Earth’s climate systems, change is the norm. “Nothing else changes” is impossible. Again, due to conservation of energy and mass, and all other laws of physics.

        It is usually helpful if idealizations of physical phenomena and processes correctly reflect those obtained in the physical domain.

        Corrections to incorrectos will be appreciated.

      • Idealizations of the laws of physics are helpful to the extent that the dominant physical phenomena and processes dominate the idealization.

      • Capt,
        I can’t work out what you’re getting at, but that might have been your intent.

        Dan,

        I don’t understand this statement: forcings are normally defined in terms of external changes. There are no external changes.

        In this context external means external to our climate. A change in solar insolation is an external change. Changes in atmospheric CO2 driven by digging up and burning fossil fuels and emitting the CO2 into the atmosphere are external. Volcanic eruptions produce external changes.

        And this one, either: – by definition – a change in temperature cannot produce a change in forcing.

        Simply because the definition is normally that a change in forcing is due to some external change (such as those above). However, there are some cases where changes are internally forced (Dansgaard-Oeschger events, for example). I simply mean that a change in TOA energy balance (which could be due to a temperature change) is not necessarily indicative of a change in forcing.

        “Nothing else changes” is impossible.

        Of course, but that does not mean that one cannot determine what would be the case if nothing else changes.

        It is usually helpful if idealizations of physical phenomena and processes correctly reflect those obtained in the physical domain.

        It’s usually helpful if those who would like to have a discussion do not resort to pedantry. YMMV, of course. A change in forcing is a well-defined concept. That one can never – in reality – produce a change in forcing without causing other things to change too, does not mean that one cannot define, and quantify, it.

      • attp, “I can’t work out what you’re getting at, but that might have been your intent.”

        It isn’t that hard: a change in temperature, no matter the initial cause, has to result in a change in atmospheric water vapor, which is part of the total atmospheric effect. Since water vapor and, to some extent, clouds are considered amplifying feedbacks, as in amplifying forcing, you have a change in “forcing” in reality but not in theory.

        In Redneckese, your forcing definition sucks because it doesn’t allow for natural variability or lack of knowledge of the proper initial conditions.

      • ATTP wrote:
        “If atmospheric CO2 doubles (and nothing else changes and the system was in equilibrium beforehand) then the difference between the incoming energy and outgoing energy would be 3.7 W/m^2.”

        Except forcings are calculated at the tropopause, so the net change in forcing isn’t quite the planetary energy imbalance.

      • > your forcing definition sucks because it doesn’t allow for natural variability or lack of knowledge of the proper initial conditions.

        Of course it allows for the first part, Cap’n, but I too hate it when operational definitions don’t measure what it can’t.

        OTOH, your job as a physics litigator could become a rent by neverendingly appealing to our infinite ignorance.

      • Willard, “OTOH, your job as a physics litigator could become a rent by neverendingly appealing to our infinite ignorance.”

        There is ignorance and then there are ignorant assumptions. attp actually has a post on the LIA “recovery?” where he uses his standard understanding of “forcing” in an attempt to explain why LIA recovery is nonsense. ” The mean temperature of the planet is essentially determined by 3 factors, the amount of energy we get from the Sun, the amount that we reflect directly back into space (albedo), and the composition of our atmosphere (the greenhouse effect).”

        70% of the planet is covered with water having about 1000 times the heat capacity of the atmosphere, and the temperature of that water determines the concentration of the primary GHG, water vapor, plus a few side bars like atmospheric circulation patterns, sea ice extent etc. I believe there was a recent paper attempting to “explain” how early pre-industrial man had to have an impact on the tropical oceans during the mid 19th century through some mystical teleconnection. If the composition of the atmosphere, and hence the greenhouse effect, changes without human involvement, is it still not a forcing?

      • > There is ignorance and then there are ignorant assumptions. attp actually has a post […]

        I won’t chase that squirrel down until you agree that the definition of forcing doesn’t preclude natural variability, Cap’n. Forcing. Feedback. You should know the drill better than me.

        If you could also provide a link so I can check how much ridicule you included in your paraphrase, that’d be great.

      • willard, “I won’t chase that squirrel down until you agree that the definition of forcing doesn’t preclude natural variability, Cap’n. Forcing. Feedback. You should know the drill better than me.”

        Not going to do your homework for you. Will give ya an example though. With the average temperature of the oceans at ~18 C degrees and covering ~70% of the planet, the effective radiant energy released that would be measured at the top of the atmosphere would be ~280 Wm-2 if the oceans could uniformly release energy “globally” and at a constant rate. So the oceans are a massive heat reservoir that would tend to reduce the impact of negative short-term (as viewed by a planet) atmospheric forcings, like say solar or volcanic aerosols, by providing energy. At some time in the future that energy has to be replaced if the system is going to return to steady state or “equilibrium.” A good estimate of the time required is ~300 years per C degree, and most likely longer since hemispheres don’t warm at the same rate.

        That “recovery” isn’t a “forcing”, but it does modify the composition of the atmosphere. It is a “feedback” though until the “external” cause of the change is determined. Ignorance of the “cause” produces an “unforced” variation which is kind of ironic.

      • > Not going to do your homework for you.

        Your handwaving, your citation to find for me, Cap’n.

        As for the gist of your example, i.e. ignorance of the “cause” produces an “unforced” variation is indeed kind of ironic for, like irony, ignorance is not the kind of forcing or feedback that can impact climate. You could as well argue that the IPCC’s (why you say it’s AT’s and where are your manners) definition of forcing sucks because it doesn’t allow for magic.

        Here’s a pro tip: when you discuss the impact of definitions, you enter epistemology, and Red neck physics ain’t enough anymore.

      • David,

        Except forcings are calculated at the tropopause, so the net change in forcing isn’t quite the planetary energy imbalance.

        Except I didn’t specify where the difference between incoming and outgoing energy was determined. You are correct, of course, that it’s measured at the tropopause. In fact, there is instantaneous radiative forcing, stratospherically adjusted radiative forcing, and effective radiative forcing, all of which are slightly different.

        This started, of course, with a suggestion that a change in forcing might not affect global temperatures. Given that a change in forcing indicates that there is a change in energy balance, it would be quite remarkable if such a change did not affect global temperatures.

      • Willard, “Here’s a pro tip: when you discuss the impact of definitions, you enter epistemology, and Red neck physics ain’t enough anymore.”

        Should have been more than enough. Unless you accurately know the initial conditions for your assumed “normal” state, you are going to have “unknown” causes of change until you reach the boundary value problem you are assuming exists. Too many layers of assumptions.

        Since you didn’t “get” my example, try looking at the early 20th and late 19th centuries, when modeled volcanic forcing overestimated cooling, but a few decades later the cooling appears. That is due to the ocean heat reservoir being able to limit the rate of atmospheric cooling, or the models suck; pick your poison.

      • Meant to put a smiley after determined in my last comment :-)

      • Okay, that now doesn’t make sense since the comment I was referring to is still in moderation.

      • > Too many layers of assumptions.

        Just one assumption, Cap’n – the same we use to decide that the number of grass leaves is either odd or even in any given lawn of America.

        We don’t need no stinkin’ climasticore!

      • Willard, “Just one assumption, Cap’n – the same we use to decide that the number of grass leaves is either odd or even in any given lawn of America.”

        Poor example. When you assume current atmospheric optical depth is “normal” you are assuming a lack of volcanic activity is neutral, so there would be no water vapor or cloud feedback. However, if a higher concentration of volcanic aerosols is “normal” then less aerosols would be a positive forcing and increased water vapor would be a positive feedback to that forcing. You get twice or more the error for one assumption implying another assumption. I believe it is called sensitivity to error.

        Instead of assuming “external” forcing, you can just use a change in forcing to avoid confusion; then, instead of assuming “pre-industrial” was “normal”, you just attribute what is actually known to what is known, without assuming your butt off.

      • > Instead of assuming “external” forcing you can just use a change in forcing to avoid confusion […]

        Never misunderestimate your power to confuse, Cap’n.

        Compare your suggestion with this one:

        The definition of a forcing is essentially the net change in energy balance (change in net TOA flux) due to external (e.g. solar), volcanic emissions and internally human imposed perturbations (e.g. added CO2). Typically, it has been defined relative to some baseline time period (IPCC, 2013). This change in energy balance will cause warming/cooling and a temperature response, which will then produce a feedback response.

        https://andthentheresphysics.wordpress.com/2015/06/22/assessing-anthropogenic-global-warming/

        I don’t know about you, but when AT, Senior, and the IPCC agree on something, I think it’s time to drop the stick and back off from the horse.

        My example refers to the fact that bivalence is the hallmark of realism, BTW.

      • Very good Willard, now all you have to do is try and understand what I have been saying.

      • You go first, Capt’n – try to understand that what you’re saying is crap.

      • Willard, “You go first, Capt’n – try to understand that what you’re saying is crap.”

        Perhaps you just aren’t trying willard. Per the IPCC quote you provided, “forcing” is relative to a reference period. If you pick the wrong reference in a thermodynamics problem, your results will be crap.

        So let’s see just how much crap you can expect. tos, what the models attempt to model, is a fictitious surface being compared to an index based on land tos and ocean surface temperature which is actually a bulk ocean layer temperature.

        The index, GMST, isn’t a thermodynamic temperature; it is an average of temperatures ranging from -80 C to +50 C, converted to an anomaly, which is then assumed to be relevant to a specific energy. With a thermodynamics-relevant temperature there is no assumption; that is why it has a real meaning in thermodynamics. You can consult the zeroth law if you like. It is the only reference available, but you need to be aware of its issues.

        Climate modelers assume a boundary value problem, which is fine; however, until the initial conditions are accurately determined, “forcing” as defined is meaningless before the system reaches the assumed “steady state”. That “steady state” likely depends on the bulk of the heat capacity, 70% of the “surface”, which has settling times measured in centuries or longer.

        So what the IPCC is saying and ATTP is agreeing to, is that if they have made all of the correct assumptions, then all forcing is “external”. His argument against LIA recovery is just repeating a laundry list of questionable assumptions without addressing the issue.

        Want examples? The PAGES2K gang are trying to justify “early pre-industrial” warming in the tropical oceans.

        There was recently a new paper trying to explain the southern hemisphere temperature response lag using long term ocean circulation.

        Artfully dodging the subject isn’t addressing the subject.

      • So what the IPCC is saying and ATTP is agreeing to, is that if they have made all of the correct assumptions, then all forcing is “external”.

        Ummm, no, this is not what I’m saying.

      • > what the IPCC is saying and ATTP is agreeing to

        Whatever AT and the IPCC are saying, Senior’s fine with it, Cap’n.

        Go mansplain thermodynamics to Senior.

        Report.

      • attp, “Ummm, no, this is not what I’m saying.”

        Really? Pretend that the full recovery from the LIA will be complete in 2025 and that temperatures prior to 1100 AD are the norm for extremely low global volcanic activity like we have today. You know CO2 has a forcing impact; now suppose that some percentage of the warming to date is related to the “cannot possibly be” recovery from the LIA, so how do you split attribution?

        In your blog post you said this doesn’t make “physical” sense, which is basically assuming away the possibility.

        https://andthentheresphysics.wordpress.com/2014/06/19/recovering-from-the-lia/

      • A link, Cap’n! A link at last!

        Now, try quoting. Like this:

        [O]ur climate isn’t some kind of bouncing ball that has an equilibrium that’s defined by it’s own properties. The mean temperature of the planet is essentially determined by 3 factors, the amount of energy we get from the Sun, the amount that we reflect directly back into space (albedo), and the composition of our atmosphere (the greenhouse effect). If one of these things changes (what we’d typically call a change in forcing) we will warm if the changes causes us to get more energy than we’re losing, and it will cause us to cool if we lose more energy then we’re gaining. There are other effects (internal variability) that can cause the surface temperature to change, but if these other effects do not change the solar flux (impossible), our albedo (unlikely), or the composition of our atmosphere (possible in some cases), we will return to equilibrium quickly.

        You can even emphasize, like I just did.

        These two tricks will help you in your squirrel necromancy.

      • This discussion of what a forcing is seems to be unfortunate. It’s a little like trying to separate thrust and drag. The airplane system is quite complex with feedbacks etc. The reason the problem is so hard is that it’s the difference between thrust and drag that counts and it’s very hard to model accurately. Most codes are inadequate. Temperature anomaly is similarly very small compared to total energy fluxes and is smaller than the numerical truncation error so it’s very difficult to compute.

        The problem here is that you need to compute not just total forcings but their detailed distributions accurately. That’s a very complex task for an airplane but far harder for the atmosphere.

      • Temperature anomaly is similarly very small compared to total energy fluxes and is smaller than the numerical truncation error so it’s very difficult to compute.

        But you can use the data to constrain the problem in ways that eliminate certain possible types of warming. The data proves there is no loss of cooling at night across the planet; the range it can cool at night is a lot higher than is normally needed to cool the year’s warming.

      • Willard, You are a pip. Why didn’t you highlight this, “we will return to equilibrium quickly.” Now pretend I have made that statement instead of your buddy. Why would we return to equilibrium quickly and exactly what is “equilibrium?”

        If the oceans are in a steady warming state, that would be a recovery, and if the rate is ~1 C per 300 years, that would imply ~0.33 C per century; and since warming by any means would increase the atmosphere’s ability to hold water vapor, what would you expect the feedback to the steady warming to be? I believe you would accuse me of arm waving if I made an assumption, “quickly return to equilibrium”, based on an assumption that the thermal mass of the oceans is negligible with respect to the atmosphere’s response to “external” forcing.

        Now what is the potential energy flux of 70% of the surface being at ~18C and having a thermal mass ~1000 times greater than the atmospheric tail you have wagging your dog?

      • > Why didn’t you highlight this […]

        Because as far as I’m concerned, we’re still onto your forcing definition sucks because it doesn’t allow for natural variability, and I already said that I won’t chase your own zombified squirrels before setting definitional matters first. What I emphasized suffices to show that you strawmanned AT. That comes right after having shown you that AT’s “forcing definition” is both the IPCC’s and Senior’s.

        Not bad for a pip.

        Now, if you could please wait until Denizens don’t have to beat a link out of you before complaining about editorial practices, that’d be great. Reading the discussion in the comments before mansplaining the zeroth law might also help reduce the number of strawmen in this virtual world. Not that I mind. I rather like your strawmen. More than your squirrels, actually.

        Thanks for playing,

        W

      • Willard, “we’re still onto your forcing definition sucks because it doesn’t allow for natural variability.”

        Which is why you should be playing in another thread. The heat capacity of the oceans provides a near-constant energy flux, which would tend to limit the impact of negative forcing and in some cases amplify positive forcing. That is what a thermal reservoir does. When forcing and feedback definitions ignore an internal reservoir, you get garbage.

      • > When forcing and feedback definitions ignore an internal reservoir, you get garbage.

        Sure, Cap’n, and by Redneck magic, these reservoirs generate more warmth than they receive. How else could they have been powering Denizen’s armwaving for decades?

        I surmise that even if you were on a plank, you’d choose the sharks before quoting the definition you’re strawmanning.

      • Willard, “Sure, Cap’n, and by Redneck magic, these reservoirs generate more warmth than they receive. How else could they have been powering Denizen’s armwaving for decades?”

        This is why you need to find a new kiddie pool to play in.

        http://climexp.knmi.nl/data/iersstv4_0-360E_-65-65N_n.png

        According to the new and improved ERSSTv4, the oceans from 65N to 65S, that would be the normally wet part, have an average temperature of 19.5 C degrees, and the best guess of the total average surface temperature is about 15.5 C degrees. The majority of the water vapor and clouds in the air are due to this ~70% of the surface that stays about 5 C above the average temperature and about 11 C above the average temperature of the land masses. The oceans regulate temperature, i.e. they are a thermal reservoir: massive heat energy stored relative to the atmosphere.

        Here is a wiki thing for ya. https://en.wikipedia.org/wiki/Thermal_reservoir

        No arm waving on my part – It’s a fact Jack.

        Oh and Willard, the reason you are confused about how the ocean can always be producing heat is that you are confusing average temperature with energy, the zeroth law thing. Part of the average surface temperature anomaly is the polar regions, with average temperatures in the -25 C range, which, radiantly speaking, takes about 3.4 Wm-2 to change one degree. The average oceans, excluding latent heat, would take about 5.5 Wm-2 for a one degree change, and the tropics would be close to 6.2 Wm-2 for a degree change, excluding latent. So with a thermal reservoir you really should think about the thermodynamics; it doesn’t take much energy transfer from the oceans to make a large temperature change in the cold and dry regions of the Earth.

        You can kind of lose sight of that with anomalies.
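        The per-degree numbers above are just the local slope of the Stefan-Boltzmann law, dF/dT = 4σT³, and they can be checked directly:

          sigma = 5.67e-8  # W/(m^2 K^4)
          for label, T_C in [("polar", -25.0), ("ocean", 18.0), ("tropics", 30.0)]:
              T = T_C + 273.15
              print(label, round(4.0 * sigma * T ** 3, 2))  # W/m^2 per degree
          # polar ~3.5, ocean ~5.6, tropics ~6.3: close to the 3.4 / 5.5 / 6.2 quoted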

      • ATTP named the 3 factors: Sun, Albedo, GHGs.
        If the Earth spends hundreds of years in a LIA, the top few hundred meters of the ocean lose Joules because of reduced shortwave. During a LIA they reduce Joule transmission both to the atmosphere and to their deeper reserves.

        “Both water masses were ~0.9°C warmer during the Medieval Warm period than during the Little Ice Age and ~0.65° warmer than in recent decades.”
        https://wattsupwiththat.com/2013/10/31/new-paper-shows-medieval-warm-period-was-global-in-scope/

        MWP: + 0.90 C
        LIA: + 0.00 C
        Now: + 0.25 C
        Average of MWP & LIA: + 0.45 C

        “Below the sea surface, historical measurements of temperature are far sparser, and the warming is more gradual, about 0.01°C per decade at 1,000 meters.”
        https://scripps.ucsd.edu/news/voyager-how-long-until-ocean-temperature-goes-few-more-degrees
        Or 0.10 C per century.

        Atmospheric warming since a long time ago: about 0.9 C
        Range of OHC above: 0.9 C
        To go from the low to the high of that OHC range, the oceans need to store about 1000 times as many Joules as the atmosphere needs to store for a rise of 0.9 C.
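        The “1000 times” factor used throughout this thread is easy to reproduce from standard bulk figures (the exact ratio depends on how much of the ocean one counts):

          m_atm  = 5.1e18  # kg, mass of the atmosphere
          cp_atm = 1004.0  # J/(kg K), air
          m_oce  = 1.4e21  # kg, mass of the oceans
          cp_oce = 3990.0  # J/(kg K), seawater

          print((m_oce * cp_oce) / (m_atm * cp_atm))  # ~1090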

      • > The oceans regulate temperature […]

        A regulator just ain’t a generator, Cap’n, and it would be hard to claim the IPCC don’t know the fact that oceans regulate temps, e.g.:

        Climate Feedback Parameter A way to quantify the radiative response of the climate system to a global surface temperature change induced by a radiative forcing (units: W m–2 °C–1). It varies as the inverse of the effective climate sensitivity. Formally, the Climate Feedback Parameter (Λ) is defined as: Λ = (ΔQ – ΔF) / ΔT, where Q is the global mean radiative forcing, T is the global mean air surface temperature, F is the heat flux into the ocean and Δ represents a change with respect to an unperturbed climate.

        Do I need to find where AT and Senior agree on more of the same for you to drop that strawman?

        Please read the comment section of the post you’ve cited. Everything’s there.
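        For concreteness, the Climate Feedback Parameter definition quoted above can be evaluated with round numbers of roughly the magnitudes discussed in the literature (illustrative values of my choosing, not IPCC-endorsed inputs):

          dQ = 2.3  # W/m^2, change in global mean radiative forcing (illustrative)
          dF = 0.7  # W/m^2, heat flux into the ocean (illustrative)
          dT = 1.0  # K, change in global mean surface temperature (illustrative)

          Lam = (dQ - dF) / dT
          print(Lam)  # 1.6 W/(m^2 K); implied sensitivity ~ 3.7 / Lam ~ 2.3 K per doubling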

      • A regulator just ain’t a generator,

        First it’s a capacitor, not a regulator, and you must have never discharged a 1 F 50 V capacitor with a wire if you think that doesn’t generate a large flow of power.

      • Viewing oceans as a capacitor doesn’t facilitate Cap’n’s increased circulation conjecture, Micro. That’s why he needs to talk about regulators.

        If you can sell any cheap capacitor that can be pro-active in generating energy by itself, I may be able to find you buyers.

      • Viewing oceans as a capacitor doesn’t facilitate Cap’n’s increased circulation conjecture, Micro. That’s why he needs to talk about regulators.

        All I saw was him saying that warm water in the Arctic makes a bigger difference than it does in the tropics.

      • And yet the ocean heat content is increasing (well, that’s what the data suggests).

      • attp, “And yet the ocean heat content is increasing (well, that’s what the data suggests).”

        Another useless comment. According to Rosenthal et al., ocean heat content has been increasing since circa 1700, and current heat content is still slightly below that of the 800 to 1000 AD period. There has even been a slight acceleration in the rate of heat uptake that could be due to CO2. However, unless the PAGES2K gang plan on digging up pre-industrial GHG forcing from a herd of unicorns, there is evidence of long term uninterrupted warming.

      • In fairness, you can’t say that I never warned you about the you make no sense move, AT.

      • Another useless comment.

        Ooooh, touchy.

        According to Rosenthal et al., ocean heat content has been increasing since circa 1700, and current heat content is still slightly below that of the 800 to 1000 AD period.

        Then maybe you can explain how the oceans can be the source of the warming if the energy is increasing.

        Now, one possibility (that I think I mentioned in my post) was that something forced the cooling (changes in solar insolation, enhanced volcanic activity,….) that then went away, producing a positive planetary energy imbalance, producing warming. One issue with this is that the warming profile should then be faster initially, and then slow down. Also, if this is the case, you would then either need to find some factor that can produce a change in forcing larger than the change in forcing due to the changes in atmospheric CO2 (what is it?) or illustrate that somehow our climate is much more sensitive to changes in solar/volcanoes, than it is to changes in atmospheric CO2. The efficacies may well be different, but not quite by that much.
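        The “faster initially, and then slow down” profile is the signature of simple relaxation toward balance, which a one-box toy model makes concrete (a sketch only; the heat capacity and feedback parameter are toy values, not a calibration to the real ocean):

          C   = 8.0e8  # J/(m^2 K), effective heat capacity (roughly 60 m of ocean)
          lam = 1.2    # W/(m^2 K), feedback parameter
          F   = 1.0    # W/m^2, forcing restored after a period of suppression

          dt = 86400.0 * 30  # one-month time step, in seconds
          T  = 0.0           # temperature anomaly, K
          for year in range(0, 51, 10):
              print(year, round(T, 3))  # warming is fastest at the start...
              for _ in range(120):      # ...then decays toward F/lam ~ 0.83 K
                  T += dt * (F - lam * T) / C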

      • Then maybe you can explain how the oceans can be the source of the warming if the energy is increasing.

        I’m not sure your comment makes any sense, but maybe this offers an answer to what I think you were trying to ask.
        The blob also had a high pressure zone in the middle. High pressure zones tend to be cloud free, warming the surface, which tends to make high pressure. At the same time, the jet stream went around the northern side of the blob. The area downwind from the blob was warmer than normal, whether from the blob or from airflow changes from the jet stream.

      • dipstick, “Δ represents a change with respect to an unperturbed climate.”

        Unperturbed would be in equilibrium. You are just reaching for a quote without thinking. The point is at what temperature and energy level would the climate be considered unperturbed aka normal or ideal.

      • > There has even been a slight acceleration in the rate of heat uptake that could be due to CO2.

        In fairness, we must admit that it could also be due in part to decomposing squirrels.

      • > You are just reaching for a quote without thinking.

        That’s one thing more than you, Cap’n. Try reading, for a change:

        Climate change commitment Due to the thermal inertia of the ocean and slow processes in the biosphere, the cryosphere and land surfaces, the climate would continue to change even if the atmospheric composition were held fixed at today’s values. Past change in atmospheric composition leads to a committed climate change, which continues for as long as a radiative imbalance persists and until all components of the climate system have adjusted to a new state. The further change in temperature after the composition of the atmosphere is held constant is referred to as the constant composition temperature commitment or simply committed warming or warming commitment. Climate change commitment includes other future changes, for example in the hydrological cycle, in extreme weather and climate events, and in sea level change.

        Wait – is the IPCC really talking about the thermal inertia of the ocean? I really thought of that when you said that forcing and feedback definitions ignore an internal reservoir.

        But please, do continue mansplaining reference periods and assumptions instead of acknowledging that a regulator just ain’t a generator.

      • “In fairness, we must admit that it could also be due in part to decomposing squirrels.”

        nope, squirrels are predominantly northern hemisphere animals and the NH imbalance is near zero, so they might be a negative forcing :) You should go with whales.

      • > You should go with whales.

        I’d rather go with the undead squirrels contrarian captains fish out of the virtual seas to decompose comments, Cap’n.

        “But please, do continue mansplaining reference periods and assumptions instead of acknowledging that a regulator just ain’t a generator.”

        If the regulator is gaining energy due to “recovery” or something other than “external” forcing, it would appear to be a generator. Do try and keep up.

        ATTP, I am not even going to copy your comment: gaining energy is warming; it is the cause of the gain that is the question.

        ATTP, I am not even going to copy your comment: gaining energy is warming; it is the cause of the gain that is the question.

        If you gain energy you warm, hence if the ocean is gaining energy, it can’t be the source of the warming.

      • “If you gain energy you warm, hence if the ocean is gaining energy, it can’t be the source of the warming.”

        The average temperature of the ocean surface is 19.5 C and the average temperature of the ocean volume is ~ 4 C. A change in average surface velocity or the rate of overturning circulation can cause warming with absolutely no forcing involved, just a shift in weather patterns. What you are assuming requires a real “equilibrium” but what we have is an assumed steady state with lots of fluid dynamics.

        Now, since it only takes 0.6 Wm-2 to “warm” the bulk of the oceans and the average surface is emitting 400 Wm-2 plus latent and convective, you could have a bit of an issue with your energy balance, but how well the ocean mixes determines if it is warming or not.

        There is a paper by Ding et al. 2014 where they are attempting to model the response to volcanic forcing. In the NH a volcano can cause an increase in the rate of the overturning circulation and actually “cause” a global average increase in surface air temperature, but it is a negative forcing. This is a pretty fun paradox for the volcanic forcing gang. This is really why I like this problem; it is wonderfully complex.

        http://climate.envsci.rutgers.edu/pdf/Ding_jgrc20810.pdf

        I cannot find another paper, one in Climate of the Past, but they used some pretty exciting terms for warming when there should have been none.

      • That doesn’t answer my question. If the ocean is gaining energy, how can it be the source of the warming?

      • > If the regulator is gaining energy due to “recovery” or something other than “external” forcing, it would appear to be a generator.

        Of course it would, just as we should never misunderestimate the warming power of undead sea squirrels.

        If you consider the oceans to be a generator, why don’t you state it as a fact, Cap’n Jack? Because it’s not. It’s just a possibility, along with squirrels and unicorns.

        Since I miss Pekka, here he is:

        [I] don’t really consider understanding internal variability to be of primary importance. It’s more important to know climate sensitivity, and there are other questions related to the dynamic behavior of climate when GHGs are added that are more important.

        Understanding internal variability helps in estimating the climate sensitivity, and may also help in figuring out what the future dynamics is likely to be, but these issues may be studied even with lesser understanding of internal variability.

        Another factor that reduces the importance of understanding the role of internal variability in the past warming is that making policy decisions should not require certainty, or even high likelihood of estimated outcome; it’s enough that the likelihood of relatively fast warming is not very small. Likelihood of 50% has almost the same policy implications as likelihood of 90%, and even a likelihood of 20% may have only a little weaker influence on rational decision making that’s based on risk aversion.

        As you can see, I can play squirrel too.

      • “That doesn’t answer my question. If the ocean is gaining energy, how can it be the source of the warming?”

        I gave you the answer: increased circulation. Just look at NINO and the impact changes in circulation have on surface temperature and OHC. Now NINO is a bit incomplete, since it is normalized.

        http://climexp.knmi.nl/data/iersstv4_0-360E_-10-10N_n_a.png

        Then if you feel froggy, compare that with Oppo et al. 2009, Rosenthal et al. 2013? or even Emile Geay 2012/2013. The fire box for the atmospheric engine is in the tropics and y’all are basically looking up the exhaust pipe.

        There is also a new “rewrite” of volcanic forcing, Sigl et al. 2014, which shows more forcing than Crowley and Unterman 2013.

      • captdallas2 0.8 +/- 0.3

        Re your posted graph:

        There have been 2 depressions and 28 business recessions since 1870.

        You will find that essentially all of the temperature peaks will coincide with a period of decreased industrial activity, due to fewer SO2 aerosol emissions. (A few are due to strong El Ninos.)

        Sorry. They are not due to water sloshing around!

        The “few due to El Ninos” would be water sloshing around. When you have negative aerosol forcing for whatever reason, cooling in the northern hemisphere tends to change the way the water sloshes around. Prior to the global business issues starting in 1870, there were still volcanoes, and there would still be sloshing around.

        That sloshing is mixing, and if the mixing rate and efficiency changes, heat uptake and outflow would change.

        http://climexp.knmi.nl/data/inodc_heat700_0-360E_45-90N_n.png

        It can even appear to be cyclic.

        Yes, “sloshing” is a “given” for El Ninos and La Ninas.

        However, it is NOT the cause of any increases in sea surface temperatures.

        Earlier, I had mentioned that you would find that essentially all of the peaks in your graph of sea surface temperatures would coincide with global business recessions or depressions.

        I have taken the time to do this, using an enlarged copy of your graph, and, excluding peaks due to El Ninos, the match is essentially perfect.

        Since the Woodfortrees HADCRUT graph of global mean temperatures and your graph of mean ERSSTv4 Sea Surface Temperatures show the same correlation with business recessions, it is obvious that their occurrence has a profound effect on average global temperatures.

        This effect can only be due to the reduction in dimming SO2 emissions during the reduced industrial activity associated with a business recession. Fewer aerosols = cleaner air = greater insolation (surface warming).

        It therefore follows that the reduction in global SO2 emissions due to Clean Air efforts will have the same effect, and is responsible for all of the AGW that has occurred.

        (Projections of average global temperatures based solely upon the amount of reduction in net global SO2 emissions are accurate to within approx. 0.02 deg. C, leaving no room for any warming due to greenhouse gasses).

      • Burl, “However, it is NOT the cause of any increases in sea surface temperatures.”

        “Warming” is related to total energy, not just surface energy or temperature. The sloshing increases mixing, which transfers heat energy to depth as well as to the poles. This started out as a discussion of long term recovery from an LIA-type condition and how the temperature response should be treated: as an “external” forcing or as a response to some event that could take centuries to recover to the “normal” heat content.

        Normal aerosol forcing is really an unknown, so, as you point out, cleaner than “normal” air can result in warming and dirtier than “normal” air in cooling. But surface temperature is only part of the warming/cooling process; you also need to know the rate of ocean heat uptake.

        So a lower than normal rate of ocean heat uptake, less sloshing, can produce an increase in surface temperature; that is an El Nino situation, but that surface warming actually increases the rate of heat loss to space. La Nina, more sloshing, produces a lower surface temperature, but actually increases the rate of “warming”.

        So “sloshing” variations in surface temperature that aren’t necessarily “forced” are actually indications of warming, not just some neutral “oscillation”. This is why OHC is recommended as a co-metric to be used with surface temperature to determine the actual response to whatever forcing or feedback or unicorn is involved.

        People tend to focus on what they “suspect” rather than considering the entire system. “Sloshing” is actually pretty important.

        I miss Pekka as well; however, internal variability is a bit different than long term persistent recovery, which would have an impact on the likelihood of continued “relatively fast” warming and provide a clue about unavoidable warming. It would really suck to do all sorts of expensive and wonderful things that do diddly.

      • ATTP:
        “…that something forced the cooling (changes in solar insolation, enhanced volcanic activity,….) that then went away, producing a positive planetary energy imbalance, producing warming. One issue with this is that the warming profile should then be faster initially, and then slow down.”

        Say we had a coffee can (perfectly insulated) covered with glass and filled up almost to the top with ¼ inch between the surface of the water and the glass. The water is at room temperature and significant sunlight is now directed through the glass into the water. The energy is absorbed by the water. There is water vapor and CO2 in the air under the glass. That traps IR but not SW. The IR source is the water. In this case, first the water is warmed then it warms the air. The water has to accumulate before it can emit. The water is trying to reach its new equilibrium and will warm until it does. Since it is the source of the IR, the air cannot reach its new equilibrium until the water does. So would the air temperature spike as suggested by ATTP? In this experiment the source is only IR upwards from the water. The air is transparent to SW, as is the glass, as it’s magical glass. The glass is transparent to SW and IR.
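        That question can be put to a toy two-box calculation. In the sketch below (my own construction, not a model of the actual can; all coefficients are invented toy values), sunlight heats only the water and the air is heated only by the water, and both temperatures rise monotonically, with the air lagging the water and no initial spike:

          S  = 200.0  # W/m^2 sunlight absorbed by the water (toy value)
          k  = 10.0   # W/(m^2 K), water-to-air exchange coefficient (toy value)
          h  = 4.0    # W/(m^2 K), air-to-surroundings loss coefficient (toy value)
          Cw = 4.0e5  # J/(m^2 K), heat capacity of the water layer (toy value)
          Ca = 1.0e3  # J/(m^2 K), heat capacity of the thin air gap (toy value)

          Tw = Ta = 0.0  # anomalies above room temperature, K
          dt = 1.0       # time step, s
          for step in range(36001):
              if step % 7200 == 0:
                  print(step // 3600, round(Tw, 2), round(Ta, 2))  # hour, water, air
              Tw += dt * (S - k * (Tw - Ta)) / Cw
              Ta += dt * (k * (Tw - Ta) - h * Ta) / Ca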

      • > internal variability is a bit different than long term persistent recovery

        Indeed, Cap’n, and a banana is a bit different than a fruit and the color yellow.

        When you say that:

        The fire box for the atmospheric engine is in the tropics and y’all are basically looking up the exhaust pipe.

        I sense that increased circulation looks more like a carburetor than the (gasp!) internal combustion engine.

      • stevenreincarnated

        If you increase poleward ocean heat transport you warm the world through reduced albedo and a dynamic increase in water vapor. At least that’s what the climate models say. I haven’t seen a study yet that says the warming is from sucking out the ocean heat content. If you are a skeptic and think climate models are no better than chicken guts then I can understand you not knowing or caring. If you argue the climate models are pretty good then you have no excuse for what can only be described as willful ignorance. The studies aren’t hard to find especially here since I have been linking them for years. Find some.

      • The sea is (more often than not) warmer than the air above it because most of the solar energy that makes it to the surface is SW that’s absorbed in the first few meters (or tens of meters) of water. That’s at the top of the mixing layer, so that heat gets spread through its entire depth.

        There’s a high rate of (net, average) heat flow from the upper ocean into the (cooler) air above it. (You could think of it, if you wish, as a capacitor in parallel with a resistor, but that’s a poor analogy.)

        Increased GHGs potentially can reduce that heat flow by reducing the temperature gradient across the skin layer. How much of an effect that has appears to remain under debate.

      • ATTP:
        “If the ocean is gaining energy, how can it be the source of the warming?”
        Anyone for this?
        “Therefore, the natural sinks are greater than the natural emissions and, consequently, the source cannot be natural, and must be anthropogenic.”

        Not mostly, not in large part, but Must Be. If upwelling somewhere around Peru is strong and the sardines are happy, none of that formerly deep-ocean stuff counts as a natural increase in CO2, because there is none.

        I think the oceans are allowed to do two things at once: warm and emit more LW. If the LIA was caused by less TSI and more volcanic activity, then they cooled and emitted less. I have boiler heat at my office. The boiler gains energy and is the source of heat, we say, yet we know the natural gas is the source of the heat. We can look at semantics, or try to. The Sun is the source. But the battery of warmth that is the oceans might be called a source. If we had the largest volcano in the past 2000 years erupt, would we then call the oceans a source as they sacrificed their stored warmth?

      • Ragnaar, a heating water distribution system is a pretty good example. You could have two zones, your living space and your garage. You set the zones so your living space is at 25 C and your garage is at 0 C, just for freeze protection.

        You can increase your garage to 1 C and decrease your living space to 24.2 C, and use less energy while having a higher average temperature. So if you divert higher-temperature ocean water to a significantly lower-temperature zone, you have increased your heating efficiency. There isn’t any sleight of hand; it is the impact of the zeroth law, and average temperature isn’t relevant in most thermodynamic problems. (A quick numerical check is sketched at the end of this comment.)

        Now if you divert more energy from the tropics, you increase the efficacy of solar forcing in the tropics while creating a larger heat loss in a cooler region. Since the ocean constantly releases energy to the atmosphere, improving circulation just increases the efficiency of heat transfer. Ein = Eout forces the system to eventually attempt to reach equilibrium, but there is no set planetary time scale or requirement for it to remain in equilibrium; it can continue hunting for perfection as long as it likes. And when the system is imbalanced between hemispheres, the hunting can take thousands of years.

        Brierly and Toggwieler have both published on how changes in meridional and zonal temperature gradients can impact “global” temperature. Toggwieler even tried to model some of the time frames but said they need a better model.
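        Here is the minimal numerical check of the two-zone claim promised above (my sketch; the UA loss coefficients are invented for illustration, and the claim only holds when the garage leaks heat at a smaller per-degree rate than the living space):

```python
# Two-zone heating check: shifting setpoints can raise the average
# temperature while lowering total heat loss, provided the zones leak
# heat at different per-degree rates. All values are assumptions.
T_OUT     = -10.0  # deg C outdoors (assumed)
UA_LIVING = 200.0  # W/K loss coefficient of the living space (assumed)
UA_GARAGE = 100.0  # W/K loss coefficient of the garage (assumed)

def demand(t_living, t_garage):
    """Steady-state heating power needed to hold both setpoints."""
    return UA_LIVING * (t_living - T_OUT) + UA_GARAGE * (t_garage - T_OUT)

for t_l, t_g in [(25.0, 0.0), (24.2, 1.0)]:
    avg = (t_l + t_g) / 2
    print(f"living {t_l} C, garage {t_g} C: avg {avg:.2f} C, "
          f"demand {demand(t_l, t_g):.0f} W")
```

        With those assumed coefficients the average temperature rises from 12.50 C to 12.60 C while the heating demand falls from 8000 W to 7940 W, which is the point: average temperature alone does not determine the energy flow.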

      • “That doesn’t answer my question. If the ocean is gaining energy, how can it be the source of the warming?”

        I gave you the answer: increased circulation. Just look at NINO and the impact changes in circulation have on surface temperature and OHC. Now, NINO is a bit of an incomplete example.

        #####################

        Tonight I boiled water. The water gained energy. I took its temperature.
        It was getting warmer. And it was circulating… some parts were warmer than others. The air above the pot was also getting warmer.

        At one point I found a pattern in the circulation. I named it Big Boy.

        I think it causes all the warming. It came and went; we don’t know why.
        That pattern of warming explains a lot of the warming.

        That’s it!!! That warm pattern in the warming water is the cause of the warming water.

        Water is a god.

        Or does the ocean get warmer because of increased radiation?

        Nah…

        The ocean warms because it’s a thing that warms.

      • Mosher, “That’s it!!! That warm pattern in the warming water is the cause of the warming water.

        Water is a god.”

        For a world that is 70% covered with water: water vapor produces about 67% of the GHG effect, and clouds, which are mainly water, are responsible for about 90% of the albedo, pretty much. Adding CO2 is going to have an impact of about 1 C per doubling, if it happens to be close to a 100% efficient process. Water vapor and the rest will change as a function of temperature, not of the “suspected” forcing.

        Since you have an experiment, here is another. Instead of a stove top, place an iron on top of a glass filled with water. Monitor the average temperature of the water in the glass for about 1 hour. Now redo the experiment except every 10 minutes, remove the iron for a second and give the water a quick stir. Which glass of water will be warmer? Which glass had the greater amount of external forcing? Should we release the Unicorns?

      • attp, “If you gain energy you warm, hence if the ocean is gaining energy, it can’t be the source of the warming.”

        The “warming” is currently at a rate of ~0.60 W/m2 and limited to the southern hemisphere, where average ocean temperatures are about 3 C cooler than in the northern hemisphere. That rate of warming is approximately 0.14% of the rate available at the surface of the ocean (0.60 out of roughly 430 W/m2 of blackbody emission at typical sea-surface temperatures). If you took a huge immersion blender and blended the oceans to a uniform temperature, what do you think would happen to the rate of warming?

        Micro, a regulator is a better analogy, because the capacitor cannot be short-circuited; there are a few other components in the circuit.

      • Micro, a regulator is a better analogy, because the capacitor cannot be short-circuited; there are a few other components in the circuit.

        It’s just got a high internal resistance. Actually it’s not that high (iirc that water property), so it must just be that there’s a lot of it.
        Thermal conductivity (W/m·K):
        water = 0.6065
        water vapor (steam) = 0.016
        air = 0.024
        iron ~ 80
        concrete = 0.1 to ~2.0
        asphalt = 0.75
        diamond = 1000
        copper ~ 400
        gold ~ 310
        aluminum ~ 205

        So, water doesn’t conduct heat very fast, but there is a lot of it. As a voltage source, this would be modeled with a moderate series resistance on an ideal capacitor. You could lump it all together, but it’d be better modeled as billions of interconnected capacitors with resistors between them, because they can be at different voltages (temperatures).
        Modeled like this, you’d charge up the equatorial caps and let the charge flow throughout the entire circuit.

        I bet we could make a good thermal model of the ocean…
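        As a toy illustration of that capacitor-network picture (entirely my sketch; the resistances and capacitances are invented, not real ocean properties), here is a one-dimensional RC chain with heat injected at the “equator” node:

```python
# 1-D thermal RC chain: N lumped heat capacities ("capacitors") joined
# by thermal resistances, with heat injected at one end (the "equator").
# All values are invented for illustration, not real ocean properties.
N = 50        # nodes, equator at index 0, "pole" at index N-1
C = 1.0e6     # J/K heat capacity per node (assumed)
R = 2.0e-4    # K/W thermal resistance between adjacent nodes (assumed)
Q = 100.0     # W injected at the equator node (assumed)
T = [4.0] * N # deg C starting temperature everywhere

dt = 50.0  # s; explicit update is stable here since dt < R*C/2 = 100 s
for _ in range(20000):
    flow = [(T[i] - T[i + 1]) / R for i in range(N - 1)]  # W along links
    T[0] += dt * (Q - flow[0]) / C
    for i in range(1, N - 1):
        T[i] += dt * (flow[i - 1] - flow[i]) / C
    T[-1] += dt * flow[-1] / C

print(f"equator {T[0]:.3f} C, middle {T[N // 2]:.3f} C, pole {T[-1]:.3f} C")
```

        The equator node warms quickly while the far nodes lag by the RC diffusion time, which is the qualitative point: the “circuit” keeps redistributing heat long after the input changes.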

      • too funny capt…

        you still don’t get it.

        write out an equation. use physical units.

        then you will see your problem

      • Mosher, “then you will see your problem”

        The equation is pretty simple: E = aT^4. (Tsource + Tsink)/2 is a meaningless term in thermodynamics. That equation plus the Zeroth Law of thermodynamics are the only two things you need.

        “Mixing” efficiency has an impact on the “effective rate of diffusion” of heat energy. Since we are dealing with energy flows and not static junior-high-level nonsense, changing how efficiently a volume mixes is pretty important in fluid dynamics.

        So since you seem to have failed the simple glass-and-iron quiz, perhaps you should read more?
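        For what it’s worth, the point about (Tsource + Tsink)/2 is easy to check numerically (my sketch, using the Stefan–Boltzmann law with arbitrary illustrative temperatures): two surfaces with the same average temperature can radiate quite different total power, because emission goes as T^4.

```python
# Same mean temperature, different radiated power: blackbody emission
# scales as T^4, so an average temperature alone does not fix the
# energy flow. Patch temperatures below are arbitrary illustrations.
SIGMA = 5.67e-8  # W/m^2/K^4, Stefan-Boltzmann constant

def mean_emission(temps_k):
    """Mean blackbody emission of equal-area patches at temps_k (K)."""
    return sum(SIGMA * t**4 for t in temps_k) / len(temps_k)

uniform = [288.0, 288.0]  # both patches at the 288 K mean
split   = [258.0, 318.0]  # same 288 K mean, larger spread
print(f"uniform: {mean_emission(uniform):.1f} W/m^2")  # ~390
print(f"split:   {mean_emission(split):.1f} W/m^2")    # ~415
```

        Same 288 K average, roughly 390 versus 415 W/m2, which is exactly why mixing efficiency matters for the energy flow.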

      • captdallas:

        I didn’t get what you said at first.

        “That doesn’t answer my question. If the ocean is gaining energy, how can it be the source of the warming?”

        A long time back you wrote of slushball Earth: open water at the equator, poles with lots of ice. I thought of that as the uptake zones still wide open, with the poles insulated. The Earth should not lose its primary source of warmth. It would seem equator-to-pole-and-back circulations would decrease. Either you get really cold poles just because you do, or because the equatorial regions hoard their warmth to keep life going. As it warms, you get increased circulation. We invented water-cooled engines, but really we just copied that from nature. So it’s warmer, and the GMST shows that. Anchorage is now plus whatever C because the oceans are warmer. And the primary source for that is the oceans. I’d say nothing can warm without the oceans warming, on time frames of 100 years. The 100 years is from me trying not to rule out 60-year cycles and all shorter cycles, which are probably not cycles but chaotic behavior. Take the simple exit from the glacial about 20,000 years ago until now. How can the GMST rise without the oceans warming? It can’t. Was the ocean the source? Semantics about the Sun aside, it was. If it was normal or natural to enter an interglacial, the oceans caused that as they warmed. The descent into the next glacial will be the oceans cooling, followed by the GMST cooling. I agree the GMST is the wrong metric. It’s warming, and when we see evidence of the system cooling in the Arctic, that’s supposed to be bad.

      • Some last comments on this:
        http://www.physicalgeography.net/fundamentals/images/rad_balance_ERBE_1987.jpg
        Karl found the pause had not paused. What about the polar regions, where the TOA might as well be at the surface in winter? Measure that TOA and you can measure the energy leaving. To find a temperature rise there in winter is to find an increased flow of energy out of the atmosphere. That does not seem fair or accurate. To say the GMST is higher, as Karl did, is I think stretching things somewhat. This is another way the GMST is not a good metric to use. If we were back in 1750 the poles would be colder. Less energy flow needed or wanted.

    • ATTP, Remarkable, what more is there that needs to be said now?

    • Willis: Interesting reply.

      “Unfortunately, neither of your references touch that question. Instead, they look solely at whether increasing CO2 increases the FORCING … but very few people dispute that it increases the forcing.”

      They measured the downward radiance that struck their instruments (see their Figure 1a), in power per unit area per unit solid angle. And we know what extra energy does when it strikes an object – it’s absorbed or reflected. Either way, how does that not lead to a temperature increase of the surface and lower atmosphere?

      “which means that doubling CO2 (with a change in forcing of 3.7 W/m2) will change the downwelling radiation by less than one lousy percent …”

      Yet some people often say that 1 W/m2 of solar irradiance change is enough to affect climate.

      “And that amount of change is easily counteracted by small changes in albedo and changes in the daily emergence time of tropical clouds and thunderstorms and dust devils and the like.”

      But albedo changes – due to changes in plant cover, sea ice, glaciers – also happen because of net energy absorption or emission.

      “To the contrary, the natural experiment clearly demonstrates that warming in general has been good for man and beast alike—longer growing seasons, less inclement weather, less ice, less illness and mortality, the benefits are many.”

      That’s an opinion, not an experiment. You’d have to scientifically define “good,” and then find a way to measure it.

      BTW, I’d say that the Moscow 2010 heat wave (~50,000 deaths), with its subsequent impacts on Russian and world grain supplies, and the 2003 European heat wave (~70,000 deaths) were “catastrophes.” So was the recent Louisiana flooding.

      “So we actually have done the experiment to see if warming is dangerous, and the evidence says it isn’t”

      This isn’t an “experiment.” A true experiment would require a control Earth, which starts in the same initial state and evolves without anthropogenic GHG emissions or manmade land use changes. Climate science is an observational science – like geology, or medicine.

      Can you suggest any experiment that can be done on climate change? How would you, even in principle, do an experiment that shows an increase in CO2’s radiative forcing changes atmospheric temperature?

      “The IPCC has been saying for years that such a forcing change would result in a 3° change in the temperature, but we’ve seen nothing even remotely resembling that change.”

      Isn’t that 3°C after equilibrium has been reestablished (which will ultimately take millennia)?

      Gotta go for now. Cheers.

    • Willis: I’m dubious about an extra 3-4 W/m2 from the increase in water vapor, but can’t investigate right now. Later.

    • Willis wrote:
      “As I showed in Precipitable Water and Precipitable Water Redux, observed changes in the precipitable water since 1980 have produced an increase in downwelling radiation of between three and four W/m2, about the same as a doubling of CO2.”

      Santer et al PNAS 2007 found that “total atmospheric moisture content over oceans has increased by 0.41 kg/m2 per decade since 1988.”

      http://www.pnas.org/content/104/39/15248

      Water vapor is 0.25% of the atmosphere’s mass, so its column mass is about 24.9 kg/m2. If we use a logarithmic function for water vapor forcing, then the relative change in forcing is

      ln((24.9 + 0.41)/24.9) = 1.6%

      A book by Lindzen says water vapor’s clear sky forcing is 75 W/m2 (and CO2’s is 32 W/m2).

      https://scienceofdoom.com/2011/02/24/water-vapor-vs-co2-as-a-greenhouse-gas/

      So that comes to only about 1.2 W/m2 (1.6% of 75 W/m2) from the induced increase in water vapor.
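      A quick script-level check of that arithmetic (my sketch; it uses only the numbers quoted above in this comment):

```python
# Back-of-the-envelope check of the water vapor forcing arithmetic,
# using only the numbers quoted above in this comment.
import math

column_mass = 24.9  # kg/m2 water vapor column mass (from above)
increase    = 0.41  # kg/m2 per decade (Santer et al. 2007, quoted above)
wv_forcing  = 75.0  # W/m2 clear-sky water vapor forcing (Lindzen, quoted)

rel_change = math.log((column_mass + increase) / column_mass)
print(f"relative forcing change: {rel_change:.1%}")                   # ~1.6%
print(f"implied extra forcing:  {wv_forcing * rel_change:.2f} W/m2")  # ~1.22
```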

      I don’t know if this is right. It’s just a blog comment, and your posts are just blog posts. I’m sorry, but I don’t accept blog posts as science — not mine, not yours. They are far, far lower in quality than peer-reviewed journal papers by professional scientists.

  80. Why has this two-week-old thread suddenly come alive again?

    JUDITH! The denizens urgently need fresh meat…

    tonyb

    • I know, I know. I’m beyond stupid crazy busy. Spent half of the last 48 hours in planes and airports.

      • I submit this post, Judy:

        Planes. Airports.

        Discuss.

        This would be my first guest post.

      • The search goes on to find new arguments to divert from the realities… that we will all be long dead before any statistically solid trends in planetary periodicities manifest themselves from the chaotic stochastic noise currently forced into trend lines by empirical 4×2, or 2×4 if American, for grant support… or before the seas lap at our lower-lying doors. By which time we will have used all the fossil fuel and gone zero-carbon nuclear for electricity and synthetic renewable fuel manufacture. Problem gone. Why not get a grant to do a Fourier analysis on the temperature records and separate out the natural frequencies? What’s that? Not enough data to plot one period? Not a modellable system? Shhh, don’t let on, must be a hiatus. ;-)

      • I see ‘science’ has gone up. You just need to put one up headed ‘politics’ and you’re home and dry.

        Willard’s guest post sounds good. You just need to add one sentence:
        ‘Should those that believe in CAGW stop taking planes or pay heavy “indulgences”?’

        Hope the business is rewarding

        tonyb

      • ‘Should those that believe in CAGW stop taking planes or pay heavy “indulgences”?’

        No.

        Where does such a scheme end, tonyb? I answer my own rhetorical question: with those who believe in ‘CAGW’ living in caves, holding their breath.

        Perhaps you can do something cliscep Denizens didn’t: explain how my lobbying to raise my own taxes constitutes hypocrisy.

      • > You just need to add one sentence […]

        Keep your appeals to hypocrisy to your own posts, TonyB.

        Or add them in the comments, and wait how I’ll respond.

        Thank you for your concerns.

      • Looking on the light side, it may really only have been a little more than two seconds… On the other side of his display screen, every day is a thousand years in his model Three Point One.
        Windows 10, anyone?

      • My error.

        I should have typed: …’ been a little more than half a second…’

        Not even a full twinkling of the eye either.

  81. Dan Hughes:

    Thanks again for the thread.

    Several things seem clear: that GCMs are largely computational fluid dynamics, and that many people do not distinguish physics from engineering.

    I note that engineering strives for answers, largely using semi-empirical physics, not for defining questions, so the GCMs seem more about engineering than physics.

    Engineering is well known for its failures to extrapolate into the future. See “100 year flood” or “Verrazano Narrows Bridge”. Civil engineers happily designed inadequate drainage systems for decades until set right by Mandelbrot. http://nvlpubs.nist.gov/nistpubs/jres/09/jresv99n4p377_A1b.pdf
    It seems likely that parallel errors in statistics lurk in today’s GCMs.

    Participants in this thread might also enjoy your comments, and others’, in
    https://climateaudit.org/2016/02/27/gerry-browning-in-memory-of-professor-heinz-kreiss/ One of the comments there is: “Craig Loehle: The limitations – – mean that the GCMs are not ‘just physics’ as is often claimed in blog discussions–”

    Are you aware of any efforts to extend lattice-Boltzmann (LB) methods to weather systems? At least LB starts with several conservation laws implicit in its formulation. (A toy illustration of the method follows.)
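    For readers unfamiliar with the method, here is a minimal sketch of what a lattice-Boltzmann update looks like (a toy D1Q3 diffusion example of mine, nothing to do with any weather code): particle populations stream along lattice links and relax toward a local equilibrium, and mass conservation is built into the update rather than enforced afterward.

```python
# Minimal D1Q3 lattice-Boltzmann diffusion sketch (a toy, not a
# weather code). Populations stream along lattice velocities
# {-1, 0, +1} and relax toward a local equilibrium; total mass is
# conserved exactly by construction.
N   = 100               # lattice sites, periodic boundaries
W   = [1/6, 4/6, 1/6]   # weights for velocities -1, 0, +1
TAU = 1.0               # BGK relaxation time; diffusivity = (TAU - 0.5)/3

rho = [1.0] * N
rho[N // 2] = 2.0       # a density bump to diffuse away
f = [[W[i] * r for r in rho] for i in range(3)]  # start at equilibrium

for _ in range(500):
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(N)]  # local density
    for i in range(3):                      # collision: relax toward
        for x in range(N):                  # the local equilibrium
            f[i][x] += (W[i] * rho[x] - f[i][x]) / TAU
    f[0] = f[0][1:] + f[0][:1]    # stream velocity -1 (periodic)
    f[2] = f[2][-1:] + f[2][:-1]  # stream velocity +1 (periodic)

print(f"total mass: {sum(rho):.6f}")    # stays at 101.000000
print(f"peak density: {max(rho):.4f}")  # relaxes toward 1.0
```

    The structural conservation is presumably the appeal for geophysical flows; the open questions for weather-scale use would be compressibility, stratification, and forcing terms.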