“Colorful fluid dynamics” and overconfidence in global climate models

by David Young

This post lays out in fairly complete detail some basic facts about Computational Fluid Dynamics (CFD) modeling. This technology is the core of all general circulation models of the atmosphere and oceans, and hence global climate models (GCMs).  I discuss some common misconceptions about these models, which lead to overconfidence in these simulations. This situation is related to the replication crisis in science generally, whereby much of the literature is affected by selection and positive results bias.

A full-length version of this article can be found at [ lawsofphysics1 ], including voluminous references. See also this publication [ onera ].

1        Background

Numerical simulation over the last 60 years has come to play a larger and larger role in engineering design and scientific investigations. The level of detail and physical modeling varies greatly, as do the accuracy requirements. For aerodynamic simulations, accurate drag increments between configurations have high value. In climate simulations, a widely used target variable is temperature anomaly. Both drag increments and temperature anomalies are particularly difficult to compute accurately. The reason is simple: both output quantities are several orders of magnitude smaller than the overall absolute levels of momentum for drag or energy for temperature anomalies. This means that without tremendous effort, the output quantity is smaller than the numerical truncation error. Great care can sometimes provide accurate results, but careful numerical control over all aspects of complex simulations is required.
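
To make the magnitudes concrete, here is a minimal Python sketch (the numbers are purely illustrative, not taken from any particular code or configuration): if each of two simulations carries a relative truncation error of order 10^-3 in total drag, a true increment of 0.2% between the configurations can be lost entirely.

    # Illustrative only: hypothetical magnitudes, not from any real CFD code.
    total_drag_a = 1.000     # total drag of configuration A (normalized)
    total_drag_b = 1.002     # configuration B: the true increment is 0.002 (0.2%)
    rel_error = 1e-3         # plausible relative truncation error per solution

    # Each computed drag is the true value perturbed by its own truncation error;
    # the errors need not have the same sign.
    computed_a = total_drag_a * (1 + rel_error)
    computed_b = total_drag_b * (1 - rel_error)

    print(total_drag_b - total_drag_a)   # true increment:     0.002
    print(computed_b - computed_a)       # computed increment: ~0, lost in the noise

The same arithmetic applies to temperature anomalies riding on top of large absolute energy levels.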

Contrast this with some fields of science where only general understanding is sought. In this case qualitatively interesting results can be easier to provide. This is known in the parlance of the field as “Colorful Fluid Dynamics.” While this is somewhat pejorative, these simulations do have their place. It cannot be stressed too strongly, however, that even the broad “patterns” can be quite wrong. Only after extensive validation can such simulations be trusted qualitatively, and even then only for the class of problems used in the validation. Such a validation process for one aeronautical CFD code consumed perhaps 50-100 man-years of effort in a setting where high quality data was generally available. What is all too common among non-specialists is to conflate the two usage regimes (colorful versus validated) or to assume that realistic-looking results imply quantitatively meaningful results.

The first point is that some fields of numerical simulation are very well founded on rigorous mathematical theory. Two that come to mind are electromagnetic scattering and linear structural dynamics. Electromagnetic scattering is governed by Maxwell’s equations, which are linear. The theory is well understood, and very good numerical simulations are available. Generally, it is possible to develop accurate methods that provide high quality quantitative results. Structural modeling in the linear elasticity range is also governed by well-posed elliptic partial differential equations.

2        Computational Fluid Dynamics

The Earth system with its atmosphere and oceans is much more complex than most engineering simulations, and thus the models are far more complex. However, the heart of any General Circulation Model (GCM) is a “dynamic core” that embodies the Navier-Stokes equations. Primarily, the added complexity is manifested in many subgrid models of high complexity. At some fundamental level, however, a GCM is computational fluid dynamics. In fact, GCMs were among the first efforts to solve the Navier-Stokes equations, and many initial problems, such as the removal of sound waves, were solved by the pioneers of the field. There is a positive feature of this history in that the methods and codes tend to be optimized quite well within the universe of methods and computers currently used. The downside is that there can be a very high cost to building a new code or inserting a new method into an existing code. In any such effort, even real improvements will at first appear to be inferior to the existing technology. This is a huge impediment to progress and to the penetration of more modern methods into the codes.

The best technical argument I have heard in defense of GCMs is that Rossby waves are vastly easier to model than aeronautical flows, where the pressure gradients and forcing can be a lot higher. There is some truth in this argument. The large-scale vortex evolution in the atmosphere on shorter time scales is relatively unaffected by turbulence and viscous effects, even though at finer scales the problem is ill-posed. However, there are many other at least equally important components of the earth system. An important one is tropical convection, a classical ill-posed problem because of the large-scale turbulent interfaces and shear layers. While usually neglected in aeronautical calculations, free air turbulence is in many cases very large in the atmosphere. However, it is typically neglected outside the boundary layer in GCMs. And of course there are clouds, convection and precipitation, which have a very significant effect on overall energy balance. One must also bear in mind that aeronautical vehicles are designed to be stable and to minimize the effects of ill-posedness, in that pathological nonlinear behaviors are avoided. In this sense aeronautical vehicles may actually be easier to model than the atmosphere. In any case aeronautical simulations are greatly simplified by a number of assumptions, for example that the onset flow is steady and essentially free of atmospheric turbulence. Aeronautical flows can often be assumed to be essentially isentropic outside the boundary layer.

As will be argued below, the CFD literature is affected by positive results and selection bias. In the last 20 years, there has been increasing consciousness of and documentation of the strong influence that biased work can have on the scientific literature. It is perhaps best documented in the medical literature where the scientific communities are very large and diverse. These biases must be acknowledged by the community before they can be addressed. Of course, there are strong structural problems in modern science that make this a difficult thing to achieve.

Fluid dynamics is a much more difficult problem than electromagnetic scattering or linear structures. First, many of the problems are ill-posed or nearly so. As is perhaps to be expected with nonlinear systems, there are also often multiple solutions. Even in steady RANS (Reynolds Averaged Navier-Stokes) simulations there can be sensitivity to initial conditions, numerical details, or gridding. The AIAA Drag Prediction Workshop series has shown the high levels of variability in CFD simulations even in attached, mildly transonic and subsonic flows. These problems are far more common than reported in the literature.

Another problem associated with nonlinearity in the equations is turbulence, basically defined as small scale fluctuations that have random statistical properties. There is still some debate about whether turbulence is completely represented by accurate solutions to the Navier-Stokes equations, though most experts believe that it is. But the most critical difficulty is the fact that in most real life applications the Reynolds number is high or very high. The Reynolds number represents roughly the ratio of inertial forces to viscous forces. One might think that if the viscous forcing were 4 to 7 orders of magnitude smaller than the inertial forcing (as it is, for example, in many aircraft and atmospheric simulations), it could be neglected. Nothing could be further from the truth. The inclusion of these viscous forces often results in an O(1) change in even total forces. Certainly, the effect on smaller quantities like drag is large and critical to successful simulations in most situations. Thus, most CFD simulations are inherently numerically difficult, and simplifications and approximations are required. There is a vast literature on these subjects going back to the introduction of the digital computer; John von Neumann made some of the first forays into understanding the behavior of discrete approximations.
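
For reference, the standard definition (the quoted magnitudes are typical orders, not measurements):

\[ \mathrm{Re} = \frac{\rho U L}{\mu} = \frac{U L}{\nu}, \]

where ρ is the density, U and L a characteristic velocity and length, μ the dynamic viscosity, and ν = μ/ρ the kinematic viscosity. For a transport aircraft wing, Re is of order 10^7; for large-scale atmospheric motions it is higher still, which is the “4 to 7 orders of magnitude” regime referred to above.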

The discrete problem sizes required for modeling fluid flows by resolving all the relevant scales grow as Reynolds number to the power 9/4 in the general case, assuming second order numerical discretizations. Computational effort grows at least linearly with discrete problem size multiplied by the number of time steps. Time steps must also decrease as the spatial grid is refined, both because of the stability requirements of the Courant-Friedrichs-Lewy condition and to control time discretization errors. The number of time steps grows as Reynolds number to the power 3/4, so overall computational effort grows as Reynolds number to the power 3. Thus, for almost all problems of practical interest, it is computationally impossible (and will be for the foreseeable future) to resolve all the important scales of the flow, and so one must resort to subgrid models of fluctuations not resolved by the grid. For many idealized engineering problems, turbulence is the primary effect that must be so modeled. In GCMs there are many more, such as clouds. References are given in the full paper for some other views that may not fully agree with the one presented here, in order to give people a feel for the range of opinion in the field.
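
A back-of-the-envelope Python script makes the Re^3 scaling vivid (pure arithmetic from the exponents above; the proportionality constants are set to one and are arbitrary):

    # Scale-resolving cost: points ~ Re^(9/4), time steps ~ Re^(3/4), work ~ Re^3.
    for Re in (1e4, 1e6, 1e8):
        points = Re ** 2.25        # spatial degrees of freedom (up to a constant)
        steps = Re ** 0.75         # number of time steps (up to a constant)
        work = points * steps      # total work ~ Re^3
        print(f"Re = {Re:.0e}: points ~ {points:.1e}, steps ~ {steps:.1e}, work ~ {work:.1e}")

Raising the Reynolds number by a factor of 100 multiplies the work by a factor of a million: a run that takes an hour at Re = 10^4 would take on the order of a century at Re = 10^6.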

For modeling the atmosphere, the difficulties are immense. The Reynolds numbers are high and the turbulence levels are large but highly variable. Many of the supposedly small effects must be neglected based on scientific judgment. There are also large energy flows, evaporation, precipitation, and clouds, all of which are ignored in virtually all aerodynamic simulations. Ocean models require different methods, as the oceans are essentially incompressible. This in some sense simplifies the underlying Navier-Stokes equations but adds mathematical difficulties.

2.1       The Role of Numerical Errors in CFD

Generally, the results of many steady state aeronautical CFD simulations are reproducible and reliable for thin boundary and shear layer dominated flows, provided the flow is subsonic with little separation. There are now a few codes that are capable of demonstrating grid convergence for the simpler geometries or lower Reynolds numbers. However, these simulations make many simplifying assumptions, and uncertainty is much larger for separated or transonic flows.

The contrast with climate models speaks for itself. Typical grid spacings in climate models often exceed 100 km, and their vertical grid resolution is almost certainly inadequate. Further, many of the models use spectral methods that are not fully stable, so various forms of filtering are used to remove undesirable oscillations. In addition, the many subgrid models are solved sequentially, adding another source of numerical error and making tuning problematic.
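
Simple arithmetic shows what 100 km grid spacing means (Earth’s surface area is roughly 5.1×10^8 km²; the 1 km spacing sometimes quoted as needed to begin resolving deep convection is used here only for scale):

    # Horizontal column counts at two grid spacings; vertical levels and
    # time steps multiply the cost further.
    earth_surface_km2 = 5.1e8
    for dx_km in (100.0, 1.0):
        columns = earth_surface_km2 / dx_km**2
        print(f"dx = {dx_km:5.1f} km -> ~{columns:.1e} horizontal columns")
    # ~5e4 columns at 100 km versus ~5e8 at 1 km: a 10,000-fold increase
    # in horizontal cells alone.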

2.2       The Role of Turbulence and Chaos in Fluid Mechanics

In this section I describe some well verified science from fluid mechanics that governs all Navier-Stokes simulations and must inform any non-trivial discussion of weather or climate models. One of the problems in climate science is a lack of fundamental understanding of these basic conclusions of fluid mechanics or (as perhaps the case may be for some) a reluctance to discuss the consequences of this science.

Turbulence models have advanced tremendously in the last 50 years, yet climate models do not use the latest of these models, so far as I can tell. Further, for large-scale vortical 3D flows, turbulence models are quite inadequate. Nonetheless, proper modeling of turbulence by solving auxiliary differential equations is critical to achieving reasonable accuracy.

Just to give one fundamental problem that is a showstopper at the moment: how to control numerical error in any time accurate eddy resolving simulation. Classical methods fail. How can one tune such a model? You can tune it for a given grid and initial condition, but that tuning might fail on a finer grid or with different initial conditions. This problem is just now beginning to be explored and is of critical importance for predicting climate or any other chaotic flow.

When truncation errors are significant (as they are in most practical fluid dynamics simulations particularly climate simulations), there is a constant danger of “overtuning” subgrid models, discretization parameters or the hundreds of other parameters. The problem here is that tuning a simulation for a few particular cases too accurately is really just getting large errors to cancel for these cases. Thus skill will actually be worse for cases outside the tuning set. In climate models the truncation errors are particularly large and computation costs too high to permit systematic study of the size of the various errors. Thus tuning is problematic.
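
A toy Python sketch of this cancellation effect (entirely hypothetical error structure, chosen only to illustrate the mechanism): one error source grows with the input, a tunable parameter offsets it at the tuning condition, and the apparent skill evaporates away from that condition.

    import math

    def truth(x):
        return math.sin(x)          # stand-in for the true response

    def model(x, c):
        # Two hypothetical error sources with different structure: a bias that
        # grows with x (think truncation error) and a constant tunable offset c
        # (think subgrid parameter).
        return math.sin(x) + 0.3 * x - c

    x_tune = 1.0
    c = 0.3 * x_tune                          # tuned so the errors cancel at x_tune
    print(model(x_tune, c) - truth(x_tune))   # 0.0 -> apparent skill at tuning point
    print(model(3.0, c) - truth(3.0))         # 0.6 -> cancellation fails off-design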

2.3       Time Accurate Calculations – A Panacea?

All turbulent flows are time dependent and there is no true steady state. However, using Reynolds averaging, one can separate the flow field into a steady component and a hopefully small component consisting of the unsteady fluctuations. The unsteady component can then be modeled in various ways. The larger the truly unsteady component is, the more challenging the modeling problem becomes.
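
In symbols, the Reynolds decomposition writes each field as a mean plus a fluctuation; averaging the momentum equation then leaves an unclosed term (standard material for incompressible flow, stated here for reference):

\[ u_i = \bar{u}_i + u_i', \qquad \overline{u_i'} = 0, \]

\[ \frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho} \frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j} \left( \nu \frac{\partial \bar{u}_i}{\partial x_j} - \overline{u_i' u_j'} \right). \]

The Reynolds stresses \( -\rho\, \overline{u_i' u_j'} \) are exactly what the turbulence model must supply; the larger the truly unsteady component, the more the answer hinges on that model.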

One might be tempted to always treat the problem as time dependent. This has several challenges, however. In the steady state case, at least in principle (though not always in practice), one can use conventional numerical consistency checks: one can verify grid convergence, calculate parameter sensitivities cheaply using linearizations, and use the residual as a measure of reliability. (For the Navier-Stokes equations there is no rigorous proof that the infinite grid limit exists or is unique; in fact, there is strong evidence for multiple solutions, some corresponding to states seen in testing and others not.) All these conveniences are either inapplicable to time accurate simulations or are much more difficult to assess.

Time accurate simulations are also challenging because the numerical errors are in some sense cumulative, i.e., an error at a given time step will be propagated to all subsequent time steps. Generally, some kind of stability of the underlying continuous problem is required to achieve convergence. Likewise a stable numerical scheme is helpful.
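
The classical result for a one-step ODE method makes the role of stability explicit (standard numerical analysis, sketched here: L is a Lipschitz constant of the right-hand side, τ the worst local truncation error per unit time, T the final time, and e^n the global error at step n):

\[ \lVert e^n \rVert \;\le\; \frac{e^{LT} - 1}{L}\, \tau. \]

Errors stay controlled when the problem and scheme are stable, but the bound grows exponentially with LT, which is precisely the regime of a chaotic system integrated over long times.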

For any chaotic time accurate simulation, classical methods of numerical error control fail. Because the initial value problem is ill-posed, the adjoint diverges. This is a truly daunting problem. We know numerical errors are cumulative and can grow nonlinearly, but our usual methods are completely inapplicable.

For chaotic systems, the main argument that I have heard for time accurate simulations being meaningful is “at least there is an attractor.” The thinking is that if the attractor is sufficiently attractive, then errors in the solution will die off, or at least remain bounded, and not materially affect the time average solution or even the “climate” of the solution. The solution at any given time may be wildly inaccurate in detail, as Lorenz discovered, but the climate will (according to this argument) be correct. At least this is an argument that can be developed and eventually quantified and proven or disproven. Paul Williams has a nice example of the large effect of the time step on the climate of the Lorenz system. Evidence is emerging of a similar effect due to spatial grid resolution for time accurate Large Eddy Simulations, and a disturbing lack of grid convergence. Further, the attractor may be only slightly attractive, and there will be bifurcation points and saddle points as well. And the attractor can be of very high dimension, meaning that tracing out all its parts could be a monumental if not impossible computational task; so far, the bounds on attractor dimension are very large.

My suggestion would be to develop and fund a large long term research effort in this area with the best minds in the field of nonlinear theory. Theoretical understanding may not be adequate at the present time to address it computationally. There is some interesting work by Wang at MIT on shadowing that may eventually become computationally feasible and could address some of the stability issues for the long-term climate of the attractor. For the special case of periodic or nearly periodic flows, another approach that is more computationally tractable is windowing. This problem of time accurate simulations of chaotic systems seems to me to be a very important unsolved question in fundamental science and mathematics, and one with tremendous potential impact across many fields.
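
A minimal Python sketch in the spirit of Williams’ demonstration, using the classic Lorenz-63 system with a deliberately crude forward Euler integrator (the time steps and run length are illustrative, not taken from his paper): the long-run mean of z, the “climate” of the system, shifts with the time step even though both runs remain bounded on an attractor.

    # Lorenz-63 "climate" versus time step: illustrative only.
    def lorenz(x, y, z, s=10.0, r=28.0, b=8.0 / 3.0):
        return s * (y - x), x * (r - z) - y, x * y - b * z

    def climate_mean_z(dt, t_end=2000.0):
        x, y, z = 1.0, 1.0, 1.0
        n = int(t_end / dt)
        total, count = 0.0, 0
        for i in range(n):
            dx, dy, dz = lorenz(x, y, z)
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz   # forward Euler step
            if i > n // 5:               # discard the initial transient
                total += z
                count += 1
        return total / count

    for dt in (0.001, 0.01):
        print(dt, climate_mean_z(dt))    # the time-mean of z depends on dt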

While climate modelers Palmer and Stevens’ 2019 short perspective note (see full paper for the reference) is an excellent contribution by two unusually honest scientists, there is in my opinion reason for skepticism about their proposal to make climate models into eddy resolving simulations. Their assessment of climate models is in my view mostly correct and agrees with the thrust of this post, but there are a host of theoretical issues to be resolved before casting our lot with largely unexplored simulation methods that face serious theoretical challenges. Dramatic increases in resolution are obviously sorely needed in climate models and dramatic improvements may be possible in subgrid models once resolution is improved. Just as an example, modern PDE based models may make a significant difference. I don’t think anyone knows the outcomes of these various steps toward improvement.

3        The “Laws of Physics”

The “laws of physics” are usually thought of as conservation laws, the most important being conservation of mass, momentum, and energy. The conservation laws with appropriate source terms for fluids are the Navier-Stokes equations. These equations correctly represent the local conservation laws and offer the possibility of numerical simulations. This is expanded on in the full paper.
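
For reference, the compressible Navier-Stokes equations express these local conservation laws (standard form; body forces and heat sources are abbreviated into f and q):

\[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0, \]

\[ \frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u}) = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{f}, \]

\[ \frac{\partial (\rho E)}{\partial t} + \nabla \cdot \big( (\rho E + p)\, \mathbf{u} \big) = \nabla \cdot (\boldsymbol{\tau}\, \mathbf{u} - \mathbf{q}_h) + \rho q, \]

where τ is the viscous stress tensor and q_h the heat flux. The equations themselves are not in dispute; everything contentious enters through discretization, subgrid modeling, and the boundary and source terms.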

3.1       Initial Value Problem or Boundary Value Problem?

One often hears that “the climate of the attractor is a boundary value problem” and therefore it is predictable. This is nothing but an assertion with little to back it up. And of course, even assuming that the attractor is regular enough to be predictable, there is the separate question of whether it is computable in finite computing time. It is similar to the folk doctrine that turbulence models convert an ill-posed time dependent problem into a well-posed steady state one. This doctrine has been proven wrong, as the prevalence of multiple solutions discussed above shows. However, those who are engaged in selling CFD have found it attractive despite its unscientific and effectively unverifiable nature.

A simple analogy for the climate system might be a wing, as Nick Stokes has suggested. As pointed out above, the drag of a well-designed wing is in some ways a good analogy for the temperature anomaly of the climate system. The climate may respond linearly to changes in forcings over a narrow range, but that tells us little. To be useful, one must know the rate of response and the absolute value (the value of temperature is important, for example, for ice sheet response). These are strongly dependent on details of the dynamics of the climate system through nonlinear feedbacks.

Many use this analogy to try to transfer the credibility (not fully deserved) of CFD simulations of simple systems to climate models or other complex separated flow simulations. The implication is not correct. In any case, even simple aeronautical simulations can have very high uncertainty when used to simulate challenging flows.

3.2       Turbulence and Subgrid Models

Subgrid turbulence models have advanced tremendously over the last 50 years. The subgrid models must modify the Navier-Stokes equations if they are to have the needed effect. Turbulence models typically modify the true fluid viscosity by dramatically increasing it in certain parts of the flow, e.g., a boundary layer. The problem here is that these changes are not really based on the “laws of physics”, and certainly not on the conservation laws. The models are typically based on assumed relationships that are suggested by limited sets of test data or by simply fitting available test data. They tend to be very highly nonlinear and typically make an O(1) difference in the total forces. As one might guess, this area is one where controversy is rife. Most would characterize this as a very challenging problem, in fact one that will probably never be completely solved, so further research and controversy is a good thing.
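
The standard device is the Boussinesq eddy-viscosity hypothesis (stated here for reference): the unclosed Reynolds stresses are approximated with an eddy viscosity μ_t added to the molecular value,

\[ -\rho\, \overline{u_i' u_j'} \;\approx\; \mu_t \left( \frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i} \right) - \frac{2}{3} \rho k\, \delta_{ij}, \qquad \mu_{\mathrm{eff}} = \mu + \mu_t, \]

where k is the turbulent kinetic energy and μ_t comes from auxiliary model equations fitted to data, not from a conservation law. In a boundary layer μ_t can exceed μ by orders of magnitude, which is exactly the O(1) effect on total forces described above.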

Negative results about subgrid models have begun to appear. One recent paper shows that cloud microphysics models have parameters that are not well constrained by data. Using plausible values, ECS (equilibrium climate sensitivity) can be “engineered” over a significant range. Another interesting result shows that model results can depend strongly on the order chosen to solve the numerous subgrid models in a given cell. In fact, the subgrid models should be solved simultaneously so that any tuning is more independent of numerical details of the methods used. This is a fundamental principle of using such models and is the only way to ensure that tuning is meaningful. Indeed, many metrics for skill are poorly replicated by current generation climate models, particularly regional precipitation changes, cloud fraction as a function of latitude, Total Lower Troposphere temperature changes compared to radiosondes and satellite derived values, tropical convection aggregation and Sea Surface Temperature changes, just to name a few. This lack of skill for SST changes seems to be a reason why GCM model-derived ECS is inconsistent with observationally constrained energy balance methods.

Given the large grid spacings used in climate models, this is not surprising. Truncation errors are almost certainly larger than the changes in energy flows that are being modeled.  In this situation, skill is to be expected only on those metrics involved in tuning (either conscious or subconscious) or metrics closely associated with them. In layman’s terms, those metrics used in tuning come into alignment with the data only because of cancellation of errors.

One can make a plausible argument for why models do a reasonable job of replicating the global average surface temperature anomaly. The models are mostly tuned to match top of atmosphere radiation balance. If their ocean heat uptake is also consistent with reality (and it seems to be pretty close) and if the models conserve energy, one would expect the average temperature to be roughly right even if it is not explicitly used for tuning. However, this apparent skill does not mean that other outputs will also be skillful.
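
The skeleton of this argument is the zero-dimensional energy balance (a sketch, in the usual notation: F is the forcing, N the top-of-atmosphere imbalance, dominated by ocean heat uptake, and λ the net feedback parameter):

\[ N = F - \lambda\, \Delta T \quad \Longrightarrow \quad \Delta T = \frac{F - N}{\lambda}. \]

A model tuned to the observed N under the historical F, and which conserves energy, will land near the observed ΔT more or less automatically. The real test is λ and its regional structure, which this bookkeeping does not probe.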

This problem of inadequate tuning and unconscious bias plagues all application areas of CFD. A typical situation involves a decades long campaign of attempts to apply a customer’s favorite code to an application problem (or small class of problems). Over the course of this campaign many, many combinations of gridding and other parameters are “tried” until an acceptable result is achieved. The more challenging issue of establishing the limitations of this acceptable “accuracy” for different types of flows is often neglected because of lack of resources. Thus, the cancellation of large numerical errors is never quantified and remains hidden, waiting to emerge when a more challenging problem is attempted.

3.3       Overconfidence and Bias

As time passes, the seriousness of the bias issue in science continues to be better documented and understood. One recent example quotes one researcher as saying “Loose scientific methods are leading to a massive false positive bias in the literature.” Another study states:

“Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science.”

In less scholarly settings, these results are typically met with various forms of rationalization. Often we are told that “the fundamentals are secure” or “my field is different” or “this affects only the medical fields.” To those in the field, however, it is obvious that strong positive bias affects the Computational Fluid Dynamics literature for the reasons described above and that practitioners are often overconfident.

This overconfidence in the codes and methods suits the perceived self-interest of those applying the codes (and for a while suited the interests of the code developers and researchers), as it provides funding to continue development and application of the models to ever more challenging problems. Recently, this confluence of interests has been altered by an unforeseen consequence, namely laymen who determine funding have come to believe that CFD is a solved problem and hence have dramatically reduced the funding stream for fundamental development of new methods and also for new theoretical research. This conclusion is an easy one for outsiders to reach given the CFD literature, where positive results predominate even though we know the models are just wrong both locally and globally for large classes of flows, for example strongly separated flows. Unfortunately, this problem of bias is not limited to CFD, but I believe is common in many other fields that use CFD modeling as well.

Another rationalization used to justify confidence in models is the appeal to the “laws of physics” discussed above. These appeals, however, omit a very important source of uncertainty and seem to provide a patina of certainty covering a far more complex reality.

Another corollary of the doctrine of the “laws of physics” is the idea that “more physics” must be better. Thus, simple models that ignore some feedbacks or terms in the equations are often maligned. This doctrine also suits the interest of some in the community, i.e., those working on more complex and costly simulations. It is also a favored tactic of Colorful Fluid Dynamics to portray the ultimately accurate simulation as just around the corner if we get all the “physics” included and use a sufficiently massive parallel computer. This view is not an obvious one when critically examined. It is widely held however among both people who run and use CFD results and those who fund CFD.

3.4       Further Research

So what is the future of such simulations and GCMs? As attempts are made to use them in areas where public health and safety are at stake, estimating uncertainty will become increasingly important. Items deserving attention in my opinion are discussed in some detail in the full paper, posted here on Climate Etc. I would argue that the most important elements needing attention, both in CFD and in climate and weather modeling, are new theoretical work and insights and the development of more accurate data. The latter work is not glamorous and the former can entail career risks. These are hard problems, and in many cases a particular line of enquiry will not yield anything really new.

The dangers to be combatted include:

  • It is critical to realize that the literature is biased and that replication failures are often not published.
  • We really need to escape from the elliptic boundary value problem (well-posed) mental model that is held by so many with a passing familiarity with the issues. A variant of this mental model one encounters in the climate world is the doctrine of “converting an initial value problem to a boundary value problem.” This just confuses the issue, which is really about the attractor and its properties. The methods developed for well-posed elliptic problems have been pursued about as far as they will take us. However, this mental model can result in dramatic overconfidence in models in CFD.
  • A corollary of the “boundary value problem” misnomer is the “If I run the model right, the answer will be right” mental model. This is patently false and even dangerous; however, it gratifies egos and aids in marketing.

4        Conclusion

I have tried to lay out in summary form some of the issues with high Reynolds number fluid simulations and to highlight the problem of overconfidence as well as some avenues to try to fundamentally advance our understanding. Laymen need to be aware of the typical tactics of the dark arts of “Colorful Fluid Dynamics” and “science communication.” It is critical to realize that much of the literature is affected by selection and positive results bias. This is something that most will admit privately, but is almost never publicly discussed.

How does this bias come about? An all too common scenario is for a researcher to have developed a new code or a new feature of an old code, or to be trying to apply an existing code or method to a particular test case of interest to a customer. The first step is to find some data that is publicly available or to obtain customer supplied data. Many of the older and well documented experiments involve flows that are not tremendously challenging. One then runs the code or model (adjusting grid strategies, discretization and solver methodologies, and turbulence model parameters or methods) until the results match the data reasonably well. Then the work often stops (in many cases because of lack of funding or lack of incentives to draw more scientifically balanced conclusions) and is published. The often large number of runs with different parameters that provided less convincing results are explained away as due to “bad gridding,” “inadequate parameter tuning,” “my inexperience in running the code,” etc. The supply of witches to be burned is seemingly endless. These rationalizations are usually quite honest and sincerely believed, but biased. They are based on a cultural bias that if the model is “run right” then the results will be right, if not quantitatively, then at least qualitatively. As we saw above, those who develop the models themselves know this to be incorrect, as do those responsible for using the simulations where public safety is at stake. As a last resort one can always point to deficiencies in the data or, for the more brazen, simply claim the data is wrong since it disagrees with the simulation. The far more interesting and valuable questions about robustness and uncertainty or even structural instability in the results are often neglected. One logical conclusion to be drawn from the perspective by Palmer and Stevens calling for eddy resolving climate models is that the world of GCMs is little better. However, their paper is a hopeful sign of a desire to improve and is to be strongly commended.

This may seem a cynical view, but it is unfortunately based on practices in the pressure filled research environment that are all too common. There is tremendous pressure to produce “good” results to keep the funding stream alive, as those in the field well know. Just as reported in medically related fields, replication efforts for CFD have often been unsuccessful, but almost always go unpublished because of the lack of incentives to do so. It is sad to have to add that in some cases, senior people in the field can suppress negative results. Some way needs to be found to provide incentives for honest and objective replication efforts and publishing those findings regardless of the opinions of the authors of the method. Priorities somehow need to be realigned toward more scientifically valuable information about robustness and stability of results and addressing uncertainty.

However, I see some promising signs of progress in science. In medicine, recent work shows that reforms can have dramatic effects in improving the quality of the literature. There is a growing recognition of the replication crisis generally and the need to take action to prevent science’s reputation with the public from being irreparably damaged. As simulations move into the arena affecting public safety and health, there will be hopefully increasing scrutiny, healthy skepticism, and more honesty. Palmer and Stevens’ recent paper is an important (and difficult in the politically charged climate field) step forward on a long and difficult road to improved science.

In my opinion those who retard progress in CFD are often involved in “science communication” and “Colorful Fluid Dynamics.” They sometimes view their job as justifying political outcomes by whitewashing high levels of uncertainty and bias or making the story good click bait by exaggerating. Worse still, many act as apologists for “science” or senior researchers and tend to minimize any problems. Nothing could be more effective in producing the exact opposite of the desired outcome, viz., a cynical and disillusioned public already tired of the seemingly endless scary stories about dire consequences often based on nothing more than the pseudo-science of “science communication” of politically motivated narratives. This effect has already played out in medicine where the public and many physicians are already quite skeptical of health advice based on retrospective studies, biased reporting, or slick advertising claiming vague but huge benefits for products or procedures. Unfortunately, bad medical science continues to affect the health of millions and wastes untold billions of dollars. The mechanisms for quantifying the state of the science on any topic, and particularly estimating the often high uncertainties, are very weak. As always in human affairs, complete honesty and directness is the best long term strategy. Particularly for science, which tends to hold itself up as having high authority, the danger is in my view worth addressing urgently. This response is demanded not just by concerns about public perceptions, but also by ethical considerations and simple honesty as well as a regard for the lives and well-being of the consumers of our work who deserve the best information available.

Biosketch: David Young received a PhD in mathematics in 1979 from the University of Colorado-Boulder. After completing graduate school, Dr. Young joined the Boeing Company and has worked on a wide variety of projects involving computational physics, computer programming, and numerical analysis. His work has been focused on the application areas of aerodynamics, aeroelastics, computational fluid dynamics, airframe design, flutter, acoustics, and electromagnetics. To address these applications, he has done original theoretical work in high performance computing, linear potential flow and boundary integral equations, nonlinear potential flow, discretizations for the Navier-Stokes equations, partial differential equations and the finite element method, preconditioning methods for large linear systems, Krylov subspace methods for very large nonlinear systems, design and optimization methods, and iterative methods for highly nonlinear systems.

Moderation note: This is a technical thread, and comments will be ruthlessly moderated for relevance and civility.

214 responses to “‘Colorful fluid dynamics’ and overconfidence in global climate models”

  2. This is in the conclusions:
    There is tremendous pressure to produce “good” results to keep the funding stream alive, as those in the field well know.

    In other words, “the results must be consistent with climate alarmism” or it will not get funded.

  3. Thank you, David.

  4. About a decade or so ago, one of our suppliers started using CFD in the design of pump impellers. One of the experienced impeller designers said that the CFD acronym stood for “Colorful Fluid Deception.”

    Progress has been made over the years.

    • Progress has been made where the CFD solutions could be compared with test data.

      For climate models we have decades of forecasts of warming for the present, which can now be compared with what has happened. The coldest published forecasts for 2022, made a decade ago, are still too hot where greenhouse gas is supposed to matter. John Christy’s results are presented by many.

  5. Moderation note: This is a technical thread,

    Using CFD to get correlations with pre-determined political answers, and to hide the uncertainty in the validity of the pre-determined answers, is very technically difficult.

  6. In the case of CFD used in designing airplanes and space reentry vehicles, many wind tunnel tests are performed, and the results of both methods are compared to confirm the validity of the CFD results before CFD is extended beyond what is possible to test.

    The largest parallel computers are used to simulate the problems.

    The input data for a climate model cannot be known, so there is no way to validate the results. Just throw out all the results that do not get the pre-determined answer.

    There are too many issues with modeling the climate system to even count them all.

    David Young did describe many of the issues very well!

    • Fortunately for the flying public, wind tunnel and flight testing are an integral part of the design and certification of new aircraft. Performance guarantees are based on at least 10 sources of data, which include wind tunnel results and buildups based on past airplanes.

      Generally, flight test campaigns include edge-of-the-flight-envelope conditions. It is truly amazing how safe flying is today.

  7. Horatio S Wildman

    At the same time that we acknowledge the uncertainties inherent in the Navier-Stokes based global circulation models, we should also acknowledge our certainties. We are confident in the quantum mechanics based radiation laws. The earth floats in a near perfect vacuum, i.e., no possibility of heat conduction. When considered as a whole, the heat conservation of our planet simplifies to the balance of only two factors, the heat already present and the heat absorbed from the sun. The heat absorbed from the sun is determined by only two parameters, the luminosity of the sun received by the earth and the relative amount reflected back into space. The heat radiated from the surface of the earth also depends on only two parameters, the average emissivity of the surface and the average back radiation from the atmosphere. Two of the four parameters (the luminosity and the average emissivity) are very constant. We are very confident in the “top of the atmosphere” conservation laws. It’s not just a “plausible argument for why models do a reasonable job replicating the global average temperature anomaly.” It’s a very strong argument. We should emphasize the progress we’ve made since Svante Arrhenius’s estimate of ECS in ~1905. There is a greater risk in ignoring our certainties than from “overconfidence” in GCMs.

    • “… the heat conservation of our planet simplifies to the balance of only two factors, the heat already present and the heat absorbed from the sun.”

      That is not entirely correct. There is also residual heat in the core that is transferred to the surface by conduction (geothermal gradient) and new heat produced by radioactive decay in the mantle and crust.

      Lord Kelvin infamously made a serious error in attempting to determine the age of the Earth because he was unaware of the contribution of the heat from radioactive isotopes.

    • Perhaps. But as I point out in the piece, this radiative physics tells us little that is of real value. We really need to know the slope (sensitivity of the system) and the intercept. That is strongly dependent on very complex nonlinear feedbacks in the system.

      We have made almost no progress in the last 40 years in estimates of ECS. The range in the IPCC assessments is still almost as large as it was in the first assessment. One reason for this is that models have actually gotten worse in terms of agreement with the historical record. In the detailed paper, I give a couple of references as to why. It turns out that cloud parameterizations have inputs that are not well constrained by available data and using plausible values allows engineering ECS over a pretty wide range. Many CMIP6 models tried to make changes to their cloud models to match data and the result was unrealistically high ECS.

      As Judith has pointed out, GCMs do not replicate even the average temperature of the current climate. If memory serves, they are a couple of degrees off. This is the intercept value.

      • Indeed, re nonlinear feedbacks the system is far-from-equilibrium making ECS an oxymoron. Thus Wildman’s “certainties” tell us nothing useful. ECS is a property of models, not of the climate system.

    • You described a static energy balance problem with no energy storage and internal responses.
      When analyzing a complicated machine or vehicle, the internal springs and masses and energy storage must be considered. One car can drive down a bumpy road with a suspension that smooths the ride and another cannot even drive at that speed on that road.
      Climate stores solar energy in water and transports it to other regions, including the Polar regions, where the energy from the tropics powers evaporation and snowfall and the sequestering of ice.

      Warmest times are warmest even though the energy comes from the same sun and there is the most IR out. The extra IR out is forming ice.

      Coldest times are coldest even though the energy comes from the same sun and there is the least IR out.

      Consensus says the opposite happens, consensus says less IR out would result in only warmer.

      Colder times are colder because more ice is spread over a larger area, with cooling by reflecting (they consider that) and also cooling by the ice thawing (they have never considered that).

      Ice accumulates most when polar oceans are thawed and ice depletes most when polar oceans are frozen.

      This is why the climate has been so tightly regulated for ten thousand years: enough ice has been sequestered on Antarctica and other cold places to prevent major warming, which in turn prevents major cooling.

      History and ice core records prove this true.

    • Horatio, you say:

      “The heat absorbed from the sun is determined by only two parameters, the luminosity of the sun received by the earth and the relative amount reflected back into space.”

      This is only partially true. What also counts is where the heat is absorbed—in the atmosphere on the way down, in the ocean, in the cryosphere, in the clouds on the way down, in the clouds and atmosphere on the way up after surface reflection … and all of those make a huge difference.

      Next, you say:

      “The heat radiated from the surface of the earth also depends on only two parameters, the average emissivity of the surface and the average back radiation from the atmosphere.”

      Again, only partially true. The heat radiated is also a function of the net parasitic losses (sensible and latent heat) from the surface to the atmosphere, as well as the amount of heat advected to the poles.

      To misquote the Bard,

      “There are more climate variables in heaven and Earth, Horatio,
      Than are dreamt of in your philosophy”

      w.

      • Willis Eschenbach | December 2, 2022 at 9:59 pm | Re
        “The heat absorbed from the sun is determined by only two parameters, the luminosity of the sun received by the earth and the relative amount reflected back into space.”

        This is only partially true. What also counts is where the heat is absorbed—in the atmosphere on the way down, in the ocean, in the cryosphere, in the clouds on the way down, in the clouds and atmosphere on the way up after surface reflection … and all of those make a huge difference.”

        Completely true.

        None of the factors mentioned by Willis and expanded on in several recent articles at WUWT remove the elephant in the room.

        It is the old pound of feathers pound of lead trick.

        No matter what the composition of the absorbing layers.
        No matter how much convection appears to occur.
        No matter what spread of absorbed energy appears to occur.

        One simple fact remains.
        Water, air, feathers , lead, Clouds,trees, poles and deserts.
        Any chemical or GHG or photo electric cell.
        Energy in equals energy out.

        Makes no sense?

        Physics is physics.
        When it makes no sense but it occurs then the reasoning being used is faulty at some level.
        Either SB works as a rule, and it does.
        Or the atmosphere and earth can magically store energy in the oceans when it wishes and release it when it wishes.
        TOA exists because physics denies the capacity of mass to spontaneously store energy.

      • angech:
        “Physics is physics.
        When it makes no sense but it occurs then the reasoning being used is faulty at some level.
        Either SB works as a rule, and it does.
        Or the atmosphere and earth can magically store energy in the oceans when it wishes and release it when it wishes.
        TOA exists because physics denies the capacity of mass to spontaneously store energy.”

        SB doesn’t work both ways.

        Mass stores some energy because mass is not always capable to get rid of energy.

        https://www.cristos-vournas.com

      • Horatio S Wildman

        “There are more climate variables in heaven and Earth, Horatio,
        Than are dreamt of in your philosophy”

        Ha ha! Good one. There are indeed! Thanks.
        H

      • CV
        “Mass stores some energy because mass is not always capable to get rid of energy.”

        Mass is mass and energy is energy.
        Mass cannot store energy and energy cannot be stored in mass.
        Mass can be motionless and energy moves at the speed of light
        Mass can react to the presence of energy and Energy can react to mass.

      • CV
        “Mass stores some energy because mass is not always capable to get rid of energy.”

        angech
        “Mass can react to the presence of energy and Energy can react to mass.”

        CV
        “Mass stores some energy because mass is not always capable to get rid of energy.”

        angech
        “Mass is mass and energy is energy.
        Mass cannot store energy and energy cannot be stored in mass.”
        “Mass can react to the presence of energy and Energy can react to mass.”

        angech, while energy reacts with mass, there always is some time-delay for the mass to complete reacting with the energy and to get rid of that energy. During that time-delay some of the energy has not been rid of yet, and that is what I call:
        “Mass stores some energy because mass is not always capable to get rid of energy.”

        https://www.cristos-vournas.com

      • CV
        “angech, while energy reacts with mass, there always is some time-delay for the mass to complete reacting with the energy and to get rid of that energy.”

        A new concept.
        Time delay.
        Time works in mysterious ways but it does not stop hence cannot delay.

        Energy moves at the speed of light .
        Mass moves not at all (in and of itself that is)
        There is no room for delay.

      • angech

        “Energy moves at the speed of light .
        Mass moves not at all (in and of itself that is)
        There is no room for delay.”

        Radiative energy moves in vacuum fast, it moves at the speed of light in vacuum…

        Energy doesn’t move so fast in mass.

        https://www.cristos-vournas.com

    • “As always in human affairs, complete honesty and directness is the best long term strategy.” -David Young
      (So true yet so disregarded.)

      “When considered as a whole, the heat conservation of our planet simplifies to the balance… ”

      There is a huge difference in global mean average temperature between a planet (A) that transports its equatorial heat effectively toward the poles and its surface heat to the top of the atmosphere, and a planet (B) that doesn’t.

      They would both be in radiative balance. Planet (A) would enjoy a large habitable zone, with mild winter nights and summer days, but suffer melting glaciers and polar ice. Planet (B) would have more polar ice but a smaller habitable zone.

  8. Climate science would be best served by throwing out Peer Reviewed Consensus Established Facts and honestly studying history and data. Many false facts are used to calculate additional data that are then declared facts. The evidence against consensus cannot even be openly discussed. The funding supporting the alarmism was Billions and is now Trillions. NASA’s website that discusses climate change only has information about greenhouse gases, nothing about Oceans and Ice.

    Ice ages were times the earth was colder because ice covered large areas of land. They claim something caused the climate to get cold and that this causes the ice. No: evaporation and snowfall are necessary to change ocean water into ice on land. It takes energy to power evaporation and snowfall, and that can only start in the warmest times, when the Arctic was deep and warm. Ice accumulated inside the Arctic and on the northern edges of the continents first, not causing significant cooling in the ice core records for thousands of years. Antarctic ice core records show maximum ice accumulation in the warmest times, and ice in the far north was accumulating during the same times. Major warming and rising sea levels removed the ice from the north during the major warm times, so Greenland records do not go back past the major warm times, though they do go back into the last major cold period. Ice extent advances after many years of accumulation near the poles, which does not cool the rest of the climate system until the ice gets deep and heavy.

    CFD climate models have never been used to show what caused major ice ages to start and end and have not been used to show what even caused the most recent Little Ice Age to start or end.

    They claim nobody knows. Nobody knows because we are not allowed to discuss and debate the issues unless we agree with consensus. We have ice core records and other data. Answers are found by examining and discussing and debating the data. Models cannot calculate what happens in the future if the data from the past cannot be reproduced, and they do not even understand how to start and have not tried.

    • Herman A (Alex) Pope:

      A good post!

      You say "CFD climate models have never been used to show what caused the major Ice Ages to start and end and have not been used to show what even caused the most recent Ice Age to start or end."

      Actually, they are not needed. The LIA was caused by a spate of VEI4 and larger volcanic eruptions, and since volcanoes have been erupting for millions of years, there is no reason to postulate any other mechanism. The real question is what triggered the volcanism for the LIA and the earlier recurrent Ice Ages.

      The Central England Instrumental Temperatures Data Set (1659 to the present) covers about a third of the Little Ice Age, and all but 14 years of the Maunder Minimum, and the Dalton Minimum.

      For that LIA period, EVERY temperature decrease was due to a known VEI4 or larger volcanic eruption somewhere around the world that spewed reflective (cooling) SO2 aerosols into the stratosphere, with 2 or 3 exceptions (probably sea-bed eruptions). There was no trace of any additional cooling due to the sunspot minimums.

      See “The Definitive Cause of Little Ice Age Temperatures”

      https://doi.org/10.30574/wjarr.2022.13.2.0170 , or on Google Scholar

  9. “Time accurate simulations are also challenging because the numerical errors are in some sense cumulative, i.e., an error at a given time step will be propagated to all subsequent time steps.”

    I’m reminded of the work by Dr. Pat Frank for the propagation of uncertainty.

    • I don’t agree with Frank’s analysis. In the detailed paper, I give some references to the correct theory, for ODEs at least, that was developed in the 1970s. Basically, some form of stability of the underlying system is required to show that errors will remain bounded. There are also rigorous adaptive time step selection methods to control numerical error. Unfortunately these are almost never used for PDE simulation software.

      • Uncertainty is not error, David. Conflating them is a common mistake.

        Growth of predictive uncertainty (not error) is unbounded in an iterative calculation.

      • Let’s work through this. Say I have a stable ODE and there is uncertainty in the initial conditions and perhaps the coefficients. I then get a bundle of trajectories in which the true solution will lie. But that bundle can grow or shrink with time depending on the eigenvalues of the system.

      • ‘Error’ is the difference between a measurement result and the value of the measurand while ‘uncertainty’ describes the reliability of the assertion that the stated measurement result represents the value of the measurand.

        citing.

        Pat, can you explain this in examples where they are different? Thanks. -R

      • David, each of your trajectories will have its own uncertainty envelope that arises (at least in part) because the uncertainties in the magnitudes of the coefficients carry through the equation into each prediction.

        Ron, an example can come from Lauer and Hamilton, 2013. In their calibration expression, cloud fraction error is the (simulated minus observed) difference in cloud fraction at each grid point over the globe.

        Global uncertainty in cloud fraction is the root-mean-square of all the cloud fraction errors over all the grid-points across the globe = the global average calibration ±uncertainty in simulated cloud fraction.

      • My point is that if the system is stable and the uncertainty in the coefficients is small, the uncertainty can remain bounded or even go to zero as the system is integrated. This I think is ordinary numerical analysis.

        In the case of the Navier-Stokes equations, coefficient uncertainty is I think not too large. In subsonic attached flow, the uncertainty in initial conditions can also be low and well quantified in either a wind tunnel or in flight, particularly in an area of high pressure when turbulence is often quite low. Generally, our best CFD methods fall within the measurement uncertainty when we do flight testing. The best methods are steady state methods, so there is no time integration. They are based on Reynolds averaging, and the test data are also time averaged. In fact, it’s pretty amazing how well we have done for attached flows. It’s taken a massive investment, though.

        Of course, these are idealized and simplified models. The atmosphere is another story.

      • David, it sounds like you’re discussing engineering models that have coefficients established using bounded calibration experiments, e.g., wing surfaces within wind tunnels. Your predictions are then of behavior during conditions within the calibration bounds. Is that correct?

        Climate models are engineering models that are used to project air temperature well beyond their calibration (tuning) bounds.

        If this in-bounds, out-of-bounds distinction is correct, then the expectations of uncertainty in your model outputs will not transfer to climate model projections.

        My approach was to take the measured calibration RMS uncertainty arising out of GCM cloud fraction simulation error, and apply that uncertainty to the iterative simulations projecting future air temperature.

        Uncertainty must increase with each iterative step because the uncertainty in simulated cloud fraction enters again with each step. Growth of projection uncertainty is unbounded.

        The magnitude of physical error remains entirely unknown because the physical behavior of future states is unknown.

      • My post here shows conclusively that climate models are only skillful because of cancellation of large errors. I’m just saying that these issues are strongly case dependent.

        The other issue that is involved is that there is an attractor. This kind of analysis does not address this fact. Sometimes, the overall averages can be skillful despite the fact that for example the phase of the oscillations is all wrong.

      • I guess the bottom line here is that we desperately need further research and that these areas are simply areas of scientific ignorance. I do believe, though, that at least in principle, with a massive effort after people admit they have been overselling the models, improvements in skill are possible. We didn’t get to current RANS methods without huge investments.

        I do wonder if it might be possible to come up with a steady state flow representing for example mean jet stream position, trade wind patterns, etc. that could be used to do something like RANS. Don’t know, but might be possible.

      • Cancellation of errors over a calibration bound does not improve predictive skill. Even if the projected outcome is close to later observations, the supposed skill is happenstantial.

        The underlying physics is wrong.

        Even when the error is small, the uncertainty remains large because the model projection is not physically unique, and does not have discrete physical meaning.

      • By the way, there is no way to know that the model attractor accurately reflects the climate attractor.

      • I really don’t want to relitigate Pat’s analysis. I did find this response persuasive. The point that is, I think, most important is that a model’s uncertainty is related to the uncertainty in the RATE of response to changes in forcing, not to the uncertainty in the current forcing.

        In aeronautical CFD, the angle of attack (the forcing) is very difficult to measure accurately. But turbulence models are tuned to match the rate of change of the response with respect to changes in the forcing. Another method is simply to match the lift or some other global force with the angle of attack as a variable. Using this method, CFD models are in better agreement. This method has proved very successful and results are really very good in attached subsonic flows.

        https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/

        This is all I think I want to say on the subject. I’d rather stick to rigorous math and papers by my colleagues that I know to be truthful and free of bias. This is I think a much firmer basis for rejecting climate model ECS. The lack of skill is actually quite well documented in the literature.

      • You’d best read the debate thread beneath Patrick Brown’s video, David. He could not sustain his position.

        He conflated an uncertainty in temperature with a physical temperature, and didn’t understand that a calibration statistic is not an energetic perturbation.

        Those mistakes are basic, do not exhaust all the mistakes he made, and typify every climate modeler I’ve encountered.

        If you don’t want to read the debate, then perhaps a summary judgment will do: https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/#comment-17372

      • What you link to is just an appeal to authority, Pat, viz., the science of calibration. You didn’t respond to any of the points made. The models are tuned to match the rate of change of the response with respect to changes in the forcing. Uncertainty in forcing is not trivial, but there are ways to correct for this, and in CFD these methods are used.

        The evidence is strong that CFD models are very skillful in subsonic attached flows dominated by thin boundary and shear layers. Your argument is contradicted by a massive amount of experimental and computational evidence.

      • I responded to every single point made, David. Patrick Brown was wrong in every single point he essayed. He made freshman-level mistakes.

        You’d know that if you’d read the debate thread. Instead, you’ve evidently concluded in ignorance.

        I posted the Drdweeb0 link to provide you an educated summary opinion, only. My argument is in the debate comments that you’ve never read.

        CMIP5 long wave cloud forcing annual average calibration uncertainty is (+/-)114 times larger than the annual forcing perturbation from CO2 emissions. If you want to believe models can resolve the effects of a perturbation 114 times smaller than their lower limit of resolution, be my guest. But you’d be wrong.

        It’s pure blind faith to believe that engineering models tuned within calibration bounds can skillfully predict behavior well outside those bounds, where energetic flux is unknown.

        And offsetting errors within calibration bounds do not correct physical theory and do not relieve uncertainty in out-of-bounds projected future states.
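
        For concreteness, the arithmetic behind the ratio quoted above, taking as given the figures usually cited in this analysis (a ±4 W/m² CMIP5 long wave cloud forcing calibration error against a ≈0.035 W/m² annual increment in CO2 forcing; both inputs are restated assumptions, not derived here):

        lwcf_calibration_error = 4.0    # W/m^2, annual average (+/-), assumed figure
        annual_co2_forcing = 0.035      # W/m^2 per year, assumed figure
        print(lwcf_calibration_error / annual_co2_forcing)   # ~114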

      • I’m not going to go down the ad hominem and argument from authority road, Pat. My point is that the rate of change with respect to changes in forcing is what is needed. Uncertainty in forcing does not say anything about model skill. If you have a rebuttal for this point, please give it here.

        If your argument were correct, then CFD models would show no skill. You haven’t addressed either of these points. I continue to believe that your analysis is flawed as do the vast majority of other scientists. Is there a single CFD expert who agrees with you?

      • If I may say so, Pat, your comments in response to Dr. Brown are so long and repetitive that they detract from your credibility. You say at one point that:

        “Only unique results are testable against experiment or observation. If a physical model has so many internal uncertainties so as to produce a wide spray of outputs (expectation values) for the same set of inputs, that model cannot be falsified by any accurate single-valued observation. Such a model does not produce predictions in a scientific sense because even if one of its outputs corresponds to observations, a correspondence between the state of the model and physical reality cannot be inferred.”

        This is an almost fundamentalist view that is wrong by most reasonable standards. Weather models do exactly as you say, yet they are incredibly valuable and save untold lives and prevent large economic losses. CFD aeronautical models have these same issues, and yet…

      • Sorry, to continue: Aeronautical models are used to design commercial aircraft and are increasingly used in the certification process. They are of course intensely scrutinized as they should be and carefully validated.

        Commercial air travel is almost incredibly safe. I can’t remember the last time there was a fatal accident on a US commercial flight.

      • I made no ad hominem, nor argument from authority, David.

        You wrote that, “the rate of change with respect to changes in forcing is what is needed. Uncertainty in forcing does not say anything about model skill.”

        Here’s the rebuttal: the issue is model simulation error. The calibration uncertainty is derived from model error in simulated long wave cloud forcing.

        The uncertainty is not the forcing due to CO2 emissions. The uncertainty is in the simulated forcing that governs the simulated air temperature.

        Calibration uncertainty due to LWCF simulation error directly indicates a lack of climate model forecast skill.

        You have not addressed the problem of projection well outside model calibration bounds. Do your CFD models exhibit skill predicting observables or behavior far outside the model calibration bounds? Does tuning model parameters improve the underlying physical theory?

        You also have not addressed the question of how a model is able to simulate the effect of a perturbation 100 times smaller than the model lower limit of resolution. Are your CFD models able to do that? Visualize the invisible?

      • “Long and repetitive”

        I went through Patrick Brown’s video point by point and supplied the minute mark for each. A sequential criticism is not repetitive. A careful and complete response does nothing to impugn my credibility.

        Perhaps the study is tedious. So, let me ask you. Is a plus/minus uncertainty in temperature identical to a physical temperature? Patrick Brown indicated so.

        Is a plus/minus uncertainty in simulated forcing identical to a perturbation on the model? Patrick Brown indicated so.

        Do you find his views persuasive? They are common among climate modelers.

        What you call “almost fundamentalist” is a standard of science. Theories must make predictions so exact as to allow an experimental test.

        It is incorrect to equate the output of tuned engineering models to deductions from physical theory.

        Weather models are updated with new data several times a day. Their projections are probabilistic. Without constant updates, their projections rapidly become useless.

        Aeronautical models are tuned by wind tunnel and other experiments. They can then accurately model behavior within their physical calibration bounds.

        E.g.: https://smartech.gatech.edu/bitstream/handle/1853/62522/JoA_Aircraft_Performance_Model_Calibration_and_Validation_for_General_Aviation_Safety_Analysis.pdf

        Quoting: “[T]he upper and lower [calibration] limits are chosen so as to always satisfy physical constraints of the problem.”

        Skill in predicting behavior within the bounded conditions does not indicate skill in predicting behavior under conditions well outside those bounds.

      • You say: “Do your CFD models exhibit skill predicting observables or behavior far outside the model calibration bounds? Does tuning model parameters improve the underlying physical theory?”

        Yes, our IBL model does indeed make accurate predictions outside the data set used in calibration, which was a few sets of data available in 1980. It is a strip 2D theory, but with semi-analytic 3D approximations it is remarkably skillful for simulating swept wing flows, even those involving massive separation. Integral boundary layer theory seems to be remarkably good considering the level of simplification. That is due I think to the 100 years of theory and data that went into its formulation. In the 19th and 20th centuries scientists had to do a much better job with math and theory because they didn’t have the crutch of numerical simulation.

        No, parameter tuning does not improve the theoretical basis of these methods. It appears that this theory is better than most expect.

        I’m not defending climate models here. The changes in energy flows they need to predict are smaller than the numerical truncation errors. My post shows that conclusively. The cloud models are just one area of high uncertainty. I think your argument is simply not valid for CFD models and tars all model critics with something that most people think can’t be right.

        What I am saying is that your argument can’t be right because CFD models suffer from all the issues you raise (sensitivity to initial conditions, gridding, etc.) and often high uncertainty in the forcing. Yet the time averaged data agrees well with the Reynolds’ averaged simulation when we match lift instead of the uncertain alpha. How do you explain this skill given the high level of uncertainty in forcing?

        You didn’t really respond to the point that skill is claimed for the rate of change when the forcing changes. In CFD, we claim skill on increments between configurations, not skill at predicting absolute levels. There is often high uncertainty in the forcing itself. I believe your claim doesn’t address this. But perhaps I am wrong.

      • I don’t understand about the updating of the weather model values with data. Jerry Browning mentioned this too, but it seems to me to be not really possible. A weather simulation runs pretty fast for a week of simulated time. The data you would need to do the update is unavailable because it’s in the future and hasn’t been measured yet.

        How is it done then?

      • David can your aeronautical models predict the effect of a perturbation 100 times smaller than their lower limit of resolution?

        That’s the claim made for climate models.

        The physical forcing change (from CO2 emissions) is lost within the uncertainty of the simulated tropospheric thermal energy flux.

        The CMIP5 *uncertainty* in simulated tropospheric thermal energy flux is ±114 times larger than the change in CO2 forcing and is ±90 times larger in CMIP6 model simulations.

        To explicate the response you desire, the above uncertainty remains when taking differences. This is because the physical forcing change (as you say) is invisible to the model. It is lost in the much, much larger uncertainty of the simulated tropospheric thermal energy flux.

        A critical point to keep in mind is that the climate model projection error cannot be known, because the projected states are in the future. Projection reliability is available only in iteratively propagated uncertainties.

        No model, including your CFD models, can resolve an effect 100 times smaller than the model lower limit of resolution.

        Climate model air temperature projections are merely linear extrapolations of the fractional change in GHG forcing. The projected anomalies can be accurately emulated with a simple linear equation with one degree of freedom. Where is the CFD in that?
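
        A minimal sketch of the emulation claim: one coefficient times the fractional change in GHG forcing. The coefficient and forcing values below are placeholders for illustration, not numbers taken from any GCM or paper.

        import numpy as np

        F0 = 34.0                        # reference GHG forcing, W/m^2 (placeholder)
        dF = np.linspace(0.0, 4.0, 81)   # cumulative added forcing, W/m^2
        a = 12.0                         # fitted scaling constant, K (placeholder)
        dT = a * dF / F0                 # emulated anomaly: linear in dF/F0
        print(round(dT[-1], 2))          # ~1.41 K at dF = 4 W/m^2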

        Here’s a link to a discussion of weather models. The NAM (North American Mesoscale) model is updated with new data four times a day (every 6 hours): https://www.weather.gov/hnx/models.

        You didn’t respond to the questions whether you find persuasive that a plus/minus uncertainty in temperature is identical to a physical temperature, or that a plus/minus uncertainty in forcing is identical to a perturbation on the model; two of Patrick Brown’s central points.

      • Pat, You are just repeating over and over the same statements. I don’t need to defend what you claim are implications of what Dr. Brown said. You have not responded to my main points either.

        I am not defending climate models either, yet you keep repeating your arguments against them. The argument about resolution is correct and is a paraphrase of my post here.

        Basically, if your argument were correct, aeronautical CFD would not be useful. It has proven very skillful, and that is because it is tuned to predict rates of change in responses to changes in forcings. Uncertainties in forcing do not invalidate this skill in the slightest. Therefore your argument must be wrong.

        I don’t think I need to continue with a conversation where you are not engaging what I say.

      • David, if, as you admit (1), “The argument about resolution is correct…” then climate models cannot predict the impact of CO₂ emissions.

        Further, quoting (2), “Basically, if your argument was correct, aeronautical CFD would not be useful.”

        It seems your admission in 1 confutes your assertion in 2.

        The predictive skill of aeronautical models reveals nothing about the predictive skill of climate models. Particularly, as the GHG perturbation is ±100-fold below the climate model lower limit of resolution.

        I have engaged your every relevant point.

        My descriptions of Patrick Brown’s unschooled views of uncertainty are entirely accurate. Your sense of persuasion is critically bereft because you decline the study.

        I agree with you, though, that this conversation doesn’t bear the continuance.

      • David, “tuned to predict rates of change in responses to changes in forcings.”

        Only under conditions of ceteris paribus. Utterly invalid.

      • Pat, I think you may not have read my post or my comments. Facts gleaned from CFD show that current climate models are only skillful on quantities used in tuning.

        You seem to think there is a contradiction in the fact that CFD is skillful while GCM’s are not. There is no inconsistency.

        You absolutely did not respond to any point I have made!!! You are deceiving yourself I think in this regard. You repeated a bunch of highly technical jargon which I assume comes from calibration science or related fields. This is not applicable here. There is a huge literature on numerical approximations for fluid dynamics. There is not a hint of your line of reasoning in this massive body of work.

        1. It’s the rate of change that is skillful and this is what the codes are used for.

        2. Uncertainty in forcing in no way invalidates point 1.

        Please respond to these points or we are done here.

  10. There is cyclic IR out from polar regions that forms ice. There is cyclic cooling from thawing and reflecting ice. There are thermostats and control of these cycles. Sea ice thaws and turns the evaporation and snowfall and sequestering of ice on. Sea ice forms and turns the evaporation and snowfall and sequestering of ice off. The thermostat setting is the temperature that sea ice freezes and thaws. These are Dynamic Cycles and they cannot be simulated by a Static Energy balance theory that does not even acknowledge cooling by thawing ice or energy transport from the tropics to power the polar ice machines.

  11. My freezer removes energy from water and sequesters ice in the freezer bin.

    I carry the ice in my ice chest to keep the contents cold by thawing at the beach later.

    This simple principle is ignored by climate people.

  12. Clyde Spencer

    “It is also a favored tactic of Colorful Fluid Dynamics to portray the ultimately accurate simulation as just around the corner if we get all the “physics” included and use a sufficiently massive parallel computer.”

    This is not unlike the Pollyanna Promises of those involved in solving the problems of controlled thermonuclear fusion — another area of CFD.

  13. A superb guest post from a true SME.

    Having given the topic of climate model CFD considerable thought, I am of the opinion that simpler, more generally comprehensible arguments are also useful for countering AGW alarmist believers.

    As to the models themselves, there is a basic problem imposed by the computational intractability of ‘rightly sized’ small grids, resulting from the CFL constraint on numerical solutions to PDEs (per UCAR, halving the grid size increases the computational requirement 10x, one order of magnitude; the arithmetic is sketched below). So the models must be parameterized. Per written CMIP requirement, these parameters must be tuned to best hindcast 30 years, a required initial output. That unavoidably drags in the attribution problem of natural variation, which the IPCC admits exists. AR4 WG1 SPM fig 4 is the evidence; the warming from ~1925-1940 is indistinguishable from the warming ~1975-2000, yet the SPM says the earlier period is NOT attributable to CO2 (not enough rise). Natural variation did not suddenly disappear in 1975.
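
    The 10x figure is roughly reproducible with simple scaling arithmetic (illustrative only, not a measured benchmark): halving the horizontal grid spacing doubles the points in each horizontal direction (4x columns), and the CFL condition halves the stable time step (2x more steps), for about 2 x 2 x 2 = 8x, or order 10x once extra vertical resolution and overheads are counted.

    def relative_cost(refinement):
        """refinement: factor by which grid spacing is divided (2 = halved)."""
        horizontal = refinement ** 2   # points in each of x and y
        timesteps = refinement         # CFL: dt shrinks in proportion to dx
        return horizontal * timesteps

    print(relative_cost(2))   # 8   (~10x with vertical refinement/overheads)
    print(relative_cost(4))   # 64  (two successive halvings)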

    And so the models are physically demonstrably off:
    1. As John Christy has shown for years, they produce a tropical troposphere hotspot that radiosondes prove does not exist.
    2. Now over a decade of ARGO salinity readings shows they produce about half the ocean rainfall actually observed. That means the water vapor feedback is too high, which explains (1).
    3. Willis Eschenbach at WUWT recently showed that the models get the SIGN of cloud feedback wrong.
    4. As an unsurprising result, model ECS is about 2x what observational EBMs show (for CMIP6, 3.4 vs 1.7)

    And so all the past dire predictions did not materialize:
    1. Sea level rise did not accelerate.
    2. Arctic summer sea ice did not disappear.
    3. Glacier National Park still has glaciers.

    Why, in the face of both theoretical and observational abject failure, do the climate modelers persist? Simple: it’s all they have got left, and lots of money is involved.

    • Rud

      Do you believe all GCMs are inaccurate, or that the IPCC is using inaccurate ones and averaging the outputs of many models? It seems to me that there are reasonably accurate GCMs matching observed conditions, but the IPCC doesn’t like what those models show for future conditions because they are not worrisome enough.

      • Rob, late return reply. The only model in CMIP6 that does not produce a tropical troposphere hotspot is INM-CM5. They published a paper on this. That model also has the lowest ECS, 1.8. Christy also shows that model best tracks observed temperature anomalies.
        All the rest are IMO junk. And the IPCC averaging of junk is still junk.

    • Yes Rud, there are plenty of examples of lack of skill of GCM’s. RealClimate’s model vs. data page even shows this with regard to TMT. In the tropics, GCMs are way too hot.

      There is a larger point that I am making, and that is that the entire field of CFD is dysfunctional in important ways. The most important one is that they have convinced the money men that it’s a solved problem, so money to address the remaining very serious and difficult problems is not there.

      • Is it “too hot” when you consider the error bars on both the observations and the model results?

        Last I heard, admittedly several years ago, was that the error bars were too big to make a conclusion (Pierrehumbert).

      • Yes, most all the models have a higher rate of change than the high error bar of the observations. Once again, you could make an interesting contribution if you actually looked at the source material before commenting.

      • I told you, Appell, how to find that graphic at RealClimate. You are too busy rapid-commenting to bother to check.

    • Rud

      Not that the IPCC would ever care but they could gain some credibility with skeptics by simply acknowledging the obvious, that natural variability exists and should be part of the equation and consideration. If they did however, catastrophic warming would not be in the cards.

      • Natural variability does not add or subtract heat from the planet, and hence does not contribute to long-term warming.

        And obviously the IPCC recognizes natural variability such as the AMO, PDO, ENSO, the Interdecadal Pacific Oscillation, the AO, the NAO, and many more I don’t need to list, because this is enough to prove you wrong.

      • stevenreincarnated

        David, let’s see your published, peer reviewed reference showing natural variation doesn’t add or subtract heat from the planet.

    • And so all the past dire predictions did not materialize:
      1. Sea level rise did not accelerate.

      False.

      https://scholar.google.com/scholar?hl=en&as_sdt=0%2C38&q=sea+level+acceleration&btnG=&oq=acce

      • 02

        We’ve gone through this song and dance so many times, and I have proven you wrong so often, that it’s become tedious, so I am not going to even bother with the tidal gauge graphs. The satellite data is worse than Swiss cheese. You keep believing in fairy tales, and keep in mind the EPA failure where reality came in at 2.8% of their prediction.

      • joe the non climate scientist

        CKid – you make a good point on the satellite data. Last I read, the measurement error on each measurement of sea level was as much as 12 inches (not sure if that is correct, though I would be happy for someone to provide better insight on that question).

        To give a sense of the difficulty of accurate measurements: last week I wore a heart rate monitor to bed so that I could track my sleeping HR. The system was also set up to monitor my position via GPS. Even though I was stationary during the night, the GPS showed me wandering all over the house (and walking outdoors). Granted, that GPS system is (or should be) more accurate than my Garmin setup. My only point in bringing this up is the difficulty of obtaining and assuring accurate measurements.

      • Joe

        Thanks for sharing your story. I hope all is well.

        Re satellites. This is a case where common sense should tell us the challenges of getting very accurate data.

        Though, as Voltaire said “Common sense is not so common.”

    • 3. Willis Eschenbach at WUWT recently showed that the models get the SIGN of cloud feedback wrong.

      LOL!

      In what peer reviewed science journal can I read his work?

    • “Why, in the face of both theoretical and observational abject failure, do the climate modelers persist? Simple: it’s all they have got left, and lots of money is involved.”

      That is true, but also many young scientists are no longer real scientists. They have been indoctrinated by decades of non-scientific nonsense. Many are religious zealots and many are quite simply bald-faced liars. Take Michael Mann for example.

    • Appell again shows his naive faith in the science establishment. During the pandemic, peer review was shown to be a joke. Lots of fraudulent papers were peer reviewed and published. Look up Ioannidis on zombie studies. Ioannidis has another Tablet article, from August I think, pointing out many examples, and some peer reviewed papers showing how bad things have gotten.


  16. If we all stick with the idea that GCMs can never work for climate prediction, life will become easier all round. They need to start working on top-down climate models (like Marcia Wyatt’s) and abandon any idea that bottom-up can ever work.

    • Mike, see:

      “The proportionality of global warming to cumulative carbon emissions,” H. Damon Matthews et al, Nature v459, 11 June 2009, pp 829-832.
      doi:10.1038/nature08047

      • Joe - the non climate scientist

        Appell, you wrote: “Mike, see: ‘The proportionality of global warming to cumulative carbon emissions,’ H. Damon Matthews et al, Nature v459, 11 June 2009, pp 829-832. doi:10.1038/nature08047”

        Appell – read the Bell McDermott study on premature deaths due to increases in ground level ozone in 96 US cities.

        It’s a classic example of reaching erroneous conclusions based on cherry-picking which correlation is deemed the cause.

        There is also another study, of 10 French cities, showing an increase in premature deaths attributed to increases in ground level ozone during the 2003 heat wave, with all 10 cities having similar increases in premature deaths. The funny thing was that only 5 of the 10 French cities in the study had an increase in ground level ozone, yet the increase in ground level ozone was determined to be the cause.

        The point is that it is very easy to reach erroneous conclusions when you cherry-pick which correlated factor is deemed the cause.

  17. “The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in.” ~Freeman Dyson

    • America had its turn at the top, but unlike previous nation-states, Americanism was a celebration of personal freedom and individual liberty of the people like never before. The centralization and bureaucratization of power has led to a formulaic authority, all at the expense of ingenuity and the dynamism of a free people.

      • My thought is that the MJO, AO and NAO may naturally combine differently with a peaking El Niño than with a La Niña, but natural doesn’t mean predictable.

      • It’s the sun, stupid (so to speak): the mid-latitudes receive more solar radiation, the equator receiving the most, and yet the southern hemisphere is warmer than the northern hemisphere because…
        there’s more water there.

    • I actually disagree with Dyson. The models do not do a good job in the tropics for example. They do a reasonable job on Rossby waves.

    • “[m]y objections to the global warming propaganda are not so much over the technical facts, about which I do not know much, but it’s rather against the way those people behave and the kind of intolerance to criticism that a lot of them have.”

      Freeman Dyson, Yale Environment 360, June 4, 2009
      http://e360.yale.edu/features/freeman_dyson_takes_on_the_climate_establishment

  18. Is it worth mentioning backscatter (energy transfer from small to large scales)?

  19. Many years ago (30 or so) I was a fringe observer, at management level, of two CFD research groups in conflict. Decisions on their fate had to be made. That was my first awareness of the subject. I understood nothing of what they were doing and why they fought, but knew it was intense and went on for a long time. It wasn’t important enough for me to learn more, as my opinion was not needed. I may have taken more notice if I had thought that the fate of global energy and climate politics (and not just, say, impeller design) might rest on who, if anyone, was right. Would it have helped if I had understood more? I doubt it. It’s a tough field for an amateur. To think that so much hangs on it today is very disturbing.

    • Nothing much hangs on that, Tom. To portray that issue as critical is for some only a way to shine their own medal.

      Never forget that the contrarian matrix is propelled by appeals to ignorance, which they sometimes call uncertainty. Other times they call it error. In both cases it propagates as well as in a crank model in which adding more energy gets more cooling.

      • Willard, your vague assertion is, despite its extremely qualitative and totally unscientific nature, easily seen to be false. CFD simulations have assumed great importance over the last 40 years in government labs and industry, and have always played a prominent role in IPCC reports. What has in fact happened is that what used to be done in the wind tunnel or other testing is now done in CFD and spot-checked with testing.

      • That modelling is more important now than in your days when the military-industrial complex made California the tech hub it now is (a process the Company for which you worked all your life exploited the most) does not imply that “so much hangs on it,” David Young.

        Until you pay due diligence to economic models (wait until you look into DICE!) your editorial amounts to little more than the perfect illustration of the spotlight effect.

        On the plus side, you showed some LaTeX skills. That helps guesstimate the contributions within your collaborations.

      • Willard, this is also meaningless. What is dishonest about your online persona is that you know it means nothing, but you post it anyway. World class obfuscation on your part.

      • Willard is just obfuscating here with vague and largely meaningless statements. There is zero interesting content. Willard is a self-professed game player so this is a pattern for him.

      • You first Willard need to stop spewing falsehoods. There are plenty of empirical results in the papers, both mine and the others I referenced. There is a massive archive of test data available to even philosophers who have no idea what it means.

        Another falsehood is that code is unavailable. You can find how to access it at the NASA website.

      • Doubling down on a falsehood is a lie, Willy. A version of our code is available from NASA. Of course you being an internet troll would have no idea how to use it. Massive amounts of data are also available from NASA from testing done there.

      • You just can’t help yourself can you? You are speaking for deep willful ignorance. A version of our code is available from NASA. But you wouldn’t know how to use it or even read it because you are nothing but a politically motivated anonymous activist.

        Plenty of data too from NASA. They have test data from a lot of airplanes. But you wouldn’t know what it means.

      • Willard, you are an anonymous non-scientist ignoramus who doesn’t know the first thing about scientific software. I’ve written a vast amount of code. But we had a team and everyone wrote code.

  20. David,
    The T&H models I am familiar with use three equations in multiple fields: mass, energy, and momentum in vapor, liquid, and droplet fields. Do the GCMs use the same conserved quantities as the basis for the models? If not, what other quantities can possibly be used?

    • Doug, I don’t know in detail how clouds and precipitation are handled. I don’t think though that it’s “resolved” using the kind of conservation law approaches you refer to.

      • Thanks David,
        Many areas are solved via parametrizations in nuclear modelling also. I was trying to determine if there were any other methods to resolve from 1st principles other than those conserved quantities.

      • Parametrizations appear even in basic physics. Hooke’s Law and Ohm’s Law are two of them.

      • The issue here is that “parameterizations” are not “the laws of physics.” This blows a big hole in one of the primary talking points of the communicators who erroneously defend models.

      • Ohm’s law and Hooke’s law have been validated experimentally… which is the point of our discussion. Any parametric relationship is likely limited in applicable range and needs to be experimentally validated.
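
        The point can be illustrated with a toy calibration, fitting Hooke’s constant from synthetic data; all the numbers below are made up for illustration.

        import numpy as np

        # Fit the spring constant k in F = k*x from noisy "measurements" taken
        # within the elastic range. The fitted k is trustworthy only in-range;
        # extrapolating far past it (beyond the elastic limit) is exactly the
        # out-of-calibration use being debated in this thread.
        x = np.linspace(0.0, 0.05, 20)                  # extensions, m (synthetic)
        F = 200.0 * x + np.random.default_rng(0).normal(0.0, 0.05, x.size)
        k = np.polyfit(x, F, 1)[0]                      # least-squares slope, N/m
        print(round(k, 1))                              # ~200.0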

      • doug: do you honestly think climate modelers are unaware of the need to test their parametrizations???

        “Data from field experiments such as the Global Atmospheric Research Program (GARP) Atlantic Tropical Experiment (GATE, 1974), the Monsoon Experiment (MONEX, 1979), ARM (1993) and the Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE, 1993) have been used to test and improve parametrizations of clouds and convection (e.g., Emanuel and Zivkovic-Rothmann, 1999; Sud and Walker, 1999; Bony and Emanuel, 2001)….”

        https://archive.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-2-1-3.html

      • dpy: these two laws show that what you think are “laws of physics” are in fact parametrizations.

        Physics is loaded with them.

      • David Appell said: “Parametrizations appear even in basic physics. Hooke’s Law and Ohm’s Law are two of them.”

        Identify the “parameters” and the source of numerical values.

        You will discover that they are fundamental properties of materials. No material properties in any GCM are fiddled with as tuning parameters.

      • Come on David, the question is whether these parameterizations have any chance of being skillful. Algebraic models of turbulence are quite poor in their skill. Global PDE’s are vastly better. Please read the post.

      • dpy: You originally wrote:
        “Any parmatric relationship is likely limited in applicable range and needs to be experimentally validated.”

        I showed you that climate scientists do this.

        Then you backpedaled to something about turbulence.

        How is turbulence handled in climate models, dpy?
        Do you know?
        Does it need to be included, dpy?
        If so, why?

      • dpy: does turbulence add or subtract any heat from the Earth’s climate system?

      • David – thermals do help transport heat to TOA, so I would say turbulence does play a role.

      • David, You have obviously not read the post or any of the references. Any high Reynolds’ number simulation that does not account for turbulence will be either unstable or badly wrong. This is extremely settled science. You’re arguing from ignorance.

        Modelers themselves have published papers showing that parameterizations of clouds, for example, have parameters that are not well constrained by data but that make a significant difference to ECS. The reference is in the paper. Please read it before denying settled science.

      • Consistent with your SOP you conveniently ignore the important parts:

        “You will discover that they are fundamental properties of materials. No material properties in any GCM are fiddled with as tuning parameters.”

        Parameters that are used for tuning are universally associated with prior states that the materials have experienced, not fundamental properties of the materials.

    • As David said regarding model validation in aeronautical design, model validation is required in nuclear design also. The Nuclear Regulatory Commission approves the methods that are acceptable for safety related applications. Model validation via experimental results are obviously required.

  21. Yup, in areas where Computer Models really matter (flying, submarines, nuclear) there are numerous validation steps.

    In Satellite Design, where every extra pound of unneeded material adds $1000’s to the launch cost, each “batch”/“melt”/“mill run” of the metals is tested for its specific physical properties. Then all of the Computer Models (FEA) are rerun using the exact measured properties of the exact ingredients used to build that specific unit. Like measuring the exact size of every 2″ x 4″ used to build a house and updating all the drawings and calculations.

    The Climate Model approach; Model, Publish, IF Public not Scared Enough THEN Model Again, Publish…

    This whole decades-long exercise in predicting the weather 50 years in the future has been an abject failure and a colossal waste of “Other Peoples Money”.

    • Kevin, no one is trying to predict weather “50 years in the future.” Learn the difference between weather and climate.

      • Ha ha,

        To review, the Climate Models claim to predict the amount/severity of High/Low temperatures, precipitation, droughts, floods, winds, hail, lightning, snow. locusts, split ends…. etc etc 50 years from now…

        That used to be the “Weather” until some folks decided they were smart enough to model the “Climate”….

        Weather – whats actually going on outside
        Climate – what you expected to be going on outside

        Nobody can accurately “Model” the “Climate” 50 years from now…

        Been a complete waste of money, careers, intellectual effort and lost opportunities.

        For the amount of money wasted on “Climate Change” we could have given everybody on the planet indoor plumbing…

      • Kevin: the time scale for weather and its changes is about a week at most.

        No climate model is trying to predict climate changes in a week’s time.

      • KevinK wrote:
        That used to be the “Weather” until some folks decided they were smart enough to model the “Climate”….

        Really? Prove it. With actual scientific publications.

      • KevinK wrote:
        Nobody can accurately “Model” the “Climate” 50 years from now…

        In 1982 Exxon’s climate model accurately modeled the warming that would happen in 40 years, and the level of atmospheric CO2 we’d be at:

        https://debunkhouse.files.wordpress.com/2015/10/xom1.png

      • David, you say:

        “In 1982 Exxon’s climate model accurately modeled the warming that would happen in 40 years, and the level of atmospheric CO2 we’d be at.”

        I took your graphic, digitized it, and overlaid the Berkeley Earth temperature. Here’s the result.

        https://wattsupwiththat.com/wp-content/uploads/2022/12/exxon-mobil-1980-prediction.png

        w.

      • Clyde Spencer

        Appell,
        Looking at Willis’ graph, it appears that Exxon may have gotten the CO2 concentration quite close, at least for NH mid-latitudes. However, it appears that the temperature estimate is nearly 100% too high. I wouldn’t call that “accurate.” Quite the opposite. If an astrophysicist were to publish a study where their measurement was 100% higher than the commonly accepted value, they would almost certainly be reminded of what Carl Sagan was fond of saying: “Extraordinary claims require extraordinary evidence.”

      • TimTheToolMan

        David Appell writes “No climate model is trying to predict climate changes in a week’s time.”

        Climate models resolve climate change every timestep, so they try to resolve climate changes every 30 minutes.

        Don’t believe me? Well, tell me where the climate change is hiding in the model calculation then.

      • No Willis. The Exxon model shows 1.1 C of global warming in 2020, relative to 1980. That’s almost exactly where we’re at.

      • Here’s what I get for the observed warming since 1/1980 for the various datasets, using the standard calculation

        total_warming = linear_slope * time_interval

        Berk Earth (sea ice temp = air temp) 0.82 C
        Berk Earth (sea ice temp = water temp) 0.71
        HadCrut5 0.82
        NOAA 0.74
        GISS 1.10
        JMA 0.78

        Larger than whatever you calculated (you don’t say), and pretty good for a model built in 1982. (40 years ago!) Right on according to GISS.

        And I haven’t taken error bars into account. For example, the 95% confidence level for the HadCRUT5 number assuming full autocorrelation is 0.37 C, using the method of Foster & Rahmstorf.
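
        The “standard calculation” above can be sketched as follows; the anomaly series is synthetic (an assumed trend plus noise), not any of the datasets listed.

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1980, 2023) + 0.5            # annual mean times
        anoms = 0.019 * (years - 1980) + rng.normal(0.0, 0.1, years.size)  # deg C

        slope = np.polyfit(years, anoms, 1)[0]         # least-squares trend, deg C/yr
        total_warming = slope * (years[-1] - years[0]) # slope * time_interval
        print(round(total_warming, 2))                 # ~0.8 deg C over 1980-2022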

      • TimTheToolMan wrote:
        Climate models resolve climate change every timestep, so they try to resolve climate changes every 30 minutes.

        Don’t believe me? Well, tell me where the climate change is hiding in the model calculation then.

        Which models do this, specifically?
        Show documentation so I can verify your claims.
        Also include the size of the spatial grids in this model.

      • TimTheToolMan

        David Appell asks

        “Which models do this, specifically?”
        All of them. They all project step by step into the future. The climate change is determined at every step. Did you think they don’t calculate climate change, but it magically appears after, say, 30 years?

        “Show documentation so I can verify your claims.”
        What kind of documentation are you expecting to see?

        “Also include the size of the spatial grids in this model.”
        Completely irrelevant. They’re all gridded stepwise calculations and their grid size makes no difference to the fact they calculate changes at every step.

    • Geoff Sherrington

      Kevin K,
      Re your satellite manufacturing examples, there used to be a story that at Volkswagen, a near-final step before production involved engineers taking off any optional object that added weight to the car. At Rolls-Royce, a near-final step involved engineers adding optional objects that added to the comfort or luxury of the people in the car.
      The purpose of the design led to options. The purpose of aeronautical engineering is not the same as the purpose of climate modelling. I am not sure that the criteria being sought by climate modelling are adequately stated or publicised or evaluated. Who are they trying to please? Who is the customer? Geoff S

      • They’re trying to please Science, with a capital S. Among other ideas, bodies and people.

        Do you actually know any scientists, Geoff? Have you ever?

        It doesn’t seem like it.

      • Clyde Spencer

        Appell,
        Your insult was unwarranted. I would consider Geoff to be a scientist. I don’t know about you.

  22. I really don’t see much need for fluid mechanics in projecting global warming, since it’s pretty clear total warming is proportional to total carbon emissions:

    https://www.ipcc.ch/site/assets/uploads/2018/02/FigSPM-10.jpg

    Yes, there’s an uncertainty envelope, but so far the linearity is quite apparent.

    Given this, predicting sea level rise isn’t that complicated either. Sure, predicting changes in tropical storms is pretty tough. But sea level rise is the biggest danger by far. Does it really matter if global warming is 3 C or 4 C by 2100? No, not really, both are very dangerous.

    • The issue is the climate sensitivity: how much warming do you get for a given level of CO2? Also at issue is whether warming is a logarithmic (it most likely is), linear, or higher-order function of CO2; the standard simplified relation is sketched below.
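
      A hedged sketch of that standard simplified relation, using the well-known forcing expression dF = 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998), so each CO2 doubling adds about 3.7 W/m^2; the sensitivity-per-doubling values are placeholders spanning the disputed range, not settled numbers.

      import numpy as np

      def warming(C, C0=280.0, ecs_per_doubling=3.0):
          dF = 5.35 * np.log(C / C0)           # radiative forcing, W/m^2
          return ecs_per_doubling * dF / 3.7   # K, scaled by forcing per doubling

      print(round(warming(560.0), 2))                        # one doubling -> ~3.0 K
      print(round(warming(560.0, ecs_per_doubling=1.7), 2))  # low-sensitivity case -> ~1.7 K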

      So, the uncertainty envelope is really large.

      As to “both are very dangerous” – that is not clear at all, plus the lower bound is below 3.

      If you want to make a decent policy response, you need to have good estimates of the consequences. For example, the simplest choice is whether to try to prevent the warming, or adapt to it. You need data for that.

      As for the current policy responses, they are absurd. I’ll recommend reading A Planning Engineer’s excellent posts for the power grid part of that.

      • To expand on the policy response point: wind and solar are ridiculous attempts, as any serious analysis shows.

      • Look at the graph I provided. There is a linear relationship between cumulative carbon emissions and total warming.

        The proportionality constant is 1.5 deg C/trillion tons carbon emitted, +/- about 1/3rd.

        “The proportionality of global warming to cumulative carbon emissions,” H. Damon Matthews et al, Nature v459, 11 June 2009, pp 829-832.
        doi:10.1038/nature08047
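
        The quoted proportionality reduces to one multiplication; the cumulative-emissions figure below is a round illustrative number, not a measured inventory.

        tcre = 1.5       # deg C per trillion tonnes of carbon (Matthews et al. 2009)
        emitted = 0.6    # trillion tonnes of carbon, illustrative round number
        print(round(tcre * emitted, 2))   # 0.9 deg C of warming
        print((tcre / 3) * emitted)       # +/- 0.3 deg C from the stated ~1/3 uncertainty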

      • 3 C of warming is dangerous — it’s 1/2 of an Ice Age’s warming.

        The warming from the depth of the last ice maximum 23,000 yrs ago, when there was 2-3 km of ice over the northern continent from about I-84 up, to the Holocene was only 6 C.

        That warming took about 12,000 years. We are doing half of that in 250 years.

        We are replicating titanic geological forces here, but about 25 times faster.

      • The question is whether the climate in 1850 was optimal or whether a warmer world is better for ecosystems. This is where the argument from “change is bad” is inconsistent with the fact that change is constant. No one would argue that human welfare in AD1000 was better than it is today. We are vastly better off. Life in 1000 was usually short, painful, and full of suffering.

      • dpy: the “optimal climate” is one to which a species has adapted.

        Period.

        Studies of the past show that species are stressed, and sometimes go extinct, during times of rapid climate change.

        “Mass Extinctions Tied to Past Climate Changes,” Scientific American 2007.
        https://www.scientificamerican.com/article/mass-extinctions-tied-to-past-climate-changes/

      • This is hand waving David. Were species not adapted during the last ice age? Of course they were and then they adapted to the end of the ice age.

    • Well, the slope is critical in helping us respond to the warming. The real issue here is vastly bigger than “global climate change” or whatever the new name is. It’s about science, the selling of simulations, and the political narratives that are sold using flawed modeling. It’s also about the future of modeling science. To make progress, the limits of current science must be made clear. There is a funding crisis for fundamental science, and it’s due, I think, to the corrupt selling of current simulations and their skill.

      Science communicators who whitewash uncertainty and fundamental flaws in the science have created a crisis of misinformation that endangers future research and progress.

      I have a followup post for next year about what the pandemic did to science. Basically there is a vast body of evidence that it accelerated the crisis.

    • The point here is vastly bigger than the supposedly linear response of the climate system. The fact of the matter is that progress in science is endangered by the massive overselling of CFD simulations and the whitewashing of uncertainties and even structural instabilities in the models. We need to do better so we can better quantify climate change.

      • Again, detailed fluid dynamics doesn’t matter to projecting long-term global warming. It’s all a wash in the long-term.

        It’s really not important what exactly climate will do next year or by 2030.

        It’s not.

        It’s far more important what will happen to bulk quantities such as global temperature and global sea level by 2050 or 2100 or 2150 or 2200.

        Because local conditions will always differ. Does it really matter if Florida gets one major hurricane a year in 2050, or three?

        No. One is bad enough.

        Of course scientists will do their best to model short- and medium-term climate changes.

        But using CFD issues to deny, demean or delay the need for mitigation and adaptation is very, very foolish. The worst mistake mankind would ever make.

      • Once again, David, you are going against settled science. The details of the nonlinear feedbacks make a big difference in even the bulk properties of the flow in the long term. There are references in the paper. If you read it, intelligent and useful dialogue might be possible.

      • David, thank you for the post. You wrote: “We need to do better [with modeling] so we can better quantify climate change.”

        David Appell responded: “It’s really not important what exactly climate will do next year or by 2030… and using CFD issues to deny, demean or delay the need for mitigation and adaptation is very, very foolish. The worst mistake mankind would ever make.”

        It looks to me there could be some overlapping agreement among sides here.

        1) Both might stipulate that the IPCC’s computer modeling project is a hopeless and/or pointless waste of resources.

        2) Both might agree that no regrets mitigation to harden against natural disasters is a wise application of those limited resources.

        3) Both might agree that the sea level rising was a problem set in motion before, and regardless to, fossil fuel use.

        4) Both might agree that fossil fuel needs to eventually be replaced with a better technology.

        The disagreement is over how much life, liberty, and (central banking) debt should be sacrificed for future generations’ climate benefit. We both agree that we need to make responsible decisions now on behalf of our grandchildren.

      • dpy6629 wrote:
        Once again, David, you are going against settled science. The details of the nonlinear feedbacks make a big difference in even the bulk properties of the flow in the long term.

        Again, turbulence or the details of fluid dynamics neither adds nor subtracts heat from the climate system.

        The Matthews et al figure shows that very clearly:

        change in surface temperature is proportional to cumulative carbon emissions.

        The major feedbacks do not depend on the details of fluid mechanics:

        1) water vapor feedback
        2) ice-albedo feedback
        3) Planck response

        The exception is 4) cloud feedback.

        Sure, cloud changes are very difficult to predict from a dynamical model (though other methods now indicate the feedback is very probably positive).

        Expecting, and waiting, to be able to calculate every flow and eddy in the atmosphere computationally is not possible in the time frame needed to address anthropogenic climate change, nor necessary, because (again) the details of the fluid mechanics do not add or subtract any net heat in the climate system.

        Total warming is pretty easy to calculate. The exact amount of warming in Banff, Canada, or the capital of Indonesia is not, and likely never will be, in the time frame necessary for mitigation and adaptation to take place. There simply isn’t the computational power available. But that hardly means there aren’t serious risks that need to be addressed.

        As Steven Schneider wrote long ago, we will have to make decisions in the face of uncertainty. But we do that all the time.

      • Appell here is fundamentally confusing two separate issues. In my paper (which I doubt Appell has read) I say that the response of both a wing and the climate to changes in forcing is roughly linear.

        Point 2 is totally different. The RATE of change is strongly dependent on the complex nonlinear feedbacks of the system. Without knowing the RATE, Appell’s simpleminded statement about linearity is of little value. The complex feedbacks must model turbulence in some way or they will be wrong, perhaps badly wrong.

        Any “laminar” NS simulation (no turbulence model) will be unstable and give valueless results. This is settled science, BTW. That’s because the turbulence model adds significant dissipation that, as an added bonus, stabilizes the simulation. A minimal demonstration is sketched below.
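
        A minimal demonstration of that stabilizing dissipation, using the 1D Burgers equation as a stand-in for the Navier-Stokes equations; the grid, time step, and viscosity values are arbitrary choices for illustration.

        import numpy as np
        np.seterr(all='ignore')  # silence overflow warnings from the unstable run

        def run_burgers(nu, nx=256, nt=2000, dt=1e-3):
            """Advance u_t + u*u_x = nu*u_xx (periodic) with central differences
            and forward Euler; returns max |u| at the end, or inf on blow-up."""
            x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
            dx = x[1] - x[0]
            u = np.sin(x)
            for _ in range(nt):
                ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
                uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
                u = u + dt * (-u * ux + nu * uxx)
                if not np.isfinite(u).all():
                    return np.inf
            return np.abs(u).max()

        print(run_burgers(1e-5))         # "laminar" run: typically blows up (inf)
        print(run_burgers(1e-5 + 2e-2))  # with modeled eddy viscosity: stays bounded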

      • The other point is that ECS, which is critical to policy decisions, is also strongly dependent on cloud models, which are not well constrained by data, as shown in Zhou et al.

    • Clyde Spencer

      Appell,
      All your graph demonstrates is a correlation, not cause and effect. It could well be a spurious correlation such as the fallacious claim that ice cream consumption causes drownings. If you fancy yourself to be a scientist of some sort, you should be quite aware of that.

      • Clyde Spencer wrote:
        All your graph demonstrates is a correlation, not cause and effect. It could well be a spurious correlation such as the fallacious claim that ice cream consumption causes drownings. If you fancy yourself to be a scientist of some sort, you should be quite aware of that.

        LOL. Read the Matthews et al paper that shows why the relationship is expected to be linear.

        “The proportionality of global warming to cumulative carbon emissions,” H. Damon Matthews et al, Nature v459, 11 June 2009, pp 829-832.
        doi:10.1038/nature08047

  23. Geoff Sherrington

    David Young,
    Thank you for this timely essay.
    There is an aspect of accountability.
    It might be that human nature allows for better results when the modeller can be punished for a wrong or bad result and rewarded for good.
    The aircraft engineer does not want aircraft crashing because of poor work. The GCM modeller does not have such a serious constraint. GCM modellers can correctly assert that “The climate will continue to exist, whether my work is correct or not.”
    The aircraft engineer cannot plausibly say “The aircraft would still have flown whether my work was correct or not.”
    Used well, accountability can drive higher quality.
    Geoff S

  24. I’m unable to get comments emailed to me, no matter what account I use (WordPress, Twitter or Facebook). Is there some secret?

    • When I comment using my WP creds, I have “notify me of new comments via email.” Not sure why I’d want to do that.

      You can always try an RSS reader and follow the comments feed:

      judithcurry.com/comments/feed/

      Try not to hog the mic too much or try to pick a fight with every single commenter, and best of luck.

    • “In 1982 Exxon’s climate model accurately modeled the warming that would happen in 40 years, and the level of atmospheric CO2 we’d be at:”

      Holy palozza, David Appell is citing the work of renowned “Climate Deniers” employed by Exxon as “proof” of the “climate crisis”……

      Man oh man, are your arguments getting weaker by the year when you have to “rely” on Exxon scientists to “back up” the IPCC/NASA/etc.…

      All the “scientists” involved in “Climate Modelling” have failed totally and should just disappear into the woodwork.

      They have no credible or defensible “model output” that shows any “signal” of “man made climate change” above and beyond natural variability….

      • Rex Tillerson, then ExxonMobil’s CEO, publicly stated that “increasing CO₂ emissions in the atmosphere will have a warming impact.”

      • KevinK wrote:
        All the “scientists” involved in “Climate Modelling” have failed totally and should just disappear into the woodwork.

        How have they failed? Prove something for once, instead of ever more word salad.

      • KevinK wrote:
        They have no credible or defensible “model output” that shows any “signal” of “man made climate change” above and beyond natural variability….

        You’re just ignorant, Kevin.

        “We find that climate models published over the past five decades were skillful [14 of 17 projections] in predicting subsequent GMST changes, with most models examined showing warming consistent with observations, particularly when mismatches between model‐projected and observationally estimated forcings were taken into account.”

        “Evaluating the performance of past climate model projections,” Hausfather et al, Geo Res Lett 2019.
        https://doi.org/10.1029/2019GL085378

        figure:
        https://twitter.com/hausfath/status/1202271427807678464?lang=en

    • It’s kind of random, David. It happens to me and probably everyone else. Just go with the colorful flow.

  26. David Appell said:

    “I really don’t see much need for fluid mechanics in projecting global warming, since it’s pretty clear total warming is proportional to total carbon emissions:”

    “Again, detailed fluid dynamics doesn’t matter to projecting long-term global warming. It’s all a wash in the long-term.”

    Fluid mechanics is the literal core of every GCM ever built, including the original version of the original model. That is, from day zero. The fluid mechanics calculations alone are highly likely responsible for a very significant part of the total CPU/GPU time. Thousands of reports and papers have been written that focus on the fluid mechanics core alone.

    Since day zero, hundreds of millions in various currencies, highly likely running into the billions, have been spent on these very same GCMs. And the spending continues even as we type. The resources spent on hardware, and on the electricity to power it, alone highly likely exceed a billion.

    David seems to be of the opinion that every (USA) penny has been the purest of entropy-generating wastes of resources.

    Something does not add up here.

    • Dan Hughes wrote:
      Fluid mechanics is the literal core of every GCM ever built, including the original version of the original model.

      Of course, everybody knows that. But you’re lost in the trees and not seeing the forest.

      Because, again, fluid mechanics does not add or subtract heat from the climate system!

      total_warming is proportional to cumulative_carbon_emissions

      https://andthentheresphysics.files.wordpress.com/2015/03/cumulativeemissions.jpg

      From:

      “The proportionality of global warming to cumulative carbon emissions,” H. Damon Matthews et al, Nature v459, 11 June 2009, pp 829-832.
      doi:10.1038/nature08047
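
      A minimal sketch of that proportionality claim, in code (the TCRE value and cumulative-emissions figure below are assumed round numbers for illustration, not results from the paper):

        # Sketch of the "transient climate response to cumulative emissions"
        # (TCRE) relation: warming ~ TCRE * cumulative carbon emitted.
        # Both numbers below are assumed, illustrative values.
        TCRE = 1.65e-3             # deg C per GtC (~1.65 C per 1000 GtC)
        cumulative_carbon = 650.0  # GtC emitted since pre-industrial (assumed)
        print(f"implied warming: {TCRE * cumulative_carbon:.2f} C")  # ~1.07 C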

      • The Effects Of Temperature Inversion

        Due to the “unnatural” nature of temperature inversion, some unexpected and extreme conditions are created as a result of this phenomenon. Some of these results can be seen as potentially harmful and dangerous. The most important ones are:

        https://ownyourweather.com/what-is-temperature-inversion/

      • jim2 – temperature inversions are irrelevant to global warming — they don’t add or subtract energy into/out of the climate system.

      • David Young’s post addresses the trees. My comment is consistent with a post looking at the trees.

        A thread gets messed up, FUBB you might say, when the post is about the trees and some commenters constantly and repeatedly insist that the forest should be the subject. We are in the middle of a FUBB-up thread at this very moment.

        It is easy to discover who the FUBBers are. Universally, the FUBBers’ comments are off topic and frequently include personal attacks on the messenger while ignoring the message, thus self-validating the ignorance of the FUBBer.

  27. thecliffclavenoffinance

    I read at least one dozen climate science and energy articles every day of the year. I believe that qualifies me to comment on the quality of the authors. After the excellent Planning Engineer article here (the best of the roughly 400 articles I read in the month of November) comes this tedious disaster.

    This is the worst article I have read on the subject of climate science so far in 2022. Extremely tedious reading by an author who is clueless about the POLITICAL purpose of climate models.

    He foolishly believes because scientists created the models and program the computers, that the goal is accurate climate forecasts.

    That is comical.

    The climate computer games are intended to make scary climate predictions to create fear. That’s what the scientists are paid to do. And they do it well.

    The author might have noticed that after about 40 years of “refinements”, the models have consistently overpredicted the rate of global warming. And the predictions appear to be getting worse in CMIP6 models versus CMIP5 models.

    The author might have noticed that only one model is in the ballpark of observations — the Russian INM model. That model ought to get 99% of the attention, but it gets perhaps 1% of the attention, simply because accurate predictions are NOT a goal.

    Most important:
    Detailed knowledge of climate science is not available to construct a model of the climate of our planet. All we have are computer games with guessed ASSUMPTIONS THAT OBVIOUSLY DO NOT MAKE ACCURATE PREDICTIONS. Wrong predictions are not science.

    Even if scientists actually knew the effect of each climate change variable (perhaps the top ten, in detail), there still may be no way to predict the future climate. It could be as difficult as predicting whether or not it will snow one year from today in London.

    • thecliffclavenoffinance

      An additional thought or two:

      Climate change (aka CAGW) is a prediction, not reality.
      A prediction that has been wrong for 50 years in a row.
      The CAGW prediction is supported with computer games that, on average, make the same prediction. The computer games make inaccurate predictions because the CAGW prediction is inaccurate. The computer games will ALWAYS make inaccurate predictions because the CAGW belief must be defended. It is the house of cards foundation of the climate change political movement.

      Those people analyzing the climate models are comical.
      The models START WITH A CONCLUSION, based on the 1979 Charney Report, FINALLY updated by the IPCC a few years ago.

      The so-called climate models are only computer game propaganda because humans have never demonstrated the ability to predict the future climate. And if humans can’t predict the future climate, their computers can’t predict the future climate.

      Did AGW theory predict global cooling from 1940 to 1975? No.

      Did anyone notice when government bureaucrats deleted almost all that cooling (as reported in 1975) because global cooling while CO2 increased was inconvenient data?

      Did AGW theory explain why the rapid CO2 emissions growth in the past 8 years was accompanied by no global warming (UAH data)? No.

      Even if climate scientists thoroughly understood the exact effects of every climate change variable, and it was possible to predict the future climate with that knowledge, the climate models would STILL overpredict the rate of global warming. That’s their job.

      Climate change is the current name for CAGW.
      CAGW was defined in the 1979 Charney Report as an ECS of +3.0 degrees C +/- 1.5 degrees C, sometimes stated as +1.5 to +4.5 degrees C. That wild-guessed ECS lasted until a few years ago, when the IPCC decided that +2.5 to +4.0 degrees C was a better wild guess. The current wild-guess ECS MUST be defended by the climate models, even if that results in inaccurate predictions, which it does. That is climate change politics.

      The Russians apparently did not get the consensus message for their INM model: its ECS prediction had been barely within the Charney Report +1.5 to +4.5 degree C range, and is not in the new IPCC preferred ECS range. Maybe the IPCC will sanction the Russian INM model, or censor it, refuse to mention it in the future, or refuse to include it in the CMIP7 group? Those pesky Russians are just not playing the climate change scaremongering game by the rules: a climate crisis is coming, and you must defend that conclusion at all times.

      • I think this characterization is rhetorical in nature and too harsh. The models are based on math that could at some point in the future (with massive investments in further research) work pretty well as aeronautical CFD shows conclusively.

      • TimTheToolMan

        Aeronautical CFD is a fundamentally different problem from climate CFD, and the two can’t be compared.

        Modelling the lift and drag of a wing isn’t a future prediction. It’s a “now” calculation given a wing, its angle of attack, and the airflow.

        In aeronautical CFD, what is the equivalent of a forcing? In another thread you suggested angle of attack, but it’s just not. That is a different “now” calculation, independent of the first.

    • Richard, here is my simplified mathematical prediction model for Future GMT Anomaly from 1850 Pre-industrial (1850 PI):

      (GMT Anomaly @ N decades from 1850 PI) = (+0.08 C per decade) (N decades from 1850 PI)

      For the Year 2100 we obtain:

      (GMT Anomaly Year 2100) = (+0.08 C per decade) (25 decades) = +2C

      If challenged, I shall swear by this simplified GMT prediction model, with my right hand resting on a stack of FORTRAN code listings if so demanded.

      Anyway, that said, it’s unfortunate I can’t nominate myself for the Nobel Prize for this astounding insight concerning the earth’s climate system dynamics.

      But maybe someone else will eventually get around to it.

  28. Pingback: “Exxon Knew” – Newsfeed Hasslefree Allsort

  29. Here’s something that I have not yet figured out.

    Numerical weather models diverge very significantly from physical-domain reality after a few days of calculations. The divergence is attributed to the chaotic response of the core. That time-period limitation has long been a focus of research, and numerical methods and data-injection procedures have been developed and implemented to attain a somewhat longer, but still limited, valid time frame, after which divergence from the physical domain still occurs. Tim Palmer’s book, the subject of this Climate Etc. post https://judithcurry.com/2022/10/18/the-primacy-of-doubt/, is a good exposition of the subject. I find Hasselmann’s introduction to the noise-injection numerical procedure to be accessible.

    K. Hasselmann (1976), “Stochastic climate models, Part I: Theory,” Tellus 28:6, 473-485. https://doi.org/10.3402/tellusa.v28i6.11316

    Claude Frankignoul & Klaus Hasselmann (1977), “Stochastic climate models, Part II: Application to sea-surface temperature anomalies and thermocline variability,” Tellus 29:4, 289-305. https://doi.org/10.3402/tellusa.v29i4.11362

    GCMs are essentially based on the same fluid-mechanics core, the subject of David Young’s post, as the numerical weather models, but supplemented with tons o’ stuff, like the inclusion of, and ad hoc fiddling with, the parameterizations needed to do climate calculations. In the absence of the special treatments applied to numerical-weather calculations, the GCM core continues to churn out chaotic weather. I suspect that the GCM weather can be no better than un-modified numerical-weather weather, and so rapidly diverges from physical-domain reality. Any change in any aspect of the basis of the GCM calculation (the parameters, the discrete time and space step sizes, the order of calculations, the hardware, and maybe even the compiler) produces different GCM weather.

    My problem is: How can averages of chaotic GCM unphysical numerical weather, with averages of chaos themselves being also chaotic, represent any aspect of physical-domain climate?

    The focus on a global average surface temperature, GAST, aids in obscuring the chaos in GCM averages of GCM weather. The GAST is a truly global functional: the average of daily averages, of monthly averages, of yearly averages, over all of global space. The GAST maps the billions and billions of calculated numbers, said to represent the billions and billions of physical-domain phenomena and processes, to a single scalar value.

    Experience is demonstrating that the degree of fidelity of even the global-functional GAST with the physical domain is a strong function of the projections of future CO2 concentration in the atmosphere. At the present time, GCMs are not yet fit for purpose relative to the spatial and temporal ranges that are required for planning adaptations to a changing climate.

    • As I outlined in the post, the idea is that because there is an attractor, the averages could be fine despite the trajectory being very wrong. This is totally unverified and perhaps unverifiable, but it is a subject for urgent future research.

    • Circulation models attempt to predict a trend in the Northern Annular Mode, but they cannot predict any noise at long range, so they cannot predict weather. GCMs have the tail wagging the dog. Indirect solar forcing of NAM anomalies drives weather anomalies in the mid latitudes, providing the only real pathway to very long range weather prediction, and the ocean modes, ENSO and the AMO, respond inversely to the NAM regimes, providing the pathway to regional climate prediction. Then the state of the ocean modes improves the regional weather predictions. Without any need to mention the global mean temperature, which has nothing to do with the NAM anomalies anyway.

      • “Without any need to mention the global mean temperature, . . .”

        The GAST does not appear in any local-instantaneous formulation of any law of physics, does not appear in any useful averaged formulations of any laws of physics, does not appear in any discrete approximations to any of the PDEs, ODEs and algebraic equations, does not appear in any parameterizations in any GCM, and does not appear in any adaptation concepts that might be developed for any region of Earth.

        Other than these aspects, the GAST functional is a very useful response metric.

      • Dan: Global Average Surface Temperature is easily defined, just as is the average value of a function over any differentiable manifold:

        ⟨T⟩ = (1/A) ∫_S T(X) dS

        where T is the temperature at every point X on the surface S, and A is the total area of S.

        ⟨T⟩ certainly exists, and can be related to the brightness temperature of the planet by standard Stefan-Boltzmann physics.

        In practice, the integral is, of course, approximated by numerical observations taken on the surface.
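
        A minimal numerical sketch of that approximation on a regular latitude-longitude grid (the temperature field here is synthetic, purely for illustration):

          import numpy as np

          # Approximate (1/A) * integral of T dS on a lat-lon grid: each
          # cell's area is proportional to cos(latitude), so the global
          # average is the cosine-weighted mean.
          lats = np.deg2rad(np.arange(-89.0, 90.0, 2.0))  # cell centers
          lons = np.deg2rad(np.arange(0.0, 360.0, 2.0))
          LAT, LON = np.meshgrid(lats, lons, indexing="ij")

          T = 288.0 - 30.0 * np.sin(LAT) ** 2             # synthetic field (K)

          w = np.cos(LAT)                                 # relative cell areas
          gast = (T * w).sum() / w.sum()
          print(f"area-weighted global mean: {gast:.2f} K")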

      • David, where did I say the GAST is not defined? Kindly quote those words that you imply that I wrote.

  30. David Appell said: “Of course scientists will do their best to model short- and medium-term climate changes. But using CFD issues to deny, demean or delay the need for mitigation and adaption is very, very foolish. The worst mistake mankind would ever make.”

    The Biden administration hasn’t presented anything resembling a credible plan of action for reaching Net Zero in the US power generation sector by 2035, and Net Zero for the entire US economy by 2050.

    A strong commitment to such a credible plan of action would seem to be a key prerequisite for successfully convincing China and India to pursue their own equivalent plans for reaching Net Zero — if that US plan actually existed, which it does not.

    Biden and his people haven’t gone nearly as far as they could go, legally and constitutionally, in forcing a reduction in America’s consumption of fossil fuels. David Appell, I have asked you, a prominent climate activist, this question before. And I’ll ask the same question yet again.

    Why haven’t the climate activists put pressure on the Biden administration to do all that is within its legal power as the Executive Branch to move forward with a strong anti-carbon program, one that pushes the envelope in regulating and reducing America’s carbon emissions?

    • “Biden and his people haven’t gone nearly as far as they could go, legally and constitutionally, in forcing a reduction in America’s consumption of fossil fuels.”

      WOW, just WOW…..

      Where exactly does the US Constitution give any of the distinct branches of the US Gubermint the authority to “force a reduction in the consumption of ANYTHING” ????

      Lacking a law passed by Congress outlawing the use of fossil fuels, the Executive Branch of the US Gubermint has no authority to tell anyone how much “fossil fuels” they may consume.

      Any more than Washington DC can limit a person to attending only 1 NFL football game every other year. Or a directive from the Office of the President of the USA (POTUS) that all citizens are prohibited from consuming more than 12 hot dogs each month under penalty of fine, imprisonment or both….

      Yes, the US Gubermint may want to restrict the consumption of fossil fuels, but with no coherent plan for any viable replacement, all they are causing is great suffering via inflation and reduced energy supplies.

      If you are absolutely certain that the US Gubermint MUST eliminate all use of Fossil fuels immediately then please cite which text in the original Constitution or which amendments support your proposition….

      • The Clean Air Act includes a process for adding new criteria pollutants to the original list from fifty years ago. The environmental law community has long advocated using sections 108 and 110 of the act to add carbon GHGs to the list of criteria pollutants, thus enabling several powerful regulatory tools for directly controlling and suppressing America’s carbon emissions.

        The EPA’s 2009 Clean Air Act Section 202 endangerment finding for carbon was a prototype test case used for determining if a finding for carbon GHGs could be successfully published and defended. The successful defense of that finding in the courts in 2010 and 2011 laid the legal foundation for publishing a similar Section 108 finding in support of declaring carbon GHGs as criteria pollutants.

        But the Obama administration didn’t move forward with declaring carbon GHGs as criteria pollutants, choosing instead to publish the Clean Power Plan, an anti-carbon strategy which they had to have known was almost certain to fail in the courts.

        Obama’s people were clearly afraid of the political backlash which would ensue from taking direct and highly effective regulatory action to quickly reduce America’s carbon emissions. So they published a plan which gave them cover in claiming they were addressing climate change, but which they knew could not produce emission reductions nearly as quickly as climate activists were demanding.

        Fast forward to the year 2022. The Biden administration is using a number of regulatory weapons against the carbon fuels industry, with the impacts we have seen. Moreover, Biden’s economic and climate advisors have said publicly and unequivocally they will ignore any decisions handed down in the courts adverse to their climate agenda.

        However, the Biden administration has yet to move forward with declaring a climate emergency and with adding carbon GHGs to the list of Clean Air Act criteria pollutants, thus enabling an even more powerful regulatory framework against carbon, a framework which — based on the EPA’s successful defense of the 2009 Section 202 finding — is more likely than not to survive legal challenges in the courts.

        Neither has the Biden administration produced anything resembling a credible and coherent plan of action for replacing fossil fuel energy with wind and solar energy.

        A credible plan of action would include an engineering feasibility analysis for electrifying the entire nation using wind and solar with battery backup. Such a plan, if it existed, must rely on energy conservation to fill the vast gap between the amount of energy the renewables can actually produce and the amount of energy America currently consumes.

        A rough guess is that achieving the Biden administration’s Net Zero target schedule requires America to be consuming roughly half as much energy in the year 2035 as we do today in the year 2022; possibly one-third or even one-quarter of today’s energy consumption in the year 2050.

        This is the reality which climate activists refuse to acknowledge. But it is a reality they would be forced to acknowledge if they ever began putting serious pressure on the Biden administration to reduce America’s carbon emissions as quickly as they themselves believe the law allows.

  31. It would be someone operating Biden’s teleprompter, not Biden.

  32. Millennials and Zoomers grew up playing computer games. So they actually now believe that computer games ARE reality. Enter social misfit, college/knowledge dropout Mark Zuckerberg to create the anti-social “social network”…

    Now we have an entire generation of “scientists”, including many blathering on here, who truly believe that a computer model has something (indeed they believe everything) to do with science.

    It’s really very simple, however. A computer program is nothing more and nothing less than a reflection of the belief system of the person/people who wrote it. It is a “program” after all. It was “programmed” to do something. What? Whatever it was programmed to do!

    Therefore, a computer program can NEVER be used to PROVE anything. Science has nothing to do with individuals’ belief systems. It is about ACTUAL REALITY and measurements thereof.

    Of course the M’s and Z’s want you to now believe there is no actual reality, because they have been so brainwashed by their computer games (they live in Zuckerberg’s Metaverse). They want you to believe in the reality of their computer programs, actual reality be damned.

    The entire climate hoax, the IPCC hoax, and the careers of all these mentally defective (and I’m not being pejorative here, simply factually accurate) scientists who spend their time endlessly blathering about the results generated by a handful of very poorly written and known-to-be-massively-inaccurate-versus-REALITY computer programs are a sad joke and a waste of huge amounts of human capital.

    • True, true: fantastic models tuned to suggest a future that is contrary to our experience, based on mechanisms that do not exist anywhere on Earth.

      • “…for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.”

        (See, Words of wisdom from Ed Lorenz)

      • We all should know the truth that GCMs are nothing but simple-minded toys that have been ‘tuned’ through the use of parameters to mimic observations, after the fact. Everyone knows that the forecasting ability of GCMs is demonstrably deficient because they cannot even ‘predict’ the past or be validated.

      • Wagathon wrote:
        “…for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.”

        Tuned how? Be specific.

        Do you mean no climate model can use observations of the past as a guide/input?

        no?

        Can you show me any physics, basic or otherwise, any at all that doesn’t use observations as an input?

        Newton’s laws? Newton’s law of gravity? Maxwell’s equations? The equations of optics? Thermodynamics?

        Do any of these fields derive their findings from very first principles?

        If so, name that field and/or equation.

      • The climatists keep fiddling with their mathematical models, but to what end? They are creating, not resolving, statistical problems. Continually fine-tuning parameters based on real-world observations may result in a fine picture of a world gone by. That such models actually capture all the forces, and the relationships between the forces, that comprise past climate and future weather, and by extension future climate, is an illusion. GCMs are not crystal balls with built-in answers; they do not help us know the future.

      • “The scientist cannot obtain a ‘correct’ [model] by excessive elaboration” ~George Box (’76)

    • Jonathan Cohler wrote:
      Therefore, a computer program can NEVER be used to PROVE anything

      Here is the detailed physics of a well-used climate model. What did they get wrong?

      NCAR Community Atmosphere Model (CAM 3.0)
      http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf
      https://opensky.ucar.edu/islandora/object/technotes%3A477/

      Here’s another, that published their code (which, BTW, UAH does not). Point out their errors:

      NASA GISS GCM Model E: Model Description and Reference Manual
      https://www.giss.nasa.gov/tools/modelE/

      Of course you will not and cannot point to any meaningful problems with these models, which were/are made by people far, far more knowledgeable, skilled and experienced than you.

      You want to disbelieve the science and you will go to any desperate lengths to do it, even by writing tripe like you just did.

      PS: “All models are wrong, but some are useful.” (George Box)

      • Curious George

        “What did they get wrong?” They treat the latent heat of vaporization of water as a “physical constant.” In tropical seas it is 3% off.
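
        For scale: a common linear fit for the temperature dependence of the latent heat (a sketch; the coefficients are the standard textbook approximation, valid roughly 0-35 deg C) reproduces that percentage:

          # Latent heat of vaporization of water vs. temperature, using the
          # common linear approximation L(T) ~ 2.501e6 - 2361*T (T in deg C).
          def latent_heat(t_celsius):
              return 2.501e6 - 2361.0 * t_celsius  # J/kg

          L0, L30 = latent_heat(0.0), latent_heat(30.0)
          print(f"relative change, 0 C -> 30 C: {100 * (L0 - L30) / L0:.1f}%")  # ~2.8%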

    • “We find that climate models published over the past five decades were skillful [14 of 17 projections] in predicting subsequent GMST changes, with most models examined showing warming consistent with observations, particularly when mismatches between model‐projected and observationally estimated forcings were taken into account.”

      “Evaluating the performance of past climate model projections,” Hausfather et al, Geo Res Lett 2019.
      https://doi.org/10.1029/2019GL085378

      figure:
      https://twitter.com/hausfath/status/1202271427807678464?lang=en

      • I discuss this in the paper. The apparent “skill” comes from tuning the top-of-atmosphere radiative balance and from the fact that the ocean heat uptake is roughly right. There is a problem, however, in that we don’t know what the TOA balance will be in the future.

        This “skill” is the result of tuning that cancels large errors. This is why all the details appear to be wrong, most importantly the SST change patterns, which explains why the model ECS is too high.

      • You raise an interesting point.

        What if a model that appears to get the details wrong actually matches observed conditions over an extended period? Is it to be relied upon for future policy decisions?

        If a model closely matches observations for 25 years, how long should it be considered reliable for the future?

      • The verification of any model should demonstrate that the “internal details” of the model match the real world. If the model “output”, such as GMST, validates but the internal details do not, then the match is happenstance or a result of tuning for an output, IMO.

        For example, last I knew, the GCMs have two ITCZs where the real world has only one. Similarly, the real-world albedos of the NH and SH are nearly identical, even though the clear-sky albedos of the NH and SH are markedly different. The GCMs do not match this fundamentally important reality.

  33. The warming effect observed in our dynamic atmosphere near the surface is the difference between net downward radiative heating and net upward convective cooling.

    The simplest artifice by which the effect of vertical energy transports by motions can be included in a global-mean radiative transfer model is a procedure called convective adjustment. This artificial vertical redistribution of energy is intended to represent the effect of atmospheric motions on the vertical temperature profile without explicitly calculating nonradiative energy fluxes or atmospheric motions. In a global-mean 1-D model, this “adjusted” layer extends from the surface to the tropopause.

    The thermal equilibrium profile obtained with a lapse rate of 6.5 K km-1 is close to the observed global mean temperature profile. No a priori reason exists for choosing a 6.5 K km-1 adjustment lapse rate other than that it corresponds to the observed global-mean value. The maintenance of the lapse rate of the atmosphere is complex and involves many processes and scales of motion which are free to vary. Any minor change to hydrological parameters will perturb the net upward energy transport and thus the thermal equilibrium profile.
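
    A toy sketch of such an adjustment procedure (illustrative only, not code from any actual model; the pressure weighting and the simple pairwise sweep are my assumptions):

      import numpy as np

      def convective_adjustment(T, z, dp, gamma=6.5e-3, n_sweeps=200):
          # Relax a temperature profile T(z) toward a critical lapse rate
          # gamma (K/m), conserving pressure-weighted column energy between
          # each pair of adjacent layers. T in K, z in m, dp in Pa.
          T = T.copy()
          for _ in range(n_sweeps):
              for k in range(len(T) - 1):
                  dz = z[k + 1] - z[k]
                  excess = (T[k] - T[k + 1]) / dz - gamma
                  if excess > 0.0:
                      # Shift energy so the pair sits exactly on the critical
                      # lapse rate; dp-weighting keeps column energy unchanged.
                      dT = excess * dz * dp[k + 1] / (dp[k] + dp[k + 1])
                      T[k] -= dT
                      T[k + 1] += excess * dz - dT
          return T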

    • Yes, that sounds right. I do believe, though, that there is strong evidence that the moist adiabatic model for tropical convection is wrong. The temperature profile in the tropics does not agree with the data, and the warming rate is also wrong. Even RealClimate shows this.

      • At the bottom of the front page, there is a link for model vs. data comparisons. Go to the end to see the tropical TMT. There are 4 data sets, each with error bars. The vast majority of models are outside all those bars.

  34. Willis Eschenbach | December 2, 2022 at 11:25 pm |

    “Next time, tell us what is wrong with my analysis. Where it is published is totally immaterial.”

    The energy absorbed by any surface is emitted by said surface irrespective of its composition: earth, air, clouds, water, CO2. No matter the myriad ways it finds to get out.
    Your claim that where the energy gets out from can alter the total amount of energy lost is, descriptively, bunkum.

    This is an excuse to justify the concept of retained energy and TOA imbalances.

    Drop your natural reaction to criticism and use your usual good insight to see the problem as I see it for 2 minutes.
    Then give your serve.

    I am not knocking your analyses of how the energy getting back to space leads to different areas of heat loss, which is important in models.
    I object only to the unwanted implication that energy can be created (or lost) de novo, implicit in all theories neglecting a balanced TOA.

  35. I first used a CFD code in the mid 90’s, modelling potential flow around a ship’s hull, i.e. frictionless flow ignoring any viscous effects, about as simple a CFD analysis as possible. It also happened that the vessel in question existed, and I had photographs of it at service speed with its bow wave clearly visible.

    The first run of the CFD code produced a bow wave that was about twice as high as the real-world version, which had me scratching my head until I realised my model mesh was just too coarse for the curvature of the water surface at the bow adjacent to the hull. I modified the mesh (made the cells smaller) and re-ran the model. It took a lot longer to solve, but the result aligned closely with reality.

    So, the problem? Too coarse a mesh to properly model what was actually happening. CFD for earth’s environment? Gee, what could cutting a few corners on mesh size to speed up solution time do, say, to suit publishing, eh?
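
    Mike’s fix is what a formal grid-convergence study systematizes. A sketch of the standard Richardson-extrapolation check, with made-up bow-wave heights on three grids (all values hypothetical):

      import math

      # Output quantity (bow-wave height, m) on coarse, medium, fine grids
      # with refinement ratio r. Values are invented for illustration.
      f_coarse, f_medium, f_fine = 2.10, 1.30, 1.10
      r = 2.0

      # Observed order of accuracy and the grid-converged estimate.
      p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
      f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
      print(f"observed order p = {p:.2f}, converged estimate = {f_exact:.3f} m")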

    • Yes Mike, your experience is typical even in the well-posed potential-flow modeling arena.

    • Yes, and another important issue is what David alluded to … time step selection. Select the wrong time step and the model may not converge, but selecting a smaller time step also increases the model run time.

      Even the best models will produce very different answers given small changes in inputs. One method for calculating peak nuclear fuel cladding temperature following an accident seeds the model with multiple sets of inputs, producing dozens of equally valid results. The purpose is to create a high-confidence inequality, not a prediction of an exact answer.
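
      The classic constraint behind that trade-off is the CFL condition; a minimal sketch for 1-D advection (the numbers are hypothetical):

        def cfl_time_step(dx, u_max, cfl=0.9):
            # Largest stable explicit time step for advection speed u_max
            # on mesh spacing dx, with a safety factor cfl < 1. Halving dx
            # halves dt, so run time grows faster than cell count alone.
            return cfl * dx / u_max

        print(cfl_time_step(dx=0.01, u_max=50.0))  # 1.8e-4 (time units)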

  36. Pingback: Weekly Climate and Energy News Roundup #531 – Watts Up With That?

  37. Sometimes models are useful.

    After reading a paper (Van Westen, 2021), which ran competing models showing the contribution of the Amundsen region in WAIS, home of the so-called Doomsday Glacier, was either 4.4 or 5.3 mm over 101 years, I reread page 1263, Chapter 9, of IPCC6 to see if my memory was correct that the most recent observed contribution to GMSLR from Antarctica was 2.64 mm/yr. It was.

    IPCC5 showed the most recent observed contribution to GMSLR from Antarctica as 2.7 mm/yr. Which means that there was a reduction of 0.06 mm/yr in the contribution to GMSLR from Antarctica from IPCC5 to IPCC6.

    Given that the thickness of a brand new US dime is 1.35 mm, I know 0.06 mm/yr doesn’t sound like much, but every 0.06 mm/yr helps, especially if you are having nightmares about the Doomsday Glacier and you live in Denver.

    Sometimes models are useful.

  38. Climate is the average of weather, or so they say.

    Climate cannot be the average of the calculated GCM weather. GCM weather is basically the same as Numerical Weather Prediction (NWP) weather, which is acknowledged to be wrong relative to the physical domain.

    So, what’s going on here?

    Consider the continuing focus on the GAST, even though it is not a very useful response metric. The GAST can be estimated to a pretty good degree by a lumped-parameter approach applied to a theoretical steady-state condition, fiddling with a limited number of parameters, basically albedo and emissivity, in the model equations. The lump in this case is the entire planet Earth. The equations are accountings of the radiative energy balance applied to the theoretical steady state for the entire lump.
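
    A minimal sketch of that lumped-parameter accounting (the albedo and effective emissivity below are assumed round numbers, tuned exactly in the spirit described):

      # Zero-dimensional radiative balance for the whole-planet "lump":
      #   absorbed solar = emitted infrared
      #   S/4 * (1 - albedo) = emissivity * sigma * T**4
      S, sigma = 1361.0, 5.67e-8         # solar constant (W/m2), Stefan-Boltzmann
      albedo, emissivity = 0.30, 0.612   # the two tunable parameters

      T = (S * (1.0 - albedo) / (4.0 * emissivity * sigma)) ** 0.25
      print(f"lumped-model surface temperature: {T:.1f} K")  # ~288 K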

    How is this related to GCM weather?

    GCMs are based on tractable approximate models of the fundamental mass, momentum, and energy equations for the materials that constitute Earth’s climate systems, including bunches of sub-grid modeling. Among those model equations will be an accounting of radiative energy transport in a radiatively interactive medium. It’s an excellent assumption that the tractable radiative energy transport modeling does a pretty good job. In fact, we can assume that if those radiative energy transport model equations were applied to the entire lumped-up planet Earth, the simplified/modified GCM, by, say, “turning off” lots o’ stuff, would return essentially the same value for the GAST as the lump model equations do.

    GCMs, then, are in fact returning a pretty good radiative-energy accounting as perturbed by all the other things thrown into the mix. Some of those things are relatively important with respect to the chosen response metric, and some are not. Those other things are what make up GCM weather. But the GCM GAST is basically an average of radiative-energy transport modelling as affected by all the other stuff, like chaos in the fluid dynamics. It is not the average of GCM weather. GCM weather effects are perturbations to the radiative-energy balance.

    The problem is a matter of (1) accuracy and (2) temporal and spatial extent. The GAST is basically useless as far as accuracy is concerned. You can tune to get values that are as close to the physical domain as you desire, but what has been accomplished that might be useful relative to human response to climate change? In fact, it is well known that when examined at the level of actual temperature, in contrast to anomaly, GCMs are not doing a very good job. Making policy changes for human response to climate change requires accurate regional information at a useful time scale.

    The useful time scale is determined by the time requirements for developing and implementing the various human responses. Accuracy is a very, very wicked problem. If a human response to a potential impact needs to know the local-regional temperature, along with an estimate of the time period when that temperature can be expected to obtain, within 1 degree, that’s going to be kinda hard to get ahold of. The GCM results already available indicate that the stuff responsible for the perturbations in the radiative-energy balance is somewhat greater than 1 degree.

    Is it not possible that the observed chaotic response in the fluid mechanics, if that chaos is actually intrinsic to the fluid mechanics equations alone, will always prevent convergence to the attractor(s) to the degree necessary for making timely regional decisions? As wicked as climate is, theoretical investigation into the fundamental spatial-temporal chaotic response of GCMs, when considering the totality of the mathematics involved (the system of continuous equations comprised of PDEs, ODEs, algebraic parameterizations, and discontinuous algebraic switches, and then the abyss of the discrete approximations and associated numerical solution methods), will be wicked beyond everything. Highly likely impossible. The perturbations in the governing radiative-energy response, due to fluid chaos and the effects of the other models, might simply be so large as to prevent attaining calculated climate to the accuracy that’s needed for decision making.

    Given that the interactive-media aspects of the critically important radiative energy transport matter in Earth’s atmosphere (clouds and aerosols, for example), it is also questionable whether even the well-founded radiative-energy transport modelling can attain the necessary degree of fidelity with respect to the physical domain. All aspects of clouds, and some aspects of radiative interactions with aerosols, including those within clouds, are nothing more than parameterizations devoid of attempts to model from first principles.

    It is indeed a wicked, wicked problem. Is there any hope at all for moving away from the GAST and making useful progress toward the future-climate information that is actually needed? Well, first of all, it needs to be determined exactly what information is actually needed.

  39. Pingback: “Colorful fluid dynamics” and overconfidence in global climate models - Always Bet On Black

  40. Pingback: “Colorful fluid dynamics” and overconfidence in global climate models - The Crude Truth

  41. I’ve had the same problem.

  42. Convergence to the attractor, including for non-autonomous PDEs like GCMs, is a necessary first step.

    G. Drótos, T. Bódai and T. Tél (2017), “On the importance of the convergence to climate attractors,” The European Physical Journal Special Topics 226(9), 2031-2038. https://doi.org/10.1140/epjst/e2017-70045-7 Available at https://centaur.reading.ac.uk/72268/

    Maura Brunetti and Charline Ragon, “Attractors and Bifurcation Diagrams in Complex Climate Models,” arXiv:2211.01929v1 [physics.ao-ph], 3 Nov 2022. https://arxiv.org/pdf/2211.01929.pdf

    T. Tél, T. Bódai, G. Drótos, T. Haszpra, M. Herein, B. Kaszás & M. Vincze (2020), “The Theory of Parallel Climate Realizations,” J Stat Phys 179, 1496-1530. https://doi.org/10.1007/s10955-019-02445-7 https://link.springer.com/content/pdf/10.1007/s10955-019-02445-7.pdf?pdf=button
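
    A toy illustration of the idea in these papers, using the Lorenz-63 system (my sketch; the papers treat the much harder non-autonomous case): integrate an ensemble of perturbed initial conditions and track ensemble statistics rather than individual trajectories.

      import numpy as np

      def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
          # One forward-Euler step of Lorenz-63 for an ensemble x of
          # shape (n, 3). Crude integration, but fine for a sketch.
          dx = np.empty_like(x)
          dx[:, 0] = s * (x[:, 1] - x[:, 0])
          dx[:, 1] = x[:, 0] * (r - x[:, 2]) - x[:, 1]
          dx[:, 2] = x[:, 0] * x[:, 1] - b * x[:, 2]
          return x + dt * dx

      rng = np.random.default_rng(0)
      x = np.array([1.0, 1.0, 1.0]) + 1e-3 * rng.standard_normal((1000, 3))
      for n in range(5001):
          x = lorenz_step(x)
          if n % 1000 == 0:
              # Individual trajectories decorrelate quickly, but the
              # ensemble-mean z settles near the attractor value (~23.5).
              print(n, round(x[:, 2].mean(), 2))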

  43. Can we now solve partial differential equations like Navier-Stokes, or does the $1 million Maths prize still remain unclaimed?

    • The official description/specification of the problem is here:
      http://www.claymath.org/sites/default/files/navierstokes.pdf

      The sole focus is on mathematical proof of existence and uniqueness of solutions to the equations as given in the specifications. Those equations are for transient flows of incompressible fluids. The energy equation is not included.

      Generally, no prize has been offered for actual solutions. And for that question you must be very careful about the specifications of the equations. The concept of “solution to the Navier-Stokes equations” can easily be made to be kind of fuzzy.
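
      For reference, the incompressible equations as posed in that specification, in standard notation ($\nu$ the viscosity, $f$ a given forcing):

      $$\frac{\partial u_i}{\partial t} + \sum_{j=1}^{3} u_j \frac{\partial u_i}{\partial x_j} = \nu\,\Delta u_i - \frac{\partial p}{\partial x_i} + f_i(x,t), \qquad \nabla \cdot u = \sum_{i=1}^{3} \frac{\partial u_i}{\partial x_i} = 0,$$

      with smooth initial data $u(x,0) = u^0(x)$; the prize asks for proof (or disproof) of global existence of smooth, finite-energy solutions.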

      The Clay Prize situation is exactly upside down with respect to real-world, feet-in-the-mud everyday applications. We say: existence? We’ll get to that. Uniqueness? We’ll get to that. Right now I’m too busy calculating numbers.

      G. G. Stokes didn’t bother with questions about existence and uniqueness during his long and extremely productive scientific life. He, too, was busy calculating numbers.

      XXII. On the Theories of the Internal Friction of Fluids in Motion, and of the Equilibrium and Motion of Elastic Solids, Transactions of the Cambridge Philosophical Society
      https://pages.mtu.edu/~fmorriso/cm310/StokesLaw1845.pdf

      X. On the Effects of the Internal Friction of Fluids on the Motion of Pendulums,
      Transactions of the Cambridge Philosophical Society
      https://www3.nd.edu/~powers/ame.60635/stokes1851.pdf

  44. see also:
    Loehle, C. 2017. “The Epistemological Status of General Circulation Models.” Climate Dynamics 50:1719-1731.

    • This looks interesting, but it’s behind a paywall.

    • Dan Hughes sent me a copy of this paper, and it is indeed interesting, if for no other reason than its fantastic list of references, where various issues with climate models are documented. I don’t disagree with the paper, but much of it is not, in my view, based on real science or mathematics.

  45. Given:
    the extraordinary interventions that are necessary to get numerical weather projections into a reasonable representation of physical-domain weather;

    the equally extraordinary requirements on all the continuous equations, and especially the discrete approximations and numerical solutions for GCMs;

    the massive flood of calculated output;

    the enormous time and spatial scales required for GCM applications; and

    the apparently chaotic nature of the fluid mechanics and GCM weather.

    Will useful-for-purpose regional climate projections, at the specific times these might be needed, even be possible?

    Palmer gives some ideas about this in Chapter 6.

  46. We have usually referred to CFD as Colors For Directors.