by Gerald Browning

Climate model sensitivity to CO2 depends heavily on artificial parameterizations (e.g., clouds, convection) implemented in global climate models that use the wrong atmospheric dynamical system and excessive dissipation.

The peer reviewed manuscript entitled “The Unique, Well Posed Reduced System for Atmospheric Flows: Robustness In The Presence Of Small Scale Surface Irregularities” is in press at the journal Dynamics of Atmospheres and Oceans (DAO) [link], and the submitted version of the manuscript is available on this site, with some slight differences from the final published version. The link to the paper is here: Manuscript

**Abstract:** It is well known that the primitive equations (the atmospheric equations of motion under the additional assumption of hydrostatic equilibrium for large-scale motions) are ill posed when used in a limited area on the globe. Yet the equations of motion for large-scale atmospheric motions are essentially a hyperbolic system that, with appropriate boundary conditions, should lead to a well-posed system in a limited area. This apparent paradox was resolved by Kreiss through the introduction of the mathematical Bounded Derivative Theory (BDT) for any symmetric hyperbolic system with multiple time scales (as is the case for the atmospheric equations of motion). The BDT uses norm estimation techniques from the mathematical theory of symmetric hyperbolic systems to prove that if the norms of the spatial and temporal derivatives of the ensuing solution are independent of the fast time scales (thus the concept of bounded derivatives), then the subsequent solution will only evolve on the advective space and time scales (slowly evolving in time in BDT parlance) for a period of time. The requirement that the norms of the time derivatives of the ensuing solution be independent of the fast time scales leads to a number of elliptic equations that must be satisfied by the initial conditions and the ensuing solution. In the atmospheric case this results in a 2D elliptic equation for the pressure and a 3D elliptic equation for the vertical component of the velocity.
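Schematically (the notation below is a generic textbook form, not the manuscript's), the setting is a symmetric hyperbolic system in which a small parameter ε multiplies the fast operator, and the BDT initialization requirement is that the first few time derivatives of the solution be of order one:

```latex
% Generic two-time-scale symmetric hyperbolic system (schematic only;
% epsilon is the ratio of advective to fast wave speeds):
\frac{\partial u}{\partial t} + A(u)\,\nabla u
    + \frac{1}{\varepsilon}\,B\,\nabla u = F ,
\qquad 0 < \varepsilon \ll 1 .

% BDT requirement: a number of time derivatives of the solution are O(1),
% i.e. independent of the fast scale 1/\varepsilon, at the initial time:
\left\| \frac{\partial^{p} u}{\partial t^{p}}(\cdot,0) \right\| = O(1),
\qquad p = 1, \dots, q .
```

Imposing these conditions is what produces the elliptic constraints on the initial data mentioned above (in the atmospheric case, a 2D equation for pressure and a 3D equation for the vertical velocity).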

Utilizing those constraints together with an equation for the slowly evolving in time vertical component of vorticity leads to a single-time-scale (reduced) system that accurately describes the slowly evolving in time solution of the atmospheric equations and is automatically well posed for a limited-area domain. The 3D elliptic equation for the vertical component of velocity is not sensitive to small-scale perturbations at the lower boundary, so the equation can be used all the way to the surface in the reduced system, eliminating the discontinuity between the equations for the boundary layer and the troposphere and the problem of unrealistic growth in the horizontal velocity near the surface in the hydrostatic system.

The mathematical arguments are based on the Bounded Derivative Theory (BDT) for symmetric hyperbolic systems introduced by Professor Heinz-Otto Kreiss over four decades ago and on the theory of numerical approximations of partial differential equations.

What is the relevance of this research for climate modeling? At a minimum, climate modelers must make the following assumptions:

1. The numerical climate model must accurately approximate the correct dynamical system of equations.

Currently all global climate (and weather) numerical models are numerically approximating the primitive equations — the atmospheric equations of motion modified by the hydrostatic assumption. However, this is not the system of equations that satisfies the mathematical estimates required by the BDT for the initial data and subsequent solution in order to evolve as the large-scale motions in the atmosphere do. The correct dynamical system is introduced in the new manuscript, which goes into detail as to why the primitive equations are not the correct system.

Because the primitive equations use discontinuous columnar forcing (parameterizations), excessive energy is injected into the smallest scales of the model. This necessitates the use of unrealistically large dissipation to keep the model from blowing up. That means the fluid is behaving more like molasses than air. References are included in the new manuscript that show that this substantially reduces the accuracy of the numerical approximation.
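The effect of heavy dissipation on small scales can be made concrete with the textbook decay rate of a damped Fourier mode: under a diffusion term, the amplitude of wavenumber k decays like exp(−νk²t). A minimal sketch (the viscosity values and wavenumber are arbitrary illustrations of mine, not numbers from any actual model):

```python
import math

def damping_factor(nu, k, t):
    """Amplitude retained by Fourier wavenumber k after time t under the
    heat-equation decay u_t = -nu * k**2 * u (diffusion in spectral form)."""
    return math.exp(-nu * k * k * t)

k = 50                                       # a small spatial scale (high wavenumber)
modest = damping_factor(1e-5, k, t=1.0)      # mild dissipation: mode survives
excessive = damping_factor(1e-2, k, t=1.0)   # heavy dissipation: mode erased
```

With the mild value the mode retains roughly 98% of its amplitude; with the heavy value it is damped by more than ten orders of magnitude. That is the sense in which a model stabilized with excessive dissipation cannot carry a realistic small-scale spectrum and behaves "more like molasses than air."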

2. The numerical climate model correctly approximates the transfer of energy between scales as in the actual atmosphere.

Because the dissipation in climate models is so large, the parameterizations must be tuned in order to try to artificially replicate the atmospheric spectrum. Mathematical theory based on the turbulence equations has shown that the use of the wrong amount or type of dissipation leads to the wrong solution. In the climate model case, this implies that no conclusions can be drawn about climate sensitivity because the numerical solution is not behaving as the real atmosphere.

3. The forcing terms (parameterizations) accurately approximate the corresponding processes in the atmosphere, and there is no accumulation of error over hundreds of years of simulation.

It is well known that there are serious errors in the parameterizations, especially with respect to clouds and moisture, which are crucial to the simulation of the real atmosphere. Pat Frank has addressed the accumulation of error in the climate models. In the new manuscript, even a small error in the system impacts the accuracy of the solution in a short period of time.

One might ask how climate models can apparently reproduce the large-scale motions of the atmosphere in the past given these issues. I have posted a simple example on Climate Audit (reproducible on request) that shows that given any time-dependent system (even if it is not the correct one for the fluid being studied), if one is allowed to choose the forcing, one can reproduce any solution one wants. This is essentially what the climate modelers have done in order to match the previous climate given the wrong dynamical system and excessive dissipation.
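The Climate Audit example itself is not reproduced here, but the underlying point is elementary to demonstrate: for any model du/dt = G(u) and any target solution s(t), choosing the forcing f(t) = s′(t) − G(s(t)) makes s an exact solution of the forced model. A sketch under that assumption (the damping constant, target, and step size are all my arbitrary choices), using a deliberately "wrong" over-damped model:

```python
import math

lam = 5.0            # heavy "molasses" damping (arbitrary); the wrong model is du/dt = -lam*u
target = math.sin    # the solution we decide we want the wrong model to reproduce

def forcing(t):
    """Chosen so that u(t) = sin(t) solves the forced model exactly:
    f(t) = s'(t) - G(s(t)) = cos(t) + lam*sin(t)."""
    return math.cos(t) + lam * math.sin(t)

# Forward-Euler integration of the wrong-but-forced model from u(0) = sin(0) = 0.
dt, u, t = 1e-4, 0.0, 0.0
for _ in range(100_000):                 # integrate out to t = 10
    u += dt * (-lam * u + forcing(t))
    t += dt

error = abs(u - target(t))               # how closely we track the prescribed solution
```

The damped system has nothing to do with the target dynamics, yet it tracks sin(t) to within the truncation error of the scheme. Tuning the forcing hides the fact that the dynamics are wrong, which is exactly the concern raised above about matching past climate.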

I reference a study on the accuracy of a primitive equation global forecast model by Sylvie Gravel et al. [link]. She showed that the largest source of error in the initial stages of a forecast is the excessive growth of the horizontal velocity near the lower boundary. Modelers have added boundary-layer drag/dissipation in an attempt to prevent this from happening. I note in the new manuscript that this problem does not occur with the correct dynamical system and that, in fact, the correct system is not sensitive to small-scale perturbations at the lower boundary.

**Biosketch:** I am an independent applied mathematician trained in partial differential equations and numerical analysis, concerned about the loss of integrity in science through the abuse of rigorous mathematical theory by numerical modelers. I am not funded by any outside organization. My previous publications can be found on Google Scholar by searching for Browning and Kreiss.

Gerald Browning – Thanks for your analysis. I hope it isn’t simply ignored. A question: When you say “the norms of the spatial and temporal derivatives of the ensuing solution are independent of the fast time scales”, what is a “fast time scale”? I’m particularly interested, given the observed behaviour of clouds globally as described in my submitted paper (see https://wattsupwiththat.com/2020/06/05/cloud-feedback-if-there-is-any-is-negative/ ) in which the relationship between SST and cloud cover over a few months is opposite to that over the longer term (decades). Is your “fast time scale” of the order of a few months or is it much faster? If so, is the behaviour that I observed an example of the “ensuing solution” being “independent of the fast time scales”? Any other comment you can offer would be appreciated – especially if you disagree with my analysis (if I’ve got anything wrong I need to know it).

Mike,

The reduced system for the equatorial region has the vertical component of the velocity (w) directly proportional to the total of all heating and cooling sources (H). Thus it seems to me that an increase in ocean temperature would lead to increased evaporation (at least in clear skies), which it seems would make more moisture available to the atmosphere. If I remember correctly, there are many moisture plumes in the equatorial region, so that seems to agree with an increase in moisture. However, what happens within those plumes involves physical processes that are not well understood.

Andrew Majda has used the reduced system with common parameterizations in an attempt to better understand those processes, but clearly that depends on the accuracy of those parameterizations.

Jerry


Mike,

Note that not one person has been able to disprove that the system introduced in my manuscript is the correct UNIQUE reduced system. Thus, as shown in the manuscript, the hydrostatic system is the wrong dynamical system, putting all conclusions made by the IPCC based on climate models that approximate the hydrostatic system in question.

Jerry

Reblogged this on Climate Collections.

When will people learn that papers are titled and not entitled?

What exactly does this mean.?

Papers and books are titled…”The Encyclopaedia Brittanica”.

People are entitled…. “to be angry….”

Petty I know. But maybe it will sink in eventually.

Retiredgeo,

It just appalls me that such educated people have such a poor grasp of English.

And it appalls me that there is an enormous number of grammatically correct and flowery manuscripts by supposedly educated scientists who have a poor grasp of mathematics.

Jerry

Grant,

I assume this is a snide remark because you can’t counter the rigorous mathematics. Climate modelers have made a serious mistake and it is time it be seen for what it is.

Jerry

No, it wasn’t meant to be snide. It just appalls me that such educated people have such a poor grasp of English. I’m a climate modelling skeptic. I’ve been involved with geological models for coal deposits since they first appeared in the 80s, and even today, with the best of borehole data, they are rubbish when it comes to predicting depths to coal seams. We need to have reliable depths to start coring, and the models are no better, and often worse, than us extrapolating from nearby holes using triangulation. So if you can’t model a simple coal deposit, how can you ever model the climate? And the public has been led to believe they are accurate enough to predict the climate in 50 years or so. It’s all just a joke.

Thanks for the info, retiredgeo. I used to use “entitled,” but it started to seem wrong to me (unnecessarily fancy), so I dropped back to “titled.” I’m pleased to learn I guessed right.

Gerald: These corrections are good to hear, to keep one’s enemies from an excuse to sneer.

Retiredgeo,

OTOH, we used mathematical models based on surface measurements of tiny distortions of the natural magnetic field to guide diamond drilling with exquisite success for magnetite-hosted gold-copper-bismuth ore deposits. Several new mines resulted. Value in the tens of billions of dollars. I have seen a lot of modelling in geosciences and done a fair bit myself, and consider that modelling can hardly be better than this magnetic work. The original models were made and tested in the 1960-70 era, on mechanical calculators, before the HP-45 and the IBM PC. Thought processes went ahead of the programming.

So, I have been working to discover why some modelling is exquisite and much is rubbish. The rubbish bin is full of cases where verification is impossible or too difficult. A lot of modelling should never have been started when it was known that it could not be tested.

I fear another factor is accountability. Our salaries depended on finding new ore bodies, so we responded with alacrity and soon knew the difference between a success by delivering the goods and a success shown only by self-adulatory display of intellectual versatility. Also, you do not find more deposits by adjusting or homogenising your raw data, but we knew that before we started because it was honest not to fiddle.

Also, you have to know when to stop refining your model. If it does not work well enough in simpler form, you can get into diminishing returns territory and find little gain from digging a deeper hole. Geoff S

Geoff,

Totally agree.

Glad to hear you have reliable magnetite ore body models. Got a reference?

I started in the 60s when there was one PDP-11 computer to access in our city and not even electronic calculators. We used logs and Facit? mechanical calculators 😂 and hand contouring was the standard.

Grant

Hi Grant,

I’m not aware of formal publications in the style of today, but you might search the Tennant Creek mineral field in Australia’s Northern Territory. Authors might include Richardson Robert L, and his father Lewis, plus Rayner R., Kilpatrick B. and more.

It is an irritant to me also. Unfortunately, if people use the wrong verb often enough, it becomes a part of the English language. It’s a bit like if people persistently say that CO2 appreciably affects the earth’s atmospheric temperature, it becomes fact. I love the way you folks constantly probe that mantra. Please keep it up. (Oh.. and thanks for the bit about titled. You are absolutely correct!)

Thanks Kevin. However, I should have been a bit more circumspect. I didn’t want to disrespect the great piece of work.

Another thing I’d like to see is cessation of lists using “First, Secondly, Thirdly etc” although there is some debate here. You’re right. English misuse leads to acceptance and the lowering of standards due to ignorance and laziness. Look at the noun “loan” which is now almost exclusively used as a verb. Whatever happened to “lend”?

Mike,

For large-scale motions, any time scale shorter than a day is considered fast. Thus inertial/gravity waves and sound waves are considered fast.

However, we have also derived reduced systems for smaller scale motions in the midlatitudes and equatorial regions – see references in the new manuscript and at google scholar.

Jerry

“Pat Frank has addressed the accumulation of error in the climate models. In the new manuscript, even a small error in the system impacts the accuracy of the solution in a short period of time.”

I guess we can stop there– i.e., it’s all BS… which is why scientific skeptics simply call AGW, a hoax.

Wagathon,

I wait for a scientific response to the rigorous mathematics. Given that you cannot refute the mathematics, you must resort to unsupported nonsense.

Jerry

It has been tested by Lorenz back when vacuum-tube computers were first introduced to science just fifty years ago. Lorenz used, “a set of a dozen or so differential equations involving such things as temperature, pressure, wind velocity etc.” As Concroft tells the story, Lorenz re-ran his program, “by entering a variable to 3 decimal places,” and to Lorenz’ surprise, “the results were completely at odds with what was achieved earlier.” As it turns out, “re-entering the variable to its full 6 decimal places produced a repeat of the initial results – from this Lorenz drew the inevitable conclusion that with his dozen or so equations, even a minuscule variation on input is capable of creating massive change in output.”

Wagathon,

Clearly you do not understand how the Lorenz equations came into being. He approximated the equations of motion with only a few spectral modes and then found very strange results. You need to familiarize yourself with the mathematical theory of hyperbolic differential equations.

Jerry

I believe you can take the chaos out of the mathematics, but to take the chaos out of a dynamical system by reducing its sensitivity (reducing the initial-value perturbations), aren’t we also required to assume far more continuity in the system than global warming alarmism suggests, based only on increases in ppm of atmospheric CO2 content, which is arguably the only factor humanity could conceivably control?

Wagathon,

The modelers have done exactly that: reduced the sensitivity of the wrong system of equations by adding excessive dissipation, making them more like molasses than air. Note that the growth of initial errors in symmetric hyperbolic systems can be bounded through mathematical estimation.

Clearly you do not understand the Bounded Derivative Theory, which uses those estimation techniques to prove that if the initial data is chosen according to the theory, then the solution will remain large scale for a period of time. Note that in numerical examples, the solutions behave exactly as expected from the theory, in spite of small truncation errors.

Jerry

…and verification of such models will naturally be easy, e.g., start with the Roman warming period and then ‘predict’ the MWP, the LIA and 20th century warming, ’25-’44 and ’78-’97…?

Wagathon,

How was the eruption of Krakatoa in 1883 predicted by a climate model?

Jerry

Wagathon,

According to a NOAA site, the Krakatoa eruption caused a notable decrease in the global temperature for 5 years. So is that decrease in temperature shown in your results? If so, you must have done something to simulate that eruption. Please explain how that was done.

How did you simulate the Maunder sunspot minimum? There is reasonable conjecture that it affected the LIA.

How does the amount of dissipation in your model compare to that of the ECMWF forecast model and to reality?

Jerry

Eliminating chaos eliminates the only chance we have to capture that which we know nothing about and will never expect. We don’t, for example, know much if anything about volcanoes that are erupting under our oceans right now or what effect they may have on weather over the years.

“Climate change has to be broken down into three questions: ‘Is climate changing and in what direction?’ ‘Are humans influencing climate change, and to what degree?’ And: ‘Are humans able to manage climate change predictably by adjusting one or two factors out of the thousands involved?’ The most fundamental question is: ‘Can humans manipulate climate predictably?’ Or, more scientifically: ‘Will cutting carbon dioxide emissions at the margin produce a linear, predictable change in climate?’ The answer is ‘No’. In so complex a coupled, non-linear, chaotic system as climate, not doing something at the margins is as unpredictable as doing something. This is the cautious science; the rest is dogma.” ~Philip Stott

Wagathon,

One thing at a time so you can’t try to wiggle out.

What is the amount of dissipation in your model compared to the ECMWF model and reality?

Jerry

Wagathon,

Quit avoiding the questions. If you do not, you have confirmed my case.

You said you replicated the past climate but not Krakatoa, which had a major impact on past climate. So in fact you have not replicated the past climate.

Evidently you did not take into account the Maunder Minimum, which also affected the climate and was in close proximity to the LIA.

And most importantly, you did not provide information about the amount of dissipation you use compared to ECMWF or reality.

Jerry

Surely, the eruption of Krakatoa should be considered a weather event (e.g., leading to the ‘year without summer’), not ‘climate,’ no?

Gerald, thank you most kindly. You write so clearly that at the end of a paragraph, I think I understand what you said perfectly … then I go “Wait, what?” and I get to back up and cruise through the ideas again. What’s not to like?

A question: is your method modifiable to include the myriad forms and energy exchanges of water in the atmosphere (evaporation, condensation, freezing, thawing, water vapor, liquid water, graupel, hail, clouds, virga, rain, snow)?

Onwards, thanks for the study, most fascinating.

Stay well,

w.

I have tried to reply to you twice and both replies disappeared. Send me your email and I will send you a copy of the manuscript as submitted.

hrhrbb@gmail.com

Jerry

Gerald Browning,

Thank you for this post. Climate modelling is outside my area of expertise, so I cannot contribute to discussion of it.

However, I am not concerned about global warming. I am concerned about the widespread belief in the probably false premise that global warming is harmful, the fear that belief is generating, and the hugely damaging policies the fear and ideological beliefs are causing to be implemented.

Empirical evidence suggests global warming is beneficial for the world economy and ecosystems. Here are two recent papers on the impacts of global warming and of increasing CO2 concentrations.

1. Lang, P.A.; Gregory, K.B. Economic impact of energy consumption change caused by global warming. Energies 2019, 12, 3575. https://doi.org/10.3390/en12183575

2. Dayaratna, K.D.; McKitrick, R.; Michaels, P.J. Climate sensitivity, agricultural productivity and the social cost of carbon in FUND. Environmental Economics and Policy Studies 2020. https://doi.org/10.1007/s10018-020-00263-w

Also, a Climate Etc. post on the first paper:

https://judithcurry.com/2020/02/08/economic-impact-of-energy-consumption-change-caused-by-global-warming/

Peter,

Note that I have not claimed that there is or is not global warming.

All I have proved is that the climate models are unreliable because of serious mathematical mistakes. Stephen McIntyre has addressed the serious issues with temperature proxies. I would say that the IPCC conclusions are scientifically on thin ice if they are based on those two points.

Jerry

Gerald,

Thank you for your reply. I do understand that you have not claimed that there is or is not global warming.

My apologies if my comment was not clear. My point is that empirical evidence indicates that global warming is beneficial for the global economy and ecosystems, whereas global cooling is harmful. Therefore, we should not be concerned about global warming and GHG emissions – they are beneficial.

Therefore, governments should stop the massive funding for climate research and stop implementing policies to reduce GHG emissions and global warming. The policies are enormously costly, are harming the global economy and slowing the rate of human well-being improvement.

Willis,

Something is very wrong. Every time I reply it is deleted. Please send me your email address,

Mine is hrhrbb@gmail.com

Jerry

Very good post Gerald. I’m glad you are frank about the implications of your math for model skill.

I have what I think is an explanation for why climate models seem skillful at hindcasting global average temperature.

1. The models generally conserve energy, albeit in some cases using after-the-fact flux corrections.

2. Models are tuned to top of atmosphere radiation fluxes.

3. Ocean heat uptake seems pretty accurate too.

4. Thus, atmospheric heat content will be reasonably accurate.

However, given the large numerical truncation errors, not to mention the sub-grid models you mention, it’s not surprising that on virtually every measure of “patterns” of change, models are all over the place.

What I guess still surprises me is that climate scientists have circled the wagons for 40 years and partly succeeded in selling a false narrative about model skill. From my perspective after 45 years in CFD, it is astonishing and disconcerting. What I didn’t realize until about 15 years ago was how very strong positive biases affect all branches of CFD.

Dear Dr. Browning, great to see your new paper. Congratulations on completing this long anticipated work. It looks revolutionary as expected. Look forward to studying in more detail.

All,

I have a copy of the manuscript as submitted. There are some changes in the final version but the math is the same in both except for a few typos.

Anyone wanting a copy can give me their email and I will send them a copy.

I will be glad to answer any questions.

Jerry

Gerald Browning:

“Pat Frank has addressed the accumulation of error in the climate models.”

I am glad to see Pat Frank’s work getting attention.

Thank you for your essay.

Judith Curry has my email address. Could you please send me a copy of your manuscript through her? Hoping she is not overwhelmed by the many demands on her time and attention. I would appreciate it.

Matthew,

You can email Gerald directly at the email address he posted in his comment about five above this one, i.e. here: https://judithcurry.com/2020/06/20/structural-errors-in-global-climate-models/#comment-919479

Peter Lang:

“You can email Gerald directly at the email address he posted in his comment about five above this one.”

Thank you. I saw that later, and now curryja has put up a link.

Gerald Browning,

You write.

“One might ask how climate models can apparently reproduce the large-scale motions of the atmosphere in the past given these issues. I have posted a simple example on Climate Audit (reproducible on request) that shows that given any time-dependent system (even if it is not the correct one for the fluid being studied), if one is allowed to choose the forcing, one can reproduce any solution one wants. This is essentially what the climate modelers have done.”

Yes, you are correct, but your finding is not news. For example, in 1999 I published this paper,

Courtney RS, An Assessment of Validation Experiments Conducted on Computer Models of Global climate (GCM) Using the General Circulation Model of the UK Hadley Centre, Energy & Environment, v.10, no.5 (1999).

It concluded

“The IPCC is basing predictions of man-made global warming on the outputs of GCMs. Validations of these models have now been conducted, and they demonstrate beyond doubt that these models have no validity for predicting large climate changes. The IPCC and the Hadley Centre have responded to this problem by proclaiming that the inputs which they fed to a model are evidence for existence of the man-made global warming. This proclamation is not true and contravenes the principle of science that hypotheses are tested against observed data.”

The chosen input data was assumed aerosol negative forcing to compensate for the model ‘running hot’: i.e. to compensate for the model predicting more past warming than was observed.

My finding was later confirmed by Kiehl, who in 2007 found the same as me; ref. Kiehl JT, Geophys. Res. Lett., Vol. 34, L22710, doi:10.1029/2007GL031383, 2007,

but he examined nine fully coupled climate models and two energy balance models, each of which used a unique value of chosen aerosol negative forcing to compensate for the model ‘running hot’ (see Kiehl’s Figure 2).

I hope this is helpful.

Richard

Richard Courtney:

Thank you for the link.

I think that the BDT preceded any work you mentioned, especially Kiehl’s. The BDT was introduced in 1979 by Professor Heinz-Otto Kreiss.

Did you reference any of his work?

Gerald Browning,

You ask if I or Kiehl referenced the work of Heinz-Otto Kreiss on BDT.

The work that Kreiss published on BDT in 1979 was important and is pertinent, but it was not specifically aimed at digital climate models because they did not exist in 1979. I note that your paper cites the seminal work on BDT by Kreiss, but neither Kiehl nor I did that in our papers.

As I said, I published my investigation of a leading digital climate model in 1999 and Kiehl published his investigation of digital climate models in 2007. Similarly, not every paper on digital ballistic models would be expected to reference Newton’s work on dynamics.

Richard

Kiehl is a radiation guy; he is not someone who would have picked up on the subtleties of dynamics.

In 1994 Heinz and I published a manuscript called “On the impact of rough forcing on systems with multiple time scales” that was aimed exactly at climate and weather modelers’ use of discontinuous forcing. All you did was use a numerical model to demonstrate a known mathematical fact just to gain another publication. I was at NCAR so I know Kiehl knew of our work.

Richard,

A climate model was developed by Warren Washington in the sixties. I was at NCAR when he was doing so. So your comment that they did not exist is utter nonsense.

Jerry

Gerald Browning,

I recognise that you are trying to advertise the work of your “mentor” but it is my experience that when people use language such as “utter nonsense” they lack anything sensible to say.

General Circulation Models (GCMs) of climate were developed from computer programs intended to assist weather forecasting. ENIAC was used to make the first computer-aided weather forecast in 1950, and after that it was natural to develop variations of those primitive weather forecasting programs in an attempt to emulate climate. However, in the 1960s available computing power was insufficient to enable construction of sensible GCMs: 1960s climate ‘models’ are best described as amusing toys.

None of this says anything about your paper nor my comment which intended to provide supporting information for your findings by referencing papers published by myself (published in 1999) and Kiehl (published in 2007).

Richard

Richard,

So that is why Warren Washington received an award (supposedly the equivalent of a Nobel prize) for his work on developing a climate model in the sixties. Utter nonsense is very appropriate here. Do not denigrate the early work as a toy, because the current models are no better. They are still based on the wrong dynamical system with large dissipation.

Jerry

Gerald Browning,

I am not denigrating anything. On the contrary, I wrote to support your finding by citing other works (of Kiehl and myself) which agree with your findings.

But you say of my work,

“All you did was use a numerical model to demonstrate a known mathematical fact just to gain another publication.”

NO! That is both denigrating and untrue. I investigated the development of the Hadley Center GCM and published my finding which was the first peer reviewed publication in the formal literature to demonstrate the inadequacy of GCM modelling as a forecasting tool as it was (and is) being used by the IPCC.

Incidentally, almost all my research work was not published in the formal literature. Very little good and novel scientific research is published there, because the most valuable research findings have commercial, financial, national-security and/or political sensitivity, so they are only released to the public after becoming common knowledge: the formal literature publishes new findings and not common knowledge. Advertising is the usual reason for publication of good research findings.

My own history includes examples of such advertising. For example, when employed at the UK’s Coal Research Establishment (CRE) I devised a method to use an optical microscope with an automated stepping stage to determine the position (X and Y coordinates) of any point on the surface of a 10cm x 10cm planar surface to both an accuracy and a precision of +/-0.1 micron. The method was adopted by the UK’s National Physical Laboratory (NPL), which was and is responsible for maintaining mensuration standards in the UK. I published a paper on the method in the journal ‘Microscopy’, but anyone who used that paper to adopt the method would discover a problem: viz. machining tolerances in the stepping stage meant that it would take years to calibrate – or to recalibrate – over the entire measurement area. However, I had devised a statistical ‘trick’ that enabled the system to calibrate itself in less than 24 hours. CRE was willing to provide a commercial calibration service for the method.

I think you would do well to thank those who support your work instead of attacking and insulting them.

Richard

Reblogged this on uwerolandgross.

Dear Gerald,

Thanks for your contribution. Is it possible to get your paper?

As commonly devil sits in the details, and I would not like to comment before getting your detailed analysis.

Thank you by advance.

Glad to see some kind of constructive “revolution” in the making for a change these days.

Judith,

Will you be using this new approach at your forecasting company? If you do use the approximations Gerald is criticizing, it would seem this approach might give you a competitive advantage. Also, does this work have any implications for the Webster book?

Dr. Pat Frank

https://wattsupwiththat.com/2019/09/12/additional-comments-on-the-frank-2019-propagation-of-error-paper/

“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior…”

The models are not an advancing clockwork like a Lorenz equation. They are reactive. Alive in a sense. Attempting to balance out. Hotter means more attempts to cool. To hang onto the normal temperature. The heat does not just accumulate. It tries to find cold, to move.

I sided with Spencer on this. Spencer’s answer aligns more with my view of the climate system.

A link to the manuscript is now posted

curryja:

“A link to the manuscript is now posted”

Thank you. Got it.

A quick comment. IMO this problem is bigger for climate models than weather prediction models. The parameterizations don’t drive the weather model solution (two wrongs making a sort of right), but these errors accumulate big time in climate models, especially as they influence cloud and water vapor feedback. That said, if this is going to be fixed, it will probably start with weather models.

Dr Curry,

You say, ” IMO this problem is bigger for climate models than weather prediction models. The parameterizations don’t drive the weather model solution (two wrongs making a sort of right), but these errors accumulate big time in climate models, especially as they influence cloud and water vapor feedback.”

I very strongly agree that accumulated errors influencing cloud and water vapour effects are severely problematic for digital climate models. Indeed, I have been saying that for a long time; for example, in this 2008 reply to a request for information from me by US Senator James Inhofe, Chair of the US Senate Committee on Environment and Public Works.

http://allaboutenergy.net/environment/item/2208-letter-to-senator-james-inhofe-about-relying-on-ipcc-richard-courtney-uk

My answer to the Senator’s Question 1 includes this comment on a report by Miller and Schmidt of their evaluation of the NASA GISS GCM which they were developing.

“The abstract says; “the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation” but this adjustment is a ‘fiddle factor’ because both the radiance and the saturation must be correct if the effect of the clouds is to be correct. There is no reason to suppose that the adjustment will not induce the model to diverge from reality if other changes – e.g. alterations to GHG concentration in the atmosphere – are introduced into the model. Indeed, this problem of erroneous representation of low-level clouds could be expected to induce the model to provide incorrect indication of effects of changes to atmospheric GHGs because changes to clouds have much greater effect on climate than changes to GHGs. ”

Richard

Accumulation of water vapor errors, especially in the upper troposphere, could be a bigger problem than clouds. The moist thermodynamics used in weather/climate models has a lot of (unnecessary) approximations; not a big deal for weather time scales, but potentially a huge deal for climate time scales.

Dr Curry,

Yes. Thank you. Again, I agree.

In fact we already have evidence of your point that “Accumulation of water vapor errors especially in upper troposphere could be a bigger problem than clouds.”

All the GCMs predict between 2 and 3 times greater warming at altitude in the tropics than at the surface (an effect known as the Tropospheric Hot Spot). This accelerated warming is an effect of water vapour feedback which the models project.

However, independent measurements by radiosondes on weather balloons (since 1958) and MSUs on satellites (since 1969) both show this accelerated warming (i.e. the Hot Spot) does not exist.

Richard

Dr Curry,

My response to your comment could be thought to be curt. Hence, for benefit of onlookers I am writing this addendum which includes a pertinent link.

The Hot Spot is fully described in Chapter 9 of the so-called “scientific” WG1 report of IPCC AR4 that you can download from https://www.ipcc.ch/site/assets/uploads/2018/02/ar4-wg1-chapter9-1.pdf

The Hot Spot is shown in Figure 9.1, which is on page 675 and is titled:

“Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from (a) solar forcing, (b) volcanoes, (c) wellmixed greenhouse gases, (d) tropospheric and stratospheric ozone changes, (e) direct sulphate aerosol forcing and (f) the sum of all forcings. Plot is from 1,000 hPa to 10 hPa (shown on left scale) and from 0 km to 30 km (shown on right). See Appendix 9.C for additional information. Based on Santer et al. (2003a).”

The Hot Spot is the big red blob that is only in plots (c) for wellmixed greenhouse gases, and (f) for the sum of all forcings.

The Figure shows the ‘blob’ is warming of between 2 and 3 times the warming near the surface beneath it.

Furthermore, the plot is of predicted temperature rises “from 1890 to 1999” and the measured temperature rises are for the latter part of the period (since 1958 for the balloon data and since 1969 for the satellite data). Thus, warming measured by balloons and satellites was for when “wellmixed greenhouse gases” were at their highest.

Therefore, if the effect of wellmixed greenhouse gases is as predicted by the climate models and reported in Figure 9.1 of IPCC WG1 AR4 then the measured warming in the Hot Spot should be MORE THAN 2 to 3 times greater than warming measured near the surface beneath the Hot Spot.

The Hot Spot occurs because of the assumed water vapour feedback (WVF).

Any increase in temperature increases evaporation of water (H2O) from the Earth’s surface. H2O is the major greenhouse gas (GHG) and CO2 is the greatest of the minor GHGs. The models assume CO2 warms the surface and thus increases evaporation. The cold at altitude in the troposphere means there is little H2O up there, so any increase in the H2O concentration at altitude has a large effect.
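The “little H2O up there” point follows from the steep temperature dependence of saturation vapour pressure. As an illustration (my own sketch, not part of the comment above), here is the widely used Magnus-type approximation of Bolton (1980) evaluated at a typical tropical surface temperature and a typical upper-troposphere temperature:

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Approximate saturation vapour pressure (hPa) over liquid water,
    using the Magnus-type formula of Bolton (1980)."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# Typical tropical surface temperature vs. a typical temperature
# near the 200 hPa level where the Hot Spot is predicted.
surface, upper = 25.0, -55.0
e_surface = saturation_vapor_pressure_hpa(surface)
e_upper = saturation_vapor_pressure_hpa(upper)

print(f"saturation e at {surface:+.0f} C: {e_surface:.2f} hPa")
print(f"saturation e at {upper:+.0f} C: {e_upper:.4f} hPa")
print(f"ratio: {e_surface / e_upper:.0f}x")
```

The saturation pressure aloft is roughly three orders of magnitude below the surface value, which is why a given absolute addition of H2O at altitude is a large fractional (and hence radiatively significant) change.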

I hope this expansion is useful to onlookers.

Richard

Richard

It’s good to see you around and about again.

Hope you are feeling well as your writing is on good form

Tonyb

Tony b,

Thanks for your comment and good wishes.

My pain relief inhibits thought so I need to reduce it to contribute to discussions such as this. Hence, I now only contribute to threads when I think I can assist the debate. Also, my resilience is low so I cannot cope with trolls and immediately withdraw if they attack.

Richard

dpy6629 | June 20, 2020 at 10:11 pm

“I have what I think is an explanation for why climate models seem skillful at hind casting global average temperature.”

A perfect fit backwards shows adjustment of the parameters of the model away from the science to fit the observed curve.

–

Professor Browning says

“1. The numerical climate model must accurately approximate the correct dynamical system of equations

2.Because the dissipation in climate models is so large, the parameterizations must be tuned in order to try to artificially replicate the atmospheric spectrum. ”

–

What happens with observed climate, as opposed to model climate, is that the model should be allowed to differ from the observed climate because, although the maths and physics are semi-known, we cannot know all the possible variations – volcanic effects being the simplest example.

Hence no true model can have perfect hindcasts, because a perfect match would mean the model includes volcanic effects etc. that it could not have known or predicted were there.

–

We can use the example of a betting system that purports to show which horse will win in future. In a hindcast it accurately “predicts” the winners of the last 3 years’ races, every time.

Unfortunately this is not possible. The outcome of each race depended on unknown variables that meant the best horse did not win every time. When someone sells you a system by showing that it accurately predicts past winners every time, you know you are being had.

–

Another way of looking at it is that the model developers are too frightened to let their models deviate as they should. They feel, wrongly, that they have to be right in the past to have any semblance of credibility about the future. Hence adjustments are made to the best scientific parameters they had, tweaking them away from the science to give an outcome that could never have been truly replicated.


I must admit to being a bit fraught here.

I am happy to see Professor Browning posting a system that gives a better understanding of atmospheric dynamics.

Or at the very least showing the failings of the previous understanding.

I am upset by the failings of some past respected scientists who have let the rogue element claim near-certainty for the models because they could not stand up to the AGW push. This lets those rogues, and some of their acolytes who comment here with an agenda to push, display their hubris and continue to make false claims about model validity at the highest level.

Angech,

Thank you for your comment. We are in complete agreement about the second paragraph.

Jerry

This analysis does not touch the fundamental errors of GCM and CGCM climate models: 1) the lower atmosphere, as the heat reservoir of the atmospheric heat engine, does not have enough heat to drive the engine; 2) the atmosphere does not drive ocean circulation and everything else – it is ocean surface energy that provides 66% of the energy that drives the atmosphere; 3) climate change is not about meteorology or atmospheric physics, it is about population growth.

I think this comment is not addressing this manuscript that is the point of this thread. Population growth is not included in climate models, only the artificial forcing of the wrong dynamics.

How can you prove your point based on observations? That is what matters at the end of the day.

nabilswedan,

Read the manuscript by Sylvie Gravel at the link at the top of this thread if you want to see how badly primitive equation forecast models go off the rails in a few days without the insertion of new obs every 6 to 12 hours. How do you accomplish the insertion of new obs in the future in a climate model?

Mathematics has proved that climate models are based on the wrong dynamical system of equations. So how does that prove anything? Basically, if one is allowed to choose the forcing, one can obtain any solution one wants even if the time dependent system is incorrect. I posted a simple example on Climate Audit to show that is exactly what the climate modelers have done. Have the climate modelers remove the excessive dissipation and see how many days it takes before the model blows up. Then we can talk about observations.
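The “choose the forcing” point can be sketched in a few lines. This is my own toy example, not the one posted on Climate Audit: the dynamics below are deliberately wrong (strong damping the target solution does not have), yet solving for the forcing that makes the target an exact solution lets the wrong model reproduce it anyway.

```python
import numpy as np

# Target "observed" solution we want the model to reproduce.
u_star = np.sin
du_star = np.cos

# Deliberately wrong dynamics: du/dt = -5u + F(t). By construction,
# the forcing F(t) = du*/dt + 5 u*(t) makes u* an exact solution.
def forcing(t):
    return du_star(t) + 5.0 * u_star(t)

def rhs(t, u):
    return -5.0 * u + forcing(t)

# Classical 4th-order Runge-Kutta integration from the target's
# initial condition.
dt, T = 0.001, 10.0
t, u = 0.0, float(u_star(0.0))
for _ in range(int(T / dt)):
    k1 = rhs(t, u)
    k2 = rhs(t + dt / 2, u + dt / 2 * k1)
    k3 = rhs(t + dt / 2, u + dt / 2 * k2)
    k4 = rhs(t + dt, u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print("model u(T):", u, " target u*(T):", float(u_star(T)))
```

Because the forcing absorbs whatever the dynamics get wrong, agreement with the target says nothing about whether the dynamical system itself is correct.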

Jerry, weather models run out to 15 days, with a single initialization. Then the complete forecast cycle starts again 6 hours later with new data assimilated. People make decisions based on the forecasts out to 15 days. The models don’t blow up, they just show little to no skill relative to climatology after about 2 weeks.

All,

If you look at the simple example in Section 2 of the manuscript, it is possible to see that in order for the mathematical solution of the atmospheric equations to stay on the large scale, both the time and space derivatives must remain large scale, as required by the Bounded Derivative Theory. Clearly the columnar forcing used in primitive equation models is discontinuous and therefore violates this requirement. Discontinuous forcing also violates the numerical analysis requirement that the continuous solution have a number of space and time derivatives in order for the truncation error to be small.
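The smoothness requirement can be made concrete with a small sketch (my own illustration, not from the manuscript) comparing the Fourier spectra of a smooth forcing and a discontinuous top-hat “columnar” forcing on a periodic domain:

```python
import numpy as np

# Sample a smooth forcing (Gaussian bump) and a discontinuous
# top-hat forcing, then compare how fast their Fourier amplitudes
# decay with wavenumber.
N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

smooth = np.exp(-((x - np.pi) ** 2) / 0.1)             # infinitely differentiable
top_hat = np.where(np.abs(x - np.pi) < 0.3, 1.0, 0.0)  # discontinuous

def spectrum(f):
    """Magnitudes of the Fourier coefficients for wavenumbers >= 1."""
    return np.abs(np.fft.rfft(f))[1:] / N

s_smooth, s_hat = spectrum(smooth), spectrum(top_hat)

# The smooth bump's coefficients decay faster than any power of k;
# the top-hat's only decay like 1/k, so high-wavenumber content
# never becomes small.
k_hi = slice(200, 500)
print("smooth high-k amplitude sum :", s_smooth[k_hi].sum())
print("top-hat high-k amplitude sum:", s_hat[k_hi].sum())
```

The truncation-error requirement above is exactly the difference between these two decay rates: the smooth forcing is resolvable at modest resolution, while the discontinuous one retains O(1/k) amplitude at every wavenumber the model can represent.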

Where are you showing that you can compute more accurate approximations to actual measured processes than in the most comparable sections of Peter Webster’s book? I am not criticizing the paper, but your first post on the topic (that I know of) was a criticism of his book.

“Climate model sensitivity to CO2 is heavily dependent on artificial parameterizations (e.g. clouds, convection) that are implemented in global climate models that utilize the wrong atmospheric dynamical system and excessive dissipation.”

Unproven assertion about the dependence of sensitivity on parameterizations.

Uh, why do you think there is a factor of 6 uncertainty in equilibrium climate sensitivity? This is well recognized, and the main problem is cloud feedback (arising from cloud and convective parameterizations).

Steven Mosher | June 21, 2020 at 9:53 pm |

“unproven assertion about the dependence of sensitivity to parameterizations.”

–

How quaint.

We know from past experience that sensitivity is really dependent on characterizations.

Petal.

–

The important take-home message is that science and probability depend on the accuracy of the science and on the accuracy and range of the parameters used.

Which you have just denied.

–

Given the parameters.

Works with guys whose friends’ work he has shafted several times.

Blogs with warmists, who let him vent, but do not trust him an inch.

Attacks the English when he cannot dispute the science.

–

It is no wonder that he is a little sensitive to work that shows he might be wrong in pushing previous assertions.

–

N.B. Despite all this, he did a fantastic job on Mann and on Peter Gleick. He has my, and all our, eternal thanks.

Judith,

Here we disagree based on the definition of skill. Mathematicians use relative norms to define the difference between two L_p functions while weather forecasters use a “skill score” that makes the error look smaller than it actually is because of the extra term in the denominator. That is why in Sylvie’s manuscript the standard mathematical measure was used.

Jerry

Well people who are using weather forecasts want to know if they can beat climatology. If they can do that, they are useful. If they can’t, they aren’t.

Judith,

A number of my comments have disappeared. Am I doing something technically wrong or are you editing them?

Jerry

???

Judith,

At what point is the forecast considered no longer useful in terms of the skill score, and what is the corresponding relative L_2 error value at that point? In Sylvie’s manuscript I considered the forecast questionable when the latter reached 50%.

Jerry

Well there are tons of different skill scores to use. If your deterministic forecast is doing better than a climatological forecast, then there is some skill. The skill scores for ensemble forecasts are more complex.
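The gap the exchange above is about can be sketched on synthetic data (my own illustration; the scores used operationally vary, and this MSE-based skill score is just one common choice, not necessarily the one in Sylvie Gravel’s manuscript):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic verification data: observed anomalies, a forecast with
# moderate error, and a climatological (mean) reference forecast.
obs = 10.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
forecast = obs + rng.normal(0.0, 3.0, obs.shape)
climatology = np.full_like(obs, obs.mean())

def relative_l2(f, o):
    """Relative L2 error: ||f - o||_2 / ||o||_2."""
    return np.linalg.norm(f - o) / np.linalg.norm(o)

def mse_skill(f, o, ref):
    """MSE skill score vs a reference forecast: 1 - MSE(f)/MSE(ref).
    1 is perfect, 0 matches the reference, negative is worse."""
    return 1.0 - np.mean((f - o) ** 2) / np.mean((ref - o) ** 2)

print("relative L2 error    :", relative_l2(forecast, obs))
print("skill vs climatology :", mse_skill(forecast, obs, climatology))
```

With these numbers the same forecast shows a relative L2 error above 40% while scoring above 0.8 against climatology: a norm-based measure and a reference-based skill score can give very different impressions of the same errors.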

I gave an online talk to NOAA a few weeks ago, about evaluating weather forecasts of temperature and winds from the perspective of an energy trader. I’ll post this next week.

Hi Gerald

There is another big and untouched question: the assumption that atmospheric mass (or Patm) has been constant over geological time – that is, N2 and not just the changes in O2 in GEOCARBSULF – as stated by Johnson & Goldblatt 2019 (EarthN) and as estimated for the warm Miocene via the fossil flight of giant birds (Cannell 2020, Animal Biology). Plus several other refs.

Once adiabatic heat enters the models then it really all becomes a question of desperate fudging (such as the 4xCO2 input which is now blithely taken as OK).

On a different topic, I note your speciality and would like to hear your views on traditional flight ‘vectors’. These fail to reproduce wind gradient energy (used by many soaring birds), which at higher densities is much more important.

best

Alan

alcannell@gmail.com

Alan,

Your questions are outside of my areas of expertise.

Jerry

Criticism of the use of models is increasingly common …

I recommend e.g. Bellprat O. et al., 2019, Towards reliable extreme weather and climate event attribution (https://www.nature.com/articles/s41467-019-09729-2): “In this study, we challenge the current practice insufficiently accounting for reliability, by demonstrating that it is not only unjustified but also carries the risk of issuing overly strong attribution statements …”

“From visual inspection and the quantified lack of reliability, we know that the probability distributions are overly confident …”


Gerald,

One of the biggest misconceptions of the twentieth century that precipitated climate science and models is that mathematics is the absolute truth. This may be true in the mathematical world. However, in the physical world, observations and experiments are the absolute truth. A mathematical model based on no observations or observation validations is in my opinion worthless. This is the case of climate science and models.

“I have posted a simple example on Climate Audit (reproducible on request) that shows that given any time dependent system (even if it is not the correct one for the fluid being studied), if one is allowed to choose the forcing, one can reproduce any solution one wants.”

–

True insofar as a unique solution exists for any unique time dependent system, but a couple of queries.

–

If one extends the models to a different time frame, backwards or forwards – and given that we know the forcings are not the same, as different models over the same time frame need different forcings – the problem is not so much what twisted logic is used to give that forcing, but how the model will behave over different time frames in both hindcasting and forecasting.

–

We know that the natural variability in hindcasting means that all models, including “correct” models must by this very reason not match the hindcast perfectly.

Any that do – Cowtan and Way have been my favourite example – show by their accuracy of replication that they are purely data-following; that is, the hindcast is already inscribed into the programme. No true hindcasting as such is possible, and those that achieve it merely show their large human adjustment.

–

On the other hand going forward, if one has squeezed the parameters to match the outcome one can expect ongoing marked and continued divergence from the observational world.

–

We do not know where the temperatures will be going so to say new models are running hotter because of clouds than old models is missing the point. Both are running hotter than observations and no one knows where those observations will go in future.

–

All one should say is that the new models run hotter than the older models, in the model future only. The reason why, as you have explained, is the mistreatment of the science, though I should hasten to add that this was thought to be good science at the time, and I still believe a lot of the parameters and all of the science itself are correct.

–

Apropos Pat Frank and his good work: it must perforce also apply to your studies and projections. Hopefully they narrow the window of error-propagation multiplication, though I fear that his view of the treatment of natural variability will still mean a considerable boundary of uncertainty around any work in this field.

–

Science proceeds small steps at a time and a clearer way to assemble the models workings as you have done is a substantial effort.

Thanks.

Angech,

Continuum errors in the continuous equations, e.g. excessive dissipation and inaccurate forcing (parameterizations), overwhelm the numerical truncation errors. So unless those continuum errors are less than the truncation errors of an accurate approximation to the CORRECT dynamical system, a forecast or climate model is nonsense. We now know what the correct reduced dynamical systems are for all scales of motion on all areas of the globe. These were not known until the introduction of the BDT by Heinz.

If forecast models switch to and accurately approximate the correct reduced systems (assuming the atmospheric system of equations is correct), then any errors in the forecast will no longer be due to numerical truncation error, but only to initial conditions and parameterizations. Note that because the reduced systems require much less dissipation if differentiable forcing is used, much less resolution is required to approximate them accurately (no supercomputer is needed).

Any discontinuous function with compact support, e.g. a mesoscale storm, can be approximated by an infinitely differentiable function. But that function must still be resolved.

Jerry

BTW thank you for the kind words.

Everything that the Sun drives gets passed off as internal variability. Weather models are blind to daily-to-weekly scale solar influence on the annular modes. Climate models are blind to the inverse response of the ocean phases to net changes in climate forcing, and to the associated changes in cloud cover. Stronger solar wind drives La Nina conditions and a colder AMO; weaker solar wind drives El Nino conditions and a warmer AMO. Warmer SSTs reduce low cloud cover.