Lorenz validated

by Kip Hansen

Some reflections on NCAR’s Large Ensemble.

In her latest Week in review – science edition, Judith Curry gave us a link to a press release from the National Center for Atmospheric Research (which is managed by the University Corporation for Atmospheric Research under the sponsorship of the National Science Foundation – often written NCAR/UCAR) titled “40 Earths: NCAR’s Large Ensemble reveals staggering climate variability”.

The highlight of the press release is this image:

[Figure: winter temperature trends over North America, 1963–2012, for each of the 30 CESM Large Ensemble members, plus the ensemble mean (EM) and observations (OBS); original caption below.]

Original caption:

“Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012 for each of 30 members of the CESM Large Ensemble. The variations in warming and cooling in the 30 members illustrate the far-reaching effects of natural variability superimposed on human-induced climate change. The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. The image at bottom right (OBS) shows actual observations from the same time period. By comparing the ensemble mean to the observations, the science team was able to parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change. Read the full study in the American Meteorological Society’s Journal of Climate. (© 2016 AMS.)”

What is this? UCAR’s Large Ensemble Community Project has built a database of “30 simulations with the Community Earth System Model (CESM) at 1° latitude/longitude resolution, each of which is subject to an identical scenario of historical radiative forcing but starts from a slightly different atmospheric state.” Exactly what kind of “different atmospheric state”? How different were the starting conditions? “[T]he scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree”.

The images, numbered 1 through 30, each represent the results of a single run of the CESM starting from a unique, one/one-trillionth-of-a-degree difference in global temperature – each a projection of North American Winter temperature trends for 1963-2012. The bottom-right image, labeled OBS, shows the actual observed trends.

There is a paper from which this image is taken: Forced and Internal Components of Winter Air Temperature Trends over North America during the past 50 Years: Mechanisms and Implications, the paper representing just one of the “about 100 peer-reviewed scientific journal articles [that] have used data from the CESM Large Ensemble.” I will not comment on this paper, other than my comments here about the image, its caption, and the statements made in the press release itself.

I admit to being flummoxed — not by the fact that 30 runs of the CESM produced 30 entirely different 50-year climate projections from near-identical initial conditions. That is entirely expected. In fact, Edward Lorenz showed this with his toy weather models on his “toy” (by today’s standards) computer back in the 1960s. His discovery led to the field of study known today as Chaos Theory, the study of non-linear dynamical systems, particularly those that are highly sensitive to initial conditions. Our 30 CESM runs were initialized with a difference of what? One/one-trillionth of a degree in the initial global atmospheric temperature input value – an amount so small as to be literally undetectable by modern instruments used to measure air temperatures. Running the simulations for just 50 years – from a starting time of 1963 to 2012 – gives results entirely in keeping with Lorenz’ findings: “Two states differing by imperceptible amounts may eventually evolve into two considerably different states … If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible….In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent.”
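
(To make the point concrete, here is a minimal sketch – in Python, using only NumPy – of Lorenz’ classic 1963 three-variable system. Two runs start from states differing by one part in a trillion, echoing the ensemble’s temperature perturbation, and a crude forward-Euler integration is enough to watch them diverge. The parameter values and step size are conventional textbook choices, not anything taken from the CESM.)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

# Two initial states differing by one part in a trillion -- the analogue
# of the one-trillionth-of-a-degree perturbation used for the ensemble.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-12, 0.0, 0.0])

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"step {step:4d}: separation = {np.linalg.norm(a - b):.3e}")

# The separation grows by orders of magnitude until the two trajectories
# are no more alike than two arbitrary states on the attractor --
# sensitive dependence on initial conditions, exactly as Lorenz found.
```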

What is the import of Lorenz? Literally ALL of our collective data on historic “global atmospheric temperature” are known to be inaccurate to at least +/- 0.1 degrees C. No matter what initial value the dedicated people at NCAR/UCAR enter into the CESM for global atmospheric temperature, it will differ from reality (from actuality – the number that would be correct if it were possible to produce such a number) by many, many orders of magnitude greater than the one/one-trillionth of a degree difference used to initialize these 30 runs of the CESM Large Ensemble. Does this really matter? In my opinion, it does not matter. It is easy to see that the tiniest of differences, even in just a single initial value, produce 50-year projections that are as different from one another as is possible (see End Note 1). I do not know how many initial-condition values have to be entered to initialize the CESM – but certainly it is more than one. How much more different would the projections be if each of the initial values were altered, even just slightly?

What flummoxes me is the claim made in the caption: “The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change.”

In the paper that produced this image, the precise claim made is:

The modeling framework consists of 30 simulations with the Community Earth System Model (CESM) at 1° latitude/longitude resolution, each of which is subject to an identical scenario of historical radiative forcing but starts from a slightly different atmospheric state. Hence, any spread within the ensemble results from unpredictable internal variability superimposed upon the forced climate change signal. 

This idea is very alluring. Oh how wonderful it would be if it were true – if it were really that simple.

It is not that simple. And, in my opinion, it is almost certainly not true.

The climate system is a coupled non-linear chaotic dynamical system – the two major coupled systems being the atmosphere and the oceans. These two unevenly heated fluid systems, acting under the influence of the gravitational forces of the Earth, Moon and Sun while the planet spins in space and travels in its [uneven] orbit around the Sun, produce a combined dynamical system of incredible complexity, which exhibits the expected types of chaotic phenomena predicted by Chaos Theory – one of which is a profound dependence on initial conditions. When such a system is modeled mathematically, it is impossible to eliminate all of the non-linearities (in fact, the more the non-linear formulas are simplified, the less valid the model).

Averaging 30 results produced by the mathematical chaotic behavior of any dynamical system model does not “average out the natural variability” in the system modeled. It does not do anything even resembling averaging out natural variability. Averaging 30 chaotic results produces only the “average of those particular 30 chaotic results”.

Why isn’t the mean the result with natural variability averaged out? It is because there is a vast difference between random systems and chaotic systems. This difference seems to have been missed in making the above claim.

Coin flipping (with a truly fair coin) gives random results – not heads-tails-heads-tails, but results that, once enough sample flips have been made, allow the randomness to be averaged out, revealing the true ratio of the possible outcomes – 50-50 for heads and tails.

This is not true for chaotic systems. In chaotic systems, the results only appear to be random – but they are not random at all – they are entirely deterministic, each succeeding point being precisely determined by applying a specific formula to the existing value, then repeating for the next value. At each turn, the next value can be calculated exactly. What one cannot do in a chaotic system is predict what the value will be after the next 20 turns. One has to calculate through each of the prior 19 steps to get there.
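
(A one-line toy system makes this concrete. The logistic map below – a standard illustration from the chaos literature, not anything taken from the CESM – is fully deterministic, yet for a chaotic parameter value such as r = 3.9 there is no known closed-form shortcut to the 20th value: one must iterate through every intermediate step.)

```python
def logistic(x, r=3.9):
    """Logistic map: fully deterministic, chaotic for r near 4."""
    return r * x * (1.0 - x)

x = 0.5
for turn in range(20):
    x = logistic(x)   # each value follows exactly from the one before
print(f"value after 20 turns: {x:.6f}")

# Every step is computed exactly, yet there is no known closed-form
# expression that jumps straight to turn 20 -- the only way to get
# there is to calculate through all the preceding turns.
```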

In chaotic systems, these non-random results, though they may appear to be random, have order and structure. Each chaotic system has regimes of stability, periodicity, period doubling, ordered regimes that appear random but are constrained in value, and in some cases regimes that are highly ordered around what are called “strange attractors”. Some of these states, these regimes, are subject to statistical analysis and can be said to be “smooth” (regions being visited statistically evenly) – others are fantastically varied, beautifully shaped in phase space, and profoundly “un-smooth” (some regions hot with activity, others rarely visited), and can be deeply resistant to simple statistical analysis. Although there is a field of statistics that focuses on this problem, it does not involve the type of average or mean used in this case.

The output of chaotic systems thus cannot simply be averaged to remove randomness or variability – the output is not random and is not necessarily evenly variable. (In our case, we have absolutely no idea about the evenness, the smoothness, of the real climate system.)

(On a more basic level, averaging 30 outcomes would not be valid even for a truly random two-value system like a coin toss, or a six-value system such as the toss of a single die. This is obvious to the first-year statistics student or the beginner at craps. Perhaps it is the near-infinite number of possible chaotic outcomes of the climate model – appearing as an even spread of random outcomes – that allows this beginner’s rule to be ignored.)
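
(The small-sample point is easy to check numerically. A minimal sketch: simulate many batches of 30 fair coin flips and look at the spread of the batch averages – 30 flips is nowhere near enough to “average out” even genuine randomness.)

```python
import random

random.seed(1)  # arbitrary seed, just to make this sketch repeatable

batch_means = []
for _ in range(10_000):
    flips = [random.randint(0, 1) for _ in range(30)]
    batch_means.append(sum(flips) / 30.0)

print(f"lowest batch mean:  {min(batch_means):.2f}")
print(f"highest batch mean: {max(batch_means):.2f}")

# Batches of 30 fair flips routinely average well below 0.4 or well
# above 0.6 (21 heads out of 30 is unremarkable); only across many
# thousands of flips does the average settle near the true 0.50.
```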

Had they run the models 30 or 100 more times, and adjusted different initial conditions, they potentially would have had an entirely different set of results – maybe a new Little Ice Age in some runs – and they would have a different average, a different mean. Would this new, different average also be said to represent a result that “averages out the natural variability, leaving only the warming trend attributed to human-caused climate change”? How many different means could be produced in this way? What would they represent? I suspect that they would represent nothing other than the mean of all possible climate outputs of this model.
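
(This too can be illustrated with a toy chaotic system – a sketch, not a claim about the CESM itself. Using the same logistic map as above, two ensembles of 30 runs, each begun from all-but-identical starting values, yield two different ensemble means; each is merely the average of its own particular 30 chaotic results.)

```python
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

def ensemble_mean(x0, n_members=30, n_steps=200, perturb=1e-12):
    """Mean final value over n_members runs from all-but-identical starts."""
    finals = []
    for member in range(n_members):
        x = x0 + member * perturb      # tiny member-specific perturbation
        for _ in range(n_steps):
            x = logistic(x)
        finals.append(x)
    return sum(finals) / n_members

# Same map, same "forcing", two equally tiny perturbation schemes --
# and two different ensemble means:
print(ensemble_mean(0.5, perturb=1e-12))
print(ensemble_mean(0.5, perturb=2e-12))
```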

The model does not really have any “natural variability” in it in the first place – there is no formula in the model that could be said to represent that part of the climate system that is “natural variability”. What the model has are mathematical formulas that are simplified versions of the non-linear maths representing such things as non-equilibrium heat transfer, the dynamical flow of unevenly heated fluids, convective cooling, and fluid flows of all types (oceans and atmosphere) – dynamics known to be chaotic in the real world. The mathematically chaotic results resemble what we know as “natural variability” – that term meaning only those causes not of human origin, causes specifically coded into the model as such. The real-world chaotic climatic results – truly natural variation – owe their existence to the same principles, the non-linearity of dynamical systems, but in the real, operating dynamical system natural variability includes not just the causes coded into the model as “natural” but also all of the causes that we do not understand and those we are not yet aware of. It would be a mistake to consider the two different variabilities to be an identity – one and the same thing.

Thus “any spread in the ensemble” cannot be said to result from internal variability of the climate system, or be taken to literally represent “natural variability” in the sense commonly used in climate science. The spread in the ensemble simply results from the mathematical chaos inherent in the formulas used in the climate model, and represents only the spread allowed by the constraining structure and parameterization of the model itself.

Remember, each of the 30 images created by the 30 runs of the CESM has been produced by identical code, identical parameters, and identical forcing – and all-but-identical initial conditions. Yet none of them matches the observed climate in 2012. Only one comes approximately close. No one thinks that these represent actual climates. A full one-third of the runs produce projections that, had they come to pass by 2012, would have set climate science on its head. How then are we to assume that averaging – finding the mean of – these 30 climates somehow magically represents the real climate with the natural variability averaged out? This really only tells us how profoundly sensitive the CESM is to initial conditions, and tells us something about the limits of the projected climates that the modeled system will allow. What we see in the ensemble mean and spread is only the mean of those exact runs, and their spread, over a 50-year period. It has little to do with the real-world climate.

To conflate the mathematically chaotic results of the modeled dynamical system with real-world “natural variability” – to claim that the one is the same as the other – is a hypothesis without basis, particularly when the climate effects are divided into a two-value system consisting only of “natural variability” and “human-caused climate change”.

The hypothesis that averaging 30 chaotically-produced climate projections creates an ensemble mean (EM) that has had natural variability averaged out – in such a way that comparing the EM projection to the actually observed data allows one to “parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change” – certainly does not follow from an understanding of Chaos Theory.

The two-value system (natural variability vs. human-caused) is not sufficient, as we do not have adequate knowledge of all the natural causes (nor their true effect sizes) to be able to separate them from all the human causes – we cannot yet reliably assign effect sizes to “natural” causes, and thus cannot calculate the “human-caused” remainder. Regardless, comparing the ensemble mean of multiple near-identical runs of a modeled, known-chaotic system to real-world observations is almost certainly not a scientifically or mathematically supportable approach in light of Chaos Theory.

There are, in the climate system, known causes and there remains the possibility of unknown causes. To be thorough, we should mention known unknowns – such as how clouds are both effects and causes, in unknown relationships – and unknown unknowns – there may be causes of climatic change that we are totally unaware of as yet, though the possibility of major “big-red-knob causes” remaining unknown decreases every year as the subject matures. Yet because the climate system is a coupled non-linear system – chaotic in its very nature – tweezing apart the coupled causes and effects is, and will remain for a long time, an ongoing project.

Consider also that because the climate system is by nature a constrained chaotic system (see End Note 2), as this study serves to demonstrate, there may be some climate cause that, though as small as a “one/one-trillionth degree change in the global atmospheric temperature”, may spawn climate changes in the future far, far greater than we might imagine.

What the image produced by the NCAR/UCAR Large Ensemble Community Project does accomplish is to totally validate Edward Lorenz’ discovery that models of weather and climate systems are – they must be, by their very nature – chaotic: profoundly sensitive to initial conditions, and thus resistant to, or possibly impervious to, attempts at “precise very-long-range” forecasting.

# # # # #

End Notes:

  1. Climate models are parameterized to produce expected results. For instance, if a model fails to generally produce results that resemble actual observations when used to project known data, then it must be adjusted until it does. Obviously, if a model is run starting in 1900 for 100 years and it produces a Little Ice Age by the year 2000, then something must be assumed to be amiss in the model. There is nothing wrong with this as an idea, though there is increasing evidence that the specific practice may be a factor in the inability of models to correctly project even short-term (decadal) futures, and in their insistence on projecting continued warming in excess of observed rates.
  2. I refer to the climate system as “constrained” based only on our long-term understanding of Earth’s climate – surface temperatures remain within a relatively narrow band, and gross climate states seem restricted to Ice Ages and Interglacials. Likewise, in Chaos Theory, systems are known to be constrained by factors within the system itself – though results can be proven chaotic, they are not “just anything”; they occur within defined mathematical spaces, some of which are fantastically complicated. (A short numerical sketch of this idea follows below.)
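
(As a small numerical illustration of “constrained chaos” – a toy sketch, not a climate model – the Lorenz system from the first code example can be run for a long stretch while tracking the extremes each variable visits: the trajectory never repeats, yet it never leaves a bounded region of phase space.)

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

state = np.array([1.0, 1.0, 1.0])
for _ in range(1000):                  # discard the transient approach
    state = lorenz_step(state)

lo, hi = state.copy(), state.copy()
for _ in range(100_000):
    state = lorenz_step(state)
    lo, hi = np.minimum(lo, state), np.maximum(hi, state)

for name, l, h in zip("xyz", lo, hi):
    print(f"{name} stayed within [{l:7.2f}, {h:7.2f}]")

# Chaotic and never-repeating, yet confined to the butterfly-shaped
# attractor -- "not just anything", but a defined mathematical space.
```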

# # # # #

Moderation note: As with all blog posts, please keep your comments civil and relevant.

384 responses to “Lorenz validated”

  1. Curious George

    It might be a staggering climate variability. It is for sure a staggering model immaturity.

  2. The natural variability is largely the response to solar variability rather than being internal. Solar science suffers the same problem in assuming that solar variability is internal.

  3. Kip- A very insightful post. See also

    Pielke, R.A., 1998: Climate prediction as an initial value problem. Bull. Amer. Meteor. Soc., 79, 2743-2746. http://pielkeclimatesci.wordpress.com/files/2009/10/r-210.pdf

    Rial, J., R.A. Pielke Sr., M. Beniston, M. Claussen, J. Canadell, P. Cox, H. Held, N. de Noblet-Ducoudre, R. Prinn, J. Reynolds, and J.D. Salas, 2004: Nonlinearities, feedbacks and critical thresholds within the Earth’s climate system. Climatic Change, 65, 11-38 http://pielkeclimatesci.wordpress.com/files/2009/10/r-260.pdf

    Roger Sr

    • Dr. Pielke ==> Thank you for the links to some of your work in this area. I have recently read the first on climate prediction as an initial value problem, but hadn’t looked at the second for a long time. Appreciate your bringing it to my attention once again.

      I recommend both papers to those interested in the climate and non-linearity.

  4. “The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change.”

    Very funny, you can’t average out AMO effects with the period in question going from a cold AMO to a warm AMO.

  5. Computer Alchemy.

    • Great writeup.
      Would add that chaotic systems are impossible to compute numerically due to the underlying instabilities. And numerical error is absolutely not uncorrelated, so averaging is nonsense.

  6. Thanks, Kip, for this post. New information about the real Sun has occasionally slipped past the gatekeepers of public knowledge before their old models could be twisted into explanations:

    http://www.universetoday.com/10546/powerful-flare-shook-up-our-understanding-of-the-sun/

  7. David Wojick

    Even tossing a fair coin 30 times will not average anything out. Getting something like 21 heads and 9 tails would be unexceptional. The caption is thus statistically absurd. It also assumes that the models correctly characterize natural variability, which is far from true. That there is even an anthro signal to detect is speculative.

  8. “The hypothesis that averaging 30 chaotically-produced climate projections creates a ensemble mean (EM) that has had natural variability averaged out in such a way that comparing the EM projection to the actually observed data allows one to “parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change” certainly does not follow from an understanding of Chaos Theory.”

    you forgot the rest of what Lorenz stated.

    https://www.youtube.com/watch?v=SlwEt5QhAGY

    1. look at annual not just winter, as winter has the highest variability.
    2. actually compute the average TREND for all those EM

    • Curious George

      That produces a correct average trend for the model.

      • catweazle666

        “That produces a correct average trend for the model.”

        Precisely.

        As I’m sure Mosher is well aware, but hopes there are visitors to the blog who aren’t, and will be dazzled by his obfuscatory prognostications.

        So as usual, he’s blowing smoke out of his fundament.

      • One may easily compute the trend over each member of the ensemble and look at the spread. No need to average them all away.

      • catweazle666

        “One may easily compute the trend over each member of the ensemble and look at the spread.”

        Why bother?

        The result will be entirely meaningless, and with no merit whatsoever.

      • Funny how no matter where you move the goalposts, the answer stays the same.

      • catweazle666

        “Funny how no matter where you move the goalposts, the answer stays the same.”

        Exactly so, Brandon.

        Computer games are an utter waste of computing power, and produce exactly zero insight into future climate.

        And that – as they say – is that.

      • Your powers of omniscience are indeed impressive, catweazle666.

    • Steven ==> I did not forget the rest of what Lorenz has to say — and include a brief section on statistical analysis of chaotic systems. Some chaotic systems, like the Lorenz Mill and the famous Lorenz Butterfly Attractor, are subject to interesting statistical analysis — though such analyses have not proven to have much practical application. Other, more complex chaotic systems have resisted statistical analysis. In the long run, for the climate, understanding the statistical probabilities of the real world system, if that is even possible, may be important but may in the end be “not useful” — as there will be only one 6 October 2016 in the Central Hudson Valley of New York and only one North American Winter 2016/2017.

      If we wish to make statistical analyses of North American Winter, we would be better served by looking at what Nature has already given us — the historical record.

      I can see little utility in knowing that thousands of different possible North American Winters can be projected by making infinitesimal changes to any one input initial condition of the CESM, or in knowing what the model shows to be their relative statistical probabilities — based on that one perturbation.

      How many inputs? How many initial conditions to initialize the CESM? How many already existing “little perturbations” are there in the CESM?

      It is far more complex than cute movies of toy systems.

        I can see little utility in knowing that thousands of different possible North American Winters can be projected by making infinitesimal changes to any one input initial condition of the CESM, or in knowing what the model shows to be their relative statistical probabilities — based on that one perturbation.

        We only get one realization of the real system, Kip, and there is a great deal of uncertainty in the observational data — including the critical initial conditions. Even so, it would be far more useful to be able to run the real system 40 times over the same forcings but randomly perturbed initial conditions to see what happens. But we can’t.

        So, it’s Teh St00pid Modulz all the way down. Deal with it.

      • It is far more complex than cute movies of toy systems.

        Supremely ironic, Kip. Lorenz’s seminal work on the topic used toy models which were so abstract that it was necessary for him to specifically caveat that his results did not give an answer for the atmosphere (see the paragraph just before Acknowledgements). Nor was the 1963 paper the last time he did so.

      • Brandon ==> surprised to see you agreeing with my points, but appreciated none the less. Cheers.

      • We only agree on some points, Kip. I appreciate the disagreements far more.

    • Steven Mosher,

      If you understood anything at all about basic chaos theory, you probably wouldn’t continue to spout the rubbish to which you seem to be addicted.

      Of course, delusional psychotics reject conventional reality, and substitute their own.

      Unfortunately, there is no way of predicting the outcome of a chaotic system mathematically. There is not even any method of predicting a lowest value at which a chaotic equation such as the logistic differential equation will transition to chaos.

      Go on, try if you wish. Look up a value on the Internet! How about Wikipedia, or Wolfram?

      Or you could always fly into a perfect lather of deny, divert, and confuse. You apparently know nothing about chaos – pretty much on a par with your knowledge of any other branch of science or mathematics.

      Good luck with your trend following. I’ll give you a clue – the longer a trend has persisted, the closer it’s getting to a change point. Here’s another clue – you haven’t one.

      Cheers.

      • I’ll give you a clue – the longer a trend has persisted, the closer it’s getting to a change point.

        Sounds like a prediction based on a model, Mike. I thought that was supposed to be impossible to do.

      • Brandonrgates,

        You wrote –

        “Sounds like a prediction based on a model, Mike.”

        No. It’s an assumption. Just like assuming that the Sun will rise tomorrow, or that winter is generally colder than summer.

        If you have a falsifiable hypothesis to the effect that trends continue forever, please put it forward. I’m assuming you’re just attempting to be gratuitously offensive. Please correct me if I’m wrong.

        Cheers.

      • If you have a falsifiable hypothesis to the effect that trends continue forever, please put it forward.

        I have a falsifiable hypothesis that forced trends can be expected to continue as long as the net forcings do.

        “But Chaos” is the fly in that ointment today. Do try to keep up, Mike.

        I’m assuming you’re just attempting to be gratuitously offensive.

        One good turn deserves another, hey?

      • Brandonrgates,

        I may be wrong, but I wonder if you understand the concept of a “falsifiable hypothesis”.

        Are you trying to say that unless things change, they will remain the same?

        How would you disprove the assertion by means of a reproducible scientific experiment?

        You appear to be waffling. You haven’t even defined your terms scientifically. Is this your view of science?

        I know you are just trying to deny, divert, and confuse. Facing chaos is simply terrifying for many scientists, let alone the ill-assorted crew calling themselves cli

        Cheers.

      • Brandonrgates,

        Sorry – climatologists rather than cli

        Cheers.

      • I may be wrong, but I wonder if you understand the concept of a “falsifiable hypothesis”.

        I don’t wonder that you “wonder” at such things, Mike.

        Are you trying to say that unless things change, they will remain the same?

        No. Chaotic variability kind of precludes that, neh?

        How would you disprove the assertion by means of a reproducible scientific experiment?

        Build an exactly identical spare planet and hold external forcings to exactly zero for a few million years, give or take a billion. Then run it forced over the same interval. It wouldn’t be “proof” in the logical/maths sense of the word but it would be a heck of a lot better than relying on Teh Modulz for doing such experiments.

        You appear to be waffling. You haven’t even defined your terms scientifically. Is this your view of science?

        No, but it’s an awfully fun caricature of it. Thanks ever so much for your numerous strawmen, they’re quite informative as to your oh-so honest and robust approach to doing science in blog comments.

    • If the attractor is complex even long term behavior will appear chaotic as the ice ages record appears to be. People get so used to low dimensional mental models they tend to forget that.

        What if there are several attractors (as the climate appears to have)?
        What if the attractors have a fractal shape?
        What if the attractors are the equivalent of 3D projections of moving 4D objects?
        What if the attractors have variable, umm, attractiveness?
        What if all four of the above are true simultaneously?

        I guess that counts as “complex”, huh? ;-)

        It would be like trying to solve a 6 body gravity only problem where gravity, mass and time were all variable and unknown. Good thing we don’t rely on anything like that for public policy, innit? Oh wait…

    • Mosher, “1. look at annual not just winter, as winter has the highest variability.
      2. actually compute the average TREND for all those EM”

      Let’s add 3. look at the 8 runs that were excluded for whatever reason.
      4. look at varying start time instead of just initial conditions.
      5. Look at the impact of the range of forcing estimates.

      With Lorenz models you know they are bounded and why; climate vs. climate models, not so much.

        With Lorenz models you know they are bounded and why; climate vs. climate models, not so much.

        Well let’s see, Captain. Lorenz models do better lend themselves to scrutiny because (especially) his early ones were so relatively simple. Kip calls them “toy models” in the OP, and I don’t disagree. I don’t think Lorenz would have either. Are you sure you want to talk about what simple models *might* tell us about climate?

        http://3.bp.blogspot.com/-5NsdtYi0Ifg/VqQqEq8O-BI/AAAAAAAAAkU/BzdI7Q7-Gsk/s1600/HADCRUT4%2Bvs%2BCO2%2Bmonthly%2B2015-12.png

        I didn’t think so.

        In the 40-member CESM Large Ensemble, they only randomly perturbed atmospheric temps by amounts which approached the rounding-error level of the floating-point precision of the hardware, i.e., *immeasurably* different in terms of the real system.

        Maybe Gremlins explain the measurable differences which evolved as the runs diverged from the initial conditions.

        Regardless, the nifty thing about the results is how well the ensemble members bounded observational estimates:

        http://journals.ametsoc.org/na101/home/literatum/publisher/ams/journals/content/bams/2015/15200477-96.8/bams-d-13-00255.1/20150904/images/medium/bams-d-13-00255.1-f2.gif

        I realize that kind of consistency is disconcerting for Chaos of the Gaps aficionados, especially since even Saint Lorenz’s own GCM experiments lead him to conclude that the real system is “unlikely to be intransitive”.

        Perhaps he was an heretic unto himself?

      • Brandon, “Regardless, the nifty thing about the results is how well the ensemble members bounded observational estimates:”

        Right. There is a large difference between early years of HADSST and ERSST which became larger recently. Instead of a trillionth of a degree there are points closer to half of a degree, so if you start in say, 1880 instead of 1920, this experiment would indicate about 0.5 C of variability when there is about 0.5 C of uncertainty. So you have a choice, tune the models to include the uncertainty/variability in the past or not.

        As far as the limits of internal variability, there are actual papers that look at the impact of meridional and zonal temperature gradients which would be another factor to explore in an experiment to estimate “natural” variability using a virtual model. If you test that, instead of ~25% of the runs being invalid, you might have 50% or more.

        When you do a test like was done, you are testing how stable your model has been tuned to be, not how much variability there may be in the system it is modeling.

      • There is a large difference between early years of HADSST and ERSST which became larger recently.

        Oh look. A squirrel.

        As far as the limits of internal variability, there are actual papers that look at the impact of meridional and zonal temperature gradients which would be another factor to explore in an experiment to estimate “natural” variability using a virtual model.

        There are practically an infinite number of things they *could* have done, Big D., and a finite number of (expensive) CPU cycles and storage space. Even visible Unicorns could find a place to hide in such an unassailable state space for eternity. Just imagine how invisible Unicorns might fare. Why, it’s a veritable cornucopia of never ending questions for anyone but you to answer for us!

        If you test that, instead of ~25% of the runs being invalid, you might have 50% or more.

        I wasn’t aware that 10 of 40 members were “invalid”. We wouldn’t be trading in speculation and/or rumor, would we? No, of course not. Team Integrity doesn’t do such things.

        When you do a test like was done, you are testing how stable your model has been tuned to be, not how much variability there may be in the system it is modeling.

        It never ceases to amaze me the lengths “skeptics” will go to remind otters of that which they know all too well, and to which they have already spoken. To wit, Kay et al. (2015):

        The influence of small, initial condition differences on climate projections in the CESM-LE parallels initial condition impacts on weather forecasts (Lorenz 1963). After initial condition memory is lost, which occurs within weeks in the atmosphere, each ensemble member evolves chaotically, affected by atmospheric circulation fluctuations characteristic of a random, stochastic process (e.g., Lorenz 1963; Deser et al. 2012b). As we will show, internal climate variability has a substantial influence on climate trajectories, an influence that merits further investigation, comparison with available observations, and communication. Evaluating the realism of internal climate variability simulated by the CESM-LE is challenging, especially on decadal time scales, but vital (e.g., Goddard et al. 2013), especially given differences in model variability (e.g., Knutson et al. 2013). Model biases can degrade the realism of simulated internal variability and forced climate responses and we therefore encourage users of the CESM-LE to understand relevant model biases and their potential ramifications.

        The “proper” thing to do of course is do an infinite number of runs on the real system. Pity that’s not possible.

        ***

        I notice that, just like Kip who continues to do so, you dodged Saint Lorenz’s conclusion in 1989 that the ocean/atmosphere system is “unlikely to be intransitive”.

      • Brandon, “I wasn’t aware that 10 of 40 members were “invalid”. We wouldn’t be trading in speculation and/or rumor, would we? No, of course not. Team Integrity doesn’t do such things.”

        Well, the em is based on 30 runs and the paper mentions 40 runs or Earths. Perhaps I should have called them misplaced?

        The ERSST vs HADSST is more of a bear than a squirrel. HAD did a great deal of research and selling of the bucket to intake adjustment and ERSST seems to have changed that to all buckets all the time instead of using a bit more sophisticated temperature sensors. The largest impact of Karl et al. was the past doncha know.

        Since there is such a difference between the two and since the models are so wonderful, I would imagine they could determine which of the two is more accurate since they can nail natural variability so well.

      • oops, “I would imagine they could determine which of the two is more accurate since they can nail natural variability so well.”

        I guess the right thing would be for the models to split the difference so no scientist is left behind :)

      • Well, the em is based on 30 runs and the paper mentions 40 runs or Earths.

        Here’s what the paper says:

        All CESM-LE simulations use a single CMIP5 model (CESM with the Community Atmosphere Model, version 5). The core simulations replay the twentieth to twenty-first century (1920–2100) 30 times under historical and representative concentration pathway 8.5 external forcing with small initial condition differences.

        Perhaps I should have called them misplaced?

        I think you shouldn’t uncritically pass off speculations and assumptions as statements of fact, Dallas.

        HAD did a great deal of research and selling of the bucket to intake adjustment and ERSST seems to have changed that to all buckets all the time instead of using a bit more sophisticated temperature sensors. The largest impact of Karl et al. was the past doncha know.

        Yes, yes, I know all about friendly neighborhood Uncertainty Monsters. Like polar bears, they must be carefully cultivated and preserved at all costs. Also like polar bears, they’re fuzzy and adorable right up to that moment they’re munching on your liver whilst you’re still alive and screaming.

        Since there is such a difference between the two and since the models are so wonderful, I would imagine they could determine which of the two is more accurate since they can nail natural variability so well.

        Again:

        Model biases can degrade the realism of simulated internal variability and forced climate responses and we therefore encourage users of the CESM-LE to understand relevant model biases and their potential ramifications.

        Much more in that vein from whence that derives. Your strawman has weak knees.

        ***

        Still no comment on Lorenz (1989) I see.

        No matter, there’s more than one way to skin this cat. Fans of Lorenz’s 1968 work might do well to ponder the alleged wisdom of forcing a (proposed) transitive (or almost-intransitive) climate out of bounds it hasn’t seen for on the order of a million years when other options are available. Heck, even Senior goes there in the second of his self-citations upthread.

      • brandon, “I think you shouldn’t uncritically pass off speculations and assumptions as statements of fact, Dallas.”

        The fact is the paper is called 40 Earths and delivers 30 and they intended to deliver 40 but only delivered 30. Renaming the paper 30 earths is more accurate and less distracting. I also get my panties in a wad when professional papers have spelling and grammar errors, leaving in spurious results for effect and generally appear to be less than professional.

        “Yes, yes, I know all about friendly neighborhood Uncertainty Monsters. Like polar bears, they must be carefully cultivated and preserved at all costs.”

        Seems you are lapsing into your true nature. The difference between ERSSTv4 and HADSSTv3 in the earlier part of the 20th century is close to 1 degree C, which would represent about 100 years of ocean heat uptake in the top 700 meters. Once the assumption of RCP 8.5 kicks in, any natural variability would be de-emphasized, so the critical part of their test should be the part of the record likely to have the most natural variability. It’s a Joules thing.

      • The “proper” thing to do of course is do an infinite number of runs on the real system. Pity that’s not possible.

        More runs are pointless.

        The model is “twitchy” in response to minute changes in initial conditions.

        The model is known to have low resolution, both temporal and spatial, to mis-model a number of effects, and to use empirical solutions for others. All of which affect the simulation more than tiny initialization changes.

        The models (CESM in this case) are much more twitchy than the real world which they model badly. Until the models are more stable and accurate it is hard to claim any predictive value.

      • >The fact is the paper is called 40 Earths and delivers 30 and they intended to deliver 40 but only delivered 30.

        “40 Earths: NCAR’S Large Ensemble Reveals Staggering Climate Variability” is the title of the press release (dated Sept. 29, 2016). The title of Kay et al. (2015), the paper it references (Final Form: 25 September 2014, Published Online: 16 September 2015), is “The Community Earth System Model (CESM) Large Ensemble Project: A Community Resource for Studying Climate Change in the Presence of Internal Climate Variability”.

        >I also get my panties in a wad when professional papers have spelling and grammar errors, leaving in spurious results for effect and generally appear to be less than professional.

        I on the other hand delight when spelling and grammar lamers get their knickers in a bind over factually incorrect information. Thank you for making my evening, Dallas.

        >The difference between ERSSTv4 and HADSSTv3 in the earlier part of the 20th century is close to 1 C degrees which would represent about 100 years of ocean heat uptake in the top 700 meters.

        Let me guess, that’s the maximum difference you find between the timeseries for monthly global values.

        >Once the assumption of RCP 8.5 kicks in, any natural variability would be de-emphasized […]

        Um. Why?

        >[…] so the critical part of their test should be the part of the record likely to have the most natural variability.

        Good grief. How are you getting from largest maximum discrepancy between two SST constructions to period of time most likely to have the most natural variability? The general rule is earlier in the record, the higher the *uncertainty*. Here, read the paper:

        Like any individual model ensemble member, the observations also represent one possible response of the climate system to external forcing in the presence of internal climate variability. As a consequence, comparing trends from a single ensemble member to the observed 1979–2012 temperature trends to “validate” the climate model simulation is problematic. Similarly problematic is comparing the ensemble-mean trend to observations, as internal climate variability is (by construction) muted in the ensemble mean. To confound matters even further, the available observations in some regions are too sparse to reliably detect a trend from observations alone [e.g., mountainous and polar regions in the Hadley Centre Climatic Research Unit temperature (HadCRUT4) dataset (Morice et al. 2012)].

        Another e.g. can just as easily be sparse ocean sampling between the mid-19th and early-20th Centuries, not to mention any uncorrected bucket brigade gremlins still lurking in the raw observations.

        I think the *substantive* issues here are a lot more interesting and important than the twists in your semantic undergarments.

        Heck I don’t know, it just might make the most sense to “validate” Teh Stoopid Modulz over the period of *least* uncertainty and *best* agreement between competing observational products. I can see how this might be a foreign concept to Uncertainty Monster maximizers, however.

        >Its a Joules thing.

        Yah. And we already know from Argo that quarterly variability down to 2 km swamps the mean annual trend over the entire record (I make it by a factor of 250 at two standard deviations). Need I really remind you the difference between climate and weather?

      • >More runs are pointless.

        lol. Lorenz would disagree.

        >The model is “twitchy” in response to minute changes in initial conditions.

        Yes, which “validates” Lorenz. Ask Kip and/or Dr. Curry if you don’t believe me.

        >The model is known to have low resolution both temporal and spacial, mis -model a number of effects and use empirical solutions for others. All of which effect the simulation more than tiny initialization changes.

        Show your maths?

        Never mind, you didn’t do any.

        >The models (CESM in this case) are much more twitchy than the real world which they model badly.

        You’re Doing It Wrong, PA. The But Chaos/Climate is Always Changing paradigms prevail on the real system being “twitchy” and therefore unpredictable. Stability and the inherent predictability that brings is a Bad Thing; we want it to be as capricious and wobbly as possible so that we’ll never have enough ability to make policy decisions.

        Get with the program.

      • brandon, “>The difference between ERSSTv4 and HADSSTv3 in the earlier part of the 20th century is close to 1 C degrees which would represent about 100 years of ocean heat uptake in the top 700 meters.

        Let me guess, that’s the maximum difference you find between the timeseries for monthly global values.”

        Yep, monthly, decadal, a few decades. The paper assumes “equilibrium” or pre-industrial conditions are stable and an extremely low rate of OH uptake, the same thing as assuming LIA recovery ended in ~1850.

        >Once the assumption of RCP 8.5 kicks in, any natural variability would be de-emphasized […]

        Um. Why?”

        If you add insulation to a system with variable heat loss, would it become more stable or less stable? For CO2 forcing to have a maximum impact it would need to become more stable. Less stable would mean more heat loss due to things like deep convection, sudden stratospheric warming events, etc. The odds of natural variability remaining the same with RCP 8.5 forcing would be quite small.

      • >Yep, monthly, decadal, a few decades.

        Dallas, we have a problem. The maximum annual discrepancy is 0.34 C, in 1938. It stands to reason that the decadal max would be smaller, and the multi-decadal even smaller.

        I think it’s about time you started showing your work.

        Speaking of, I need to set a “no maths after midnight” rule, my last post contained a serious flub: And we already know from Argo that quarterly variability down to 2 km swamps the mean annual trend over the entire record (I make it by a factor of 250 at two standard deviations).

        That’s two orders of magnitude off, the smell-tester should have caught it. The real answer is 2.7 times. The calcs are:

        Slope: 1.0611*10^22 J/yr
        2-sigma deviation of the residual: 2.9063*10^22 J

        Divide the latter by the former obtains a factor of 2.7 difference.

        Point is that you looking at monthly SST discrepancies like they’re climatically relevant is bogus. Try crunching multi-decadal trend discrepancies. The numbers will be less impressively large, but they’ll be far more relevant to the argument you’re making.

        >The paper assumes “equilibrium” or pre-industrial conditions are stable and an extremely low rate of OH uptake, the same thing as assuming LIA recovery ended in ~1850.

        How about a direct quote, I don’t trust your interpretations for some reason.

        >If you add insulation to a system with variable heat loss, would it become more stable or less stable?

        For relatively simple, lab-bench sized system I’d expect additional insulation to damp variability and add lag. I don’t think we can make that assumption about *internal* variability of the climate system at inter-annual to multi-decadal time scales. To gain traction with me on this point, you either need to do a lot more math, or provide a citation which has done the maths for us.

        The only case I can think of where the principle you invoke has traction in the literature is with respect to diurnal temperature range, which is expected to decrease as GHG concentrations rise, with daily min temps rising faster than daily max temps. Some people not going by the handle micro6500 have found it in instrumental data. It’s subtle, and varies by region. IMO, the secular trend in average temps is much stronger and clearly significant.

        >The odds of natural variability remaining the same with RCP 8.5 forcing would be quite small.

        How small? How did you calculate these odds? Which model(s) did you use to do it?

        It never ceases to amaze me how you jump from “we don’t know nuffin’ because … assumptions” to making confident statements about probability.

      • “The odds of natural variability remaining the same with RCP 8.5 forcing would be quite small.”

        “How small? How did you calculate these odds? Which model(s) did you use to do it?”

        I’ll use Realclimate’s:
        http://www.realclimate.org/images/attribution.jpg

        When forcings are causing more than 100% of the warming, I am guessing natural variability is small. Best guess seems to be 10% of forcings. If you heat a pot of water, natural variability is swamped. Compare this to a pot of water. It lags room temperature. Heat the planet, same thing. Natural variability for before is 100%. With heat, almost 0%. Let’s take a try at going down the rabbit hole. Without heat, how would a room temperature pot of water exhibit chaos?
        https://www.youtube.com/watch?v=uWdKVpQ94Ns
        Is the rotating dishpan showing chaos? Maybe not, but the measurements and calculations for the eddies are so difficult it’s easier to think of them as chaotic.

      • Unlike the “butterfly” effect simulation for 1/1000th degree, the oceans actually have 1000 times the energy and can physically produce a large impact if there is a large range of surface temperatures they can influence. CO2 should warm the poles more than any other area reducing the range of potential variability. Basically, the oceans can warm the poles more if they are colder to begin with.

      • brandon, “It stands to reason that the decadal max would be smaller, and the multi-decadal even smaller.

        I did overestimate, 11yrma is 0.38 lower circa 1938 and is an average of ~0.22 C lower for the 1870 to 2015 comparison I made. They start at nearly the same temperature and then ERSSTv4 cools relative to HAD then nearly recovers in 2016.

        http://climexp.knmi.nl/data/iersstv4_0-360E_-60-60N_n.png

        http://climexp.knmi.nl/data/ihadisst1_0-360E_-60-60N_n.png

        Since the difference is in absolute temperature, smoothing doesn’t have much of an impact. If ERSSTv4 is right though, about 0.38C of what they are calling warming would be natural variability on nearly a century time scale.

      • Ragnaar,

        >When forcings are causing more than 100% of the warming, I am guessing natural variability is small.

        Natural *forced* variability. We need to be careful to distinguish between that and *internal* variability, which is the main concept being addressed here.

        >Is the rotating dishpan showing chaos?

        Yes, turbulent flow is a classic manifestation of physical chaos. There’s absolutely no question in my mind that the climate system is chaotic, and I don’t know of any “consensus” climate scientists who would seriously dispute me on that point. The Kay et al. (2015) authors certainly don’t; they cite Lorenz (1963) twice.

        Kip’s essay doesn’t pose the more interesting question of whether the climate is transitive, intransitive or, as Lorenz asked in 1968, “almost-intransitive”. The predictability of climate rests on those concepts, not whether or not the climate system is chaotic.

        To be fair, neither do Kay et al. (2015).

      • >I did overestimate, 11yrma is 0.38 lower circa 1938 and is an average of ~0.22 C lower for the 1870 to 2015 comparison I made. They start at nearly the same temperature and then ERSSTv4 cools relative to HAD then nearly recovers in 2016.

        Thanks, Dallas, much better. Two more random stats I can lob in. For annual means, the average discrepancy is 0.15 C over the same interval, and the standard deviation of the same is 0.08 C.

        >Since the difference is in absolute temperature, smoothing doesn’t have much of an impact.

        A nice feature of those SST products is that they come in absolute temps.

        >If ERSSTv4 is right though, about 0.38C of what they are calling warming would be natural variability on nearly a century time scale.

        Yes, if. Maybe. Could be. Logically possible. But *all* of it *natural* variability, really?

        I’d note that over 1870-2015, the difference in their linear trends is statistically indistinguishable from zero: 0.003 C/century. Yes, per century.

        That *could* just be a coincidence of course. Hopefully another century or ten of higher-quality observation will settle the question.

        Speaking of unanswered questions:

        >The paper assumes “equilibrium” or pre-industrial conditions are stable and an extremely low rate of OH uptake, the same thing as assuming LIA recovery ended in ~1850.

        You got a direct quote for me from Kay et al. (2015) on that one yet?

      • >You got a direct quote for me from Kay et al. (2015) on that one yet?

        “Our 1850 ocean initialization strategy leverages two assumptions. First, the upper ocean equilibrates on much shorter time scales than the deep ocean. Therefore, the upper ocean adjusts to a preindustrial state after several decades under constant forcing. Second, modern observations still reflect preindustrial conditions at depth because of the long abyssal ocean equilibrium time scales. After an expected initial surface ocean cooling, the 1850 control arrived at a balanced coupled state with climate drift only in the deep ocean (global ocean temperature drift of ∼0.005 K century–1 for years 400–1000).”

        The 1850 control is smooth as a baby’s butt and, as you are aware, there are several paleo indications that the oceans were about 1 C colder circa 1700.

        I believe the linear trend is closer to 0.3 per century or about 0.9C per 300 years which would have some amplification in the atmosphere and land masses. Using BEST, land warming has maintained a roughly 2x warming rate since the beginning of records. Sea level rise also has a fairly constant rate and provides a reasonable proxy for OHC.

        I am not going to redo everything just for ERSSTv4, but the changes do indicate more “natural” warming and natural variability than HAD and v3.

        Realistically, I should have oversold the 0.38 more if we are going to start negotiations. About a third of warming due to multi-century ocean heat uptake is pretty reasonable. It is consistent with ~1.6 C sensitivity via energy balance estimates rather than the 3.0 C used to inspire policy.

    • This merely suggests there is a chance to create a theory concerning invariants of chaotic systems.
      It does not address numerical error, which is highly correlated. Such systems are impossible to compute accurately and so averages are meaningless.
      Also, such a theory would have to directly address the quantities desired and whether they are invariant or no.

      But numerical error is the dominating issue. Until you can prove the statistics of the numerical system are somehow related to the statistics of the mathematical solution you can’t say much.

      But it is interesting. A good project to drive some PhD candidates to total insanity (probably very hard).

    • 2. actually compute the average TREND for all those EM

      Except that there is only ONE ensemble mean in this study and it is of a pathetically small sample for which any statistics will be useless. Look at how many thousands of samples were taken until the statistical patterns converged, and the water mill was an incredibly simple system mechanically.

      So how did the researchers choose the figure of 30 runs as being that which is necessary to determine the statistics of the model?

      In short they didn’t. The result is spurious, like just about everything else which comes out of climate models, and for the same reason.

    • Even with the perfect model all initial conditions would need to be sampled to accurately define the “natural variability” chaotic attractor. There is no reason to believe that the only initial condition that matters to define the attractor is temperature. Also, what about the sampling of model uncertainties, e.g. mixed phase heat transfer multipliers, etc.

    • Nice illustration there Steven.

  9. This idea is very alluring. Oh how wonderful it would be if it were true – if it were really that simple. It is not that simple. And, in my opinion, is almost certainly not true.

    Lorenz might disagree, Kip.

    In chaotic systems, the results only appear to be random – but they are not random at all – they are entirely deterministic, each succeeding point being precisely determined by applying a specific formula to the existing value, then repeating for the next value.

    Lorenz (1968) is worth reading for many reasons. The final paragraph questioned even climatic determinism.

    The output of chaotic systems thus cannot simply be averaged to remove randomness or variability – the output is not random and is not necessarily evenly variable.

    Calculating an ensemble spread isn’t taking the average.
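    A minimal sketch of that “deterministic, yet sensitive” behaviour (my own illustration in Python, using the logistic map as a stand-in, not a GCM): perturb the starting value by one part in a trillion, the same order as the CESM-LE perturbation, and iterate the exact same formula.

```python
r = 3.9                      # logistic-map parameter in the chaotic regime
x, y = 0.4, 0.4 + 1e-12      # starting values one part in a trillion apart

for _ in range(100):         # each value computed exactly from the last
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)

print(abs(x - y))            # O(1): the two runs no longer resemble each other
```

    Nothing random happens anywhere in the loop, yet after roughly sixty steps the two runs bear no resemblance to one another.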

    • Brandonrgates,

      You wrote –

      “Calculating an ensemble spread isn’t taking the average.”

      It’s just a completely pointless waste of time. Is that more to your liking?

      Cheers.

    • catweazle666

      “Calculating an ensemble spread isn’t taking the average.”

      Indeed.

      It’s even more meaningless than taking an average would be – if such a thing is possible.

    • Brandon ==> See Pielke’s links above.

      As for “average” and “mean” — the words are theirs — the error the same.

      • See Pielke’s links above.

        Lorenz’s works being the topic of your article, I’d say there is more than enough material to reference already.

        As for “average” and “mean” — the words are theirs — the error the same.

        Not in the quote you were critiquing. They specifically said “spread”.

      • Brandon ==> There are two issues being discussed “The ensemble mean … averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. ” and separately “Hence, any spread within the ensemble results from unpredictable internal variability superimposed upon the forced climate change signal.”

        I specifically characterize the ensemble spread in the essay as “represent[ing] only the spread allowed by the constraining structure and parameterization of the model itself.”

      • Kip,

        There are two issues being discussed “The ensemble mean … averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. ” and separately “Hence, any spread within the ensemble results from unpredictable internal variability superimposed upon the forced climate change signal.”

        Yes, the former statement comes from the press release, the latter (and I think better) statement from the literature.

        I specifically characterize the ensemble spread in the essay as “represent[ing] only the spread allowed by the constraining structure and parameterization of the model itself.”

        Duh. That’s how they can make confident statements about what the ensemble spread, means, and any other statistics they care to compute represent in *model world* — they’re the ones who programmed the thing.

        Same applies to Lorenz’s toy models from his earlier works. Same also applies to the more realistic, but still quite simple GCM he used in his 1989 paper wherein he concluded that the ocean/atmosphere system is “unlikely to be intransitive”.

        I think you’re right that the CESM Large Ensemble “validates” Lorenz. At the very least, the results look to be entirely consistent with *especially* his 1989 paper. I’m far from convinced you understand *how*.

      • Brandonrgates,

        According to you, the literature states –

        “. . .only the spread allowed by the constraining structure and parameterization of the model itself.”

        Colour me totally underwhelmed.

        Does this mean the model is constructed in such a way as to produce climatologically acceptable results only?

        Even the IPCC – presumably the mouthpiece of the Church of Climatology – states that predicting future climate states is not possible. Are you truly expecting anyone to believe that you are right, and the IPCC is wrong?

        Keep playing with words. You appear not to believe anyone other than yourself, which might lead people to assume even the IPCC is more knowledgeable than you.

        Maybe you could resort to “Duh!” or “Errr, wrong.”, or some similar attempt to deny, divert and confuse. Even better, present a falsifiable hypothesis relating to the GHE. Neither the IPCC nor anybody else has managed to do this yet. I’m prepared to be blinded by your effulgent intellect.

        Cheers.

      • According to you, the literature states –

        “. . .only the spread allowed by the constraining structure and parameterization of the model itself.”

        Not exactly. I take it as a given that computer programs only do exactly what one tells them to do.

        Colour me totally underwhelmed.

        Colour me shocked.

        Does this mean the model is constructed in such a way as to produce climatologically acceptable results only?

        Acceptable is such a loaded term, Mike, with a lot of subjective wiggle-room to boot. I already know you don’t truck with the idea of CO2 being an external forcing agent, and you bet your sweet bum Teh Modulz are constructed to respond to varying quantities of it.

        The idea of doing 40 runs with the same model is an *attempt* to *better* constrain the range of *possible* climate states under the same forcing regime — with only infinitesimally small random perturbations to initial conditions — than the much smaller ensemble sizes seen in AR5. (IIRC, the max ensemble size for any single given model in AR5 was 10 members.)

        Kay et al. (2015) essplain:

        COMPARISON TO CMIP5.

        CMIP5 is frequently used to assess uncertainty in future climate projections. But CMIP5 is an ensemble of opportunity, and the spread within the CMIP5 archive is not easy to interpret. Individual CMIP5 ensemble members can have differing physics, dynamical cores, resolutions, and initial conditions. To complicate matters, many CMIP5 models share genes and are therefore not independent (Knutti et al. 2013). In sum, spread in CMIP5 climate projections results both from model formulation differences and from internal climate variability, the relative importance of which is unknown. Unlike the CMIP5 ensemble spread, CESM-LE ensemble spread is generated by internal climate variability alone. Given these points, a natural question becomes how much of the spread in CMIP5 projections can be explained by internal climate variability alone? We can answer this question by using the CESM-LE to estimate the influence of internal climate variability on ensemble spread.

        It’s really worth reading the summary:

        SUMMARY AND BROADER IMPLICATIONS.

        Understanding forced climate change in the presence of internal climate variability is a major challenge for climate prediction. To make progress, the science and stakeholder communities need relevant climate model experiments with useful outputs. The CESM-LE addresses this challenge through its transparent experimental design and relevant accessible outputs. Initial illustrative CESM-LE results affirm that because of the internal climate variability, single realizations from climate models are often insufficient for model comparison to the observational record, model intercomparison, and future projections. A publicly available ensemble with the scope, and the amount of community input as the CESM-LE, has never been performed before. We anticipate analysis of this ensemble, alone or in combination with other ensembles and/or regional climate simulations, will lead to novel and practical results that will inspire probabilistic thinking and inform planning for climate change and climate model development for many years to come.

        Emphasis mine. Reminds me of everyone’s favourite AR4 quote even before you “reminded” me of it:

        Improve methods to quantify uncertainties of climate projections and scenarios, including development and exploration of long-term ensemble simulations using complex models. The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential.

        Again, my emphasis. Thanks, as ever, for leaving out what comes after “the long-term prediction of future climate states is not possible”. Go Team Integrity!

        I think it’s rather nifty when scientists continue to find ways to do that which was identified as necessary almost (at least) a decade prior. I’m not surprised when “sceptics” don’t acknowledge progress for what it is.

      • All ==> Brandon Gates has been obfuscating this discussion with erroneous, vague references to and interpretations of, various papers by Edward Lorenz.

        The best list, with links to [mostly] .pdfs, of E. N. Lorenz’ work is found here.

        I will assume that Brandon is citing these papers from [faulty] memory.

        It is Lorenz’ 1968 paper on Climatic Determinism which discusses the intransitive and transitive nature of various physical systems — as well as the “almost intransitive” physical systems, which is considered the most likely case for the coupled atmosphere/ocean system in climate science today. Pielke Sr. and friends discuss this in the linked paper he offered early on in these comments: Climate prediction as an initial value problem.

        It is this same 1968 paper, very early in the study of Chaos Theory and very early in Climate Science, in which Lorenz stated “In summary, climate may or may not be deterministic. We shall probably never know for sure, but as further mathematical theory is developed, and as more realistic models are constructed, we may become more and more confident of our opinions.”

        Edward Lorenz continued to contribute to both Chaos Theory and Climate Science through the first decade of the 21st Century, passing away in 2008; his last scientific work was published that same year.

        He was still professionally active and replying to a comment by others “regarding the general practice of approximating solutions of ordinary differential equations (ODEs) numerically.”

        Any and all references to the work of Edward Lorenz, and interpretations of what he thought or believed, should be checked carefully against his actual published works.

      • I will assume that Brandon is citing these papers from [faulty] memory.

        Bad assumption, Kip.

        Any and all references to the work of Edward Lorenz, and interpretations of what he thought or believed, should be checked carefully against his actual published works.

        No kidding. Are you ready to talk about Lorenz (1989) yet? The part where he concludes that the ocean/atmosphere system is “unlikely to be intransitive”? It’s right there in the abstract. You can’t miss it.

        I’ve only cited it like three times prior to this post, and quoted it twice.

      • > Pielke Sr. and friends [1] discuss this in the linked paper [2] he offered early on in these comments [3] […]

        Some comments:

        [1]: It’s single-authored.

        [2]: It’s a Letter to the Editor.

        [3]: It’s far from being the first time Senior plugs that letter somewhere on the Internet.

        Why climate should be considered an initial-value problem has not been argued in that letter. All the letter does is reach the conclusion that if we do so consider it, there’s little prediction we can reasonably make beyond weather patterns.

        Delimiting this enormous state space (i.e., fixing boundary values) just makes more sense to me.

  10. Kip,

    Nicely done.

    Please excuse me for lapsing into incivility with commenters who seem incapable of even trying to understand what you wrote, and the implications.

    Cheers.

    • Mike ==> No worries.

      • Kip Hansen

        Using models, one is trying to predict the future; in fact, a losing proposition. If any model were successful in its ability to predict the future of, say, a stock price or even what the weather will be the very next day, we would see a much different Wall Street and, BTW, different climate prediction center press releases. One is stymied in laying any bet by the plethora of outcomes possible, even short range.

        So some very smart, some very monied people still can’t say what the future holds. Climate scientists, with their agenda-driven efforts, are in the same boat as stock speculators. Sometimes they win, and sometimes they lose.

        My statement is not just: “A pox on your house”. Rather, I would like to know what tomorrow will bring. I can guess. I can speculate. I can even lay a bet with a willing taker. Yet, all I have is just that, a bet on the future.

        Acknowledging Lorenz and chaos and speculations on the future remain…indeterminate.

      • RiH008, it depends on how complex the system you are trying to model is. Predicting pressure in a steel cylinder when a valve is opened is fairly easy. Predicting the flight path of a punctured latex balloon is difficult.

  11. I’ll give a simple analogy. We have two numbers, 1 and 101, both of which are claimed to be correct because “the science is settled”. The “ensemble mean” is 51, but it’s virtually worthless. What’s more, if either 1 or 101 were actually correct, why is anyone bothering with the other number?
    This is the fantasy world of climate modellers, who claim that their models are always accurate. Maybe we should just call them developers of online climate games.

  12. Kip, yes, what you wrote is quite correct.

    Most successful engineering modelling methods (finite element mechanical strength modelling and electrical circuit network modelling) work by iterating on the “goes inta” and “comes outa” values of specific physical quantities (stress, strain, voltage, current) at a number of locations in a model until they all satisfy known physical laws (within a specified tolerance). These models “converge” (when done properly) to a final answer, as the sketch below illustrates.
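    As a minimal sketch of that iterate-until-it-balances style (hypothetical component values, Python): Newton’s method driving the Kirchhoff current residual at a resistor-plus-diode node below a tolerance.

```python
import math

# Hypothetical circuit: source Vs through series resistor R into a diode to ground.
Vs, R = 5.0, 1000.0           # volts, ohms
Is, Vt = 1e-12, 0.02585       # diode saturation current (A), thermal voltage (V)

Vd = 0.6                      # initial guess for the diode voltage
for _ in range(50):
    f = (Vs - Vd) / R - Is * math.expm1(Vd / Vt)    # KCL residual at the node
    df = -1.0 / R - (Is / Vt) * math.exp(Vd / Vt)   # derivative of the residual
    step = f / df
    Vd -= step
    if abs(step) < 1e-9:      # converged: correction below tolerance
        break

print(f"Vd = {Vd:.4f} V, I = {(Vs - Vd) / R * 1e3:.3f} mA")
```

    The loop stops when the physical law (current in equals current out) is satisfied within the specified tolerance, which is the “converge to a final answer” behaviour described above.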

    Other engineering models, like optical ray tracing, actually “trace” thousands and millions of rays through a model of an optical system (lens curvatures, spacings, refractive indices, etc.) to predict the distribution of light “rays” at some output location.

    Taking a non-linear chaotic system and starting a model of it with initial conditions that are trillionths of a degree (temperature) apart and expecting any useful outputs is silly. And a complete waste of taxpayers’ dollars.

    And averaging these useless outputs into an “ensemble” just compounds the waste. Reminds me of a quote: “Having lost sight of our goal, we redoubled our efforts”. Seems the climate science folks lost sight of the goal of understanding the climate, so now they are just rerunning the models over and over again hoping some clarity will suddenly appear…..

    There is no “ensemble” of models that will predict the “average” strength of a bridge.

    There is one proper model that predicts the strength of the bridge and tells if a proper safety margin exists. This margin is intentionally included to allow for corrosion, overloading, wind loads, etc., etc.

    If one wants to lower the “extra strength” designed into the bridge and make it “just exactly” strong enough for the loads expected, one needs to refine the model.

    Cheers, KevinK.

    • Curious George

      Is there an estimate of accuracy of these models? If so, where?

      • George, this is somewhat of a trade secret.

        In general, if a finite element mechanical strength model matches the actual performance of a structure to better than 20% everyone is quite pleased. That is “state of the art”. Nobody to my knowledge can predict the strength (i.e. when it will fail) to within a few percent, and certainly not to thousandths of a degree.

        Depending on the application (bridge, building, airplane, satellite) the margin of safety ranges from 125% (satellites) to 200% or 300% (bridges). In the case of a bridge it is more cost effective to make it 300% of the strength needed rather than have it fail. For a satellite where the cost to launch it is significant the standard is more like 125% – 150% stronger than necessary to survive the expected loads. In the case of satellites a “qualification model” (QM) is built and tested to failure. Once the test shows that the device can survive loads 125% greater than necessary it is considered safe to make an exact copy (down to tracking the exact foundry “melts”, or batches of metals) of the “QM” and have reasonable confidence that the final product will survive the loads.

        Models of electrical circuits that match to 5%-10% are considered quite good.

        Cheers, Kevin

      • Curious George

        Kevin, thanks. Climate models are not finite element models, and, whatever climastrologists say, they “solve” an initial value problem, not a boundary value problem. I don’t believe that an estimate of accuracy is a trade secret; I believe it does not exist at all. What accuracy would you expect from a 200 km grid?

  13. Brilliant analysis.
    Thank you.

  14. Would like to add that a body of evidence exists for the Hurst phenomenon of persistence and memory in the evolution of surface temperature. Here is an example:
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2763358

    Also, since you invoked independence in coin tosses as an example, I would like to share this video of a chaotic coin toss created by adding a small degree of memory and persistence to the coin.

    • chaamjamal ==> Thank you for the link to your paper. I admit to only having glanced at it so far, but it looks interesting.

      Both the Lorenz 1968 paper and Pielke Sr. have some relation to the apparent fact that climate involves “memory” in some sense over some time scales.

    • We use the coin toss as a handy metaphor for randomness, but coin tosses are actually completely determined. True randomness does not exist, or science would be meaningless. This is the fundamental error in the attempt to parse the signal using a mathematical construction of randomness (statistics).

      The incorrect assumption of randomness becomes an initial condition.

      Of course, this is what we use statistics for; the blind hand grasping. Yet the clues statistics gives us become useful only when the results become predictive.

      Let their unforced (random) component now “predict” variation before 1850…

  15. I’m curious if some model runs were done from identical starting conditions to test if variation was introduced due to floating point errors or other errors. For something this complex, that test should be run.

    • jim2 ==> Hmmmmm…sort of. This whole deal with the CESM-LE is almost that sort of thing. They do multiple runs of the exact same model, with all the inputs and parameters exactly the same except they alter the input (initial conditions) of “global atmospheric temperature” by “less than one/one-trillionth of a degree C” for each run — which is, in effect, the magnitude of error introduced by floating point errors. Result? 30 entirely different projections of 2012 North America Winters.

      • Yes, Kip, I caught the “less than one/one-trillionth of a degree C!!!” part.

        But you have either misunderstood my question or you are dodging it. When I said run the models multiple times with IDENTICAL starting conditions, I didn’t say except for “less than one/one-trillionth of a degree C!!!” change in global temperature.

        I said run the same model with IDENTICAL initial conditions multiple times. This has to be done in order to prove your “less than one/one-trillionth of a degree C!!!” variation is meaningful.

        I suspect the model output will be all over the map even when initial conditions are identical. That’s probably just the way the model works.

        https://www2.physics.ox.ac.uk/sites/default/files/2011-07-05/arnold2013_weather_pdf_68409.pdf

      • jim2 ==> It is not, of course, my study…it is a study run by Dr. Deser and others involved with the LENS | Large Ensemble Community Project.

        If they had run the exact same model repeatedly with exactly the same initial conditions on exactly the same computer they would have had 30 exactly the same results.

        If they had used different computers running differing operating systems and underlying code, they might indeed have had one result per system, because each operating system/software/firmware combination can have differing rounding rules, and thus inject the same type of infinitesimal changes in conditions in the middle of model runs.

        This last issue is well known and understood.
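        A minimal sketch of why (Python; this is standard IEEE-754 behaviour, not anything specific to CESM): floating-point addition is not associative, so hardware or compilers that group operations differently produce slightly different numbers, which a chaotic model then amplifies.

```python
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 -- the 1.0 is lost when rounded into -1e16 first
```

        Same three numbers, different grouping, different answer.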

      • catweazle666

        Kip Hansen: “If they had run the exact same model repeatedly with exactly the same initial conditions on exactly the same computer they would have had 30 exactly the same results.”

        That depends IMO on whether the model had some stochastic elements to emulate climate variability, and how the random number generator that produces the stochastic element is initialised. Some systems AFAIK use the system clock to initialise one or more of the parameters used in the random number generator; should that be the case, it would be unlikely that a number of runs using otherwise identical initialisation parameters would produce the same result.

        Of course, it may be that I am entirely incorrect and the devices used for the model runs either don’t use stochastic methods, or initialise them the same every time. This would not be the first time!
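        For what it’s worth, a minimal sketch of the seeding point (Python’s standard library; any resemblance to how a GCM actually seeds its generators is an assumption):

```python
import random
import time

random.seed(42)                            # fixed seed
run1 = [random.random() for _ in range(3)]
random.seed(42)                            # same seed again
run2 = [random.random() for _ in range(3)]

random.seed(time.time())                   # clock-derived seed
run3 = [random.random() for _ in range(3)]

print(run1 == run2)   # True: bit-for-bit repeatable
print(run1 == run3)   # almost certainly False: a fresh "stochastic" run
```

        A fixed seed reproduces a “stochastic” run exactly; a clock-derived seed does not.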

      • So you say, but show me the model runs with identical starting conditions. Until then, I don’t believe it.

        You are assuming the program is, overall, deterministic. Prove it instead.

      • jim2 ==> I am having trouble taking your question seriously now….identical computer code running with identical inputs on identical equipment can do nothing other than produce identical results. It’s a computer, for heaven’s sake, and the code works exactly the same every time. There are no computer pixies — and no sneaky chaos monsters — in there messing with things. The roundings will be exactly the same each time it is run under the stated identities.

      • Kip, I guess you didn’t look at the link I supplied, from that pdf:

        Stochastic parametrization is a controversial topic in atmospheric science. It represents uncertainty in forecasts by including random numbers, or noise terms, in the forecast model. However, it is not clear whether including these random terms also has detrimental effects on the forecast which might be avoided by using traditional deterministic schemes. This National Meeting of the Society, held at Imperial College London in April 2013, was opened by the organiser, Paul Williams (University of Reading). He explained how the origins of this meeting could be found ten years ago, in an article in The New Scientist magazine (Samuel Reich, 2003). This article describes results from Williams’ PhD work, where he was surprised to find that including noisy stochastic terms improved the performance of his model; the two experts quoted in the article commenting on his results gave the opening talks at this meeting.

        A recurring theme of the meeting

        https://www2.physics.ox.ac.uk/sites/default/files/2011-07-05/arnold2013_weather_pdf_68409.pdf

      • jim2 ==> There is no mention in the CESM project of the use of stochastic (chaotic or randomly changing) parameters — had there been, their claim of identical everything except the initial condition perturbation would be false, and their hypothesis that the spread of the ensemble and its variability are due only to the natural chaotic variability of perturbed initial conditions would be senseless.

        However, one never knows — they could have overlooked this issue or may not really understand the CESM details.

        The full paper includes an email address for Dr. Deser if you’d like to check the point with her. If you do so, please post the answer here.

      • Hi Kip. Your comment concerning the understanding of the model is apropos. Here are some bugs in CESM. These mean the model may do some things not according to Hoyle.

        But this doesn’t answer my question of whether the model is deterministic. (Same inputs, same machines, OS, etc.)

        http://bb.cgd.ucar.edu/forums/atmospherecam

      • jim2 ==> The link you provide shows that the CESM was under active development and debugging — with the items all from 2014. Actually, having been in IT (web) development, it looks like they beat most of the bugs out in that period — nothing new reported for two years. Of course, that doesn’t mean that problems don’t remain.

      • It appears they do expect bit-for-bit replication of a test. But I don’t see the test results for a given build of the model. Also, the test is only for 5 model days, so any divergence might not appear in such a short model time. Also, how do you know you’ve hit all branches of the code in a five day test?

        https://www2.cgd.ucar.edu/sites/default/files/events/related/automated-test-system-print.pdf

      • I’m a software developer. Hence, the questions.

      • jim2 ==> Yes, I’ve done my time in the IT salt mines (in web development on massively high volume web sites) and your questions are appropriate for the CESM development team — and, if the paper under discussion were instead a manned space flight — very important.

        The LENS community project might be the place to pose these questions.

        For this essay, the important point is that they did expect (as I did) a bit-for-bit replication between model runs in the absence of any differences in inputs.

  16. Kip Hansen:
    I think I agree they overreached. Previously I gave this example:
    https://judithcurry.com/2016/10/01/week-in-review-science-edition-57/#comment-814806
    Suppose they didn’t change the CO2 levels and did the runs; they would get a PDF and an ensemble mean for that. Now use their actual runs, and there’s a PDF and an ensemble mean. Suppose the two outcomes, though differing in temperature, are nearly identical. We might say in both cases that’s natural variability. However, if the outcomes differ by more than just temperature differences, for instance the distribution is wider or narrower, now what? We might have CO2 changing natural variability. Or we might have natural variability changing with temperature.

    1916 Natural variability is X
    2016 Natural variability is Y
    CO2 caused the temperature to change.
    The temperature rise caused the natural variability to change.
    Which natural variability do we use now?
    When temperature records are being set, I want to use Y. Everything’s fine. But the temperature data sets include the natural variability of X.

    In their study, which natural variability are they using? Maybe equally from all years of the run. So the results would refer to the middle of the run or middle half of the run. I don’t know. This is the natural variability of what time frame? If it’s not now then what?

  17. Consider this figure from the paper.
    http://journals.ametsoc.org/na101/home/literatum/publisher/ams/journals/content/bams/2015/15200477-96.8/bams-d-13-00255.1/20150904/images/large/bams-d-13-00255.1-f2.jpeg
    What we see up to 1850 is the variability when the forcing is kept constant, basically a non-changing mean temperature. What we see after 1920 is the variability when the forcing is gradually increasing. You can use this to quantify the effects of the forcing versus natural variability. All the members have the same upward curve from their common forcing, but vary about the mean rise rate. Natural variability is tightly constrained by the energy balance (as you see the Lorenz butterfly diagram is tightly constrained with no wandering outside certain bounds). The natural range is that of El Nino to La Nina years, maybe about 0.5 C in global temperature. Meanwhile the forcing adds degrees to where the mean is. This is what is meant by climate change. It is the signal over the noise. In 1981, Hansen predicted that the signal would appear out of the noise by 2000, and he was right.
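    (A toy version of the signal-over-noise claim, in Python/NumPy with made-up numbers rather than CESM output: give every member the same forced trend plus its own random-walk “internal variability”, and the ensemble mean recovers the trend.)

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(50)
forced = 0.02 * years                          # common forced trend (0.02 deg C/yr)
internal = np.cumsum(rng.normal(0.0, 0.05, (30, 50)), axis=1)  # per-member random walk
members = forced + internal                    # 30 synthetic "runs"

em = members.mean(axis=0)                      # ensemble mean
print(np.polyfit(years, em, 1)[0])             # fitted slope comes out close to 0.02
```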

    • Jim D ==> I appreciate your clear statement of the [erroneous] consensus interpretation.

      What your model output (which does not show climate at all, but rather numerical output of projected “global mean surface temperature”) means to me is that the model is primed to produce the graph as shown — through structure, hard-coded assumptions, and repeated parameterization.

      This is the same thing that the CESM-LE shows: what the model produces regardless of actual climate factors (of any kind).

      • OK, if the global mean surface temperature is changing on decadal time scales, that means the climate is changing: agree or not? Or have you just moved the goalposts? What does it take for you to say something is climate change if not what has happened since 1960? From the graph we see that in the last 50 years, the global mean temperature in all the ensemble members moved several times more than their internal variability, making the effect of their common forcing signal easily detectable. We also see from the 1850 experiment what happens when the forcing doesn’t change. Your article makes no sense in the context of the results they show, because it doesn’t even address the 1850 experiment and what it means.

      • Jim D,

        I think the key to understanding Kip’s argument is this snippet from the essay:

        To conflate the mathematically chaotic results of the modeled dynamical system with real-world “natural variability” – to claim that the one is the same as the other – is a hypothesis without basis particularly when the climate effects are divided into a two-value system consisting only of “natural variability” and “human-caused climate change”.

        Which is a howler because not only do Kay et al. (2015) NOT do that, they do rather the opposite (my emphasis):

        While the comparison of models and observations provides important insights, climate change projections are made in part to plan for a future we cannot observe. As such, we next examine the influence of internal climate variability on near-future trends, looking forward as far as we looked backward (i.e., 34 yr). Similar to Fig. 4, Fig. 5 shows that most CESM-LE members exhibit 2013–46 warming trends in most regions, with the exception of the North Atlantic. Yet, in some ensemble members, the internal climate variability is large enough to overwhelm the forcing and result in cooling in some regions. For example, ensemble member 20 shows pronounced cooling of 3 K over Asia that is in stark contrast to the appreciable warming of 6 K in that same region in ensemble member 24. The take-home message is clear, consistent with previous studies analyzing large ensembles with the same model and the same external forcing, and needs to be better communicated (e.g., Deser et al. 2012a): we need to plan for a range of future outcomes not only because climate models imperfectly represent the relevant processes, but also because there are inherent predictability limits in a climate system with large internal climate variability.

        I remain convinced that consensus climatologists do more to attack the limitations of Teh Modulz than “skeptics” do. As it should be. After all, it’s probably a safe assumption that climate modelers have a far better understanding of how their own models work than any of us sideline punters do.

  18. Kip,

    One comment is that the press release about the paper refers to natural variability and mentions numerous climate mechanisms. However, it appears that those more climate concerned substantiate the work only utilizing temperatures. It was commented before on the WIR thread and should be said again here. Temperature in and of itself, is not climate.

    Much more time needs to be spent working through this. Just wished to express appreciation.

    • Danny ==> Thank you. You are right about the temperature-centric view.

    • However, it appears that those more climate concerned substantiate the work only utilizing temperatures.

      Well gee, Danny, we don’t have as good of records of ice sheet coverage, humidity (relative and specific), deep ocean temperature, TOA radiative flux, wind speed, precipitation, cloud cover … etc. going all the way back to the mid 19th Century like we do surface temperature. Sure would be nice though, wouldn’t it.

      Temperature in and of itself, is not climate.

      I’ve gone and made myself a Post-It note saying just that thing because you’re right, I’m always somehow managing to forget it.

      ***

      The data page for the CESM Large Ensemble contains no fewer than 1,046 distinct output parameters. Feel very free to *yourself* discuss in *specific detail* any or all of the ones which you think have not been given enough attention. That’s why they made the data publicly available for everyone.

      Or you could continue to handwave at them and implicitly make it the job of “those more climate concerned” to reveal for you whatever it is you *apparently* think is in there which better “substantiate[s]” (or not) “the work”.

      • Brandon,

        “we don’t have good of records of…………” temperature “back to the mid 19th Century” apparently because of coverage and methodology. Hence those data are ‘created’ today. (Note. Not arguing validity, just making the statement upon which surely we can concur)

        “1,046 distinct output parameters.” which are stated as having been ‘bounded’ by this model ensemble, yet as you suggest: “we don’t have as good of records of ice sheet coverage, humidity (relative and specific), deep ocean temperature, TOA radiative flux, wind speed, precipitation, cloud cover … etc. going all the way back to the mid 19th Century like we do surface temperature. Sure would be nice though, wouldn’t it.” And it would be nice, so that the premise that ‘historic’ ‘natural variability’ has been ‘defined’ could be supported. (And please correct me if I’m wrong, but ‘historic’ extends a bit further back than the mid 19th century.)

        Leading to: “implicitly make the job”. It wasn’t my statement that the work has accomplished what it’s purported to have done. How is it ‘implied’ (imposed?) that it’s my ‘job’ to substantiate?

        AFAIK only the chart of temperature has been shown by ‘those more climate concerned’. So based on your count, 1045 additional resources need to be presented. It’s your baby and you’re defending it strongly. I’ll look forward to your upcoming very long post with 1045 more charts showing ensemble results vs. ‘historic’.

        I’m skeptical that a follow on post from you, doing as you suggested I do, is forthcoming.

      • So based on your count, 1045 additional resources need to be presented.

        To what end, Danny? So all and sundry can simply dismiss them as more useless pretty pictures from Teh Modulz?

        No thanks. I’ll knoodle the ones I’m interested in as I have time to get to them.

        It’s your baby and you’re defending it strongly.

        It’s only my baby in the sense that I’m a US citizen, and my tax dollars supported its creation. If we’re residents of the same country, it’s just as much yours as it is mine.

        Your challenge, “Temperature in and of itself, is not climate.”, is yours. I don’t disagree with you, but *you* get to be the one to illustrate why your formally correct argument has any utility other than the rhetorical quality of implying that anyone who cannot, or has not, ground through in excess of 200 TB of data is ignorant that temperature is only one of many climate parameters.

        AFAIK only the chart of temperature has been shown by ‘those more climate concerned’.

        Look harder, though be forewarned — doing so might increase what you know about and thereby wreck your ability to argue that the climate concerned think it’s all about temperature. Learn too much, and you may completely run out of facile arguments.

      • Danny Thomas

        Brandon,
        “To what end, Danny? So all and sundry can simply dismiss them as more useless pretty pictures from Teh Modulz?” Well. That’s one choice. Alternatively it could be to ascertain the viability of the claim.

        So (and thank you for the link). Looking ‘harder’ here: http://www.cesm.ucar.edu/projects/community-projects/LENS/projects/aridity-evolution.html

        “We will also look at how aridity may evolve in the 21st century.” Noble cause. But makes me wonder why they want to know how aridity “may” evolve as opposed to how aridity (or lack of it) ‘will’ evolve. Since they ‘know’ the bounds of the natural variability and the scenarios (RCPs) are worked out, why the ‘waffle’?

        Phew. That was close. I almost ‘increased what I know about’ and nearly ‘wrecked’ and ran out of facile arguments.

        But makes me wonder why they want to know how aridity “may” evolve as opposed to how aridity (or lack of it) ‘will’ evolve. Since they ‘know’ the bounds of the natural variability and the scenarios (RCPs) are worked out, why the ‘waffle’?

        Much depends on which ‘they’ we’re talking about. I didn’t get the sense that the authors of Kay et al. (2015) are still beating their significant others. PR flacks have different aims, and thus different training for composing prose.

        Phew. That was close. I almost ‘increased what I know about’ and nearly ‘wrecked’ and ran out of facile arguments.

        As you said last thread, sometimes these things take time, Danny. It may come down to which one of us is the more bull-headed and stubborn. On that note, I doubt I’ll soon tire of quoting this snippet from the paper:

        SUMMARY AND BROADER IMPLICATIONS.

        Understanding forced climate change in the presence of internal climate variability is a major challenge for climate prediction. To make progress, the science and stakeholder communities need relevant climate model experiments with useful outputs. The CESM-LE addresses this challenge through its transparent experimental design and relevant accessible outputs. Initial illustrative CESM-LE results affirm that because of the internal climate variability, single realizations from climate models are often insufficient for model comparison to the observational record, model intercomparison, and future projections. A publicly available ensemble with the scope, and the amount of community input as the CESM-LE, has never been performed before. We anticipate analysis of this ensemble, alone or in combination with other ensembles and/or regional climate simulations, will lead to novel and practical results that will inspire probabilistic thinking and inform planning for climate change and climate model development for many years to come.

        Way I learnt it, ‘probabilistic thinking’ is the very antithesis of ‘knowing’ how things ‘will’ evolve.

        Willard might wonder if it’s a vocabulary thing.

      • “PR flacks have different aims, and thus different training for composing prose.”

        What do you suggest then were the ‘aims’ of the “PR flacks” who put together the link which began all this, Brandon? Personal preference is not to impose motives on others, but have at it if you choose.

        “Initial illustrative CESM-LE results affirm that because of the internal climate variability, single realizations from climate models are often insufficient for model comparison to the observational record, model intercomparison, and future projections.” Me red az skience not settledd.

        Maybe someone should come along and Fire some PR folks?

      • > What do you suggest then were the ‘aims’ of the “PR flacks” who put together the link which began all this

        Scientists always seem to make contrarians do it.

        Wonder why.

      • “Scientists always seem to make contrarians do it.”

        Not sure that’s what Brandon suggested, W. Think he was indicating that PR flacks were misrepresenting the scientists (or the science).

      • I think that’s what you’re suggesting, Danny.

      • Might I ask then what you’d ‘think’ Brandon suggested via his choice of words quoted: ” PR flacks have different aims, and thus different training for composing prose.”?

        Maybe I misunderstood ‘PR Flacks’, ‘different aims’, and ‘thus different training for composing prose’.

      • You may have rather misunderstood “began all this,” which interestingly you did not quote, Danny.

        It’s seldom science, but it’s always important.

      • “You may have rather misunderstood “began all this,” which interestingly you did not quote, Danny.”

        Or perhaps you did?

        “In her latest Week in review – science edition, Judith Curry gave us a link to a press release from the National Center for Atmospheric Research (which is managed by the University Corporation for Atmospheric Research under the sponsorship of the National Science Foundation – often written NCAR/UCAR) titled “40 Earths: NCAR’s Large Ensemble reveals staggering climate variability”.

      • Either Kip started all this, Danny, or the PR.

        If it’s the PR, then contrarians can reply: it’s not science, but it’s important. Then the excuse becomes: the scientists (via their PR, which is not science, but is important) make contrarians do it.

        I think contrarians should own their schtick for a change. What about you?

      • I’m going with PR.

        And Brandon indicates: “PR flacks have different aims, and thus different training for composing prose.”

        Since the PR works for NCAR, wonder then what are their ‘aims’? Any response, not having queried both PR and NCAR, would seemingly be highly speculative.

      • > I’m going with PR.

        Of course you do, Columbo.

        Thus the PR Makes Kip Do It.

        Simple, isn’t it?

        It’s not science, but it’s important.

      • Maybe someone should come along and Fire some PR folks?

        Not gonna happen, Danny. As such, I typically ignore the PR and just read the papers. I’m of the mind that the best way to be informed about what *science* says is to read about it from the source. God(s) only know how much will be lost in translation when PR flacks set to summarizing the stuff.

      • Brandon,
        While not doubting the soundness of your method, it’s interesting that you’re so skeptical of the chosen practice of NCAR (and so many other scientific organizations).

        Does this apply to all statements not peer reviewed?

      • >Does this apply to all statements not peer reviewed?

        I’m skeptical of refereed literature too, Danny. I’m even skeptical of the peer review process.

        It’s a matter of degree. I tend to be *more* skeptical of PR than primary literature. Whether that’s truly warranted in all or even *most* cases is an open question; it’s based on a default *assumption* that PR flacks don’t have the same formal training and expertise as the study authors. Motive is also a question; an explicit function of PR typically is to promote or advertise … to tout. As such, appropriate qualifiers and caveats regarding shortcomings, uncertainties, assumptions, etc. are often glossed over. Not that scientists are immune from doing the same, mind — the Kay et al. (2015) authors are undeniably proud of their efforts and optimistic about the utility of their product. Again, it’s a matter of degree. IMO, the authors’ sales job didn’t come at the expense of them pointing out the places where their product might — or even likely — be a lemon.

        Let me be clear: I don’t think the CESM-LE release is “bad”. In fact, as PR goes, I think it’s rather good. I think the paper is “better” in that it is more informative, especially with respect to the issues and weaknesses inherent in using Teh Modulz to suss out climatic behaviour of the real system and (hopefully) make some useful inferences about the *statistics* of its future states under a given (*assumed*) forcing regime.

        A summary of my argument to you (and by extension, Kip) might be: Don’t judge a book by its cover, judge it by its own text.

        On that note, Kudos to Kip for contacting one of the authors (Deser) for clarification on the “missing” 10 ensemble members.

      • Brandon,

        “I’m skeptical of refereed literature too, Danny. I’m even skeptical of the peer review process.” Me too, hence my initial (in WIR) hesitation for acceptance lacking greater time, greater acceptance, and greater confirmation.

        “On that note, Kudos to Kip for contacting one of the authors (Deser) for clarification on the “missing” 10 ensemble members.” Agreed. And to your earlier point, a good PR flack (team?) would have addressed this in the release. (Had to find an area of facile disagreement; missing this tidbit, I give the PR a poor rating………it’s a pretty large miss.)

        And further Kudos to Kip for expanding the discussion.

      • >Me too, hence my initial (in WIR) hesitation for acceptance lacking greater time, greater acceptance, and greater confirmation.

        Like I said in that thread, we’ve only been using climate models of various types since the 1960s. Lorenz was one of the early advocates of multi-member randomly-perturbed weather forecasting models. IOW, it’s not like Kay & Co. have only just invented this particular wheel. How much more time do you need, Danny?

        The clock ticks. What it approaches is apparently only for Gods, not mere Modulz (no matter how variously perturbed and numerously ensembled), to know.

        >And further Kudos to Kip for expanding the discussion.

        Content is always welcome I suppose. I can’t say that I endorse his interpretations of Lorenz. I might feel differently if he’d address Lorenz’s 1989 conclusion that the ocean/atmosphere system is “unlikely to be intransitive” because that’s really what the But Chaos argument comes down to vis a vis reasonable ability to describe and predict an attractor.

      • Brandon,
        “How much more time do you need, Danny?”

        That’s a very good question and one I cannot specifically answer largely because I’m not the decider. By appearance, some works seem to be accepted by the scientific community (some may prefer the term ‘mainstream’) upon publishing. Some works seem to take more time to gain traction. Some seem to not achieve acceptance ever.

        This work is young but sure seems to have traction. 2nd-level works produced as a result of this ensemble, once scrutinized, should give a sense of where it stands.

        Is there anything unreasonable about that thinking?

      • Danny, PS:

        >And to your earlier point, a good PR flack (team?) would have addressed this in the release.

        Let it not be said I’m not annoyed by that lack of clarity, if only for the fact that it gives manufactroversy artists that much more from which to practise their dark arts.

      • Brandon,
        ” manufactroversy artists “. Like PR teams?

        Somewhat telling that you’ll say naughty, naughty when it comes to NCAR and its choices yet will put out all the energy it takes to invent/utilize ‘meme’-like words when others are suspicious.

        In complete agreement with you that going to the source is the prudent way to approach these issues. But do you even consider why NCAR has a PR team? Apparently NCAR disagrees with both you and me.

      • >Like PR teams?

        Sometimes. And then there are think tanks.

        >Somewhat telling you’ll say naughty, naughty when it comes to NCAR and it’s choices yet will put out all the energy it takes to invent/utilize ‘meme’-like words when others ares suspicious.

        Of course it’s telling, Danny. Your suspicions are constantly seeking confirmation.

        Mine are no different.

        >In complete agreement with you that going to the source is the prudent way to approach these issues.

        Good. That really should be the end of the discussion, but …

        >But do you even consider why NCAR has a PR team?

        Yes.

        >Apparently NCAR disagrees with both you and I.

        Apparent is in the eye of the beholder.

        Next.

      • Danny Thomas

        Musta been too obtuse. You and I agree going to the source (the work) is best. NCAR sees a need to not just put out the work but to process it thru PR, which interjects “40 earths” from a 30-member product.

      • Danny T.,

        >That’s a very good question and one I cannot specifically answer largely because I’m not the decider.

        I assume you are a taxpaying member of society living in a representative democracy. If true, no, you’re not *the* decider, but you are *a* decision-maker.

        >This work is young but sure seems to have traction. 2nd-level works produced as a result of this ensemble, once scrutinized, should give a sense of where it stands. Is there anything unreasonable about that thinking?

        It implies that the CESM-LE approach is conceptually unprecedented, with which I do not agree. I also don’t think that ‘skeptics’ will ever consider Teh Modulz anything but Stoopid no matter how much they’re scrutinized by 2nd, 3rd, nth-level works.

        My asking you “How much more time do you need?” was rhetorical. My suspicions already had their answer.

        I do hope you prove me wrong some day, but I won’t be waiting with bated breath.

        ***

        As a description of the scientific process in general, I have no issue with what you wrote.

      • >NCAR sees a need to not just put out the work but to process it thru PR which interjects 40 earths from 30 proxy.

        Here are the facts near as I can tell, Danny:

        1) Kay et al. submitted to the journal in final form on 25 September 2014. It was accepted and published online on 16 September 2015.
        2) According to personal communication between co-author Deser and Kip, only 30 ensemble members were complete at the time Kay & Co. did the study.
        3) The paper describes the ensemble as being composed of 30 members. There is no mention of 10 more members to come in that paper that I have found (I’ve specifically looked).
        4) On 29 September 2016, NCAR published a press release entitled: “40 Earths: NCAR’S Large Ensemble Reveals Staggering Climate Variability”.

        From that information, my inference would be that the 10 additional members were only just recently finished, hence prompting the press release.

        Before making any stronger statements of belief or suspicion, I’d personally do more due diligence to make sure I had my facts sorted. But that’s just me.

        I’d also be more interested in the data itself and the papers written about it than the intrigue of phantom ensemble members or the ubiquitous presence of research institute PR departments — but again, that’s just me.

      • >Just because you chose to post a rhetorical question doesn’t mean I’m not allowed to address it.

        I didn’t say anything about what you’re allowed or not allowed to do, Danny. Your freedom of expression is important to me, as is my own.

        >If you have issue with how I’ve done so state it (or not).

        My issue is that I don’t believe you’re sincere about wanting to know what this model ensemble may or may not tell us about climate. That’s *my* choice, and I do own it.

        >Your ‘suspicions’ might not match with my intent. My ‘suspicions’ are that my intent did not indeed match your ‘suspicions’.

        Agreed. Now apply that same thinking to this 10 missing ensemble member bug hunt, and you’ll perhaps understand where I’m coming from when I talk about manufactroversies and sincerity.

        >You keep using statements along the lines of: ” I also don’t think that ‘skeptics’ will ever consider Teh Modulz anything but Stoopid no matter how much they’re scrutinized by 2nd, 3rd, nth-level works.” as if ‘skeptics’ are some sort of amorphous monolithic entity.

        Which would be supremely irrational since ‘skeptics’ are objectively anything but a cohesive, organized group. Teh Modulz R Stoopid is so common, however, that I don’t mind so much using the broad brush stereotype.

        >This is you and I speaking.

        If you’re asking me to dial back the categorical pasting of ‘skeptics’ when I talk to you, I can respect that and oblige. I’d prefer it if you’d move off making hay about PR department snafus and ulterior motives and stick more closely to the technical/scientific issues (which are more important and interesting to me) but I won’t make that a condition of complying with your request.

      • Danny Thomas

        “My issue is that I don’t believe you’re sincere about wanting to know what this model ensemble may or may not tell us about climate.” Well I own this. Models are important. As a monolith ‘they’ have issues and successes. So ‘my’ preference is that ‘they’ be demonolithalized and evaluated based on each individual performance and/or same for ensembles. Which is exactly what I thought I’d communicated earlier. And I’ll not be ‘the decider’, but ‘science’ will.

        Out of all of yours and my discussion I’ve not suggested emergency action now, nor postponement. That’s policy and separate from model evaluation.

        “If you’re asking me to dial back the categorical pasting of ‘skeptics’ when I talk to you, I can respect that and oblige. I’d prefer it if you’d move off making hay about PR department snafus and ulterior motives and stick more closely to the technical/scientific issues.” Paste ‘skeptics’ as you wish. But recognize that I am one with caveats and a lukewarmer with caveats. Climate is regional, your region appears a bit warmer than mine. Over time that is subject to change, but as we’ve learned elsewhere recently even more severe winters have been attributed to climate change.

        As with Wadhams and the Arctic, Hansen and SLR/negative CO2, Hoerling/Trenberth/Dai and drought, Zwally vs. GRACE and Antarctica, and Steig and ‘it ain’t settled’, there are ‘scales’ of levels of concern. That this should not be fully understood by this amateur might be expected if these representatives of ‘science’ are not in full agreement. That full acceptance of a model ensemble be kept at arm’s length, awaiting some level of acceptance by science, does not seem unreasonable.

        But I’ve been wrong before.

      • Danny,

        >That’s maybe your implication, not mine. Mine is the work should stand alone and the rest of what was said before was based on that.

        Which doesn’t conform to my view of how the process of iterative science works … or more to the point how I think iterative science *should* work. The sum total of what we think we know about climate does not exist in the single silo of the CESM-LE members and I think it would be folly to think of it that way.

        To put it another way, scientific results rarely ever stand on their own. They stand on many, many previous results, often (and optimally) done by completely independent research teams. A scan of most research papers’ citations should suffice to demonstrate the veracity of my claim.

        My ultimate argument is, as ever, we don’t need no steenking Stoopid Modulz to know that CO2 is responsible for most of the observed warming since the mid-20th Century, and to know that reducing emissions will slow down that warming. Good old-fashioned empirical evidence tells us this.

        I repeat my rhetorical question to you: how much more time do you need? The clock ticks. Since you obviously don’t trust Stoopid Modulz, I don’t see that you have any basis for knowing how much time you have to indulge waiting.

        It should be obvious I’m vexed by my real or imagined opinion of your position.

      • Danny Thomas

        Brandon,

        “It should be obvious I’m vexed by my real or imagined opinion of your position.”

        Preceeded by “I repeat my rhetorical question”

        I’m not surprised.

        Non-rhetorical question for you. How much time for what? My ‘assumption’ was we were still discussing ‘the ensemble’, but I didn’t ask. However, based on that assumption, it’s how I responded.

      • >Non-rhetorical question for you. How much time for what?

        The first instance of me asking was directed specifically at this comment of yours, Danny:

        Me too, hence my initial (in WIR) hesitation for acceptance lacking greater time, greater acceptance, and greater confirmation.

        I repeat: the clock ticks. I’d ask you how long we have for you to accept and confirm, but Teh Modulz R Stoopid so unfortunately I can’t tell you either.

        Unavoidable ignorance could be a problem here, no? Some might call it Wicked.

      • “The first instance of me asking was directed specifically at this comment of yours, Danny:”

        All of which related specifically to the CESM ensemble. (Somehow I think you’ve expanded it in your mind). Answer to this: “I’d ask you how long we have for you to accept and confirm” still stands. It’s not based specifically on a time frame but on acceptance by ‘science’ and I’m not the decider.

        “Unavoidable ignorance could be a problem here, no? Some might call it Wicked.” U.I. is always a problem. Known or not.

      • >All of which related specifically to the CESM ensemble. (Somehow I think you’ve expanded it in your mind).

        I don’t see the CESM-LE as a revolution, but an evolution. When I ask how much time you need, the hint is that I don’t think you should need much. The CESM family of models has already been extensively validated.

        >Answer to this: “I’d ask you how long we have for you to accept and confirm” still stands. It’s not based specifically on a time frame but on acceptance by ‘science’ and I’m not the decider.

        Classic lukewarmer wishy-washy. “It’s out of my hands” is a nice touch.

        What’s so doggone difficult about saying, “I don’t know how much time we have”? I sure as heck don’t have a problem saying that.

        >U.I. is always a problem. Known or not.

        It’s been said that what’s in the rear view mirror can be seen with greater clarity. YMMV.

  19. Kip posted:

    a) “What flummoxes me is the claim made in the caption – “The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change.””

    b) Averaging 30 results produced by the mathematical chaotic behavior of any dynamical system model does not “average out the natural variability” in the system modeled. It does not do anything even resembling averaging out natural variability. Averaging 30 chaotic results produces only the “average of those particular 30 chaotic results”.

    c) The output of chaotic systems thus cannot simply be averaged to remove randomness or variability – the output is not random and is not necessarily evenly variable. (In our case, we have absolutely no idea about the evenness, the smoothness, of the real climate system.)

    Although I agree with much of what you say, I think statement c) may be incorrect as a generality. Over a long enough period of time, the statistics of transitive chaotic systems are well defined and independent of minor differences in starting conditions. With different initialization conditions and a long enough run, the output can converge to a meaningful average. In the case of climate models run under constant forcing, we don’t know how long the model needs to be run for the output to reliably converge to a meaningful average, nor do we know how many different starting conditions we might need to sample to find that average. Climate is defined as a 30-year average of weather, but we don’t know if thirty years is long enough that the average and standard error from thirty 30-year periods would be reproduced in the next set of thirty runs with different initializations. A sketch of the convergence point follows below.
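
    (To make the convergence point concrete, here is a minimal sketch, my own illustration rather than anything from the CESM-LE protocol, using the Lorenz-63 system: member runs launched from initial states that differ by one part in 10^12 decorrelate quickly, so any 50-unit window disagrees from member to member, yet the long-run mean settles to essentially the same value for every member.)

      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, s, sigma=10.0, r=28.0, b=8.0/3.0):
          x, y, z = s
          return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

      t_end = 1000.0
      t_eval = np.arange(0.0, t_end, 0.01)
      long_run_means = []
      for i in range(5):
          ic = [1.0 + 1.0e-12 * i, 1.0, 1.0]   # trillionth-scale perturbation
          sol = solve_ivp(lorenz, (0.0, t_end), ic, t_eval=t_eval,
                          rtol=1e-9, atol=1e-9)
          z = sol.y[2]
          window = z[(sol.t > 100.0) & (sol.t < 150.0)].mean()  # one short slice
          long_run_means.append(z[sol.t > 100.0].mean())        # discard spin-up
          print(f"member {i}: 50-unit window mean z = {window:.2f}")
      # windows disagree member to member; long-run means are close,
      # and converge further as t_end grows
      print("long-run means:", np.round(long_run_means, 2))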

    A common phrase is “weather is an initial value problem; climate is a forced boundary value problem”. As I understand it, we don’t know how long we need to wait before climate becomes a boundary value problem. Ten years is too short. The IPCC assumes a century is long enough.

    In the most extreme case, the Earth seems to have two stable climate states – glacial and interglacial. If you initialize your climate model in a glacial state, it may stay there for many centuries or millennia and then switch to an interglacial. If you initialize in an interglacial state, it may stay there for many, many centuries or millennia and then switch to glacial. (No single theory explains why our planet oscillated with a 40,000-year period and then shifted to a 100,000-year period. We think orbital mechanics are responsible for the shift between climate states, but there is no significant global forcing associated with orbital mechanics.) The long-term behavior of this system could be an average of these two states. Climate modelers would never know about ice ages from their models, because they would never run them long enough – and they would discard any run that headed toward glacial conditions.

    ScienceofDoom has some great posts on chaos:

    https://scienceofdoom.com/2014/11/29/natural-variability-and-chaos-four-the-thirty-year-myth/

    https://scienceofdoom.com/2014/07/27/natural-variability-and-chaos-two-lorenz-1963/

    • franktoo ==> The real nut here is “What do you actually get when you average [find the “ensemble mean” of] the output of a chaotic dynamical system model?” We all know only too well what can be accomplished with modern statistical packages on computers that slide into our suit-coat pockets. In my view, the “usefulness” of knowing the ensemble mean of the purely chaotic output of the CESM (which is exactly what they have done with their tiny perturbations of a single initial value) is ONLY to discover what limits — what boundaries — their MODEL enforces upon itself. It cannot inform us of the limits — the boundaries — of the real-world climate; the means they develop inform us only about their model.

      Climatic history informs us of the actual limits of Earth’s climate — the dynamical system has been running for thousands of years — and, as you point out, there appear to be two long-term semi-stable states — two attractors — Ice Ages and Interglacials, with very limited variation within those states.

      As for “weather is an initial value problem; climate is a forced boundary value problem”, that is still under study and discussion — and the jury is out. Lorenz called it an “almost intransitive” system in his paper Climate Determinism.

      Someday we may tease out the underpinnings.

      • Kip: So we may agree: Averaging the output from a model of a chaotic system may or may not produce a meaningful mean and standard deviation. As best I can tell, we have no way of knowing whether averaging the output from the IPCC’s AOGCMs, the way it is currently being performed*, produces a meaningful average. We do know that initialized AOGCMs failed to hindcast decadal change in climate. Since climate models can be tuned to reproduce 20th-century warming (via aerosols and ocean heat uptake), we have no way of knowing what happens on a centennial scale. In other words, the IPCC is making inappropriate assumptions about averaging model output.

        * There is a revealing section in AR4 that discusses the IPCC’s models in terms of an “ensemble of opportunity” that doesn’t cover parameter space and admits that the spread in model output can’t be used to produce valid confidence intervals. This “likely range” in IPCC projections is based on “expert judgment” – which simply reports the “very likely” range from model output as a “likely” range. Policymakers don’t have a clue. I suspect the economists using model output don’t either.

      • Kip: Thanks for the link to Lorenz’s Climate Determinism paper. Have you read:

        http://eaps4.mit.edu/research/Lorenz/Chaos_spontaneous_greenhouse_1991.pdf

      • franktoo ==> Yes, even that early they were struggling with the attribution problem. I fear we are no further on today.

        The persistence of climate features, like the step-like warming we have seen following El Ninos, is also discussed in that paper.

  20. Kip: Natural variability is an ambiguous term. I find this combination of terms useful: naturally-forced variability (solar and volcanos), anthropogenically-forced variability (GHGs and aerosols), and unforced/internal variability.

    • “Unforced/internal variability” — this term needs clarification IMO. Internal variability seems to be the result of different combinations and/or cumulative weather conditions acting as forcings on the climates of regions and on their longer-term trends.

  21. As always, there is nothing special to atmospherics about solving chaotic nonlinear systems with sensitive dependence on initial conditions. It is done all the time in the mainstream engineering activity of computational fluid dynamics (CFD). They solve the same Navier-Stokes equations with similar turbulence models. And they are not, generally, interested in the response to initial conditions, nor in the vagaries of the eddies etc. that are created. To get what they need to know, they average. Plain linear averaging – well, OK, probably integrating. It’s built into subgrid turbulence models as Reynolds (time) averaging – which goes back over a century. But it is how the useful results are derived.

    A typical analysis is lift over a wing. You set up some kind of approximate initial conditions – not only is there indeterminate dependence on initial state, but you never have meaningful data for it anyway. Then you run for a while to let any wrinkles that you introduced settle – just as GCM modellers wind back and let settle. Then you run the flow and monitor the lift (just as you would in a wind tunnel). The instantaneous lift is an integral of pressure over surface (resolved in vertical), and you average that over time. Why? It’s chaotic. But it’s also what the plane does. And the time-averaged lift really does have to balance the weight of the plane (in steady flight).

    And that is how it works with GCMs. You create all sorts of winds, but you don’t want to know about them. You want to know what happens to long-term heat and moisture transport etc. These are conserved quantities, and averaging is just the right thing to do.
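
    (For readers unfamiliar with Reynolds averaging, here is a toy sketch, purely illustrative and in no sense actual CFD, of the decomposition it rests on: split a fluctuating signal u(t) into a time mean and a fluctuation, u = u_bar + u'. The mean is the reproducible, useful quantity; the fluctuation averages to zero by construction.)

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(0.0, 50.0, 1e-3)
      # Stand-in for an instantaneous lift coefficient: a steady mean plus
      # broadband "eddy" fluctuations (incommensurate tones plus noise).
      u = (1.2 + 0.3 * np.sin(37.1 * t) + 0.2 * np.sin(91.7 * t + 1.0)
           + 0.05 * rng.standard_normal(t.size))

      u_bar = u.mean()        # Reynolds (time) average: what the wing "feels"
      u_prime = u - u_bar     # the fluctuating part
      print(f"mean 'lift'     : {u_bar:.4f}")           # ~1.2
      print(f"time mean of u' : {u_prime.mean():.1e}")  # ~0 by construction
      print(f"rms of u'       : {u_prime.std():.3f}")   # eddy intensity, itself a statistic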

    • Nick Stokes,

      A miracle! Nothing special about solving chaotic non linear systems!

      I believe there’s a lazy million dollars or two waiting for you. From the Clay Mathematics Institute –

      “Waves follow our boat as we meander across the lake, and turbulent air currents follow our flight in a modern jet. Mathematicians and physicists believe that an explanation for and the prediction of both the breeze and the turbulence can be found through an understanding of solutions to the Navier-Stokes equations. Although these equations were written down in the 19th Century, our understanding of them remains minimal. The challenge is to make substantial progress toward a mathematical theory which will unlock the secrets hidden in the Navier-Stokes equations.”

      Real mathematicians and physicists seem to think the problems still haven’t been solved. Maybe you should share the solution with us mere mortals.

      Or are climatological chaotic non linear systems defined as equations of straight lines? That would certainly make the solution a lot easier!

      Cheers.

      • “Real mathematicians and physicists seem to think the problems still haven’t been solved.”
        Well, real engineers deal with them routinely. And even mathematicians. I did, and I’m real.

      • Nick Stokes,

        Deny, divert, confuse – standard climatological fare.

        I didn’t mention engineers. Nor did I mention “dealing with.”

        Are you as competent a mathematician as Gavin Schmidt? A few years ago, he sneeringly dismissed chaos, saying he saw nothing to convince him that the atmosphere behaved chaotically.

        You initially mentioned “solving” rather than “dealing with”. Maybe you were using the climatological definition of “solving” which is more related to wishful thinking than reality.

        Anybody flying a paper aeroplane deals with fluid dynamics. Only a GHE apologist would think this a grand intellectual feat, and put it on the same plane as solving chaotic non-linear equations.

        Let me know when you’ve solved the Navier-Stokes equations. It would help if you show your workings. You’re a mathematician – how hard can it be?

        Cheers.

    • Nick: You’ve grossly over-simplified. The reliability of CFD depends on the resolution of the grid applied to the problem. One problem with AOGCMs is the large size of the grid cells compared with the important fluctuations caused by turbulence.

      Climate models couldn’t reproduce the QBO until they had sufficient resolution in the stratosphere to properly model the phenomenon. Climate models still produce a double ITCZ. The early climate models didn’t exhibit an ENSO, and they still don’t represent it well. It is absurd to suggest that any fluid flow problem can be properly modeled on a grid without discussing grid cell size. The limitations of CFD explain why wind tunnels were used (and sometimes still are used) by aeronautical engineers even though Reynolds averaging of the N-S equations has been known for more than a century. Computational power limits both CFD and AOGCMs.

      The entrainment parameter (which controls mixing between rising and descending air masses in convective towers) is the parameter that often makes the biggest difference to climate sensitivity. The GFDL group had a 2016 publication showing that differences in entrainment parameterization between several of their models could change ECS by 1 K without causing a deterioration in model performance! The entrainment parameters are needed because grid cell size is too large to properly model the phenomena that occur in tropical convection.

      Nick writes: “You create all sorts of winds, but you don’t want to know about them. You want to know what happens to long-term heat and moisture transport etc. These are conserved quantities, and averaging is just the right thing to do.”

      Unfortunately, evaporation is proportional to wind speed, so you can’t get moisture transport right without getting the wind right. You won’t get the Hadley circulation right – a main mode of heat transport in the tropics – without getting the wind right.

      CFD isn’t a reliable method for dealing with precipitation and clouds, especially boundary-layer clouds. The lower the clouds, the more cooling they provide. Climate models predict an increase in relative humidity in the boundary layer over the ocean, but have little capability to model the clouds that could result.

      • Frank
        “The reliability of CFD depends on the resolution of the grid”
        You need resolution. But the contention of this post seems to be that Navier-Stokes equations can’t be solved at all because of chaos etc. And you can’t average because of, well, something. I just point out that all that is possible and routinely done in CFD.

      • Nick: The contention of your comment is that, because aeronautical engineers use CFD to get valuable solutions to the N-S equations, AOGCMs must produce valuable solutions too. That is false. We know that analysis of models of transitive chaotic systems produces meaningful averages and standard deviations if you run the model for long enough. In the case of AOGCMs, we have no idea how long is necessary. We do know that attempts to hindcast decadal climate change were a complete failure. We also know that AOGCMs can be tuned (with aerosols and ocean heat uptake) to match the limited historical record over the past century. Then there is the issue of parameterization. A priori, there is no reason to believe there is any meaningful difference between climate models that project ECSs of 2.0, 3.0, and 4.0 K; and I probably should be saying 1.5, 2.5, 3.5, and 4.5 K.

    • You create all sorts of winds, but you don’t want to know about them.

      Meteorology.

      The winds you don’t want to know about determine the clouds, rains, floods, droughts, storms, etc.

      If the IPCC wants to say global average temperature will increase – fine.

      If the IPCC wants to say floods, droughts, storms, precipitation, etc will change in a predictable way – they are on very shaky ground.

      • “The winds you don’t want to know about determine the clouds, rains, floods, droughts, storms, etc.”
        Yes. And that is what weather forecasting programs do; they are determined by initial conditions. But climate models don’t. They generate all that because you need to derive the average effect, but you don’t care about the instances. Just as you don’t care about the instances of turbulent eddies passing over the wing – they won’t be replicated exactly in any real flight anyway. You want the average lift.

      • What this has meant from the start is that climate is not predictable.

      • Yes, because airplanes fly so unpredictably by design.

      • Airplanes crash predictably when they try to land in unpredictable thunderstorm downdrafts.

      • Most horses drink when drug to water, TE.

    • maksimovich1

      As always, there is nothing special to atmospherics about solving chaotic nonlinear systems with sensitive dependence on initial conditions. It is done all the time in the mainstream engineering activity of computational fluid dynamics (CFD). They solve the same Navier-Stokes equations with similar turbulence models

      Very, very wrong. E.g., Gallavotti:

      Fearless engineers write gigantic codes that are supposed to produce solutions to the equations; they do not seem to care in the least (when they are aware of the problem, which unfortunately seldom seems to be the case) that what they study is not the Navier-Stokes equations, but just the informative code they produced. No one to date is capable of writing an algorithm that, in an a priori known time and within a prefixed approximation, will produce the calculation of any property of the equations’ solutions following an initial datum and forces which are not very small or very special. Statements to the contrary are not rare, they may even appear in the news, but they are wrong.

    • Nick Stokes, what you say is mostly right, but it leaves out an important recent result: the lift is sensitive to the initial condition you start from in steady-state calculations for the Navier-Stokes equations with common turbulence models. This result was suppressed for a long time, as people had the cultural belief that there was only one solution and initial conditions didn’t matter, so they generated all sorts of reasons why they got inconsistent results. They were wrong. Some of the solutions seem to have physical realizations and some don’t. AIAA Journal, I think August 2014, “Numerical evidence for multiple solutions for the Reynolds-Averaged Navier-Stokes equations.” Venkatakrishnan was the first author.

      This is a classic case of selection bias in the literature.

      • Thanks, David
        I found an abstract here. Unfortunately, I can’t find a cost-free pdf, but I’ll get one next time I’m in the office. It seems the different solutions relate to flow separation. I suppose there is always the possibility that there really are two quasi-steady states.

  22. funniest own goal I have ever seen

    • Steven Mosher,

      I’m a little surprised that having ensured your team lost by scoring an “own goal”, you’re laughing at them.

      Seems a little disloyal – are you intending to be the first rat to leave the sinking ship, leaving the rest of the cultists to their ignoble fate?

      Or are you trying to start up your own branch of the Church of Climatology – rejecting chaos in favour of those who worship the Purity and Beauty of Linearity?

      Here’s a clue – you still haven’t got one. Best of luck, anyway. It’s hard to be miserable while you’re laughing. Keep it up!

      Cheers.

  23. One could argue that, because the model was man-made, it only covers anthropogenic climate change and doesn’t qualify as natural variability from the outset.

  24. Maybe someone has already made this point, but I’ll make it again anyway. Even though the system is inherently chaotic, it will still tend towards a state of energy balance. If we are receiving more energy than we’re losing, we will warm. If we’re losing more than we’re gaining, we will cool. That it is chaotic does not mean that this will not be true. It is still bound by these conditions, and most who work in the field regard chaos as being more relevant for weather than for climate.

    There are three main factors that determine our energy balance, the amount we receive from the Sun, the amount we reflect (albedo), and the composition of the atmosphere (greenhouse effect). Those three factors largely set the temperature of the surface at which we will be in energy balance.

    So, even if some past changes were triggered by chaotic variability (and Dansgaard-Oeschger events may indeed be an example), it’s not this chaotic variability that actually causes the long-term warming, or cooling; what causes the warming is what changed as a result of this variability. In the case of Milankovitch cycles, the warming/cooling is still driven by changes in albedo (from changes in the ice sheets) and changes in atmospheric GHG concentrations. Hence, we can use these changes to constrain climate sensitivity – as has been done.

    Today we have seen a substantial increase in atmospheric CO2, a well-known greenhouse gas. It will almost certainly cause us to warm. That the system is chaotic and – consequently – variable means that the exact manner in which we warm will depend on the complex internal dynamics of the system. Clearly this can influence the rate at which we warm and can influence the regional distribution of the warming.

    However, if you’re suggesting that it could be a major contributor to the warming (i.e., the anthropogenic influence is much smaller than we think), then you need to show what has changed in order to drive this warming – the chaos itself does not drive long-term warming; this would require some change in albedo or atmospheric composition (solar insolation being reasonably constant).

    So, that it is chaotic and, hence, highly variable, does not suddenly mean that it can magically cool or warm without some change in the properties of the actual system; it probably means that it is difficult/impossible to predict the precise state to which we will tend, but doesn’t mean that we can’t infer the consequence of continuing to emit CO2 into the atmosphere.

    • Yes, I agree, and was trying to say this above. Ultimately Navier-Stokes deals with conserved quantities. They are moved around in a chaotic way, but the totals, and hence averages, follow rules.

    • Yes, I noticed your comment after I wrote mine.

    • Even though the system is inherently chaotic, it will still tend towards a state of energy balance.

      But at a potentially different equilibrium.

      Imagine State A: more troughs over land & more ridges over ocean.
      That leads to more clouds over land, fewer clouds over ocean.
      Thus State A has a lower planetary albedo.

      Now, imagine State B: more ridges over land & more troughs over ocean.
      More clouds over ocean, fewer clouds over land.
      State B has a higher planetary albedo.

      Global energy balance can change because of chaotic fluid flow.

    • …and Then There’s Physics ==> “If we are receiving more energy than we’re losing, we will warm.”

      Wouldn’t it be more correct to say, ‘If we are receiving more energy than we’re losing, then the system will gain energy’? The energy goes into the entire system — oceans, plants, soil, water vapor in the air, etc. — and it is across the entire system — and across all time — that energy must be conserved. Life sequesters not only CO2 but energy as well. After all, it is the extraction of energy stored in the past by life forms — burning fossil fuels — that has caused the increase in CO2 concentrations.

      What form that energy takes in the system is not pre-determined to be “warming”.

      • Kip,
        No, I think “we will warm” is fine. It’s true that there is more than one form of energy, but if we have an energy imbalance (receiving more than we’re losing, for example) then it would be extremely difficult to continually store that excess energy in some other form. Kinetic energy would, for example, eventually dissipate. We could store it as chemical energy that is sequestered underground, but the rate at which that can happen is much slower than, for example, our current energy imbalance.

        Also, the only way we can regain energy balance is to warm so as to radiate more energy back into space, so if we are receiving more energy than we’re losing, we would expect that excess energy to warm the system until we regain energy balance.

      • …and Then There’s Physics ==> “Kinetic energy would, for example, eventually dissipate.” Yes, certainly, but we must remember that energy balance is a concept, a physical law, that does not demand instant execution — it is for “all time”. Each of the ways in which the planetary system receives and stores energy has to be kept in mind — it is not like turning on the heat under a pot of water.

        Climatically, it may well be more important to say “the system will gain energy”.

      • Kip,

        Climatically, it may well be more important to say “the system will gain energy”.

        Of course the system gains energy, but the idea that we can somehow gain – or lose – energy for a sustained period of time without warming – or cooling – is probably physically implausible. A planetary energy imbalance of 0.5 W/m^2 (roughly what we currently have) is around 10^22 Joules per year. Suggesting that somehow the system could store this excess energy for a sustained period of time in other forms of energy (chemical/kinetic) is extremely unlikely, given that I think it would require substantial increases in both.

        Another point is that the Planck response is around 3.2 W/m^2/K. The further we are from equilibrium, the more rapidly we accrue (or lose) energy. Given the heat capacity of the system, there is a limit to how far it can get from equilibrium, because the rate at which it would re-equilibrate increases the further from equilibrium the system is shifted. This brings me back to my initial point. Simply because the system is chaotic does not mean that it can swing wildly away from the equilibrium state set by energy balance. Any large swing would require that the chaotic nature has also changed the energy balance, by either changing the composition of the atmosphere or the albedo.
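
        (The numbers check out on the back of an envelope. A quick sketch, assuming only the textbook values shown below, Earth’s radius and the Stefan-Boltzmann constant:)

          import math

          R = 6.371e6                          # Earth radius, m
          area = 4.0 * math.pi * R**2          # ~5.1e14 m^2
          secs_per_year = 365.25 * 86400.0     # ~3.16e7 s

          # 0.5 W/m^2 imbalance, integrated over the sphere for one year:
          print(f"{0.5 * area * secs_per_year:.1e} J/yr")  # ~8e21, i.e. order 10^22

          # Planck restoring response for a blackbody at the effective
          # emission temperature T_e ~ 255 K: d(sigma*T^4)/dT = 4*sigma*T^3.
          sigma = 5.67e-8
          print(f"{4.0 * sigma * 255.0**3:.1f} W/m^2/K")   # ~3.8; ~3.2 all-sky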

      • These statements are very vague and not very scientific. The system is complex and conservation of energy can play out in many different ways including more energy eventually lost to space.

        A simpler example is saying that higher thrust must increase speed. That is simply not true, and dangerously oversimplified. At a certain point, higher thrust decreases speed due to a drag crisis. A lot of our “just physics” intuition is based on linearized thinking and is often wrong.

      • The system is complex and conservation of energy can play out in many different ways including more energy eventually lost to space.

        The only way we can lose more energy to space is via radiation. The only way a blackbody can lose more energy via radiation is to get warmer.

      • ATTP, “The only way we can lose more energy to space is via radiation. The only way a blackbody can lose more energy via radiation is to get warmer.”

        If you are dealing with an ideal blackbody, yes. But without a uniform surface temperature, and without even a fixed surface radiant layer, there are lots of potential combinations. When Panama closed after the Antarctic Circumpolar Current was formed, the northern hemisphere warmed while the southern hemisphere cooled. The Antarctic is more efficient at losing heat than the Arctic.

        http://eesc.columbia.edu/courses/w4937/Readings/Brierley%20and%20Fedorov.2010.pdf

      • Even though the system is inherently chaotic, it will still tend towards a state of energy balance. If we are receiving more energy than we’re losing, we will warm. If we’re losing more than we’re gaining, we will cool. That it is chaotic, does not mean that this will not be true. … -aTTP

        Each of the ways in which the planetary system receives and stores energy have to be kept in mind — it is not like turning on the heat under a pot of water. Kip Hansen

        David Young’s thrust example… massive confusion. There is no example – outside of magic – of a pan of water cooling off once a burner is turned on. Stop claiming the climate is a complex system. It is not; it is a large system that mankind had no real interest in understanding, no real reason to understand, until just recently. It’s a gigantic pan of water in a big kitchen; the sun is the gas burner. Find me a cookbook that addresses what to do if the water cools when you turn the burner up. Such a paragraph would have placed Julia Child into a French insane asylum. Trillions of experiments on turning the burner up under a pan of cold water have been done… always with the same result… hotter water.

        The last 240 months (two decades) have a trend of 0.18 ℃ per decade. The models apparently led the IPCC to write that the first two decades of the 21st century would warm at 0.2 ℃ per decade (I have never read how that foolish prediction got into the IPCC report, or who did it.) Guess what? Looks like they’re going to nail it. Why? They’re fairly smart; the system is fairly simple; it is chaotic, but that ain’t what is being sold here. What is being sold here is that the system is magical, and it ain’t magical… because… and then there’s physics.

    • The problem is we are not measuring energy, we are measuring temperatures at the surface and inferring that temperature change indicates a change in energy balance.

      Global average temperatures are largely driven by Arctic temperatures. But the loss of thick Arctic ice, due to a shift in the direction of freezing winds that pushed insulating ice from the Arctic, allowed more heat to ventilate from the Arctic Ocean’s warm Atlantic layer, as well as the latent heat released as open waters freeze each winter.

      A loss of heat capacity due to the drying of the soils can increase temperatures despite no change, or even a decrease, in energy inputs.

      An increase in meridional transport of heat from the tropics can cause warming of the average without an input of energy, due to greater ventilation of tropical heat in extratropical regions.

      • The problem is we are not measuring energy, we are measuring temperatures at the surface and inferring that temperature change indicates a change in energy balance.

        No, we’re mainly inferring the energy balance using the change in ocean heat content (or ocean energy).

      • If we view the oceans as planet Earth’s radiator, then I infer that this radiator is huge, with great circulation, and will not run hot at all.

      • Salt water is great anti-freeze and is thrown in for free.

      • North-to-south mountain ranges around the world above fifteen thousand feet act as the fan, and these blades travel near a thousand miles per hour. Too simple, I know.

      • North to South mountain ranges around the world above fifteen thousand feet

        Though not many mountains at the equator.

        as the fan and these blades travel near a thousand miles per hour

        Most people don’t notice because the air is already going that fast.

      • I get to infer what I like, just like you. The picture is that a fan moves air across the block in a disturbed state to improve cooling.

      • Maybe we could have Christo create a fifteen-thousand-foot orange fabric barrier 250 miles long to place across the Equator, and see if the sail fills?

    • “There are three main factors that determine our energy balance, the amount we receive from the Sun, the amount we reflect (albedo), and the composition of the atmosphere (greenhouse effect).”

      There are two main factors that determine the cylinder temperature of an air-cooled engine: the amount of fuel burned per minute, and the cooling fins’ structure and extent (greenhouse or not-so-much-greenhouse effect).

      There are three main factors that determine the cylinder temperature of a water-cooled engine: the amount of fuel burned per minute, the radiator fins’ structure and the volume of coolant flow, and the thermostat control setting that cycles between open and closed in response to coolant temperature.

      I agree, measuring temperature increases in the polar regions might well be closer to measuring emissions to the TOA.

      When one insulates their attic so that it’s very good at keeping in warmth, they are still dealing with their windows’ and walls’ low R-values. Energy takes the path of least resistance.

      • Energy takes the path of least resistance.

        And the optical window is open even while we have lwir.

        Although, even this closes as the air temps near dew points and visibility drops due to moisture nucleating as it condenses.

  25. “Had they run the models 30 or 100 more times, and adjusted different initial conditions, they would have potentially had an entirely different set of results – maybe a new Little Ice Age in some runs —”

    Models like this, and many similar others, are run for thousands upon thousands of model years under differing scenarios – easily equating to 100 more runs, or possibly 1,000.

    The chaotic weather produced within these models has plenty of opportunity to produce a Little Ice Age or non-CO2 warming if that were feasible; and if they did, it would mean that one could not produce a “control run” that validated the model.

    But even chaotic events will average out.

    For example, once you have run a double pendulum enough times, you will be able to say something about its statistics (e.g. the number of times the lower pendulum does a 360-degree loop-the-loop) that will continue to hold however many times you run it. It becomes unlikely that doubling the runs will lead to a large increase in the proportion of loop-the-loops. (A sketch below illustrates.)
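
    (Here is a rough Monte Carlo sketch of that double-pendulum claim, with equal masses and arm lengths assumed, no friction, and the standard textbook equations of motion. It estimates the fraction of random drops whose lower arm goes over the top within 20 seconds; the running estimate settles down as runs accumulate, which is the point: individual trajectories are unpredictable, their statistics are not.)

      import numpy as np
      from scipy.integrate import solve_ivp

      g, L = 9.81, 1.0   # equal arm lengths; equal unit masses assumed below

      def rhs(t, s):
          th1, w1, th2, w2 = s
          d = th1 - th2
          den = 3.0 - np.cos(2.0 * d)
          dw1 = (-3.0 * g * np.sin(th1) - g * np.sin(th1 - 2.0 * th2)
                 - 2.0 * np.sin(d) * L * (w2**2 + w1**2 * np.cos(d))) / (L * den)
          dw2 = (2.0 * np.sin(d)
                 * (2.0 * w1**2 * L + 2.0 * g * np.cos(th1)
                    + w2**2 * L * np.cos(d))) / (L * den)
          return [w1, dw1, w2, dw2]

      rng = np.random.default_rng(42)
      flips, estimates = 0, []
      for k in range(1, 201):
          s0 = [rng.uniform(-np.pi, np.pi), 0.0, rng.uniform(-np.pi, np.pi), 0.0]
          sol = solve_ivp(rhs, (0.0, 20.0), s0, max_step=0.01)
          th2 = sol.y[2]                    # continuous angle from the integrator
          # each crossing of an odd multiple of pi is one "over the top" pass
          over = np.count_nonzero(np.diff(np.floor((th2 + np.pi) / (2.0 * np.pi))))
          flips += int(over > 0)
          if k % 50 == 0:
              estimates.append(flips / k)   # running estimate of the probability
      print("P(loop-the-loop within 20 s):", np.round(estimates, 2))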

    • Steve Milesworthy (@smilesworthy) ==> In this case, they ran the model 30 times with identical conditions, save one value infinitesimally altered each time; thus ALL of the differences in the outcomes are due to the mathematically chaotic nature of the model — the non-linearities.

      Of what use is it to us to know the average of 30 chaotic model outcomes (or 30,000)? Will it reveal to us the future somehow?

      We already know the average of the real climate system from history — it is the climate we find on Earth today, with its Ice Ages and Interglacials.

  26. A remarkable assertion:

    A full one-third of the runs produce projections that, had they come to pass in 2012, would have set climate science on its head.

    Unfortunately, no justification is provided and the paper is paywalled.

    In what way, exactly, would these projections have “set climate science on its head”?

    • Paywalled? PDF here.

      • Thanks Nick.

        A very quick skim leaves me none the wiser as to why Kip asserts that 30% of these projections would turn climate science on its head if they were observations. It all seems very mainstream and unsurprising to my amateur gaze.

        Any idea what Kip is on about?

      • verytallguy,

        I suppose this sums it up –

        We anticipate analysis of this ensemble, alone or in combination with other ensembles and/or regional climate simulations, will lead to novel and practical results that will inspire probabilistic thinking and inform planning for climate change and climate model development for many years to come.

        Or maybe not.

        This is science?

        Cheers.

    • verytallguy ==> This essay is only about the image in the press release, its caption, and a claim made for it in the paper.

      Look at the images in the post, count on your fingers the number of “North American 2012 Winters” that show temperatures in northern reaches trending down and down — instead of the expected and predicted up and up.

      • Kip,

        Look at the images in the post, count on your fingers the number of “North American 2012 Winters” that show temperatures in northern reaches trending down and down — instead of the expected and predicted up and up.

        I’m not aware of temperature trends in “North American winters in northern reaches” being such an integral part of climatology that an unexpected result in this would “turn mainstream science on its head”.

        Also, given that AR5 states

        Warming will continue to exhibit interannual-to-decadal variability and will not be regionally uniform

        it’s not clear that this result is at all unexpected.

        Perhaps you could provide references to inform
        (i) what is projected for north american temperature trends
        (ii) how far from any such projection these simulations are

      • verytallguy ==> The expected outcomes are warming in general, particularly towards the poles — thus sharply downtrending temperatures in Alaska/Canada over the fifty-year period 1963-2012 would have been very unexpected, upsetting the apple cart of consensus AGW.

      • OK, so you’re asserting that a trend in a very small geographical area would overturn climatology. Even though global trends are as expected (see the paper). And it’s explicitly expected that there will be regional, interannual, and decadal variation from global long-term trends.

        I’ll see your assertion and raise you mine:

        The evidence shows you to be wrong.

        Let’s agree to differ, shall we?

  27. It might be noted that the existence of variational principles underlying physical phenomena, be they known or unknown, is contrary to the presumption that model ensemble averages are meaningful. The innards of an operating steam engine are surely chaotic in some sense and would defy Navier-Stokes modeling, yet are certainly subject to thermodynamic limitations. The Carnot equation for instance, it may be argued, is a minimum dissipation theorem which implies a variational principle minimizing thermal gradients and thus keeping surface temperatures minimal. Should this prove the case, approximate solutions will run hot and lower results should better describe physical reality.

    • Dan, thanks for these links; these are brilliant.

    • Sophomoric and circular. Trash.

      • Dan ==> Interesting paper, thanks for the link. Especially important to keep in mind their caution: “The output from experiments conducted on low dimensional idealised models, such as the L63 model, are inevitably limited in their relevance to higher dimensional systems. One cannot expect the results described here to generalise in any specific way to higher-order climate models, let alone the real climate system.”

        I enjoyed the SoD series on Chaos — albeit with about a thousand caveats.

      • Hugely useful paper, thx. This is all very relevant to something I’m working on; I hope to raise some practical issues regarding the use of climate model ensembles in a post next week.

      • Dan Hughes,

        From the paper –

        “If the behaviour exhibited in the L63 model was to be observed in more complex climate models, then the output of climate statistics based on single model realisations or small ensembles would be misleading for both model interpretation and as the basis for model derived predictions.”

        Shock! Horror!

        Pretending chaos doesn’t exist won’t make it go away.

        The authors talk about “experiments” which consist of choosing numerical inputs to the three differential equations involved. Typical.

        Cheers.

    • All yogurt weaving. Most natural variability is various responses to solar variability, not internal, and not chaotic.

    • Dan Hughes,

      I had a brief look at the second link.

      I believe the writer is somewhat confused.

      “There’s lot of theory around all of this as you might expect. But in brief, in a “dissipative system” the “phase volume” contracts exponentially to zero. Yet for the Lorenz system somehow it doesn’t quite manage that. Instead, there are an infinite number of 2-d surfaces. Or something. For the sake of a not overly complex discussion a wide range of initial conditions ends up on something very close to a 2-d surface.

      This is known as a strange attractor. And the Lorenz strange attractor looks like a butterfly.”

      So much nonsense in so few words.

      The Lorenz “system” is dissipative. The “phase volume” does not contract to zero.

      For the infinite inputs where the system iterates to zero, talk of 2D surfaces is rubbish.

      For results producing 3D outputs, of course you can project an infinite number of 2D surfaces.

      An infinite number of outputs result in points which lie on a plane. An infinite number do not.

      Or something? Indeed. Definitely “something”.

      There are infinitely many numbers which, when used as input parameters for the Lorenz equations, produce zero. There are infinitely many which lead to chaos. There are infinitely many which lead to 3D Lorenz knots.

      There are infinitely many strange attractors. They don’t all look like butterflies, or, indeed, anything in particular at all.

      Note –
      “As numerical exploration has shown, the transition occurs for r = 13.926…, and the solution leaving the origin along the local unstable manifold returns to the origin along the local stable manifold (tangent to the z-axis). It is hard to construct this homoclinic orbit numerically, because it is unstable. We take an indirect approach, and look at a sequence of plots for a range of r values bracketing 13.926. We construct the orbits using in each case an initial point displaced h from the origin along the local unstable manifold.”

      Numerical exploration. Suck it and see. 13.926… going to infinity.

      The “butterfly” graphic is a 2D projection of one of an infinite number of strange attractors. It resonates because it seemingly bears a relationship to the “butterfly effect”. The values resulting in this figure are arbitrary. 8/3, for example, is 2.666… . Results will vary depending on the number of decimals you can handle; see the sketch below.
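
      (On the decimals point, a minimal sketch of my own, deliberately crude: integrating the same Lorenz orbit in single and double precision. The two part company within a few dozen time units, because 8/3 and every intermediate result round differently in the two precisions.)

        import numpy as np

        def euler_orbit(dtype, n_steps=30000):
            # crude forward-Euler on Lorenz-63; crudeness is fine here,
            # since precision, not accuracy, is the point being made
            s = np.array([1.0, 1.0, 1.0], dtype=dtype)
            dt = dtype(0.001)
            sigma, r, b = dtype(10.0), dtype(28.0), dtype(8.0) / dtype(3.0)
            for _ in range(n_steps):
                x, y, z = s
                s = np.array([x + dt * sigma * (y - x),
                              y + dt * (x * (r - z) - y),
                              z + dt * (x * y - b * z)], dtype=dtype)
            return s

        for dtype in (np.float32, np.float64):
            print(dtype.__name__, np.round(euler_orbit(dtype).astype(float), 3))
        # the two final states bear no resemblance to each other by t = 30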

      Trying to dismiss chaos, by misdescribing or misunderstanding it, probably won’t help understanding the operations of the atmosphere.

      Aint chaos grand?

      Cheers.

  28. As a number of others have posted, I don’t really understand the point of this post. We know that climate is chaotic. The question is: are there statistical statements that can be made that are correct nevertheless, and if so, which ones? Overall temperature is a pretty good candidate, because of conservation of energy – maybe: don’t forget heat escaping to the unobserved deep ocean, and how the albedo of clouds might affect how much heat stays in the system, etc. Are there other quantities that we expect to be predictable? It’s hard testing models when you only get _one new data point per month_.

    • Michael Aarrh ==> This essay is about one thing — the claim that in regards to these particular CESM-LE projections over 1963-2012 for North American Winters (included in the press release linked by Judy Curry) “The ensemble mean (EM; bottom, second image from right) averages out the natural variability”. I state why I do not think that this is (or even could be) true.

      However, your question: “Are there statistical statements that can be made that are correct nevertheless, and if so which ones?” is perfectly valid — and I would add “If so, which of those questions would provide useful answers?”

      The short answer to your question is: Probably, maybe. I base this on the historical climate record.

      The answer to my additional question is: I doubt it — not in the sense that it would improve our responses to climate effects.

    • Way to go JT.

      An OT essay on “The climate change” by a bunch of Cato economists invoking (a dubious translation of) Lao Tzu, Voltaire, Kuhn and Galileo!

      And of course our eminent hostess.

    • Cato?

      Can’t even get the history right:

      “AGW theory is an example of a contemporary Kuhnian lock-in. How and why did it emerge? According to Lindzen (1992) it has its origins in the observation that CO2 levels are increasing in the atmosphere due to the burning of fossil fuels, and that CO2 increases are not simply correlated with rising global temperatures, but are forcing agents. Thus in Mann’s view (Mann, Bradley, and Hughes 1998) the post-1970 surge of global growth has created a “hockey stick” of increased emissions, higher CO2 levels, and therefore temperatures. The mechanism is that of the “greenhouse effect.” CO2 is one of several greenhouse gases—methane is another—that inhibit the radiation of heat from the earth’s surface back to space: hence AGW. The abrupt increase in temperature supposedly captured by Mann’s hockey stick led invariably to the conclusion that there was greenhouse warming, that humans were the cause, and that dramatic intervention was required to prevent runaway global warming in the future.”

      Mann’s Hockey Stick did not capture the abrupt increase in temperature.

      The abrupt increase is in the actual temperature record. The whole frickin’ point of his hockey stick was the Shaft.

      Flat Shaft. no MWP

      remember?

      Mann’s contested point: Flat shaft, no MWP.

      not the blade.. the blade is the instruments

      Cato should read Climate Audit.

      every last post.

      • Steven Mosher,

        You wrote –

        “The mechanism is that of the “greenhouse effect.” CO2 is one of several greenhouse gases—methane is another—that inhibit the radiation
        of heat from the earth’s surface back to space: hence AGW.”

        Err . . . no.

        Slowing the rate of cooling does not result in heating. Conversely, slowing the rate of heating does not result in cooling.

        Even a very good insulator, such as a vacuum flask, will not prevent hot objects cooling, or cold objects from warming, tending towards thermodynamic equilibrium with the external environment.

        The Earth has cooled for four and a half billion years. Night is generally cooler than day, winter generally cooler than summer.

        Thermometers around the world have recorded the increase in human heat output as industrialisation took place, and the population increased exponentially. The fact that this heat came about as the result of creating CO2, by and large, leads to the conclusion that GHE is magical nonsense.

        Your claim that you can heat the Earth by delaying the rate at which the surface loses heat to space is completely nonsensical. The Earth would never have cooled to its present pleasant conditions.

        AGW (anthropogenically generated warming) is a fact. Burning stuff creates heat, CO2, and H2O at a minimum.

        No one has ever managed to falsify this hypothesis by means of reproducible physical experiment. Maybe you will be the first!

        Cheers.

      • Slowing the rate of cooling does not result in heating.

        Wherein Flynn relegates winter coats to the placebo effect.

        Even a very good insulator, such as a vacuum flask, will not prevent hot objects cooling, or cold objects from warming, tending towards thermodynamic equilibrium with the external environment.

        Good thing for us that we orbit a massive fusion reactor. The CMB is quite cold.

      • Brandonrgates,

        Deny, divert, confuse.

        Overcoats? Who mentioned overcoats? Ever tried to heat up a cooling corpse by putting an overcoat on it?

        The Sun shines at night? News to me.

        The Earth has heated over the last four and a half billion years?

        If you say so, Brandon.

        Dream on, laddy! I’m sure you can heat things by surrounding them with CO2 – in your imagination, anyway. Pity it doesn’t actually work in reality.

        Cheers.

      • >Ever tried to heat up a cooling corpse by putting an overcoat on it?

        No, and I’ve had plenty of opportunity, Mike. The overcoat analogy needs a heat source to work, as you darn well know. Speaking of …

        >The Sun shines at night?

        It’s always high noon somewhere.

        >The Earth has heated over the last four and a half billion years?

        I refer you to your previous comments about trends. Under that ‘logic’, we’re due for some warming.

        Cheers.

      • brandonrgates,

        Last time I looked, the Sun was located outside the Earth. Wrapping the Earth in an overcoat will only stop energy reaching the Earth. It will just cool a little slower. It won’t heat up, will it?

        It’s always night around 50 % of the time. That’s when the surface loses all of its daytime heat (plus a little more, as Fourier pointed out).

        Are you agreeing with me that the Earth has cooled for four and a half billion years? The trend will change if the Sun’s output changes – say, if it becomes a red giant. Agreed?

        Still no falsifiable hypothesis relating to the GHE. If you claim the GHE is due to an insulating effect, you might have to demonstrate how placing an insulator between an object and a source of energy causes the temperature of the object to rise, as the amount of energy it absorbs falls.

        How hard can it be? Or maybe there’s an equally implausible magical explanation?

        Time for you to deny, divert, and confuse. How about forcing, feedback, missing heat, back radiation, TOA, DLWIR, and all the rest of the sciency sounding folderol?

        A scientific fact or two might be nice.

        Cheers.

      • >Wrapping the Earth in an overcoat will only stop energy reaching the Earth.

        Good thing for us the atmospheric overcoat is largely transparent to incoming SW radiation and mostly opaque to LW.

        >It will just cool a little slower. It won’t heat up, will it?

        All else being roughly equal, that’s the theory, Mike. Even first year physics students can understand this part of the argument.

        >It’s always night around 50 % of the time. That’s when the surface loses all of its daytime heat (plus a little more, as Fourier pointed out).

        And it’s always daytime 50% of the time, so there you go.

        >Are you agreeing with me that the Earth has cooled for four and a half billion years?

        The Hadean is appropriately named.

        >The trend will change if the Suns’s output changes – say if it becomes a red giant. Agreed?

        Catastrophically so, but your trends change argument specifically excluded external forcing. Plus, the schedule for the Sun going off the main sequence exceeds my planning horizon by a few billion years, give or take.

        >A scientific fact or two might be nice.

        You’re already impervious to logic, why would I wish to confuse you any further with facts?

    • You are very kind, Kip.

      Perhaps the first thing to note is that nowhere in your suggested translations, or indeed in mine, does the word “predict” occur. So where under Heaven or on Earth do you suppose the Cato Institute got their Lao Tzu quote from?

      • Jim Hunt ==> I have no idea — haven’t read the link (and don’t intend to), my post isn’t about the Cato Institute, nor am I responsible for every link readers throw into the comments.

        I just happen to have studied comparative religions at uni (UCSB) and ‘accidentally’ know something about the Tao and have personal opinions about the relative value of its translations.

      • My apologies, Kip. My question was directed at JT, since I have no idea either. Unfortunately the nesting of comments only goes so far here, and I neglected to reference JT in mine.

        JT – Where under Heaven or on Earth do you suppose the Cato Institute got their Lao Tzu quote from?

      • Jim, while linguistics is an interesting topic, it’s you that attaches some relevance to the quality of translation for the quote. I don’t suggest any translation. You imply you’re not a linguist; however, you state the word “predict” appears nowhere in the actual text, which suggests you have an entire translation at hand. So, instead of browbeating the issue, why not amuse us and paste the entire sentence in question here? I have no judgement, just mild interest, but not enough to ferret it out. It seems important to you.

        Frankly, I don’t care about the referenced quotes. Are the stated philosophies truisms regardless of attribution? Does translation matter? While perhaps interesting fodder for argument, I don’t care about these things either.

        As for the Cato essay itself, ultimately policy is going to come from politicians with law and economic backgrounds. Whether you like what they have to say, or not, is not all that relevant. These are the types of people that are going to distill the information science provides to determine policy. The journals, peer review, have had their say, the next level of review will come from non scientists; even the layperson will have a say with their vote. Scientists have made their case.

        So far the judgement by the “other” experts appears to be, as Steven would say, “do more science”. Experts outside of science are well aware of the degree that politics has influenced climate science. The Cato paper makes clear how prejudices have empowered the issue in myriad ways. They are asking scientists more questions, to “do more varied science”. You don’t have to like the questions, but the gauntlet that will drive policy so far doesn’t believe the science enough to bet the farm (economy); largely because the evidence provided by science is filtered through a monolithic group think pedigree of conviction that relies on models and the deep pockets of governments for its existence.

      • JT – You were the one that asked me “What’s your accurate translation of the mentioned Lao Tzu quote?”. Remember?

        I have no idea which verse from which translation of which book the Cato Institute are quoting. Perhaps they just made it up?

      • Kip, my apology for interrupting the conversation on your fine essay. I wasn’t trying to create diversionary legs.

      • Yes, I asked if you could provide what you considered to be an accurate translation of the mentioned Lao Tzu quote that you were dubious of; but now your critique comes across more like clueless ankle-biting, simply taking a shot. You implied more interpretive knowledge or familiarity with the language — perhaps a study of it, or possibly other references to the quote in question — but you obviously don’t have anything more.

        The science of linguistics is as much an art as a science. Experts can deduce different interpretations from the same text, more so if a language is arcane, so stating that a quote is “dubious” isn’t necessarily wrong at face value; but it’s misleading when it’s not expert opinion, or is otherwise used just for taking a shot. When you state that the word “predict” doesn’t appear in the texts — no surprise, so what? An expert linguist may interpret a nuanced phrase to translate into the word “predict”. Sometimes a phrase doesn’t capture the emotive qualities a single word has in a different language. It’s past time to drop it.

      • JT – I was dubious about it because it didn’t sound anything like any of the verses in any of the English translations of Lao Tzu’s work that I have read.

        Here’s a few more for you to inspect at your leisure, if you so desire:

        https://www.bu.edu/religion/files/pdf/Tao_Teh_Ching_Translations.pdf

        As you say, let’s drop it.

      • jungletrunks ==> No worries about diversions … in this case, at least, it turned out to be something that I have an interest and training/education in — Eastern religious philosophies.

  29. Whoops! Cut n paste fat finger.

    r/The climate change/The climate change debate/

  30. I believe people are confusing two things. The global temperature may be constrained by energy balance issues (though if chaos sends more or less heat to the Arctic, that could affect the global temperature some), as Nick points out. However, many of the impacts of climate change claimed by the IPCC are based on regional changes (drought in Africa or the US SW, warming in Europe). This paper shows that regional changes are likely NOT predictable, ever. I do not believe that the approach they use to average out internal variation is valid, since it is based on only one model and only tiny changes in initial conditions. What about the other models with different structures? The many other uncertain parameters? Forcing uncertainty? You can get warming or cooling in the southeast US varying a hundred ways with all these possible perturbations.

    • My comments above got stuck in moderation, but I don’t think:
      “Ultimately Navier-Stokes deals with conserved quantities.”
      is correct.

      Global albedo can change because of fluid flow.

    • Craig Loehle ==> Thanks for checking in and for the support. Glad to see someone has understood the one and only point I wished to make in this [originally] short essay — “regional changes are likely NOT predictable ever. … I do not believe that the approach they use to average out internal variation is valid since it is based only on 1 model and only tiny changes in initial conditions.”

      My original essay was a bit more strident — but Dr. Curry reined me in a bit.

    • All good points. The part I find most confusing is that they did not make tiny changes to “initial conditions”. They made tiny changes to one single initial condition.

      I am actually intrigued that perhaps this could be done, but this isn’t the way to do it.

    • Craig, One has to wonder if in fact the global temperature is “determined” strictly by the energy balance issues. It may be “constrained” to be in some range, but that is, I would say, unknown at the present time. I wonder if there is not some wishful thinking going on out there among the “then there’s physics” crowd, or perhaps a little arrogance. Even very simple fluid flow situations in 2D have bifurcation and singular point issues.

      On a previous post, David Appell posted the global temperature time series for the ice ages and it dawned on me that it looked chaotic. I wonder if given all the nonlinear feedbacks with ice sheets, dust, etc. that very small changes in forcing or even weather extreme events could activate some feedback that results in global changes. Certainly the ice ages require some such explanation since the total solar insolation didn’t change much.
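
      For readers who have not met a bifurcation: a minimal sketch with the logistic map (standard textbook parameter values, nothing to do with climate), in which a change of under one percent in the “forcing” parameter r flips the long-run behavior from a repeating cycle to chaos:

        def settled_states(r, x0=0.5, n_spin=1000, n_keep=64):
            """Iterate x -> r*x*(1-x) past its transient, then sample it."""
            x = x0
            for _ in range(n_spin):
                x = r * x * (1.0 - x)
            kept = set()
            for _ in range(n_keep):
                x = r * x * (1.0 - x)
                kept.add(round(x, 5))
            return kept

        print(len(settled_states(3.55)))  # 8: a stable, repeating 8-cycle
        print(len(settled_states(3.58)))  # ~64: no repetition; chaos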

      • very small changes in forcing or even weather extreme events could activate some feedback that results in global changes.

        Yes, it is possible to argue that climate sensitivity is extremely high. But it’s quite unlikely, looking at the evidence in toto. Hence the 1.5 to 4.5 range in AR5.

      • Yes, it is possible to argue that climate sensitivity is extremely high. But it’s quite unlikely, looking at the evidence in toto. Hence the 1.5 to 4.5 range in AR5.

        You’re looking at the wrong evidence. The only overall response that matters is the cooling response after today’s warming across the planet’s surface. And based on the stations we have, there hasn’t been a loss in the planet’s ability to shed heat at night. And while there might be some extra joules that need to be shed at night, cooling is regulated by dew point temps and weather (clouds, rain, etc.), not CO2.

      • VTG, We need to be very clear here. Climate sensitivity is strongly state dependent. My assertion is that ice ages are times where the local derivative is infinite at some points and the climate is probably chaotic. It is an open question whether in warmer climates this is true. I would not be surprised though.

        It is a further argument that despite all our attempts at modeling, climate has an unpredictable element, just like simple turbulent flows at least in certain ranges of conditions. We need urgently to address our deep ignorance about these issues I would argue.

      • Climate sensitivity is strongly state dependent.

        Surface data show that something in the 20 to 30–40 North latitude band changed climate sensitivity in ’97.

        I have started to wonder if something in that El Niño left an “impression”, and created a self-reinforcing “feature” that locks in a strong water vapor distribution downwind into areas with surface stations, where previously it was blown someplace else.

      • micro,

        You seem to be disputing the basic thermodynamics of the greenhouse effect. Others may be interested in debating that, I’m not. Sorry.

      • You seem to be disputing the basic thermodynamics of the greenhouse effect.

        No, I don’t dispute it at all. Just that at night, on Earth, the radiative cooling domain controlled by CO2, from sunset until air temps near the dew point, has a high cooling rate of 3 or 4 degrees F per hour; it’s only when you near dew point temps that the cooling rate slows to under 1 degree F per hour. Since this transition is temperature controlled, any GHG warming is lost at the high cooling rate prior to the switch to the slow cooling mode.
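
        A toy integration of the two-regime night described above; the rates, the 2 F switching margin, and the starting temperatures are illustrative numbers taken from this description, not measurements:

          def night_of_cooling(t_air, t_dew=55.0, hours=12.0, dt=0.1):
              """Toy two-regime night: fast cooling until the air temperature
              nears the dew point, then slow. Temperatures in Fahrenheit."""
              t = t_air
              for _ in range(int(round(hours / dt))):
                  rate = 3.5 if t > t_dew + 2.0 else 0.7  # F per hour
                  t -= rate * dt
              return t

          base = night_of_cooling(75.0)
          warm = night_of_cooling(76.0)  # add 1 F of hypothetical GHG warming
          print("morning temps: %.1f F vs %.1f F" % (base, warm))
          print("of the extra 1.0 F, only %.1f F survives" % (warm - base))
          # The warmer start just spends a little longer in the fast-cooling
          # regime; by morning most of the extra warmth has been shed.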

      • David,

        Climate sensitivity is strongly state dependent. My assertion is that ice ages are times where the local derivative is infinite at some points and the climate is probably chaotic. It is an open question whether in warmer climates this is true. I would not be surprised though.

        It is a further argument that despite all our attempts at modeling, climate has an unpredictable element, just like simple turbulent flows at least in certain ranges of conditions.

        I think we are in agreement, barring small quibbles over definitions.

        Yours are cogent arguments against low climate sensitivity.

        They also, of course, strongly push the balance towards mitigation and away from adaptation as a response to AGW.

      • VTG, I didn’t say any of what you attributed to me. Please try to focus on the smaller technical points first.

      • We don’t really have very good estimates of climate sensitivity. Simple models are probably as good as more complex but highly tuned ones. We desperately need better tools and methods and better data. People are not focused on this and that’s in my view a serious problem.

      • David,

        I didn’t intend to attribute anything to you. I agreed with you, and pointed out the implications of your arguments. That’s all.

        I doubt that sensitivity can be significantly constrained from the current range in the next decade or two, barring some revolutionary advances.

        What we know now is all we have to set policy on, uncertainty and all.

        Just my opinion.

      • Why do we desperately need better tools? Never mind.

      • Well, if you want to really address climate change in a scientific and rigorous way we desperately need better tools. Especially if geo-engineering is in our future with all its politics.

      • David Young ==> I like your point “One has to wonder if in fact the global temperature is “determined” strictly by the energy balance issues.”.

        It is energy conservation — energy balance — that is the issue. “Heat” (warming, cooling) is a diversion in my opinion — we need to realize that “energy in the system is increasing” under imbalance. It is important to know where this energy is going — the planetary system as a whole has huge ability to receive and store energy.

  31. It always amazes me that they use so much computer time producing such different results, only in the end to average it all out and then believe it tells them something meaningful.
    Is it just a smoke screen to cover up that they could simply have used CO2 forcing instead?

  32. Pingback: Climate | Transterrestrial Musings

  33. As we can clearly see

    These models cannot validate.

    Nobody would use them to set an evacuation policy.

    http://www.tropicaltidbits.com/storminfo/14L_tracks_latest.png

    • Steven Mosher

      Evacuating a coastal region in the face of a potential landfall of a hurricane makes sense…if one lives in Florida that is. That’s known as “gettin’ out of Dodge while the gettin’ is good.”

      OTOH, some of the spaghetti graph tracks course up to Norfolk, Virginia. Maybe the Navy will want to prepare to send some ships to sea as a precaution. Nobody yet is telling Virginia residents to abandon their homes/businesses and evacuate now. Keep a weather-eye out for more hurricane news, yes. But no action.

      Some of the spaghetti graphs go up the coast of New England. Don’t interrupt your siesta for this one; it is way too early to say or do anything. And that’s the thing: predicting or even projecting probabilities too far into the future really is futile. There is no way to know until there is more data: course and speed, and in this case, intensity. Doesn’t this vaguely sound like the story of climate change? Predicting the future and acting upon such distant future scenarios just because somebody says so. Besides, there is a “credibility” gap that needs to be bridged.

    • Re: Hurricane Matthew ==> Let’s not drag the real dangers and heartbreak of Hurricane Matthew into petty climate-wars squabbling.

      On a personal level, I have sat through two hurricanes on my sailboat — and these were not Force 4s. It is no laughing matter — but life and death and destruction.

      It is a wonderful thing that Hurricane Prediction has improved as much as it has — to the point of fairly reliable 48 hour projections and useful five day projections. Short term projections of weather phenomena and major weather events (that can be followed by satellite) have greatly improved over the last decade.

      Hurricane path spaghetti-plots make it possible for civil authorities to plan for the worst — and be relieved when nature spares their area.

      Again on a personal level, our sailboat, the Golden Dawn, would have normally been berthed at Cape Canaveral this summer — directly in the path of the eye of Matthew. It is by chance (or grace) that we moved the boat to North Carolina, to a boat yard away from the sea (on the ICW), and hauled it out for the summer. It is safe there. Many of our dear friends and relatives, however, are in harm’s way.

  34. Another chaos thread brings in the swarm of chaos-deniers:
    BrandonRGates (zombie return of rgates)
    Steve Mosher
    Jim D
    And then there’s denial of chaos
    Nick Stokes
    etc.
    They’re right to be scared of chaos-nonlinearity. Deterministic Nonperiodic Flow, Lorenz 1963, completely undermines the null hypothesis of CAGW that climate can’t change without an external forcing. It can, and does all the time. That is the finding of DNF63 that everyone talks about but no-one understands or acknowledges.

    Climate changes by itself.

    Talk of external “forcings” means you have not understood DNF63, at all.

    Climate forces itself.

    In fact, Edward Lorenz showed this with his toy weather models on his “toy” (by today’s standards) computer back in the 1960s. His discovery led to the field of study known today as Chaos Theory, the study of non-linear dynamical systems, particularly those that are highly sensitive to initial conditions.

    No, it was Mary Cartwright in 1943 who formulated chaos mathematically and even characterised periodically forced nonlinear oscillation, way before her time. She is the true discoverer of chaos (mathematically), as acknowledged by Freeman Dyson.

    http://cwp.library.ucla.edu/Phase2/Cartwright,_Mary_Lucy@951234567.html
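
    For the curious, the object Cartwright and Littlewood analysed, the periodically forced van der Pol oscillator, is easy to play with numerically. A rough sketch (crude Euler integration; the parameter values are ones commonly quoted for non-periodic behavior, not anything from Cartwright’s own papers):

      import numpy as np

      def forced_van_der_pol(mu=8.53, amp=1.2, omega=2 * np.pi / 10,
                             x=0.1, v=0.0, dt=0.001, steps=300000):
          """Euler integration of x'' - mu*(1 - x^2)*x' + x = amp*cos(omega*t)."""
          xs = []
          for i in range(steps):
              a = mu * (1.0 - x * x) * v - x + amp * np.cos(omega * i * dt)
              v += a * dt
              x += v * dt
              xs.append(x)
          return np.array(xs)

      xs = forced_van_der_pol()
      samples = xs[::10000]  # one sample per 10-time-unit forcing period
      print(np.round(samples[-8:], 3))  # a periodic response would repeat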

    • ptolemy2 ==> Not even Lorenz claimed to have “discovered” chaos. Many mathematicians were at work on the non-linear aspects of dynamical systems — and chaotic systems were known to very early scientists.

      I quote the link you provide for those less inclined to follow such:

      “Cartwright published her discoveries [the unexpected phenomenon that is now called chaos] at the end of the war, but nobody paid much attention to her papers and she went on to other things. She became famous as a pure mathematician. Twenty years later, chaos was rediscovered by Ed Lorenz and became one of the most fashionable parts of physics. In recent years I have been calling attention to Cartwright’s work. In 1993 I received an indignant letter from Cartwright, scolding me because I gave her more credit than she thought she deserved. I still claim that she is the original discoverer of chaos. She died, full of years and honors, in 1998.”
      — Freeman Dyson

      Lorenz himself was surprised by the attention given him as “The Father of Chaos” — which he felt was simply attributable to the invention of the copy machine, which allowed copies of his paper on Deterministic Nonperiodic Flow to be passed hand-to-hand outside the field of meteorology (where it was published) to interested young mathematicians, such as those at UC Santa Cruz, who suddenly had access to usable small computers.

      In Climate Science, Lorenz is acknowledged for the discovery that mathematical models of weather systems (his famous toy model–L63) are chaotic, extremely sensitive to initial conditions.

    • ptolemy, Milankovitch showed that forcing is also important. Many skeptics don’t consider forcing to be important, yet somehow do allow for Milankovitch cycles, which is inconsistent with their view. Forcing also shows up in the effects of solar variations, volcanoes and changing CO2 levels on global mean temperatures. Forcing changes of the predictable kind allow some level of predictability on top of the chaos of internal variability.

      • Geoff Sherrington

        Jim D,
        Solar variations and volcanoes come and go over time and might even be no more than small bumps on a more solid trajectory. We can measure solar variations and the optical depths of volcanic aerosols and quantify them separately from the main trajectory.
        You cannot lump CO2 in with them if you are using common GHG hypotheses. You cannot change from no effect to some measured effect and then back to base.
        Essentially, you have to guess that GHGs have an effect on climate and you have to guess about magnitude. Why, GHGs might even be agents that change future chaos-driven events – and do nothing much more than that.
        Or more likely, just do nothing much.
        You cannot invoke GHG hypotheses without guesses until you can cleanly separate natural from anthropogenic. You cannot do this separation by feeding guesses about parameters into parameterized models. Truly, nobody knows what the outputs from these models mean. That does not give licence to average guesses and make an attribution as to cause or magnitude of cause.
        Geoff.

      • You can quantify the relative sizes of these forcings, which puts GHGs as the top driver by far in terms of total forcing changes over the last 50 years. This is just physics that is independent of models.

      • Geoff Sherrington

        No, I cannot quantify the relative sizes of these forcings, which is why I argue.
        What are the values and how was CO2 forcing measured? In an atmosphere, I mean, not in a lab.
        Geoff

      • Even the skeptical scientists agree with the CO2 forcing being near 2 W/m2 and growing, which you can compare with solar fluctuations being about a tenth of that, and occasional large volcanoes that spike at a few W/m2. If you want to disagree with the skeptical scientists, fine, I won’t get in your way.
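
        That “near 2 W/m2” is just the standard simplified expression for CO2 forcing (Myhre et al. 1998), which anyone can check:

          import math

          def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
              """Simplified CO2 radiative forcing relative to a baseline (W/m2)."""
              return 5.35 * math.log(c_ppm / c0_ppm)

          print("%.2f W/m2 at 400 ppm" % co2_forcing_wm2(400.0))   # about 1.91
          print("%.2f W/m2 per doubling" % co2_forcing_wm2(560.0))  # about 3.71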

      • Geoff Sherrington

        I am a sceptical scientist, one of the millions awaiting an acceptable calculation of climate sensitivity to CO2. Nobody committed to a figure for AR5; the IPCC gave a range that extends from no bother to huge emergency.
        Why, Jim, do you think no IPCC scientist went into print backing a particular number?
        It is because many scientists, sceptical or not, do not go along with your 2 W/sq m figure.
        You are confusing climate science fiction with proper conventional science.

      • The forcing due to GHGs has a lot more certainty than you seem to know, and that it is by far the largest forcing is also well known. I think if more skeptics could separate forcing from sensitivity, we would be half way there.

      • Geoff Sherrington

        Jim D
        Forcing and sensitivity go together.
        I have yet to see any measurement science that excludes sensitivity = 0.
        While the community might accept unproven GHG postulates from a barrage of advertising of untruths, hard scientists are resisting because “trendy” does not mean “scientifically valid”.
        Now show us a proof that climate sensitivity is not zero.
        Geoff

      • As I have said many times here before, the measured positive imbalance shows that all the warming we have had so far is still not enough to keep up with the forcing change, which in turn is dominated by GHGs. The concepts of positive imbalance and, equivalently, warming in the pipeline are foreign to skeptics who don’t want to understand what the observational data already show us.

      • Geoff Sherrington

        Jim D
        What are the true error bars on the uncertainty of the W/sqm here?
        People blithely discuss forcings in terms of 0.1 W/sqm when the data are corrected from a range of 10 W/sqm.
        The whole of the in/out balance depends on the corrections to earlier data.
        What happens with the next satellite? Bet you it gives another meaning to uncertainty.
        Why do you persist in defending subjective corrections of huge size when the topic is so central to the whole GHG argument?
        Do you have a betting problem?
        Geoff.
        http://www.geoffstuff.com/grlkopp.jpg

      • The imbalance can be independently constrained by satellite and ocean heat content observations. OHC provides the tighter constraint currently. The odds provided by the IPCC are conservative, if anything. It is not just “very likely”, but quite certain that the warming rate matches the middle projections of effective sensitivity. Having 1 C already at 400 ppm is a sign of warming on track with the projections.
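
        The arithmetic behind the OHC constraint is a one-liner: divide a global heat uptake rate by the Earth’s surface area. (The 1.0e22 J/yr input below is an illustrative round number, not a measured value.)

          SECONDS_PER_YEAR = 3.156e7
          EARTH_SURFACE_M2 = 5.10e14  # whole planet, ocean plus land

          def imbalance_wm2(heat_uptake_j_per_yr):
              """Convert a global heat uptake rate into a mean imbalance in W/m2."""
              return heat_uptake_j_per_yr / (SECONDS_PER_YEAR * EARTH_SURFACE_M2)

          print("%.2f W/m2" % imbalance_wm2(1.0e22))  # about 0.62 W/m2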

      • Geoff Sherrington

        Jim D
        The true error of the OHC is so large that one should not put any importance on the estimates we are fed.
        I continue to be amazed at how comprehensively you seem to have thrown out measurement science principles in favour of repeating the dogma of others.
        Geoff.

      • The ocean measurements are going against your internal hopes and so you reject them. In fact, the error bars are small and a positive imbalance is recognized by skeptical scientists too.

      • JimD
        Milankovitch forcing is clearly a reality, and a very interesting one. Why did the mid-Pleistocene revolution happen, when glacial cycle timing changed from obliquity (41,000 yrs) to eccentricity (100,000 yrs)? And why, when the amplitude of the eccentricity oscillation is highest, are the interglacials double-headed and unstable, while at the weakest eccentricity amplitude nodes (e.g. now, 400,000 and 800,000 years ago) interglacials are sharp and clear? Is the forced signal strength reciprocal to that of the forcer? Milankovitch forcing must be weak and complex. Perhaps the MPR was a transition between strong and weak periodic forcing.

    • Another chaos thread brings in the swarm of chaos-deniers […] And then there’s denial of chaos

      I’m almost afraid to ask the difference, ptolemy2.

      That is the finding of DNF63 that everyone talks about but no-one understands or acknowledges.

      We must not be reading the same literature. Kay et al. (2015) cite Lorenz (1963) *twice* as a main purpose for doing the CESM Large Ensemble project:

      The influence of small, initial condition differences on climate projections in the CESM-LE parallels initial condition impacts on weather forecasts (Lorenz 1963). After initial condition memory is lost, which occurs within weeks in the atmosphere, each ensemble member evolves chaotically, affected by atmospheric circulation fluctuations characteristic of a random, stochastic process (e.g., Lorenz 1963; Deser et al. 2012b). As we will show, internal climate variability has a substantial influence on climate trajectories, an influence that merits further investigation, comparison with available observations, and communication. Evaluating the realism of internal climate variability simulated by the CESM-LE is challenging, especially on decadal time scales, but vital (e.g., Goddard et al. 2013), especially given differences in model variability (e.g., Knutson et al. 2013). Model biases can degrade the realism of simulated internal variability and forced climate responses and we therefore encourage users of the CESM-LE to understand relevant model biases and their potential ramifications.

      Deterministic Nonperiodic Flow, Lorenz 1963, completely undermines the null hypothesis of CAGW that climate can’t change without an external forcing.

      Thphphht. From TAR WGI Ch. 1:

      Predictability, global and regional

      In trying to quantify climate change, there is a fundamental question to be answered: is the evolution of the state of the climate system predictable? Since the pioneering work by Lorenz in the 1960s, it is well known that complex non-linear systems have limited predictability, even though the mathematical equations describing the time evolution of the system are perfectly deterministic.

      The climate system is, as we have seen, such a non-linear complex system with many inherent time scales. Its predictability may depend on the type of climate event considered, the time and space scales involved and whether internal variability of the system or variability from changes in external forcing is involved. Internal variations caused by the chaotic dynamics of the climate system may be predictable to some extent. Recent experience has shown that the ENSO phenomenon may possess a fair degree of predictability for several months or even a year ahead. The same may be true for other events dominated by the long oceanic time-scales, such as perhaps the NAO. On the other hand, it is not known, for example, whether the rapid climate changes observed during the last glacial period are at all predictable or are unpredictable consequences of small changes resulting in major climatic shifts.

      And of course, the ever-popular statement from the TAR WGI Executive Summary:

      The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. Addressing adequately the statistical nature of climate is computationally intensive and requires the application of new methods of model diagnosis, but such statistical information is essential.
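
      As a minimal sketch of that last sentence, predicting a probability distribution by generating an ensemble: 30 synthetic members sharing one forced trend plus independent AR(1) “internal variability”, with every parameter value invented for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(50)
        forced = 0.02 * years  # an assumed 0.2 C/decade forced trend

        def ar1_noise(n, phi=0.6, sd=0.15):
            """Red noise standing in for internal variability."""
            x = np.zeros(n)
            for i in range(1, n):
                x[i] = phi * x[i - 1] + rng.normal(0.0, sd)
            return x

        trends = [10.0 * np.polyfit(years, forced + ar1_noise(50), 1)[0]
                  for _ in range(30)]
        print("50-yr trends (C/decade): %.2f to %.2f, mean %.2f"
              % (min(trends), max(trends), float(np.mean(trends))))
        # Members scatter around the forced 0.20 C/decade; only the ensemble
        # statistics, not any single member, recover the forced signal.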

      For some reason, I’m pretty much thinking that the IPCC’s stance isn’t one of Chaos Denial.

      Talk of external “forcings” means you have not understood DNF63, at all.

      Yes, because in it, Lorenz goes on at length about how, e.g., varying levels of insolation couldn’t possibly ever affect the long-term statistics of weather.

      • rgates
        OK that’s a fair point, so Lorenz DNF63 has not been ignored, that’s good to hear. I’m not a researcher in the field so haven’t read much of the latest literature.

        The focus tends to be on the atmosphere with chaotic processes. However I think that in regard to climate, chaos in oceanic circulation and mixing is more important. There is so much heat in the ocean that if – hypothetically – there was chaotic-nonlinear emergent pattern in oceanic vertical mixing (cooling the surface, warming the deep ocean) then you would have chaotic variations in climate with a fractal signature, coming just from the ocean’s internal dynamics.

        This could serve up climate change on century-to-millennial timescales without necessarily any external forcing.

        However there are of course forcings, eg Milankovich, atmospheric perturbations etc. As Cartwright found, there are periodically forced nonlinear oscillators. When the forcing is strong, the time signature of the forcer is clear. However with weak forcing it is not.

      • >OK that’s a fair point, so Lorenz DNF63 has not been ignored, that’s good to hear. I’m not a researcher in the field so haven’t read much of the latest literature.

        Your recognition is unexpected and appreciated, ptolemy2. Please accept my apology for misjudging you, and the sarcasm it prompted in my previous response.

        >This could serve up climate change on century-to-millennial timescales without necessarily any external forcing.

        Problem with the way that you’ve described the mechanism is that we’d expect surface/atmospheric warming to then be associated with subsurface ocean cooling, and since the mid-1950s they’ve been warming:

        https://climexp.knmi.nl/data/itemp2000_global.png

        Indeed, most of the estimated TOA radiative imbalance can be accounted for by the estimated rate of ocean heat uptake over that interval, which is consistent with an external radiative forcing from, e.g., CO2.

        Several commenters here (stevenreincarnated and AK are two I can think of quickly) have argued the possibility of varying internal heat distribution leading to albedo changes (e.g., clouds) such that secular atmospheric warming trends are matched by increasing ocean heat content over the same interval. I can’t dismiss those arguments out of hand, but I’m also far from convinced it unseats CO2 as the main driver since 1950.

      • I can’t dismiss those arguments out of hand, but I’m also far from convinced it unseats CO2 as the main driver since 1950.

        but when you start adding in the way min and max change in different directions, and the amount the daily swing changed, they start to make a compelling case.

      • Problem with the way that you’ve described the mechanism is that we’d expect surface/atmospheric warming to then be associated with subsurface ocean cooling…

        Maybe simply energy redistribution resulting in little change in OHC, but big changes in albedo and water vapor?

      • Maybe simply energy redistribution resulting in little change in OHC, but big changes in albedo and water vapor?

        Yes, that is the premise; I think that is how the ocean’s decadal oscillations alter surface temps.
        Just changes in the direction the prevailing winds blow, or a warm spot that becomes high pressure so the jet stream goes around it (the blob).

      • I can’t dismiss those arguments out of hand, but I’m also far from convinced it unseats CO2 as the main driver since 1950.

        http://cdiac.ornl.gov/ftp/ndp030/global.1751_2013.ems
        http://cdiac.ornl.gov/ftp/Global_Carbon_Project/Global_Carbon_Budget_2015_v1.0.xlsx

        Up to 1979: 111,685 Mt of carbon emitted; 1980 CO2 338.68 PPM
        1980-1999: 118,355 Mt of carbon emitted; 2000 CO2 369.52 PPM
        2000-2015: 135,773 Mt of carbon emitted; 2016 CO2 402 PPM (est)

        Question: is the post 2000 temperature rise greater or equal to the 1980 to 2000 temperature rise?

        http://i.imgur.com/izdZCwi.gif

        About 0.03 C of new adjustment was added to the post-2000 period since mid-2015.

      • >Question: is the post 2000 temperature rise greater or equal to the 1980 to 2000 temperature rise?

        Even if I didn’t have the orbiting (A)MSU estimates all but memorized, I could guess “less than”, PA.

        Why are you asking me about the last 16 years when the claim I made refers to the past 66?

      • >Maybe simply energy redistribution resulting in little change in OHC, but big changes in albedo and water vapor?

        Anything’s possible, JCH. And as everyone knows, over six decades a tenth of a degree change in vertically averaged ocean temps down to two kilometers is piddly. How can they even measure to that precision?

      • >but when you start adding in the way min and max change in different directions, and the amount the daily swing changed, they start to make a compelling case.

        … for noisy, imperfect daily data improperly aggregated, micro6500. Don’t ask me how, we’ve already established that I can’t tell you because I don’t understand what you’re doing.

        Get it published in a respectable journal. I might then pay more attention.

      • for noisy, imperfect daily data improperly aggregated, micro6500. Don’t ask me how, we’ve already established that I can’t tell you because I don’t understand what you’re doing.

        You sure have a lot of insults over something you don’t understand.

      • >You sure have a lot of insults over something you don’t understand.

        Thank you, micro6500, that’s particularly gratifying given the amount of effort I put into packing so much rudeness into such a minimal amount of text.

        I’ll tell you what I think is rude, it goes something like this:

        1) I make an argument.
        2) You present the results of your analysis to rebut it.
        3) I investigate them by asking you questions about the method, downloading your data, and attempting to reproduce your results.
        4) I propose some ideas about why I think your method fails and why I think your results are spurious.
        5) You complain that I don’t understand what you’re doing and exit the conversation in a huff without bothering to explain what it is I don’t understand.
        6) Time passes.
        7) I make an argument.
        8) You present the results of your analysis to rebut it as if 1-6 had never happened.

        Here’s what me going out of my way to be insulting might look like: you’re an idiot.

        For the record, I don’t actually think that. What I really think is that your novel method might have some actual merit, and I said so two years ago (or so) when we first spoke about it. What I’d suggest you do is seek out coaching from someone with relevant expertise and whom you also trust to help you identify the errors I strongly believe are there and get it published.

        Then I might pay more attention to it.

        I’m definitely not going to be swayed by you whining about my good-faith critiques. I don’t know too many scientists who aren’t blunt and to the point when it comes to error. Capice?

      • I’ll tell you what I think is rude, it goes something like this:

        1) I make an argument.
        2) You present the results of your analysis to rebut it.
        3) I investigate them by asking you questions about the method, downloading your data, and attempting to reproduce your results.
        4) I propose some ideas about why I think your method fails and why I think your results are spurious.
        5) You complain that I don’t understand what you’re doing and exit the conversation in a huff without bothering to explain what it is I don’t understand.
        6) Time passes.
        7) I make an argument.
        8) You present the results of your analysis to rebut it as if 1-6 had never happened.

        Now, I have a few minor differences on what happened at 3: you went to examine my work and I didn’t hear back, and a while later you told me you had looked at it and didn’t like something, but didn’t really remember, as it was a long time ago. So we tried again. Then I got a lot of questions asking for things that are already there, and suggestions to see if I can find help explaining what I am doing wrong.

        For the record, I don’t actually think that. What I really think is that your novel method might have some actual merit, and I said so two years ago (or so) when we first spoke about it. What I’d suggest you do is seek out coaching from someone with relevant expertise and whom you also trust to help you identify the errors I strongly believe are there and get it published.

        I haven’t found this person, and in the meanwhile I have added additional looks at the data, and at how temps change in relation to solar.
        Meanwhile, I know the collection of observational facts I have shows there’s no loss of ability to cool. This morning (after a calm, cloudless night and prior day), before sunrise, the grass in my backyard was 32–33 F on my IR thermometer while the air temp was 48 F. It’s cooling radiatively just fine at night.

        And I don’t have a keyboard for long periods, and sometimes wait until later for a lengthy reply. I might have forgotten a reply, but I don’t go away in a huff.

  35. Here is the variability of the 44,000 years of station data and the actual station measurements of Min and Max temps in North America, as well as the daily rising temp, and the following night’s falling temp. This is 1980 to 2015 (I used 1980 because that’s when my TSI measurements start).

    https://micro6500blog.files.wordpress.com/2016/10/min-and-range-44-0k-360-days-1980-to-2015.png

  36. “In a paper based on a January lecture he declared, “For more than three decades, macroeconomics has gone backwards” by abandoning a commitment to objective facts in favor of models.
    The “dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest ‘post-real’,” he said.”

    Obviously macro economists need a few tips on effective use of models from climatologists. Or should it be the other way round?

    Cheers.

  37. Willard – in reply to your criticism in your comment of October 6, 2016 at 1:26 am

    see, for example (single author :-)). [Also, my article in BAMS was reviewed].

    F. Giorgi, 2005: Climate Change Prediction. Climatic Change 73: 239. DOI: 10.1007/s10584-005-6857-4

    which discussed this subject. Giorgi writes

    “….because of the long time scales involved in ocean, cryosphere and biosphere processes a first kind predictability component also arises. The slower components of the climate system (e.g. the ocean and biosphere) affect the statistics of climate variables (e.g. precipitation) and since they may feel the influence of their initial state at multi decadal time scales, it is possible that climate changes also depend on the initial state of the climate system (e.g. Collins, 2002; Pielke, 1998). For example, the evolution of the THC in response to GHG forcing can depend on the THC initial state, and this evolution will in general affect the full climate system. As a result, the climate change prediction problem has components of both first and second kind which are deeply intertwined.”

    If the climate system is both a boundary value and an initial value problem (which I agree with), it is an initial value problem!

    I commented on the F. Giorgi article in my post

    https://pielkeclimatesci.wordpress.com/2011/04/29/climate-science-myths-and-misconceptions-post-4-on-climate-prediction-as-an-boundary-value-problem/

    Roger Sr.

    • > Giorgi writes […]

      Then follows a quote in which we find a citation to Pielke 1998.

      Yet again, it’s Pielkes all the way down.

      Quoting Palmer 1998, to which that letter handwaves, might be more appropriate than deducing a truism (if A & B, therefore A) from a possibility (“it is possible that climate changes also depend on the initial state of the climate system”).

      ***

      > in reply to your criticism in your comment

      Yet it is unresponsive to it: constraining the state space involved makes more sense than armwaving to a framework which gives very little in the end.

      Systems are not problems, BTW.

      • Millyun years based on a tu quoque.

        Got to love dry humor.

      • Millyun years based on a tu quoque.

        Sheer arm-waving. Maybe a squirrel?

      • Number of “initial value”: 0.

        The author’s suggestion:

        The climate community could meet these challenges in several ways. Longer and more densely-sampled paleo records could illuminate the behavior of ENSO farther back in time. More extreme tests of climate models – e.g., under mid-Holocene or glacial conditions – could produce larger ENSO changes that are more detectable in the face of sampling uncertainty. Alternate ENSO metrics – such as assimilation and forecast skill, or regressions scaled by ENSO amplitude – could highlight mechanisms with less sampling variability than that associated with ENSO spectra.

        [A figure caption from the PDF was interleaved here: “Figure 3. Distribution of wait times between moderate-to-strong warm event peaks, for CM2.1 (black line) and a Poisson process with CM2.1’s average wait time of 8 yr (red lines).”]

        No mention of INITIALIZE ALL THE PROBLEMS!

      • Haven’t you read the post, AK?

        It goes from “one body made an ad hoc definition of climate as the 30-year average of weather” to “30-year statistics might be just as dependent on the initial state as the weather three weeks from today”.

        Therefore, millyun years.

        My turn:

        [W]eather forecasters are strongly biased towards data collection as the most important problem to tackle. They tend to regard computer models as useful, but of secondary importance to data gathering. Of course, I’m generalizing – developing the models is also a part of meteorology, and some meteorologists devote themselves to modeling, coming up with new numerical algorithms, faster implementations, and better ways of capturing the physics. It’s quite a specialized subfield.

        Climate science has the opposite problem. Using pretty much the same model as for numerical weather prediction, climate scientists will run the model for years, decades or even centuries of simulation time. After the first few days of simulation, the similarity to any actual weather conditions disappears. But over the long term, day-to-day and season-to-season variability in the weather is constrained by the overall climate. We sometimes describe climate as “average weather over a long period”, but in reality it is the other way round – the climate constrains what kinds of weather we get.

        http://www.easterbrook.ca/steve/2010/01/initial-value-vs-boundary-value-problems/

      • But over the long term, day-to-day and season-to-season variability in the weather is constrained by the overall climate.

        Yup. But “the long term” could easily mean 30,000 years. Or 300,000 years. The idea that it’s constrained to 30 years just because a political body picked that number is a myth, as SOD demonstrates.

        Anyway, my point was that it isn’t “Pielkes all the way down.” Plenty of other work pointing the same way.

      • > But “the long term” could easily mean 30,000 years. Or 300,000 years.

        Or 3,000,000 years. Or 3,000,000,000 years.

        And that’s conservative.

      • Or 3,000,000 years. Or 3,000,000,000 years.

        Exactly. But if you have to wait even 30,000 years to get a good statistical constraint of “day-to-day and season-to-season variability in the weather” then the first 30 years of it remain an initial value problem.

        And don’t forget that the beginning of any arbitrary time-slice represents its own initial value.

        Thus the bit with ENSO.

        Nor should we forget that all this is studying the model behavior.

        Just because the model can replicate a few small aspects of the real world doesn’t mean it replicates anything else.

      • > Exactly.

        Or not.

        It’s hard to tell
        Better to say everything at once

        Everything is possible, thus
        Everything else may be wrong

        Millyuns of years to make emerge this
        Rational architectonic

        Go team!

      • Everything is possible, thus
        Everything else may be wrong

        Which is why GCM’s are worse than useless when it comes to constraining climate behavior.

      • Indeed, which means we need to embrace specifying climate as a problem we’ll never be able to solve.

        The meteorological fallacy strikes again.

        Indeed, which means we need to embrace specifying climate as a problem we’ll never be able to solve.

        So there you are trying to drive screws with a hammer, and I tell you it’ll never work.

        And you say “you’re saying I’ll never get these screws driven.”

        The meteorological fallacy strikes again.

        It’s not a “fallacy” and calling it so is sheer question-begging.

        Just because you’d be happier if GCM’s weren’t “stoopid” and could “be relied upon for policy decisions” doesn’t make it so. It just makes you self-deluding.

      • > trying to drive screws with a hammer

        Let’s go with a can opener instead. At least we’ll then be sure never to solve anything.

        ***

        > It’s not a “fallacy” and calling it so is sheer question-begging.

        The fallacy should be obvious – it derives a necessity (the A&B |- A above) out of possibility (If A, then nothing happens).

      • Let’s go with a can opener instead. At least we’ll then be sure never to solve anything.

        Or you could invent a screwdriver.

        The fallacy should be obvious – it derives a necessity (the A&B |- A above) out of possibility (If A, then nothing happens).

        Nope.

        It’s not a case that’s appropriate for Aristotelian logic.

        Besides, I was treating your “Everything is possible, thus Everything else may be wrong” as metaphor. Since I can’t see how it makes sense literally in this situation.

        GCM’s are worse than useless for predicting global climate behavior because they have a large probability of giving you a wrong answer while leaving you persuaded it’s right.

        As for policy, if we accept a risk, which is inevitable with such uncertainty/ign0rance, then what do we need GCM’s for?

        2° (or 1.5°)? Sheer nonsense. Makes policy people happy because it channels their decisions, but those channels are also worse than useless.

      • > It’s not a case that’s appropriate for Aristotelian logic.

        It’s actually the way conjunction is defined in propositional calculus. There are logics that don’t include PC. I’ll let you find them yourself.

        Best of luck.

        ***

        > Or you could invent a screwdriver.

        Sure. Since you’re becoming boring, dear AK, here may be the one you’re looking for.

        I hope one day you’ll realize that trying to solve climate as an initial-value problem will underwhelm you for a while. The best I could find from Senior was one season over a small region for two parameters.

        Meanwhile, please do continue to posit your can opener.

      • As for policy, if we accept a risk, which is inevitable with such uncertainty/ign0rance, then what do we need GCM’s for?

        That’s a question I’ve raised in this very forum many times, AK. I’m usually left to answer it for myself: because it’s entirely possible that we’re not going to be able to meaningfully reduce emissions soon enough to avoid appreciable damage. AOGCM’s *might* be of some utility for planning adaptation.

        We for darn sure don’t need AOGCMs to help us sort out ECS (or to produce the “correct” value of ECS) if delta-F is roughly zero.

        Even with anthropogenic forcings near zero, climate models could have some utility since interannual natural variability is such that, say, growing seasons in particular locales are not consistently optimal for all the crop types normally grown there.

        But who knows? The thing about finite understanding and the incidental ignorance which accompanies it is that we don’t always know where a particular line of inquiry will lead or how it might be of practical use. I personally don’t mind covering as many bases as possible.

      • That’s a question I’ve raised in this very forum many times, AK. I’m usually left to answer it for myself: because it’s entirely possible that we’re not going to be able to meaningfully reduce emissions soon enough to avoid appreciable damage.

        That’s a good reason for a large focus on developing the right tool for the job. Which GCM’s aren’t

        AOGCM’s *might* be of some utility for planning adaptation.

        Most likely not. Again, they’d probably be worse than useless through making people think they know something when they don’t.

        Even with anthropogenic forcings near zero, climate models could have some utility since interannual natural variability is such that, say, growing seasons in particular locales are not consistently optimal for all the crop types normally grown there.

        Especially since it’s highly probable that “climate” routinely changes every few decades on its own.

        But I wasn’t asking whether climate models are useful, I was asking whether GCM’s are useful. In policy, since they clearly have some utility for what they were originally used for, which was to create hints regarding mechanisms and how they might work.

        I personally don’t mind covering as many bases as possible.

        Including making a real effort to develop models that don’t depend on the assumptions that underlie GCM’s?

      • >That’s a good reason for a large focus on developing the right tool for the job. Which GCM’s aren’t

        So you continue to repetitively *assert*, AK.

        >Most likely not.

        You’re not likely to gain much traction with me by plucking probabilities out of thin air, especially not when the topic is a massive, complex, chaotic physical system like climate.

        >Especially since it’s highly probable that “climate” routinely changes every few decades on its own.

        A conclusion to which the CESM Large Ensemble adds support.

        >But I wasn’t asking whether climate models are useful, I was asking whether GCM’s are useful.

        See above for one possibility, particularly larger ensembles of the *same* AOGCM. You might also read the bit of Kay et al. (2015) I keep quoting about thinking probabilistically.

        AOGCMs belong to the set of climate models in my personal taxonomy. My arguments don’t rely on the specific distinction, but if you wish, substitute “AOGCM” for every instance I instead wrote “climate model” above.

        >In policy, since they clearly have some utility for what they were originally used for, which was to create hints regarding mechanisms and how they might work.

        Cannot any type of model be said to have utility for hypothesis formation? Is it ever possible to project/predict/forecast *without* using some type of model?

        It seems to bear repeating that we don’t need any stinking Stoopid Modulz to know that whatever the “true” value of ECS is, it becomes more and more moot as delta-F approaches zero.

        >Including making a real effort to develop models that don’t depend on the assumptions that underlie GCM’s?

        Any model, or type of model, is necessarily an incomplete simplification. Hence I’m dubious that any real effort exists which would not succumb to vague critiques of its assumptions. It should be possible to replace “bad” assumptions in any given model, or type of model, without necessarily having to resort to the drastic (and expensive) method of throwing the baby out with the bathwater and starting over from scratch.

        That said, I’m no AOGCM purist and am open to novel methods which show promise. Emphasis on *show*, which entails actual execution and specific descriptions of its weaknesses and capabilities. I much prefer evaluating concrete results than answering to rhetorical hypotheticals and special pleadings.

      • @brandonrgates…

        >That’s a good reason for a large focus on developing the right tool for the job. Which GCM’s aren’t

        So you continue to repetitively *assert*, AK.

        Nope.

        Argument by assertion is typically defined as “argu[ing] a point by merely asserting that it is true, regardless of contradiction.”

        What I am repeating (in various ways) is a conclusion of a development (with, granted, some ellipsis) in the thread above and especially in the posts and comments linked.

        Although I started out to demonstrate that more than one person had argued that “30-year climate” was a myth and probably an initial value problem just like one-week weather, I extended it to the obvious (IMO) conclusion that GCM’s were not fit for use trying to constrain the behavior of “climate”.

        If you wish to debate the issue, perhaps you should tell me which step to start at:

        •       There is no evidence that either the “real climate” system or the GCM’s purporting to model it can stabilize “to get a good statistical constraint of ‘day-to-day and season-to-season variability in the weather’” even in 30,000 years, therefore “first 30 years of it remain an initial value problem.”

        •       GCM’s, like the much finer-scaled weather models, are currently unable to provide good predictions past a week or so, and won’t be able to until, at least, GCM’s can operate at grid/time-scales similar to current weather models.

        •       Even if some demonstration could be developed that both the “real climate” system and some GCM model(s) can be completely reduced to a “boundary value” problem within a 30-year time-frame, there is no reason to suppose, much less assume, that the basin of attraction demonstrated by a GCM will be at all similar to the basin of attraction of the real climate system. Even if they had similar numbers of dimensions in their phase space, which they don’t, by around 20 orders of magnitude.

        •       Therefore, to the extent that climate modelling is either an initial value problem or a boundary value problem, GCM’s are worse than useless (for constraining the behavior of “real climate”) because of their (unknown but plausibly high) probability of providing a wrong answer that convinces people it’s right.

        That summarizes the chain of reasoning behind my “assertion” statement that GCM’s are the wrong tool for the job (of constraining climate behavior).

        You’re not likely to gain much traction with me by plucking probabilities out of thin air, […]

        This is ridiculous. Are you just making fumblydiddles? It wasn’t “out of thin air,” it was justified by the phrase immediately following.

        See above for one possibility, particularly larger ensembles of the *same* AOGCM. You might also read the bit of Kay et al. (2015) I keep quoting about thinking probabilistically.

        Again, you’re missing my point, which is that there’s no reason for assuming that any GCM will provide a statistical behavior that duplicates the statistical behavior of the actual climate.

        IIRC you agreed with me a while back that what’s being looked for under the rubric “climate” is some “shape” or understanding of the attractor (actually the basin of attraction with attractor(s) within it) and how it reacts to a change in boundary condition such as doubling the pCO2.

        But just because that basin for a model (assuming that it doesn’t take multiple millennia to stabilize) shows some similarity to the single observed instance of real climate behavior doesn’t mean its statistics are anything like the statistics of the real climate.

        Cannot any type of model be said to have utility for hypothesis formation?

        Yes, which was why I limited my question of utility to policy.

        Is it ever possible to project/predict/forecast *without* using some type of model?

        Of course not. It’s models all the way down. Our brains, in effect, form models of the real (observed) world in terms of neuro-chemical activity in the brain.

        Any model, or type of model, is necessarily an incomplete simplification. Hence I’m dubious that any real effort exists which would not succumb to vague critiques of its assumptions.

        There’s nothing “vague” about the critique of GCM’s above.

        It should be possible to replace “bad” assumptions in any given model, or type of model, without necessarily having to resort to the drastic (and expensive) method of throwing the baby out with the bathwater and starting over from scratch.

        The “bad” assumption is that a GCM or similar cellular model can reproduce the behavior of a real complex non-linear system with 20 or so orders of magnitude more dimensions of phase space.

        That said, I’m no AOGCM purist and am open to novel methods which show promise.

        Well, as others have said (even here under this head post), we desperately need such innovations. And what I’m doing is trying to show how and why the current system of models is the wrong tool, so that greater attention can be applied to inventing the right one.

        You can spend day after day using a hammer on screws, creating almost useless messes at best, or you can stop, admit that you don’t have the right tool for the job, and focus on inventing a new one that works.

        But the focus won’t shift to inventing the new tool until you admit that the one you have doesn’t work.

      • >Argument by assertion is typically defined as “argu[ing] a point by merely asserting that it is true, regardless of contradiction.”

        Which there is in this instance, AK, one of your own creation. Your argument is self-refuting.

        >That summarizes the chain of reasoning behind my “assertion” statement that GCM’s are the wrong tool for the job (of constraining climate behavior).

        Or *any* tool. See the 30,000 year problem and Willard’s comments about can openers.

        >Again, you’re missing my point, which is that there’s no reason for assuming that any GCM will provide a statistical behavior that duplicates the statistical behavior of the actual climate.

        Or *any* type of model. Do not conflate my not accepting your point with my missing it.

        I take it as a given that no model will ever *exactly* duplicate the statistics of weather. It’s a definitional thang.

        If we require omniscience to set policy, we may be sunk.

        >IIRC you agreed with me a while back that what’s being looked for under the rubric “climate” is some “shape” or understanding of the attractor (actually the basin of attraction with attractor(s) within it) and how it reacts to a change in boundary condition such as doubling the pCO2.

        I don’t recall the exact discussion, but I can agree that’s one thing we’re attempting to do with climate models.

        >Yes, which was why I limited my question of utility to policy.

        Good, noted.

        >Of course not. It’s models all the way down. Our brains, in effect, form models of the real (observed) world in terms of neuro-chemical activity in the brain.

        I couldn’t agree more.

        >But the focus won’t shift to inventing the new tool until you admit that the one you have doesn’t work.

        You’re asserting they don’t work on the basis that they’re silent on 30,000 year natural variability, and by extension, 4.543 billion year natural variability. I’m loath to go hunting for Unobtanium when the scope of policy we’re trying to set is concerned with timescales on the order of a century, especially when it may very well take to the end of time to search that state space for an answer which you would *choose* to accept.

        But supposing *you* find a solution to the 30 ka problem in lieu of simply positing one. Why could AOGCMs necessarily NOT incorporate it?

      • Your argument is self-refuting.

        Nope.

        Or *any* tool. See the 30,000 year problem and Willard’s comments about can openers.

        Or *any* type of model. Do not conflate my not accepting your point with my missing it.

        I take it as a given that no model will ever *exactly* duplicate the statistics of weather. It’s a definitional thang.

        Perhaps not exactly, but there’s certainly the possibility that modeling systems could be built that didn’t depend on what is, in essence, a “brute force” approach to modeling.

        If we require omniscience to set policy, we may be sunk.

        But, of course, we don’t. We know there’s some risk to digging up all the fossil carbon and dumping it into the system.

        For that matter, the focus on GCM’s, and especially 2/1.5°, may have blinded the policymakers to some of the risks.

        The answer, IMO, is to focus on weaning our civilization off of fossil carbon as quickly as is consistent with other, equally important priorities.

        The identification and relative importance of those priorities is a political problem, and the only input it needs from climate science is the fact that there’s an unquantifiable risk.

        Since GCM’s have little skill or use informing such policies, and probably won’t pending at least several generations of “Moore’s Law”, their use could be scaled back considerably, freeing resources for a search for a better approach.

        You’re asserting they don’t work on the basis that they’re silent on 30,000 year natural variability, and by extension, 4.543 billion year natural variability.

        That looks like a straw man. But I’ll just assume it’s a failure of communication.

It isn’t the 30,000 year problem, it’s the fact that we have no way of knowing whether either the “real climate” or “teh modulz” remain initial-value problems at time-scales smaller than 30,000 years. But whether they do or not, they have nothing but false confidence to contribute to “policy […] concerned with timescales on the order of a century”.

>But supposing *you* find a solution to the 30 ka problem in lieu of simply positing one. Why could AOGCMs necessarily NOT incorporate it?

        Because CPU cycles are expensive.

      • AK,

        >Nope.

        Yep.

        >Perhaps not exactly, but there’s certainly the possibility that modeling systems could be built that didn’t depend on what is, in essence, a “brute force” approach to modeling.

This is progress. I’m dubious when Denizens start talking about Teh Modulz having “no skill”, not being “accurate”, etc. It’s what prompts me to invoke the omniscience argument, and as you specifically recognize that we’re not omniscient, I’ll drop it.

I evaluate a model’s utility by the magnitude of its errors — that it will have at least some is inevitable. I then *assume* that any of its projections/predictions/forecasts will never be better than that error margin — they will in fact almost surely perform even *worse*. All I can do is rely on the modelers for an indication of how big the forward-looking uncertainties will be.
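In code, the evaluation rule just described amounts to something like this minimal sketch (all arrays and numbers are synthetic stand-ins for illustration, not real observations or model output):

```python
import numpy as np

# Minimal sketch of the rule above: score a model by its hindcast error,
# then refuse to quote any projection tighter than that error margin.
rng = np.random.default_rng(0)
observed = np.cumsum(rng.normal(0.02, 0.1, 50))   # stand-in 50-year anomaly record
hindcast = observed + rng.normal(0.0, 0.15, 50)   # stand-in model hindcast, same years

rmse = np.sqrt(np.mean((hindcast - observed) ** 2))
central_projection = 2.0                          # stand-in forward projection, C
print(f"hindcast RMSE: {rmse:.2f} C")
print(f"projection: {central_projection:.1f} C, uncertainty no tighter than +/- {rmse:.2f} C")
```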

I cannot, will not, rely on mere possibilities of modeling systems that don’t depend on brute force, or any other characterization of Teh Modulz you can assert. Why should I defer my decision making until such unknown time as a hypothesized can opener becomes reality?

        >The answer, IMO, is to focus on weaning our civilization off of fossil carbon as quickly as is consistent with other, equally important priorities.

It always makes me happy when you say that because it’s the one thing upon which we both absolutely agree.

        >>You’re asserting they don’t work on the basis that they’re silent on 30,000 year natural variability, and by extension, 4.543 billion year natural variability.
        >
        >That looks like a straw man. But I’ll just assume it’s a failure of communication.

        It’s where your argument self-refutes. Being sparing with CPU cycles and development funds does not make the (real) epistemological problem you’ve raised go away. See again Unobtanium, No True Scotsman, Special Pleading … they add up to a hypothetical can opener. Some might call it vaporware.

        The only thing worse than an empty promise is an empty hope. Please stop — I can’t see that it helps our *mutual* desire to reduce CO2 emissions.

      • @brandonrgates…

>I cannot, will not, rely on mere possibilities of modeling systems that don’t depend on brute force, or any other characterization of Teh Modulz you can assert. Why should I defer my decision making until such unknown time as a hypothesized can opener becomes reality?

        What decision-making?

If policy, the models are best not used, because you have no idea whether the error margin you’ve been given is correct. And most likely, if it is correct, it’s so broad as to be useless.

        How much CO2 can be dumped into the air without crossing the 2° line? If a lower value is correct, a precautionary policy based on the uppermost plausible value would be hideously wasteful and unfair to developing peoples.

        If an upper value is correct, a non-urgent policy would risk very serious consequences.
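For a sense of the scale involved, here is a back-of-envelope sketch using the transient climate response to cumulative emissions (TCRE), whose assessed likely range is roughly 0.8–2.5 °C per 1000 PgC. The TCRE framing and numbers are brought in here for illustration only; the comment itself does not use them:

```python
# Back-of-envelope: how the allowable carbon budget for a 2 C target swings
# across an assessed TCRE range of roughly 0.8-2.5 C per 1000 PgC.
target = 2.0  # degrees C

for tcre in (0.8, 1.6, 2.5):            # low / mid / high, C per 1000 PgC
    budget = target / tcre * 1000.0     # allowable cumulative emissions, PgC
    print(f"TCRE {tcre:.1f} C per 1000 PgC -> budget ~{budget:4.0f} PgC")
# The budget spans roughly 800 to 2500 PgC: the low and high ends imply
# radically different policies, which is the dilemma stated above.
```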

        There’s little if any real chance that the current developmental direction of GCM’s will provide significant improvement in the time-frame in question.

>It’s where your argument self-refutes. Being sparing with CPU cycles and development funds does not make the (real) epistemological problem you’ve raised go away.

        Two problems with that:

First, if equally useful results could be obtained with 1/100th the CPU cycles, multiple runs of the sort this post is about could be made far more often. While that wouldn’t directly address the issue of differing attractors, it would certainly help to quantify the nature of the model’s attractor (see the toy sketch below).

        Second, an alternative approach to modeling might very well offer far superior fidelity, even while reducing the processing cost.
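The toy sketch promised above, with the Lorenz-63 system standing in for the “cheap model” (an assumption for illustration only; it is nobody’s climate model). The point is just that many inexpensive runs from near-identical starts let you estimate the spread of states on the model’s attractor:

```python
import numpy as np

# Toy "cheap model, many runs": a large ensemble of Lorenz-63 integrations
# from near-identical starts, used to characterize the statistics of the
# model's attractor rather than any single trajectory.
def lorenz_step(s, dt=0.01, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (r - z) - y, x * y - b * z])

rng = np.random.default_rng(42)
n_members, n_steps = 100, 5000
finals = []
for _ in range(n_members):
    s = np.array([1.0, 1.0, 20.0]) + rng.normal(0, 1e-12, 3)  # tiny nudge, a la CESM-LE
    for _ in range(n_steps):
        s = lorenz_step(s)
    finals.append(s)

finals = np.array(finals)
print("ensemble mean state:", finals.mean(axis=0).round(2))
print("ensemble spread    :", finals.std(axis=0).round(2))
```

One hundred members of this toy cost essentially nothing; the analogous question is what a hundred-fold cheaper climate model would buy in mapping out the spread.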

>See again Unobtanium, No True Scotsman, Special Pleading … they add up to a hypothetical can opener. Some might call it vaporware.

        Sorry, I don’t buy it. I have substantial experience in finding my way around technical “impossibilities” (albeit in another sub-field), and I’d say there’s a much better than even chance that a workable alternative approach could be found, with some looking.

>The only thing worse than an empty promise is an empty hope.

        Not empty. And I must say your repetition of Wilbur’s scoffing “can opener” sophistry leaves me suspicious that your agenda includes not wanting better models.

>Please stop — I can’t see that it helps our *mutual* desire to reduce CO2 emissions.

Since GCM’s have little relevance to that goal, it doesn’t matter. What the search for a better approach does have a very good chance of helping with is understanding and modeling the climate.

      • >What decision-making?

        I had to decide to believe that AGW is real and that it presents a credible *potential* future threat if emissions are allowed to continue unfettered. That necessarily relied on projections obtained from AOGCMs by way of IPCC assessment reports. That I publicly lobby for reducing emissions to near zero gives me a duty to be as informed as *possible* (as limited time and aptitude allow) about the strengths *and* weaknesses of Teh Stoopid Modulz, a duty I take seriously.

        I don’t normally like being wrong. In this case, I very much hope that I am — but I cannot in good conscience rest on my hopes. Too much is at stake for me to indulge wishful thinking.

        >There’s little if any real chance that the current developmental direction of GCM’s will provide significant improvement in the time-frame in question.

        And yet the consequences of our decisions (or perpetual indecision as the case may be) will be in accordance with whatever the real properties of the system are. Our ignorance will do nothing to change that. Only our actions (or lack thereof) matter.

        Thus, your argument is academic … in the disparaging sense of being useless. Plus, since it tends toward policy paralysis, your argument *could* be worse than useless — it *could* be detrimental.

        >Two problems with that: If equally useful results could be obtained with 1/100th the CPU cycles, multiple runs of the sort this post is about could be made far more often.

        C’mon, AK. 1/100th of a near-infinite state space is 1/100th of near-infinity.

        Maybe quantum computing can save us (they’d have to be good at butterflies) but until then I’m going with: Hypothetical can openers don’t even exist hypothetically just because you can will them into imagination with the magical word “if”.

        >And I must say your repetition of Wilbur’s scoffing “can opener” sophistry leaves me suspicious that your agenda includes not wanting better models.

If I had a nickel. Did you read the bone he threw you last night? That thing is a “Teh Modulz R Stoopid” quoteminer’s mother lode.

        My wet dream is Modulz so not-stoopid they could nail AMO/ENSO/PDO/ETCETERO from initial conditions for a century. My *suspicion* is that “skeptics” would then look at the near perfect 100-year hindcast and (rightfully) conclude that such a perfect long-term weather forecast must have been falsified.

        Instead, we have the absurdity that “skeptics” look at the imperfections and claim that the results must have been falsified. I would think it funnier if such types didn’t have political influence.

        Were I you, I’d be more suspicious of those who argue that we need to “wait and see” because … Stoopid Modulz and Uncertainty Monsters. That’s a foot-dragging tactic which has at least logical consistency going for it.

      • AK, PS:

        >Sorry, I don’t buy it.

        That’s your right. Nothing I can do about it.

        >I have substantial experience in finding my way around technical “impossibilities” (albeit in another sub-field), and I’d say there’s a much better than even chance that a workable alternative approach could be found, with some looking.

        That doesn’t factor in my decision-making. I mean no personal offense or insult by this, but you are an unknown entity to me going by untraceable initials which may not even represent your real name. Therefore I can’t vet your credentials or expertise by any of the normal ways I’d otherwise have at my disposal. I don’t deem your arguments good enough to accept on their merits alone. So, my turn to not buy it.

On the possibility you have the chops you allege and you’re looking for some other form of gainful employment, I do encourage you to read this paper Willard linked to for you; the author is recruiting.

      • @brandonrgates…

>I had to decide to believe that AGW is real and that it presents a credible *potential* future threat if emissions are allowed to continue unfettered. That necessarily relied on projections obtained from AOGCMs by way of IPCC assessment reports.

        Well, you’re entitled to your opinion.

        I’ve been watching the “global warming” nonsense since ’97 (on and off), and it’s been clear to me since then that, as Trump said, it’s a scam. An effort to use the (potential) problem as a stalking horse for a political/ideological agenda.

        Obviously, such a scam using a “problem” to justify a bunch of major political changes that may or may not be relevant to solving it will work much better if there’s a real problem behind it.

        Having studied Kuhn and some of the more modern ideas based on his, it’s also obvious to me that the “global warming” paradigm built around the IPCC and the “scientists” at its core is a fraud. Not a real scientific paradigm (in the Kuhnian sense).

My own judgement of the risks comes from having studied other non-linear systems, most of them evolved (in the Darwinian sense) to the point of being non-chaotic. Even with climate, “Teh Stoopid Modulz” are far too “stoopid” to provide any useful input to estimating the risks of sudden change, much less the impact of more CO2 on those risks.

And, IMO, climate is far from the most important risk. Except for fantasy nightmare effects on agriculture, most of the supposed impacts of “global warming” are (AFAIK) supposed to come either via eco-collapse or via the continuation of human processes that aren’t sustainable even without any CO2 effects.

        But the non-climate-related effects of dumping all that fossil CO2 into the system also risk unwanted effects on eco-system stability.

        For instance, tilting the balance between C3 and C4 plants to a point not experienced since before the Panama Straits closed could have unknown effects on a system that’s both more complex, and potentially more subject to sudden reorganization than the climate.

        Filling up so many of the various carbon sinks/sources to a level at (pseudo-)equilibrium with the current pCO2 has unknown risks of unexpected systemic responses when they get too full. (Since we don’t understand how they work, we have little idea how to constrain the probabilities of such blowback.)

        With all that, I (personally) don’t see what difference “Teh Stoopid Modulz”’s stoopidity makes.

>Thus, your argument is academic … in the disparaging sense of being useless. Plus, since it tends toward policy paralysis, your argument *could* be worse than useless — it *could* be detrimental.

        IOW let’s hide the fact that “Teh Stoopid Modulz” are so stupid and use them to scare people?

        That sort of behavior is a major reason skeptics are so convinced there’s nothing but a scam here.

>C’mon, AK. 1/100th of a near-infinite state space is 1/100th of near-infinity.

        Which just shows you don’t know how to think outside the box here.

>Did you read the bone he threw you last night? That thing is a “Teh Modulz R Stoopid” quoteminer’s mother lode.

I scanned it. Palmer also evidently can’t think outside the box, as his call for “PhDs in the fields of physics and mathematics” demonstrates.

>Were I you, I’d be more suspicious of those who argue that we need to “wait and see” because … Stoopid Modulz and Uncertainty Monsters. That’s a foot-dragging tactic which has at least logical consistency going for it.

        Nope.

        And I’m not “suspicious” of such people (at Climate Etc. for instance) because they’ve already demonstrated their tendentious approach. To say “we don’t have to do anything about a risk until you’ve proven with 100% certainty there is one” is to demonstrate only fitness to be blocked out of the debate.

Just as, IMO, to say “there’s a risk, therefore we must do whatever it takes to remove it, no matter what the collateral damage,” does. Which is how I would class many of the alarmists here, especially those who pretend to an understanding they don’t have. A description that, IMO, also fits the watermelons at the core of the “global warming” scam.

>That doesn’t factor in my decision-making. I mean no personal offense or insult by this, but you are an unknown entity to me going by untraceable initials which may not even represent your real name. Therefore I can’t vet your credentials or expertise by any of the normal ways I’d otherwise have at my disposal.

        Well, yes, you’ve repeatedly demonstrated you don’t understand all that computer stuff.

        (There’s a reason I went to extra trouble to build a link into my “untraceable initials” when we all had to set up WP ID’s to post here.)

>I don’t deem your arguments good enough to accept on their merits alone. So, my turn to not buy it.

        That’s fine, we’re just bouncing opinions off each other. My real target audience is people who would agree with me based on their own expertise.

>On the possibility you have the chops you allege and you’re looking for some other form of gainful employment, […] the author is recruiting.

        Not me (see above). My chops apply to IT support for business processes (and changes thereto), and more generalized thinking out of the box. If others who are prepared to think out of the box want my input, I can give it to them, but I strongly suspect they already know the things I would point out.

>My chops apply to IT support for business processes (and changes thereto),

Currently I do business process work, and then the data work to support it, for big high-tech and medical-device companies.
Ironically, I’m partly responsible for all the use of simulation in the same group of companies.

      • >I’ve been watching the “global warming” nonsense since ’97 (on and off), and it’s been clear to me since then that, as Trump said, it’s a scam. An effort to use the (potential) problem as a stalking horse for a political/ideological agenda.

        “The Socialists are Coming” is political science, not physical science, AK.

        >And I’m not “suspicious” of such people (at Climate Etc. for instance) because they’ve already demonstrated their tendentious approach. To say “we don’t have to do anything about a risk until you’ve proven with 100% certainty there is one” is to demonstrate only fitness to be blocked out of the debate.

        I can’t fault your reasoning in the latter sentence … it’s one place you and I very much agree. I’d be careful where I swing that tendentious axe if I were you, she’s double-headed — talking of scams and frauds isn’t exactly using kid gloves.

        Please continue with the conspiracy theories. I like those.

        >Just as, IMO, to say “there’s a risk therefore we must do whatever it takes to remove it, no matter what the collateral damage,” does.

        My, what a ridiculous strawman you’ve built there.

        >If others who are prepared to think out of the box want my input, I can give it to them, but I strongly suspect they already know the things I would point out.

        More arm waving to your Field of Dreams. People with superior out of the box thinking skills execute, they don’t just brag about it on the Interwebz. Remember, they only come IF you build it.

        >But the non-climate-related effects of dumping all that fossil CO2 into the system also risk unwanted effects on eco-system stability.

        I think that whole section of your post is well-argued and much in line with my own thinking.

        >That’s fine, we’re just bouncing opinions off each other.

        True.

  38. “The images, numbers 1 through 30, each represent the results of a single run of the CESM starting from a unique, one/one-trillionth degree difference in global temperature…”

    What does this mean? The model does not contain a single parameter for “global temperature”, the model contains a temperature for each “cell”. Did they initialize each cell in the manner described?

As someone stated previously, the sample size is far too small to reasonably define the bounds of “natural variability”.

  39. dougbadgero ==> The original NCAR/UCAR press release is here. The paper that produces the images used in this post is here. I have not been able to get a copy of the SI.

The idea is that they have made 40 runs, starting the CESM at 1963 and running it for 50 years. Each run is identical except for a “less than” one/one-trillionth degree change in the value for starting (initial) global atmospheric temperature. (The paper uses only 30 of the runs, as those were all that were available at the time the study began — according to personal communication with Dr. Deser, the lead author.)
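Schematically, the recipe reads like the sketch below. This is not CESM code, just an illustration of spawning members by nudging one initial temperature field at far below measurable precision. (Whether the nudge was applied cell by cell or as one global offset is exactly dougbadgero’s question above; the sketch assumes cell-by-cell round-off noise, so treat that detail as an assumption.)

```python
import numpy as np

# Sketch only (not CESM code): spawning an ensemble of "identical" runs by
# perturbing one initial temperature field at round-off scale, i.e. well
# below one-trillionth of a degree, per the press release.
rng = np.random.default_rng(1963)
t_init = 288.0 + rng.normal(0, 5, size=(180, 360))  # stand-in 1x1 degree field, kelvin

members = [t_init + rng.uniform(-1e-14, 1e-14, t_init.shape)  # the "butterfly"
           for _ in range(30)]

# The members are indistinguishable at the start...
print(np.abs(members[0] - members[1]).max())  # order 1e-14 K
# ...yet, fed to a chaotic model for 50 simulated years, each yields a
# different trajectory, which is the whole point of the ensemble.
```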

“(The paper uses only 30 of the runs, as those were all that were available at the time the study began — according to personal communication with Dr. Deser, the lead author.)”

      I guess the problem of climate change apathy is so urgent ya just can’t wait to complete a project before publishing.

captdallas2 0.8 +/- 0.3 ==> Now that’s not fair to the people at NCAR/UCAR or to Dr. Deser. The study they have done was a long-term effort, with scads of work done over a long time period.

      The CESM-LE project, which is separate from the study effort, also is a long-term effort.

Here is the link for LENS, the Large Ensemble Community Project.

Then wake me up when they finish; then we can see the 10 runs they assumed would be available but were not. To me, though, if the 10 were not available at the start of the project but 30 were, I would have called the paper “30 Earths”, or a progress report on the 40-Earths project, or something that didn’t require personal correspondence to find out why their 40-Earths project only contained 30. It is sloppy; the kind of thing all too common in rush-to-publish climate science.

      • Capt.

Brandon’s take is that the 40 becoming 30 (under the header of NCAR) is just PR flackery, the press release and the paper having differing ‘aims’. Nothing to see here. We should just move along.

      • danny, “Nothing to see here. We should just move along.”

        Right.

      • capt ==> It is only the press release that is called 40 Earths — in reference to the entire CESM-Large Ensemble, which is what the press release is about.

The press release mentions one (of about 100) papers produced using data from the CESM-LE, by Dr. Deser, in which the first 30 available runs were used for her study, “Forced and Internal Components of Winter Air Temperature Trends over North America During the Past 50 Years: Mechanisms and Implications”. It is from that paper that the image (and point) under discussion is taken.

        All of the CESM-LE data is available online.

Kip, right, the NCAR press release mentions the 40 while referencing the 30, and the 30-run paper was published while the project was ongoing. My immediate reactions: what would be the spread of the remaining 10 earths not included? Why did the authors think 30 was enough? And of the 100-plus papers published, how many would be impacted by runs completed after they published?

If it only takes 30 runs to draw so many conclusions, why bother with 40?

capt ==> While you bring up a good point, it is a general point applicable to almost all science studies — cohort size. How many patients? How many mice? How many generations of fruit flies? How many runs of the model? How many runs of how many models?

Often this devolves down to “How much time do we have?” “How many researchers?” “How much money for this study?” “How many research assistants can we hire?” “How much computer time can we wrangle out of the university? Out of NASA?”

        They have done what they have done — so the Dreser study with the long title stands alone. Have at it if you wish. I was only interested in this one particular issue and I have taken an essayist’s first brush at that issue.

Deser’s study doesn’t really depend on the number of runs … really … unless those runs would have vastly broadened the “spread”. I feel it is wrong-headed from the start, as it ignores important lessons from Chaos Theory.

Not everyone agrees with me (see all of the above comments). Some with unicorn-phobia are seeing unicorns under their beds, in their closets, and in my essay; hard-core statisticians believe that chaos (as in Chaos Theory) can be tamed, or at least brought into focus, with statistics.

Actually Kip, there is some chance of bringing statistical inference to chaos. Wang at MIT has been developing the shadowing method and has, in some simple cases, proofs of long-term convergence to the correct statistics. It’s a very hard problem, and how it will play out I don’t know.

      • David Young ==> Yes, I am aware of this movement. Robert Brown at Duke also works on this aspect of Chaos. I made passing reference to this in the essay: “…albeit there is a field of study in statistics that focuses on this problem, it does not involve the type of average or mean used in this case)”

        Time will tell if the work pans out — if there is any sense to it.

        The deeper question is: If [some types of] chaotic systems tend to converge on correct statistics, does this help us in any practical way, for instance, with climate modeling?

Starting points are a part of the final result because, as we have seen over the course of history, the sign of a global warming (GW) trend depends on the time window chosen: the overall GW trend over the last 10,000 years can be down, as in reality we know is the case, while GW can also be up over the last 100,000 years, as we know it has been; and still, as we also know to be true, GW can be down over the last 4,000 and 2,000 years (and perhaps the next 20-30 years), even while being up prior to the aforesaid epochs.
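The window-dependence of trend sign is easy to demonstrate with a purely synthetic series (a slow cooling plus a faster oscillation; no climate data involved):

```python
import numpy as np

# Toy demonstration that the *sign* of a fitted linear trend depends on the
# window: a slow cooling plus a faster oscillation, entirely synthetic.
t = np.arange(10000.0)
series = -0.0001 * t + 0.5 * np.sin(2 * np.pi * t / 1000.0)

for window in (10000, 1000, 200):
    seg = series[-window:]
    slope = np.polyfit(np.arange(window), seg, 1)[0]
    print(f"last {window:5d} 'years': trend {slope:+.2e} per year")
# The same record yields downward trends over the long windows and an
# upward trend over the short one, with no change in the underlying process.
```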

  41. Pingback: Lorenz validated | Vince Werber's Rants!

  42. The modeling framework consists of 30 simulations with the Community Earth System Model (CESM) at 1° latitude/longitude resolution, each of which is subject to an identical scenario of historical radiative forcing but starts from a slightly different atmospheric state. Hence, any spread within the ensemble results from unpredictable internal variability superimposed upon the forced climate change signal.

    At best, the ensemble mean represents (i.e. “approximates”) the modeled climate change signal. That the model mean is an accurate approximation to the actual “forced climate change signal” is something that can only be tested with out of sample data, that is: the future realization of the process that is being modeled. The most relevant forecast is the forecast based on the initial value for the simulation that is closest to the actual, but unknown, initial value.
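In schematic code, the decomposition being discussed is just an average over members plus the residuals from it (the array shapes and numbers below are assumptions for illustration, not the paper’s data):

```python
import numpy as np

# Schematic of the forced/internal decomposition: `runs` stands in for 30
# simulated 50-year winter-trend maps, shape (members, lat, lon).
rng = np.random.default_rng(7)
forced_true = 1.0                                      # stand-in forced warming, C
runs = forced_true + rng.normal(0.0, 0.8, (30, 24, 48))

forced_est = runs.mean(axis=0)     # the "EM" panel: ensemble mean
internal = runs - forced_est       # each member's internal variability

print("spread of member-mean trends:", runs.mean(axis=(1, 2)).std().round(3))
print("mean error of ensemble-mean estimate:",
      np.abs(forced_est - forced_true).mean().round(3))
# Note: this recovers the *model's* forced signal; whether that matches the
# real system's forced signal is exactly the out-of-sample question above.
```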

And that “best” depends upon the sample of variations being “unbiased” with respect to the true but unknown initial climate, something that is not knowable, or at least not known from a single sample of 30 starting points. It seems from the presentation in the original that the sample was unbiased with respect to the best known estimate of the true initial value; since that estimate is undoubtedly inaccurate to some degree, the sample is most likely biased with respect to the unknown true value.

    There is, therefore, no reason to conclude that the model mean is an accurate forecast of future climate. To conclude something like that, on present evidence, requires a huge leap of faith.

    I took this paper to be the beginning of awareness by some in the modeling community that everything known about the climate is known with sufficient inaccuracy that none of the forecasts, mean forecasts, median forecasts, etc from the GCMs is dependable for policy purposes.


  43. Can bugs in the software exert more “force” on the output of the model than simulated butterfly wings?

    • jim2 ==> Consider the case in neurosciences — here — the title is “A bug in fMRI software could invalidate 15 years of brain research”.

      So, I would say “Yes, bugs in the software could/would/might invalidate model output.”

Climate Science fights this off with multiple models — which unfortunately are cross-fertilized with bits of code, concepts, approaches, prejudices and methods, and thus not truly independent of one another; they may be, in a sense, cross-contaminated.
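jim2’s question can be made semi-quantitative with a toy chaotic model: a round-off “butterfly” scrambles an individual trajectory but barely moves the long-run statistics, while a small parameter error (standing in here for a bug) shifts the statistics themselves. Purely illustrative, with Lorenz-63 as the stand-in model:

```python
# Bug vs. butterfly in toy form: compare the long-run mean of z in Lorenz-63
# under (a) a round-off perturbation of the start and (b) a small parameter
# error standing in for a software bug. Purely illustrative.
def mean_z(z0, r, n=400_000, dt=0.005):
    x, y, z = 1.0, 1.0, z0
    total = 0.0
    for _ in range(n):
        x, y, z = (x + dt * 10.0 * (y - x),
                   y + dt * (x * (r - z) - y),
                   z + dt * (x * y - (8.0 / 3.0) * z))
        total += z
    return total / n

print("control               :", round(mean_z(20.0, 28.0), 2))
print("butterfly (z0 + 1e-12):", round(mean_z(20.0 + 1e-12, 28.0), 2))
print("'bug' (r off by 5%)   :", round(mean_z(20.0, 29.4), 2))
# The butterfly run's long-run mean sits close to the control's (within
# sampling noise); the "bug" run's mean is systematically shifted, i.e. the
# bug biases the model's climate, not just its weather.
```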

      • It’s an interesting endeavor.

It was reported that one of the studies had redone the work, and the outcome was unchanged.

      • Surely you’re not going with the “some people say” and “believe me” shtick.

      • This has little to do with climate science.

      • Steven Mosher

        Unicorns and bugs might invalidate models.

This is a classic unicorn appeal, Kip.

      • Steven ==> Try to keep up — this thread is about whether bugs in model software — any software modeling something — could invalidate results. I pointed the reader to the current ongoing flap in the fMRI world, where a bug was found in the software that evaluated data from MRIs in fMRI research. This has thrown the whole field into a tizzy. Lots of research being done trying to find out who used which software, which studies may be faulty, etc.

It says nothing in particular about climate models — and wasn’t meant to (except the obvious inference: science that depends on software being correct and bug-free is at the mercy of software producers).

It says nothing in particular about climate models — and wasn’t meant to (except the obvious inference …

        There is no obvious inference… just smear ’em while you can.

      • jch ==> This particular little conversation is between two software developers — myself and jim2 — on a rather narrow topic of the effects of bugs in software as it might relate to the overall topic of this essay.

        I’m sorry if you feel threatened by the content. I will try to include “trigger warnings” to protect the sensitivities of more tender students here who might be harmed by exposure to topics that might upset them.

Mosher – Butterflies are bugs.

Hey Gatesy, this one’s especially for you, although Mosh might take a look too.

    https://youtu.be/n6Mq_kZQI7c

  45. Danny Thomas

Maybe not your view, but your words in response to my comment that I wouldn’t be ‘the decider’ about if/when the ensemble would be ‘accepted’ into (what some call) mainstream science implied the following:

    “I assume you are a taxpaying member of society living in a representative democracy. If true, no, you’re not *the* decider, but you are *a* decision-maker.”

  46. For believers in the usefulness of averages for predictive purposes.

    Say a coin can only show a head or a tail when fairly thrown.

    After 10 heads or tails in a row, there will always be a foolish gambler convinced that the “law of averages” decrees that there is a better than 50/50 chance of the next throw resulting in a different result than the last ten.

    Or after 100 throws of the same result in a row.

Now, the number of heads is supposed to equal the number of tails in any sequence of throws. Of course this is nonsense for any odd number of throws. What about even numbers? The longer the sequence, the less likely it is that, at any given point, the number of heads will be equal to the number of tails. And, of course, the more likely that a longer unbroken sequence of heads or tails will occur.

    And so on.

    You may be absolutely certain that heads and tails will somehow “even out” at some indeterminate time in the future. You just can’t know how many, or when, and you know you will be absolutely wrong 50% of the time anyway.

Even when you think you know there are only two outcomes from an event, it helps not at all. What will happen on the next throw, or the hundredth, or the first one in Ulaanbaatar after the seventh full moon of the astral cycle?

Not even chaos. No wonder the IPCC states that future climate states are unpredictable. The physics of the atmosphere seems more complicated than predicting the result of a single coin throw.
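The coin-toss claim checks out numerically: the proportion of heads converges to 1/2 while the absolute head-to-tail gap typically grows, so an exact “evening out” becomes less likely with more throws. A quick simulation:

```python
import numpy as np

# The "law of averages" checked numerically: the proportion of heads
# converges to 1/2, but the absolute head-tail gap grows with more throws.
rng = np.random.default_rng(123)

for n in (100, 10_000, 1_000_000):
    heads = rng.binomial(n, 0.5, size=2000)   # 2000 independent games of n throws
    gap = np.abs(2 * heads - n)               # |heads - tails| in each game
    print(f"n={n:>9}: proportion {heads.mean() / n:.4f}, "
          f"mean |heads-tails| {gap.mean():8.1f}")
# The proportion tightens around 0.5 while the mean gap grows roughly as
# sqrt(n), so exact "evening out" becomes *less* likely, as argued above.
```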

    Now is the time for a Warmist to deny, divert, and confuse, and start talking about birds or aeroplanes flying, or pointing out that water boils or that night follows day – all supposedly demonstrating the intellectual superiority of practitioners of the settled science of Climatology.

    So easy this climate stuff. If it doesn’t change, it will remain the same. An assumption that the climate will change seems reasonable. I’d bet on it, if I could find anyone to take the bet. Even Warmists don’t seem to be quite that silly, but I live in hope.

    Cheers.

  47. From this set of model runs it’s finally obvious that butterflies cause climate change. Time to bring back DDT.

  48. Berényi Péter

    There is an even more profound effect in climate due to chaos.

First of all, it is a proven fact that a reproducible non-equilibrium open thermodynamic system produces entropy at as high a rate as is possible.

    Journal of Physics A: Mathematical and General Volume 36 Number 3
    2003 J. Phys. A: Math. Gen. 36 631
    doi:10.1088/0305-4470/36/3/303
    Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states
    Roderick Dewar

    Now, the terrestrial climate system is a non equilibrium open thermodynamic system, in which most of the entropy production happens when incoming shortwave solar radiation gets absorbed and eventually thermalized. The rate at which it happens depends on the planetary albedo and nothing else. The lower the albedo is, the more incoming radiation is absorbed and the higher the rate of entropy production is.

    Therefore, if the terrestrial climate system were reproducible, Earth would be pitch black. It is, in fact, not.

    Consequently, the terrestrial climate must not be a reproducible thermodynamic system. And indeed, it is irreproducible.

Definition: a system is reproducible if, for any pair of macrostates (A, B), A either always evolves to B or never does.

    Because the climate system is chaotic, microstates belonging to the same macrostate can evolve into different macrostates in a short time. That’s the butterfly effect.

    So, chaos not only makes prediction impossible in the long run, but determines color of Earth as well, as seen from space. A bluish-white marble, instead of a black orb.

Still, there is strong indication that albedo is a regulated quantity. Its sweet spot is just not as low as possible; it is located somewhere around 30%. No one knows for sure why, because there is no theoretical hint for irreproducible systems.

    Journal of Climate, Volume 26, Issue 2 (January 2013)
    doi: 10.1175/JCLI-D-12-00132.1
    The Observed Hemispheric Symmetry in Reflected Shortwave Irradiance
    Aiko Voigt, Bjorn Stevens, Jürgen Bader and Thorsten Mauritsen

    “Climate models generally do not reproduce the observed hemispheric symmetry, which the authors interpret as further evidence that the symmetry is nontrivial.”

Because of irreproducibility, Jaynes entropy can’t even be defined inside the climate system, so entropy production and internal fluxes can’t be localized properly. Still, entropy production of the entire system is well defined, because it exchanges energy with its environment only by electromagnetic radiation. In EM radiation each photon carries the same amount of entropy, irrespective of its wavelength. In a photon gas it is 3.6k (where k is the Boltzmann constant). However, thermal IR photons leaving Earth are not a gas, but radiation. On a closed surface far away from Earth, at each point of said surface, the direction of photons crossing it is well defined: they are coming from Earth, which has a negligible angular diameter as seen from there. And radiation, once it has entered space, has nothing to interact with, so its entropy is not changed any more. That means entropy carried away by momentum-related degrees of freedom is zero, entropy per photon is 2.7k, and the 4/3 multiplier proposed by some does not apply. Therefore Wei Wu and Yangang Liu are in error; the actual entropy production rate is closer to 0.9 W/m2/K than to their proposed value.

    Reviews of Geophysics, 14 May 2010
    DOI: 10.1029/2008RG000275
    Radiation entropy flux and entropy production of the Earth system
    Wei Wu, Yangang Liu
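For readers wanting to check the figures in the preceding paragraph: taking a global-mean emitted flux of roughly F ≈ 240 W/m² and an effective emission temperature of roughly T ≈ 255 K (standard textbook values, not taken from the comment), the two candidate entropy fluxes work out as follows. A back-of-envelope sketch only:

```latex
% Back-of-envelope check, assuming F ~ 240 W/m^2 and T ~ 255 K.
\[
  \frac{F}{T} \approx \frac{240~\mathrm{W\,m^{-2}}}{255~\mathrm{K}}
  \approx 0.94~\mathrm{W\,m^{-2}\,K^{-1}}
  \quad \text{(no $4/3$ factor; entropy per photon $\approx 2.7k$)}
\]
\[
  \frac{4}{3}\,\frac{F}{T} \approx 1.25~\mathrm{W\,m^{-2}\,K^{-1}}
  \quad \text{(photon-gas $4/3$ factor; entropy per photon $\approx 3.6k = \tfrac{4}{3}\times 2.7k$)}
\]
```

The first line reproduces the “closer to 0.9 W/m2/K” figure; whether the 4/3 factor applies to free-streaming radiation is exactly where the comment parts ways with Wu and Liu.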

That said, one still wonders how the inter-hemispheric entropy production rates relate to each other. If what Voigt et al. say is true, that is, that the annual amount of absorbed shortwave radiation is the same for the two hemispheres in spite of the huge difference between their surface albedos, then something related to albedo is strictly regulated in the Earth system. As the climate system is a heat engine, that something must be the rate of entropy production.

We are venturing into unexplored territory here because of the chaotic nature of said system; that is, the Maximum Entropy Production Principle seems to be invalid in this case. Still, there seems to be regulation, so it would be worth measuring entropy production rates for an extended period, with good geographic resolution. It could even shed light on why outgoing thermal IR radiation shows an imbalance of 1.2 W/m2 between hemispheres while absorbed shortwave radiation matches almost perfectly.

Unfortunately the data provided by the CERES satellites are insufficient to do that, because Earth is not a perfect gray body, so the spectral deviation of outgoing IR from the Planck curve gives a non-negligible (negative) contribution, while the shortwave scattering properties of the atmosphere are also ill defined, which contributes to entropy production as well.

However, the first thing to do is to develop a firm theory of irreproducible non-equilibrium open thermodynamic systems. This is one of the last unexplored fields of classical physics, and the lack of such a theory makes theoretical efforts in climate science futile.

On the other hand, as soon as a theoretical foundation is given, it would shed some light on the problem, so we could start a meaningful modelling process.

  49. Pingback: Generating regional scenarios of climate change | Climate Etc.

  50. Pingback: Generating regional scenarios of climate change – Enjeux énergies et environnement

  51. Pingback: Weekly Climate and Energy News Roundup #244 | Watts Up With That?

  52. There’s a movement afoot to require the results of all clinical trials to be published, not only the ones that show good results for Miracle Drug X.

Surely at least one of the 30 runs must have shown our planet congealing into a solid ball of ice? Or bursting into flames?

    I’m not questioning the sincerity of the modelers, who are themselves aware of the limitations of the models, and will discard any run that gives obviously nonsensical results. But then they are caught in the trap of selecting the run that in their expert opinion yields the truest result. That allows human fallibility to creep in the back door.

    • old fossil,

      I agree.

As far as I know, the modellers have to tweak, tweak, and tweak again, and then impose sufficient constraints, so that the number of undesirable results is minimised. These are the results that don’t support CO2-induced planetary heating — as in “hottest year EVAH!”

      Even then, they can’t come up with a model that reflects the fact of the Earth cooling since its creation. Apparently programming reality into a fantasy toy computer model eludes even the most fanatical GHE adherent.

      It’s not science, it’s climatology – the refuge of scoundrels, delusional psychotics, grant grabbers and the rest of the second raters who can only dream of recognition in a field of real science.

      Why earn a Nobel Prize when you can just as easily claim to have been awarded one? Who would know the difference?

      Why bother to get a degree in science when you can just claim to be a scientist? It’s only climatology after all!

      Cheers.

  53. Dr Curry: Ref. “Generating Regional Scenarios”, you may wish to suggest the client increase the project scope to include Antarctica as a key region to study. If this project will attempt to look forward 30 plus years, whatever happens to Antarctic ice will impact sea level rise, and that in turn will shift the political urgency to take radical measures.

The human response to events was a key problem I saw in the past when running long-range projections using integrated dynamic model packages. I tried to get around it by running surveys of decision makers I could cajole into interviews, in which I posed the model results five to ten years in the future and asked them what the expected government and corporate reaction to such conditions would be. I attempted to include such reactions in the model logic, but as you can imagine this was rather hard for management to accept (people are used to an “Excel world” in which the functions and relationships remain rigidly coded).

By the way, the individuals best able to predict government response to future events were all lawyers. The most unreliable were scientists and engineers. Economists gave very heterogeneous answers. I have about 20 years of hindsight since I was doing those surveys to judge their predictions against actual events.

    In conclusion: if your client is engaged in a very expensive project then it should consider shifts in the emissions profile as a function of sea level rise, which in turn can be (somewhat) projected by Antarctic temperature, precipitation and surrounding ocean behavior. This requires increasing project scope although I don’t think you will have to prepare that 1 by 1 km fine grid for the Antarctic subproject.

    • In this particular region, sea level rise is not an issue (it is inland)

      • The point is that a change in the pace of sea level rise would alter the emissions profile, which in turn makes the most likely profile shift.

        For example, say an Antarctic model identifies that sea level will rise 30 cm by 2040 and 2 meters by 2100. This may not alter your client’s behavior, but there may be a significant push to reduce emissions. You would have to assume that governments will respond by doing X. The X is set by interviewing proxies (people you think mimic world leaders in the way they think).

        The opposite could happen. The model ensemble could show less sea level rise, possibly no temperature change, by 2040. This would make leaders relax. Emissions would shift to a higher case.

        You see, the eventual result depends on these feedbacks between people who are embedded in the full system. I argue that a feedback loop exists which shifts the emissions pathway.

        I also wonder about whether you can predict Antarctic regional climate. And whether lower emissions necessarily lead to less sea level rise. I have read about models which show more sea level rise with RCP4.5 due to changes in precipitation.

  54. Thank you for just about the clearest explanation of the folly of projections from current “predictive” computer models that I have seen. My favorite quote from two German physicists, experts in thermodynamics and the modeling of thermodynamic systems:
    “The running of computer climate models is a very expensive form of computer game entertainment”

    You stated:

    “There are, in the climate system, known causes and there remains the possibility of unknown causes.”

This brings to mind the famous quote from Donald Rumsfeld, Secretary of Defense during the Iraq War:
    “There are things that we know; there are things that we don’t know; and there are things that we don’t know that we don’t know”

    A very apt description of the state of climatology!

  55. Louis Hissink

In the mineral exploration industry we have the rule of thumb that if the system is non-linear, all bets are off. This paper reminds me of the anecdote about inventing an ink pen to use in an orbiting space vehicle: the US designs a million-dollar ink pen; the Russians simply deploy a pencil.

    • Louis,

      Talking of thumbs and writing –

      “And an answer came directed in a writing unexpected,
      (And I think the same was written in a thumbnail dipped in tar)
      ‘Twas his shearing mate who wrote it, and verbatim I will quote it:
      “Clancy’s gone to Queensland droving, and we don’t know where he are.”

      Take your choice –

High tech – million-dollar ink pen.

      Low tech – pencil

      Oz tech – thumbnail dipped in tar!

      Horses for courses. Chaos rules!

      Cheers.

  56. Pingback: Determinism and predictability | Climate Etc.

  57. Pingback: Determinism and predictability – Enjeux énergies et environnement

  58. Pingback: Chaos & Climate – Part 4: An Attractive Idea | Watts Up With That?

  59. Gobacktoschool

    So many lines to say nothing….

    Of course a model has a natural variablity, it’s own natural variability, but it has a natural variability and it is a chaotic phenomena, such as in the real world, and analysing the distributions (a mean is a good start) of an ensembles of simulations just gives you … the CLIMATE !!! (of the model). Climate is is the statistics of weather. To which extent, it is realistic, that’s another question, but of course averaging 100 times would be better than 30 times…