Model structural uncertainty – are GCMs the best tools?

by Judith Curry

Rarely are the following questions asked:  Is the approach that we are taking to climate modeling adequate?  Could other model structural forms be more useful for advancing climate science and informing policy?

Why GCMs?

Here GCM refers to the global coupled atmosphere–ocean models whose simulations under the Coupled Model Intercomparison Project (CMIP) are used by the IPCC.

The sociology of GCMs is discussed in a fascinating 1998 paper by Shackley et al., entitled Uncertainty, Complexity, and Concepts of Good Science in Climate Modelling: Are GCMs the best tools?  Stimulated by Shackley’s paper, here are excerpts from an abstract I’ve submitted to a Workshop to be held next October:

The policy-driven imperative of climate prediction has resulted in the accumulation of power and authority around GCMs, based on the promise of using GCMs to set emissions reduction targets and for regional predictions of climate change. Complexity of model representation has become a central normative principle in evaluating climate models, good science, and policy utility. However, not only are GCMs resource-intensive and intractable, they are also characterized by over-parameterization and inadequate attention to uncertainty. Apart from the divergence of climate model predictions from observations over the past two decades, which is raising questions as to whether GCMs are oversensitive to CO2 forcing, the hope for useful regional predictions of climate change is unlikely to be realized on the current path of model development. The advancement of climate science is arguably being slowed by the focus of resources on this one path of climate modeling.

Philosophy of GCMs

Shackley et al. describe the underlying philosophy of GCMs:

The model building process used to formulate and construct the GCM is considered as a prime example of ‘deterministic reductionism’. By reductionism, we mean here the process of ‘reducing’ a complex system to the sum of its perceived component parts (or subsystems) and then constructing a model based on the assumed interconnection of the submodels for these many parts. This is not, of course, a process which necessarily reduces the model in size at all: on the contrary, it normally leads to more complex models, like the GCM, because most scientists feel that the apparent complexity that they see in the natural world should be reflected in a complex model: namely a myriad of ‘physically meaningful’ and interconnected subsystems, each governed by the ‘laws of nature’, applied at the microscale but allowed to define the dynamic behaviour at the macroscale, in a manner almost totally specified by the scientist’s (and usually his/her peer group’s) perception of the system.

This reductionist philosophy of the GCM model is ‘deterministic’ because the models are constructed on purely deterministic principles. The scientist may accept that the model is a representation of an uncertain reality but this is not reflected at all in the model equations: the GCM is the numerical  solution of a complex but purely deterministic set of nonlinear partial differential equations over a defined spatiotemporal grid, and no attempt is made to introduce any quantification of uncertainty into its construction. 

[T]he reductionist argument that large scale behaviour can be represented by the aggregative effects of smaller scale process has never been validated in the context of natural environmental systems and is even difficult to justify when modelling complex manmade processes in engineering.

I just came across a 2009 essay in EOS by Stephan Harrison and David Stainforth entitled Predicting Climate Change: Lessons from Reductionism, Emergence and the Past, which emphasizes this same point:

Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems. The difficulty  is that a successful reductionist explanation need not imply the possibility of a successful constructionist approach, i.e., one where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large, complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies.

Model structural uncertainty

When climate modelers work to characterize uncertainties in their model, they focus on initial condition uncertainty and parametric (parameter and parameterization) uncertainty.  Apart from the issue of the fidelity of the numerical solutions to the physical equations, there is yet another uncertainty.  This is model structural uncertainty, which is described in a paragraph from my Uncertainty Monster paper:

Model structural form is the conceptual modeling of the physical system (e.g. dynamical equations, initial and boundary conditions), including the selection of subsystems to include (e.g. stratospheric chemistry, ice sheet dynamics). In addition to insufficient understanding of the system, uncertainties in model structural form are introduced as a pragmatic compromise between numerical stability and fidelity to the underlying theories, credibility of results, and available computational resources.

The structural form of GCMs has undergone significant change in the past decade, largely by adding more atmospheric chemistry, an interactive carbon cycle, additional prognostic equations for cloud microphysical processes, and land surface models.  A few models have undergone structural changes to their dynamical core – notably, the Hadley Centre model becoming nonhydrostatic.

Structural uncertainty is rarely quantified in the context of subsequent model versions.  Continual ad hoc adjustment of GCMs (calibration) provides a means for the model to avoid being falsified – new model forms with increasing complexity are generally regarded as ‘better’.

The questions I am posing here relate not so much to these changes to model structural form within the current reductionist paradigm, but to more substantial changes to the fundamental equations of the dynamical core, or to entirely new modeling frameworks that may have greater structural adequacy than the current GCMs.  Below are some interesting ideas on new model structural forms that I’ve come across.

Multi-component multi-phase atmosphere

The biggest uncertainty related to climate sensitivity is the fast thermodynamic feedback associated with water vapor and clouds. A number of simplifying assumptions about moist thermodynamics are made in climate models, as a carryover from weather models. For the long time integrations of climate models, accumulation of model errors could produce spurious or highly amplified feedbacks.

Treating the atmosphere as a multi-component multi-phase fluid (water plus the non-condensing gases) could provide an improved framework for modeling processes related to clouds and moist convection, which remains one of the most vexing aspects of current GCMs.  Peter Bannon lays out the framework for such a model in Theoretical Foundations for Models of Moist Convection.  I have long thought that this modeling framework would incorporate the water vapor/condensation-driven processes discussed by Makarieva and colleagues.
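To give a feel for the bookkeeping such a framework formalizes, here is a toy sketch (my own illustrative numbers and function names, not Bannon’s formulation) of treating an air parcel as a mixture of dry air, water vapor, and suspended condensate, rather than as a single ideal gas:

```python
# Toy illustration of multi-component moist thermodynamics: density of a
# parcel treated as a dry-air + vapor + condensate mixture.
R_D = 287.05   # gas constant for dry air, J/(kg K)
R_V = 461.5    # gas constant for water vapor, J/(kg K)

def parcel_density(p, T, r_v, r_c):
    """Density of a parcel at pressure p (Pa) and temperature T (K).

    r_v : mixing ratio of water vapor (kg vapor / kg dry air)
    r_c : mixing ratio of condensate (kg condensate / kg dry air)
    The gas phase obeys the ideal-gas law for the dry-air/vapor mixture;
    condensate adds mass (loading) but exerts no partial pressure.
    """
    # Effective gas constant of the dry-air/vapor mixture, per kg of gas
    R_mix = (R_D + r_v * R_V) / (1.0 + r_v)
    rho_gas = p / (R_mix * T)
    # Condensate loading: extra mass carried per unit mass of gas
    return rho_gas * (1.0 + r_c / (1.0 + r_v))

# A parcel with 10 g/kg of vapor is lighter than dry air at the same p, T,
# while condensate loading makes it heavier again:
rho_dry   = parcel_density(85000.0, 280.0, 0.0,   0.0)
rho_moist = parcel_density(85000.0, 280.0, 0.010, 0.0)
rho_cloud = parcel_density(85000.0, 280.0, 0.010, 0.002)
```

Even this trivial example shows the competing buoyancy effects (vapor lightening vs. condensate loading) that a single-gas treatment glosses over.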

Stochastic models

The leading proponent of stochastic parameterizations in climate models, and now fully stochastic climate models, is Tim Palmer of Oxford (formerly of ECMWF).  The Resilient Earth has a post on this, Swapping Climate Models for a Roll of the Dice.  Excerpts:

The problem is that to halve the size of the grid divisions requires an order-of-magnitude increase in computer power. Making the grid fine enough is just not possible with today’s technology.

In light of this insurmountable problem, some researchers go so far as to demand a major overhaul, scrapping the current crop of models altogether. Taking clues from meteorology and other sciences, the model reformers say the old physics based models should be abandoned and new models, based on stochastic methods, need to be written from the ground up. Pursuing this goal, a special issue of the Philosophical Transactions of the Royal Society A will publish 14 papers setting out a framework for stochastic climate modeling. Here is a description of the topic:

This Special Issue is based on a workshop at Oriel College Oxford in 2013 that brought together, for the first time, weather and climate modellers on the one hand and computer scientists on the other, to discuss the role of inexact and stochastic computation in weather and climate prediction. The scientific basis for inexact and stochastic computing is that the closure (or parametrisation) problem for weather and climate models is inherently stochastic. Small-scale variables in the model necessarily inherit this stochasticity. As such it is wasteful to represent these small scales with excessive precision and determinism. Inexact and stochastic computing could be used to reduce the computational costs of weather and climate simulations due to savings in power consumption and an increase in computational performance without loss of accuracy. This could in turn open the door to higher resolution simulations and hence more accurate forecasts.

In one of the papers in the special edition, “Stochastic modelling and energy-efficient computing for weather and climate prediction,” Tim Palmer, Peter Düben, and Hugh McNamara state the stochastic modeler’s case:

[A] new paradigm for solving the equations of motion of weather and climate is beginning to emerge. The basis for this paradigm is the power-law structure observed in many climate variables. This power-law structure indicates that there is no natural way to delineate variables as ‘large’ or ‘small’—in other words, there is no absolute basis for the separation in numerical models between resolved and unresolved variables.

In other words, we are going to estimate what we don’t understand and hope those pesky problems of scale just go away. “A first step towards making this division less artificial in numerical models has been the generalization of the parametrization process to include inherently stochastic representations of unresolved processes,” they state. “A knowledge of scale-dependent information content will help determine the optimal numerical precision with which the variables of a weather or climate model should be represented as a function of scale.” 
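To make the idea concrete, here is a minimal sketch in the spirit of stochastically perturbed parameterized tendencies (my own toy model and numbers, not ECMWF’s actual SPPT scheme): each ensemble member sees the deterministic parameterized tendency scaled by (1 + r), where r is a red-noise process.

```python
import random

def ar1_noise(n_steps, phi=0.95, sigma=0.1, seed=0):
    """AR(1) 'red noise' r_{t+1} = phi*r_t + innovation, a common pattern
    model; sigma is the stationary standard deviation."""
    rng = random.Random(seed)
    r, out = 0.0, []
    for _ in range(n_steps):
        r = phi * r + rng.gauss(0.0, sigma * (1 - phi**2) ** 0.5)
        out.append(r)
    return out

def integrate(x0, forcing, tendency, n_steps, dt=0.1, perturb=None):
    """Euler-step a scalar toy model dx/dt = forcing - (1+r)*tendency(x)."""
    x = x0
    noise = perturb if perturb is not None else [0.0] * n_steps
    for r in noise:
        x += dt * (forcing - (1.0 + r) * tendency(x))
    return x

damp = lambda x: 0.5 * x                  # stand-in "parameterized" process
x_det = integrate(1.0, 2.0, damp, 500)    # deterministic run settles at 4
members = [integrate(1.0, 2.0, damp, 500, perturb=ar1_noise(500, seed=s))
           for s in range(20)]            # stochastic ensemble spreads about it
```

The point of the ensemble spread is exactly the one made in the excerpt: the parameterized tendency is not a single known number, and representing it deterministically understates the uncertainty.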

Dominant Mode Analysis

Shackley et al. describe Dominant Mode Analysis (DMA):

DMA seeks to analyse a given, physically based, deterministic model by identifying objectively the small number of dynamic modes which appear to dominate the model’s response to perturbations in the input variables. In contrast to the traditional reductionist modelling, this normally results in a considerable simplification of the model, which is simultaneously both reduced in order and linearised by the analysis. The DMA methodology involves perturbing the complex and usually nonlinear, physically based model about some defined operating point, using a sufficiently exciting signal, i.e., one that will unambiguously reveal all the dominant modes of behaviour. A low order, linear model, in the form of a transfer function, is then fitted to the resulting set of simulated input-output data, using special methods of statistical estimation that are particularly effective in this role. As might be expected from dynamic systems theory, a low order linear model obtained in this manner reproduces the quasi-linear behaviour of the original nonlinear model about the operating point almost exactly for small perturbations. Perhaps more surprisingly, the reduced order model can sometimes also mimic the large perturbational response of its much higher order progenitor.
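The recipe is easy to demonstrate on a toy problem (an illustrative example of the same idea, not Shackley et al.’s actual method): perturb a nonlinear model about an operating point with an exciting signal, then fit a first-order linear transfer function to the input-output data by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonlinear_model(u, dt=0.1):
    """A scalar nonlinear 'physically based' model, stepped through time."""
    x, out = 1.0, []
    for uk in u:
        x += dt * (-x - 0.1 * x**3 + uk)   # weakly nonlinear relaxation
        out.append(x)
    return np.array(out)

u_star = 1.1                                   # input holding the model at x* = 1
u = u_star + 0.05 * rng.standard_normal(2000)  # small exciting perturbations
y = nonlinear_model(u)

# Fit the reduced model dy_{k+1} = a*dy_k + b*du_{k+1} (deviations from the
# operating point) by least squares: a first-order transfer function.
dy, du = y - 1.0, u - u_star
A = np.column_stack([dy[:-1], du[1:]])
a, b = np.linalg.lstsq(A, dy[1:], rcond=None)[0]
# Linearizing the model by hand gives a = 1 - 0.13 = 0.87 and b = 0.1, and
# the fit recovers essentially those values from the simulated data alone.
```

The low-order model (one pole, one gain) reproduces the small-perturbation behaviour of the nonlinear progenitor, which is the DMA claim in miniature.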

Network-based models

There is growing interest in the use of complex networks to represent and study the climate system.  This paper by Steinhauser et al. provides some background.  My colleagues at Georgia Tech are at the forefront of this application:  Annalisa Bracco and Konstantin Dovrolis, and also Yi Deng.  And of course, the stadium wave is network based.
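The basic construction is simple: nodes are grid points, and two nodes are linked when their time series are strongly correlated, so that highly connected nodes mark teleconnection hubs. A minimal sketch with synthetic data (illustrative only, not any specific published method):

```python
import numpy as np

rng = np.random.default_rng(1)

n_points, n_months = 30, 240
teleconnection = rng.standard_normal(n_months)   # shared climate signal
series = np.empty((n_points, n_months))
for i in range(n_points):
    w = 1.2 if i < 10 else 0.1    # first 10 grid points carry the signal
    series[i] = w * teleconnection + rng.standard_normal(n_months)

corr = np.corrcoef(series)        # node-by-node correlation matrix
# Link nodes whose series correlate strongly; drop self-links.
adj = (np.abs(corr) > 0.4) & ~np.eye(n_points, dtype=bool)
degree = adj.sum(axis=1)          # high-degree nodes = teleconnection hubs
```

In this toy the ten signal-carrying points emerge as a densely connected hub cluster; with real gridded data, the same degree field highlights regions like the ENSO core.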

JC summary

The numerous problems with GCMs, and concerns that these problems will not be addressed in the near future given the current development path of these models, suggest that alternative model frameworks should be explored.  I’ve mentioned the alternative model frameworks that I’ve come across that I think show promise for some applications.  I don’t think there is a one-size-fits-all climate model solution.  For example, stochastic models should provide much better information about prediction uncertainty, but will probably still not produce useful predictions on regional scales.  Network-based models may be the most useful for regional-scale prediction.  And we stand to learn much about the climate system by trying a multi-component multi-phase model, and also from DMA.

The concentration of resources (financial and personnel) in supporting the traditional GCM approach currently precludes sufficient resources for the alternative methods (although networks and DMA are pretty inexpensive).  Tim Palmer may be successful at marshaling sufficient resources to further develop his stochastic climate modeling ideas.  Unfortunately, I don’t know of anyone who is interested in taking on the multi-component multi-phase formulation of the atmosphere (a particular interest of mine).

I look forward to hearing from those of you who have experience in other fields that develop models of large, complex systems.  In my opinion, climate modeling is currently in a big and expensive rut.

385 responses to “Model structural uncertainty – are GCMs the best tools?”

  1. Mathematical models may be the best tools we have, but scientists like Edward Wegman don’t come to the challenge assuming it is possible to capture nature in a bottle with statistics. GCMs are academics playing with toys — it’s like watching an episode of The Big Bang Theory — and, if limited to teaching tools, they could be valuable. But as magic windows into our future, models are giving power to the most unaccountable, unworldly and fundamentally dishonest sector of Western society.

    • Wagathon: Mathematical models are capable of predicting the behaviour of both man-made and natural systems. But validation can be a research activity as difficult as any attempted. An IPCC that started with the ‘science is settled’ injunction is hardly the body to do this.

      • GCM-production is just another example of cash-for-clunkers government planning — all underwritten by government subsidies — that must be paid for by the ever-dwindling number of taxpayers. We now know too much about the natural components of the currently progressing climate change to be deceived or afraid:

        The first one is an almost linear global temperature increase of about 0.5°C/100 years. The second one is oscillatory (positive/negative) changes, which are superposed on the linear change. One of them is the multi-decadal oscillation… ~Syun-Ichi Akasofu

        These natural factors and others explain the temperature record without any help whatsoever from any alleged man-made causes. There is no global warming beyond what is explained by natural causes and, in fact, there is no room for man-made causes because ‘ground-based warming’ actually plateaued years ago:

        …lower atmosphere satellite-based temperature measurements, if corrected for non-greenhouse influences such as El Nino events and large volcanic eruptions, show little if any global warming since 1979. ~Richard Courtney

      • Mathematical models are capable of predicting the behaviour of both man-made and natural systems and fully capable of getting it very wrong. Actual data compared to model output shows that, so far, it is very wrong.

    • “models are giving power to the most unaccountable, unworldly and fundamentally dishonest sector of Western society” – correct. The current models cannot ever work. Using interactions between small space-time slices they must necessarily diverge from reality very quickly. Like weather models, they cannot ever successfully predict more than a few days ahead. That’s days, not years, decades or centuries. So :-

      “it is wasteful to represent these small scales with excessive precision and determinism” – correct.

      “the old physics based models should be abandoned” – correct.

      “and new models, based on stochastic methods, need to be written from the ground up” – partly correct: new models yes, ground up yes, stochastic questionable.

      I very much doubt that ‘stochastic’ will work – it will surely also be unable to predict more than a very short time ahead. Surely the models need to be built on the major movers of climate, ie. Earth orbit variations, solar cycles, ocean oscillations (including ENSO), atmospheric oscillations (jetstream latitudinal movements etc), etc, etc. Because we don’t yet know how ANY of these work, other than Earth orbit variations, the early models still won’t be much use. So the very first step is to start funding research into all these areas, and drop the absurdly expensive OTT GCMs, together with “research” into CO2.

  2. If scientists want to work on GCMs, more power to them. I have instead taken the strategy of analyzing parts of the climate that are amenable to first-order physics simplifications. It really is rare that something that shows macro characteristics can’t be reduced in terms of its apparent complexity.

    One such climate system is ENSO. Obviously ENSO can be represented in terms of a finite-element hydrodynamics model and numerically computed. Yet there are simplifications that can be applied to that system and the results compared to the empirical data. A recent substantiation of a simplifying sloshing model is found here:
    http://contextearth.com/2014/06/25/proxy-confirmation-of-soim/

    This is probably crushing to those skeptics who were hoping that climate behaviors such as ENSO allow them to hide behind the uncertainty monster — but that’s the way that science works. Things always progress in a forward direction.

    BTW, a group of us at the Azimuth Project are starting to construct an open-source code base that will hopefully allow better predictions of El Niño and ENSO.
    http://azimuth.mathforge.org/discussion/1358/experiments-in-el-nino-detection-and-prediction/?Focus=10683
    The plan is to try various approaches such as network models, the delayed oscillator model, and I will be pushing my Mathieu-equation sloshing model as a comparison yardstick.
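[For readers unfamiliar with the last item: a hedged sketch of the idea, with illustrative parameters of my choosing, not the commenter’s actual SOIM model. The Mathieu equation x'' + (a − 2q·cos(2t))·x = 0 is the standard linear model of parametrically forced sloshing; inside its instability tongues a small periodic forcing pumps up the oscillation.]

```python
import math

def mathieu(a, q, t_end=60.0, dt=0.01, x0=1.0, v0=0.0):
    """Integrate x'' + (a - 2q*cos(2t))*x = 0 with classical RK4;
    return the final (x, v)."""
    def accel(t, x):
        return -(a - 2.0 * q * math.cos(2.0 * t)) * x
    t, x, v = 0.0, x0, v0
    for _ in range(int(round(t_end / dt))):
        # RK4 for the first-order system (x' = v, v' = accel)
        k1x, k1v = v, accel(t, x)
        k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
    return x, v

# q = 0 is a plain oscillator (amplitude stays ~1); a = 1 with modest q sits
# in the first Mathieu instability tongue, so the response grows:
# parametric resonance, the sloshing mechanism.
amp_stable = math.hypot(*mathieu(a=1.0, q=0.0))
amp_unstable = math.hypot(*mathieu(a=1.0, q=0.2))
```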

    • Web, skeptics come in many varieties. I for one am looking for the truth. Good luck on your efforts to better predict the EL NIÑO and ENSO.

      • @HZ: I for one am looking for the truth.

        Have you ever pondered why Diogenes preferred to look for an honest man?

        Tarski gave a mathematical proof of why the truth must remain unknowable. His proof did not rule out the existence of honest folk.

      • Yes, but Web never posts without a passing kick at skeptics. Don’t need a model to predict that.

    • WHT, if I am not mistaken, Contextearth is your site. So wouldn’t it have been more honest to have stated something like, “I discuss this at …”, instead of typing, “A recent substantiation of a simplifying sloshing model is found here:”?

      • squidBoy, you can’t win for losing if you place too many *I* pronouncements in a comment. People prefer the passive for some psychological reason.

        Get around this by saying MNFTIU.

      • WHT, I am sure that all sites are measured by the number of hits they receive. I am also sure that one of the reasons that “http://joannenova.com.au/2014/06/big-news-viii-new-solar-model-predicts-imminent-global-cooling/#comment-1496852” is being released in pieces is to increase the hit rate. The difference is that their method is helping me to build a better understanding of the climate process. David Evans’ genius is that he can teach as well as do!

        What I object to is the nastiness of your comments! If I were to make an analogy, “CAGW proponents are acting like dogs who have had their bones taken away”. Just saying!

    • Mr telescope, does the sloshing model have coupling to the atmosphere so you can account for the weakening and reversal of the prevailing wind?

  3. Climate Theory and Models must include Snowfall and Albedo or they will never provide skillful results.

  4. Judith, This post is so right and it’s about time it was said. We have a couple of papers coming out on this subject and two are in the articles to appear at AIAA Journal, one by Young et al. and the other by Kamenetskiy et al. There is another one in July 2002 in The Aeronautical Journal about turbulent flows that states rather starkly the limitations of the detailed modeling of turbulent flows using the same methods used in GCMs. I posted the exact references on a previous thread in response to Willard. They are easy to find. We have another one in review now that is even more shocking to the consensus. I can send you a copy privately. It shows that our predictive skill is limited to small classes of flows and uncertainty rapidly increases once you widen the field of cases considered.

    Two primary atmospheric dynamics are vortex dynamics and convection. Vortex dynamics is exactly the same at all scales above the turbulent scales, which are tiny and irrelevant to this discussion. So CFD experience is directly applicable. Both vortex dynamics and convection are ill-posed problems, and regularization is required by adding non-physical dissipation. There is no such thing as an “accurate” model of a simple vortex street at any non-trivial Reynolds number. The atmosphere is very turbulent but far more complex than simple vortical flows or convection. I have never understood the idea that detailed predictive modeling was even possible on long time scales. The rationale for this was summarized by Climate of Doom: “Every time I run the model I get a reasonable looking climate.” That’s a very weak argument.

    Last week I had a detailed discussion with one of the world’s top turbulence modelers who said that overconfidence in CFD is rampant and that there are a lot of “consensus” positions about it that are dangerous. He closed by saying “I could list all the categories of idiots who need to be reigned in.” The science here is well known, but the users of the models are often unaware of the large uncertainties.

    • k scott denison

      David, thank you for this comment. It is reassuring to see a researcher in the field who is poking at what many of us amateurs who have backgrounds in fluid dynamics and modeling (but aren’t active in those fields today) believe strongly to be the concern with GCMs – the incredible confidence instilled in them by their creators. The works you cite clearly tell a more honest story.

    • That is an excellent comment David. Were I a practicing climate scientist, I have no doubt that modeling would be my focus. The whole practice is interesting from a mathematical and computer standpoint.

      I think it is important to mention that there are endless cases of models not representing local expectations, certainly enough to invalidate many regional model based conclusions in print. The practice itself of modeling is worthwhile but require a generally more objective and skeptical eye than has been taken in the field for interpretation.

  5. Models have consistently underestimated the rate of Arctic sea-ice loss and polar warming. This looks like the area where they need to make the most improvement.

  6. Willard,

    Even though I am suspicious you may be playing climate ball here because the articles are very easy to find with just a little bit of effort based on the information I gave at ATTP, here is information that will enable you to very easily locate them. Some are behind paywall but you can afford the small fees involved.

    1. The Aeronautical Journal, July 2002, lead article on Turbulence.
    2. AIAA Journal, papers to appear, Kamenetskiy et al., “Numerical Evidence of Multiple Solutions for the Reynolds-Averaged Navier-Stokes Equations for High-Lift Configurations.”
    3. AIAA Journal, papers to appear, Young et al., “Implementation of a Separated Flow Capability in TRANAIR.”

    There is one more to appear on design optimization and ill-posedness that is in review and another by Kamenetskiy on increments and fixed grids vs. adaptivity for RANS that is really fine work that will come out in January.

    The track record is that people in the climate blog wars say they want to access the papers and then never do. This is an example of bad faith I think, and it’s common among the peanut galleries of activists and non-scientists.

    Seriously, though, if you do read them, constructive negative feedback is always welcome.

    I would also urge you to think a little about the way climate ball is played both here and at ATTP and its poisonous effect on scientific communication. I have complained to Judith both here many times and in email about the serial name calling. Sometimes I wonder if you are a help or a hindrance in this goal.

    • @DY: The track record is that people in the climate blog wars say they want to access the papers and then never do. This is an example of bad faith I think, and it’s common among the peanut galleries of activists and non-scientists.

      A simpler explanation is that neither of the first two articles you mention (2002 and “to appear”) offer the reader any clue as to their relevance to climate, while the third (also “to appear”) is unknown to Google other than via your mention of it.

      Unless the third explains its relevance to climate it will plausibly suffer the same neglect you claim for the first two.

      • Yes I know Vaughan, if a tree falls in the forest. I thought however in my first comment I explained the relevance to climate. People like to claim “but my problem is different and so your information is irrelevant.” In this case that’s not true because atmospheric flow phenomena are in fact the same as those studied in fluid dynamics, vortex dynamics and convection.

      • Apologies, I overlooked your first comment.

        That said, I still fail to see the connection between the 2002 article and multidecadal climate. The turbulence time scales of that article would appear to be such that averaging any plausible solution of the relevant equations, parameters, and boundary conditions over any ten-year period would surely forecast essentially zero change in climate with high probability.

        While averages over periods of an hour or conceivably even a year may be unknowable for the cited reasons, how does that imply unknowability over much longer periods? In particular why would you expect averages of any fixed-size system over longer periods to tend to anything but zero?

      • I noticed that the original skyyydragon ClaesJohnson is an expert on these flow boundary layer problems. He apparently won this year’s upcoming Prandtl medal for outstanding work in CFD.

        What does this prove? That someone can be smart in an area but extends himself too much outside that area and fails? That someone says that statistical physics is unphysical and that Planck’s Law and Stefan-Boltzmann are unproven is taken seriously?

        In Young’s world does ClaesJohnson need to be “reigned in”?

      • Johnson himself comments on his blog that “The fluid dynamics community will not applaud the award, since 20th century fluid mechanics has followed the Father in search of the origin of separation, drag and lift in the boundary layer.”

        I read this to mean that so far there is no consensus in the relevant community that Johnson is correct.

        It will therefore be interesting to hear from the Prandtl Medal committee as to what exactly they consider Johnson to have accomplished.

        Johnson uploaded a video to YouTube,

        in which Karl Popper objects to Prandtl’s boundary layer theory of lift on the ground that it cannot be falsified, saying “The theory of flight developed by the fathers of modern aerodynamics Kutta, Zhukovsky and Prandtl postulates that the lift of a wing is generated by a singularity formed by the sharp trailing edge of an airfoil and that drag originates from a vanishingly thin boundary layer. This theory cannot be falsified because the singularity is unknown and the boundary layer is too thin for observation and thus both are hidden to inspection. My conclusion is that the classical theory of flight by Kutta-Zhukovsky-Prandtl is an example of pseudo-science.”

        Presumably Johnson does not feel that his own alternative is subject to the same criticism. However I don’t see why not, since the complete absence of a boundary layer in Johnson’s theory is surely even less observable than the very thin boundary layer Popper complains about. I therefore don’t see why Johnson’s theory is not equally or more so pseudo-science when judged by the same criteria Johnson is willing to apply to Prandtl’s version.

      • David Young

        Johnson has done some outside-the-consensus work that will either be profound or profoundly wrong. I’ve done some inviscid stuff that shows he may be onto something. It’s too early to tell.

      • Web – that’s “reined” as in horse’s reins. Reigned is what monarchs and emperors do.

      • Jeremy, I put “reigned” in quotes because that is what Young spelled out to describe actions to be taken.

        But I guess you knew that, eh? If you didn’t next time try reading harder.

  7. The other day I was taking a shower while the sun was descending in the western sky providing through a large window a wonderful back light for all the billions of tiny water droplets generated by the hot shower water. The bathroom is large — the shower is walk-in and there are no curtains, doors, interior walls or other obstructions so the droplets have a lot of room to move around owing to Brownian Motion (just kidding). There is a small draft from the falling water mostly but some air comes in through the bottom of the room door as well so the droplets move to and fro and with the strong summer sun behind them it is quite a wonderful spectacle of moving lights following the jets and eddies of slow moving air in the room.

    For a moment it occurred to me that this cloud could be thought of, in a limited way, as a sort of mini atmosphere. I wondered what would happen if I blew a gentle little puff of CO2 toward the center of the ‘cloud’, so I did. Once, twice and more. I even waved my extended arm through the cloud in a successful attempt to replicate a vortex.

    What I soon came to conclude was that no computer model now or in the future would ever be able to predict the vector of even one of the floating, rising, falling, combining, evaporating droplets, much less the entire system of them. The more I exhaled and waved my arm, the more complex the movements and the more sure I was that pre-mapping their collective motions would be impossible. This in a ten-by-twelve-foot room.

    Can a GCM do better?

    • catweazle666

      Can a GCM do better?

      No.

    • Danley Wolfe

      Meng, did you get wet while doing this? Look first at macro data and observations. In re predicting the average vector of the moving droplets, you are describing Wilson’s renormalization approach to solving the statistical mechanics involved in critical phenomena.

    • One of my jobs involved understanding fluid jets the size of a human hair. With all the arrogance of my scientific mindset I set out simulating this jet to understand how to better build our device. The result? Epic fail. Simulation could NOT provide the answers we needed for this jet the width of a human hair and the length of a toothpick.
      i.e. Your point is an important one.

  8. Regarding the so-called “pause”. If you take the trend from 1970-2014, you get 0.161 C per decade. If you take 1970-2000, you get 0.168 C per decade. This means that the last 15 years have not altered the long-term trend, as seen here.
    http://www.woodfortrees.org/plot/hadcrut4gl/from:1970/trend/plot/hadcrut4gl/from:1970/to:2000/trend/plot/hadcrut4gl/from:1970/mean:12
    How can a pause not alter the long-term trend that ends with it? Point to ponder. I can see why, but you need to think about it. Judging GCMs by the lack of a “pause” looks increasingly like a wrong criterion given that the pause had no effect on the long-term trend.

    • Time For An Ob

      Well, you’re right, the long, thirty-plus-year trend is the significant one.

      But there are an infinite number of ways of looking at temperature trends:
      http://www.woodfortrees.org/plot/hadcrut4gl/from:2001/trend:2014/plot/hadcrut4gl/from:1970/to:2001/trend/plot/hadcrut4gl/from:1970/mean:12

      What happens with the trend going forward? We have to wait and see.

    • thisisnotgoodtogo

      Jim D, cherry picking as ever.

    • @jim d

      How can a pause not alter the long-term trend that ends with it?

      Walk up a steep hill. Stop at the top. Plot your delta-height vs time graph with origin t=0, delta-h=0. It takes a long time at the top to alter the long-term trend significantly. But your height is undoubtedly static. Simples.

      Note that if you throw a ball in the air and watch it come down, its long-term trend in height is always positive, even as it plummets back to earth for half its path. And if we set our origin back to the creation of the Earth, about 4.5 billion years ago, the long-term trend is cooling.

      Long-term trends can be misleading. Don’t be so misled by the numbers and mathematical quirks that you ignore your physical insight.

      • Long-term trends can be misleading. Don’t be so misled by the numbers and mathematical quirks that you ignore your physical insight.

        This is true but works just as well if you replace “long” with “short”.

    • @jim d

      Or in other (less polite) words….*look* at and understand your frigging data first!

      • Or, as Professor Evert Hoek says:

        Turn on your brain before you turn on your computer

    • The reason that the last 30 years has the same trend as the last 44 years, despite more than half of it being in the so-called “pause” is that just before the “pause”, there was a step, and the “pause” is merely a correction to a fast rise, or bubble, as they would put it in stock-market terms. The bubble was basically the 90’s.

      • @jim d

        Thanks Jim. You agree that it’s a pause. Point established.

        Now remind me of all the scientists (or even yourself) back in the ’90s who were saying:

        ‘Whoa… all this warming is far too fast; it may all come crashing down with a pause. We must be very careful how we interpret it. Let’s keep our powder dry and be cautious until we see how it pans out.’

        Yep. I can’t remember that either. If there were any such brave souls, they were drowned out by the choruses of ‘it’s worse than we thought’, ‘send lots more money for research’, and ‘do something big and dramatic to appease Gaia, even if it’s really, really stupid’.

        And any sociologists among us might wonder just how much of the 1990s ‘bubble’ you describe was real, and how much man-made by climos in the interests of continuing the vast expansion of their field at that time. Maybe climos and bankers aren’t really that different under the skin…..

      • A step and pause in that order. It averages to no effect on the long term upward trend, as I showed. We expect decadal steps and pauses. They happened in the 90’s and 80’s too. Never downwards, just steps and pauses. Each step makes the next decade the warmest on record. That’s just the way it is.
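        The step-then-pause arithmetic is easy to check on a toy series. The sketch below uses invented numbers (not HadCRUT4): a steady 0.016 C/yr rise to 1997, a 0.17 C step in 1998, then a flat stretch, yields nearly the same least-squares trend over the whole period as over the rising part alone.

```python
# A toy check of the "step then pause" argument; invented numbers, not real
# temperature data. Build a series that rises at 0.016 C/yr to 1997, jumps
# 0.17 C in 1998, then stays flat, and compare OLS trends over sub-periods.

def ols_slope(xs, ys):
    """Ordinary-least-squares slope of ys against xs (C per year here)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(1970, 2015))
series = [0.016 * (y - 1970) if y <= 1997 else 0.016 * 27 + 0.17
          for y in years]

rise = ols_slope(years[:28], series[:28])   # 1970-1997: exactly 0.016 C/yr
pause = ols_slope(years[28:], series[28:])  # 1998-2014: exactly flat
full = ols_slope(years, series)             # 1970-2014: back near 0.016 C/yr
```

With a smaller step the full-period trend falls below the pre-step trend, and with a larger one it rises above it: that is the sense in which a step of the right size "averages out" the pause that follows it.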

      • @jim d

        ‘We expect decadal steps and pauses’

        No doubt you can show me numerous references going back to the 1970s that confirm that this has indeed been the expectation and was widely predicted. They’ll gel nicely with the guys who were saying that we need to be careful not to get overhysterical about the rapid rises.

        You see, Jim, outside climatology, we real-world folks have this annoying habit of judging the success of predictions by making them beforehand, not with the benefit of hindsight and the rear-view mirror. Time’s winged arrow and all that.

        But just to show no hard feelings, here are Mystic Lattie’s predictions for yesterday’s racing. Get on quick if you can find an ante-race bookie to take your bet.

        Bath 18:10 5,7,1 18:40, 8,7,1 19:10 11,9,6

        Any jobs going as a climo? With a 100% success rate at postdiction, surely I’m a natch.

      • The trend from 1970 is declining, but at a progressively slower rate. IMO, what is missing in most of the graphs is an acknowledgement that the highest trend was reached at 2010.42: .17C per decade. So for the majority of the “pause” the long-term trend increased.

        After 2010.42, the anomaly cascaded downward to the 2nd-strongest La Nina event in the instrument record, and the long-term trend took big hits from 1970 to 2011 and from 1970 to 2012. After that the slowdown slowed way down.
        Through May, the trend is almost equal to Jim’s 1970 to 2000.
        From 1970:

        1970 to 2000 = 0.0162602 per year
        1970 to 2010.42 = 0.0172926 per year
        1970 to 2011 = 0.0171895 per year
        1970 to 2012 = 0.0168074 per year
        1970 to 2013 = 0.0165065 per year
        1970 to 2014 = 0.0165065 per year
        1970 to 2014.42 = 0.0162533 per year

        versus

        1999.33 to 2014.42

        1999.33 to 2010.42 = 0.016644 per year
        1999.33 to 2011 = 0.0146297 per year
        1999.33 to 2012 = 0.0108319 per year
        1999.33 to 2013 = 0.00968386 per year
        1999.33 to 2014 = 0.00901231 per year
        1999.33 to 2014.42 = 0.00928177 per year

        1999.33 to 2010.42, a majority of the “pause”, saw a trend above the 1970-to-2000 trend: same as above. Same pattern: a big cascade into the 2nd-strongest La Nina in the record, and then the trend reduction slowed. And this time it actually reverses, 1999.33 through May 2014.

        The “pause” is not 17 years, or even remotely close. The “pause” is the back-to-back La Nina episodes in 2011 and 2012, and it’s on the cusp of being reversed. The long-term trend zenith of .17C per decade should be reached in 2014.

        That’s my definition of the end of the pause: 1970 to 2014.whenever reaching .17C. .17C = paws up: dead.

      • Don Monfort

        The CO2 that has been relentlessly added to the atmosphere, since the 90’s, was supposed to make the bubble get bigger and nastier, jimmy dee. It hasn’t. Everybody knows it. The pause is killing the cause.

      • The “pause” is dying, which means it is your cause that is dying. We will likely be back to the highest rate, .17C per decade, this year. The public is not going to like the deception of “global warming has stopped” one bit.

      • @JCH: We will likely be back to the highest rate, .17C per decade, this year.

        Are you referring to this year in isolation or from 1970 to the end of this year?

        For the latter, .17C per decade would only be possible if the average anomaly for this year were at least 0.85. Since the highest annualized anomaly since 1970 was 0.547 in 2010, followed by 0.407, 0.450, and 0.488 for 2011-2013, that’s a pretty tall order for 2014, especially given that the mean of the first four months of 2014 is 0.499.

        For the former it may be worth noting that the trend over the past 36 months has been 0.39C per decade (click on Raw Data at the bottom).

      • I get .66C for GISS LOTI.

        The current trend from 1970 is:

        #Least squares trend line; slope = 0.0162533 per year

        And that is with an ONI in 2014 of -0.6, -0.6, -0.5, -0.2. That’s La Nina lite.

        It looks like ONI will go positive this summer, and the PDO is positive. The AMO just headed up.

      • I mostly agree with what you’re saying. A step followed by a pause will maintain the trend. Eyeballing Kyle Swanson’s first diagram:
        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/
        the trend could be argued to have increased at the beginning of the pause. At the link Swanson also states, “We hypothesize that the established pre-1998 trend is the true forced warming signal, and that the climate system…”
        Looks to be 1.0 C or a bit more per 100 years.
        What is sometimes omitted when mentioning the pause is the step that preceded it. Perhaps the record temperatures were caused by the step as much as by the CO2.

    • Congrats Jim D, you’ve successfully disrupted another Climate etc thread with irrelevant, off-topic and pointless bickering about temperature trends.

      • Paul Matthews | June 26, 2014 at 4:16 am:
        Congrats Jim D, you’ve successfully disrupted another Climate etc thread with irrelevant, off-topic and pointless bickering about temperature trends.

        All this talk about temperature going up and temperature going down without one mention of CO2. One might actually get the idea that temperature is at best loosely coupled to CO2. It is often as important what people don’t say as it is what they do say. Focus on the shiny watch swinging back and forth, back and forth, back and forth……

      • Quite. And is the trend cooling when we go to thousands of years rather than the piffling amount of decades?

    • Paul Matthews, I only raised it because of the “two decades” comment in the main post. Otherwise I would have thought the seeming pause irrelevant to GCM discussions too. It wasn’t me who raised the “pause” first; my comment was a response to its mention. Some want to throw away GCMs because they perceive a variation that they expected GCMs to show. I say this is misguided thinking, to say the least. The pause has had absolutely no effect on the 30-year climate trend, and you have to wonder why that is, and what properties of a pause lead to that.

    • Matthew R Marler

      Jim D: How can a pause not alter the long-term trend that ends with it? Point to ponder. I can see why, but you need to think about it. Judging GCMs by the lack of a “pause” looks increasingly like a wrong criterion given that the pause had no effect on the long-term trend.

      There isn’t a “fool-proof” way to choose epochs for these comparisons. Three alternatives to those you chose:

      a. Based on the work of Santer et al, start at the last month of global mean temperature data, and work backward in 17 year epochs.

      b. Fit a piecewise linear model to data using one of the “breakpoint finding” algorithms to get sections that are statistically distinguishable. Bob Tisdale’s work displays a piecewise flat curve with distinct increments over very short intervals, for example.

      c. Take the date of a model run (or other kind of prediction), and look at the subsequent months to assess model fit. If the model fits the prior data but not the subsequent data, then there has been a “regime change” or some such.

      Each of these alternatives has been discussed at length in some past posts. It looks like the warmists are most concerned by the results from (a) and (c). The “breakpoint finding” analyses suffer from the public relations problem that they do not all identify the same breakpoints.

      Looking at nearly equal slopes from different eras, it seems that the distinctive epochs of warming since the LIA have nearly the same slope, which to some people suggests that the extra CO2 since WWII has next to zero effect. Because no one has a demonstrated accurate and complete model of “background” (CO2-independent) variation, there are elements of arbitrariness (aka “judgment”) in assigning weights to the varieties of results.
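      Option (b) can be sketched in a few lines. The following brute-force single-breakpoint search is purely illustrative, not any published algorithm: it fits two separate OLS lines on either side of each candidate break and keeps the split with the lowest total squared error.

```python
# Minimal single-breakpoint search (illustrative sketch, toy data).

def sse_of_line(xs, ys):
    """Sum of squared residuals from the OLS line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))

def best_break(xs, ys, min_seg=5):
    """Return the x-value where the best-fitting second segment starts."""
    best = None
    for k in range(min_seg, len(xs) - min_seg):
        sse = sse_of_line(xs[:k], ys[:k]) + sse_of_line(xs[k:], ys[k:])
        if best is None or sse < best[1]:
            best = (xs[k], sse)
    return best[0]

# Toy data: a rising line that jumps to a flat regime at x = 30.
xs = list(range(60))
ys = [0.02 * x if x < 30 else 0.7 for x in xs]
print(best_break(xs, ys))  # finds the break at 30
```

Real breakpoint methods add penalties for extra breaks and significance tests for whether two segments are statistically distinguishable; this sketch only shows why different penalty and significance choices can land different methods on different breakpoints.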

    • Steven Mosher

      Jim,

      you are trying to understand the “climate”, or actually one small metric of the climate, by applying statistics to data.

      Whenever you apply statistics to data you are using a model. In this case you have applied a linear model to the data. It is that linear model that contains the trend, not the data.

      In other terms, you have assumed that the underlying data-generating model is linear: temperature is a function of time. We most certainly know that this model is wrong.

      Applying a linear model to temperature is useful in some contexts, but it can also be misleading, as several assumptions and decisions are required: namely, when to start and stop the analysis.

      A different approach to understanding the data is to build a physical model to generate the data. This approach also has problems.

      • Steve, I made this point in one of my first posts here, back when the Earth appeared to be warming, and I was ridiculed.

      • Steven Mosher, I would prefer not to talk about trends, but the whole talk about the pause is on the subject of a trend. I could ignore it, but I chose not to. If they don’t see that their pause begins with a discontinuity from the past record (a step), I point it out. This is very different from a pause that is an actual gradual slowdown of a trend, and continuous with it, because this one just disappears in a long-term average, as I showed. It is an illusion.

      • You can’t fight illusions with illusions

    • Kyle Swanson, at the link above, references this paper:
      http://journals.ametsoc.org/doi/full/10.1175/1087-3562%282000%29004%3C0001%3AITEASI%3E2.3.CO%3B2
      where Michael Mann is in the ballpark of regime changes, hinting at the pause back in January 2000.

  9. The Stephan Harrison and David Stainforth link was short and understandable. It seems that there are limits to reductionism, and I sometimes don’t understand why this seems to be overlooked. Reductionism is good for a lot of things but can fail as you keep adding decimal places and find you may need 1000 times more computing power. Their essay points to the next logical step for an accountant, and that’s emergent behavior of the system, which seems a tractable approach. My guess is that the PDO and ENSO are emergent behaviors of the system, and that emergent behaviors on many scales are found all over the system. Maybe this graph implies the system is influenced by emergent behavior:

    I like this:
    The DMA methodology involves perturbing the complex and usually nonlinear, physically based model about some defined operating point, using a sufficiently exciting signal, i.e., one that will unambiguously reveal all the dominant modes of behaviour. 

    What I am reading is an attractor with regime changes such as changes in the PDO.

  10. David L. Hagen

    Stochastic models
    Koutsoyiannis et al. also strongly recommend ditching deterministic models for stochastic models. e.g.
    Anagnostopoulos, G. G. , Koutsoyiannis, D. , Christofides, A. , Efstratiadis, A. and Mamassis, N. ‘A comparison of local and aggregated climate model outputs with observed data’, Hydrological Sciences Journal, 55:7, 1094 – 1110

    The fact that climate has many orders of magnitude more degrees of freedom certainly perplexes the situation further, but in the end it may be irrelevant; for, in the end, we do not have a predictable system hidden behind many layers of uncertainty which could be removed to some extent, but, rather, we have a system that is uncertain at its heart.

    Do we have something better than GCMs when it comes to establishing policies for the future? Our answer is yes: we have stochastic approaches, and what is needed is a paradigm shift. We need to recognize the fact that the uncertainty is intrinsic, and shift our attention from reducing the uncertainty towards quantifying the uncertainty (see also Koutsoyiannis et al., 2009a). Obviously, in such a paradigm shift, stochastic descriptions of hydroclimatic processes should incorporate what is known about the driving physical mechanisms of the processes. Despite a common misconception of stochastics as black-box approaches whose blind use of data disregard the system dynamics, several celebrated examples, including statistical thermophysics and the modelling of turbulence, emphasize the opposite, i.e. the fact that stochastics is an indispensable, advanced and powerful part of physics. Other simpler examples (e.g. Koutsoyiannis, 2010) indicate how known deterministic dynamics can be fully incorporated in a stochastic framework and reconciled with the unavoidable emergence of uncertainty in predictions.
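    A minimal sketch of the paradigm shift described here: instead of one deterministic trajectory, run an ensemble of stochastic trajectories and report the spread. The AR(1) process, persistence parameter and noise scale below are arbitrary illustrations, not fitted to any climate record (Koutsoyiannis’s own work uses Hurst-Kolmogorov statistics, not simple AR(1)).

```python
# Toy ensemble illustration: quantify uncertainty rather than pretend to
# remove it. Parameters are invented for demonstration only.
import random

def ar1_path(n, phi=0.9, sigma=0.1, seed=None):
    """One AR(1) trajectory: x[t] = phi * x[t-1] + Gaussian noise."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

# 1000 ensemble members, 50 steps each; quantify spread at the final step.
finals = sorted(ar1_path(50, seed=i)[-1] for i in range(1000))
lo, hi = finals[25], finals[974]  # empirical 95% interval
print(f"95% of members end between {lo:.2f} and {hi:.2f}")
```

The deliverable is the interval, not a single trajectory: the stochastic framing makes the irreducible spread the headline result instead of an afterthought.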


  11. ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ James McWilliams

    • Mathematicians, engineers and professional modellers understand this. Why do the physics guys not get it? Perhaps they don’t want to suffer the disappointment of realizing their babies (the GCMs) are less powerful than they think?
      More likely just the lack of verifiability and accountability inherent in the climate problem.

  12. Phyllograptus

    As a petroleum geologist I work with numerical simulation models fairly regularly. As I work in an environment that is market-dominated and competitive, the models that are developed to support it are competitive, i.e. they are developed mostly independently and “compete” against one another to give the “best” history match and therefore the best predictions going forward.

    From everything I have read regarding the GCMs, they are not developed in a competitive mode, with the concept of “market forces” attempting to gain the competitive advantage and upper hand. Rather, they seem to have been developed with a construct of attempting to match each other in coming up with an “average” solution that is agreeable to most involved parties. Reservoir simulation numerical models are constructed to be “sold” to the customer to help them maximize profit, and therefore they are constantly revisited, improved and compared against other models to demonstrate how they represent the world and the data better than other commercial models.

    The premise of GCMs seems to be to attempt to come to some common “solution”. The aspect of competition between GCM models seems to be downplayed; rather, models should be developed to achieve a fairly consistent result between themselves. As most climate models are developed by academia, the competition to be the best at predicting does not appear to be the major focus; rather, uniformity of results is the target. Commercial petroleum reservoir simulation models strive for non-uniformity, i.e. my model is better than theirs because I can do this better, therefore using mine will make you more money. In the climate modeling world, IMO, it appears to be: my model is better because it comes closer to what we all want to demonstrate, closer to an average result. The problem is that this relies on knowing (or desiring) what the average result should be ahead of time, and then just using the model to confirm this knowledge and desire.

    • Excellent comment. Thank you.

      +1

    • David L. Hagen

      Phyllograptus
      To fix this problem, Ross McKitrick proposes a T3 Tax which would

      calibrate a carbon tax to the average temperature of the region of the atmosphere predicted by climatologists to be most sensitive to CO2.

      David Henderson finds:

      A clear advantage of this proposal is that the tax would then depend on actual evidence of the extent of global warming. Further, as McKitrick observes, one could think in terms of a futures market ‘in which firms could buy contracts to cover the per-tonne emissions cost of the tax’: such a market would ‘force investors to make the best possible use of information about the future, and to press for improvements in climate forecasting’.

      • It would probably work a lot better if the modeler’s salary correlated with the accuracy of the model.

      • David L. Hagen

        McKitrick observes:

        But choosing the tropical troposphere as our metric means we are using the atmosphere’s own ‘leading indicator’ – the warming there is supposed to be earlier and stronger than warming at the surface. And, more crucially, the policy is forward-looking because investors are forward-looking. . . . The market will force investors to make the best possible use of information about the future, and to press for improvements in climate forecasting in the process. So the policy is the most forward-looking one we can possibly implement. . . .
        it would only require one country to implement it, for everyone else to benefit from the emergence of the market for tax futures that would reveal optimal forecasts of global warming. And any country that implements it, even on its own, gets the benefit of knowing that it is pursuing the right policy path, even without knowing in advance what the path looks like.

      • Higher CO2 is a boon to the biome, sustaining more total life and more diversity of life. Same goes for warming.

        Whatever effect man has had, has, or can have is a blessing.
        =======================

    • David L. Hagen

      kim
      For details see
      Figure Is CO2 Plant Food
      And Report: The Many Benefits of CO2

  13. “[T]he reductionist argument that large scale behaviour can be represented by the aggregative effects of smaller scale process has never been validated in the context of natural environmental systems and is even difficult to justify when modelling complex manmade processes in engineering.”

    Yes, if a physical process is considered important enough for inclusion in the model, then it is important enough to be properly validated. Ultimately there is a random element in climate and this needs to be in the model, but its statistical properties (distribution, SD, etc.) need to be known and validated. Usually it is sufficient to add these random properties after all other processes have been validated. You then have a model whose predictions you can trust for the long term when higher frequencies are filtered out, just as they are in the real-world data of the past.

    PS: I am sorry to have missed so much at a critical time for the blog; I suffered a major crash of my computer system last week and now have to learn the intricacies of the iMac after long Windows experience.

  14. Don’t see huge problems for these GCMs and the extensions mentioned above … Save for one key counterproductive strangle-to-death constraint.

    Everyone is way too keen to prove and or corroborate the AGW agenda.

  15. I gave at ATTP, here is information that will enable you to very easily locate them.

  16. Oh dear…
    “…interactive carbon cyclone, ”
    sounds interesting :D

  17. Judith asks:

    Rarely are the following questions asked: Is the approach that we are taking to climate modeling adequate? Could other model structural forms be more useful for advancing climate science and informing policy?

    I suggest the questions need to be asked by the economists and policy analysts, not the scientists. The scientists need to provide the information needed by policy analysts if they want “action”, which means policy.

    Questions the IAMs (and improved IAMs), and robust decision making, need answers to are the following (pdfs needed for each):
    • Time to the next abrupt climate change
    • Direction of the next abrupt climate change (i.e. warming or cooling)
    • Magnitude of the abrupt change
    • Duration of the abrupt change and distribution of the rate of change over time
    • Impacts of abrupt changes by rate of change, by magnitude of change and by region
    • Economic costs and benefits of abrupt changes by rate of change, by magnitude of change and by region.

    Those are the questions I want to see (believable) answers to.

    • I would simplify these six questions to one: “Can models that can’t explain the past reliably predict the future?”

      Based on HadCRUT4, multidecadal climate for 1850-2010 is quite accurately modeled as a sum of two oscillations of respective periods 20 and 60 years and the log of the observed CO2 level over that period, using ice core data for pre-1958 CO2 and Mauna Loa data thereafter, as per slide 25 (of 35) of my AGU Fall Meeting talk in December.

      While essentially every GCM incorporates the CO2 contribution, I’m unaware of any GCM that accounts convincingly for either of these two oscillations. Anyone with a convincing explanation for either one surely deserves AGU’s counterpart of the Nobel Prize.
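      Since the two periods are fixed at 20 and 60 years, a model of this form is linear in its coefficients and can be fit by ordinary least squares. The sketch below does this on synthetic data, with invented coefficients and an invented CO2 curve; it illustrates the model form only, and is not a reproduction of the slide or of HadCRUT4.

```python
# Sketch of fitting two fixed-period oscillations plus log(CO2) by ordinary
# least squares. All "observations" below are synthetic and invented.
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def basis(year, co2_ppmv):
    """Regressors: constant, 20-yr and 60-yr sinusoids, and log CO2."""
    w20, w60 = 2 * math.pi / 20, 2 * math.pi / 60
    return [1.0,
            math.sin(w20 * year), math.cos(w20 * year),
            math.sin(w60 * year), math.cos(w60 * year),
            math.log(co2_ppmv / 280.0)]

years = list(range(1850, 2011))
co2 = [290.0 + 0.0045 * (y - 1850) ** 2 for y in years]  # invented CO2 curve
true = [0.1, 0.05, 0.02, 0.08, -0.03, 2.9]               # invented coefficients

rows = [basis(y, c) for y, c in zip(years, co2)]
obs = [sum(t * r for t, r in zip(true, row)) for row in rows]

# Normal equations: (X^T X) beta = X^T y
k = len(true)
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * o for r, o in zip(rows, obs)) for i in range(k)]
beta = solve(XtX, Xty)  # recovers `true` on this noise-free toy data
```

The point of the sketch is that such a decomposition is a curve fit, not an explanation: it recovers the coefficients, but says nothing about what physical mechanism produces the two oscillations.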

      • Yet a vast number of peer-reviewed papers forecasting a dire future for humanity were based on the outputs of these unreliable models.

        How does a reasonable person justify implementing government policy based upon the outputs of unreliable models???

      • and in much of the paleo data there are indications that the oscillations extend back to ~1700, the tropical ocean depths of the LIA, but the possibility of long term persistence is carefully avoided.

      • Put together Pratt’s model, Lovejoy’s model, the CSALT model and other similar ones and you start to build up a force that will need to be reckoned with. Remember that these historical models keep on getting better as new data comes in.

      • How does your model go at reproducing the abrupt changes shown in Figure 15.21, p391, here:
        http://eprints.nuim.ie/1983/1/McCarron.pdf

        Rapid change from ice age to warm conditions in just 7 years and just 9 years!

      • I should also have pointed out that the chart shows the climate is highly variable when cold and much less variable when warmer. Another piece of empirical evidence demonstrating that warmer is better!

      • Vaughan, how did you get your delay of 25 years?
        The inflection point in atmospheric CO2 at 1960 imposes a shift in line shape on the rate of warming.
        I couldn’t get a delay of > 5 years to be a better fit than a delay of 0; this means the t1/2 for a first-order delay must be 2-3 years at most.

        Other than that, it is very close to the 1.8-2.2 guesstimate I get from graphology.

      • @PL: How does your model go at reproducing the abrupt changes shown in Figure 15.21,

        Not at all. But why should any model of contemporary climate need to do so? Those three changes occurred more than a hundred centuries ago as part of coming out of an ice age. We’re not currently in an ice age and are unlikely to enter one any time soon, let alone leave one.

        I should also have pointed out the chart shows the climate is highly variable when cold and much less variable when warmer.

        The chart shows a cold period from 16 ka to 14.5 ka (tail end of the last glaciation), a transitional period from 14.5 ka to 11.5 ka (the Late-glacial), and a warm period since 11.5 ka (the Holocene). The high variability you’re pointing out seems to have occurred largely during the transitional period. Would you call the cold period highly variable? To my eye it looks flatter than the warm period.

      • @DM: Vaughan, how did you get your delay of 25 years?

        I answered this further down (my mistake, probably).

    • If the scientists gave answers such as “there are many theories but we don’t know”, what would you do? Wouldn’t it then be good to work with them on reducing uncertainty?

      • Yes. But the point is that the focus needs to be on the information that is relevant for policy analysis, not the research that climate researchers are interested in.

      • Peter: Agree. The decision maker must have the talents of a good executive; gathering the best people and leading effective discussions.

    • @DM: The case d = 25 on slides 31 and 32 was mainly an example of how to use the graph, with the extra point that if the delay actually was 25 years this would make the observed sensitivity s = 0.047 × 25 ≈ 1.2 °C/doubling less than what would have been observed in the absence of heat sinks such as the ocean. All that slide 31 claims is a relationship between d and s, not any particular value of either.

      To the extent that x is a good approximation to ln(1 + x), and assuming x grows exponentially with time, it is impossible to estimate d by fitting because an additive change in d yields a multiplicative change in x and hence in ln(1 + x). In 1960 CO2 was 315 ppmv, making x = 315/280 − 1 = 0.125. ln(1.125) = 0.118 or about 6% less than 0.125. With CO2 now hitting 400 ppmv, x = 400/280 − 1 = 0.429 and ln(1.429) = 0.357 or about 17% less than 0.429 so there is somewhat more hope of estimating d now by fitting than using data only up to 1960. This will further improve over the next half-century, but even then I don’t presently consider the fitting approach to estimating d particularly robust. Worth a closer look though.
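      The percentage gaps between x and ln(1 + x) quoted above are easy to verify (a quick arithmetic check, nothing more):

```python
# Verify the ln(1+x) vs x shortfall quoted above for 315 and 400 ppmv,
# with 280 ppmv taken as the pre-industrial baseline.
import math

def shortfall(ppmv, baseline=280.0):
    x = ppmv / baseline - 1.0
    return 1.0 - math.log(1.0 + x) / x  # fraction by which ln(1+x) trails x

print(f"315 ppmv: {shortfall(315):.1%}")  # about 6%
print(f"400 ppmv: {shortfall(400):.1%}")  # about 17%
```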

  18. There can be confusion over what randomness in dynamic climate models might entail. This reflects how randomness and probability are understood, possibly coloured by pop accounts of the difficulties of interpreting quantum mechanics. The way classical probability is set up means it can be viewed as determinism plus a lack of specific information about the deterministic processes involved. Uncertainty created by such lack of detail (e.g. undetermined variables) is then represented by probabilities. This description seems to fit climate modeling well. The use of random dynamics need not imply any deeper philosophical concept of ‘God playing dice’ with climate. That is a different concept of probability.

  19. Berényi Péter

    The scientific basis for inexact and stochastic computing is that the closure (or parametrisation) problem for weather and climate models is inherently stochastic.

    Well, it is in fact worse than that. Current GCMs rely on Reynolds-averaged Navier–Stokes equations, which are completely unphysical. In technological applications, when all the parameter space can be covered by experiments, one can attain closure to some extent, but it is not the case with the climate system, which is too large to be brought into the lab. It is not a proper remedy to this problem to treat residuals as stochastic variables either, because we lack devices to estimate their statistics. In 3D turbulent flows a large amount of energy seeps down to ever smaller scales, which process is not represented on coarse grids, because dissipation in the atmosphere only kicks in at submillimeter scales and there is no way to improve resolution of computational models by 20 orders of magnitude ever.

    We have some general results for reproducible quasi-stationary non-equilibrium thermodynamic systems of many coupled internal degrees of freedom: they tend to maximize the rate of entropy production, which, being a variational principle itself, is an additional constraint beyond energy, momentum and angular momentum conservation. Unfortunately the climate system is not reproducible, in the sense that no matter how macrostates are defined, microstates belonging to the same macrostate can evolve to different macrostates in a short time. This is an inherent property of chaotic systems; in climate it is also known as the “butterfly effect”.

    Irreproducibility in the climate system has enormous consequences, well beyond the fact it is difficult to compute the incomputable. Trouble is there’s not even a straightforward way to define Jaynes entropy for such systems, and it is foolish to discuss heat engines without going into details of their entropy processes. For the climate system is a heat engine for sure, feeding on the temperature difference between a small patch of sky at 5,778 K and 2.7 K for the rest of it.

    What we do know is that most of the entropy production in the climate system occurs when incoming shortwave radiation gets absorbed and thermalized. However, there is nothing like maximum entropy production in this process, because about 30% of sunlight gets rejected right away, with no chance to get into the system. It means a pitch-black Earth would generate entropy at a much higher rate than the actual one, but it is enough to have a look at the thing from the outside to see it is very far from being black. Also, we do know that terrestrial albedo is very accurately regulated by some internal emergent process, because annual average albedoes of the two hemispheres match almost perfectly, while their clear sky albedoes are very different, due to uneven distribution of land masses. Therefore it is cloud cover that does the job, which is not represented in current GCMs properly.
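    The scaling in the pitch-black-Earth remark can be sketched numerically. Below is a rough back-of-envelope illustration (my own numbers and simplifications, not from the comment above): treating entropy production as absorbed flux times the difference of inverse temperatures, a zero-albedo Earth produces entropy about 1.43 times faster than one with albedo 0.3.

```python
# Illustrative back-of-envelope sketch (assumed standard values, not from the
# comment above): entropy production from absorbing and thermalizing sunlight
# scales with the absorbed flux, so a pitch-black Earth (albedo 0) would
# produce entropy roughly 1/0.7 ~ 1.43 times faster than the real one (~0.3).
S0 = 1361.0      # W/m^2, solar constant
T_SUN = 5778.0   # K, effective solar temperature
T_EARTH = 288.0  # K, mean surface temperature

def entropy_production(albedo):
    """Entropy produced per unit area when sunlight is thermalized at T_EARTH."""
    absorbed = (1.0 - albedo) * S0 / 4.0             # W/m^2, global average
    return absorbed * (1.0 / T_EARTH - 1.0 / T_SUN)  # W/m^2/K

real = entropy_production(0.3)
black = entropy_production(0.0)
print(f"real Earth:  {real:.3f} W/m^2/K")
print(f"black Earth: {black:.3f} W/m^2/K ({black / real:.2f}x faster)")
```

    The ratio is independent of the temperatures chosen; only the albedo matters for the relative rate.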

    The only conceivable reason the terrestrial climate system should not obey MEPP (the Maximum Entropy Production Principle) is its irreproducibility, that is, its chaotic nature. However, there is no general theory of irreproducible quasi-stationary non-equilibrium thermodynamic systems whatsoever; this is one of the few remaining blind spots of (semi)classical physics. With no adequate theory to start with and no chance for experiments, there is little hope of making progress in climate science along these avenues.

    If we wanted to delve a bit deeper into the issues of climate models, the first thing to be clarified would be the greenhouse effect. Contrary to common belief it does not depend directly on well-mixed greenhouse gas concentrations, but on the average (thermal) infrared optical depth, or rather, on its relation to the shortwave optical depth of the atmosphere. True, increasing the mixing ratio of a specific GHG does increase IR optical depth in some narrow bands. But the elephant in the gift shop is water vapor, which is the major greenhouse gas in the atmosphere and is very far from well mixed. It has no chance to get to that point, because its average atmospheric lifetime is only 9 days before it gets precipitated out, far too short for turbulent mixing to do its job.

    Now, the average optical depth generated by a non-well-mixed GHG is only bounded from above by its average concentration; otherwise it is fully indeterminate. That is, with enough dry patches in the sky even a moist atmosphere can be fairly transparent in the water vapor absorption bands. Optical depth is not determined by the average concentration, but by higher moments of its distribution, which are neither modelled nor measured properly.
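    The indeterminacy claimed above can be illustrated with a toy calculation (a sketch with hypothetical optical depths, not a radiative transfer model): because Beer–Lambert transmission is convex in optical depth, two skies with identical mean water-vapor optical depth can transmit very different amounts of radiation.

```python
import math

# A minimal numerical sketch (hypothetical optical depths) of why average
# concentration does not pin down average transmission: Beer-Lambert
# transmission exp(-tau) is convex in tau, so by Jensen's inequality a patchy
# sky transmits more than a uniform sky with the same mean optical depth.
TAU_MEAN = 2.0

# Uniform sky: every column carries the mean water-vapor optical depth.
t_uniform = math.exp(-TAU_MEAN)

# Patchy sky: half the columns nearly dry (tau = 0.2), half moist (tau = 3.8).
# The mean optical depth is identical: 0.5 * 0.2 + 0.5 * 3.8 = 2.0.
t_patchy = 0.5 * math.exp(-0.2) + 0.5 * math.exp(-3.8)

print(f"uniform sky transmission: {t_uniform:.3f}")
print(f"patchy sky transmission:  {t_patchy:.3f}")
```

    With these invented numbers the patchy sky is roughly three times as transparent, despite having the same mean optical depth; only the higher moments of the distribution differ.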

    I still think our best hope at this point is to leave the climate system alone for a while and go back to physics. The terrestrial climate system is obviously too large, but there may be other members of this broad class (irreproducible quasi-stationary non-equilibrium thermodynamic systems with many nonlinearly coupled internal degrees of freedom) which are small enough to fit into a lab setup and can be run there as many times as we wish under controlled conditions. With sufficient experimentation we may be able to develop a successful conceptual and computational model of that system, verify it experimentally, and learn a lot along the road about basic physics.

    Only then, once we knew how entropy processes worked, what variational principles were valid, whether self-organized criticality played a role and what fluctuation theorem would hold under such conditions, would we be ready to return to climate. That’s my bet.

    • A fan of *MORE* discourse

      Berényi Péter claims [bizarrely, giving neither reason nor evidence] “The only conceivable reason the terrestrial climate system should not obey MEPP (Maximum Entropy Production Principle) is its irreproducibility, that is, its chaotic nature.”

      Climate Etc readers may wish to verify for themselves that the recent thermodynamical literature supplies abundant reasons for the MEPP to fail. For example:

      • Matteo Polettini’s Fact-Checking Ziegler’s Maximum Entropy Production Principle beyond the Linear Regime and towards Steady States (2013).

      • Leonid M. Martyushev and Vladimir D. Seleznev’s The restrictions of the Maximum Entropy Production Principle (2013).

      Both articles contain plenty of references to the earlier literature.

      Conclusion  The hopes expressed in the older literature, in regard to the broad applicability of the MEPP, have proved to be over-enthusiastic: the MEPP is *not* a law of nature, but is proving to be a general guideline that commonly is right, yet not uncommonly is wrong.

      In practice, MEPP is useful for generating models that are thermodynamically consistent, mathematically simple, and physically insightful … that however are *NOT* warranted to be quantitatively accurate.

      That’s why the track record of physics-grounded energy-balance climate-models is superior to the track record of MEPP-grounded climate-models.

      It is a pleasure to help broaden your appreciation of the recent mathematical and scientific literature relating to MEPP, Berényi Péter!


      • Berényi Péter

        Well, fan, it is not so bizarre. Read the paper below; you may even learn something new. Dewar’s derivation is only valid for reproducible systems, which do not include the climate system.

        Journal of Physics A: Mathematical and General Volume 36 Number 3
        2003 J. Phys. A: Math. Gen. 36 631
        doi: 10.1088/0305-4470/36/3/303
        Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states
        Roderick Dewar

        An equivalent, more information-based way of viewing the rationale for Jaynes’ formalism is to recognise that we are concerned with the prediction of reproducible macroscopic behaviour.

      • What I have read on MEPP suggests that it is not nearly as well defined in general as entropy itself. The boundaries (and perhaps also the structure) of the system can be defined in many ways, and how that’s done seems to affect the outcome essentially. The idea appears better suited to a rule of thumb than to real theoretical work.

        It’s plausible that MEPP will produce good results in many applications where its definition is not too arbitrary, but not in many others (unless the setup is adjusted a posteriori to give the right outcome).

        Applying MEPP successfully to climate or the Earth system as a whole does not look promising.

      • A fan of *MORE* discourse

        Pekka Pirilä opines “The idea [of MEPP] appears to have properties applicable for a rule of thumb rather than real theoretical work.”

        Your statement is entirely correct, Pekka Pirilä.

        As a concrete example, thermoelectric power conversion devices deliberately operate far from thermodynamic equilibrium. If the MEPP were strictly true, then all thermoelectric materials would relax similarly, and as a result all thermoelectric materials would have the same thermoelectric figure of merit.

        Which they don’t (needless to say)!

        Conclusion  Enthusiasm for the MEPP as a fundamental law of nature varies inversely with practical experience of it.

        Berényi Péter, it is a pleasure to provide these concrete examples to assist your physical understanding!


    • Berenyi, thanks for this comment. I have long thought that entropy is an underutilized piece of physics with regards to climate models.

    • @BP: For the climate system is a heat engine for sure,

      Certainly, but its efficiency is critical here. The mechanical impact (hurricanes etc.) of a 50% efficient heat engine is 50 times that of a 1% efficient one.

      What do you estimate as the efficiency of the climate heat engine?

      And how does that efficiency vary between land and sea?

      • Berényi Péter

        I don’t think it is very efficient in converting heat flow to mechanical energy.

        However, pure mechanical energy input to the oceans plays a critical role. Deep turbulent mixing is accomplished by internal waves breaking over rugged bottom features and continental margins of complicated geometry, at specific sites. These waves are generated by winds, mostly over the Southern Ocean, and by lunisolar tides, in roughly equal proportion. It is an important process, because this is what replenishes buoyancy at depth, keeping the MOC (Meridional Overturning Circulation) running.

        Contrary to popular belief, only the descending branch is driven by thermohaline density differences, and only as long as the abyss is not saturated with cold, dense salt water. At that point either another process kicks in to dilute it, or the circulation grinds to a halt: the heat conductivity of seawater is very low, the diffusivity of salts is some two orders of magnitude lower still, geothermal heating at the bottom is next to negligible, and vertical turbulent mixing in the open ocean is three orders of magnitude weaker than what’s required to maintain the MOC.

        Overall mechanical energy input is low, so its contribution to ocean heating is nil, but being a very low entropy input, its effect on oceans (and climate) is enormous.

      • Berényi Péter – question. Given that ocean water has a low viscosity and that the Earth turns, does the rotation cause ocean currents directly by imparting a force to the water at the bottom of the ocean, resulting in a velocity gradient from the bottom up? I hope I worded that well enough to make a modicum of sense. :)

      • Look up Carnot’s Theorem

      • The Pacific ocean shows a sloshing behavior. Unless you know how to mathematically describe the dynamics of this behavior, you are talking around the real issue.

        In other words, all talk and no analytical models from the denialist yappers. What else can we expect?

      • Even defining the efficiency of the atmospheric heat engine is difficult, let alone measuring or calculating it. To define the efficiency we must decide what counts as output mechanical energy and what as internal losses (dissipation) of the engine.

        The principal driving cycle is formed by the Hadley cells. The heat input is at the low-latitude surface, and the sink is in the parts of the troposphere that emit IR out of the troposphere. Hadley cells drive the jet streams, the Ferrel cells and much of the rest of the circulation, but where is the output mechanical energy measured?

        I have seen numbers of a few percent presented for the efficiency, but I don’t know what that’s supposed to mean.

      • Berenyi Peter:

        I agree that the climate system is far from an efficient heat engine. But, mysteriously enough, the MOC is brought into the discussion as “an important process” whose effect upon climate “is enormous.” How so? The poleward transport of heat by winds and currents is, after all, quite independent of it!

      • In Ocean Circulation, p.79, Rui Xin Huang estimates “the rate of work produced by the atmospheric heat engine” at about 2 W/m2. Based on a total heat flux through radiation of 238 W/m2 he infers an efficiency of 0.8%, corresponding to a Carnot efficiency of 33% (p. 151 op cit).

        Efficiency of the ocean heat engine is harder to estimate or even define, being complicated by several factors.

        1. The ocean has a considerable variety of mechanical energy components as listed in Fig. 3.19, p.203, op cit. Their balance is not well understood, making it hard to assess their contributions to the ocean heat engine’s output.

        2. Although geothermal energy is only a tiny fraction of insolation, it is a relatively large contributor to the kinetic and potential energy of the MOC. In particular a thermal vent can raise an abyssal parcel of seawater to the surface, resulting in a huge increase in that parcel’s gravitational potential energy, seawater being a thousand times denser than air. The efficiency of that particular heat engine will therefore be relatively large, having a small denominator.

        3. The MOC is driven to a considerable degree by thermohaline effects. However, the meridional (north-south) components of the (nominally) Meridional Overturning Current predominate at high latitudes, while at low latitudes the MOC travels westerly. Since the latter latitudes are where geostrophic balance (via the Coriolis effect) is strongest and acts westerly, some of the MOC’s mechanical energy must be of directly mechanical origin and thus not produced by any heat engine. Unfortunately this mechanical contribution to the MOC has been less studied than the thermohaline one, making it unclear how much of the MOC’s mechanical energy is of thermodynamic origin.

        4. Huang (p. 151, op cit) makes a more explicit disclaimer: “The ocean is not a heat engine at all. Differential heating is only a precondition for the thermohaline circulation, and not the driving force of the circulation, [which is rather] the wind stress and tides which contribute the [requisite] mechanical energy. Thus the ocean is a mechanical conveyor driven by external mechanical energy that transports thermal energy, freshwater, CO2, and other tracers. The inability of surface thermal forcing to drive the oceanic circulation was recognized a long time ago. Sandstrom discussed this fundamental issue 100 years ago; his postulation is known as ‘Sandstrom’s theorem’ in the literature.”

        Them’s fightin’ words. I’d be interested in hearing if anyone’s engaged Huang on them. I’d favor Huang’s side were my pay grade in that area up to it.

      • at low latitudes the MOC travels westerly

        I.e. towards the west. (Although the OED allows both “to” and “from” as the meaning of “westerly” I should have disambiguated with “westward”.)

      • corresponding to a Carnot efficiency of 33%

        Huang derives this on p.79 on the basis of an “input” temperature at the equator of 300 K and an “output” temperature at the poles of 200 K, giving a Carnot efficiency of 1 − 200/300 = 1/3 or 33%. These are obviously very round numbers, but the result of 33% is merely a (very weak) theoretical upper bound on the efficiency of the atmospheric heat engine and as such of no obvious practical interest.

        In the same paragraph he says (foreshadowing 4. above from p. 151) “Although the ocean has been called a heat engine in many papers and books, as will be discussed shortly, the ocean is actually not a heat engine; instead, it is a heat-transporting machine, driven by external mechanical energy due to wind stress and tidal dissipation.”

        (I would have said “tidal action” since I don’t see how tidal dissipation constitutes mechanical energy.)
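        For what it’s worth, the quoted figures are easy to recompute (values taken from the comments above as attributed to Huang, not independently verified):

```python
# Reproducing the back-of-envelope figures quoted above from Huang (values as
# given in the comments, not independently checked): ~2 W/m^2 of work from the
# atmospheric heat engine against ~238 W/m^2 of radiative heat flux, plus the
# Carnot bound from 300 K input and 200 K output temperatures.
WORK_FLUX = 2.0    # W/m^2, Huang's estimate of mechanical work produced
HEAT_FLUX = 238.0  # W/m^2, total radiative heat flux
T_HOT, T_COLD = 300.0, 200.0  # K, round equatorial/polar temperatures

actual_efficiency = WORK_FLUX / HEAT_FLUX  # ~0.008, i.e. ~0.8%
carnot_bound = 1.0 - T_COLD / T_HOT        # 1/3, i.e. ~33%

print(f"actual efficiency:  {actual_efficiency:.1%}")
print(f"Carnot upper bound: {carnot_bound:.1%}")
```

        The actual efficiency sits some forty times below the (very weak) Carnot bound, which is the sense in which the bound is of no practical interest.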

      • Vaughan:

        Thanks for taking the time to elaborate on the oceanic circulation via Huang. Few notions impede the comprehension of the actual workings of the climate system as much as the mythical “great conveyor belt” endemic in superficial “climate science.”

    • Berenyi, you state:
      “Also, we do know that terrestrial albedo is very accurately regulated by some internal emergent process, because annual average albedoes of the two hemispheres match almost perfectly, while their clear sky albedoes are very different, due to uneven distribution of land masses. Therefore it is cloud cover that does the job, which is not represented in current GCMs properly.”
      That’s very interesting, I don’t remember seeing it stated before. Do you (or anyone else) have more detail on that?

      • Matthew R Marler

        Jonathan Abbott: Berenyi, you state:
        “Also, we do know that terrestrial albedo is very accurately regulated by some internal emergent process, because annual average albedoes of the two hemispheres match almost perfectly, while their clear sky albedoes are very different, due to uneven distribution of land masses. Therefore it is cloud cover that does the job, which is not represented in current GCMs properly.”
        That’s very interesting, I don’t remember seeing it stated before. Do you (or anyone else) have more detail on that?

        That is a most interesting comment. I second Jonathan Abbott’s request for more information.

      • Compensation of Hemispheric Albedo Asymmetries by Shifts of the ITCZ and Tropical Clouds. Abstract:

        Despite a substantial hemispheric asymmetry in clear-sky albedo, observations of Earth’s radiation budget reveal that the two hemispheres have the same all-sky albedo. Here, aquaplanet simulations with the atmosphere general circulation model ECHAM6 coupled to a slab ocean are performed to study to what extent and by which mechanisms clouds compensate hemispheric asymmetries in clear-sky albedo. Clouds adapt to compensate the imposed asymmetries because the intertropical convergence zone (ITCZ) shifts into the dark surface hemisphere. The strength of this tropical compensation mechanism is linked to the magnitude of the ITCZ shift. In some cases the ITCZ shift is so strong as to overcompensate the hemispheric asymmetry in clear-sky albedo, yielding a range of climates for which the hemisphere with lower clear-sky albedo has a higher all-sky albedo. The ITCZ shift is sensitive to the convection scheme and the depth of the slab ocean. Cloud–radiative feedbacks explain part of the sensitivity to the convection scheme as they amplify the ITCZ shift in the Tiedtke (TTT) scheme but have a neutral effect in the Nordeng (TNT) scheme. A shallower slab ocean depth, and thereby reduced thermal inertia of the underlying surface and increased seasonal cycle, stabilizes the ITCZ against annual-mean shifts. The results lend support to the idea that the climate system adjusts so as to minimize hemispheric albedo asymmetries, although there is no indication that the hemispheres must have exactly the same albedo.

        Paywalled, but if you can get access, its ref’s will probably provide full info.

      • And The Observed Hemispheric Symmetry in Reflected Shortwave Irradiance. Abstract:

        While the concentration of landmasses and atmospheric aerosols on the Northern Hemisphere suggests that the Northern Hemisphere is brighter than the Southern Hemisphere, satellite measurements of top-of-atmosphere irradiances found that both hemispheres reflect nearly the same amount of shortwave irradiance. Here, the authors document that the most precise and accurate observation, the energy balanced and filled dataset of the Clouds and the Earth’s Radiant Energy System covering the period 2000–10, measures an absolute hemispheric difference in reflected shortwave irradiance of 0.1 W m^−2. In contrast, the longwave irradiance of the two hemispheres differs by more than 1 W m^−2, indicating that the observed climate system exhibits hemispheric symmetry in reflected shortwave irradiance but not in longwave irradiance. The authors devise a variety of methods to estimate the spatial degrees of freedom of the time-mean reflected shortwave irradiance. These are used to show that the hemispheric symmetry in reflected shortwave irradiance is a nontrivial property of the Earth system in the sense that most partitionings of Earth into two random halves do not exhibit hemispheric symmetry in reflected shortwave irradiance. Climate models generally do not reproduce the observed hemispheric symmetry, which the authors interpret as further evidence that the symmetry is nontrivial. While the authors cannot rule out that the observed hemispheric symmetry in reflected shortwave irradiance is accidental, their results motivate a search for mechanisms that minimize hemispheric differences in reflected shortwave irradiance and planetary albedo.

      • Berényi Péter

        Thanks, AK, that’s the paper I was referring to. Full text is also available online.

        Journal of Climate, Volume 26, Issue 2 (January 2013)
        doi: 10.1175/JCLI-D-12-00132.1
        The Observed Hemispheric Symmetry in Reflected Shortwave Irradiance
        Aiko Voigt, Bjorn Stevens, Jürgen Bader and Thorsten Mauritsen

        It is a measurement, not some godforsaken model. Also, current GCMs can’t even replicate this simple property, which is IMHO far more detrimental to them than any discrepancy between projected and actual global average temperatures.

        BTW, I have already posted it on this blog.

      • Matthew R Marler

        AK, thanks for the links.

      • AK and Berenyi, thanks for the links.

        So it appears the ITCZ shifts north/south and the balance of albedo between the hemispheres is maintained. There doesn’t seem to be a clear mechanism, but presumably the symmetry in albedo is merely a result of the process, not a driver (which I assume would relate to heat energy)?

    • From Chapter 2 of Ackerman and Knox: “The planetary averaged albedo is a key climate variable as it, combined with the solar constant determines the radiative energy input to the planet. The global annual averaged albedo is approximately 0.30 (or 30%). The albedo varies quite markedly with geographic region and time of year. Oceans have a low albedo, snow a high albedo. While the Northern Hemisphere has more land [than] the Southern Hemisphere, the annual average albedo of the two hemispheres is nearly the same, demonstrating the important influence of clouds in determining the albedo.” (Italics mine.)

      A caveat here is that A&K are referring to Bond albedo, which Wikipedia defines as the reflectance for light over all wavelengths (?) at all phase angles (as from a source filling the whole sky). Geometric albedo measures the actual light reflected back to a point source of illumination such as the Sun. Since the equator receives much more insolation per unit area than the poles, geometric albedo somewhat more faithfully captures the cooling effect of albedo. (Somewhat, because one really cares about the total reflected radiation from a point source to all points in space, and I don’t know whether that version of albedo even has a name.)

      Wikipedia gives 0.306 and 0.367 as the respective Bond and geometric albedos for Earth, showing the importance of specifying what kind of albedo one is talking about when discussing the impact of albedo on climate. The third, unnamed (AFAIK) kind I mentioned parenthetically just above should be even more relevant to climate. Geometric albedo only tells how bright an object looks when viewed from the direction of the source of illumination, e.g. from the New Moon (other than during an eclipse of the Sun by the Moon), not how much insolation is reflected altogether.

    • Berényi, let us say we have a body of air that satisfies the generally agreed definition of local thermodynamic equilibrium, in that molecules will collide many times with other molecules, all of the same average temperature, before a molecule can reach some region with a different temperature, and Kirchhoff’s law (a good absorber is a good emitter, and a poor absorber is a poor emitter).
      This body of air is warmer than the body of air above it, and so the warmer body rises and the cooler body sinks. After the switchover, the cool air absorbs IR from the Earth below and IR from the warmer air above, and it radiates less IR down and up than did the body it replaced. After the switchover, the warm body emits more IR upward than did the cooler body, but less of its emitted IR gets to the Earth, as more is intercepted by the cooler body of air below. Neither body is in steady state with respect to incoming and outgoing IR; the cool body is warming and the warm body is cooling.
      As long as warm air is less dense than cold air, the two will exchange positions and you will have circulation currents where warm air rises, radiates to space and falls, but at no stage is a packet of air at true steady state, because the less dense air must rise.
      Not only is the system not at equilibrium, but the churning demands that it can’t be at steady state.

  20. Most of these GCM’s come from principles and techniques developed by engineers. Johnny-come-lately mathematicians place far more reliance on arbitrary judgements, and perform insufficient testing of those assumptions, than engineers do. Shackley, I notice, pretty much destroys Easterbrook’s and Lacis’s arguments about GCM’s. Of course there is no justification for assuming that grossly simplified models are somehow better than more sophisticated models!

    As you asked, the only way ahead for potentially modelling climate processes with any success is Smoothed-Particle Hydrodynamics. You can take that to the bank!

    • JamesG, There goes your argument — Lacis is not the biggest proponent of GCM’s. He has said on this blog that mean value energy arguments are more effective than GCM’s in showing long-term global warming.

      And which Easterbrook are you talking about? I suppose it’s not the Easterbrook who is as dumb as a box of bricks.

      • Anyone can say anything, but in normal fields of science, and most definitely in engineering, people expect some kind of testing to validate such a bold assertion. And by that I don’t mean hindcasting, which is like going into an exam with the results in your hands. Since no model has ever predicted anything correctly, what Lacis and Easterbrook (you know the one), yourself and all the other pause deniers say is about as useful as a chocolate teapot.

  21. Is the approach that we are taking to climate modeling adequate?
    Yes, Yes and Yes. It is a trusted, proven and time-honored model which has worked well scientifically in the past, will in the future, and is used by scientists in all spheres of activity.
    Like Kepler, who, when some of his theories of planetary motion were disproved, was only driven to develop better ones, which are accepted to this day.
    The problem is, as everyone knows, that the models include guesses based on philosophy, not science.
    Remove the climate sensitivity and positive feedbacks. Put in ENSO, stadium waves and solar variance. Find ways to measure more accurately, and to measure soot and other particle dispersion and causation, and the models will be fantastic.
    As I have tried to advocate on a number of occasions, all these models could be rerun taking out some or all of the unproven assumptions, and if we found some that work, which we should, we could use them with much more predictive capability.
    Asimov showed in his Foundation series that computer programmes can get things hopelessly wrong, but they can be corrected.
    We cannot model the butterflies or volcanoes, but we can add them in when they occur, not because they might occur.

    • @angech: Is the approach that we are taking to climate modeling adequate? Yes, Yes and Yes. It is a trusted, proven and time-honored model which has worked well scientifically in the past, will in the future, and is used by scientists in all spheres of activity.

      The approach you speak of has failed to account convincingly for either the AMO or the pause. Since these are the only two multidecadal natural phenomena of any relevance to multidecadal climate forecasting beyond 2030, your confidence in climate modeling seems sorely lacking in empirical support.

      As I have tried to advocate on a number of occasions, all these models could be rerun taking out some or all of the unproven assumptions, and if we found some that work, which we should, we could use them with much more predictive capability.

      1. What does “some that work” mean when applied to “taking out unproven assumptions”?

      2. What is your criterion for an “unproven assumption,” other than an assumption in which you have no confidence?

      3. Given the meaning you have in mind, why would you expect that your envisaged improvement would have even the slightest impact on multidecadal forecasting?

      • Matthew R Marler

        Vaughan Pratt: The approach you speak of has failed to account convincingly for either the AMO or the pause.

        I am glad that you showed up for this thread. That was a good post.

      • Thanks, Matthew. I was off the grid (not even satellite internet) in Africa for a few weeks whence my silence. Unfortunately this has also created a backlog of work which will limit my opportunities to comment for a while.

      • IMHO there appears to be some underdamped mechanics at work that doesn’t manifest itself in the GCM’s. My bet is that given a gradual perturbation (positive forcing), the temperature gradients in the oceans steepen which eventually leads to an increase in convective velocity. Due to inertia, the heat uptake is at first too slow and then too fast. This leads to an overshooting/undershooting oscillation with a period of about 60yrs. Now I only need to wait a couple of more cycles to gain adequate confidence of my hypothesis :)
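        The overshoot/undershoot picture sketched above can be illustrated with a toy model (all parameters invented for illustration; this is not a claim about actual ocean dynamics): the unit step response of a weakly damped second-order system with a ~60-year damped period rings about its equilibrium before settling.

```python
import math

# Toy sketch of the hypothesized oscillation (parameters made up): the
# classic unit step response of an underdamped second-order system. With a
# damped period of ~60 years the response overshoots, undershoots, and
# slowly rings down toward its new equilibrium.
PERIOD_YR = 60.0                     # assumed damped oscillation period
ZETA = 0.1                           # assumed (weak) damping ratio
OMEGA_D = 2.0 * math.pi / PERIOD_YR  # damped angular frequency, 1/yr
OMEGA_N = OMEGA_D / math.sqrt(1.0 - ZETA**2)  # natural frequency

def step_response(t):
    """Unit step response of an underdamped second-order system."""
    decay = math.exp(-ZETA * OMEGA_N * t)
    phase = math.atan2(math.sqrt(1.0 - ZETA**2), ZETA)
    return 1.0 - decay / math.sqrt(1.0 - ZETA**2) * math.sin(OMEGA_D * t + phase)

for t in (0, 15, 30, 45, 60):  # years after a step increase in forcing
    print(f"t = {t:2d} yr: response = {step_response(t):+.2f}")
```

        With these invented parameters the response peaks about half a period after the step and dips below equilibrium a full period in, which is the overshoot/undershoot behavior the comment describes.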

      • If you look at the steps, it’s less than two decades, and sometimes less than one decade.

      • AJ, “Due to inertia, the heat uptake is at first too slow and then too fast. This leads to an overshooting/undershooting oscillation with a period of about 60yrs. Now I only need to wait a couple of more cycles to gain adequate confidence of my hypothesis :)”

        Very good! You can use paleo data to get a longer time frame for your hypothesis.

        https://lh4.googleusercontent.com/B8EdwjPc0ZA0LbUupACqn7WEr3P7epiBXsd3rvgttlg=w377-h223-p-no

        Volcanic perturbations tend to shift the timing, but the recovery is a weakly damped waveform.

      • Capt’n, that’s interesting. If there is a weakly damped waveform caused by the mechanism I propose, I wonder if this would have any implications for climate sensitivity? Do the upper ocean temperatures alternate about the equalization point? That is, can they temporarily heat to beyond their equalization temperature? For example, assuming we were near the peak of one of those cycles and we instantaneously stabilized the anthro forcings, would the temperatures actually drop towards equalization? Without consideration for delayed albedo forcings, I would imagine so.

      • AJ, ” Do the upper ocean temperatures alternate about the equalization point?”

        Don’t know. Because of the Coriolis effect and land mass distribution, you have effectively three ocean basins: the North Atlantic, the North Pacific and the Southern Hemisphere, with volumes of roughly 1x, 2x and 4x the North Atlantic. Since equalization between the basins is limited, that produces the pseudo-oscillation. The Southern Hemisphere ocean temperature runs about 2 C cooler than the NH currently, so I imagine the temperature differential is close enough to equalization to reduce the oscillation amplitude but not completely damp it.

      • Thanks for your thoughts Capt’n. Seeing that the GCM don’t produce this behavior, perhaps this is a good candidate phenomenon to study with one of the alternative modelling methods Judy mentions.

      • What natural climate oscillations do GCM’s account for that aren’t 90% damped down within a decade, and are strong enough to be relevant to multidecadal climate for the current century?

        (These are not equivalent as there may exist undamped multidecadal oscillations that are so weak as to have no observable impact on multidecadal and centennial climate, or so slow as to only be relevant to forecasts for future centuries.)

        GCM’s contributing to CMIP5 are of particular though not exclusive interest here.

        If there aren’t any then it would be helpful to have an explanation (from any source) of the relevance of GCMs to understanding 21st century climate.

  22. Kevin Hearle

    In the end, the model that best predicts the variable of interest is what we want. GCMs don’t predict any variable that matters as far as I can see, so they need to go, and we need to spend some real money and effort on alternatives. This time perhaps it might be an idea to validate the models and follow some of the rules that apply to modelling. Perhaps some multidisciplinary approaches, with people like statisticians involved who can keep the records straight, with open definitions of what people are trying to formulate, and with open access to the code, would help.

  23. It used to be that there were just some things you couldn’t compute and so didn’t know. This didn’t lead to introspection. You did the science you could do, not the science you couldn’t.

    This apparently no longer applies.

    I blame funding.

    • You just defined post-normal science (PNS). You have to try to fill in the holes to inform policy. Now, should it be attempted? Some scientists think not. But that aspect I don’t think is strictly about funding. Well, it could be related to the emphasis on “broader impacts”, I guess.

    • “It used to be that there were just some things you couldn’t compute and so didn’t know.”

      This is the most relevant statement I’ve seen yet to climate prediction.

      Given that running a GCM far into the future and obtaining a prediction is a fool’s game (it’s an absolutely unprecedented use of models, and the arguments for its validity are inane, non-rigorous and naive), what then?

      I see promise in Dr. Curry’s suggestions. If you can’t get a prediction, the next best thing is to start exploring your uncertainties in depth and gaining understanding about exactly what can be said, what is iffy, what is impossible, etc…

      GCMs are powerful, impressive, neat to run, monuments to ingenuity, perseverance and intelligence, and living museums of the interface between emerging computational power and old-school physicist pragmatism. But they are not predictors.

      What next, then?

      • First there was dynamics …
        What next, then.

        (Rhetorical answer)
        Dynamical systems. After that, symbolic dynamics

  24. michael hart

    “[T]he reductionist argument that large scale behaviour can be represented by the aggregative effects of smaller scale process has never been validated in the context of natural environmental systems and is even difficult to justify when modelling complex manmade processes in engineering.”

    Hit the mark there, sure enough.

    A ‘science of everything’ which produces a model of everything. Surely it must be worth more than just one “Nobel Prize”? What can I possibly do but bow down and bend over in front of the gods who could produce such a thing.

  25. In addition to insufficient understanding of the system, uncertainties in model structural form are introduced as a pragmatic compromise between numerical stability and fidelity to the underlying theories, credibility of results, and available computational resources. [my bold]

    This is a key circularity: if the paradigm demands warming in response to more CO2, “credible” models will produce it. As we say in IT: “garbage in, garbage out”.

    • Funny thing is that it will either whiz right over their heads, or they know they are being dishonest.

      • IMO it’s mostly psychological: all paradigms are intensely circular seen from the outside. The question of how somebody reacts to an “authoritative” paradigm is probably more a function of their personality than their honesty.

  26. Shackley et al wrote, “The model building process used to formulate and construct the GCM is considered as a prime example of ‘deterministic reductionism’. By reductionism, we mean here the process of ‘reducing’ a complex system to the sum of its perceived component parts (or subsystems) and then constructing a model based on the assumed interconnection of the submodels for these many parts.”

    Descartes held that non-human animals could be reductively explained as automata — De homine, 1662

    I don’t think that we have yet successfully modeled something as simple and well studied as a duck.

    • Speed says, “I don’t think that we have yet successfully modeled something as simple and well studied as a duck.”

      Which reminds me again of Von Neumann’s statement, “with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

      The models seem to me to be becoming more and more Ptolemaic.

  27. A fan of *MORE* discourse

    Judith Curry requests [reasonably] “I look forward to hearing from those of you who have experience in other fields that develop models of large, complex systems.”

    General experience modeling complex systems (and my own experience too) suggests the following principle: simulations should explicitly respect exact physical laws. A classic example is the long-time predictive simulation, by large-scale numerical integration, of the (chaotic!) dynamics of the Solar System.

    Lesson-Learned  Symplectic integrator numerical models (that explicitly respect the First and Second Laws) substantially outperform integrators of nominally higher accuracy that do not respect those laws.

    First Lesson-Learned  Large-scale numerical simulations should explicitly respect the Four Laws of Thermodynamics; preference therefore is given to energy-balance models.

    Second Lesson-Learned  Conversely, purely statistical “cycle-seeking” analysis methods have little (or even negative) value in climate science, in consequence of the near-universal failing of backtest overfitting, a practice deplorably unconstrained by thermodynamical principles that (sadly) is irresistibly tempting to researchers and denialist ideologues alike.
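
The symplectic-integrator lesson above can be checked on the simplest system there is. A sketch, nothing climate-specific, comparing a naive explicit Euler step with a symplectic (semi-implicit) Euler step of the same formal order on a harmonic oscillator:

```python
# Harmonic oscillator, H = (q**2 + p**2) / 2: the simplest testbed for the
# claim that structure-preserving integrators beat naively "accurate" ones.
dt, steps = 0.05, 10000

def energy(q, p):
    return 0.5 * (q * q + p * p)

# Explicit Euler: both updates use the old state; energy grows exponentially.
q, p = 1.0, 0.0
for _ in range(steps):
    q, p = q + dt * p, p - dt * q
drift_euler = energy(q, p) - 0.5

# Symplectic Euler: update p first, then q from the *new* p; energy stays
# within a small bounded wobble of the true value indefinitely.
q, p = 1.0, 0.0
for _ in range(steps):
    p = p - dt * q
    q = q + dt * p
drift_symp = energy(q, p) - 0.5

print(drift_euler, drift_symp)
```

The one-character change in update order is what preserves the conserved quantity; accuracy order alone tells you nothing about long-time behavior.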

    Judith Curry asserts [wrongly, and without reason or evidence] “Rarely [struck through by FOMD: read COMMONLY] are the following questions asked: Is the approach that we are taking to climate modeling adequate? Could other model structural forms be more useful for advancing climate science and informing policy?”

    Judith Curry, please assign an undergraduate to begin a literature-search starting with:

    • Spencer Weart’s Simple Models of Climate Change

    • James Hansen et al’s Climate sensitivity, sea level and atmospheric carbon dioxide

    • The DOE/ASCR Workshop Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    Third Lesson-Learned  Simple models and ‘no pause’ observations already yield a compelling climate-change worldview:

    • the sea-level is rising without pause or obvious limit, and
    • the oceans are heating without pause or obvious limit, and
    • the polar ice is melting without pause or obvious limit, and
    • the troposphere ‘pause’ is ending (as predicted), and
    • all of these phenomena are predicted by simple energy-balance models.

    Fourth Lesson-Learned  Multiple independent large-scale global circulation models (from many authors and groups around the world) broadly affirm the above conclusions … without substantially altering them.

    Summary Lesson  Scientists already accept (younger ones especially!) and voters too already accept (younger ones especially!) that the energy-balance climate-change worldview is right.

    As for special interests, astro-turfers, pundits, professional politicians, cranks, and cycle-seekers … well … it matters less-and-less what *THEY* think.

    The emerging simplicity of energy-balance climate-science is obvious to *EVERYONE*, eh Climate Etc readers?

    Surely your students — the mathematically inclined ones especially! — are *ALREADY* telling you this, Judith Curry?


    • Sea level varies little from halfway up a duck.

      H/t some old Bish thread.
      ==========

    • Math says running GCMs and trusting the results is wrong.

    • Oh, and kudos to those climate researchers who are doing sensitivity studies. Their work does not go unnoticed.

    • “the energy-balance climate-change worldview is right”

      A rotating planet, on a tilted axis, in an elliptical orbit around a variable star, with >70% of its surface covered by a molecule that exists in three phases during the normal orbiting cycle, is in ‘energy-balance’.

      Application of equilibrium thermodynamics to a non-equilibrium system can sometimes give a ballpark figure, but it is imbecilic to state it as scientific truth.

    • We’re the ducks, get it?
      ==================

  28. Consensus Climate Theory and Climate Models were developed using modern instrumented data.
    That instrumented data was taken in a warming phase of a cycle that has gone from Warm to Cold to Warm to Cold, inside the same upper bound and the same lower bound, for ten thousand years.
    The Climate Models cannot reproduce the warming and cooling cycles. You cannot start a model in the peak of the Roman Warm Period or the Medieval Warm Period and get results that show going into a cold period. They just wander along an average and turn up like Mann’s Hockey Stick when CO2 goes up.
    A warm period has a lower ice extent and a cold period has a higher ice extent.
    Consensus Theory uses Solar Cycles and Orbit Cycles to make Earth warm and cold, and then uses the warm to take away ice and the cold to add ice. That is backwards and wrong.

    When oceans get warm, Polar Ice melts and turns Snowfall on.
    When oceans get cold, Polar Oceans freeze and turn Snowfall off.
    Put this in the Models and they will start working better and will have bounds, like actual real data. Except for not having the Polar Ice Cycles that Earth has, the models may be mostly right.

  29. Pingback: These items caught my eye – 26 June 2014 | grumpydenier

  30. If you want to see what can be done with computing, look no further than the ASCI labs. This money allowed the labs to modernize their massively parallel solvers and to hire computer scientists and mathematicians in addition to the physicists.
    Their solvers are state of the art. Codes (actually frameworks) for multiphase flow exist. Most of the hard computer science work is already done.
    The one climate lab I know decimated its stable of exactly the people who could have helped them get their models to a flexible state where they could start working in some of the directions suggested here.
    Multidisciplinary. Atmospheric scientists and physicists will always be in the lead on the climate problem, but the day of reckoning is here, and their naivete in the fields of coding, building code frameworks, running models, doing adjoint analysis, numerical error mitigation, etc., is becoming painfully tiresome.

    Funding. It all starts there.

  31. I thought the models were already multi-component and multiphase… I guess I need to read more about them, because if they have water evaporating, forming droplets, clouds, and rain, that’s definitely multiphase.

    I wonder, are you looking at simulating water droplets in different forms, as if the system had a family of liquid water phases?

    • I think the issue is one of using first principles, using level sets etc., to fully represent the interactions of multiple fluids in a non-ad-hoc manner, as opposed to sub-grid-scale dice rolling etc…. As I understand the issue, anyway.
      e.g.
      http://physbam.stanford.edu/~fedkiw/animations/chemical_reaction.avi

      • I guess they do it ad hoc? I can see how it would be hellishly difficult to model water condensation into droplets which nucleate around aerosols and can eventually be anything from fine mist to ice balls the size of an orange. I imagine the water-vapour-to-condensed-water-and-clouds step is the trick; it gives you fits.

        Also, why not bet on the emergence of computing power 1,000 to 10,000 times more powerful than you have? I did that in the 1990s with a system dynamics model proposal. It took them three years to get the thing working smoothly, and by then the computers had “grown up”. The least you should do is assume you can have a 10x grid refinement. Why not pilot-test it and see if it yields something useful?

      • I don’t know much about atmospheric physics, so I’m not sure if I’m on the right track with the level set idea? (I’ll stick to my expertise: modelling and computer programming… if only the climate people would do the same.)
        Not sure about expecting computers to get that much faster… my understanding is that issues with heat have kind of stalled Moore’s law at this point. There has to be a paradigm shift in computer design, which is only vaporware so far….

  32. The problem with stochastic modeling is that government scientists really have no desire to venture outside humanity’s comfort zone–i.e., the last few hundred years! For example, what is the likelihood that the future climate will show no global warming for a long, long time? Looking back over the last million years no one could seriously conclude that a period of no global warming is impossible; rather, we would conclude that the possibility of an ice age event is a certainty.

  33. Apart from the issue of the fidelity of the numerical solutions to the physical equations, there is yet another uncertainty.

    It’s refreshing to see the focus being shifted more and more to the parameterizations and the issues associated with them. Especially, it is refreshing to see the hand-waving claim that “the fundamental equations of physics are solved by the GCMs” fading into the background. Unfortunately, until the focus has shifted to the discrete approximations to the continuous equations and the numerical solution of these, little real progress will be made. After all, this is where the numbers come from. Or, more nearly correctly, it is this from where the numbers come.

    Consider conservation of energy; a hand-waving focus for so many discussions. Start with the indisputable fact that the complete, un-altered total energy equation, or the complete, un-altered formulations that make up the total energy equation, are not used in any GCM. None at all. Based on this situation alone, the GCMs cannot be conserving the energy of the physical domain; an approximation to the physical domain energy is represented instead. The models have a potential to conserve the model energy, but that is only a potential. Note that the Earth’s climate systems are somewhat unique with respect to conservation of the total energy because the driving energy for all the storage, transport and exchange processes occurring within the complete system is itself part of the problem.

    It is also an indisputable fact that the model equations and the GCM numerical solution methods are in the form of an Initial Boundary-Value Problem. They are not in the form of a Boundary Value Problem which is the steady/stationary case. Integration in time to get to a new theoretical steady/stationary state following perturbations/forcings requires that the potential to conserve the model energy be carefully attended to in all the procedures and processes that lead from the continuous formulations to the calculated numbers.

    The Earth’s climate systems, relative to thermal energy storage, transport and exchange, are temperature-controlled systems. That is, we don’t get to divvy up the energy budget among the subsystems by the power (Watts), or energy (Joules), or heat flux (W/m^2). Instead, the distribution is determined by use of parameterizations for the exchange coefficients and the calculated temperature driving potential.

    The exchange coefficients are generally based on empirical data taken at limited states of the material, hopefully at or near the states that the materials will occupy in the physical domain. The exchange coefficients are not exact replications of the measured data; some differences between the data and its representations will always be present. Not to mention that the empirical data generally cannot cover all the states that are expected to occur in the calculation, or the physical domain. While the potential exists to represent exactly, in the calculational domain, the thermal energy exchanges modeled by the parameterizations, and thus conserve thermal energy, the resulting process will not correspond to the physical domain. Energy can potentially be conserved, but it is not the energy of the physical domain.

    So far as I am aware, there are no parameterizations for any physical phenomena or processes that are universally applicable to all states that a material can attain. That’s why the fundamental equations contain only material properties and do not utilize previous states that the material has attained.

    The insistence that the problem is solely a boundary-value problem is based on the assumption that the conditions within and between the sub-systems at the steady/stationary state are independent of the time-integration of the path to get to that state. This assumption, with a massive amount of work and CPU time, can potentially be shown to be correct. However, it is another indisputable fact if the spatial and temporal discrete increments used in the numerical integration are such that the calculations represent convergence of the numerical solutions to the continuous equations, the steady/stationary states are a function of the numerical path of integration. Again, model energy can be conserved, but its distribution within and between the subsystems will not correspond to the physical domain.

    • oops,

      However, it is another indisputable fact if the spatial and temporal discrete increments used in the numerical integration are such that the calculations DO NOT represent convergence of the numerical solutions to the continuous equations, the steady/stationary states are a function of the numerical path of integration.
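
The point made above, that exchange is driven by a parameterized coefficient times a temperature difference, and that what is conserved is the model’s energy rather than the physical system’s, can be illustrated with a two-box toy. The capacities, coefficient and temperatures below are invented numbers, not values from any GCM:

```python
# Two boxes exchanging heat through a parameterized flux, k * (T1 - T2):
# the temperature difference is the driving potential, and the *model*
# energy C1*T1 + C2*T2 is conserved only because the identical flux is
# subtracted from one box and added to the other.  All numbers invented.
C1, C2 = 1.0, 4.0          # heat capacities (think atmosphere vs upper ocean)
T1, T2 = 290.0, 285.0      # box temperatures, K
k, dt = 0.05, 1.0          # exchange coefficient and time step

E0 = C1 * T1 + C2 * T2     # model energy before integration
for _ in range(1000):
    flux = k * (T1 - T2)
    T1 -= flux * dt / C1   # what leaves box 1 ...
    T2 += flux * dt / C2   # ... enters box 2, to machine precision
E1 = C1 * T1 + C2 * T2

# E1 == E0 to roundoff, and both boxes relax to the capacity-weighted mean
# (286 K here); but nothing guarantees this is the *physical* energy.
print(E1 - E0, T1, T2)
```

If the flux sent from one box differed at all from the flux received by the other (a discretization slip, or mismatched parameterizations on the two sides of an interface), even the model energy would drift.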

  34. Matthew R Marler

    Prof Curry: are GCMs the best tools?

    My reading to date suggests that a barely adequate model (i.e. demonstrably better than historical variation) for forecasting will be 10 – 20 years in the future, and will be hugely computationally expensive. So I hope that all the “schools of thought” keep up their good work.

  35. Let’s compare climate modeling to oil production and reserves modeling. Looks like both need improvement.
    From the article:
    The US Geological Survey (USGS) ignited the Williston Basin oil boom in 2008 with its assessment of over two billion barrels of recoverable oil from the Bakken field, but made no assessment of America’s Three Forks field to the south. The USGS just announced that its 2013 survey doubled its 2008 estimates for combined shale oil and recoverable natural gas deposits in the Bakken and Three Forks areas of the Williston Basin, to 7.4 billion barrels of oil and 6.7 trillion cubic feet of natural gas. The USGS survey results are expected to dramatically increase oil and gas investment in the region.

    http://www.breitbart.com/Big-Government/2014/06/23/USGS-Doubles-Bakken-and-Three-Forks-Energy-Reserves

    • Forecast: having learned of the existence of a 100-year supply of natural gas alone, the Left will fail to prevent US energy independence and will also fail in branding CO2 a pollutant, but the Left and government science will succeed in destroying the dollar and costing the US a lot of jobs.

      • Fernando, you are correct. The Bakken drilling target is the middle member, which is not a shale, but rather a more porous member saturated by the source shales on either side. The same is true for the lower Three Forks member, which lies below the lower Bakken shale.
        They are not actually drilling the true Bakken shale at all, for good reason: it is not sufficiently productive. I have a chapter on that in the next book, called Matryoshka Reserves, because it analyzes the Russian Bazhenov (the largest contiguous source-rock shale deposit in the world) using exactly your observation plus some others that are commonplace in geophysics but apparently not known to the MSM.

      • Rud – the horizontal wells that are fracked ARE drilled in the shale, not the sandstone. There is no reason to drill a horizontal well in sandstone, generally speaking, since it is porous. Note from the article: “producing in the upper Bakken shale.”
        From the article:

        Drilling activity along the “Bakken Fairway” changed greatly after Meridian Oil, Inc. drilled and completed the first horizontal well in the Bakken Formation. The #33-11 MOI, was initially drilled vertically. It was cored, logged, and drill stem tested, all of which indicated that the formation was tight. Meridian then backed up the hole and kicked off at 9,782 ft. Horizontal drilling was attained at 10,737 ft (measured depth) with a resulting radius of 630 ft. The well was completed on September 25, 1987 for 258 BOPD and 299 thousand cubic feet (MCF) of gas.

        The well had a horizontal displacement of 2,603 ft and is now producing in the upper Bakken shale that is 8 ft thick. The decline curve for the #33-11 was remarkably stable for the first two years until additional nearby wells came online. The well has produced 357,671 BO and 6,381 barrels of water (BW) through December 2003.

        The success of this well set off the previous notable play dealing with the upper Bakken shale. Operators were eager to use horizontal well technology to encounter more fractures and thus produce more Bakken oil. As in any play, there was a learning curve. The #33-11 MOI took 57 days to drill and complete; 27 days to drill the vertical borehole and 12 days to drill the horizontal section at a cost of $2 million. The third set of ten wells drilled by Meridian, further down the learning curve, had an average cost of $1.08 million and took 35 days to drill. By the end of the play, Meridian Oil, Inc. was touting the fact that they could drill and complete a horizontal Bakken well for essentially the same price as the drilling and completion of a vertical well, around $900,000. Successful wells were capable of producing high volumes of oil (IP’s in excess of 1900 BOPD).

        http://www.energyandcapital.com/resources/bakken-oil-field

      • From the article:

        But recently, several explorers have had success with wells that targeted the Upper Bakken. The wells don’t have the big initial flow rates as in North Dakota, but they declined more slowly and had a better oil-to-gas ratio (98% oil) than normal, Middle Bakken wells. The result is making geologists rethink the potential of the Upper Bakken – and therefore the potential of the entire Montana Bakken.

        A productive Upper Bakken is particularly significant in Elm Coulee, the best-producing part of the Montana Bakken to date. In this area the Middle Bakken becomes very thin, pinched out by a broad Upper Bakken. The result is a world-class source rock – remember that the organic-rich Upper Bakken is the source rock for the formation’s oil – with no nearby reservoir. That means all the oil has remained in place.

        http://oilprice.com/Energy/Crude-Oil/The-Bakken-Oil-Boom-Moving-Back-Home-to-Montana.html

    • Is there something that you don’t get, Jim2?
      Shale oil is considered a second-rate source, one that is exploited because the more economic sources are declining.

      Compare shale oil to the taconite of N Minnesota. At one time taconite was considered uneconomic waste until the high grade iron ore started to get exhausted. Then they switched to taconite.

      7.4 billion barrels of oil is worth 3 months’ consumption on the world market.

      That’s why I switched to climate science analysis, at least it is more challenging than counting on your fingers.

      • It is you who doesn’t “get it.”
        From the article:

        Bakken Basics
        Question
        What is the API gravity of Bakken crude oil? Explain its relative quality.
        Answer

        Bakken crude oil gravity ranges from 36 to 44 degrees API. The quality of this oil is excellent, almost identical to WTI. The benchmark crude oil is West Texas Intermediate, which is 40 degrees API sweet crude. It is the benchmark because it requires the least amount of processing in a modern refinery to make the most valuable products, unleaded gasoline and diesel fuel.

        http://www.ndoil.org/oil_can_2/faq/faq_results/?offset=5&advancedmode=1&category=Bakken%20Basics

      • Web

        30 billion barrels of oil is a vast amount. Have you ever seen a chart that converts that to other forms of energy, for example x number of windmills of a certain rating or y number of solar panels?

        I’m just trying to reconcile the oil used with the number of renewable facilities that would be required if it were to be replaced.

        Tonyb

      • Mr Telescope, a couple of months ago I took a look at several Bakken cores just for my education… and it seemed to me the producing rock in North Dakota isn’t really shale.

        Regarding a comment by jim about “reserves modeling”: the COMPARABLE models we run in the oil industry are prepared after the field has been drilled and sampled. A full description isn’t needed to start with, but most experienced engineers would tell us the models don’t forecast very well until we have DYNAMIC data to match and a very good measurement of fluid properties (most small companies don’t even bother to gather all the data because it can get expensive).

        This means any studies done by the USGS or geologists are extremely conceptual and don’t qualify as models as such. A full model requires a lot of study, years of history we can match, and today we use software to help us fine-tune the history match, because changing all those parameters can be an inhuman task. A more sophisticated approach would involve using a match ensemble (we just don’t know which match is right). The match ensemble’s individual models are then pushed forward, and we cross our fingers the thing works. By the way, it’s bad manners to average the ensemble forecast. The outcomes have to be shown, and they are sorted in buckets.

        So as you can see this is very far from what geologists can do before a field is drilled and dynamically tested.

      • doesn’t should have been don’t – meh

      • Fernando Leanme – What you say can also be applied to climate models :)

      • From the article:

        The Upper Devonian-Lower Mississippian Bakken Formation is a thin but widespread unit within the central and deeper portions of the Williston Basin in Montana, North Dakota, and the Canadian Provinces of Saskatchewan and Manitoba. The formation consists of three members: (1) lower shale member, (2) middle sandstone member, and (3) upper shale member. Each succeeding member is of greater geographic extent than the underlying member. Both the upper and lower shale members are organic-rich marine shale of fairly consistent lithology; they are the petroleum source rocks and part of the continuous reservoir for hydrocarbons produced from the Bakken Formation. The middle sandstone member varies in thickness, lithology, and petrophysical properties, and local development of matrix porosity enhances oil production in both continuous and conventional Bakken reservoirs.

        http://geology.com/usgs/bakken-formation-oil.shtml

      • Don Monfort

        Webby is always looking through the wrong end of the telescope. He should know better.

      • Note that the denialists turn into cornucopians when the topic is switched. They really are not into science; instead they have a deep-seated urge to be contrarians.

        All of the Bakken data is publicly available and one can analyze the trajectory for oneself. If you know how to perform a mathematical convolution you can take the number of wells developed and convolve that with the hyperbolic decline of a typical well to see the profile over the next few years. That is called analysis, if you have not encountered it before.
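
The convolution described above is a one-liner in numpy: field output is the drilling schedule convolved with a typical single-well decline curve. The decline exponent, rate and schedule here are made-up illustrative values, not fitted Bakken parameters:

```python
import numpy as np

# Field production = (wells brought online per month) convolved with a
# typical single-well hyperbolic decline curve.  All numbers illustrative.
months = np.arange(120)
decline = 1.0 / (1.0 + 0.1 * months) ** 1.2    # hyperbolic decline, q/q0
new_wells = np.full(120, 10.0)                 # 10 new wells per month ...
new_wells[60:] = 0.0                           # ... until drilling stops

production = np.convolve(new_wells, decline)[:120]

# Output climbs while drilling continues, then rolls over the month after
# drilling stops: the drilling treadmill in one line of numpy.
print(int(production.argmax()))
```

Because the decline curve front-loads each well’s output, total production tracks the drilling rate closely; stop completing wells and the field peaks almost immediately.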

      • WHT convolves as crude oil production approaches, and will exceed, the 1971 peak.

      • Don Monfort

        The IEA convolves:

        http://www.bloomberg.com/news/2013-11-12/u-s-nears-energy-independence-by-2035-on-shale-boom-iea-says.html

        Oil shale BOOM!
        Biggest oil producer BOOM!
        Energy independence by 2035 BOOM!

        Webby weeps.

      • Didn’t taconite become viable because they learned how to run blast furnaces on magnetite? Crushing the ore, using magnetic separation, and then pelletizing the magnetite in clay makes it more economic to use the lower-grade magnetite in taconite than the high-grade clay/hematite ores.
        The Australians have loads of >50% magnetite and hematite ores, but they process their magnetite in a very similar manner to that used for the taconite process, as you get sweeter iron and steel.

      • Tony, you might be interested in this on landscape use and renewable energy, from Matt Ridley, Chapter 10 of ‘The Rational Optimist’:

        ‘Moreover it is an undeniable if surprising fact, often overlooked, that fossil fuels have spared much of the landscape from industrialization. … To get an idea just how landscape-eating the renewable alternatives are, consider that to supply the current 300 million inhabitants of the United States with their current power demand of 10,000 watts each would require:

        * solar panels the size of Spain
        * or wind farms the size of Kazakhstan
        * or hydro-electric dams with catchments one-third larger than all the continents put together.

        … Wind turbines require 5 to 10 times as much concrete and steel per watt as nuclear power plants, not to mention miles of paved roads and overhead cables. To label the land-devouring monsters of renewable energy ‘green,’ virtuous or clean strikes me as bizarre.’
        bts

      • jim2,
        Why don’t you read this, fresh off the press:
        http://peakoilbarrel.com/oil-field-models-decline-rates-convolution/

        You might learn something useful.

  36. ThisblogisadelusionofJudithCurry

    Dr Curry, when will you wake up from your delusion that you’re a serious scientist? Models are just theory; the real world is expressing the issue.

  37. A fan of *MORE* discourse

    CLIMATE ETC POLL

    Model prediction versus “The Practical Wisdom of Crowds”

    Which should you trust? Let’s consult Arctic Sea Ice Weblog in regard to the 2014 Sea Ice Outlook Predictions

    Median Model Prediction  4.7×10^6 (km^2) minimum ice-area

    WUWT Crowd Prediction  6.1×10^6 (km^2) minimum ice-area

    FOMD’s personal over-unders:

    50-50 either way  for the model predictions

    6-to-1 odds  that the WUWT crowd-source is too high.

    REASON  Reasoned model predictions are *SUBSTANTIALLY* more accurate than ideology-guided denialist predictions.

    This odds-imbalance is evident to *EVERYONE* … especially climate-science students and insurance executives, eh Climate Etc readers?


    • Statistically, based on the use of the scientific method, the probability that GCMs are actually looking at the same planet we live on is but 1 in 500. (So you’re telling me there’s a chance… YEAH! ~Dumb and Dumber)

      Studying 117 GCM simulations over a 20-year period (1993 to 2012), GCMs simulated a “rise in global mean surface temperature of 0.30 ± 0.02 °C per decade,” more than double the actual rate of warming. Over the last 15 years, GCM simulations ran more than 4 times higher than observations.

      [See, Fyfe, JC, et al., Overestimated global warming over the past 20 years. Nature Climate Change. V3 (Sept. 2013)]

    • Steven Mosher

      odds are more than 6:1 they are wrong

  38. To what extent are GCMs multi-threaded? Are grid cells processed in parallel?

    • First you make adjustments to the temperatures and then you convert the adjusted temperatures into anomalies for each station by comparing them to a period of time that has been selected to make the anomalies look ominous.

    • The big ones use MPI for parallelization across supercomputers. At one time (at least) the CCSM used hybrid MPI and threading. MPI being used to link between compute nodes that do not have shared memory, threading for the cores on a single node (node being like a multicore desktop, the supercomputer being a collection of such nodes with a very fast network).
      So, yes, although it’s a little more complicated.

    • If you read the detailed publicly available documentation, the GCMs sort of are and sort of are not. They do everything in two steps per time slice. First, for each grid box, they compute the ‘new’ box result based on the previous result from adjacent grid box boundary conditions from the last time iteration. That is obviously parallel multithreaded but not dynamic. Then, in a second pass, they compute the new box boundary outputs to each adjacent cell for the next iteration. Again, that can be parallel multitasked but is stepwise rather than dynamic. So, GCM computational models just slog through.

    • Chris Quayle

      Lol,

      I’m sure the systems use multithreading. But, to optimise the end-to-end computing process, you need much more than just a multithread, multicore system, which is only one link in an ideal chain.

      First you need the mental ability to visualise and define the problem in parallel terms. Then find a language capable of expressing that idiom which translates to a multicore hardware environment having the required operating system support. Overall, not a trivial task.

      No idea what languages they are using now, but Fortran used to be the science favorite afaik. When all you have is a hammer etc :-(…

  39. A fan of *MORE* discourse

    BREAKING NEWS

    Republicans Seek to Bar Modeling from Policy Decisions


    Computing Research Policy Blog

    A Fruitless Markup
    on Department of Energy R&D Act of 2014

    The budget for Energy Efficiency and Renewable Energy (EERE) is substantially cut; ARPA-E funding is largely reduced; and the bill includes burdensome limitations on what research DOE can fund.

    In addition, it includes particularly objectionable language that bars the results of DOE-funded R&D activity from being, “used for regulatory assessments or determinations by Federal regulatory authorities.”

    Specifically, this language is meant to keep the Environmental Protection Agency (EPA) from using research data to support their operations and any environmental regulations.

    The real news from this markup is that the House Science Committee has become increasingly polarized and neither side intends to cooperate with the other.

    This does not bode well for future science legislation.

    Conclusion  Support from America’s community of STEM professionals for the Republican Party’s anti-scientific faux-conservatism is approaching zero … for good reason.

    Those reasons are obvious to *EVERYONE* — young research students especially — eh Climate Etc readers?

    Judith Curry, what do your students make of ongoing Republican attempts to bar science from policy decisions?

    • You’re saying the Democrat party continues to oppose the Republican party’s efforts to use limited resources wisely. Got it.

    • Certainly you wouldn’t advise young students to uncritically cut and paste news releases from DC lobbies, such as the CRA.

    • Matthew R Marler

      A fan of *MORE* discourse: Conclusion Support from America’s community of STEM professionals for the Republican Party’s anti-scientific faux-conservatism is approaching zero … for good reason.

      Alan Leshman of AAAS wrote me the other day for the second time in as many months. In reply, I wrote that I disagreed with some recent AAAS positions, such as uncritical support for Michael Mann and uncritical support for the idea that CO2 is causing climate change.

      I doubt that a survey of STEM professionals has ever been carried out on an unbiased sample of STEM professionals. The surveys that we read about are generally drawn from a narrow fraction of academia.

    • Arrogant sneering hypocrite, AGW alarmist, and Hansen worshipper FOMD again insults our hostess by implying that Dr. Curry has failed as an educator by not promulgating AGW gospel to her students. It is a tribute to Dr. Curry that she allows all points of view at Climate Etc., even trolls like FOMD.

      Eh, Climate Etc. readers!

      • Does FAN have a clue as to what/how Dr Curry teaches? The evidence is that she only wants better science and most likely only teaches science. Good teachers such as this will present opposing theories when applicable. Good lesson for FAN, eh? In what direction is the FAN blowing? Why is it opposed to evidence?

      • The ship hit the sand, and flew apart in unrecognizable pieces.
        ===================

  40. GCMs are the best bet for ever being able to understand how the climate works and answer the big questions in climate science. I sure haven’t seen any alternative presented clearly as to how exactly those alternatives would work instead.

    Questions like:

    1. How will Earth’s climate respond if CO2 is doubled?
    2. How will Earth’s climate respond if solar output drops 2%?
    3. What are the causal factors for the observed pattern of interglacial and glacial periods?
    4. If a big volcano erupted in X with a VEI of Y, what impact would that have on climate?
    5. How does ENSO work?
    6. What caused the PETM?

    Even hypotheticals like
    7. If Earth was 100km closer to the Sun, how would the climate be different?
    8. If Earth had no moon how would the climate be different?

    Or possibly useful questions in the future:
    9. How does the climate on planet B currently behave?
    10. What do we need to put into the atmosphere of planet B to terraform it to conditions C?

    Having a GCM that can represent and run the climate of a described planet allows any number of questions to be asked and answered. Clearly a prize worth fighting for.

    • Fabricators of GCMs are just another special interest group looking for government handouts.

    • Having a GCM that can represent and run the climate of described planet allows any number of questions to be asked and answers provided. Clearly a prize worth fighting for.

      Does not matter that GCMs have never provided any skillful answers that matched real data.

    • nottawa rafter

      lolwot

      You seem to be under the impression that man has the knowledge of all possible factors and variables involved in solving those questions. Let me assure you we don’t.

    • lolwot | June 26, 2014 at 4:49 pm says:

      “GCMs are the best bet for ever being able to understand how the climate works and answer the big questions in climate science.”

      Very nice theory, but unattached to reality. The purpose for which climate models were set up was the promise of being able to foretell the future. It started with Hansen, whose model A (business as usual) ventured to forecast the global temperature from 1988 to 2019. It was an abject failure, as we discovered when we lived through the period he was modeling. His predictions were nowhere near the reality, all of them too high. However, the promise of eventually being able to foretell the future brought them plentiful funding. As a result, they have switched to supercomputers and are writing million-line code for the models.

      You would think that having advanced equipment and 26 years to get it going they would by now have a grip on what the climate is doing. But no such luck. The output of these supercomputers, and there are dozens by now, is no better than Hansen’s was and in some cases actually worse. This applies in particular to the PAUSE in warming that has lasted 17 years by now. Looking at the output from the CMIP5 house duster graph, there is not one thread there that follows the actual temperature pause, which has been flat for 17 years. They all bend up in the twenty-first century when the entire century’s temperature has been horizontal and flat. Part of the problem is of course that the mandate that carbon dioxide greenhouse warming exists is written into the code by fiat. It so happens that the present-day pause is accompanied by a steady increase of atmospheric carbon dioxide, and the models are consequently forced to respond to that.

      But what is worse is that this incredible stupidity is then written into recommendations for governments that need to prepare for what might be coming along, as well as into AR5 and the NCA, and released to the press. As a result, all such official documents and press releases talk about a totally non-existent warming that is expected to warm the planet, raise sea levels, and do other nonsensical things. This is actually worse than not having climate models at all, because then they would lack the ability to call this nonsense scientific. And that is why it is time to recognize that the climate models have failed, close down the enterprise, and stop this waste of research money on worthless projects. As to the false results that are already out, I have no idea what to do about that except to warn all users not to take any climate projections seriously.

    • Matthew R Marler

      lolwot: GCMs are the best bet for ever being able to understand how the climate works and answer the big questions in climate science.

      Where has it ever been shown that GCMs are better for planning purposes than statistical summaries of past weather? Admittedly, I changed the ground away from “understand” to “planning purposes”; that is because the GCMs do not “add to understanding” but merely assemble all the known factors into computable form for prediction and testing. Were they to accurately forecast unexpected events, that might be different, but to date they have inaccurately forecast what has happened since the individual forecasts were made.

    • lolwot, what about if the rotation speed were 36 hours or 12 hours; would the average temperature be the same? The models, based on equilibrium thermodynamics, suggest that the answer must be yes, but I don’t think so.

    • lolwot,

      All the questions you posed are irrelevant for policy options analysis.

  41. Steven Mosher

    are screwdrivers the best tool.

    ill posed.

  42. I hope you are not getting on a sinking ship. Despite supercomputers and million-line codes the performance of these models, from Hansen till today, has been atrocious and in my opinion the operation should be shut down.

    • Yet Hansen predicted the warming back in the 80s.

      To my mind no-one else predicted the world would warm.

    • ‘The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation.’ Wally Broecker

      Broecker predicted it in the 70’s. It happens along with ‘Camp Century Cycles’ – which are now known to be abrupt climate shifts.

    • Matthew R Marler

      lolwot: Yet Hansen predicted the warming back in the 80s.

      Hansen overpredicted future warming once some warming was evident in the data.

  43. Obama predicts floods and droughts. It will happen, even if he has to get the government to blow a dam.

  44. Judith, from my own (non-climate) modelling experience, this is a very big subject. The GCMs cannot hope to succeed. Not because of chaos – that can be simplified to strange attractors plus more uncertainty than would be politically acceptable – but because the grid scales cannot be made small enough to model essential processes like convective cells.
    There are a wide range of model ‘style’ alternatives, as you note. Who knows which might prove fruitful under what circumstances. (IMO, it is highly unlikely one model ‘style’ will fit all regional and time scales.) That is why much more funding should go into those explorations than into GCMs.
    We can now say definitively two things about them. One, they have no hope of succeeding, as the TAR pointed out. Two, they have also objectively failed (the pause). Time to strike out in new scientific directions.
    Which automatically means the entrenched GCM crowd will fight back viciously until they are politically pruned for having failed. They already are.

  45. For interest, my lead letter in The Australian today;

    “Great news – “Australia will be left without a major scheme to cut greenhouse gas emissions” (“Palmer kills carbon action,” 26/6). This is the only sensible approach, given the absence of warming since 1998, continued uncertainty about the processes which drive weather and climate, and a failure to demonstrate that there will be net costs if warming resumes. We know that anti-emissions programs to date have had high costs but will not affect future climate, and we know that, whatever projections have been made, the future has always surprised us.

    “Whether or not the Earth warms, the best policies have always been those which support innovation, entrepreneurship and adaptability, giving us the greatest capacity to deal with whatever future befalls. That is, broad policies which promote economic growth, rather than restrictive policies based on dubious assumptions and modelling.”

    In terms of the topic of this thread, it would need a much better demonstration of impending disaster which would be best dealt with by GHG emissions reductions to make me change my view.

    • A fan of *MORE* discourse

      Faustino opines “It would need a much better demonstration of impending disaster which would be best dealt with by GHG emissions reductions to make me change my view [regarding the desirability of a carbon-neutral global energy economy].”

      Speaking hypothetically, which among the following observations would suffice?

      •  an end to the tropospheric ‘pause’, accompanied by

      •  continued ocean heating, accompanied by

      •  acceleration of ice-mass loss and sea-level rise?

      Q  What weight of evidence suffices for Faustino to conclude “the energy-balance climate-science world-view is plausibly correct.”

      Would it be four mm/year of sea-level rise? five? six? more?

      The world wonders!

    • Faustino as someone who understands this subject your position is highly irresponsible.

      You know full well that human greenhouse gases, if they continue, will cause a significant warming of the Earth. There is simply no question about that. The best you can say is that you don’t know what effect that will have.

      But reassuring people that warming has stopped, let alone implying that the “future surprising us” would be a good thing is highly irresponsible. Shame on you.

      • actually scrub the “shame on you” part, that’s overly harsh and I didn’t mean it. I think it just rang out in my head as a way of signing off/ending the comment, but it makes me sound like some school teacher or parent wagging their finger.

      • Well, lolwot, one of my teachers did think that I needed a short, sharp shock about once a term.

      • More substantively, I understand that temperature increase since 1998 has not been significantly different from zero. Perhaps warming will resume, some here argue not, or not for some time, or that we don’t know enough to be sure. The Earth, as others believe, might resume warming. Yes, I don’t know exactly the consequences of further warming, no one does. I’ve seen no reason to accept that a policy of GHG emissions reduction in an attempt to reduce any such rise is preferable to policies which increase our capacity and adaptability to deal with the uncertain future. My interest is in policy; I think that that’s the best policy. The anti-emissions position has been dominant in Australia, we have incurred high costs for little or no reduction in potential warming, I seek to mitigate that damaging policy consensus.

      • Faustino,

        +100.

        Lolwot,

        I don’t understand why you reject what Faustino is saying.

      • “Proof by forceful assertion.” It has been effective enough to sucker Obama into the mess. Probably why the climate guys have taken up bullying with such gusto!

      • I doubt anyone had to “sucker” Obama. All this “climate change” stuff means a bigger role for the Federal government, a bigger Federal government with more government union employees, and more business-strangling, individual freedom-squashing regulations. That’s all he needs to know.

      • “I understand that temperature increase since 1998 has not been significantly different from zero. ”

        It hasn’t been significantly different from 0.2C/decade either.

    • Most warming in the satellite data is from cloud changes associated with ocean and atmospheric circulation.

      http://s1114.photobucket.com/user/Chief_Hydrologist/media/WongFig2-1_zps405d00fa.jpg.html?sort=3&o=188

      If warming is quite natural – then cooling is quite likely. Albeit at a risk of climate instability.

      The rational response is to take actions that have a wider rationale – but that reduce the risk of ‘surprises’ at the same time. There are many of these that address most global emissions of greenhouse gases while at the same time building productivity, resilience and prosperity.

      Carbon dioxide is best addressed by innovation.

    • Faustino,

      Excellent letter in today’s Australian and the two preceding days, too. Three in a row, Excellent.

    • #Least squares trend line; slope = 0.00684573 per year
      #Least squares trend line; slope = 0.0636059 per year

      Of course they want to discuss anything other than trends. Because the trends are blowing in gale force against them, and it is about to get worse.

      The trend since the last step is 0.07 C per decade. That is the part of the step that is supposed to be flat. Instead it’s uphill. The trend for the last three years is 0.4 C, which is twice the IPCC forecast/prediction.

      The trend since 2012 is 0.7 C, which is more than 3 times the IPCC forecast/prediction. This in a period when 22 of 28 ONI reporting periods have been negative. NEGATIVE. “We’re cooling,” folks, is a delusion.

      Yeah, I wouldn’t want to talk about that either.

      Especially since the PDO is apparently on its way to the moon, and the AMO may have just joined it on this insane upward trajectory.

  46. Careful – a blog thread of this quality can suggest that blogs can indeed make a significant contribution to scientific understanding. Well done, all.

  47. NSIDC March 2014 Study: Seasonal Arctic Summer Ice Extent Still Hard to Forecast http://nsidc.org/news/newsroom/2014_seasonalseaice_PR.html

  48. Feasible solutions to any deterministic model look like this.

    http://rsta.royalsocietypublishing.org/content/369/1956/4751/F2.expansion.html

    The trajectories of the multiple solutions possible with the range of feasible inputs – whether these are obtained ‘stochastically’ or not – diverge through time. Change the structure of the model and the solution space changes, with no guarantee that the change will be minor.

    Pick a solution arbitrarily and go with it. Seriously. I would stick to data for the time being.
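
    That divergence of feasible solutions is easy to demonstrate with a toy chaotic system. The sketch below is a minimal forward-Euler integration of the classic Lorenz-63 equations – an illustration only, not a stand-in for an actual climate model, with step size and run length chosen arbitrarily. Two runs whose initial states differ by one part in a million end up in entirely different places:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, n_steps=4000):
    """Integrate n_steps (here, 20 time units) and return the final state."""
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

a = run((1.0, 1.0, 1.0))
b = run((1.0 + 1e-6, 1.0, 1.0))   # perturbed by one part in a million
gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

    Both runs stay on the familiar bounded attractor, but the tiny initial difference is amplified until the separation typically becomes comparable to the size of the attractor itself – which is why ‘pick a solution arbitrarily’ is no idle remark.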

    Abrupt climate change is a theory that has emerged this century in climate science. The old theory says that climate evolves slowly under the influence of climate ‘forcings’. The new theory says that climate changes rapidly as a result of interacting sub-systems – atmosphere, biosphere, cryosphere, hydrosphere and lithosphere – as tremendous energies cascade through powerful mechanisms. Climate change occurs as discrete jumps in the system. Thus – in climate data – there is large warming or cooling in as little as a decade in the shifts between glacial and interglacial periods. On a smaller scale there are shifts between planetary warming and cooling trends – along with alternating changes in global rainfall patterns – with a period of 20 to 30 years. Climate is more like a kaleidoscope – shake it up and a new pattern emerges – than a control knob with a linear gain.

    The theory of abrupt climate change is the most modern – and powerful – in climate science and has profound implications for the evolution of climate this century and beyond. A mechanical analogy might set the scene.

    Abrupt climate change is technically a chaotic bifurcation – equivalently a phase transition or a tipping point. An abrupt climate change occurs when the climate system is forced to cross some threshold. This triggers a shift in the balance between sub-systems and a new climate state – a new equilibrium of ice, clouds, water vapor, dust and vegetation – emerges.

    The problem in a chaotic climate becomes not one of quantifying climate sensitivity in a smoothly evolving climate but of predicting the onset of abrupt climate shifts and their implications for climate and society. The problem of abrupt climate change on the scale of two or three decades is of the most immediate significance. Mojib Latif – Head of the Research Division: Ocean Circulation and Climate Dynamics – Helmholtz Centre for Ocean Research Kiel – and colleagues have recently reported on modelling of the 1976/1977 and 1998/2001 decadal climate shifts. ‘The winds change the ocean currents which in turn affect the climate. In our study, we were able to identify and realistically reproduce the key processes for the two abrupt climate shifts. We have taken a major step forward in terms of short-term climate forecasting, especially with regard to the development of global warming.’ Despite this progress, numerical prediction of climate shifts using powerful climate models is now as accurate as tossing a coin – although perhaps we should not make light of such a difficult problem in climate science.

    Indeed – abrupt climate change challenges the methods of climate modelling. Climate models embody a reductionist and constructionist approach whereby climate is reduced to its component parts and then aggregated into a system representing the sum of the parts. Climate – in the new theory – is instead a large, dynamic and complex system that can better be understood – perhaps only be understood – in terms of observed, emergent behavior.

    • I’d suggest the simplest of zero-dimensional energy balance models.

      d(OHC)/dt ≈ energy in (J/s) – energy out (J/s)

      Where OHC is ocean heat content.

      http://s1114.photobucket.com/user/Chief_Hydrologist/media/vonSchuckmannampLTroan2011-fig5PG_zpsee63b772.jpg.html?sort=3&o=145

      Here I go with a middle of the range Argo climatology for the last decade that is consistent with the energy terms. Shortwave forcing increased over the last decade – at least in the Argo period covered.

      e.g. http://s1114.photobucket.com/user/Chief_Hydrologist/media/CERES_MODIS-1.gif.html?sort=3&o=192

      You may note in passing that the ocean grew more saline over the period – implying that the total sea level rise was less than the steric rise. The rate of rise is an order of magnitude less than satellite-derived estimates.

      energy in = energy out when the change in OHC is zero. This nominally occurs at turning points in OHC – i.e. the points when the oceans change between warming and cooling: local peaks and troughs in the graph.

      It then becomes a matter of evaluating factors in the transient radiative imbalance.

      It is in fact much easier than that. Set the energy in term to a constant – it changes relatively little especially over a solar cycle. Changes in OHC then depend only on changes in net energy out.

      Anastasios Tsonis suggests that multi-decadal surface cooling and warming results from a change in energy uptake in the deep oceans or a change in cloud and water vapour dynamics. Both seem likely. In the simplest case the cooler or warmer water surface loses less or more of the heat gained from sunlight and so the oceans warm and cool.

      In the latter case – cloud cover seems increasingly likely to be a significant factor in the Earth’s abruptly changing energy dynamic. Dr Norman Loeb – Principal Investigator for NASA’s Clouds and Earth’s Radiant Energy System (CERES) – shows that large changes in the Earth’s energy balance at top of atmosphere occur with changes in ocean and atmospheric circulation. However, CERES commenced operation just after the 1998/2001 climate shift.

      Earlier satellite data from the International Satellite Cloud Climatology Project (ISCCP-FD) shows a substantial step increase in cloud at the turn of the century. Furthermore – an intriguing project originating with Dr Enric Pallé at the Big Bear Solar Observatory (BBSO) made photometric observations of light reflected from the Earth onto the moon from 1998. Short-term change in global reflectance is for the most part cloud change. A step increase in albedo was observed at the turn of the century. In this data – from both sources – energy changes from cloud changes are an order of magnitude greater than changes in greenhouse gas forcing over the same period. There are currently three Project Earthshine robotic telescopes, with plans to expand to a global network.

      http://s1114.photobucket.com/user/Chief_Hydrologist/media/Earthshine-1.jpg.html?sort=3&o=153

      ‘Earthshine changes in albedo shown in blue, ISCCP-FD shown in black and CERES in red. A climatologically significant change before CERES followed by a long period of insignificant change.’

      I’d suggest that the world isn’t warming and – on the basis of past behavior – that it seems unlikely to for decades at least.

      http://s1114.photobucket.com/user/Chief_Hydrologist/media/HADSST3vCERESnet_zps068a355c.png.html?sort=3&o=73
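
      The balance equation above can be written down directly as a few lines of code. The sketch below is illustrative only: the constant energy-in of 240 W/m² follows the suggestion above, but the energy-out series is invented for the example, not a fitted or observed record:

```python
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.1e14          # Earth's surface area, m^2

def integrate_ohc(energy_out, energy_in=240.0):
    """Integrate d(OHC)/dt = energy in - energy out over yearly steps.
    Fluxes are global means in W/m^2; returns the OHC anomaly (J)
    at the end of each year."""
    ohc, path = 0.0, []
    for e_out in energy_out:
        ohc += (energy_in - e_out) * EARTH_AREA * SECONDS_PER_YEAR
        path.append(ohc)
    return path

# invented decade: +0.5 W/m^2 imbalance for 5 years, then -0.2 W/m^2
path = integrate_ohc([239.5] * 5 + [240.2] * 5)
```

      In this toy series OHC peaks in year 5, exactly where the imbalance flips sign – the local peaks and troughs in the graph where energy in = energy out.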

  49. Progressive billionaire – “I got my billions from coal, now I want to stop everybody else.”

    https://nextgenclimate.org/accountability/charting-a-different-course/

    What a profile in courage. If Steyer really wants to divest himself from the evils of fossil fuels, he should give away every penny he ever made from such investments, now, without any tax write off.

    But no, he will now instead use a small part of those billions to push policies that will keep billions of others in poverty. Oh, and as just a happy, coincidental by product, purchase enormous political power and influence for himself and his other investments.

    • I like the way he talks about ‘special interests’ without a hint of irony, introspection, or self-consciousness. Ah, what fools we mortals be.
      ==========================

  50. Climate effects arise from carbon dioxide, black carbon, methane, nitrous oxide, tropospheric ozone and sulphate.

    We can address the majority of the global total effect in the short term with major benefits for health, welfare, environments and development.

    Carbon dioxide from fossil fuels is best addressed through technological innovation.

    http://online.wsj.com/news/articles/SB10001424052748704505804575483423120157674

    http://freakonomics.com/2012/03/14/the-rise-of-the-prize/

    http://www.forbes.com/sites/ciocentral/2012/07/17/the-merits-of-incentive-prizes-for-driving-innovation/

    This is a multi-billion dollar challenge.

    There is little to commend in current approaches and much to be gained in innovative responses.

  51. Eschenbach had a good post on emergent behavior:
    http://wattsupwiththat.com/2013/02/07/emergent-climate-phenomena/
    He wrote:
    “Well, in a good model all of the emergent phenomena we know about would actually emerge, not be parametrized … because the free actions of those emergent phenomena, the variations and changes in their times and locations of appearance are what control the temperature…”
    This makes sense to me. Some may recall his equatorial ocean cloud thunderstorm daily regime example and his opinion that the more the heat, the sooner the emergent behavior turns on to dissipate it. It occurred to me that the same thing might be going on with the phases of the PDO: the more the heat, the more time it might spend in the cooling phase, as an emergent behavior that transports cold water south and heat north. Another possible answer is that the PDO will more or less permanently stay in the warm phase.

  52. “The stuff that we can’t resolve, the sub-scale processes, we need to approximate in some way. That is a huge challenge. Climate models in the 1990s took an even smaller chunk of that, only about three orders of magnitude. Climate models in the 2010s, kind of what we’re working with now, four orders of magnitude. We have 14 to go, and we’re increasing our capability of simulating those at about one extra order of magnitude every decade. One extra order of magnitude in space is 10,000 times more calculations.” – Gavin Schmidt
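    Schmidt’s arithmetic can be checked directly: refining spatial resolution ten-fold multiplies the grid-cell count by 10³ (three dimensions), and the stability-limited timestep must shrink roughly ten-fold as well, giving ~10⁴ more calculations per order of magnitude. A minimal sketch (the timestep factor assumes a CFL-type stability limit, the usual but not the only constraint):

```python
# Cost scaling of a 3-D grid model when spatial resolution improves.
# Refining resolution r-fold multiplies cells by r**3; the CFL stability
# condition shortens the timestep r-fold as well, so total work grows ~ r**4.

def cost_factor(refinement):
    """Relative computational cost for an r-fold finer grid."""
    cells = refinement ** 3       # more grid cells in x, y, z
    steps = refinement            # proportionally more (shorter) timesteps
    return cells * steps

print(cost_factor(10))   # one order of magnitude finer -> 10,000x the work
```

    The same function gives the familiar rule of thumb that halving the grid spacing costs about 16 times as much.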

  53. A fan of *MORE* discourse

    BREAKING STEM NEWS

    Australian scientists take to the streets to protest job cuts

    A Fruitless Markup on Department of Energy R&D Act of 2014

    Climate Etc denialists are delusional to imagine that scientists will support slashing cuts to ARPA-E energy research.

    The cohort who are pleased to slash energy-research are Big Carbon/Koch brothers/Duke Energy/Vladimir Putin/oil sheiks (and their ilk) … folks whose assets are protected by massively cutting energy research and aggressively astroturfing climate-change denial.

    That slashing research protects Big Carbon is obvious to *ALL* young STEM professionals (and young voters) eh Climate Etc readers?

    Judith Curry, isn’t this how your young STEM students see it?

    \scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}

    • Need a STEM Union?

    • We’re trying to save the money to put your image on the fourth wall.
      ===============

    • Matthew R Marler

      a fan of *MORE* discourse: what the far-fight’s faux-conservative anti-science denialism is all about.

      Boy, you’ve gone off the deep end again.

    • Of old and unremembered things,
      And far fights long ago.
      ================

    • A fan of *MORE* discourse

      Whoops! see above kim! And learn!


    • Don Monfort

      Minority Democrat pols in our Congress are obstructing the appropriations process. What about: Obama says “Elections have consequences”?

      CSIRO needed trimming:

      http://www.afr.com/p/lifestyle/review/has_the_csiro_lost_its_way_GQXJkn51cSmSovqKYdMAcI

      Scientists with poor career prospects in the private sector take to the streets to protest. Nicely made signs. Maybe that’s an area of opportunity.

      • A fan of *MORE* discourse

        Science (and engineering) march on, forward under science-respecting governance, and backward in certain other nations.

        That’s obvious to *EVERYONE* — young scientists and young voters especially — eh Don Monfort?


      • In the U.S. government, scientists and engineers are managers, overseeing the work of contractors. This is my experience, and there are probably exceptions.

      • FOMbs,

        no, it is “not obvious to *EVERYONE*”

        so yet again you are guilty of anti-intellectual, obnoxious, uncivil bloviation

      • Don Monfort

        You don’t seem the type to actually work, fanny.

      • ceresco kid

        Fan, don’t you even read your links? Nowhere did it say IBM was leaving the technology business. If you are going to form some opinions, why not use some facts for those opinions. Ohh wait, I remember you are a warmist. That explains it all.

      • FAN. Reagan came to office with an economy that in many respects was worse than that of 2009. But he turned things around by removing the roadblocks to innovation. Today the roadblocks have returned and businesses are not investing in people, the engine of a strong economy.

      • Don Monfort

        Fanny’s foolishness is better left unread. I will resume ignoring his flamboyant flimflammery.

      • The young are actually more cynical than the >35’s and a greater percentage think Global Warming is a hoax

        http://environment.yale.edu/climate-communication/files/YouthJan2010.pdf

      • A fan of *MORE* discourse

        ceresco kid complains [wrongly] “Fan nowhere did it [your link] say IBM was leaving the technology business.

        Climate Etc readers are invited to verify for themselves the following excerpts

        No one knows for sure how many jobs IBM plans to cut [in 2014], and the estimates vary wildly … IBM has well under 100,000 employees in the United States these days, although how many is unclear because the company has not provided a breakdown of workers by country for many years.

        Rumors have been going around that IBM is looking to sell off its chip making business … I have little doubt it is for sale and that IBM wants to get out of the chip making business. It may even want to get rid of chip research, and in fact, it may have to sell off a chunk of IBM Research along with its patent portfolio to get a deal done.  … Selling off that chip research would probably mean IBM loses its prestige in the IT racket and its annual patent king status.

        Needless to say, these trends are of great importance to America’s STEM students who once (but no more) have realistically planned for a research career at IBM, and also to American voters who once (but no more) might realistically have regarded IBM as an American corporation.

        It is a pleasure to assist your reading comprehension and to help improve your appreciation of globalization’s impact on American private-sector research capabilities, ceresco kid!


      • I was in deep doing chip-fab science at IBM Research for a while.
        No reason not to believe they may cut research as it is unbelievably expensive. I was running experiments on multimillion $ epitaxy systems and the crapshoot is that discoveries will pay for the investment. No guarantees, so it comes down to business decisions versus the established scientific culture of a place like TJ Watson.

    • A fan of *MORE* discourse

      rls asks “Has the FAN taken his  medicine  astroturfed denialist koolaid today?”

      Nope! `Cuz mighty few STEM folks — and ever-fewer thoughtful citizens — care for denialism’s toxic brew of special-interest kool-aid!

      Thanks for asking, rls!


    • Pleased that things have settled down. Kim is a poetic calming influence and yet you exploded on her. Caused my mind to split.

    • A fan of *MORE* discourse

      The confluence of science, economics, and policy ain’t no tea-party … enlightenment demands compassionate fortitude.


    • FOMbs,

      stop spamming the threads!!

      you are unbelievable

      you should be banned for continual uncivil OT bloviation, but Dr. Curry displays remarkable tolerance and forbearance

      • Actually, I am starting to delete FOMD’s off topic comments.

      • Shame wp.com won’t allow you to turn off latex shortcodes for non-technical threads :)

      • Judith

        Hey, it’s Saturday over here. What am I supposed to do now for my weekend entertainment? Perhaps we could have a ‘Best of Fan’ compilation?

        Tonyb

      • …or perhaps a ‘Worst of Fan’?
        Or ‘Fanning the flames’
        Or ‘When the **** hits the Fan’

        The list is endless, eh Climate etc readers? ;-)

      • phatboy,
        Oh I do so like yer last suggestion, ‘When the **** hits the fan.’
        Already I can think of so-o-o-o- many examples, historic, politic
        and climatic!

      • Or “fan hit the sand”. I think he went over the edge when kim said “the ship hit the sand and blew apart in unrecognizable pieces”. Beautiful lyrics but don’t understand them. A condition not uncommon for me.

      • Hi Tony, hopefully the new Goddard post fits the bill :)

      • Looks like you might be in England. Lived two years there 1961-65, RAF Wethersfield. Only GI there that preferred bitter. Bought a new Volkswagen in Cambridge. And decided on my future vocation. Loved the place and the people.

      • Judith, if you deleted all off-topic comments, the size of the comments would be reduced to a fraction……might be readable!

    • Australian CSIRO scientists worried about job cuts??! It has been obvious at least since the time when this video was made that Oz has a surplus of climate scientists and could use some serious pruning:

  54. I don’t understand a key point. GCMs are supposed to be as “bottom up” physically modeled as possible. Doesn’t this mean they should get regional forecasts even BETTER than the global average? And if they happen to get global average forecasts better than regional, that is more likely by chance (or fudge) than by fidelity?

    • Often, for the solution of a PDE, it is possible to argue via adjoint analysis that the sensitivity to averages is less than for point values (or regional values, provided the average is over something larger, like time and the globe).
      This by no means implies such quantities are computable, however.
      Which is why I assume complete ignorance when I ask for proof that ‘averages are computable, whereas the exact weather isn’t’. So where is the adjoint analysis? Where are the studies? Where is the proof for simple models, to get an idea whether this is even true in the simplest case?
      As a matter of fact, for the Lorenz equations this statement is easily demonstrated to be untrue.
      For an amalgamated model like climate, which is not a PDE but contains one, I’m not sure where one would even start…
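      The Lorenz-63 pointwise sensitivity invoked above is easy to reproduce. A minimal pure-Python sketch (standard parameters σ=10, ρ=28, β=8/3; step size and horizon are illustrative choices): two trajectories started 10⁻⁸ apart end up as far apart as the attractor is wide, which is why pointwise prediction fails; whether time-averages behave any better is exactly the open question raised in the comment.

```python
# Sensitive dependence in the Lorenz-63 system: two trajectories started
# a tiny distance apart diverge until they are uncorrelated, so pointwise
# prediction fails after a finite horizon.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One classical Runge-Kutta step."""
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

def run(x0, steps=4000, dt=0.01):
    state, traj = x0, []
    for _ in range(steps):
        state = rk4_step(state, dt)
        traj.append(state)
    return traj

a = run((1.0, 1.0, 1.0))
b = run((1.0 + 1e-8, 1.0, 1.0))        # perturbed by 1e-8 in x
max_gap = max(abs(p[0] - q[0]) for p, q in zip(a, b))
print(max_gap)   # the 1e-8 offset has grown to the width of the attractor
```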

  55. Going back to ’67, the AGW industry was born long before the GCM-fabrication industry.

  56. Tetragrammaton

    The tremendous success of current GCMs is evidenced by the current huge and increasing spending on windmills, solar systems and other “alternative energy” projects, designed to forestall the doom scenarios which follow from the truth of the models’ projections. In their 30+ years of labor, the model builders have had the very difficult task of fulfilling three separate objectives. The first and most important objective has been to secure and increase the grants and other funding to enlarge and extend the government and academic facilities and staffing needed to address the “climate change” meme. The second task, different from the first but somewhat related, has been to provide a bulwark to boost the political objectives of the left-leaning lawmakers who see the aforementioned meme as a ticket to the new sources of taxes and central control they seek. A third objective is to establish layers of complexity and camouflage in the models, sufficient to deflect any criticism of the veracity of the models’ construction and/or results. All objectives have been achieved, in spades.

    It is not the model builders’ fault that Mother Nature has provided new “data” in the form of a lengthy (16 year) pause or hiatus in the inevitable global warming progression. This has meant that the actual “results” of most global-warming models have been very obviously dead wrong, so far. Fortunately for them, only a handful of spoilsports, polymaths or (like yours truly) ancient physicist-geezers care to expose or discuss the reality of the situation.

    Indeed, it is evidently much easier for all involved to continue down the same path, and to add to the complexity and camouflage rather than take the risky step of constructing new types of climate models. To add a fourth objective, to actually successfully emulate or simulate the climate with stochastic or other means, would seem to them to be neither necessary nor particularly desirable. And what if it produced results which undermined the first two objectives?

    Really, informed commentary on GCMs makes the most sense if today’s models are viewed and judged as decorative artifacts. Viewed as an ensemble, they are sufficiently impenetrable to repel all but the most ungenerous or daring critics. Claims of continuing improvements and enhancements, laughable from the standpoint of the “denier” connoisseur, can be confidently trumpeted to the obedient media and political world.

    No change is needed.

    • Years ago I said the modelers were trying to keep their toys on circular tracks, on the ceiling.
      =============

    • Bob Ludwick

      @ Tetragrammaton

      I was just getting ready to write the same thing, but not as well, when I saw your name as a recent commenter, didn’t recognize it as a ‘regular’, and decided to see what you had to say.

      Saved me the trouble.

      Thank you.

    • “A third objective is to establish layers of complexity and camouflage in the models, sufficient to deflect any criticism of the veracity of the models’ construction and/or results.”
      That for sure.
      As well as appealing to the complexities of the earth system (the study of which I have great respect for) as a means to discredit scientists from all other disciplines (even though at its heart the GCM is a numerical PDE solver, which these guys have little knowledge about compared to math/engineering guys).

  57. Paul Ehrlich, James Hansen, Al Gore and the Eurocommunists all blamed the US as the major cause of AGW. Their anti-Americanism in the guise of CAGW has become shrill, with or without GCMs to mask their underlying motives, now that China is numero uno in the CO2 production business and the move to natural gas, together with the Leftists’ jihad on the economy, has shrunk America’s carbon footprint.


  58. Dr. Evans and Jo Nova have put out a testable prediction and aren’t waffling about. None of this prediction-projection-expectation-anticipation nonsense.

    http://joannenova.com.au/2014/06/big-news-viii-new-solar-model-predicts-imminent-global-cooling/

    • I’m following this, not sure what to make of it yet

      • I’m kind of intrigued because it was derived from observations, treating the climate as a black box. It’s very high level, but it seems this approach would hold promise. What I’m wondering is how one would add more fine grained detail to it.

      • Somebody over there asked a very interesting question. What if David is right, but can’t be shown so because of his use of adulterated temperature records?
        ===========

      • His approach is so (relatively) simple that he could easily re-run the analysis with a different temperature series. He may have already.

      • Kim – I forgot, but he ran several different temperature records through the analysis, IIRC, and they all had the notch.

        The reason the notch is there is because the solar insolation variation isn’t in the temperature record. If this is because it’s lost in noise, then the model might be meaningless. But if it isn’t in the temperature record of over 100 years, then it seems the climate ain’t all that sensitive anyway.

        Anyway, if the variation should be in the temp record but isn’t, then there has to be a mechanism that gets rid of the insolation signal.

        Time will tell, I suppose.

      • A lot of people have noted that the Sun doesn’t drive temperature. The reason given is that the insolation variation is too small. But it seems that in data that spans 10 or more 11 year cycles, we should see it. And some papers do claim to see it.

        But Evan’s method shows that it’s not in the temp record.

        Does anyone know good reasons why it shouldn’t be there if there is no counter-acting force?

      • Lubos knows what to make of it.

      • Here is part of what Lubos says:
        What the near-vanishing of R~(f) for 1/f close to 11 years really means is that the most obvious possible proof of the direct effect of the total solar irradiance doesn’t exist – the 11-year cycle isn’t present in the temperature data. This is a problem – potentially a huge problem – for any theory that tries to present the solar output as the primary driver even at the decadal scale and faster scales. It’s surely nothing to boast about. It makes the solar theory of the climate much less likely, not more likely. Suggesting otherwise is a case of demagogy.

        http://motls.blogspot.com/2014/06/david-evans-notch-filter-theory-of.html
        (end quote)

        Unless I missed it, and I have quickly read the entire post, I don’t see any mention of “force x.” Lubos only mentions insolation, although his function does incorporate a delay. He doesn’t seem to get the idea that some other force that is in sync with the solar cycle is responsible, in Evans’s model, for suppressing the 11 year cycle.

        At any rate, I’m not convinced that Lubos’ criticisms are valid.

        That being said, I’m all ears.

      • Yeah, the “x” force. Which would be, what? Maybe Tom McClellan can eyeball it out for them. I suspect that the “X” force is a figment of confirmation bias.

      • One possibility is modulation of cosmic rays by the Sun’s magnetic field. It doesn’t take a large increase in cloud cover to drop a good bit of incoming watts.

      • So at the peak of insolation, the Sun’s magnetic field is stronger, deflecting more cosmic rays and decreasing cloud cover, which reinforces the heating. Hmmmm … guess that won’t work.

      • So that brings us back to the question of should a 0.1% variation in TSI show up in the temperature record.

      • jim2, “One possibility is modulation of cosmic rays by the Sun’s magnetic field. ”

        A larger possibility is that the solar impact is greater in the ocean mixing layer in the tropics and takes time to migrate through the system, where it is amplified by the land mass. I think that is why “surface” temperature was not the proper “metric” to base all this nonsense on. Schwartz has the ocean mixing layer delay at ~8 years, and with an “11 year” solar cycle that actually ranges from less than 10 to more than 12 years, you need a few cycles to note the small direct impact of the ocean mixing layer on the brilliant lower troposphere/ocean “surface” “metric”. It’s like apples and oranges with a few bananas and some passion fruit thrown in. Note that the “22 year” Hale cycle peeks out in the “metric” a little more clearly when it really should not.
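        The muting effect of a slow mixed layer can be put in numbers by treating it as a first-order low-pass filter, a textbook simplification rather than anyone’s actual calculation here. With Schwartz’s ~8-year time constant, an 11-year forcing cycle comes through at only about a fifth of its amplitude (gain ≈ 0.21), lagged by roughly 2.4 years:

```python
import math

# First-order (single time-constant) response to a sinusoidal forcing:
#   gain = 1 / sqrt(1 + (w*tau)^2),  lag = atan(w*tau) / w,
# where w = 2*pi/period. tau = 8 yr is the mixing-layer estimate cited
# in the comment above; the single-box model itself is an assumption.

def first_order_response(period_yr, tau_yr):
    w = 2 * math.pi / period_yr
    gain = 1 / math.sqrt(1 + (w * tau_yr) ** 2)
    lag = math.atan(w * tau_yr) / w
    return gain, lag

gain, lag = first_order_response(11.0, 8.0)
print(round(gain, 2), round(lag, 1))   # ~0.21 of the amplitude, ~2.4 yr lag
```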

      • Here’s another simple model:

        A simple additive model for total solar-output variations was developed by superimposing a progression of fundamental harmonic cycles with slightly increasing amplitudes. The timeline of the model was calibrated to the Pleistocene/Holocene boundary at 9,000 years before present. The calibrated model was compared with geophysical, archaeological, and historical evidence of warm or cold climates during the Holocene. The evidence of periods of several centuries of cooler climates worldwide called “little ice ages,” similar to the period anno Domini (A.D.) 1280–1860 and reoccurring approximately every 1,300 years, corresponds well with fluctuations in modeled solar output. A more detailed examination of the climate sensitive history of the last 1,000 years further supports the model. Extrapolation of the model into the future suggests a gradual cooling during the next few centuries with intermittent minor warmups and a return to near little-ice-age conditions within the next 500 years. This cool period then may be followed approximately 1,500 years from now by a return to altithermal conditions similar to the previous Holocene Maximum.

        http://www.pnas.org/content/97/23/12433.full

      • I’m signing off. But here is a paper on TSI and magnetic field. I wish I had more time for this.

        http://www.hindawi.com/journals/jas/2013/368380/

      • It’s too easy being a skeptic. This solar thing shows you don’t have to understand something at all to believe it. Force X. Great, that must be it. That’s the ticket.

      • So at the peak of insolation, the Sun’s magnetic field is stronger, deflecting more cosmic rays and decreasing cloud cover…
        This one: http://jonova.s3.amazonaws.com/cfa/solar-radiation-peaks-magnetic-field-b.gif
        The purple line area is weakening and throw in an 11 year delay, fits with recent GATs. Unfortunately the past history of the magnetic field strength is pretty sketchy as far as I can tell.

      • JimD, It is easy to be a skeptic because the problem is ridiculously difficult and the model “projections” are consistently wrong.

        That is the solar impact on the ENSO regions with a 27 month lag thanks to seasonal mixing. That lag produces the QBO which is basically the equatorial crossover resisted by the Coriolis effect. You also have a Brewer-Dobson circulation in the stratosphere that varies in intensity and is not included in many models.

        So since the grand poohbahs of climate science have screwed the pooch, skepticism is the way of climate science’s future.

      • Thanks j2, cd, & R. No thanks, Jim D.
        ==============================

      • Where X>AGW and D=null hypothesis.

      • captdallas:
        Where to look for the TSI signature? The ENSO region. Something close to an 8 watt per square meter variation. Then it flows away in the oceans and shows up later. Is 8 watts considered insignificant?

      • Ragnaar, “Is 8 watts considered insignificant?”

        In the tropics the 11-year solar cycle variation is a little over 1 Wm-2, and that seems to be considered insignificant because of the “global” averaging, but that was before land amplification was recognized. Land amplification, mainly in the 30N-60N region, is something the models don’t even come close to getting right, along with absolute temperature. Since it is pretty obvious that solar does impact ENSO and ENSO does impact “climate”, it should be significant.

        ENSO though is not really cut out to be a reliable “global” index, since there are 4 ENSO regions and there should be more like 6. The Indian ocean, because it has less THC influence, is a better “global” indication of variation in forcing.

        http://redneckphysics.blogspot.com/2013/12/2000-years-of-climate.html

        That uses the Indian Ocean or rather the Indo-Pacific warm pool, sea level, ocean heat content and the Oppo 2009 to reverse reconstruct temperatures allowing for the land amplification.

        Then if you consider both solar and volcanic influences,

        http://redneckphysics.blogspot.com/2013/10/sol-y-vol-giss-rough-fit.html

        you can start seeing a little more of what happens even without including the lags.

        It is a fun puzzle

      • Steven Mosher

        temperature data does not have a notch.
        it does not have a trend
        it is what it is.

        Models of data have trends and notches.

        data just is. it is what it is. nothing more.

      • Everything is what it is in this BEST of all possible worlds.
        ===============

      • Here is the Wiki on transfer functions. It is an absolutely valid approach to attempt to analyze a “black box” system, where you know the input to the system and the output from it, but aren’t privy to the inner workings of it.

        https://en.wikipedia.org/wiki/Transfer_function

        And I might note, that I’m not saying this is correct. I am saying N&E have made a prediction. If the prediction holds, then their hypothesis will gain credibility. That’s how science works.

        I am exploring possibilities for “force x.” In order for “force x” to “know” when to kick in, it would have to be synched with the Sun. Some attribute of the Sun other than the TSI would have to fill that bill. Otherwise, you are left with a mechanism internal to Earth’s climate – that would be difficult.

        I would be happy if someone could weigh in on the fact that the total Solar variation is only 0.1%. Does anyone have proof that it is too small to be detectable in global surface temperature?

        If you have a knee-jerk reaction, and based on that reject or accept an idea, you aren’t acting in a proper scientific manner. Just sayin’.
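        The transfer-function idea can be sketched in a few lines: probe a black box with a known input and take the ratio of output spectrum to input spectrum. The “box” below is a toy two-tap filter invented for illustration; it stands in for the unknown system, not for Evans’s actual model:

```python
import cmath

# Empirical transfer-function estimate for a "black box" system: drive it
# with a unit impulse (which excites every frequency equally) and divide
# the output spectrum by the input spectrum.

def black_box(x):
    """Unknown system under test (a toy): y[n] = 0.7*x[n] + 0.3*x[n-1]."""
    return [0.7 * xn + 0.3 * (x[i - 1] if i else 0.0)
            for i, xn in enumerate(x)]

def dft(x):
    """Plain O(n^2) discrete Fourier transform; fine for n = 64."""
    n = len(x)
    return [sum(xm * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, xm in enumerate(x))
            for k in range(n)]

n = 64
x = [0.0] * n
x[0] = 1.0                      # unit impulse probes all frequencies at once
y = black_box(x)

X, Y = dft(x), dft(y)
H = [yk / xk for yk, xk in zip(Y, X)]   # estimated frequency response
print(abs(H[0]))                # DC gain: 0.7 + 0.3 = 1.0 (to rounding)
```

        With a noisy real-world record instead of an impulse, one would average cross-spectra over many segments, but the principle is the same: H is recovered from input/output data alone, without opening the box.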

        OK, Kim. So maybe the global temp record just isn’t good enough for Evans’s approach.

        http://wattsupwiththat.com/2014/06/26/on-denying-hockey-sticks-ushcn-data-and-all-that-part-2/#more-112008

      • jim2,

        It’s very nice and scientific that Evans has made a prediction, but I will also make a prediction. It’s more likely to get warmer or to stay the same than it is to get significantly colder. Let’s see who gets lucky. If it does get colder, that won’t prove Evans’ model.

        Lubos’ arguments are more persuasive than Evans’ black box. This is part of Lubos’ response to Jennifer Marohasy supporting Evans in the comments on Lubos’ blog:

        “I am afraid that just like David, you don’t appreciate one thing. If you train a neural network or if you adjust some function(s) so that the fit is the best, whatever you exactly want to be fitted, then the fit will be pretty good and will suggest that it works. But this is guaranteed to happen even if no underlying relationship exists! In effect, you are just fitting the noise – you are “overfitting”. Your paper, like David’s, seems to avoid the key question whether all the agreement you get is anything else than overfitting, fitting the elephant, i.e. developing a model for a particular episode of noise. In David’s, the answer is obviously that he is developing one. The amount of information he has to insert to his response function is the same as the amount of information he wants to predict. He is just reparameterizing the data in some way. In your case it’s less clear so it’s remotely plausible that a model like that has a predictive power based on a real relationship, but even in your case, I think that you are bringing no real evidence supporting that claim.”

        Read the whole comment:

        http://motls.blogspot.com/2014/06/david-evans-notch-filter-theory-of.html#comment-1445117227

        I think webby’s efforts make more sense than Evans’ black box, with notch and force x kicker.

        More from Lubos comment:

        “The real problem is that the correlation between the Sun’s total output (in this case) and the climate is missing. David is essentially saying that this lack of correlation doesn’t matter because the relationship may be given through a complicated response function which is fine-tuned to make all traces of the correlation disappear. One could speculate that such a relationship exists between any pair of quantities in the world so of course that the evidence is zero – the response function is just calculated from the data on both sides to fudge the elephant.

        But it’s worse than that. The required response function we need here *cannot* result from *any* physical mechanism because it’s not causal. The violation of causality is the dealbreaker. David and maybe you seem to underestimate the power and diversity of the tools that may be used to instantly kill most ideas – he seems to live in the Anything Goes world.”

        Evans asked Lubos to review. It’s no kneejerk reaction.
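        Lubos’ overfitting objection is easy to demonstrate in miniature (a toy calculation, not a re-analysis of Evans’ fit): give a model as many free parameters as there are data points and it will reproduce pure noise exactly, while predicting nothing out of sample.

```python
import random

# "Fitting the elephant" in miniature: a model with as many free
# parameters as data points reproduces pure noise exactly in-sample,
# yet has no predictive power on a held-out point.

random.seed(42)

xs = list(range(8))
ys = [random.gauss(0, 1) for _ in xs]      # pure noise "data"

def lagrange(x, xs, ys):
    """Degree-7 polynomial through all 8 points: 8 parameters, 8 data."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

in_sample = max(abs(lagrange(x, xs, ys) - y) for x, y in zip(xs, ys))
held_out = abs(lagrange(9.5, xs, ys))      # extrapolate beyond the data
print(in_sample, held_out)   # essentially zero in-sample; huge out-of-sample
```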

      • captdallas:
        Okay, I see my mistake. It’s 1 Wm-2. Kind of thin. However the TOA imbalance is sometimes around 1 Wm-2.

      • ragnaar, “Okay, I see my mistake. It’s 1 Wm-2. Kind of thin. However the TOA imbalance is sometimes around 1 Wm-2.”

        Right. It is on the order of the imbalance, so a prolonged solar minimum is more likely to lead to no warming rather than cooling of the oceans. Land, however, has that pesky amplification issue, so there could be some cooling if the amplification is linear. Not all that likely, but possible. Then things can get more interesting.

    • Evans idea is poetic. Skeptics might now be claiming high sensitivity in regard to TSI. Readily accepting a plethora of new feedbacks. Replaying the history of CO2 with a new lead actor, the Sun. I wish Evans and Jo Nova luck.

    • Mosher asked me to put in his two cents on the “x” force: unicorns, pink ones, 12 of ’em

    • Sorry I put my comment under the wrong thread:

      Jim D | June 27, 2014 at 11:16 pm |
      It’s too easy being a skeptic. This solar thing shows you don’t have to understand something at all to believe it. Force X. Great, that must be it. That’s the ticket.

      Sorry Jim, IMO you mischaracterized the majority of comments. Commenters seem appreciative of someone finally offering a testable hypothesis and are willing to let it play out. From an objective, scientific viewpoint that seems reasonable to me, to the majority of commenters, and to our hostess, who commented “I’m following this, not sure what to make of it yet.”

      • Evans’s X-Force has 10-20 times the no feedback response and not a peep. Say that CO2 has 3 times the no feedback response, and all heck breaks loose. What gives? Yes, Evans has to amplify solar changes that much for it to even compare with CO2, and now is looking for a reason to do this. He paints a picture of an unbelievably unstable climate that skeptics usually hate too, but not this time. Where’s the skepticism? This one is sorting out the true skeptics (and I see a few who are questioning it), and the “skeptics”.

      • Don Monfort

        It’s easier being a consensus dogma drone, jimmy dee. Drone on, jimmy.

      • Jim D:

        He paints a picture of an unbelievably unstable climate that skeptics usually hate too, but not this time. Where’s the skepticism?

        Well, a lot of us haven’t yet read it, having just finished breakfast on this side of the pond.
        However, from the brief look I’ve had, it seems the ‘notch’ is merely an artefact of Evans’ methodology – we saw much the same thing with the Hockey stick.

      • Don M, call me biased, but I prefer theories that actually have some physics behind them, and are not just statistical constructs.

      • Jim D | June 28, 2014 at 12:35 am |
        Evans’s X-Force has 10-20 times the no feedback response and not a peep. Say that CO2 has 3 times the no feedback response, and all heck breaks loose. What gives? 

        The GCMs don’t work with low sensitivity values, I think I read. The same seems true of Evans’s model. However, it turns out that the high sensitivity he requires, if it is required, is a page out of someone else’s playbook. One can do fun things with high sensitivity. Do you want a mommy van or a Lamborghini Spyder?

      • The Lambo Ragnaar.

        Heh !

    • ==> “I’m following this, not sure what to make of it yet”

      Well – at least it means all the debate will be settled by 2017. Only 3 years and it will be clear that global warming is just not a concern. The “skeptics” will be proven right, once and for all, and we can pump ACO2 into the atmosphere with impunity. No more starving children. Fossil fuel Nirvana can be reached. No more concern about increases in extreme weather, about sea level rise, about species extinction. No more AGW cabals, with their Eco-N*zi/anti-capitalistic/statist/faith-based/socialistic/”progressive”/leftist/anti-poor/progress-hating/neo-Luddite/alarmist/warmista/neo-McCarthyistic/Lysenkoistic/oneworldgovernmentistic plots to destroy the human race.

      What a relief! Just a few brief years and “skeptics” can stop spending so much time typing at their keyboards to save us from disaster.

      • What a relief! Just a few brief years and “skeptics” can stop spending so much time typing at their keyboards to save us from disaster.

        er, you’ve got the rear camera on by mistake.

    • And finally, we have true science:

      Joanne Nova
      June 27, 2014 at 3:30 pm · Reply
      Popeye, with respect, we are part of the real scientific community. I think you are referring to the officially endorsed government funded science community?

      Despite the horrible news of imminent cooling, we can look forward to the deconstruction of the “officially endorsed government funded science community.” Imagine the good that will come about as a result. No more of those resource-wasting entities like the CDC and the NSF. No more of that NASA and EPA and the insidious “government-funded science.” We won’t have to worry about the scourges of vaccines and atomic regulation and food security.

      Hallelujah, brothers and sisters. Put down your hammers and saws and AK-47s and stop building your bunkers and organizing your militias. Free-market Utopia is right around the corner. The Warmistas are already heading for the hills.

    • “data just is. it is what it is. nothing more.”

      Wrong. If it doesn’t show warming then it needs to be adjusted to show that it does.

      Duh.

      Andrew

  59. I think this article will give many on this blog some insight into themselves. Like, when do you get your ideas?

    Secrets of the Creative Brain
    http://www.theatlantic.com/features/archive/2014/06/secrets-of-the-creative-brain/372299/

  60. David Archibald relays the chilling story of “two Californian researchers, Leona Libby and Louis Pandolfi. In 1979,” Archibald says, “they used tree ring data from redwoods in Kings Canyon to make a remarkably accurate forecast.” From a Los Angeles Times interview of that year:

    When she and Pandolfi project their curves into the future, they show lower average temperatures from now through the mid-1980s. “Then,” Dr. Libby added, “we see a warming trend (by about a quarter of 1 degree Fahrenheit) globally to around the year 2000. And then it will get really cold.” How cold? “Easily one or two degrees,” she replied, “and maybe even three or four degrees.”

  61. The problem seems to have occurred in finance and business planning. When the mobile spectrum auctions were announced in Europe, all the bidders started to construct models to tell them how much they could bid.

    The general view was that the more detailed the model the better. Makes sense, right? The more detailed and specific your information, the more accurate your predictions must be, and the more sure you must be that you are paying the right amount and really will be able to make the return you think.

    Well, no. They all ended up with hundreds of pages of Excel and predictions whose basis was totally unclear to anyone except the modellers, and maybe not even them, and they ended up bidding largely on emotion; in a couple of cases they nearly bankrupted themselves.

    Whereas the real drivers of return would have fitted on one A4 – but of course, without specifying the values of all the assumptions in enormous detail. When you do that, what you end up with is something that can be sanity tested by looking at ranges of values for variables in a discussion.

    What they ended up with was a complete inability to argue about whether the assumptions were reasonable because the model had put them out of reach.

    We seem to be in the same situation with climate. Endless detail is not a marker for accuracy, still less usefulness. Multiplying detail does not usually lead to any different predictions, nor to greater certainty, than very simple models. It just makes the process more obscure and less reliable.

  62. Unless the model mesh is sufficiently fine to resolve the local upwelling of evapotranspiration transfer of water vapour and air from the surface to the upper atmosphere, the models are not true GCMs, just a cheap imitation. It’s like not modelling viscous effects (i.e. turbulence) in CFD models: you only end up with a partial solution at best. That is fine if you have other robust methods for adding the viscous effects back in, say from model tests.

    Then there is the problem of the mesh not being fine enough (too coarse) to converge to a proper solution anyway, due to not being able to properly model large local variations in the remaining circulation parameters. The trouble with fine meshes is that they require lots more number-crunching time.

    It may well be that the GCM approach is just beyond us for the above reasons alone, and then of course there is the old problem of not actually understanding the mechanism itself: you know, being obsessed with some of CO2’s effects but ignoring others…
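
    The cost side of this argument can be made concrete with a back-of-the-envelope sketch. The numbers below are illustrative assumptions (an explicit finite-difference scheme with a CFL-limited time step), not any modeling group's actual budget: halving the horizontal grid spacing roughly doubles the work in each horizontal direction, again in the vertical if it is refined too, and once more because stability forces a smaller time step.

```python
# Rough cost scaling for refining the grid of an explicit finite-difference
# model. Hypothetical cost model: total work ~ nx * ny * nz * nsteps, with
# the stable time step shrinking linearly with horizontal spacing (CFL).

def relative_cost(refinement: int, refine_vertical: bool = True) -> int:
    """Cost multiplier when the horizontal spacing is divided by `refinement`."""
    horizontal = refinement ** 2          # both nx and ny grow
    vertical = refinement if refine_vertical else 1
    time_steps = refinement               # CFL: dt shrinks with dx
    return horizontal * vertical * time_steps

# Halving the spacing in all three dimensions: 2*2 * 2 * 2 = 16x the work.
print(relative_cost(2))                            # 16
# A 10x horizontal-only refinement: 100 * 1 * 10 = 1000x.
print(relative_cost(10, refine_vertical=False))    # 1000
```

    Under these assumptions, each factor-of-two refinement costs roughly an order of magnitude, which is why "just use a finer mesh" is not a practical answer.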

    • Seward’s folly. That’s ok, believe what u want 2.

    • @ M Seward

      “It may well be that the GCM approach is just beyond us for the above reasons alone and then of course there is the old problem of not actually understanding the mechanism itself, you know, being obsessed with some of CO2 ‘s effects but ignoring others……”

      Exactly. The purpose of climate models is to provide scientific justification for government action by confirming that anthropogenic CO2 poses an existential threat that must be addressed. At that, they have been wildly successful, and they continue to succeed even as data and model outputs diverge.

      In my opinion though (with next to no scientific ‘chops’ to back me up) climate models do not and can not provide reliable long term climate predictions. There are at least two reasons for my opinion:

      a. The models are CO2-centric, but there is no reason for me to believe that the modelers actually understand all the factors which drive the climate, or that they have sufficient data on the factors that they ARE aware of to make projections meaningful. Or that CO2 is even A major influence, never mind the dominant one.

      b. Of the external inputs known to impact our ‘climate’, several occur at random times, with random amplitudes, and are intrinsically unpredictable on any time scale. Given that, any model of future climate whose inputs vary randomly and unpredictably should be prima facie suspect.

  63. nobodyknows

    Can there be a difference in how models are used, or in the function of models? In the IPCC context it looks like models are used as proof of some thesis. But I think that models can also be used as a working tool, as in some posts of Isaac Held. I think it is more trustworthy when an individual model is used in a reflective way to explain some data.

  64. Tetragrammaton

    What is needed on this thread, or perhaps on a new one, is a competition to come up with the best collective noun to use for climate models. Here are a couple of suggestions:

    “A MUDDLE OF MODELS”

    “AN OBFUSCATION OF MODELS”

  65. The advancement of climate science is arguably being slowed by the focus of resources on this one path of climate modeling.

    Yes, why not scrap them and put the money into hard science: accurately measuring the balance of radiation in absolute terms, and seeing whether or not this marches in step with CO2 levels?
    Surely that is the only real arbiter?

  66. Judith, the GCMs are entirely unfit for the purpose of climate forecasting.
    The entire output of the IPCC forecasting models, and all the impact studies which are based on them, are a waste of time and money. A new forecasting method must be used. For forecasts of the possible timing and amount of the coming cooling, based on the 60- and 1000-year quasi-periodicities in the temperature data and on the neutron count and 10Be record as the best proxy for solar activity, see
    http://climatesense-norpag.blogspot.com/2013/10/commonsense-climate-science-and.html
    Here are the conclusions.
    “It has been estimated that there is about a 12-year lag between the cosmic ray flux and the temperature data; see Fig. 3 in Usoskin et al.,
    http://adsabs.harvard.edu/full/2005ESASP.560…19U.
    With that in mind it is reasonable to correlate the cycle 22 low in the neutron count (high solar activity and SSN) with the peak in the SST trend in about 2003 and project forward the possible general temperature decline in the coming decades in step with the decline in solar activity in cycles 23 and 24.
    In earlier posts on this site http://climatesense-norpag.blogspot.com at 4/02/13 and 1/22/13
    I have combined the PDO, millennial-cycle, and neutron trends to estimate the timing and extent of the coming cooling, both in the Northern Hemisphere and globally.
    Here are the conclusions of those posts.
    1/22/13 (NH)
    1) The millennial peak is sharp – perhaps 18 years +/-. We have now had 16 years since 1997 with no net warming, and so might expect a sharp drop in a year or two (2014/16), with a net cooling by 2035 of about 0.35. Within that time frame, however, there could well be some exceptional years with NH temperatures +/- 0.25 degrees colder than that.
    2) The cooling gradient might be fairly steep down to the Oort minimum equivalent which would occur about 2100. (about 1100 on Fig 5) ( Fig 3 here) with a total cooling in 2100 from the present estimated at about 1.2 +/-
    3) From 2100 on through the Wolf and Sporer minima equivalents with intervening highs to the Maunder Minimum equivalent which could occur from about 2600 – 2700 a further net cooling of about 0.7 degrees could occur for a total drop of 1.9 +/- degrees
    4) The time frame for the significant cooling in 2014–16 is strengthened by recent developments already seen in solar activity. With a time lag of about 12 years between the solar-driver proxy and climate, we should see the effects of the sharp drop in the Ap Index, which took place in 2004/5, in 2016–17.
    4/02/13 ( Global)
    1 Significant temperature drop at about 2016-17
    2 Possible unusual cold snap 2021-22
    3 Built in cooling trend until at least 2024
    4 Temperature Hadsst3 moving average anomaly 2035 – 0.15
    5 Temperature Hadsst3 moving average anomaly 2100 – 0.5
    6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed.
    7 By 2650 earth could possibly be back to the depths of the little ice age.
    8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling and help maintain crop yields.
    9 Warning !! There are some signs in the Livingston and Penn Solar data that a sudden drop to the Maunder Minimum Little Ice Age temperatures could be imminent – with a much more rapid and economically disruptive cooling than that forecast above which may turn out to be a best case scenario.

    How confident should one be in these predictions? The pattern method doesn’t lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigor for the uninitiated, and in relation to the IPCC climate models they are entirely misleading, because they make no allowance for the structural uncertainties in the model setup. This is where scientific judgment comes in: some people are better at pattern recognition and meaningful correlation than others. A past record of successful forecasting such as indicated above is a useful but not infallible measure. In this case I am reasonably sure – say 65/35 – for about 20 years ahead. Beyond that, certainty drops rapidly. I am sure, however, that it will prove closer to reality than anything put out by the IPCC, Met Office or the NASA group. In any case this is a Bayesian-type forecast, in that it can easily be amended on an ongoing basis as the temperature and solar data accumulate. If there is not a 0.15–0.20 drop in global SSTs by 2018–20, I would need to re-evaluate.
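
    The 12-year-lag correlation this forecast leans on is, at least, mechanically testable. A minimal sketch of a lagged cross-correlation search follows; the series here are synthetic stand-ins, not the actual neutron-count or HadSST3 records, and the function names are hypothetical:

```python
# Find the lag (in samples) at which a "driver" series best correlates
# with a "response" series. Synthetic data stands in for the solar-proxy
# and SST records discussed above.
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def best_lag(driver, response, max_lag=20):
    """Lag at which the driver best correlates with the response."""
    scores = {}
    for lag in range(max_lag + 1):
        x = driver[: len(driver) - lag]   # driver, shifted back by `lag`
        y = response[lag:]
        scores[lag] = pearson(x, y)
    return max(scores, key=scores.get)

# Synthetic check: the response is the driver delayed by 12 samples.
random.seed(0)
driver = [math.sin(t / 7.0) + random.gauss(0, 0.1) for t in range(200)]
response = [0.0] * 12 + driver[:-12]
print(best_lag(driver, response))  # 12
```

    Recovering a known lag from clean synthetic data is of course the easy part; the forecast above stands or falls on whether the real proxy records behave this cooperatively.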

  67. Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?

  68. I have had lots of experience in reservoir simulation, which adopts a similar discretised matrix-inversion-per-time-step approach. It is well known that cell size affects the outcome. Similarly, populating the cells with data (necessarily smeared) is subjective. We know for certain that coupled non-linear equations are unpredictable, so running them many times doesn’t help prediction. Finally, a resort to parameterisation of relations between variables is a pathetic admission of failure. I really don’t know why anyone believes the output.

    • ” We know for certain that coupled non-linear equations are unpredictable, so running them many times doesn’t help prediction.”

      This is a flat-out misconception used by the denialists to “prove” that no progress can be made.

      This is from a text frequently cited by denier fave Tsonis:

      “Periodic external force acting on a chaotic system can destroy chaos and as a result a periodic regime appears. This effect occurs for a relatively strong forcing as well.”
      G. V. Osipov, J. Kurths, and C. Zhou, Synchronization in oscillatory networks. Springer, 2007.

      And of course the obvious case of this is diurnal and seasonal variations. Note how the strong forcing wipes out the chaos and results in a periodic regime.

      ENSO is also clearly periodic, though it appears erratic, as it is governed by an underlying periodic forcing:
      http://contextearth.com/2014/06/25/proxy-confirmation-of-soim/

  69. Judy may be too pessimistic about GCMs and too optimistic about alternative approaches.

    “Rather, large, complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior.” Is there really any hope of studying the emergent behavior of a chaotic system like climate from a record that contains the natural variability of only a dozen decades, with good records for less than half of that?

    IMO, the biggest problem with GCMs is the parameters and the lack of candor about the uncertainty they engender. The sensitivity of most if not all models can probably be adjusted to produce TCRs similar to Otto et al by using a different set of parameters that is approximately as good as the current set. However, it is politically impossible to do so unless a modeling group can clearly show that they have improved model performance, not just arbitrarily lowered sensitivity. That’s hard to do when many possible parameter sets are equally bad/good at representing current climate.

    Instead of optimizing the ability of climate models to reproduce current climate or decadal changes in climate, why aren’t modelers focusing on their ability to reproduce seasonal changes in climate (without anomalies)? We have 50 years’ worth of changes from summer to winter and back again in each temperate hemisphere, with massive changes in (solar) forcing and feedbacks, along with 50 years’ worth of tropical “seasons” produced by the forcing associated with the eccentricity of the earth’s orbit. Which models and/or parameters do best with Pinatubo or ENSO?