Implications for climate models of their disagreement with observations

by Judith Curry

How should we interpret the growing disagreement between observations and climate model projections in the first decades of the 21st century?  What does this disagreement imply for the epistemology of climate models?

One issue that I want to raise is the implications of the disagreement between climate models and observations in the 21st century, as per Fig 11.25 from the AR5.

[Figure: IPCC AR5 Fig. 11.25 (Ed Hawkins), comparing model projections with observations]

Panel b) indicates that the IPCC views the implication to be that some climate models have a CO2 sensitivity that is too high — they lower the black vertical bar (indicating the likely range from climate models) to account for this.  And they add the ad hoc red stippled range, which has a slightly lower slope and a lowered range that is consistent with the magnitude of the current model/obs discrepancy.  The implication seems to be that the expected warming over the last decade is lost, but that future warming will continue at the expected (albeit slightly lower) pace.

The existence of disagreement between climate model predictions and observations doesn’t in itself provide any insight into why the disagreement exists, i.e. which aspect(s) of the model are inadequate, owing to the epistemic opacity of knowledge codified in complex models.

What IF?

For the sake of argument, let’s assume (following the stadium wave and related arguments) that the pause continues at least into the 2030s.

Further, it is important to judge the empirical adequacy of the models by accounting for observational noise.  If the pause continues for 20 years (a period for which none of the climate models showed a pause in the presence of greenhouse warming), the climate models will have failed a fundamental test of empirical adequacy.

Does this mean climate models are ‘falsified’?  Matt Briggs has a current post that is relevant, entitled Why Falsifiability, Though Flawed, Is Alluring.

You have it by now: if the predictions derived from a theory are probabilistic then the theory can never be falsified. This is so even if the predictions have very, very small probabilities. If the prediction (given the theory) is that X will only happen with probability ε (for those less mathematically inclined, ε is as small as you like but always > 0), and X happens, then the theory is not falsified. Period. Practically false is (as I like to say) logically equivalent to practically a virgin.

With a larger ensemble, perhaps there would be ‘some’ simulations with a 20+ year pause.
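
The Briggs point above can be made concrete with a toy Monte Carlo. The numbers below (a forced trend of 0.02 C/yr and white-noise internal variability of 0.1 C) are illustrative assumptions, not values taken from any GCM ensemble; the sketch just shows that the probability of a 20-year pause under a steady warming trend is tiny but never exactly zero, which is the ε in Briggs’ argument:

```python
import random
import statistics

# Toy Monte Carlo (illustrative only, not a real GCM ensemble):
# each "model run" is 20 years of annual global-mean anomalies with
# an assumed forced trend plus assumed internal variability.
# A "pause" here means a fitted 20-year trend <= 0.
random.seed(42)

def fitted_trend(series):
    """Ordinary least-squares slope of series against year index."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(series)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

def simulate_run(years=20, trend=0.02, sigma=0.1):
    # trend in C/yr and sigma in C are assumptions, chosen for illustration
    return [trend * t + random.gauss(0, sigma) for t in range(years)]

runs = 10_000
pauses = sum(fitted_trend(simulate_run()) <= 0 for _ in range(runs))
# With white noise this fraction is essentially zero; autocorrelated
# ("red") internal variability would raise it, but never to exactly zero.
print(f"fraction of runs with a 20-year 'pause': {pauses / runs:.4f}")
```

Under these white-noise assumptions essentially no run pauses for 20 years, yet the probability is strictly positive, so a strict falsificationist can always say the theory survived; real internal variability is autocorrelated, which makes pauses more likely than this sketch suggests.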

If the pause does indeed continue for another 2+ decades, then this arguably means that the scenario of time evolution of the predictions, on timescales of 3+ decades, has been falsified.  This then brings us back to the ‘time of emergence‘ issue, i.e. whether the climate models are fit for the purpose of transient climate predictions, rather than merely equilibrium climate sensitivity.

If the climate models are not fit for the purpose of transient climate projections, and they are not fit for the purpose of simulating or projecting regional climate variability, what are they fit for?  Estimation of equilibrium climate sensitivity?  Maybe, but nearly all signals are pointing to climate model sensitivity being systematically too high.  Well, ok, the climate models aren’t perfect, but it is argued that we are moving on a path that will make climate models fit for these purposes, as per the National Strategy for Advancing Climate Models.  Increasing model resolution, etc. is not going to improve the situation, IMO.

The argument is then made that climate models were really designed as research tools, to explore and understand climate processes.  Well, we have long since reached the point of diminishing returns from climate models in terms of actually understanding how the climate system works; we are not just limited by the deficiencies of climate models themselves, but also by the fact that the models are computationally very expensive and not user friendly.

So, why are so many resources being invested in climate models?  A very provocative paper by Shackley et al. addresses this question:

“In then addressing the question of how GCMs have come to occupy their dominant position, we argue that the development of global climate change science and global environmental ‘management’ frameworks occurs concurrently and in a mutually supportive fashion, so uniting GCMs and environmental policy developments in certain industrialised nations and international organisations. The more basic questions about what kinds of commitments to theories of knowledge underpin different models of ‘complexity’ as a normative principle of ‘good science’ are concealed in this mutual reinforcement. Additionally, a rather technocratic policy orientation to climate change may be supported by such science, even though it involves political choices which deserve to be more widely debated.”

If the discrepancy between climate model projections and observations continues to grow, will the gig be up in terms of the huge amounts of funding spent on general circulation/earth system models?

489 responses to “Implications for climate models of their disagreement with observations”

  1. Thank you, Professor Curry, for your patient persistence in getting to the bottom of the AGW disaster.

  2. Well, since culturally we are addicted to the idea that a great machine can see into the future, no, I don’t think we’ll quit throwing money into the machine’s maw.
    ================

  3. Yogi Bear says (in effect) that the future is very hard to predict!

    • In theory there is no difference between theory and practice. In practice there is. –Yogi Berra

    • “Practically false is (as I like to say) logically equivalent to practically a virgin.”

      “It depends upon what the meaning of the word ‘is’ is. If the—if he—if ‘is’ means is and never has been, that is not—that is one thing. If it means there is none, that was a completely true statement”. — Bill Clinton

  4. Basically it shows that the consensus theory of the GHG function of the climate is wrong, and that these models, which are designed around this theory, are wrong in some fundamental way.
    New theories, and models designed to implement those theories will have to be developed.
    These new theory/models will then need to be evaluated for fitness by being compared to actual measurements.

    • Almost every field of engineering uses modeling and simulation now to great benefit to society. But the models have to be validated in the real world. F1 uses CFD to simulate their cars on each track prior to going to the track, and they are very good (iirc within fractions of a sec), but they still do not get the exact lap time as their driver does on the real track.
      I worked for one of the first 3 electronics design and simulation companies when they first became a commercial venture. I’ve built hundreds of models and run thousands of different circuit simulations, showing the value to the engineers who designed those circuits. It’s all about the models. GCMs were hijacked by environmental activists, and the developers are absolutely convinced they’re right, but their results prove they are wrong. When they finally decide they need to change their theories and change their models, progress in useful models can resume. Models and simulations are good, useful things when the models are accurate, and worthless when they are not.

      • Mi Cro,

        Excellent comment and excellent analogy. There is an enormous difference between the way engineering works and the way science works. Validation is one difference; documentation of all relevant inputs, in a way that is accessible for due diligence, is another. Science does not provide the information in the way needed to justify the enormous expenditures required for GHG mitigation.

      • Peter Lang,

        Another important difference is that as an engineer, I am accountable for my work. Should something go wrong, I may need to show my due diligence to avoid an unpleasant consequence.

      • Poor equivalence. I am the guy that is characterizing the surfaces and terrains that those cars are travelling across. This is a cross between modeling natural stochastic variability and artificially introduced disturbances.

        You don’t have the right mind-set for working the problem. Nature is not a car, built to factory specifications according to your wishes.

        Uncertainty rules, but there are ways of quantifying that uncertainty.

      • You must not have looked at all of the millions of dollars of aero development that goes into an F1 car design every year, with the difference between winning and losing as small as a fraction of a second after ~100 miles, nor the fact that they have to do their design work with minimal allowed testing. They actually have started using rapid prototyping machines so engineering can make changes to their aero package (which are simulated on supercomputers running CFD software) and incorporate it on a race weekend.
        This isn’t NASCAR.

  5. Why are we talking about models when there is only one model, that of the increase in surface temperature based on increasing CO2 production that is not and has not been reined in: the infamous scenario A1FI of the IPCC, with its 4 degrees of temperature rise by 2095.
    The discrepancy between this, “the real world model” and the real world observations makes this model farcical.
    It is basically the bit of spaghetti at the top of the IPCC ensemble at the end. All the other modifications are those of the IPCC wish list of reducing emissions, which are not real because emissions are not reduced. Can Judy or someone take out all the other lines and show this bit of dishonesty and nonsense for what it is?

    • This is indeed not clear to me. If Figure 11.25 includes model predictions made on assumptions of anthropogenic CO2 emissions up to and including 2012 different from what they have actually been, it makes no sense to show the associated curves. Is that what you are saying? If it is, you are obviously right; CO2 emissions have kept increasing, and this is one of the main reasons why we care about the pause discrepancy. But it is so obvious that I hope that I am missing something. I apologise for being so ignorant and confused. Thank you for educating me.

      • To be clearer, which RCP corresponds to reality up to 2012, and how well does it match?

      • Actually the short term simulations are pretty insensitive to which RCP is used.

      • Sorry to come back on this. From what I understand of the RCPs (namely the paper in Climatic Change (2011) 109:5), they describe different scenarios for future emission of greenhouse gases. They have NOTHING to do with climate science. In making predictions (or projections) of future warming, one needs to understand the climate science AND to make hypotheses about future emissions. When we talk about the pause (as it is observed today, not as it may or may not go on), we should not mix up what we have to say with RCPs, since we know what emissions have been.

        One should then show different models that differ only in the climate science parameters, essentially the values of the feedbacks assumed for each of the mechanisms acting on the global temperature. Ideally, for each of these mechanisms, one should estimate which systematic error is attached to it (of course, it has no reason to be Gaussian; it may be two-valued, for example, if one is not sure between two different interpretations of what is observed). With proper caveats on the way to combine all these systematic uncertainties, one could hope to end up with a single line with an error bar (from what I understand, it would be an illusion to take such an error bar too seriously; the arithmetic of uncertainties used by the IPCC is a joke).

        If this is what angech means to say, and if what I say makes sense (I am very well aware that maybe it does not; I am posting to educate myself, not to argue), then I think that there is no doubt that angech is right. I understand from Judy Curry that on the short range the effect is small and that this point is academic (no pun, as you like to say). Yet I always found it very disturbing to see figures such as 11.25 with 4 RCPs times ~40 models! It is inviting the reader to do things like averaging all these lines, which is CERTAINLY NONSENSE. It is up to the person who makes the figures to do the homework and make an understandable synthesis of all this information. One can expect the reader to look at three or four curves, not more. It is very misleading to mix up uncertainties in the RCPs with uncertainties in the climate science. Please be indulgent if what I say is wrong and be kind enough to correct me. And if what I say is right, maybe some of you know of places where the figure angech and I are asking for may be found. With kind regards.

    • “Why are we talking about models when there is only one model, that of the increase in surface temperature based on increasing CO2 production that is not and has not been reined in.”

      Exactly! There is only ONE climate model and it treats as axiomatic that anthropogenic CO2 is the ‘knob’ that is driving the ‘Temperature of the Earth’. And, most importantly, that the TOE will rise rapidly and CATASTROPHICALLY unless Something is Done. And therein lies the importance of the Climate Model. Singular.

      The purpose of The Model has never had anything to do with actual, empirical climate (when the model and the data diverge, the data is adjusted) but to provide justification for politicians to assume world-wide control over all energy production. And, more importantly, energy consumption.

      Contrary to the commentary here, The Climate Model has been one of the most successful in the history of science. When it comes to climate, the science is well and truly ‘settled’.

      If you doubt me, simply go the magazine rack in any grocery store and start leafing through the magazines. ALL science related magazines treat CAGW as ‘settled science’, questioned only by illiterate nutcases of various stripe. See the latest editorial? in ‘Scientific American’ that says essentially that it is time to quit pussyfooting around and DO SOMETHING about those whack job ideologues (conservatives/Republicans, of course) who are attempting to poison the minds of our youth by suggesting that CO2 driven CAGW be taught as theory rather than fact. ‘National Geographic’, ‘Science’, ‘Popular Science’, ‘Popular Mechanics’; settled science unanimously.

      News, politics, home, cooking, hunting, fishing, cars, beauty, pick a subject. Almost every issue on any subject makes mention of CO2 driven climate change and treats it as unquestionable fact. Not theory. I get a monthly blurb from my power co-op and EVERY issue has a comment about how we can fight climate change by reducing our CO2 signature and helpful hints on how to accomplish this noble objective.

      As for schools, I think that SA needn’t worry. EVERY science class in EVERY public school teaches CO2 driven CAGW as a fact on the same level as F=MA. It just is. And woe be to the science teacher who has the temerity to question it in front of her class.

      I am with Jim Cripwell, HOPING that someone, some authoritative science body will hold up their hand and say STOP!, while knowing full well that it ain’t happenin’.

      If Climate Science were actually science, CO2 driven CAGW would have died long ago. After all, there is no actual DATA supporting it. It is not; it is politics, it is fulfilling its role beyond its originators’ wildest expectations, and the idea that the politician/green complex that invented it and rode it to fame, power, and fortune will suddenly say ‘Oh, never mind. We have examined the data and it was all a big mistake. Sorry.’ is delusional. At best.

      Jerry Pournelle often says that ‘Despair is a sin.’ Wonder if an objective examination of the actual ‘state of play’ counts. If so, Jim and I are in deep afterlife doodoo.

      Bob Ludwick

      • Thanks, Bob. At least someone read what I wrote.

      • The divergence of reality from model projection creates a problem for the narrative. It is much easier to modulate the narrative than it is to dissolve the divergence. The only key part of the narrative, human guilt, will be easy to keep viable, given that we are humans with a vast and necessary capacity for guilt.

        Anthropogenic warming in the future will be on the scale of the warming since the end of the Little Ice Age. It is simple to demonstrate net benefit from the past warming, and just as simple to project net benefit from similar future warming.

        The charlatan’s trick is to generate guilt from something that has been, and will be, net beneficial. This is where terms such as ‘hoax’ apply.

        It is also likely that it will take the perspective of distant vision, that is, from the future, to discern this great hallucination about human guilt.
        ================

      • Very well stated, and to Jim, I have read and agree with what you have said also.

        One problem in countering the CAGW argument is that CAGWers have managed to create multiple negative narratives – the most egregiously false one being that burning fossil fuels is evil and destroys the planet. Unfortunately, the fossil fuel industry has been unwittingly complicit in furthering this view, which I’ll ascribe to the “charlatan’s trick” Kim so eloquently described. The narratives have served to indoctrinate and lobotomize too many journalists, politicians, and educators.

        Until we can change the narrative in some meaningful way, the CAGWers will continue to “win” the debate.

        The Center for Industrial Progress is working to change this narrative. If you have not already heard of them, they make a very strong moral case for continuing to use fossil fuels – see

        http://industrialprogress.com/wp-content/uploads/2013/10/The-Moral-Case-for-Fossil-Fuels.pdf?inf_contact_key=cb137e018f4955e76c1c560ad449d802aed7425adcd4a307ca6b163db88c1bb3

        Can’t wait to see if/how the usual warmist suspects react to CIP.

  6. With each passing year of global temperature pause or fall, the models become nothing more than argumentum ab auctoritate (argument from authority), replacing humans with “incorruptible” and “all-knowing” computer models. As long as progressive politicians and activists have agendas to advance, more money will be thrown at the models, which will continue to be used as a social and political bludgeon, rather than a scientific tool.

    • There is a creative mechanism, and the thermodynamics, to produce false narratives. We humans will get used to this new found power, and adapt. There will be many victims in the process.
      ===============

      • Poster held by person at corner…

        “Willing to produce climate models to fit your political agenda for food”

  7. “If the discrepancy between climate model projections and observations continues to grow, will the gig be up in terms of the huge amounts of funding spent on general circulation/earth system models?”

    Probably. But there is a lot invested in them [invested in all its meanings].
    So it seems they will instead focus on regional climate.
    As you suggested a while ago.
    And could even be valuable for the public.

  8. Your final paragraph sums up the dilemma nicely. But the answer depends upon what the funding is for. If the funding is for pure science, the answer is undoubtedly yes. If the funding is to prove a point, then the answer is no, and indeed the amount of funding will only increase, as they will determine that not enough has been spent to find the answer they want.

    Which answer it is, right now, is a matter of opinion. And I think the opinions would basically be an indicator of the person’s faith in government.

    But on a second point, Matt Briggs brings up an interesting topic. If indeed the models are “probabilistic” and therefore cannot be “falsified”, therein lies the problem with this line of “science”. It is not science. It is probabilities. Now probabilities are a valid line of research in and of themselves. Pollsters do it constantly, trying to predict (with some accuracy) the outcome of elections. But they USE science as a basis of their craft. They are not DOING science, rather relying on the work of past theoreticians to validate their endeavors. Much like engineers build bridges based upon scientific principles determined by scientists in the past.

    So if the climate modelers want to be thought of as Engineers, they should be judged on that basis, and not on the science method. And while we have such marvels as the Golden Gate Bridge, we also have the Tacoma Narrows Bridge. And right now the models are as useful as the Tacoma Narrows bridge.

  9. Dr. Strangelove

    Warmists must turn to computer models because that’s the only way they can create their fantasy. Reality does not agree with the models. All you need is an eyeball to see that climate is cyclical whether decadal, multi-decadal, centennial or millennial. It’s all dominated by natural climate cycles. All climate data on all time scales show this.

    IPCC climate models are all wrong because they all assume strong positive feedbacks. Without positive feedback, doubling CO2 yields only 1 C increase in temperature. They need positive feedback to make it catastrophic. But actual satellite measurements of feedback reveal strong negative feedback of 6 W/m^2 per 1 C change in temperature. (Lindzen & Choi, 2008) (Spencer, 2010)
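
As a sanity check on the “about 1 C without feedbacks” figure in the comment above, the standard logarithmic forcing formula can be evaluated directly. The coefficient 5.35 W/m² and the Planck response of ~3.2 W/m² per K are conventional textbook values, assumptions supplied here rather than numbers taken from the comment or the cited papers:

```python
import math

# Back-of-envelope check of the "no-feedback" sensitivity figure.
#   radiative forcing for doubled CO2:  dF = 5.35 * ln(2)  W/m^2
#   Planck (no-feedback) response:      lambda_0 ~ 3.2     W/m^2 per K
dF = 5.35 * math.log(2)          # ~3.71 W/m^2
lambda_0 = 3.2                   # W/m^2/K, no-feedback restoring strength
dT_no_feedback = dF / lambda_0   # ~1.2 K for a doubling of CO2

print(f"forcing for 2xCO2:   {dF:.2f} W/m^2")
print(f"no-feedback warming: {dT_no_feedback:.2f} K")
```

The result, roughly 1.2 K per doubling, is consistent with the “only 1 C” figure quoted in the comment; the debate is entirely about the sign and size of the feedbacks multiplying that baseline.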

  10. Very biased title and presentation. The proper way to ask the question is “what is the matter with the data for not matching the model output?” We know the models are correct since the people that came up with them tell us they are correct and there is some physics in the models. Why should we expect the models to change just because they don’t match the data? This is data-centric thinking, which I am sure is probably some form of discrimination and therefore should be illegal. If it wasn’t for you “darn deniers” with your negativity pointing this out all the time, we wouldn’t be in this mess.

    • “Once again, the data is rejected by the theory.” –A theorist I know, after an empirical seminar.

  11. Judith, you write “For the sake of argument, let’s assume (following the stadium wave and related arguments) that the pause continues at least into the 2030s.”

    Surely you could also assume that the pause will turn into a steady decline in temperatures, for which there is growing evidence. I suggest that this possibility should also be explored.

    • Yogi Bear ain’t Yogi Berra! e.g. “It’s tough to make predictions, especially about the future.” –Yogi Berra

      • Yogi Bear was pretty smart, but I hear Boo Boo was the real brains of the operation…

      • @ pokerguy

        Boo Boo: “Hey Yogi, if we don’t tell the park ranger what he wants to hear, they will stop bringing the picnic baskets around!”

        Yogi: “Give it a rest Boo Boo! – and help me manipulate this tree ring data…”

    • Sorry Jim C. Wrong nesting. However, I agree that no-one knows what the future will hold… not even Yogi.

  12. JC said:

    If the discrepancy between climate model projections and observations continues to grow, will the gig be up in terms of the huge amounts of funding spent on general circulation/earth system models?

    More important from a pragmatic and policy question is when can/should we stop wasting so much money on policies like GHG mitigation, carbon pricing, and renewable energy?

  13. “The more basic questions about what kinds of commitments to theories of knowledge underpin different models of ‘complexity’ as a normative principle of ‘good science’ are concealed in this mutual reinforcement.”

    Is there an English version of this sentence (is it a sentence?)

    • Ed, my translation: activists overtook the field and it became dominated by groupthink; it stopped being science when this happened, a long time ago.

    • Classic gobbledygook! Normative principles of “good science” should forever remain concealed from rational discourse!

    • Meh, the mutual reinforcement has become destructive positive feedback, destroying the usefulness of the signal. Dial it all back to 9.9.
      ====================

    • Here is part of it, perhaps:

      “This strategy assumes that a more complex model may lay claim to be a better representation of a complex system and, hence, that it has a greater truth-content than other models.

      Thus, the complexity of model representation becomes a central normative principle in evaluating ‘good science’ in the climate research domain. It is argued that the predictive potential of the model is enhanced; the uncertainty reduced or better defined; and, as a further bonus, the policy-usefulness of the model output is strengthened relative to simpler models. It is this ‘central dogma’ – that greater complexity equals greater realism, equals greater policy-utility – which we set out to explore critically in this paper.”

    • Step 1: Locate the predicate. “Are concealed.”

      Step 2: Locate the subject root: “The more basic questions.”

      Step 3: Attach the modifiers to the predicate: “Are concealed in this mutual reinforcement” [note that “this mutual reinforcement” refers to a previous sentence].

      Interim result: “The more basic questions… are concealed… in this mutual reinforcement.” [Maybe a tad vague, but intelligible]

      Step 4: Attach the modifiers to the subject: “The more basic questions about what kinds of commitments to theories of knowledge underpin different models of ‘complexity’ as a normative principle of ‘good science’.” [At first there appears to be a lack of singular-plural agreement in the sentence, because ‘complexity’ is the only singular noun prior to the clause “as a normative principle”, and complexity is plainly not a normative principle. Charitable reading would dictate changing this clause to “as normative principles.” But a different principle of interpretation says to assume that the authors actually meant “‘complexity’ as a normative principle of good science” to be the object of the other prepositions and verbs, in which case the agreement problem goes away. Maybe the authors are making an imprecise shorthand reference to the normative principle that complex phenomena might not be analyzable in reductionist ways. That’s the one I’ll go with.]

      Step 5: Unpack the one verb and seven prepositions in the modifier clause by starting at the beginning and end of the about-of-to-of-underpin-of-as-of chain and working in from the ends, using parentheses to group clauses to which each preposition applies:
      a) “Basic questions about… (‘complexity’ as a normative principle of ‘good science’) are concealed in this mutual reinforcement.”
      b) “Basic questions about (what kinds of commitments to)… (‘complexity’ as a normative principle of ‘good science’) are concealed in this mutual reinforcement.”
      c) “Basic questions about (what kinds of commitments to (theories of))… (‘complexity’ as a normative principle of ‘good science’) are concealed in this mutual reinforcement.”
      d) “Basic questions about (what kinds of commitments to (theories of))… (different models of (‘complexity’ as a normative principle of ‘good science’)) are concealed in this mutual reinforcement.”
      e) “Basic questions about (what kinds of commitments to (theories of knowledge)) underpin (different models of (‘complexity’ as a normative principle of ‘good science’)) are concealed in this mutual reinforcement.”

      Step 6: Restate the same semantics using active voice and logical order of explanation: “Different people have different models of how ‘complexity’ should be dealt with in science. These different models presume different degrees of commitment to any given theory of knowledge, but the mutual reinforcement of climate science and global environmental ‘management’ conceals these epistemological disagreements.”

      Step 7: Charitably assume that the “what kinds of” clause was just a stutter and rewrite as:

      “Different people have different models of how ‘complexity’ should be dealt with in science. Each model is based on a different theory of knowledge, but the mutual reinforcement of climate science and global environmental ‘management’ conceals these epistemological disagreements.” [I’ll have to read the whole paper carefully to understand the hypothesized mechanism of concealment.]

      See? Easy-peasy. Or maybe it would work better translated into German.

  14. Judith, you ask “If the discrepancy between climate model projections and observations continues to grow, will the gig be up in terms of the huge amounts of funding spent on general circulation/earth system models?”

    We all know the answer to that one; NOOOOOOOO!!!!!!! There is no way The Team is ever going to give up on The Cause. They have the Royal Society, the American Physical Society, the AGU, Old Uncle Tom Cobbley and All on their side, and they are never going to give in until they are absolutely forced to do so. And this will not happen until something is DONE, instead of people like yourself and others just WRITING about it.

    I am getting tired of writing this sort of thing, and I am sure others are getting tired of reading it, but I know others are getting as frustrated as I am with the sheer lack of action on the part of the “scientific establishment”. Science is being more and more abused and nobody is DOING anything. Is there any chance that you could persuade Georgia Tech to issue a statement, with the full authority of the University, that directly contradicts the statements made by the RS, APS, AGU etc? At least then we skeptics would have one academic institution in our corner.

    So far as I can see, absolutely nothing is going to happen unless and until there is either a sudden drop in global temperatures, or a miracle happens, and someone who matters, actually DOES something.

    • Well, Jim, we know the vision has been produced from hallucinatory wisps of reality, and we know that the social dynamics sustains the scientists and the politicians in a folie a deux.

      The resolution and cure will most likely be bottom up from a cold populace. What an unfortunate way to run a civilization.
      ===================

      • “Is there any chance that you could persuade Georgia Tech to issue a statement, with the full authority of the University, that directly contradicts the statements made by the RS, APS, AGU etc? At least then we skeptics would have one academic institution in our corner. ”

        Mornin’ Jim,

        I think Judith’s probably worried enough about her climate blogging and its possible effect on her career as it is. And as nice as that might feel in the short term, isn’t it really more of the same? I’m leery of all such grand pronouncements from on high.

      • pokerguy, you write “I think Judith’s probably worried enough about her climate blogging and its possible effect on her career as it is.”

        I won’t argue. But desperate times need desperate measures. Maybe I will have more luck with Ken Haapala and Princeton.

    • I should add that over on WUWT, I suggested Ken Haapala might persuade Princeton to make a similar statement about CAGW.

    • Have you all seen the statement by the President of Harvard regarding the call to disinvest from fossil fuel companies? It was outstanding, both in content and presentation.

      My version would have been a bit shorter. As in “To the students calling for disinvestment, I can only say that a Harvard education is likely wasted on you. When you dumbasses learn something about the responsibility involved with managing an endowment fund, perhaps I will take the time to listen. Until then, I suggest you utilize your time in more important activities, such as studying.”

  15. It is time to put the models on the back burner. Move the model resources toward a new effort of natural variation research, regional prediction, and expanded observational tools. Within ten years we should be able to make seasonal weather predictions in vulnerable places (like Florida). However such a rational policy would only be tolerated in the private sector. Making such a change in government sponsored activity has no chance to see the light of day.

  16. We are involved with bioinformatic in silico modeling of protein receptor sites. At least those models are based on known, certain structures and do not involve the thousands of unknowns in climate.
    As an aside:
    Interesting to note that this site
    http://wxmaps.org/pix/sa.850.html
    only forecasts accurately 48 hours ahead. If you follow it for 7 days you will see how it changes daily. In my experience they are regularly off by two days (i.e. “rain in 7 days” usually means rain in 5 days, at least in the sub-tropics). Not bad compared to 20 years ago, when forecasting was limited to 24 hours at most.

  17. There are no disagreements with models that incorporate observational estimates of forcing functions. See this model:

    http://contextearth.com/2013/10/26/csalt-model/

    There are more analyses like this one that are able to model the pause, but the CSALT model is my take on it, and I am able to robustly defend it, unlike most skeptics, who have no alternative model to point to.

    Absent a model, the problem is with trying to pin down a specific Monte Carlo roll of the dice. Skeptics are riding the noise and make the rookie mistake of thinking that the noise is the underlying signal.
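The CSALT approach described in this comment amounts to a multiple linear regression of global temperature against a handful of forcing proxies (CO2, SOI, aerosols, TSI, etc.). A minimal sketch of that kind of regression, using synthetic stand-in series rather than the real CSALT inputs (all series and coefficients here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 140  # years

# Synthetic stand-ins for the regressors (illustrative only, not real data)
co2 = np.log(280.0 + np.linspace(0.0, 120.0, n))   # log of a CO2 ramp (ppm)
soi = rng.standard_normal(n)                       # ENSO-like variability
aero = -0.1 * np.abs(rng.standard_normal(n))       # volcanic aerosol dips
tsi = np.sin(2 * np.pi * np.arange(n) / 11.0)      # 11-year solar cycle

# Build a "temperature" from known coefficients plus observation noise
T = 2.0 * co2 + 0.05 * soi + 0.3 * aero + 0.02 * tsi \
    + 0.05 * rng.standard_normal(n)

# Multiple linear regression: T ≈ X @ beta
X = np.column_stack([np.ones(n), co2, soi, aero, tsi])
beta, *_ = np.linalg.lstsq(X, T, rcond=None)
print(beta[1:])  # recovered coefficients, near [2.0, 0.05, 0.3, 0.02]
```

On synthetic data where the true coefficients are known, least squares recovers them; the contested question in this thread is whether the real regressors are independent enough for such an attribution to be meaningful.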

    • Webby, better watch out, you’re going to tear a rotator cuff patting your back like this.
      Your model is worthless until you can make future predictions at a regional level where it matches future measurements. All you’ve done is some curve fitting to homogenized data that was generated with lossy data processing.

      • +10,000

      • Mi Cro, Your own analysis is incompetent in that you do not attribute anything to CO2 and say that it is all a recovery from regional cooling
        http://www.science20.com/virtual_worlds/blog/global_warming_really_recovery_regional_cooling-121820

        At least I am pragmatic and my own predictions fit into the interval shown in the top-level post
        http://img826.imageshack.us/img826/4494/v5c8.jpg

        Two projections are shown, one for a nominal 2.5PPM CO2 growth rate and one with a high 4PPM growth rate. This is all assuming only a TCR — and the ECS as described by land temperatures is about 50% higher than this.

        So what is your excuse for your worthless model?

      • Webby,
        “Mi Cro, Your own analysis is incompetent in that you do not attribute anything to CO2 and say that it is all a recovery from regional cooling
        http://www.science20.com/virtual_worlds/blog/global_warming_really_recovery_regional_cooling-121820

        So what is your excuse for your worthless model?”

        Lol, I don’t have a model, I’m making an observation on the actual measured data (something that would serve you well, btw). What I see is zero global trend in regional surface temperatures, which is where the fingerprint of CO2 should appear. If the only actual data we have shows no global trend, and the GHG theory of CO2 demands there be one, then the surface data falsifies the theory.

        I think that makes it worth at least as much as we’ve spent chasing the ghost of GHG theory, donations are appreciated.

      • Mi Cro,

        What’s the deal with your worthless mind’s eye model based on your gut feel?

        Your model of reality is so far removed from any conventional view of climate science that it should be considered useless. That’s why I refer to your ilk as noise chasers.

      • “Your model of reality is so far removed from any conventional view of climate science that it should be considered useless. ”

        Again, it’s not a model, it’s the average of surface measurements, without homogenizing the data first. It has no more noise in it than the data you use to compare your seasalt model to. It is the data used to make the trend you compare your model to. And I include all of the good data (now over 118 million station records), I don’t base it on air pressure from 2 locations.

        I incorporate data controls, and I don’t make up data where it’s not been measured. I produce google maps, summaries from each station that is used in each analysis, and I’m working on calculating variance and sum of squares based on the reported measurement error for each record.
        I also published my code, though my latest code which accounts for missing humidity and pressure needs to be uploaded.

        If you want to understand what is actually measured, I think I have a good product, and I’m willing to enhance it if something is lacking.
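Mi Cro mentions computing variance from the reported measurement error of each record. One standard way to combine readings that carry stated uncertainties is an inverse-variance weighted mean; a sketch with hypothetical numbers (not Mi Cro's actual method or data):

```python
import numpy as np

# Hypothetical daily readings (°C) from five stations, each with its
# reported 1-sigma measurement uncertainty (°C)
temps = np.array([14.2, 13.8, 14.6, 13.9, 14.1])
sigma = np.array([0.5, 0.3, 0.5, 0.4, 0.3])

# Inverse-variance weighting: more precise records count for more
w = 1.0 / sigma**2
mean = np.sum(w * temps) / np.sum(w)

# Standard error of the weighted mean, from the stated uncertainties
se = np.sqrt(1.0 / np.sum(w))

print(mean, se)  # ≈ 14.04 °C ± 0.17 °C
```

The same propagation gives a per-average error bar for any regional or daily mean, which is what makes comparisons against other products meaningful.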

      • Heh, Web, ‘actual data’ is ‘far removed from conventional climate science’. Now you can get the utility factor correct.
        ====================

      • Climate scientists show their objectivity by using a combined ocean-land model for presenting the temperature rise. They all realize that the ocean is sequestering about half the thermal forcing via its large heat capacity, and ridding some of the excess via latent heat of evaporation.

        Got to give them credit for not showing how much faster the land is heating — ~50%.

        My failure in the skeptic community is that I go through all the trouble of analyzing multiple datasets and ultimately come up with the same value as the Charney report from 1979, ECS=3C

        Skeptics cannot deal with it because it has to be ABCD (Anything But Carbon Dioxide).

      • “Got to give them credit for not showing how much faster the land is heating — ~50%.”
        The NCDC Global Summary of the Day data I used is mostly surface data; it doesn’t look anything like 50%.

      • Mi Cro,
        You are doing something egregiously wrong if you can’t even match land data such as BEST or CRUTEM.
        Try harder.
        Or maybe give up. Some don’t have the knack for this kind of analysis.

      • Why would I do that, they’re wrong.

      • Well, maybe that’s a little harsh, I am at least measuring something completely different than they do.

      • Mi Cro,
        Sure, I can also measure something that has nothing to do with a phenomenon and attribute all sorts of explanations to why it does or doesn’t agree.

        But that would be a waste of time.

      • Hmm, day-over-day surface station temperature averages don’t have anything to do with detecting the effect of CO2 on surface temperature? That might explain why you like your seasalt model so much.

      • Steven Mosher

        webby

        waste no time with this guy

        “This is where the temperature data would be homogenized. I feel that since temperatures are not linear spatially and the sample size changes so much over time, homogenizing temperature data is basically making up data that doesn’t exist. I understand some might say not doing this creates a bias where the data is over sampled, I feel making up data is worse than bias.”

        so bad it’s not even wrong.

      • Mosh, I am absolutely convinced that the spatial homogenization done through statistical interpolation, or kriging, is the correct approach. I know BEST does this and the fact that it does pick up the fluctuation terms that matches the SOI variations makes it a very reliable measure.

        If Mi Cro is off in his own dream world of inventing new measures, I guess that is his choice.
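Full kriging, as used by BEST, weights nearby stations according to a fitted variogram. As a much simpler stand-in for the same idea — estimating a value at an unsampled point from distance-weighted neighbours — here is inverse-distance weighting with hypothetical stations:

```python
import numpy as np

# Hypothetical station coordinates (x, y) and temperature anomalies (°C)
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
anom = np.array([0.2, 0.4, 0.1, 0.5])

def idw(point, stations, values, power=2.0):
    """Inverse-distance-weighted estimate of the field at `point`."""
    d = np.linalg.norm(stations - point, axis=1)
    if np.any(d == 0.0):              # exact hit on a station
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# At the cell centre all four stations are equidistant, so the
# estimate reduces to the plain mean of their anomalies (≈ 0.3)
print(idw(np.array([0.5, 0.5]), stations, anom))
```

Kriging differs in that the weights come from the data's estimated spatial covariance structure rather than a fixed power law, which is also what lets it report an interpolation uncertainty at each point.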

      • “If Mi Cro is off in his own dream world of inventing new measures”

        Lol, it’s not new measures, it’s an average of the actual measurements. I guess that’s a new concept in climatology: actually using your measured data.

        The problem with what I calculate is that it proves this whole farce wrong, and you just can’t have that, can you?

      • Mi Cro hasn’t heard the absurd fiction of McKitrick’s T-Rex uncertainty monster.

        Guess who always gets the last laugh on these hopeless crusades concerned with proving science wrong … on the most mundane aspect?

      • Webby, I must have touched a nerve, me being a clueless crusader and all.

        So, you’re saying you oppose my creating averages of day-over-day differences of actual measured minimum and maximum temperature records, and feel that looking at how much temps go up today, and then fall tonight, has no place in determining whether CO2 reduces nightly cooling (since it has to affect nightly cooling if it affects anything).

        Do I have that right?

    • WebHubTelescope
      Absent a model, the problem is with trying to pin down a specific Monte Carlo roll of the dice. Skeptics are riding the noise and make the rookie mistake of thinking that the noise is the underlying signal.

      No, that’s just you trying to build strawmen. Far from rookie, it’s admirably deceptive.

      • No Gail, you have to bring something to the table. What exactly are you promoting? Awfully deceptive of you to make insinuations when all I do is present models that anyone can work out for themselves.
        You, OTOH, got nothing. You’re not even a player.

      • Chief Hydrologist

        ‘Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that
        could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability.
        To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed.’

        Webby has repeated this nonsense some 600 times in a week. It is a copy of a methodology that is a dead end for either attribution or prediction.

        Here’s is a much better model from real scientists based on real theory.

        http://s1114.photobucket.com/user/Chief_Hydrologist/media/rc_fig1_zpsf24786ae.jpg.html?sort=3&o=26

        The rate of recent warming is about 0.1 degrees C/decade – at least half of that is natural variation. Non warming – or cooling – over the next decades is quite likely.

        It is very easy to misuse this simple method of scaling data to temperature by ignoring decadal – or even the true signal of centennial to millennial variability – and by not understanding the source and nature of variability in a wild climate.

        Webby’s nonsense is misleading at best – ignorant and tendentious more likely.

      • Little Chief Big Man blowing smoke. He does this because he can’t comprehend the most basic scientific analysis and so runs off to quote-mine.

        So what else is new?

      • Chief Hydrologist

        I haven’t looked at your so-called model – but it is not an original idea. It scales temperature to data series and adds them together to get a synthetic series. I have described this in one of the earlier infestations of this nonsense.

        e.g. http://pubs.giss.nasa.gov/abs/le02300a.html

        One thing that we can depend on is that you do it badly and with appalling presentation, repeat it endlessly and prattle and preen about it.

        Your one complaint about my quoting real climate science is to call it quote mining.

        You suck up to people like Appell – who amusingly calls you for the utter incompetent that you are – and complain and whine endlessly in tribally motivated trivialities. You are not really a balanced individual, are you?

      • That’s great that Lean and Rind have done this kind of modeling. It is very effective in furthering our understanding of climate change as it removes the small contributions of natural fluctuations.

        What is left is the underlying trend.

        Chief is on board with this … Until he decides it is against his agenda, and then will moan again.

        Watch what happens.

      • Chief Hydrologist

        “Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

        Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.”

        Here I am ‘quote mining’ NASA again? Excluding the extreme ENSO fluctuations in 1976/1977 and 1998/2001, as seen at realclimate, the residual warming is about 0.2 degrees C. How much of this was natural? The data shows 0.7 W/m2 cooling in IR and 2.1 W/m2 warming in SW. This exceeds the change in forcing from greenhouse gases (~0.6 W/m2) by a considerable amount. The warming was predominantly natural variability in cloud cover.

        ‘In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’
        http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-4-4-1.html

        Low frequency climate variability is very real and has substantial impacts on TOA flux. But a statistical scaled temperature time series cannot distinguish between these co-varying factors. The progressive denialism of space cadets like webby is a wonder to behold.

      • “Low frequency climate variability is very real and has substantial impacts on TOA flux. “

        You made a mistake there Procto. It is obviously the case that low-frequency climate variability is real but ultimately it has little impact on the trend resulting from a relentless GHG forcing function.

        I am just showing how puny that underlying climate variability is in the greater context.

      • Chief Hydrologist

        I had thought that there was some hope of progress for space cadets – obviously not the case. Progressive denialism is a strong force.

        It is obvious that ENSO influences temperature. The original complaint was that 1998 was a super El Nino. So subtract it from the record – it is all ENSO. At the same time subtract the La Nina/El Nino transition at the 1976/77 ‘Great Pacific Climate Shift’.

        What is left is 0.2 degrees C warming. At least half of this is the IPO. So what we have is some pissant warming 0.05 degrees C/decade due to greenhouse gases.

        Yet we still get idiots arguing that James Hansen’s 0.6 degree lower bound is remotely accurate. And fools like you who still fail to make the obvious connections staring you in the face.

        Sceptics can’t be right after all – that would be a reversal of the natural order of the universe.

      • Web
        You clearly tried to build a strawman. And when I pointed this out, you ducked answering that, trying to hide this ducking by changing the subject. Devious, as ever.

      • Gail,
        I only hide behind my work, which beats anything you can come up with.

        I guess building an interactive model for all to use is what you consider “devious”.

      • Chief Hydrologist

        Webby never ever addresses the real issue. The decadal variations co-vary with greenhouse gases. Temperature series alone won’t say which is which. It is like having three unknowns in two equations. An impossibility.

        The TOA radiant flux data adds the necessary information – it suggests that the dominant influence in recent warming was natural variability.

        Climate is not random – it is not noise – everything has a proximate cause. Webby’s problem is that he lacks intellectual depth and breadth and leaps to unwarranted conclusions. There is something else in there as well. He obsequiously sucks up to people like David Appell – who calls him for the incompetent he is – and behaves like a trained attack gerbil otherwise. Not a balanced individual.

      • Web
        My sole point here , which you keep ducking, is that you tried to strawman skeptics with your comment “skeptics are riding the noise and make the rookie mistake of thinking that the noise is the underlying signal”.

        Your latest issue-ducking response was “I only hide behind my work”, and the others were of a similar ilk, well wide of the point at hand.

        Like most consensus types, just constitutionally wired to malpractice, and to never, ever admit it, eh?

      • BFJ Cricklewood

        Chief Hydrologist | October 31, 2013 at 4:05 pm |
        The TOA radiant flux data ….. suggests that the dominant influence in recent warming was natural variability.

        Meaning : warming and CO2 *not* moving together ?

      • Gail, ya got nothing, except for WUWT, which is less than nothing.

      • Web
        Here’s how our conversation has gone:
        * I called you on launching an egregious strawman.
        * In response, rather than admit it, you repeatedly try and change the subject.
        That makes you an unrepentant fraud.

      • No strawman, deniers riding the noise 24×7.

      • Web
        Your dogged question-ducking over, the point is finally addressed. Which makes you a repentant fraud now. Ok, progress.

        But your denial of strawmanning is wrong. To wit : “Skeptics … think .. the noise is the underlying signal”.

        All skeptics? Some? A single one? Who?

      • Chief Hydrologist

        It is called attribution. Let’s accept webby’s decadal natural variability of 0.1 degrees C. – exclude the ENSO transitions between states in 1976/77 and 1998 – and we get a global warming trend of 0.05 degrees C/decade over 2 decades in the late 20th century that might be greenhouse gases.

        http://s1114.photobucket.com/user/Chief_Hydrologist/media/rc_fig1_zpsf24786ae.jpg.html?sort=3&o=26

        It is so obvious even RealClimate blogs it.

        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        The future has also been discussed at realclimate – the overwhelming probability of the pause continuing for another decade to three.

        Webby is a sad little man fighting a rear guard action against progress in climate science. He simply cannot process even the simplest attribution – it is a fairly common problem with progressive deniers.

      • The Chef has never shown any propensity to do any kind of detailed analysis. All he does is cherry-pick and quote-mine.

        He can’t find anything wrong with the CSALT model, which removes the noise that Team Denier clings to like a warm blanket.

      • Chief Hydrologist

        Something wrong with the number webster? By all means – come up with a number yourself – it is not difficult but the silence on actual numbers for attribution and the obligatory BS is tedious in the extreme.

        I have explained the core problem in your silly little – derivative – idea several times. It is neither a new nor a particularly interesting method.

        The essential problem is collinearity – in the period of interest, from 1979 to 1997, decadal variability co-varies with CO2. Attribution – temperature scaling in your idiotic case – is thus arbitrary without further information. Too many unknowns in too few equations.

        It is like talking to a goldfish.

      • The Chef cannot bear to do any analysis that involves him getting his hands dirty.

        All he does is make up stuff like this:

        “The future has also been discussed at realclimate – the overwhelming probability of the pause continuing for another decade to three. “

        Oh yeah? I am certain that a top-level post at RealClimate.org never said that the pause could continue for “another decade to three.”

        Relentless lying is all we see from the Chef.

    • “Skeptics are riding the noise and make the rookie mistake of thinking that the noise is the underlying signal.”

      I think skeptics in the main, are arguing the two are at this point indistinguishable.

      • Yes, that’s why the skeptics are incompetent.

        Consider the fact that engineers can actually extract the signal from something as faint as the transmissions of the Voyager spacecraft.

        This has a lot to do with having a model of the underlying actual signal and being able to remove the noise that doesn’t obey that model.

        Same goes for the AGW signal and the way that we can strip away the noise that is naturally present, revealing the unnatural warming trend.
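The Voyager analogy here is about model-based estimation: if the form of the signal is known in advance, it can be recovered even when the noise is several times larger, with uncertainty shrinking as the record lengthens. A toy sketch with a weak linear trend buried in heavy noise (synthetic numbers, not climate data):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(400.0)

# A weak trend (0.01 units/step) buried in noise 300x the per-step slope
true_slope = 0.01
series = true_slope * t + 3.0 * rng.standard_normal(t.size)

# Knowing the signal's form (a straight line), least squares recovers it
A = np.column_stack([np.ones(t.size), t])
(intercept, slope), *_ = np.linalg.lstsq(A, series, rcond=None)
print(slope)  # close to 0.01 despite the heavy noise
```

The thread's dispute is precisely over whether the assumed signal form (a relentless GHG-forced trend) is the right model; a fit of this kind cannot settle that by itself.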

      • Lol, and he’s funny too!

      • Given the track record, NASA did very well with Voyager. Given the track record, alarmists still cannot find a signal in the noise for climate science.

        Given that competence is related to actual results, not projections, and that actual results show climate models have consistently failed, it seems that the incompetent ones are not the skeptics that keep pointing out the failure of the climate models (no matter what success you would LIKE to compare them to), but the alarmists.

        Even a broken clock is correct twice a day, and that is a better track record than climate models.

      • Web circles for a landing. And circles. And circles.
        ============

      • David Springer

        Speaking of Nasa engineering and climate research…

        http://www.wired.com/science/discoveries/news/1999/09/31631

        A $100M Mars mission, the Mars Climate Orbiter, burned up in the atmosphere because one engineering team was using metric units and the other was using English units. Instead of parking in an orbit of 150 kilometers it went down to 50 kilometers, where it quickly incinerated itself.

        So Webby, engineers who aren’t employed by the government have independently developed test and verification plans to establish that what they design meets the predetermined specifications. Even in that case, the best intentions of mice and men and climate modelers oft go astray. Where may I review the test plan for your model? LOL That’s a joke son, just like your “model”, which is simply fed historical proxy data for global average temperature and then, unsurprisingly, spits out a reasonable replica of global average temperature from the proxy data. Where’s the revelation in that? You invite mockery such as Doc Martyn saying he can predict your body temperature based upon your pulse rate. I can do the same thing based upon your blood pressure, both of which are merely proxies to establish you’re not a dead man at room temperature. ;-)

      • “and circles…”

        Nice one, Kim. Another gem. I can’t understand how intelligent people with (purportedly, anyway) fine analytical minds cannot see how circular their reasoning is.

      • SpringyBoy has got test plans for his flying spaghetti monsters that are intelligently designing the universe.

      • Not circular reasoning. The components in the CSALT model are all orthogonal energy terms which can be variationally analyzed to minimize the Gibbs free energy.

        If you have another orthogonal energy term, I can add it to the mix, but in the meantime, you can go back to your pointlessness.

      • David L. Hagen

        WebHubTelescope
        Please clarify how you find TSI and LOD to be orthogonal. I understood LOD to vary with winds, which vary with the equator-to-pole temperature gradient, which varies with global temperature, e.g., via TSI, log CO2, clouds, and aerosols.

      • David L. Hagen

        WebHubTelescope
        Re LOD being orthogonal to TSI, CO2
        See Mazzarella A., A. Giuliacci and N. Scafetta, 2013. Quantifying the Multivariate ENSO Index (MEI) coupling to CO2 concentration and to the length of day variations. Theoretical and Applied Climatology 111, 601-607. DOI: 10.1007/s00704-012-0696-9. PDF

      • Chief Hydrologist

        ‘Not circular reasoning. The components in the CSALT model are all orthogonal energy terms which can be variationally analyzed to minimize the Gibbs free energy.’

        A pet peeve is scientific sounding babble that obfuscates rather than reveals.

        The technique he keeps babbling about uses SOI and other factors to scale to a temperature using multiple regression. It is a statistical, empirical method – and has nothing to do with fundamental physics.

      • Chief can’t even follow simple thermodynamics.

        Variational approaches to thermodynamics became popular decades ago.

      • Chief Hydrologist

        Other than that variational principles are utterly irrelevant? Do you ever have any success in deluding people into believing that you actually know what you are talking about?

        You take a series and assume a temperature scaled to the series. It is statistical and empirical, and you are a liar and a fool.


      • David L. Hagen | October 30, 2013 at 2:47 pm |

        WebHubTelescope
        Please clarify how you find TSI and LOD to be orthogonal. I understood LOD to vary with winds, which vary with the equator-to-pole temperature gradient, which varies with global temperature, e.g., via TSI, log CO2, clouds, and aerosols.

        They are obviously orthogonal. TSI has a quasi-period of 11 years. LOD has a much longer term variation, unless one looks at the faster seasonal variation riding on top of it.

        Orthogonal means that one component cannot be easily decomposed into a linear combination of the others. And that is certainly the case here.
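The orthogonality claim being argued here can be checked numerically: two series are (linearly) orthogonal over a record if their sample correlation is near zero. A sketch using an 11-year cycle as a TSI stand-in and a slow drift as an LOD stand-in (synthetic series, not the real data either commenter uses):

```python
import numpy as np

t = np.arange(130.0)                 # ~130 years of annual values

tsi = np.sin(2 * np.pi * t / 11.0)   # 11-year solar-cycle stand-in
lod = 0.002 * (t - t.mean())**2      # slow multidecadal-drift stand-in

# Near-zero sample correlation means neither series can be written as
# a linear multiple of the other over this record
r = np.corrcoef(tsi, lod)[0, 1]
print(r)
```

A near-zero correlation only rules out linear dependence over the sampled record; it says nothing about physical independence, which is the Chief's counter-argument.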


      • Chief Hydrologist | October 30, 2013 at 8:04 pm |

        Chief Proctologist is suffering from blockage. He is losing his chaotic wiggles in the noise.

      • Re: Scafetta

        Scafetta and company miss the obvious angle that the seasonal ripple and variation in atmospheric [CO2] are due to outgassing of CO2 that follows the ocean and biosphere temperature.

        They try to prove that:
        ENSO => [CO2]

        Where in reality it is:
        Yearly solar insolation + ENSO + FF => dT => d[CO2]

        where FF is other forcing functions, which can include [CO2], making it a positive feedback.

        In the end they say this:

        “Equally, an increase of sea surface temperature causes a smaller solubility of CO2 in the ocean and so a higher concentration in the atmosphere.”

        So why didn’t they just correlate against SST in the first place?

        They do get the sign right on the LOD though.

        Scafetta has to work this ground carefully, because he can’t give away that the obvious warming culprit is CO2. That would not be good for his planetary gravitational theory that he goes on about elsewhere.
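The proposed causal chain in this comment (emissions plus temperature-driven outgassing => d[CO2] => [CO2]) can be written as a toy integration; the coefficients below are invented for illustration, not fitted to any data:

```python
import numpy as np

months = np.arange(360)                # 30 years of monthly steps
t_years = months / 12.0

# Hypothetical temperature: slow warming plus a seasonal cycle (°C)
temp = 0.02 * t_years + 0.5 * np.sin(2 * np.pi * t_years)

# d[CO2]/dt = emissions + k * temperature (all coefficients illustrative)
emissions = 2.0 / 12.0                 # ppm per month from fossil fuels
k = 0.5                                # ppm per month per °C of outgassing
dco2 = emissions + k * temp
co2 = 280.0 + np.cumsum(dco2)          # integrate to get [CO2] in ppm

print(co2[-1])                         # ends near 393.85 ppm
```

Integrating a seasonal temperature cycle this way produces a seasonal ripple in CO2 riding on the emissions ramp, which is the qualitative point the comment is making about the direction of causality.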

      • Chief Hydrologist

        Orthogonality is not a necessary condition – apart from volcanoes, you would be hard pressed to show that the parameters are truly independent. TSI and ENSO for instance are intimately connected. But it is not necessary for this to be the case in this simple – but misguided – method. It is just another meaningless interjection of absurdly misused jargon which you now feel compelled to defend.

        Open your mouth and put your foot in it again.

      • ChiefProcto said

        “TSI and ENSO for instance are intimately connected”

        Well then the Chief should be able to construct an 11-year period (TSI) out of an unpredictable red-noise component (ENSO).

        Grade: F in Math

      • Chief Hydrologist

        Don’t have to – it has been done hundreds of times.

        e.g. http://www.sciencedaily.com/releases/2009/07/090716113358.htm

        There are coherent patterns to the global system that are driven by a combination of bottom up solar TSI forcing and top down modulation of solar UV/ozone interactions in the stratosphere.

        Webby has a depth of ignorance combined with arrogance leading to egregious error and an inability to self correct. Unimpressive at best – hopelessly muddled.

      • David Springer

        WebHubTelescope (@whut) | October 30, 2013 at 1:45 pm |

        “SpringyBoy has got test plans for his flying spaghetti monsters that are intelligently designing the universe.”

        Perfect. A lame attempt at insult and nothing more. Exactly what I expected.

      • Chief, that doesn’t prove that the effects are not orthogonal.
        http://contextearth.com/2013/10/30/detailed-analysis-of-csalt-model/
        read it and weep, the model works

      • Chief Hydrologist

        ENSO and solar activity are not independent. Webby depends on confusing the issue with pseudo scientific language and relies on few people understanding the terms he misuses. A perpetual demonstration of bad faith.

        There is really nothing that could induce me to take another look at any of his incompetent efforts – or to give a rat’s arse about any of it. I have described his approach to this in detail. It is a derivative method with fundamental theoretical shortcomings. That he can neither see nor acknowledge the shortcomings of this simple linear method is the problem.

      • The Chef is blind.
        He can’t see that a model that uses no direct measures of temperature is able to estimate temperature that accurately over the past 130+ years.

        You can always call it a heuristic, but then you would have to refer to it as a really good heuristic.

        Or you can be a scientist about it and actually try to understand why it works as well as it does.

        But the latter does not fit your agenda.

      • Chief Hydrologist

        There are a few phenomena that account for most variability in climate.

        e.g. – http://s1114.photobucket.com/user/Chief_Hydrologist/media/lean_2010.gif.html?sort=3&o=129

        http://pubs.giss.nasa.gov/abs/le02300a.html

        This is the same as webby’s method. It relies on scaling non-temperature series to create a synthetic temperature series.

        Of course it fits – until the natural decadal variability switches direction and predictions diverge from reality. As Lean and Rind’s predictions are. The alternative model proposed by Kyle Swanson at realclimate is much more convincing – warming not happening for decades yet.

        What there is, is real and inclusive science by real scientists, as opposed to Webby substituting his own reality.

      • pokerguy > I think skeptics in the main, are arguing the [signal and the noise] are at this point indistinguishable.

        Web’s “response” was that skeptics are incompetent because they don’t realize it is possible to distinguish them.

        Another egregious strawman from the master.

  18. Global Temperature has not been constant for 10,000 years.
    Snowfall has not been constant for 10,000 years.

    Both went up and down, but according to ice core data, they stayed inside tight bounds. There were warm periods with more snowfall and colder periods with less snowfall.

    Every time it got warm, it snowed more and after the snowfall it got colder.

    Every time it got cold, it snowed less and afterwards it got warmer.

    The data does show this did happen.

    Temperature never went out of bounds in ten thousand years and it is not out of bounds now and it is not headed out of bounds now.

    Only consensus climate theory hockey sticks have had constant temperature for 10,000 years.

    Real temperature goes up and melts Polar Sea Ice and then goes down and freezes the Polar Seas, and the cycle repeats.

    Same thing happens in Arctic and Antarctic, but the Arctic Sea Ice is the Primary Thermostat for Earth due to warm water that now always flows under the ice.

    The oceans get warm and melt Polar Sea Ice and it snows more. The oceans get cold and freeze the Polar oceans and it snows less.

    The Polar ice cycles have tightened the bounds of Earth Temperature.

    Earth now has a set point. The temperature that Polar Sea Ice Freezes and Thaws.

    Earth has a powerful forcing to keep temperature near the set point. That is Albedo.

    IR still does most of the cooling for Earth but it does not have a set point. Look at the older data. Temperature had much wider bounds when there were no Polar Ice Cycles.

    It is like the AC in my house; the cooling comes on when needed and it doesn’t take a lot.
    The cooling goes off when not needed and then the house gets warmer.

    Roman Warm Period, cold period, Medieval Warm Period, Little Ice Age, Modern Warm Period, next cold period. The Cycles have and will repeat.

    Do you have an alternate theory that provides a “SET POINT” with forcing that has always worked for ten thousand years with the same bounds?

    EARTH TEMPERATURE HAS A SET POINT. THE DATA SHOWS THAT.
    ICE AND WATER HAVE A SET POINT. THERE IS NO OTHER THERMOSTAT ON EARTH THAT HAS THAT, AND WATER IS ABUNDANT!

    • When Climate Theory is fixed, when Climate Models are programmed with correct Theory, the Model output will stay in bounds just like Real Data. It snows more when Polar Waters are warm and wet. It snows less when Polar Waters are cold and frozen.
      This is in nature but it is not in Climate Theory or Models.
      They take away ice when Earth gets warmer.
      Earth adds the ice when Earth is warmer.

      Just look at actual data!

    • When Earth is cold and water is frozen, it doesn’t snow enough to replace the ice that melts every summer, and ice on Earth retreats and Earth warms.

      After Earth gets warm and water is no longer frozen, it does snow more than enough to replace the ice that melts every summer and then ice on Earth advances and Earth cools.

      This cycle repeats.

      This cycle has repeated and kept temperature in the same bounds for ten thousand years.

      This cycle will repeat in the future.

      Put this in the climate models and they will work right and stay in bounds and the CO2 can be used for its main purpose, make green things grow better and use less water.

  19. The fact that the current set of GCMs poorly represent how the climate system will act over the next several decades does not mean that folks were wrong completely about CO2 impacting the climate system. It probably won’t matter to most of the readers here, but what will be the impacts in about 2050? Societies have time to prepare, but will it be used wisely? I have little doubt- most won’t.

    • Yes, the fact that Model Output does not match real data does mean they were completely wrong!

      That is how you measure right and wrong.

      • LOL–perhaps that is how you measure right and wrong. I agree that the current set of models poorly represent the actual system. It does not mean that CO2 will not impact temperatures over the long term. It means that the system was more complex than was modelled.

      • It does not mean the system is more complex than was modeled.
        It does mean the system is different from what was modeled.
        It does most likely mean something very basic was done wrong.
        They don’t do snowfall and Albedo correctly.

      • Steven Mosher

        “They don’t do snowfall and Albedo correctly.”

        really.

        Perhaps you can tell us how the observed albedo compares with modelled albedo.

        Go ahead I’ll wait

    • In our rocket business, the rocket had to go where we said it would go or we were completely wrong. We were most often right, to the moon and back.

      • And when the rocket veered off course, they blew it up!

      • So you had established acceptable performance criteria for your rocket model prior to use? Wow, what a novel idea! What were the equivalent criteria for each of the GCMs that have been developed? Not quite the same, is it?

      • There is a nice little social mechanism at work here; when the climate models make the culture veer off course, the culture blows itself up.

        Heh, the pieces are still raining down.
        ================

      • Well, I wrote that for ‘culture’ to be human culture generally, but it really works best in the specific, the ‘culture of climate modelers’. It’s been a devastating and expensive loss, but I imagine we’ll pick up the pieces and carry on in some fashion. Thank heavens there is no need to artificially raise the price of energy; that would be morbid, dreadfully so, but not mortal.
        ==============

      • Steven Mosher

        wrong.

        You can also build a rocket to go where it has to go within prescribed parameters, and when it strays outside those parameters you have methods for bringing it back within parameters.

        The best example would be a terrain-following CM.

        tell me, have you ever done mission planning for an autonomous attack platform?

      • Three cheers for the Kalman filter?

      • David Springer

        Steven Mosher | October 30, 2013 at 6:11 pm |

        “tell me, have you ever done mission planning for an autonomous attack platform?”

        Sure. The computer opponents in video games are AIs. The level of realism in their operating parameters and environment in many of them evidently exceeds climate models too.

      • you can also build a rocket to go where it has to go within prescribe parameters, and when it strays outside those parameters you have methods for bring it back within parameters.

        Actually, this is more incorrect than the original statement. The amount of fuel you have to correct for any error is extremely small; if you aren’t pretty much close to where you thought you were going to be, it’s highly unlikely you have enough fuel to correct the problem.

        And the terrain example isn’t good either, since it doesn’t matter how great the terrain following software is if the original path it was given was bad to begin with.

    • “does not mean that folks were wrong completely about CO2 impacting the climate system.”
      But at this point there’s also no evidence it’s right, either. This is the point!

      • I do not disagree at all that the current set of models is unreliable and unsuitable for the formation of short-term government policy.

        Now if you were planning to build a large dam that was planned to last for 150 years- might you take into account the potential of greater swings of weather in the future than had been experienced in the past?

      • “Now if you were planning to build a large dam that was planned to last for 150 years- might you take into account the potential of greater swings of weather in the future than had been experienced in the past?”

        Based on what? The normal scientific method says you use past measurements. If there is a problem with this it’s that our measure of the past is of poor quality, which also means that building models based on this data is already a disadvantage.

      • Steven Mosher

        “Based on what? The normal scientific method says you use past measurements. ”

        It says no such thing. There is no such thing as the normal scientific method and no requirement to use past data.

        you’re talking out of your ass

      • Sure. Reminds me of Ace talking out his ass!
        Most times when you need to quantify a future range of values, you use measurements, which usually were made in the past. Even when you have a well-defined theory, say knowing that a rock thrown into the sky will come back down, sometimes you even measure where the rock lands.

      • Steven Mosher: It says no such thing. There is no such thing as the normal scientific method and no requirement to use past data.

        Actually, anytime you use a “validated model or theory” you are using past data.

    • Leonard Weinstein

      Rob, since we are likely close to the end of the present interglacial (the Holocene), it is more likely that cooling to a new glacial period is coming, although it may still be a fairly long time off (or not). Worrying about 137 years from now as being too hot is a joke. It is just as likely to be much colder, and heading down. Cold is much worse than hot (lost crops, freezing populace, etc.). Why do you think people go to Florida, etc., for retirement? With warming (if any), crops would grow better worldwide, especially with higher CO2 levels. Also, do you have any idea how much technical progress has been made in the last 137 years? I expect that even if it warmed, it would not be a problem 137 years from now. Your narrow view is misplaced.

  20. Isn’t this why we have “expert opinion” and “multiple lines of investigation”.
    The tools are never going to be perfect but do we really need perfection to inform policy?
    Judith do you think climate science should stop trying to inform policy?

    • Not at all. Read my presentation for other things that I think we should be doing.

    • HR, you write “Judith do you think climate science should stop trying to inform policy?”

      I agree with Judith’s answer. But you need to define “climate science”. Does this include both scientists who agree with CAGW, AND those who know it is a hoax? If it does NOT include BOTH sides of the discussion, then no, climate science should stop trying to inform policy.

      • Actually, tomorrow’s post has a presentation i am giving on this topic, stay tuned

      • Heh, I think that climate science should stop misinforming policy. There, that’s better.
        =========

      • It was just a genuine question. Our knowledge of the science of climate seems incomplete. That doesn’t mean that it’s worthless or trivial or dull; in fact, having now spent some time looking into the science, I can get truly excited and impressed by some of the insights. It’s just that, given the uncertainty and caveats that Judith seems to emphasize, anything that could be gleaned from climate science could be spun to fit any policy, or at least delay policy that doesn’t fit a particular outlook. Maybe I’m just being cynical about the political process.

  21. So, why are so many resources being invested in climate models?

    Obviously because they offer the most politically reliable ‘answers’.

  22. “Isn’t this why we have ‘expert opinion’?”

    It’s a curious thing, how often so-called experts are proven wrong.

  23. All I can say is that the “pause” reflects short-term trends and is highly uncertain.

    It is highly probable that the 10-, 20- and 30-year trends will continue and the actual temperatures will remain within the 95% range 95% of the time.

    Certainly natural variability is on the order of +/- 0.3 C, so expecting short term predictions better than that is unrealistic.

    If models can’t do what they can’t do, then that doesn’t falsify anything.

    Predicting climate on the short term is a fool’s errand.

    When someone predicts the “pause” to continue for 10 to 30 years, just what are they saying? Can you give a number to your prediction?

    • No globally averaged temperature increase, until the mid 2030’s.

      • When you have patterns that persist for ten thousand years, you don’t expect them to change just because you have a super computer. People believe the numbers that come out of the computer and they quit thinking.

        I started working at NASA when I was a 19-year-old co-op.

        I learned from people who thought first and then ran computer models.

        If the model output did not match what they understood, the understanding was fixed and then the models were fixed.

        We never went back and fixed data to match model output.

        We did not go to the moon and come back with any hockey sticks.

        Someone did hit a golf ball on the moon. I hope we find it some day.

      • People will tell you that what happened in the past will never happen again.

        Then they tell you that what happens next is something that has never happened before.

        These people are almost always wrong.

        People who predict the future best are people who study the past.

      • I’ll say GISS global land ocean breaks 1.00 before the mid 2030’s.

        Want to bet a couple juicy cheeseburgers?

      • 21st Century to date

        It’s going to take a lot of powerful La Nina events to hold that line as the SAT trend since the 2011 nuttkicker La Nina is going up.

        September GISS is in, and 2013 is climbing the ladder.

      • Yes GISS for september is in, what do the skeptics think of the number?

        1. GISS is obviously adjusted to keep the hoax alive.
        2. It is all Urban Heat Island effect.
        3. Must have been a low month for cosmic rays.
        4. We still don’t understand clouds.
        5. Shush, the damn red sox are on.

      • Bob Droege,

        So 17 years is short term, but September puts skeptics in their place?

        Ted

      • Chief Hydrologist

        ‘ This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’ http://onlinelibrary.wiley.com/doi/10.1029/2005GL025052/abstract

        These are not cyclical changes – but chaotic climate shifts in the climate state space. It seemingly has a periodicity of some 20 to 40 years. It is the basis for expecting that the current cool state will persist for another 10 to 30 years. The ‘prediction’ is based on a robust feature of global climate. A cool PDO and intense and frequent La Nina are quite likely over decades.

      • Ted Clayton,
        17 years is 17 years, and what can we say about the temperature trends over the last 17 years?

        Given that I have defined a “pause” as being a trend of less than 0.05 C per decade: 6 of 7 global data sets (GISS, HadCrut4, NOAA land, NOAA land and sea, BEST and UAH) say no “pause”; only RSS says “pause”, and that at 0.03 C per decade.

        Maybe it is self-righteous to define a “pause”, but it seemed no one else was willing to do that.

        The warmest September on record is only one straw for the Camel’s back.

        The warmest month for the southern hemisphere is another.

        They do add up after a while.
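        Bob’s working definition of a “pause” (a linear trend of less than 0.05 C per decade) is straightforward to check with an ordinary least-squares fit. A minimal sketch, using made-up illustrative anomalies rather than any of the real data sets named above:

```python
import numpy as np

def decadal_trend(years, anomalies):
    """Ordinary least-squares slope of a temperature series, in deg C per decade."""
    slope_per_year, _intercept = np.polyfit(years, anomalies, 1)
    return slope_per_year * 10.0

# Illustrative 17-year series: a small underlying trend plus observational noise
rng = np.random.default_rng(0)
years = np.arange(1996, 2013, dtype=float)
anoms = 0.003 * (years - years[0]) + rng.normal(0.0, 0.05, years.size)

trend = decadal_trend(years, anoms)
print(f"trend = {trend:+.3f} C/decade; 'pause' by this definition: {trend < 0.05}")
```

        Against a real data set one would also want confidence intervals on the slope, since 17-year trends in noisy series are themselves quite uncertain.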

      • Chief,
        As each successive La Nina year is warmer than the last, it shows that there is something going on alongside your theory of great climatic shifts.

      • Chief Hydrologist

        ‘What happened in the years 1976/77 and 1998/99 in the Pacific was so unusual that scientists spoke of abrupt climate changes. They referred to a sudden warming of the tropical Pacific in the mid-1970s and rapid cooling in the late 1990s. Both events turned the world’s climate topsy-turvy and are clearly reflected in the average temperature of Earth.’ http://www.sciencedaily.com/releases/2013/08/130822105042.htm

        ‘Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.

        It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. Our ‘interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

        I don’t think I can claim it as my theory and it is clear that tendentious argumentation about successive La Nina and warm Septembers mean very little.

        http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_September_2013_v5.6.png

        At least until you get a handle on actual energy dynamics.

        http://s1114.photobucket.com/user/Chief_Hydrologist/media/AdvancesinUnderstandingTop-of-AtmosphereRadiationVariability-Loebetal2011.png.html?sort=3&o=66

      • Yes BD, the GISS temperature update is suggesting that the warming may be advancing again.

        There was a real anomaly over the last year according to the CSALT model.
        http://contextearth.com/2013/10/30/detailed-analysis-of-csalt-model/
        The global average temperature was not as warm as the neutral SOI value would suggest, based on the past 130 years.

    • Bob, you write “Predicting climate on the short term is a fool’s errand.”

      It amazes me that someone who is obviously intelligent cannot see where this statement leads. Smith et al., Science, August 2007 vividly illustrates why short-term predictions are impossible with current models. But we cannot VALIDATE climate models until they have proven that they can consistently predict what is going to happen 30+ years in the future. So climate models can NEVER be VALIDATED, and so they are useless for providing any guidance as to what is going to happen in the future.

      • When snowfall and albedo are modeled correctly, the climate models may heal.

        But, that will take away much of the alarmism and maybe much of the money.

      • Jim, you can spare me the caps.

        So according to you 30 years is never.

        We have 6 more years to the end of Hansen’s 1988 model and its three predictions. The range is 0.6 to 1.6 by GISS met stations. It is within that range now. We will see how close it ends up.

      • Bob, you write “It is within that range now”

        Sorry, no sale. I said consistently. One swallow does not make a summer. Being right once, if he is right, could just be a lucky guess.

        And 30 years is never for me. I will be dead by then.

        Not a one-time thing; GISS met stations have been within the range 0.6 to 1.6 all of this century.

        Our own individual longevity has nothing to do with it, and shows a pretty selfish attitude.

      • So Bob, are you saying that because the temps are near or below Hansen’s zero emissions scenario even though emissions are at or above his high growth scenario, Hansen is correct? This makes no sense.

      • DalyPlanet,

        No, what I am saying is that the temperature is above the no-emissions scenario but below the BAU scenario of steadily increasing forcing. So the temperature response is between the two scenarios, A and C.

        You are not expecting me to compare Hansen’s projections to Spencer’s temperature data set, are you?

        Cause that would make no sense.

      • “We have 6 more years to the end of Hansen’s 1988 model and its three predictions.
        The range is 0.6 to 1.6 by GISS met stations. It is within that range now. We will see how close it ends up.”

        I think we can say at the moment that the low guess of 0.6 C is too high.
        And as I recall, the low estimate depended on government policy that lowered global emissions, and we have not lowered global emissions.
        Instead we have gone from the US being the highest CO2 emitter in the world to China dominating the world in terms of CO2 emissions. In effect we added another US-sized emitter to the world. World emissions have increased so much that even if the US had zero CO2 emissions, global CO2 would still have risen.
        And since one could say that US policy, by inhibiting economic growth, resulted in at least slight increases in China’s energy use, government regulation has increased global CO2 emissions rather than decreased them.

    • Suppose we don’t know, and can’t make a good prediction.
      Should we make a prediction anyway, one based mostly on, or as valuable as, a guess?
      If you can’t make a well-based prediction, don’t make one.
      Dr. Curry’s 10-30 year pause isn’t a prediction (if it were, it wouldn’t have much value); it’s just a “suppose” scenario.

        “If you can’t make a well-based prediction – don’t make one.”

        I migrate business data for a living. One of the things I tell my customers is that one of the worst things we can do is migrate bad data that the end user can’t tell from good. Data is the foundation of my customers’ businesses; bad data can put them out of business.

      • jacobress, you write “If you can’t make a well based prediction – don’t make one.”

        I agree, and have been trying to say this for years. If the science is not good enough to solve the problem, then the problem cannot be solved. Unfortunately we have people like Steven Mosher who believe that saying we cannot solve the problem is admitting defeat, and we should do the best we can. But if the best isn’t good enough, we should say so.

      • A well based prediction would look at well based data from the past and extend the climate pattern of the past ten thousand years forward for the next ten thousand years. Nothing major has changed to kick us out of the pattern of the past ten thousand years.

        This warm period will proceed much like the Roman and Medieval Warm periods and end with something like the Little Ice Age.

        The best guess for what is going to happen is based on what has happened.

    • When someone predicts the “pause” to continue for 10 to 30 years, just what are they saying. Can you give a number to your prediction?

      There is a 60 year cycle in Climate. Look at past data and this upturn will come. The longer term cycle will also come. Every 700 to 1000 years the long term cycle repeats.

  24. “When someone predicts the “pause” to continue for 10 to 30 years, just what are they saying. Can you give a number to your prediction?”

    Can you? When will warming resume? How much? To what effect? Can you rule out cooling?

    • I gave a number up thread.
      GISS global land sea reaches 1.00 by mid 2030s.
      I have said it is still warming, so, no, the warming will not “resume”; it never stopped.

      When CO2 doubles compared to pre-industrial level of 280 ppm, we will have a total of about 3 C above pre-industrial temps. At current rates that should happen late this century. 2093 if current rates stay the same, but I expect the levels of CO2 to increase faster than linear.

      The only effect I will predict from that is significant loss of the Greenland ice sheet within 300-600 years and associated sea level rise.
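      The back-of-envelope arithmetic behind “2093 if current rates stay the same” can be sketched under simple assumptions: a pre-industrial level of 280 ppm (so doubling means 560 ppm), a current level of roughly 395 ppm, and linear growth of about 2 ppm per year (the last two are round numbers supplied for illustration, not figures from the comment):

```python
def year_of_doubling(start_year, start_ppm, growth_ppm_per_year, preindustrial_ppm=280.0):
    """Year CO2 reaches twice the pre-industrial level, assuming linear growth."""
    target_ppm = 2.0 * preindustrial_ppm  # 560 ppm for a 280 ppm baseline
    return start_year + (target_ppm - start_ppm) / growth_ppm_per_year

# Round-number assumptions: ~395 ppm in 2013, growing ~2 ppm/yr
print(year_of_doubling(2013, 395.0, 2.0))  # -> 2095.5
```

      Faster-than-linear growth, as Bob expects, pulls the doubling year earlier; slower growth pushes it later.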

    • Chief Hydrologist

      The expectation of the pause persisting for a decade to three more is based on good physical oceanography. Science in other words.

      http://judithcurry.com/2013/10/30/implications-for-climate-models-of-their-disagreement-with-observations/#comment-406261

  25. Can I suggest that models are in fact built on a false premise? The models assume that they have incorporated all of the significant factors that influence regional and world climate. What the discrepancy between observations and the models is telling us is that this assumption is (most likely) false.

    We should be pouring our resources and money into finding out why the models have got it so wrong.

    • I’ve always wondered why the models appear to be stuck on stupid and haven’t evolved.

      • If you can figure out why the Piltdown Mann’s Crook’t Hockey Stick zombied on well past its expiration date, and then evolved into an unnatural niche, you can explain the persistence of the models’ stupidity. It is in service of a narrative of human guilt, the persistence of which, and the utilization of which, is quite explicitly known as ‘The Cause’.
        ===================

      • Let’s not forget the great motivator, FEAR; it’s in there too.

    • “The models assume that they have incorporated all of the significant factors that influence regional and world climate”

      I asked Andy Lacis that question about a year ago now

      He replied that all factors are accounted for, as far as he knew.

      I didn’t believe him then, either

  26. Is there a self correcting mechanism for the realignment of scientific resources to maximize return on investment? Is there a mechanism that determines that the money (a limited resource) spent on models, as an example, is yielding diminishing returns and should be reallocated to say carbon capture research as an example?

    • No, there is no such mechanism in Climate Science. And that is a genuine, critical part of the problem.

      There are no customers needing answers to important problems who are paying with their own real money and judging if progress has been made toward solutions to their problems.

      Especially there are no genuinely independent review processes and procedures in place to perform independent verification and validation of the models, methods, software, application procedures, and user qualifications.

  27. One place to start is to answer the question that I proposed back in 2008:

    “Why is there a 60-year periodicity in the Earth’s trade winds?”
    http://astroclimateconnection.blogspot.com.au/2010/03/60-year-periodicity-in-earths-trade.html

    Dr. Judith Curry proposes a 60 year Mexican wave amongst the Earth’s various climate systems.

    I proposed back in 2008-09 that it was due to an external astronomical influence:

    Wilson, I.R.G., 2011, Are Changes in the Earth’s Rotation
    Rate Externally Driven and Do They Affect Climate?
    The General Science Journal, Dec 2011, 3811.
    http://gsjournal.net/Science-Journals/Essays/View/3811

    Only time will tell who is correct.

  28. Reputable journals should not accept papers for publication that are based solely on climate simulation computer model study results until the models can be improved and validated with physical data. NASA policy forbids use of un-validated computer models for operational or design decisions involving human safety (NASA has gone a bit “off the tracks” with its climate simulation model work.). The root cause of the Shuttle Columbia structural breakup during re-entry can be traced to the use of a computer “model” to predict survivability of External Tank foam impact on the Orbiter that was not based on first principle physics and that was never validated.

    In the aftermath of the Columbia accident investigation, appropriately qualified engineers were able to quickly develop a first principle model that was validated by carefully controlled foam impact tests, measuring impact loads that the new first principle model accurately predicted. This validated model was then used for subsequent Shuttle flights to assess structural integrity for a specific foam impact environment in terms of mass, velocity and location of the impacting pieces of foam.

    Use of un-validated models for critical decision-making is a dangerous practice that the climate model community should clearly explain to politicians who want to create legislation and regulations in an ill-advised attempt to avoid uncertain future problems that climate models predict. If the climate science community does not do this, representatives from various Professional Engineering fields will be compelled to do so as required by their Codes of Ethics.

    Until such time as the climate models can be improved and validated, critical decisions should be based solely on actual physical data. My own assessment of atmospheric CO2 concentration and HadCrut4 global average surface temperature data from 1850-2012 indicates a transient climate sensitivity Upper Bound of 1.6 deg C. Compare this to the IPCC AR5 report and statistical models for climate sensitivity that are being used by the US Government to compute “Social Costs of Carbon” to economically justify their proposed CO2 emissions control regulations. These statistical models allow a 10 percent probability that climate sensitivity exceeds 5.86 deg C and a 2 percent probability that it exceeds 10 deg C!!! The fact that the climate science community has spent 30 years and billions of dollars on climate model development and simulation studies and hasn’t succeeded at all in narrowing the climate sensitivity uncertainty range estimated at 1.5 to 4.5 deg C over 30 years ago, indicates to me that narrowing the uncertainty range was never the goal of their research.

    Essentially, the US Government’s “made up” statistical models for climate sensitivity allow a small probability of Greenland and Antarctic Ice Sheet melting, causing drastic sea level rise, and extremely high “flood damage cost” that yields their statistical “Expected Value” of costs for every ton of CO2 emitted into the atmosphere. This arguably fictitious “Expected Value” for “Social Cost of Carbon” is used to mathematically offset the real cost from loss of jobs and higher energy costs resulting from their proposed CO2 emissions control legislation. If the lives of their grandchildren depended on the accuracy of their calculations, I wonder if they would still be so cavalier with their methodology. This is the kind of question I ask myself about confidence in my computer model when using it for design or operational decisions involving human safety. Would we want engineers to design bridges and commercial airplanes with models as un-validated as climate simulation models, and with “made-up” statistical models with little physical data to justify the statistical model, for dynamic loads the structure needed to survive?
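    The transient-sensitivity arithmetic described above follows the standard logarithmic scaling, TCR ≈ ΔT · ln 2 / ln(C1/C0). A sketch with illustrative round numbers (about 0.8 C of observed warming and a 285-to-395 ppm CO2 rise; these inputs are assumptions for illustration, not the commenter’s exact figures, and attributing all observed warming to CO2 is what makes this an upper-bound style estimate):

```python
import math

def tcr_estimate(delta_t, c_start, c_end):
    """Warming per CO2 doubling implied by observed warming and the CO2 ratio,
    attributing all of delta_t to CO2 (logarithmic forcing assumption)."""
    return delta_t * math.log(2.0) / math.log(c_end / c_start)

# Illustrative round numbers; yields a value in the neighborhood of 1.7 C
print(round(tcr_estimate(0.8, 285.0, 395.0), 2))
```

    A quick sanity check: a full doubling (280 to 560 ppm) with 1 C of warming returns exactly 1 C per doubling.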

    • Dr Doiron,
      If the goal was science, climatology would be following your suggested path, unfortunately it was taken over by activists who are looking to justify a cause.

    • HHD,
      Thank you for this informative posting.

    • These climate models are dreadfully misinformative. Are you gonna believe the climate models or your lying eyes?
      ==================

    • Harold. You write “Reputable journals should not accept papers for publication that are based solely on climate simulation computer model study results until the models can be improved and validated with physical data.”

      Welcome to the madhouse!! Those of us who try to behave like scientists have been saying this for years. No-one who matters takes any notice of us. The scientific community, led by the Royal Society and the American Society, have endorsed the use of non-validated models, and no-one dares to tell them they are wrong. To do so, as our hostess knows, is to risk your future in academia. I was told this by Jan Vizer over 10 years ago, when I first got interested in CAGW, and I did not believe him. I now know this is absolutely true.

      So you can write what you have written, it will be absolutely true, and NOTHING will happen. The learned scientific societies will go on endorsing CAGW, politicians will go on saying we must “decarbonize” society, and we will go on burning fossil fuels as if nothing is going on.

      As I said originally. Welcome to the madhouse!!

      • how and where have you ever tried to act like a scientist?

        not here on this blog, where you suggest that the RS should get involved. That’s utterly non-scientific. That’s pure politics

      • The scientific community, led by the Royal Society and the American Society, have endorsed the use of non-validated models, and no-one dares to tell them they are wrong.

        WE DO! THEY ARE WRONG!

        THEY HAVE NO DATA THAT SUPPORTS THEIR SIDE.

        REAL EARTH TEMPERATURE DOES NOT MATCH THEIR MODEL OUTPUT.

      • Steven, you write “how and where have you ever tried to act like a scientist?”

        I ALWAYS try to behave like a scientist. Whether or not I succeed is an entirely different matter.

      • “you suggest that the RS should get involved”

        You dipsh*t. They are already involved. They need to get un-involved.

        Andrew

      • Involved? Why the Royal Society is accomplished! Er, I guess that should be ‘accompliced’.
        ==============

      • I missed the part where Jim said the RS should get involved.

    • Dr. Doiron,

      Excellent post! +10000

    • When we saw the video of foam coming off of the External Tank and hitting the Shuttle Columbia, some people I worked with said foam cannot do serious damage.

      I said, if a tornado can drive a straw into a piece of wood, foam can break a wing.

      If the foam hits edgewise, it can and did do deadly damage.

    • Great post !!!

  29. WebHubTelescope

    Your CSALT regression seems to use unadjusted, highly autocorrelated time series. This can lead to spurious correlations (see virtually any econometrics textbook). There is no indication that the integrated series were differenced to remove this autocorrelation, as standard practice requires. Without this, estimated regression parameters lose even basic consistency properties. If you have, in fact, adjusted the series properly, I suggest you say so.
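
    The spurious-regression concern is easy to demonstrate numerically; this is a generic sketch with synthetic random walks, not an analysis of the actual CSALT inputs:

```python
import numpy as np

# Two INDEPENDENT random walks (integrated white noise) frequently show a
# strong correlation in levels, while their first differences (the standard
# econometric remedy) almost never do.
rng = np.random.default_rng(42)
n, trials = 163, 500  # ~annual data 1850-2012, repeated many times

strong_levels = strong_diffs = 0
for _ in range(trials):
    x = np.cumsum(rng.normal(size=n))  # integrated series #1
    y = np.cumsum(rng.normal(size=n))  # integrated series #2, independent of #1
    if abs(np.corrcoef(x, y)[0, 1]) > 0.5:
        strong_levels += 1
    if abs(np.corrcoef(np.diff(x), np.diff(y))[0, 1]) > 0.5:
        strong_diffs += 1

print(f"|r| > 0.5 in levels:      {strong_levels / trials:.0%}")
print(f"|r| > 0.5 in differences: {strong_diffs / trials:.0%}")
```

    A sizable fraction of the level-on-level correlations come out “strong” despite there being no relationship at all, which is exactly the textbook point.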

    That said, the problems with GCMs do suggest climatology might lower its sights to plain old statistical model-fitting of the CSALT variety. When I suggest this in polite social conversation to professors nursing large grants for climate-related research, they assure me I don’t understand the important physical principles involved. That’s fair enough, but it goes to the purpose of GCMs – fundamental science or policy?

    • OAS – Organization of American States?

    • OAS said : ”
      This can lead to spurious correlations (see any econometrics textbook in particular).

      Why are there so many absolutely clueless economists trolling these sites?

      Economists don’t seem to understand that autocorrelation can explain a real physical phenomenon. If you integrate a time series, that will generate an autocorrelation because it is accumulating a slope, like the side of a hill is autocorrelated as you start climbing up — quick, what is the probability of the next step being higher?

      Go away until you can talk hard science instead of this phony soft science based on game theory that I have no interest in pursuing.
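
      The claim above that integration generates autocorrelation checks out numerically; a minimal sketch:

```python
import numpy as np

# White noise has essentially no memory; its running sum (the "side of a
# hill" in the comment above) is strongly autocorrelated.
rng = np.random.default_rng(1)
noise = rng.normal(size=5000)   # un-integrated series
walk = np.cumsum(noise)         # integrated series (random walk)

def lag1_autocorr(s):
    s = s - s.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

print(lag1_autocorr(noise))  # near 0
print(lag1_autocorr(walk))   # near 1
```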

  30. “Although it has been only a little over twenty years since the Montreal Protocol, which effectively created a global ban on chlorofluorocarbons (CFCs), the interesting history of the ozone hole has slipped under the radar, largely eclipsed by the much greater story of the anthropogenic global warming fraud. It’s interesting to revisit the CFC/ozone depletion scam and note the striking similarities to the current campaign against CO2.” [See also: IPCC: International Pack of Climate Crooks] ~David S. Van Dyke, American Thinker

    • Waggy you are flogging a dead horse.. be my guest and link to the industry propaganda from the 1970s and 1980s.. before DuPont (who sponsored most of that Contrarian twaddle) accepted the science behind the anthropogenic forcing of ozone depletion.. in which case you are going to look very foolish, because it’s the same as claiming Plate Tectonics doesn’t exist because you can link to some skeptical geological papers written in the 1960s (yes, that is how recently Plate Tectonics finally got fully accepted).. or the Bible.

      CFCs are an excellent example of why the term ‘skeptic’ is inaccurate but ‘Denier’ is applicable, despite the mewling from Contrarians.. Deniers desperately want CFC (and Acid Rain) science to be wrong, so they can say Homo sapiens can’t affect the planet.. therefore they claim it is wrong.. no reputable science to back it up, just banjo playing neocon dogma.

      Study that demonstrated Ozone Depletion was caused by CFCs –
      Molina, M. and Rowland, F.S. (1974) Stratospheric Sink for Chlorofluoromethanes: Chlorine Atom Catalyzed Destruction of Ozone. Nature, 249(5460), 810-812

      Report that confirmed multiple studies data demonstrated Ozone depletion had an Anthropogenic forcing –
      Ozone Trends Panel, “Executive Summary” (Washington, DC: National Aeronautics and Space Administration, Feb. 8, 1988)

      Dr. Mack McFarland, DuPont staff scientist, was on the Ozone Trends Panel and DuPont were one of the largest corporate manufacturers of CFCs.

      • The dead horse is the Left’s cash for clunkers economics, and your continued homage to DuPont in the matter of CFCs is an homage to liberal fascism.

  31. I think the point about falsification is an interesting one.

    Of course there are many aspects of science where we don’t so much prove (or falsify) a hypothesis, rather a hypothesis rises to the top because it better explains the phenomenon we are interested in.

    If we remember back to Judith’s post on the three hypotheses, then I think we are now entering a period where the “high sensitivity low variability” hypothesis of the IPCC does not explain the earth climate as well as either “natural variability plus GHE” or “natural variability plus non-linear synchronisations plus GHE”. If this situation persists then I’m sure the science will shift further in this direction, despite the undoubted inertia provided by the politics and the IPCC.

    • skeptics dont even understand the logic of falsification. They should read and reread briggs.

      • Leonard Weinstein

        Steven,
        You can NEVER prove something false. The most you can do is show that to the best of your information it has not been shown to be supported, at least in some cases. Saying that you can’t falsify a hypothesis is in the extreme case true, but in the real world, a small enough residual chance is no excuse to say it has not been effectively falsified.

      • Matthew R Marler

        Steven Mosher: skeptics dont even understand the logic of falsification. They should read and reread briggs.

        All skeptics?

      • You can NEVER prove something false.

        When you use NEVER or ALWAYS, you can “almost” ALWAYS be found WRONG.

      • I like to say “Never is a long time”

    • Falsification or near-Falsification isn’t very interesting. So you ran a horse race with one horse (model) and it dropped dead before it reached the finish line (given the model is true, the data are highly unlikely). No one is going to pay money to watch your one-horse races.

      Now, if you want to run a race with two or more horses, and say which one got closest to the finish line… well, now you’re talking. Even if the second horse is an old lame nag (a pure linear extrapolation of the last 100-year trend). At least people can laugh at the nag. Or at least, policy makers have something that predicts better than a simple extrapolation of the linear trend.

      It’s all about better, not true and false.
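
      A minimal sketch of such a two-horse race, on purely synthetic data (the series, the split year, and the stand-in “model” are all invented for illustration): extrapolate a fitted linear trend as the nag, then ask whether a candidate forecast beats it on RMSE.

```python
import numpy as np

# Synthetic "observations": trend + slow oscillation + noise (illustrative only).
rng = np.random.default_rng(7)
years = np.arange(1900, 2014)
t = years - 1900
truth = 0.007 * t + 0.1 * np.sin(t / 10.0) + rng.normal(scale=0.05, size=t.size)

# Horse 1, the old nag: a linear trend fitted through 1989, extrapolated onward.
train = years < 1990
slope, intercept = np.polyfit(years[train], truth[train], 1)
nag = slope * years[~train] + intercept

# Horse 2, a stand-in "model": here just truth plus small noise, for illustration.
candidate = truth[~train] + rng.normal(scale=0.02, size=(~train).sum())

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("nag RMSE      :", rmse(nag, truth[~train]))
print("candidate RMSE:", rmse(candidate, truth[~train]))
```

      Whichever horse posts the lower out-of-sample RMSE wins the race; “better than the nag” is the bar, not “true”.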

      • Right, that’s the Popperian line, the best explanation going (least falsified) is the one you provisionally use to understand the world. But when that best explanation is still pretty dubious, optimal decision making entails putting very little weight on that explanation relative to the others. And a linear extrapolation is not an explanation in the normal sense.

  32. Since all the model are off course in the same direction, and they all apparently use the same basic physical principles, it seems that there is likely to be the same fundamental flaw somewhere in all of them.

    Until this basic flaw is found, fixed and shown to be so, they should all be grounded.

    • Yes, the models are burning up upon re-entry, and we still fling humans and treasure blithely into the sky.
      ==================

  33. Fear of global warming has been great for academia and the Left because it gives school teachers and the UN a chance to save the world while it, “makes industry and capitalism look bad while affording endless visuals of animals and third-world humans suffering at the hands of wealthy Westerners.” But, that’s not all: “Best of all, being driven by junk-science that easily metamorphoses as required, it appeared to be endlessly self-sustaining.” (See–Ibid. @Marc Sheppard, American Thinker)

    • American Thinker is yet another neocon echo chamber for knuckledragging groupthink, propaganda, not a scientific journal.. is that the ‘best’ reference point you’ve got, Waggy?

      • At the very moment we had the technological ability to determine whether the ozone hole existed, guess what: it was found to exist. The reason is simple: it has always existed– just as it exists now.

      • Kiddo, even I know that UV/ozone interaction and ozone chemistry are poorly understood, and it is senseless to argue that our time of observation is not very short. I’ve avoided using ‘Lame, Nasty’, to address you, but it’s about time.
        ================

  34. A fan of *MORE* discourse

    Judith Curry asserts [without supporting citations, with only dubious reasons, and thus possibly wrongly?] “We have long reached the point of diminishing returns from climate models in terms of actually understanding how the climate system works; not just limited by the deficiencies of climate models themselves, but also by the fact that the models are very expensive computationally and not user friendly.”

    Judith Curry, please be aware that your assertions echo criticisms of fluid-dynamical simulation codes that history has convincingly refuted:

    Thirty years of development and application of CFD at Boeing Commercial Airplanes
    Forrester T. Johnson, Edward N. Tinoco, and N. Jong Yu

    Over the last 30 years, Boeing has developed, manufactured, sold, and supported hundreds of billions of dollars worth of commercial airplanes. During this period, it has been absolutely essential that Boeing aerodynamicists have access to tools that accurately predict and confirm vehicle flight characteristics.

    Thirty years ago, these tools consisted almost entirely of analytic approximation methods, wind tunnel tests, and flight tests. With the development of increasingly powerful computers, numerical simulations of various approximations to the Navier-Stokes equations began supplementing these tools. Collectively, these numerical simulation methods became known as Computational Fluid Dynamics (CFD).

    This paper describes the chronology and issues related to the acquisition, development, and use of CFD at Boeing Commercial Airplanes in Seattle. In particular, it describes the evolution of CFD from a curiosity to a full partner with established tools in the design of cost-effective and high-performing commercial transports.

    Broadly speaking, paleo-calibrated energy-balance models represent today’s “best available climate science” (in Judith Curry’s handy phrase), relative to which global-scale climate-simulation codes are at a comparable stage of scientific development to Boeing’s aircraft simulation codes of twenty years ago.

    Have the prospects for continued improvement “reached the point of diminishing returns”, Judith Curry? Your post did not cite the many reasons for believing that the answer is “no”. Fortunately Ars Technica is remediating that lack!

    Why trust climate models?
    It’s a matter of simple science

    How climate scientists test, test again,
    and use their simulation tools.

    Steve Easterbrook, a professor of computer science at the University of Toronto, has been studying climate models for several years. “I’d done a lot of research in the past studying the development of commercial and open source software systems, including four years with NASA studying the verification and validation processes used on their spacecraft flight control software,” he told Ars.

    When Easterbrook started looking into the processes followed by climate modeling groups, he was surprised by what he found. “I expected to see a messy process, dominated by quick fixes and muddling through, as that’s the typical practice in much small-scale scientific software. What I found instead was a community that takes very seriously the importance of rigorous testing, and which is already using most of the tools a modern software development company would use (version control, automated testing, bug tracking systems, a planned release cycle, etc.).”

    “I was blown away by the testing process that every proposed change to the model has to go through,” Easterbrook wrote.

    If you only tune in to public arguments about climate change or read about the latest study that uses climate models, it’s easy to lose sight of the truly extraordinary achievement those models represent.

    As Andrew Weaver told Ars, “What is so remarkable about these climate models is that it really shows how much we know about the physics and chemistry of the atmosphere, because they’re ultimately driven by one thing—that is, the Sun. So you start with these equations, and you start these equations with a world that has no moisture in the atmosphere that just has seeds on land but has no trees anywhere, that has an ocean that has a constant temperature and a constant amount of salt in it, and it has no sea ice, and all you do is turn it on. [Flick on] the Sun, and you see this model predict a system that looks so much like the real world. It predicts storm tracks where they should be, it predicts ocean circulation where it should be, it grows trees where it should, it grows a carbon cycle—it really is remarkable.”

    The Ars Technica reader comments too are highly recommended!

    Conclusion  In the event that the land-temperature pause ends, and sea-warming trends continue, and sea-level rise accelerates, all as affirmed by global-scale gravimetric, altimetric, and thermometric measurements, and as verified and validated by in-depth dynamical models, all in accord with simpler energy-balance predictions, then it will be fair to conclude: The era of rational climate-change skepticism has ended.

    Question  What is the earliest date at which rational climate-change skepticism could cease entirely?

    Answer  Within ten years. Which is quite a short time!


    • “please be aware that your assertions echo criticisms of fluid-dynamical simulation codes that history has convincingly refuted:”
      “With the development of increasingly powerful computers, numerical simulations of various approximations to the Navier-Stokes equations began supplementing these tools.”
      Supplement, not replace. Just as I mentioned in my post at the top of the thread, CFD gets F1 cars close, but the teams that used it exclusively do not win. And I contend dealing with the flight characteristics of a jet is a far simpler task than the same for the Earth’s atmosphere.
      And the proof I’m right is that the models fail to reproduce climate at the global level (where it’s all averaged into a blob hiding error), and worse still at the regional level (where it’s not so well hidden).

      • A fan of *MORE* discourse

        F1 victory  Hinges on relative speed differences of order ±1/1000.

        Aircraft marketability  Hinges on relative fuel-efficiency differences of order ±1/100.

        Climate-change acceleration  Hinges on relative CO2-forcing differences of order ±1/4.

        Conclusion The relative accuracy required of climate simulations is orders-of-magnitude less demanding than F1/aircraft simulation accuracy.

        It has been a pleasure to assist your climate-change comprehension, Mi Cro!


      • While I don’t disagree with your margins, your conclusion is flawed because we have zero empirical evidence on CO2 forcing, so we don’t know what order the difference is. Though measured temps look to show it to be far lower than consensus values.

      • Steven Mosher

        ‘While I don’t disagree with your margins, your conclusion is flawed because we have zero empirical evidence on CO2 forcing, so we don’t know what order the difference is. Though measured temps look to show it to be far lower than consensus values.”

        Huh, we have engineering data on CO2 forcing. The physics used to calculate CO2 forcing is so precise that we use it to design aircraft, radars, IR sensors, communication systems, and CO2 detection systems.

        The added forcing from CO2 is easily calculated (although it used to be secret back in the day) and the calculation has been verified by field tests and years of working fielded devices. We paid billions to develop this understanding PRIOR TO any concerns about global warming.

        Some day I will explain how this physics played a role in Reagan’s Star Wars.

      • Mosh,
        “Huh, we have engineering data on CO2 forcing. The physics used to calculate CO2 forcing is so precise that we use it to design aircraft, radars, IR sensors, communication systems, and CO2 detection systems.
        The added forcing from CO2 is easily calculated (although it used to be secret back in the day) and the calculation has been verified by field tests and years of working fielded devices. We paid billions to develop this understanding PRIOR TO any concerns about global warming.”

        Be that as it may, a global warming trend does not show up in regional daily min/max day-over-day temperature changes.

        “Some day I will explain how this physics played a role in Reagan’s Star Wars.”
        Does atmospheric CO2 get in the way of CO2 laser beams? Or were they planning on using the atmosphere as the lasing cavity?

      • Matthew R Marler

        Steven Mosher: Some day I will explain how this physics played a role in reagans star wars.

        OK, but what we need are the physics of increased IR radiation on sea surfaces, boreal forests, savannahs, rain forests and such; and the physics of increased non-radiative transport of heat from the lower troposphere to the upper troposphere.

        Globally, the upper troposphere carries out a net cooling of the Earth, with more energy radiated to space than back toward the Earth surface. It stays within its range of temperatures because of the steady influx of tangible and latent heat carried from the surface by dry and wet thermals. The NH and SH net outward radiation fluctuates seasonally and daily. So what does the physics tell us will happen to the net outward radiation if the CO2 concentration doubles? How will that be affected if the non-radiative transport of tangible and latent heat from the surface to the upper troposphere increases?

      • Steven Mosher

        here Mi Cro

        Start here.

        To shoot down a missile from space you are in a look-down scenario.
        The radiation from the plume has to travel through the atmosphere for you to detect it. To build the sensor to do this detection you have to be able to calculate the required sensitivity. To calculate that you have to know how the various gases in the atmosphere will absorb, reflect, or transmit RF. That physics is known. That physics is fricking engineering.
        it works. we build shit based on that physics.

        That physics says: double CO2 and you get 3.7 watts per square meter of forcing.

        Engineering. guys use this shit every day. see chapter 3 in the book below

        http://www.aerospace.org/publications/books-by-aerospace/rocket-exhaust-plume-phenomenology/

        And look for the Delta Star experiment or Delta 183.

        Jeez, skeptics who dont even understand basic engineering
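
        For reference, the 3.7-watt figure quoted above follows from the widely used simplified forcing expression of Myhre et al. (1998), dF = 5.35 ln(C/C0) W/m^2; a one-line check:

```python
import math

# Simplified CO2 forcing (Myhre et al. 1998): dF = 5.35 * ln(C / C0) W/m^2
def co2_forcing_wm2(c_ppm, c0_ppm):
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing_wm2(560.0, 280.0), 2))  # doubling -> 3.71
```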

      • “To calculate that you have to know how the various gases in the atmosphere will absorb,reflect, or transmit RF. That physics is know. That physics is fricking engineering.”

        First, I presume you really mean IR, not RF. So let me ask: what’s the temperature of the exhaust plume, and what is the wavelength of the detected IR, versus the absorption spectrum of CO2? Lastly, what wavelength is the IR that has been designated as the one that causes global warming?

        “That physics says: double CO2 and you get 3.7 watts of forcing.”
        I don’t have an issue with this, but I’ll point to averages done by region, and point out that the trend in min and max temps does not show a common background value.
        Averaging surface data should be acceptable mathematically, and creating averages by region, as long as there are adequate samples, should be valid. No, I can’t help it that it doesn’t show what you expect. But I’ll also note that what it shows looks to be the effect a stadium wave would have on regional temperatures (i.e., they’re not all the same).
        I also can’t help that all of the temperature indexes run the measurements through the equivalent of a meat grinder.

      • Steven Mosher

        Mi Cro.

        1. you are wrong about temperatures and CO2

        2. “Some day I will explain how this physics played a role in Reagan’s Star Wars.”
        “Does atmospheric CO2 get in the way of CO2 laser beams? Or were they planning on using the atmosphere as the lasing cavity?”

        Wrong. jeezus you call yourself an engineer?

      • Mosh,
        1) You keep telling me I’m wrong; I keep telling you it’s NCDC-published surface station measured values, and all I’m doing is creating averages of the measurements (which is not a global average). Values which, BTW, I don’t adjust.
        2) You leave a cliff-hanger, I guessed wrong. I have to presume it’s a clever answer, otherwise why bother with the tease.

      • Steven Mosher

        “OK, but what we need are the physics of increased IR radiation on sea surfaces, boreal forests, savannahs, rain forests and such; and the physics of increased non-radiative transport of heat from the lower troposphere to the upper troposphere.”

        No you dont need that at all.

        All you need to know is that adding CO2 raises the ERL (effective radiating level).

        That’s all you need to know to know that more CO2 means a warmer planet.

        “All you need to know is that adding CO2 raises the ERL.”

        Vaughan Pratt explained that elegantly in comments to the thread on his quasisawtooth-with-the-missing-tooth millikelvin foolishness. I wonder why we can’t get past that.

      • “All you need to know is that adding C02 raises the ERL.”

        And if you like your current healthcare plan, you can keep it.
        Thanks Steve!!!

      • Matthew R Marler

        Steven Mosher: That physics says: double CO2 and you get 3.7 watts of forcing.

        That is the subset of physics that ignores most of the energy transport processes on Earth; it assumes that the entire Earth is at its equilibrium temperature and then changes gradually to its new equilibrium. When you consider the round Earth with a non-homogeneous surface, orbiting the Sun, spinning on an axis that is oblique to the plane of its revolution about the Sun, and with substantial non-radiative transfer of tangible and latent energy, then nothing can be derived from considering a doubled CO2 concentration.

        Consider again the upper troposphere, where outbound radiation exceeds inbound (that is, into the upper troposphere from above and below) radiation, and where a substantial amount of the net inbound energy comes from wet and dry thermals. What happens there if CO2 concentration doubles, and where is that shown? FWIW, the equilibrium model assumes not only that the sea surface and deep sea have the same temperature, and that the equator and poles have the same temperature, but that all levels of the atmosphere have the same temperature. Without that assumption, which is obviously counterfactual and inaccurate, neither the 3.7 W/m^2 surface-average downwelling increase nor its global consequences can be calculated.

      • Steven, you write “That’s all you need to know to know that more c02 means a warmer planet.”

        I don’t know why I bother. For the umpteen-thousandth time, this is NOT all we need to know. We need to know HOW MUCH it warms the planet. All we have is guesses, since the numeric value has not been, and cannot be, MEASURED.

        But as usual, Steven will avoid the issue of lack of measurement, and claim that estimates are good enough. They aren’t.

      • Matthew R Marler

        Steven Mosher: Jeez, skeptics who dont even understand basic engineering

        You repeatedly deny the fundamental problem: given that CO2 absorbs IR, what does an increase in atmospheric CO2 (e.g. a doubling) produce in the atmosphere? Where, geographically and atmospherically, do those effects occur?

        The inaccuracy of the equilibrium model without non-radiative transport of energy is much greater than the estimated effects; both the consistent spatio-temporal variation in temperature, and the inconsistent or unexplained variation.

        The aircraft industry is irrelevant to the climate science discussion, except that it has a lot of what climate science does not have: a history of making things that work (along with some that don’t work, or don’t work well enough); a history of testing and modifying mathematical relationships in light of real world experiments (actual aircraft) and wind-tunnel experiments; and of continually appraising and refining the limits of accuracy of the mathematical approximations.

        mattstat, “FWIW, the equilibrium model assumes not only that the sea surface and deep sea have the same temperature, and that the equator and poles have the same temperature, but that all levels of the atmosphere have the same temperature.”

        I was under the impression that the up/down radiant “kernels” were a bit more sophisticated and assume an atmospheric temperature gradient with isothermal layers. Then adding CO2 would shift the ERL upwards, producing warmer layers below and cooler layers above. The problem is more with the isothermal-layer assumption, where water vapor tends to advect or expand instead of simply warming like a nice little molecule. Keeping track of H2O is supposedly the reason for the complex models, since CO2-inspired H2O increase is supposed to contribute 2/3rds of the potential warming.

      • Mosher, you forgot about the troposphere and the wet and dry stuff. Also, what about that Obamacare! Everybody can’t keep their insurance. They got ya, Steven. They are impenetrable.

      • If you like your climate, you can keep it. Period.
        ==================

      • AFOMD,

        You say : –

        “• Climate-change acceleration Hinges on relative CO2-forcing differences of order ±1/4.”

        Just so. A big fat zero plus or minus zero times one quarter equals . . . zero.

        Live well and prosper,

        Mike Flynn.

      • Mosher: “That’s all you need to know to know that more c02 means a warmer planet.”

        Wrong.
        A more energetic planet I will grant you, but not warmer – the extra energy may:
        * be converted into mechanical energy (wind etc, although of course this is likely to end up as low grade heat eventually)
        * be converted by the biome into chemical potential energy
        * the near-surface temperature (what matters most to most people) may stay constant due to the latent heat and increased evaporation transporting the heat higher into the atmosphere
        * the near-surface temperature (what matters most to most people) may stay constant due to the heat being sequestered in the deep cold ocean (which I concede could be argued to be a “warmer planet”)

        etc etc.

        Thank you Matthew R Marler | October 30, 2013 at 5:03 pm |

        Nice summation of the processes that alter the standard radiation understanding. The agreed 3.7 watts for a doubling seems to be derived a bit circuitously.

      • Matthew R Marler

        Capt Dallas: I was under the impression that the up/down radiant “kernels” were a bit more sophisticated and assume an atmospheric temperature gradient with isothermal layers. Then adding CO2 would shift the ERL upwards, producing warmer layers below and cooler layers above. The problem is more with the isothermal-layer assumption, where water vapor tends to advect or expand instead of simply warming like a nice little molecule. Keeping track of H2O is supposedly the reason for the complex models, since CO2-inspired H2O increase is supposed to contribute 2/3rds of the potential warming.

        Yes, there are more complex models than the equilibrium model. To my knowledge, they don’t account for the non-radiative transfer of tangible and latent heat from the lower to upper troposphere, or take into account that 23% of incoming radiation is absorbed in the upper atmosphere and radiated spaceward without penetrating to the troposphere.

        My larger point in responding repeatedly to Mosher’s (and some others’) references to “the physics” is that they do not list exactly which subset of propositions they include as “the” physics, but I emphasize that whatever they mean, they do mean a subset. When you look at all the documented heat transfer processes, then there is no way to predict what a doubling of CO2 will result in.

      • Matthew R Marler

        Steven Mosher: All you need to know is that adding C02 raises the ERL.

        A review of the derivation will show that you don’t even know that. It’s another “flat non-rotating Earth with a uniform surface and uniform illumination and uniformly mixed atmosphere ignoring non-radiative transport and there aren’t any clouds anyway” result.

        The Earth climate system is a net recipient of energy around the Equator, a net radiator outward at the poles, with energy transported via diverse mechanisms from the Equator to the poles. “The physics” can not even tell us whether a doubling of CO2 increases or decreases the rate of this net transfer from the Equator to the poles.
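        For readers following this exchange, the textbook version of the ERL argument being disputed here fits in a few lines of arithmetic. This is only a toy sketch of my own: the fixed 6.5 K/km lapse rate and the illustrative 150 m rise in the radiating level are precisely the idealizations Marler is questioning, not established quantities.

        ```python
        # Toy sketch of the textbook "effective radiating level" (ERL) argument.
        # The fixed lapse rate and the 150 m ERL rise are illustrative assumptions.
        SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
        SOLAR = 1361.0    # solar constant, W m^-2
        ALBEDO = 0.3      # planetary albedo
        LAPSE = 6.5       # assumed mean tropospheric lapse rate, K per km

        # Effective emission temperature from global energy balance (~255 K)
        t_emit = ((SOLAR * (1.0 - ALBEDO)) / (4.0 * SIGMA)) ** 0.25

        # Height of the radiating level, given a 288 K surface and a fixed lapse rate
        t_surf = 288.0
        erl_km = (t_surf - t_emit) / LAPSE   # roughly 5 km

        # If added CO2 lifts the ERL by 150 m while the lapse rate stays fixed,
        # the surface must warm by LAPSE * delta_z to restore the same t_emit.
        delta_z_km = 0.15
        warming = LAPSE * delta_z_km

        print(f"ERL height ~{erl_km:.1f} km; implied surface warming ~{warming:.2f} K")
        ```

        On these numbers the argument yields roughly 1 K of surface warming per 150 m of ERL rise; Marler’s objection is that the uniform-surface, fixed-lapse-rate, no-clouds idealizations behind it are doing most of the work.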

    • Clever guys, sophisticated testing, good processes. And the people who write them telling us how great they are. What could possibly be wrong with the models?

      Just the simple fact that they don’t match reality. Apart from that they are wonderful. But sadly, matching reality is a prerequisite for climate models, not an incidental nice-to-have. Fundamental big problem.

      Ground them until fixed…waste of time, blood and treasure. Making decisions based on bad models is dangerous.

      • A fan of *MORE* discourse

        Latimer Alder asserts [utterly wrongly]  “The simple fact is that they [global-scale climate models] don’t match reality”

        Latimer Alder, please consider that the goal of climate models is *not* solely to predict decadal-scale climate variability, or even to predict CO2 forcing coefficients. The goal of global climate-models is *far* broader: to predict everything regarding the global climate!

        Why trust climate models?
        It’s a matter of simple science

        How climate scientists test, test again,
        and use their simulation tools.

        “What is so remarkable about these climate models is that it really shows how much we know about the physics and chemistry of the atmosphere, because they’re ultimately driven by one thing—that is, the Sun. So you start with these equations, and you start these equations with a world that has no moisture in the atmosphere that just has seeds on land but has no trees anywhere, that has an ocean that has a constant temperature and a constant amount of salt in it, and it has no sea ice, and all you do is turn it on. [Flick on] the Sun, and you see this model predict a system that looks so much like the real world. It predicts storm tracks where they should be, it predicts ocean circulation where it should be, it grows trees where it should, it grows a carbon cycle — it really is remarkable.”

        The guiding principle in modeling of any kind was summarized by George E.P. Box when he wrote that “all models are wrong, but some are useful.”

        Climate scientists work hard to ensure that their models are useful, whether to understand what happened in the past or what could happen in the future.

        Summary  Because the *objective* of climate models is so broad, the *verification and validation* of these same climate models is correspondingly broad and robust!

        That is why *careful* reading of the Ars Technica review is commended to all Climate Etc readers.

        It is a pleasure to assist in deepening your comprehension of this crucial scientific point, Latimer Alder!

        \scriptstyle\rule[2.25ex]{0.01pt}{0.01pt}\,\boldsymbol{\overset{\scriptstyle\circ\wedge\circ}{\smile}\,\heartsuit\,{\displaystyle\text{\bfseries!!!}}\,\heartsuit\,\overset{\scriptstyle\circ\wedge\circ}{\smile}}\ \rule[-0.25ex]{0.01pt}{0.01pt}

      • Latimer Alder

        @ A Fan

        ‘ The goal of global climate-models is *far* broader: to predict everything regarding the global climate!’

        You are digging your hole even deeper, mon brave.

        Against this ambitious goal, their failure is even more dramatic. To paraphrase Feynman:

        ‘I don’t care how clever the modellers are, how hard they work, how many degrees they have, how good they are at Fortran or how nice they are to cuddly animals and to old ladies crossing the road. If their models do not successfully forecast reality, then they are wrong.’

        This is surely not a difficult concept to grasp, even for Fan. When theory and observations conflict, it is the theory that is wrong.

        And all the models are wrong.

        We can leave for another day the question of why it took at least 15 years for these supposedly dedicated bright folks to bother to look out of their window and notice that the temperatures ain’t going up. But prima facie, it does not cast a good light on their dedication to ensuring that their models are reality- and observation-driven, not just theoretical constructs.

      • A fan of *MORE* discourse

        Easterbrook and Johns’ Engineering the Software for Understanding Climate Change (2011) will substantially allay your concerns, Latimer Alder!

        • The release schedule is not driven by commercial pressure, because the code is used primarily by the developers themselves, rather than released to customers.

        • The developers are also the domain experts. Most have PhDs in meteorology, climatology, numerical methods, or related disciplines, and most of them regularly publish in the top peer-reviewed scientific journals.

        • They control the code by having a small number of code owners and a much larger set of contributors, and a careful review process to decide which changes get accepted into the trunk.

        • The community operates as a meritocracy. Roles are decided based on perceived expertise within the team. Code owners are the most knowledgeable domain experts, and code ownership tends to be stable over the long term.

        • The developers all have “day jobs” – they’re employed as scientists rather than coders, and only change the model when they need something fixed or enhanced. They do not delegate code development tasks to others because they have the necessary technical skills, understand what needs doing, and because it’s much easier than explaining their needs to someone else.

        • V&V practices rely on the fact that the developers are also the primary users, and are motivated to try out one another’s contributions.

        It is a pleasure to help increase your appreciation of the rich and robust verification and validation processes that are practiced in climate modeling!


      • Latimer Alder

        @A Fan

        It is immaterial how they organise themselves to make the models.

        What is material is that the models do not reflect reality. Until they do, they are unfit for purpose.

        Are you incapable of understanding this concept? Or wilfully blind?

      • A fan of *MORE* discourse

        Latimer Alder says “What is material is that the models do not reflect reality.”

        There is good news for you Latimer Alder!

        The September GISS data are in, and it appears that “the pause” is ending … as predicted by climate models!

        Global Temperature Analysis
        – September 2013

        The globally-averaged temperature across land and ocean surfaces combined was 0.64°C (1.15°F) higher than the 20th century average, tying with 2003 as the fourth warmest September since records began in 1880. The six warmest Septembers on record have all occurred since 2003 (2005 is currently record warmest). September 2013 also marks the fifth consecutive month (since May 2013) with monthly-average global temperatures ranking among the six highest for their respective months.

        It is good to assist in further increasing your confidence in scientific climate models, Latimer Alder!


      • Steven Mosher

        models dont reflect reality. ever.
        that is not the measure of a model.

      • I am a bit confused. I thought the test of a model was to reflect the past and predict (or project) the future. If they are not to “reflect reality, ever” (the past is reality), when did that change?

      • David L. Hagen

        I affirm Latimer’s highlighting of the major failures of the scientific method by the present models.
        Fan, look again at the GISS temperatures. The records are not as significant as the trends, which have leveled off and suggest lower warming, or even declining temperatures.

      • Matthew R Marler

        A fan of *MORE* discourse: It is a pleasure to help increase your appreciation of the rich and robust verification and validation processes that are practiced in climate modeling!

        Yeah, they are great folks, intelligent and well-educated, and they work wonderfully hard and their models are wrong.

      • Matthew R Marler

        Steven Mosher: models dont reflect reality. ever.

        I hope you remember to put that into the published version of the BEST data analyses. There you have a hierarchical statistical model, and without that caveat it may seem that you are telling us something about the world we have been living in.

        What you meant to write is that a model is a systematic aggregation of a lot of ideas, hypothetical relationships, parameter estimates and measurements and such. It is a psychologically constructed “model” of reality, not a “reflection”. It can’t be taken as an approximation or representation or any such until after modelers have shown at least that it is reasonably accurate to the purposes in hand.

        Some models are very good: Newton’s laws, the models used for calculating the lift capacity of aircraft wings, the Hodgkin-Huxley model of neurons, the absorption/emission spectra of CO2. Some models, like the GCMs, are clearly too inaccurate to be used for planning human technology investments.

        Some models I have called “live” but not yet “tested”, including the lnCO2 models of Vaughan Pratt and WebHubTelescope. If those models survive the testing of the upcoming decades, then the concept of “warming in the pipeline” caused by CO2 will only apply to the deep ocean and other parts of the climate system than the surface measurements that they model. If Nicola Scafetta’s “live but untested” model survives the testing, then we may be willing to conclude that the climatological effect of increased CO2 is totally negligible.

        But your main claim, slightly rewritten, is correct: models entail ideas, not reality. Getting anything about reality from a model is an inference, yet another psychological process.

      • “Some models I have called “live” but not yet “tested”, including the lnCO2 models of Vaughan Pratt and WebHubTelescope”

        Unfortunately, these can probably be discounted; they do seem to track global average temperature, but regional temperatures are not in sync with one another. I think this will eliminate them.

      • Mi Cro is able to invalidate the concept of the mean value because some regions are high and some regions are low.
        Where do they find these people?

      • Isn’t CO2 supposed to be the only control knob? Shouldn’t there be a common trend? Or how does CSALT do at getting regional temps right, and which two locations do you take pressure from?

    • A fan of *MORE* discourse,

      I am reminded of the Banksy flap in New York City. Argumentation by stencil.

      Question What is the earliest date at which rational climate-change skepticism could cease entirely?

      Answer Within ten years. Which is quite a short time! [emph. added]

      Let’s pose the question in a more rational form:

      “What is the earliest date at which climate-change skepticism becomes the consensus?”

      A lot less than 10 years.

      Over the weekend, for example, Quebec cancels zombie apocalypse training scenario [emph. added]:

      The provincial government has stepped in to cancel plans for a zombie-themed emergency training exercise.

      Such a theme has been used elsewhere. [CDC, FEMA, et al]

      Public Security Minister Stéphane Bergeron [said], “I thought … the theme of the workshop had taken on a greater importance than its goal and that it was better to change it … so as not to undermine the real purpose of the activity, which is and remains a very important exercise for civil security.”

      Over the weekend. Abrupt policy change.

      A reasonable earliest credible date for replacement of an irrational climate-advocacy consensus with an irrational climate-skepticism consensus might be the 2014 mid-terms. More probably, the 2016 Presidential campaign. Depending.

      All eyes on the Arctic icecap, next season.

      • A fan of *MORE* discourse

        Scientists and science-minded conservatives are alike in foreseeing a return to science-respecting conservatism, eh Ted Clayton?

        The Ars Technica article Why trust climate models? (and its comments) provide solid insights into the STEM community’s appreciation of this unitary (and accelerating) science-and-politics trend.


      • Conservative temperaments produce a high portion of our science degrees, More Discourse, but they tend to be unwelcome in academia. So, they are heavily over-represented in the military & industry.

        The military, of course, oversees the finest scientific instrumentation, has the best labs, and addresses the most advanced science questions. For the highest stakes. Including climate … which of course is a quite-serious security mission.

        Sadly, everyone can’t access the military work. But the President does. Congress can.

        Our leading civilian science relies on military hand-me-downs.

        Conservative science isn’t returning from anywhere. It remains where it has been all along – so far out in front of the Dr. Hansens and the EAUs, they forget who blazes the trail.

      • A fan of *MORE* discourse

        Ted Clayton says “The military, of course, oversees the finest scientific instrumentation, has the best labs, and addresses the most advanced science questions.”

        Yes, and that’s why the military appreciates that climate change is scientifically real, strategically serious, and accelerating in pace.

        Thank you for your observations Ted Clayton!


      • But do the military figure climate change is anthropogenic? (I can’t do videos, More Discourse.)

        There are military treatments that accept the AGW premise, and go with it. If the President wants to know more about CO2 from coal plants, they don’t argue with him, even if they think CO2 is a red herring. The military is huge, and does include people of all opinions & interpretations.

        But overall, yeah, the military are both socially & scientifically conservative. And – oh boy! – skeptical.

    • Sorry Fan, this citation about the software “quality” or “quality procedure” is nonsense, and it shows you don’t know what we are talking about. We’re not talking about technical aspects of writing code. The model failure is not caused by bugs in the code. The models failed because the assumptions on which they are based are wrong, or the quantification of the parameters used is incorrect.
      The most brilliant and well tested code won’t produce good results from bad assumptions.

  35. Chief Hydrologist

    “The winds change the ocean currents which in turn affect the climate. In our study, we were able to identify and realistically reproduce the key processes for the two abrupt climate shifts,” says Prof. Latif. “We have taken a major step forward in terms of short-term climate forecasting, especially with regard to the development of global warming. However, we are still miles away from any reliable answers to the question whether the coming winter in Germany will be rather warm or cold.” Prof. Latif cautions against too much optimism regarding short-term regional climate predictions: “Since the reliability of those predictions is still at about 50%, you might as well flip a coin.” http://www.sciencedaily.com/releases/2013/08/130822105042.htm

    Yes – breathless gushing notwithstanding – flipping a coin is truly remarkable.

    In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential. http://www.ipcc.ch/ipccreports/tar/wg1/505.htm

    The TAR is talking here about perturbed physics model ensembles that are still in their infancy more than a decade later.
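    The TAR’s “ensembles of model solutions” idea is easy to illustrate in miniature. Below is a toy perturbed-physics ensemble of my own devising (nothing like a real GCM, and the parameter ranges are purely illustrative): a zero-dimensional energy-balance model is run many times with a perturbed feedback parameter, and the statistic of interest is read off the distribution of runs rather than from any single trajectory.

    ```python
    import random
    import statistics

    def energy_balance(forcing, feedback, heat_cap=8.0, years=100, dt=0.1):
        """Zero-dimensional energy-balance model: C dT/dt = F - lambda * T.

        forcing  -- radiative forcing in W m^-2 (3.7 ~ doubled CO2)
        feedback -- climate feedback parameter lambda in W m^-2 K^-1
        heat_cap -- effective heat capacity in W yr m^-2 K^-1
        Returns the warming (K) after the given number of years.
        """
        temp = 0.0
        for _ in range(int(years / dt)):
            temp += dt * (forcing - feedback * temp) / heat_cap
        return temp

    random.seed(42)
    # Perturbed-physics ensemble: the feedback parameter is uncertain, so draw
    # it from an (illustrative) range and rerun the model for each draw.
    ensemble = [energy_balance(3.7, random.uniform(0.8, 2.0)) for _ in range(500)]

    ensemble.sort()
    print(f"median warming: {statistics.median(ensemble):.2f} K")
    print(f"5-95% ensemble range: {ensemble[24]:.2f} to {ensemble[474]:.2f} K")
    ```

    The single point of the sketch is the TAR’s: with an uncertain parameter, the deliverable is the spread of the ensemble, not any one model run.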

    Can we expect a computer scientist to have much of an idea? Can we expect FOMBS to have any idea at all? Why not rely on actual experts?

    http://www.pnas.org/content/104/21/8709.long
    http://rsta.royalsocietypublishing.org/content/369/1956/4751.abstract

    • A fan of *MORE* discourse

      Chief Hydrologist avers “long-term prediction of future climate states is not possible.”

      Energy-balance models, global dynamical models, and purely statistical models, all predict (with considerable confidence) that five years from now, the sea-level will be higher.

      How is it, Chief, that annual-to-decadal dynamical chaos does not obstruct these predictions? The world wonders!

      Conclusion  We’ll know that global warming is over when the seas stop rising.

      It’s not complicated, eh Chief Hydrologist?


      • Chief Hydrologist

        Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision. http://www.pnas.org/content/104/21/8709.long

        Progressive denialism is a wonderful thing. The IPCC says prediction is not possible – and for the same reasons neither is projection. This is confirmed by global leaders in the field.

        The rest is merely babble that I have dealt with at length previously.

        http://judithcurry.com/2013/10/26/open-thread-weekend-38/#comment-405191

      • AFOMD,

        And I predict with 100% certainty, that in five years you will be not one whit more intelligent.

        Pretending that assumptions about the future are facts does not make them so.

        Live well and prosper,

        Mike Flynn.

  36. Over at WUWT, Bob Tisdale asserts that the way the IPCC is handling the model/data divergence is to adjust near term model-based predictions downward by 10%, while maintaining the scary long term predictions as-was.

    http://wattsupwiththat.com/2013/10/29/ipcc-adjusts-model-predicted-near-term-warming-downwards/

  37. The IPCC climate models are simply useless for prediction – discussion of them is really a waste of time. For a different forecasting method see
    http://wattsupwiththat.com/2013/10/29/commonsense-climate-science-and-forecasting-after-ar5/

  38. I don’t know why anyone would think the money spent on climate models was wasted. Progressive politicians pay sympathetic scientists to provide predictions of future catastrophic climate conditions.

    The scientists provide those predictions undeterred by the failure of reality to conform to those predictions. The politicians keep using the predictions to keep and expand their power and control over the economy.

    Rinse and repeat.

    The models are, like the IPCC, doing exactly what they were created and funded to do.

  39. Matthew R Marler

    “In then addressing the question of how GCMs have come to occupy their dominant position, we argue that the development of global climate change science and global environmental ‘management’ frameworks occurs concurrently and in a mutually supportive fashion, so uniting GCMs and environmental policy developments in certain industrialised nations and international organisations.”

    In his book “The Great Chain of Being”, Arthur Lovejoy wrote of “metaphysical anguish”, the internal psychological pressure to believe that what you have spent a great deal of time studying (say the writing of Immanuel Kant) must be both important and true. It has also been called “cognitive dissonance reduction”, “post-decision dissonance reduction” and such. The climate modelers and their colleagues can hardly be expected to give up the idea that their work has been important and their results true just because of some short-term inaccuracies. Persistence through doubt is a recognized virtue, and their community supports them in it. For the people who study the psychology of “denialists” to ignore the psychological impulse toward dissonance reduction, or Lovejoy’s “metaphysical anguish”, as it affects belief in the CO2 hypothesis, shows their bias in favor of CO2 belief. Irrational psychic and social processes affect the believers and unbelievers alike.

    It is valuable to continue the intellectual and scientific debate, but the goal has to be to inform and perhaps persuade people who do not already have firm opinions. Very few people committed to the concept of AGW or the details in the models will change their minds; over the years we have seen that very few have.

  40. Matthew R Marler

    The existence of disagreement between climate model predictions and observations doesn’t provide any insight in itself to why the disagreement exists, i.e. which aspect(s) of the model are inadequate, owing to the epistemic opacity of knowledge codified in complex models.

    “Epistemic opacity” is a good phrase.

    I modify my former phrase “climate science is full of cavities” to “the epistemic cavities of climate science”.

  41. I know it’s well accepted in the climatology community that when you don’t have temperature data for someplace, you make it up.

    What I produce is an average of the actual surface station measurements. I thought that was what science is. I guess your definition is something different.

    And you know what, I don’t care if you think this is wrong. I think there are a lot of people who will read this who also think this is one of the places climatology went off into the weeds. I got a laugh when BEST published that repeating Jones’s and Mann’s work got them the same answers. Well, duh.

  42. Schrodinger's Cat

    The models are flawed, some of the assumptions are wrong. We have now had 17 years of divergence between the modelled output and observation. I doubt if any climate scientist disputes that.

    What I find absolutely incredible is that the scientists still cling to their faith in the flawed models. Their peers (excluding our host) continue to support the models. Are they waiting for a miracle? Are they betting that a chaotic system will return to warming eventually and then they can claim to be right? Are they busy retrofitting delays into the models so they can subsequently claim the hiatus was predicted?

    The denial cannot go on indefinitely. Can someone explain to me why these models are still being used to underpin hugely expensive policies when they are clearly invalid? What has happened to scientific integrity?

    • I find the Cat’s diatribe puzzling. Anyone who follows the debate here knows what the warmers’ answers are. Why then ask these questions as if there were no answers?

  43. Reblogged this on pdx transport.

  44. Chief Hydrologist

    James Hurrell and colleagues in an article in the Bulletin of the American Meteorological Society stated that the ‘global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ Somewhat artificial is somewhat of an understatement for a paradigm shift in climate science.

    The weight of evidence is such that modellers are frantically revising their strategies. They are asking for an international climate computing centre and $5 billion (for 2000 times more computing power) to solve this new problem in climate forecasting. The monumental size of the task they have set themselves cannot be exaggerated. In the interim – and for the foreseeable future – you may as well toss a coin.

    http://journals.ametsoc.org/doi/pdf/10.1175/2009BAMS2752.1

  45. Off subject, but I was at Starbucks this morning and saw this article by Al Gore and David Blood:

    The Coming Carbon Asset Bubble
    Fossil-fuel investments are destined to lose their economic value. Investors need to adjust now.

    http://online.wsj.com/news/articles/SB10001424052702304655104579163663464339836

    I found it kind of ironic that he started with this…:
    “After the credit crisis and Great Recession, it seemed ridiculous to have thought that investing in subprime mortgages was a good idea. As with most market “bubbles,” the risk of giving 7.5 million mortgages to people who couldn’t possibly pay them off was somehow invisible to many investors at the time.”
    …when it was the Clinton administration that required banks to issue sub-primes.

    He seems to be very expert when it comes to Bubbles!

  46. If the models are just research tools (one could accept that they are useful for this) then why would one want to change the whole world economy on their say-so?

    • The world’s economy is changing anyway. High grade crude oil is going bye-bye.

      • We will use the low grade crude, coal and whatever other hydrocarbons we can scrape up. Just watch.

      • I admit I was pretty worked up over the Peak Oil message, a decade or so back. It passed, though.

        DARPA is showing off these 4-legged trotting robots … should be just the ticket for all those coal-seams that tapered down too low for a man.

      • Like I said, high-grade peak oil is waving bye-bye to you.

        Julian Simon was wrong. Finite resources means finite resources.

        Why even argue something so basic?

      • Matthew R Marler

        WebHubTelescope: The world’s economy is changing anyway. High grade crude oil is going bye-bye.

        What can be developed sooner and cheaper than low-grade oil, natural gas and methane clathrates?

      • If high grade crude is going away, then the greens should be happy! Problem solved without having to impose their wills on anyone. As the supply goes down, the price goes up. And that will do more for solar or wind than any billions Obama wants to give them.

      • Chief Hydrologist

        There are 100 years of oil supply reserves at current usage. Not to mention huge increases in gas reserves that are rapidly coming online.

        It is difficult to imagine what monomania inspires webster to ignore reality and substitute his own.

    • A specification of that thing called “the world economy” without relying on a model deemed realistic might be welcomed by economists.

      The best model of the universe is the universe, preferably the same universe. Although holodecks might be nice.

    • Chief Hydrologist

      We are not quite running out of oil yet – http://www.eia.gov/analysis/studies/worldshalegas/ – and gas supplies just keep growing pell-mell.

  47. If the discrepancy between climate model projections and observations continues to grow, will the gig be up in terms of the huge amounts of funding spent on general circulation/earth system models?

    Well, I don’t know about huge sums of money, but I think that climate models in and of themselves are useful to test our understanding – in fact modelling in general is. I don’t think the issue is that the climate models have proven too flawed (I would say “incomplete”) to be useful in predicting what our climate might be like under increasing human emissions; I think the problem is the over-confident conclusions drawn from them before they have been properly validated by observation.

    Personally, I think we should keep an eye on the climate, I think we are/were right to be concerned on the impact humans might have had on it, and whether or not we do, it’s still important to strive for an understanding of it, and I think modelling provides an important role.

    Turning it around for a moment, if modelling incorporates our best understanding of how the climate system works, and observations are diverging from predictions based on that understanding, it validates the skeptical criticisms of that understanding that have raged for more than a decade. But it also should tell us where to look (or more accurately where not to look) to gain a better understanding of climate. It’s clear there are either factors that have not been accounted for (“we can’t think of anything else that could be warming things up, so it must be CO2”) or factors we don’t know enough about to account for them properly.

    What would happen if a climate model incorporated “The Stadium Wave” hypothesis? I think it would be interesting to see how that would play out.

  48. We do models because they are fun to do. Great fun. Nothing wrong with it as long as you don’t confuse models with the world out there.

  49. A fan of *MORE* discourse

    BREAKING NEWS
    Big Data, Big Carbon, and Big Healthcare

    How Big Data is destroying
    the U.S. healthcare system

    by Robert Cringely

    Big Data has changed the U.S. health insurance system and not for the better. There was a time when actuaries at insurance companies studied morbidity and mortality statistics in order to set insurance rates. This involved metadata — data about data — because for the most part the actuaries weren’t able to drill down far enough to reach past broad groups of policyholders to individuals.

    In that system, insurance company profitability increased linearly with scale so health insurance companies wanted as many policyholders as possible, making a profit on most of them.

    Then in the 1990s something happened: the cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis. This moved the health insurance business from being based on setting rates to denying coverage.

    In the U.S. the health insurance business model switched from covering as many people as possible to covering as few people as possible — selling insurance only to healthy people who didn’t much need the healthcare system.

    The goal went from making a profit on most enrollees to making a profit on all enrollees. Since in the end we are all dead, this really doesn’t work as a societal policy, which most of the rest of the world figured out long ago.

    U.S. health insurance company profits soared but we also have millions of uninsured families as a result.

    Given that the broad goal of society is to keep people healthy, this business of selling insurance only to the healthy probably can’t last. It’s just a new kind of economic bubble waiting to burst.

    Some might argue that the free market will eventually solve this particular Big Data problem. How? On the basis of pure economic forces I don’t see it happening. If I’m wrong, please explain.

    Tell us all in detail how this will work.

    Background: Robert Cringely is among the most-respected US computer experts.

    Conclusion  21st century Big Data is challenging old-school Big Carbon free-market conservative orthodoxy on multiple fronts. Conservatism must recreate itself or perish, because “willful ignorance” just doesn’t work any more, in climate-change, energy economics, *or* health-care.

    I’m sorry sir. Those *ARE* the numbers.


    • No, More Discourse, Cringely is nothing like “among the most-respected US computer experts”. He is a minor media figure, a writer.

      His own About Page makes no claim to computer-expertise. Only that he writes about them.

      Wikipedia, in their article, is less kind.

      In 1998, it was revealed[8][9][10] that Stephens had falsely claimed to have received a Ph.D. from Stanford University and to have been employed as a professor there. Stanford’s administration stated that while Stephens had been a teaching assistant and had pursued course work toward a doctoral degree, he had never held a professorship nor had he been awarded the degree. Stephens then stated that while he had received a master’s degree from the department of communications and completed the classes and tests required for the Ph.D., he acknowledged that he failed to complete his dissertation. Asked about the resulting controversy, Stephens told a reporter: “[A] new fact has now become painfully clear to me: you don’t say you have the Ph.D unless you really have the Ph.D.”[11]

    • A fan of *MORE* discourse

      Cringely’s web-page says “It’s not that he [Cringely] is so smart, but his friends are smart.”

      Words by Cringely, Big Data analysis by Cringely’s silicon-valley friends, links by FOMD!


    • Then in the 1990s something happened: the cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis. This moved the health insurance business from being based on setting rates to denying coverage.

      A very good parallel to “climate science”, though undoubtedly not in the way you would like.

      Like assertions of “global warming”, that statement is categorically false. It is a lie told to stupid people, preying upon their ignorance to get them to support certain policy objectives.

      Insurance companies do not make money denying customers. This is especially true of high risk customers. High risk customers are highly desired. High risks mean high policy value and thus high profits. No insurance company would address high risk by denying coverage. They address high risk by charging commensurately high premiums, and make commensurately high profits.

      What moved health insurance companies from selling rates to denying coverage was not “big data”, it was Big Government. Among the things that Big Government did was enact laws that limited the premiums that health insurance companies are permitted to charge. A typical version of this is a state law that prohibits health insurance companies from charging any customer a premium that is more than twice their average premium. If your risk is much higher than average, then the law makes a policy that would cover your risk illegal. Insurance companies would LOVE to write you a high risk, high premium policy, and you might very well be willing to purchase it, but the law prohibits you and the insurance company from entering into that voluntary contract. It is not profit motive that causes denial of coverage, it is Big Government mandate.

      What you are telling is a lie, and it is a lie intended to get people to support Big Government – just like the lies told about ‘global warming’.

    • Fan worries about conservatives: “…this business of selling insurance only to the healthy probably can’t last.”

      I guess health insurance is not for the markets anymore. Yes we can have the government do it. Then I suppose government will charge the same for everyone. Then they will change their collective mind and charge more if you smoke or eat excessive pizza. And find other reasons to charge some people more. And then rather than being able to sue a bad insurance company that denies some coverage, you can try to sue your government. Fan, it seems like a number of changes, not all of them good.

      We have a glimmer of hope with Health Spending Accounts. However it appears that Obamacare is calling into question how cheap the high deductible plans will be going forward.

      The idea of an HSA is to pay lower premiums for a high deductible plan (as defined by the IRS, make sure yours qualifies) and bank the difference in an HSA account that is used for medical bills. It is beautifully tax efficient. The accounts are/were an example of self sufficiency. I in effect partly self insure my family, paying about 60% on premiums and 40% on the bills I pay with my HSA money. The 60% is the backstop disaster money. The 40% is for more normal bills (including dental).

      I have seen many of my W-2 earning clients with decent paying jobs, get moved into HSA model plans. The pushback one might expect, I haven’t heard much of it. Perhaps they like being treated like adults, who knows?

      Fan, I don’t know what to say about big data. Does my health care insurance provider know too much about me? Should they? How about we put on our wish list an HSA Obamacare option, as long as this thing seems inevitable.

      Thank you Fan for furthering the discourse.

      • A fan of *MORE* discourse

        Ragnaar asks “Does my health care insurance provider know too much about me?”

        That is a mighty good question to ask Ragnaar!

        It is a mighty tough question to *answer*, however.

        Once corporate computers know … and never forget … and act upon … and trade among themselves … the entirety of your personal medical history, the entirety of your genomic and mitochondrial DNA sequence, and its methylation patterns, and the entire medical history of all your relatives, and of your children, and your entire job history and your annual income … then in practice there gets to be mighty little difference between the Efficient Market Big Brother and the Omniscient Fascist State Big Brother.

        Question  Do rational societies *really* want healthcare markets that are 100% science-driven … and 100% efficient … at the price of being 100% privacy-invading … and irrevocably totalitarian?

        All around the world, most societies say “no thanks.” And Cringely’s science-savvy computer-literate Silicon Valley audience appreciates these real-world nuances pretty well.


      • “in practice there gets to be mighty little difference between the Efficient Market Big Brother and the Omniscient Fascist State Big Brother.” Oh really? The difference is N. In the former, N>1. In the latter, N=1. Can you guess what N is, FOMD?

      • A fan of *MORE* discourse

        FOMD asserts “in practice there gets to be mighty little difference between the Efficient Market Big Brother and the Omniscient Fascist State Big Brother.”

        NW replies  “The difference is N. In the former, N > 1. In the latter, N = 1. Can you guess what “N” is, FOMD?”

        That’s easy! “N” is the *number* of unrestrained globe-spanning privacy-invading nanosecond-acting database-systems that rule the lives of individual citizens.

        Thank you for your clarifying question NW!


  50. Steve Fitzpatrick

    The post hoc addition of the ‘revised’ red box projection (unsupported by published literature!) in the middle panel of fig 11.25 would be funny were it not so sad a commentary on the IPCC’s Grand Poobahs. Unless there is a clear drop in temperatures in the next two decades, the AR5 projections will be claimed to have been ‘correct’. No matter what happens over the next 20+ years, the IPCC authors will claim to have been ‘proven’ right! You might say the fix is in. Why did they not just state clearly the models and reality have diverged, and the model ensemble projection of climate sensitivity (~3.2C per doubling) looks much less likely to be correct? That a kludge like the middle panel passes for science with the Poobahs is sad indeed.

  51. In the 1970s and 80s, using models was the only alternative for predicting future warming rates. That was largely true even in the 1990s, but now we start to have enough data to base estimates of climate sensitivity on observations. The interpretation of the observations takes advantage of models but is not very sensitive to the details of the models used.

    GCM type models remain an essential tool for atmospheric sciences and in learning more about the behavior of climate, but at the present they are not the most central component in estimating the rate of future warming. That conclusion is strengthened by the observation that most GCMs seem to deviate somewhat from the observations, not very much but enough to make them suspect as the best tool for estimating future warming.

    Models have not so far shown great skill in describing regional climates, but even so they are important also in attempts to predict other aspects of future warming than average surface temperatures.

    The limited and poorly known skill of present climate models is unfortunate, but it does not significantly weaken the conclusions on the strength of AGW.

    • Chief Hydrologist

      Observation is precisely the point. This was posted at realclimate along with the usual post hoc rationalisations and obligatory sceptic bashing.

      http://s1114.photobucket.com/user/Chief_Hydrologist/media/rc_fig1_zpsf24786ae.jpg.html?sort=3&o=26

      http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

      2/3 of the recent warming occurred in 1976/1977 and 1997/1998. Excluding this for the very good reason that it was all ENSO ‘dragon-kings’ at times of climate shift rather than AGW leaves a residual of some 0.1 degrees C/decade. Natural variability contributes at least 0.1 degrees C to this over the period to 1997. It leaves less than 0.05 degrees C/decade for greenhouse gases. Not enough to make any difference over any reasonable time frame.

      This is the simplest of calculations – that it is not recognised is a symptom of the most pervasive example of madness of the crowd in the history of the human race.
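The subtraction above can indeed be written in a few lines. This is only a sketch of the comment’s own round numbers, not an attribution analysis: the ~2-decade span is my assumed reading of “the period to 1997”, and because the 0.1 degrees C natural contribution is stated as a lower bound, the result is an upper bound on the greenhouse-gas residual.

```python
# Back-of-envelope version of the "simplest of calculations" above.
# Inputs are the comment's round numbers; the 2-decade span (~1976-1997)
# is an assumed reading of "the period to 1997".

residual_trend = 0.10  # deg C/decade after excluding the 1976/77 and 1997/98 ENSO shifts
natural_total = 0.10   # deg C from natural variability over the period (stated as "at least")
decades = 2.0          # ~1976-1997

natural_trend = natural_total / decades       # deg C/decade from natural variability
ghg_trend = residual_trend - natural_trend    # upper bound left for greenhouse gases

print(f"Natural variability: {natural_trend:.2f} C/decade")
print(f"Left for GHGs: <= {ghg_trend:.2f} C/decade")
```

With the natural contribution taken as a lower bound, the greenhouse residual comes out at no more than about 0.05 degrees C/decade, which is the figure in the comment.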

      • Because if greenhouse gases hadn’t risen, temperatures would have dropped after 1998 to much lower levels than they did.

      • “Because if greenhouse gases hadn’t risen, temperatures would have dropped after 1998 to much lower levels than they did.”

        We been saved! Hallelujah! Hallelujah! Hallelujah!

      • I repeat again the nice little curving dilemma for the catastrophists: The higher the sensitivity, the colder it would now be without man’s input. So pick a sensitivity that frightens you, and calculate how cold it would now be without human GHGs.
        =============

      • Chief Hydrologist

        It warms and cools naturally – surely that’s the point. The recent warming once you subtract ENSO and decadal variability is 0.1 degrees C and not 0.6 degrees C.

    • stevefitzpatrick

      Pekka,
      Of course the divergence of climate models has no impact on reality; GHG’s have to cause some warming.

      But the bigger questions are (and always have been) how much warming and how fast. The models do not help answer these key questions, and have instead only led to a level of hysteria about future warming and hysteria-driven demands for nonsensical and extreme public energy policies. Most observationally based estimates indicate a most probable equilibrium sensitivity somewhere below 2C per doubling and a median estimate (from the PDF’s of those estimates) well below that of the models (eg. ~2.2-2.5C per doubling). My personal evaluation is that even these observationally based estimates are probably a bit on the high side, but at least not crazy-wrong like the climate models. Rational and economically justifiable public policies on energy production and use require reasonably accurate estimates of future GHG driven warming; the climate models and the IPCC are not helping to generate reasonable estimates of sensitivity. The IPCC helps to delay good, sensible public policies by encouraging crazy policies which will never be adapted.

      • “…never be adapted.” if it was adapted, maybe it wouldn’t be so crazy. Let’s hope for “never adopted.”

      • stevefitzpatrick

        John,
        I saw that as soon as I posted. Typos can’t be fixed on this blog… too bad.

      • For the central estimate of the rate of warming I would take TCR of about 2C, noting that the likely range is as wide as 1 to 3C. That’s enough from the climate science for the rate of warming, but we would like to know also about changes in precipitation and other details. From a European perspective one big question is the future precipitation in the Mediterranean; other regions have their corresponding problems.

        If we could finally get over this fight over that number we could move to the larger uncertainties that relate to the impacts, and the still larger ones in judging the values of alternative policies. The policies should be looked at more broadly. While climate change is one of the major issues, it has got too much emphasis relative to other major issues – at least it has in public debate, while it has not really affected so much what we have actually done. A broader approach might lead to more robust solutions even from the point of view of climate.

      • Stunning conclusion, Pekka; we’ve got bigger problems than the mild beneficial warming we MIGHT get from anthroCO2. Welcome to the 21st Century.
        ========

      • Chief Hydrologist

        More precisely, we ask whether the impact of human activities on the climate is observable and identifiable in the instrumental records of the last century-and-a-half and in recent paleoclimate records? The answer to this question depends on the null hypothesis against which such an impact is tested. The current approach that is generally pursued assumes essentially that past climate variability is indistinguishable from a stochastic red-noise process, whose only regularities are those of periodic external forcing. Given such a null hypothesis, the official consensus of IPCC (1995) tilts towards a global warming effect of recent trace-gas emissions, which exceeds the cooling effect of anthropogenic aerosol emissions. . .

        The presence of internally arising regularities in the climate system with periods of years and decades suggests the need for a different null hypothesis. Essentially, one needs to show that the behaviour of the climatic signal is distinct from that generated by natural climate variability in the past, when human effects were negligible, at least on the global scale. . .

        Can we identify with measurable certainty deviations of the current record from predictions based on past natural variability? If so, such deviations have to be attributed to new causes. The “suspects” clearly include human effects, and attribution to them will become thereby both easier and more reliable. Michael Ghil

        Again from Michael Ghil – we have sensitivity in a chaotic climate.

        http://s1114.photobucket.com/user/Chief_Hydrologist/media/Ghil_fig11_zpse58189d9.png.html?sort=3&o=16

        Sensitivity is defined in relation to the nearness of the system to tipping points. In this mathematical theory of sensitivity – TCR and ECR do not have a meaningful definition.

  52. In Pygmalion of Greek mythology, the Cypriot sculptor carves a woman out of the finest ivory, then prays to Venus to grant his creation life.

    Modelers, in the modern mythology of GCM’s, craft a replica of the Earth’s climate from the fundamentals of physics, then pray to the Governmental Gods to grant their wish and breathe life-giving funding into their creation.

    Pygmalion’s statue metamorphosed from inanimate material to flesh and blood which itself becomes aged and dies. All natural and believable. Applause.

    GCM’s in this modern tragedy also take on a life of their own. Instead of succumbing to the vicissitudes of the relentless pounding of reality checks and the aging process, they remain enfeebled representations of their original intent. Modelers cry, it’s all fundamental and plausible. Boo.

  53. Are there any analyses of the major input parameters into GCMs???

    What evidence is there of systematic refinement of GCMs?

    We know there is a load of “input data” for a GCM. So there are historical data and presumably individual projections of multiple parameters. But then over time more data is collected for the next round. Is there any assessment of the individual data projections in comparison to actual data over time?

    Major important data might include such things as volcanic eruptions, cloud cover, ice cover, albedo, GHG concentrations, aerosols, etc.

    Are individual projection algorithms for individual data items refined over time?

    Surely one would expect that continuing refinement of GCMs should result in increasing accuracy with succeeding generations of model.

    No increase in accuracy over time implies not just lack of skill but deteriorating skill, since skill should increase over time.

    What about comparisons of older models? Does an older model version do better or worse after being fed a few more years’ worth of real data?

  54. “If the pause does indeed continue for another 2+ decades, then this arguably means that the scenario of time evolution of the predictions, on timescales of 3+ decades, has been falsified.”

    Only if the models also fail to predict (project) Ocean Heat Content over the same period (Figure 9.17).

    A pause, according to the current theory, would increase OHC. Only if OHC is decreasing at the same time as the surface can it be said that the models have “failed” in their projections. This inevitably would mean a large slowdown in ocean expansion/sea level rise. I think most skeptics would doubt the likelihood of these combinations.

    • Chief Hydrologist

      ‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf

      Your current theory is incorrect.

      But we do know that ohc reflects changes in TOA flux.

      http://www.met.reading.ac.uk/~sgs02rpa/PAPERS/Loeb12NG.pdf

      Too bad it’s all SW

    • Sea level rise as a symptom of CAGW is an interesting proposition.

      The current rise is purported to be 1mm/year, more or less. Given the published area of the oceans, this translates to an increase in ocean volume of about 360 km^3. Given the volume of ocean and picking a likely coefficient of volumetric expansion (not easy, since it varies wildly with temperature and pressure), it would require an average increase in the temperature of the ocean of 1-2 milli-degrees C to expand by 360 km^3, assuming that nothing else was happening to the oceans that would affect sea level.
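For what it’s worth, the expansion arithmetic in that paragraph can be reproduced with round numbers. The ocean area, ocean volume, and expansion coefficient below are commonly quoted approximate values I have substituted in for illustration (the coefficient in particular varies severalfold with temperature and pressure, as noted above); this is a sketch, not a measurement.

```python
# Rough check: how much average ocean warming does 1 mm/yr of purely
# thermosteric (thermal-expansion) sea-level rise imply?
# All constants are round, commonly quoted approximations.

OCEAN_AREA_M2 = 3.6e14    # ~3.6e8 km^2 of ocean surface
OCEAN_VOLUME_M3 = 1.3e18  # ~1.3e9 km^3 of seawater
BETA_PER_K = 2.0e-4       # volumetric expansion coeff.; varies ~1e-4 to 3e-4 /K with T, p

rise_m = 1.0e-3           # 1 mm/yr of sea-level rise
delta_volume_m3 = OCEAN_AREA_M2 * rise_m                     # added volume per year
delta_T_K = delta_volume_m3 / (OCEAN_VOLUME_M3 * BETA_PER_K) # implied mean warming per year

print(f"Added volume: {delta_volume_m3 / 1e9:.0f} km^3/yr")
print(f"Implied mean ocean warming: {delta_T_K * 1000:.1f} millidegrees C/yr")
```

With these inputs the added volume is about 360 km^3 per year and the implied warming is on the order of 1-2 millidegrees C per year, matching the comment.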

      Now I am not an expert in anything, never mind climate change, but I do know some things for sure:

      a. Every river feeding into the oceans deposits a bunch of silt in them, every year. Do we know how much?
      b. There is an enormous number of underwater ‘features’ squirting a variety of substances into the deep oceans, including super heated water and magma. Do we know the quantity in km^3/year or the calories of heat/year associated with it?
      c. The tectonic plates are tecting away, changing the shape and volume of ocean basins and by extension, sea level. Do we understand how the continents are moving and how the size and shape of the ocean basins are changing well enough to be sure that the annual change in ocean volume is negligible compared to the 360 km^3 necessary to change sea level by 1 mm?
      d. Humans are pumping water out of aquifers around the world and dumping it into rivers, which in turn dump it into the sea. With what precision do we know the amount?
      e. I’m sure that there are other things affecting sea level. What are they and what are their effects?

      Measuring temperatures with millidegree precision in a laboratory under controlled conditions is non-trivial. Do we actually have an instrumentation system in place that can measure the temperature of the oceans, planet wide, with better than millidegree precision, unattended, over multi-year time frames? Do we understand all factors other than temperature that affect sea level so that their effect on sea level can be separated from temperature change? Do we understand everything that influences ocean temperature well enough to separate their influence from that of anthropogenic CO2 and identify millidegree changes in ocean temperature as a signature of anthropogenic CO2? Do we have evidence that a change of average ocean temperature of a millidegree per year poses an existential threat that demands immediate action to control anthropogenic CO2?

      A climate scientist that can convince me that we know the answers to all the above with enough precision to attribute current rates of sea level rise to anthropogenic CO2 could have made a fortune over 8-10 October 1871 by selling buckets of gasoline to the residents of Chicago.

      Bob Ludwick

      • Bob Ludwick,

        Measurements? Precision? Wild and woolly wacky Warmist warriors don’t need no stinkin’ precise measurements! They got faith! They got rhythm! They got models!

        Live well and prosper,

        Mike Flynn.

      • “Sea level rise as a symptom of CAGW is an interesting proposition. ”

        Exactly. From a skeptic’s point of view, or more specifically an attribution-skeptical point of view, the idea that humans are the main cause behind ocean heat content, thermal expansion of the ocean, and sea level rise, is an insult to one’s intelligence. Arguing that these inexorable processes need to somehow stop, slow down, or change course, in order to call into question AGW, is, in my view, completely silly. With such a vague and interchangeable theory, just about everything these days is due to human induced global warming, the hiatus being the latest in a long line of fudge factors. All distinction tends to be lost in overconfidence… And if OHC and surface temp did go down synchronously? They’d just make up some other bs to explain what’s happening. It’s ridiculous!

    • Exactly, it is insufficient to just use surface temperature to falsify AGW. The theory predicts the sum of two effects, not how they are divided at a given time. In a period when the surface temperature pauses as the forcing increases, AGW dictates that the OHC would continue to increase without pause at least until the surface temperature starts to rise again.

  55. How do the GCMs perform in terms of modelling the temperature over the last 2000 years? Why do we only ever seem to see illustrations of their performance over the last couple of decades? To me that timeframe is far too short to evaluate them properly. And we mostly only see the global temperature average plotted. The regional patterns of variation would provide far more data, which would allow us to assess the realism of a model. Looking at just one variable (global temperature) over the short span of a couple of decades is a poor test of the adequacy of these models.

    When you are modelling a chaotic system you cannot expect accurate prediction; even a perfect simulation will not predict accurately, as it is subject to the butterfly effect. To demonstrate its fidelity you would need to look elsewhere. Where I’d look would be to see whether the nature of natural variation predicted by the model matches the variation observed in reality. That includes spatial as well as temporal variation.

    Which of the models successfully produce an El Nino – La Nina phenomenon? Do they replicate the other commonly observed climate oscillations? How successfully do they describe monsoons and other seasonal variations in rainfall? Are any of them capable of producing something like a Little Ice Age or MWP? If they can’t do any of these things then in my opinion they are already falsified – and we don’t need to wait another 20 years to find that out.

    • Mosher,
      Interesting but what’s your point?

    • So the CAGW movement is the real war on women (and the poor). Shoot, I coulda told you that.

    • Nice find Mosher. I am quite a fan of Hans.

    • Steve Fitzpatrick

      Mosh,
      I had seen that some time ago, but it was worth seeing again. Thanks.
      The video makes a fundamentally moral argument, and one which is profoundly true: Good intentions count for nothing; what matters are the consequences of our actions and especially how they impact the lives of others. When I hear demands for drastic reductions in energy use, I think always of the poverty, disease, and suffering which inevitably go hand in hand with very low energy use. Drastic reductions in energy use in the developed world are not going to happen, and people are not going to impoverish themselves pursuing impractically costly ‘renewables’ on a grand scale either. The truly poor have no economic possibility of using costly renewable energy sources, so insisting on renewable energy for poor people dooms them to continued poverty and is, IMO, simply immoral. We need more and cheaper energy for the future, not less and more expensive energy. If CO2 emissions must be reduced to control warming, then low cost non-fossil energy sources are the only real alternatives. Developing those low cost sources should be the focus of energy policy, not crazy demands for more expensive energy and reduced energy use.

      • + as many billions and trillions and googolplexes as you can imagine in all of the Universe!

      • Steve:

        Jerry Pournelle is fond of saying, accurately, that cheap, plentiful energy is the key to freedom and prosperity.

        I would be interested in the explanation by those seeking draconian reduction in anthropogenic CO2 as to why Jerry’s premise is inaccurate or, alternately, since it seems to me to be true prima facie, why for at least 50 years they have been doing everything within their power (quite a lot as it turns out) to enact policies which simultaneously INCREASE the price of energy and REDUCE its supply.

        Bob Ludwick

  56. If there are 100 models, all producing a different result, then at least 99 are giving a wrong answer. The problem arises when trying to establish which, if any, gives a useful (in the sense of correct) answer.

    If none of the hundred show, for example, the current “pause”, then none at all are useful. The new problem, then, is to determine the reason that the models are failing. There could be many, but a few that spring to mind are:

    1. Incorrect assumptions about the physics involved.

    2. Poor program design – logic flaws, etc.

    3. Poor programming skills – lack of in-depth knowledge of the language, quirks, compiler eccentricities and so on.

    The list goes on, but it would seem logical to investigate the reasons that Nature does not seem to be bending to the will of the modellers. Fairy tales such as energy from the Sun penetrating several hundred meters of ocean, before deciding to heat deeper colder water, will not correct models which did not account for this unforeseen effect.

    Some decision makers read blogs such as this, and are starting to realise that some of the “science” is not as “settled” as the Warmist merchants of doom would have them believe.

    For example, it is physically impossible to stop an object from cooling by surrounding it with CO2. It is therefore also impossible to raise the temperature of a cooling body by surrounding it with CO2. The Earth is one such cooling body.

    Another example is the failure of CO2 to perform its supposed heating function at night, when presumably the CO2 is still present in the atmosphere. Generally, the temperature of the atmosphere starts to fall after sunset, regardless of the presence of CO2.

    One would assume that decision makers would ask to see at least basic experimental data to support these hitherto unverified perfect insulating and warming properties of CO2. And hopefully some will, before more millions are wasted.

    Live well and prosper,

    Mike Flynn.

    • Mike – A good textbook, such as Marshall & Plumb’s Atmosphere, Ocean, and Climate Dynamics: An Introductory Text, would clear up some of your oft-repeated misconceptions.

      • He don’t need no book larnin.

      • If a good textbook and some fine larnin’ was actually sufficient, the IPCC wouldn’t be in a smokey back room, casting Runes out on the table.

      • Biddle,

        I am unable to find anything in that textbook which would “clear up some . . . misconceptions”. It certainly doesn’t seem to contain any experimental data to show the magical properties of CO2 which must be assumed in order to postulate that rising CO2 levels cause “global warming”.

        I may have missed it. I quickly perused the book, but it contained so many errors and assumptions, I dismissed it as being based on fancy rather than fact, when it came to promoting the non existent “greenhouse effect”.

        However, if you wish to provide some facts, so that I may be free of misconceptions, I would appreciate your input. Predictions, projections or scenarios are not facts. If you wish to talk about energy balances, you will need to clearly define your terms, and demonstrate the workings of the “greenhouse effect” at night. And so on. Of course you can’t, so like AFOMD you exhort me to read a religious tract, in order that I may “believe”.

        Nice try.

        Live well and prosper,

        Mike Flynn.

      • I would advise heading on over to Science of Doom and checking out some of the early posts on the basic principles of GHG warming. Also the comment threads where things are explained to some very stubborn people with views similar to yours.

      • Look Ted, the IPCC is not wrong about everything. It helps to confine the arguments to those things that they have just made up. But if you want to fly with the Skydragoons, have fun. Have they mailed your little Skydragoon helmet and goggles yet? Make sure you get the plastic one-size-fits-all decoder ring.

      • I think Linux is awesome, Don, but the community’s penchant for “RTFM” is hard to watch. So I fired on the ‘git a text’ and ‘larnin’ line when I shoulda let it slide … particularly since I don’t share history with you guys yet.

        True, the IPCC is not without its redeeming qualities. Whether that will save it if the climate-facts don’t unfold just-so, is another thing we wait to see.

    • You’re OK, Ted.

  57. It is obvious. There is a fundamental flaw in all the models, indicating that climate science is not well enough understood. At birth the science was said to be ‘settled’. Obviously that was wrong. But how come this was not exposed earlier? There was no international attempt to properly validate the models. Instead, international funds were wasted on building multiple models when one good one was all that was needed.

    Sure, models of complex systems are difficult. In fact they surpass the understanding of their human creators, so it is important that they be properly validated according to the rules of science. These rules require that every process within the model be separately validated against real life. If that had been done, we would not have today’s dilemma.

    So how do we fix it? Only by going back to those important scientific principles.

    • Heh, first ‘settled’ then birth. The science was settled alright, with a Rosemary’s Baby.
      ==========

  58. Just a quick point on CS.

    If the CO2 levels have risen, and the supposed “global temperature” has not risen, or even fallen, does this not indicate that CS, at the very least, is a dreadfully inconsistent “forcing”?

    If you are one of the Wonderfully Wacky Witless Warmist Warriors, you will come up with all sorts of fanciful stories to avoid admitting that Nature is failing to cooperate with your fantasy. It doesn’t really matter. You can’t come up with a definition of CO2 “climate sensitivity” that simultaneously allows for increases, decreases, or no change as CO2 levels rise.

    CO2 does not raise the temperature of the Earth. Making CO2, on the other hand, obviously does, if the making involves oxidising carbon.

    And this is the main reason why the climate models are crap, notwithstanding the amateurish program logic and coding of the majority of models used to produce the gibberish masquerading as scientific output.

    I will of course retract my (possibly harsh) criticisms if anybody can show experimental proof of CO2’s ability to prevent an object from losing heat. Not slowing the rate of cooling compared with a vacuum. People who say that slowing the rate of energy loss is the same thing as increasing it are obviously prime candidates for membership of the Cult of Latter Day Warmism. Cooling is cooling. Even if it took 4.5 billion years for the Earth to cool to its present level, it still cooled. Obviously not the same as warming. If the crust starts to melt again, that’s a definite indication of heating.

    I could be wrong. After all, I can’t look into the future. Can you?

    Live well and prosper,

    Mike Flynn.

    • Mike CO2 slows the rate of cooling to space. That is what it does. All the rest is attempts at visualization of the process. The net result is what counts. Meanwhile the entire climate system oscillates changing both the inputs and outputs over daily to millennial scale.

      • Think of CO2 as dust on the cooling fins of a power amplifier in a big audio system. The cooling is definitely reduced by some amount but we don’t know if the volume knob is turned up enough for the power transistor to notice.

      • dalyplanet,

        The net result of losing energy to space is cooling. Temperature reduces. The entire climate system cannot create heat from nothing.

        It can reduce the amount of energy absorbed by the Earth system in numbers of ways. It can never increase the amount of energy emitted by the Sun, nor increase the amount of internal heat generated by the radioactive decay process.

        Live well and prosper,

        Mike Flynn.

      • dalyplanet,

        I prefer to think of CO2 as CO2. What do you think of that approach?

        Live well and prosper,

        Mike Flynn.

    • Mike Flynn, if the energy is supplied by a constant source (the sun), slowing the energy loss from the surface is equivalent to warming the surface. Again, I would use the home insulation comparison, but I think you don’t understand how that works either.

      • Jim D,

        Fill your vacuum flask (R value around 2500) with coffee at say 85C.

        Wait for it to warm up some (its rate of heat loss has been slowed, and you say that this is the same as warming). Hmmm. Not warming.

        Nature has just taught you that reducing the rate of cooling produces no heat. The Earth cools, slowly.

        Live well and prosper,

        Mike Flynn.

      • Pierre-Normand

        Your analogy is flawed since the coffee flask doesn’t have a continuous source of heating. The Earth continuously receives shortwave radiation from the Sun. For the analogy to be relevant you need to introduce something like an electric resistor to heat the flask. In that case, increasing the insulation of the flask would increase its equilibrium temperature.
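
Pierre-Normand's point can be sketched numerically. The following is a toy steady-state energy balance with invented numbers (the function and its parameters are illustrative only, not measured values): with a constant heater input P and a conductive loss (T − T_env)/R, the equilibrium temperature rises with the insulation R even though the insulation itself supplies no energy.

```python
# Toy steady-state energy balance for a heated, insulated flask.
# Heater input P (watts) must equal the leak through the insulation,
# P = (T_eq - T_env) / R, so T_eq = T_env + P * R. All numbers invented.

def equilibrium_temp(power_w, r_insulation, t_env_c):
    """Equilibrium temperature of a body with constant heating and a
    loss proportional to the temperature difference with surroundings."""
    return t_env_c + power_w * r_insulation

# Doubling the insulation doubles the rise above ambient, even though
# the insulation itself supplies no energy.
low = equilibrium_temp(power_w=10.0, r_insulation=2.0, t_env_c=20.0)   # 40.0
high = equilibrium_temp(power_w=10.0, r_insulation=4.0, t_env_c=20.0)  # 60.0
```

With the heater off (P = 0) the equilibrium is just the ambient temperature, which is the coffee-flask case; the disagreement above is about whether the system has a continuous energy source, not about insulation per se.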

    • Do you know what gloves are Mike? Get yourself a pair. Put one glove on. On your hand. Don’t worry, it doesn’t matter which hand. Now think. I know it’s hard.

      • OMG! I forgot to mention that this is really better if you have two hands. When you get the glove on, wave them around. Think.

      • Don Monfort,

        Find yourself a nice cold corpse. Put gloves on its hands. Might as well put a nice warm overcoat on it at the same time.

        Perform standard Warmist incantations. Whoops, corpse failing to warm up. Blame the corpse. Must be defective.

        Live well and prosper,

        Mike Flynn.

      • They call your affliction willful ignorance, mikey.

      • Don Monfort,

        Any climber missing a few fingers due to frostbite will no doubt be cursing that he wasn’t wearing a pair of your wondrous gloves. Do you fill them with CO2, or are the normal specialised climbing gloves defective? Maybe there is not enough CO2 in the Nepal Himalaya?

        Where can these magical gloves be obtained, or have you only got as far as a computer model so far?

        Live well,and prosper,

        Mike Flynn.

      • I thought you had changed the subject to corpses, mikey. They don’t climb mountains. I hope everyone else here soon realizes that you are a complete waste of time.

      • Don Monfort,

        I gather from your responses that you accept that insulators such as gloves provide no warmth (energy) whatever. They merely slow the loss of warmth (energy).

        Maybe you could exhort Professor Curry to delete my comments. I wouldn’t blame you.

        Live well and prosper,

        Mike Flynn.

    • Vaughan Pratt

      @MF: After all, I can’t look into the future. Can you?

      I forecast a rising sea. A great sea of comments by you on Climate Etc.

      • Vaughan Pratt,

        Any fool could make that assumption. Even me.

        Do you have a point?

        Live well and prosper,

        Mike Flynn.

  59. Judith Curry, in her previous post, Pause (?), took up the ins & outs of this matter. She said:

    Here I define “pause” to mean a rate of increase of temperature that is less than [the IPCC AR4 projection of] 0.17 – 0.2 C/decade.

    All that had to happen to create the need for a ‘name’ and a ‘definition’ was a clear failure to meet the official projection. Once the data weren’t meeting the AR4 trend, we had something that needed to be referred to, discussed, explained … and named, and defined.

    It doesn’t take anything remotely like your reduction to 0.05°C/decade to make clear that ‘something is wrong’. And that’s all that ‘pause’, ‘hiatus’, ‘decline’, ‘oh-crap’ etc really mean. We need to recognize & address the mismatch between model & data … and to do that we need a nomenclature.
    ===

    I too think events in the southern hemisphere, and especially relative comparisons between north & south, and between summers & winters (all of which may be related) are important.

    Ted

    • This was supposed to be a reply to Bob Droege.

      • Right,
        I missed that post where Judy did indeed define the pause.

        That is the famous post where Judy defined warming to be any trend greater than zero and the “pause” to be any trend less than 0.17 C per decade and no warming to be a trend less than zero.

        So when she says pause she really means warming.
        Or when she says warming she really means pausing.

        There is uncertainty in the data as well as the models, and we should take both into account in making any judgement on the performance of the models.

        If ENSO can cause ups and downs in global temperature of at least 0.1 C, and we are in the cool PDO phase which has more La Ninas, that weakens the case for any mismatch.

  60. Political Junkie

    Mike Flynn has a point.

    If we truly understand climate (95%), why do we need more than one model?

  61. R. Gates, Skeptical Warmist

    Some of you will appreciate the link between this picture and climate models, and some will not, but first, the picture:

    http://tinypic.com/r/e00j9u/5

    In the case of the meandering trail of a water drop down a window pane, every single one of the forces that control the path of that drop, from gravity to the intermolecular forces within the water and between glass and water, is very well understood and can be put, extremely accurately, into a model that would describe the “system” of glass, gravity, water drop, viscosity, etc. This model would even be able to predict, with a high degree of confidence, that this drop of water will wind up at the bottom of the window glass because of the “forcing” from gravity. Yet, despite all that is known about the basic dynamics involved in this water drop flowing down the window glass, no model could tell you the exact wiggly path the drop will take. There is far too much “internal variability” in the glass itself, the intermolecular forces, etc. to ever predict the exact position and path to be taken. So accurately knowing the dynamics of a system will not lead to an accurate prediction of the path. Dynamical climate models are exactly like this, only the system to be described is far more complicated. If a model can predict the drop will end up at the bottom of the glass, but not predict the exact path, is it still useful, though it is “wrong” in prediction of the exact path?
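
The water-drop point (deterministic rules, unpredictable path) can be illustrated with the textbook minimal example of sensitive dependence on initial conditions, the logistic map. This is only an analogy, not a climate model:

```python
# Deterministic but practically unpredictable: two runs of the logistic
# map started a hair apart. Both stay bounded in [0, 1] (the "drop
# reaches the bottom" part is predictable), yet their exact paths diverge.

def logistic_path(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_path(0.2000000)
b = logistic_path(0.2000001)  # same rules, start differs by 1e-7

bounded = all(0.0 <= x <= 1.0 for x in a + b)
tail_divergence = max(abs(x - y) for x, y in zip(a[20:], b[20:]))
```

The bounded attractor (where the trajectory ends up) is robust; the wiggly path is not, which is the distinction drawn above between useful and “wrong”.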

    • Chief Hydrologist

      Here’s another picture – http://rsta.royalsocietypublishing.org/content/369/1956/4751/F8.expansion.html – and some actual science – http://rsta.royalsocietypublishing.org/content/369/1956/4751.abstract

      Progressive denialism and their ad hoc rationalisations are a wonderful thing. Insane narrative metaphors notwithstanding – climate is more a balance than a window pane.

      http://www.nap.edu/openbook.php?record_id=10136&page=12

    • R. Gates,

      Well done analogy, but why even bother then if you know the outcome?

    • With the drop of water, I think we are modeling more than that. We are trying to model the whole pane and all the water on it. Occasionally it is pointed out that we can model complex airplanes. To me it’s similar. Aircraft models I’d guess are plane-centric. Mostly concerned with how the plane flies. There is curving of air streams and turbulence, but once they are behind the plane, why model that, unless you are doing a study on wing tip vortexes and light aircraft flying into them. With aircraft models, we are not modeling the entire system. The atmosphere is perhaps given density and humidity factors based on altitude. I imagine cross and tail winds are modeled. As well as atmospheric turbulence. But we still don’t have a complete picture of everything that happens in the system.

      • That part I understand, we are learning about the climate, but why only show temperature trends? If we know, per AGW CO2 theory, that temp will rise, why model it when we know it won’t be accurate?

      • R. Gates aka Skeptical Warmist

        We are still learning all the dynamics of the climate. It has so many interacting feedbacks (as any chaotic dynamical system does) that only supercomputers can even come close to REVEALING those dynamics. Climate models all about their usefulness in revealing dynamics that we could we would not understand without them. The notion that they can “predict” the exact path of the climate water drop as it makes its inevitable way down the window pane under the forcing of gravity is a false picture of what models are really useful for.

      • R. Gates,

        Why are the models output only in Temp? Or are there other parts that we don’t usually see?

      • R. Gates aka Skeptical Warmist

        M. Hastings,

        You can look at any parameter you want in the model output, but tropospheric surface temperature is the most easily digested by the general population. There is no dynamical reason that it is necessarily the most important, except that it was historically the easiest to measure as the attempt was made to match model output with actual temperatures.

      • R Gates,

        You say : –

        “Climate models all about their usefulness in revealing dynamics that we could we would not understand without them.”

        I accept that your typo was unintentional. You might care to name one “dynamic” that has been revealed by a supercomputer, rather than someone using their intellect.

        Go a little further. You might care to specify anything at all that has proved useful, as a result of these wonderful, magical, models. Predictions, projections, scenarios, don’t count. They haven’t happened yet.

        Surely something useful can be demonstrated after the expenditure of millions, if not billions, of taxpayers’ funds. Hopefully, something really big and exciting!

        Live well and prosper,

        Mike Flynn.

      • R. Gates says “attempt was made to match model output with actual temperatures.”

        So is it wrong then for a layman to question the model’s attempt to match actual temperatures? That may be the crux.

      • R. Gates aka Skeptical Warmist

        Mike Flynn,

        You ask for one useful thing that has come from models. Recall that I said they were useful in revealing dynamics that could not be seen, or oftentimes even measured, when models first revealed them. There are many examples, but probably the one that Is easiest to understand is the enhancement of the Brewer-Dobson circulation. The models consistently kept showing this, even though it was a counter-intuitive result. After all, the models were also showing a cooling of the stratosphere, so how could the Brewer-Dobson circulation be enhanced in such a cooling stratospheric mode? Furthermore, some models and early AGW theory indicated that a tropospheric “hotspot” would develop, but later and more advanced models more consistently showed the Brewer-Dobson enhancement to be a stronger dynamic than a tropospheric hotspot. Many of the so-called AGW skeptics latched on to the rather outdated hot-spot concept, but didn’t keep up with the Brewer-Dobson model output. More than a decade after the models showed the Brewer-Dobson enhancement, actual measurements have confirmed that result, and now vigorous research is being done into all the follow-on effects, such as a weakening of the QBO, and related effects on ENSO behavior.

      • R Gates,

        Dobson was proposing his thoughts in 1929, just a little bit before supercomputers came on the scene. Brewer was working for Dobson, and they were involved in finding out why high altitude aircraft left vapour trails in the WW2 era. Still no supercomputers. It is telling that : –

        “What Brewer needed were accurate measurements of temperature and frost point at flight altitude, and especially the latter proved difficult (and remains difficult to date).” . . . “It did not take very long until the Brewer-Dobson circulation made a real societal impact. Radioactive debris from the nuclear tests by the U.S. in the tropical Pacific, even though deposited into the tropical stratosphere and believed to stay in the stratosphere for many years, got transported poleward and downward back into the troposphere, where it got quickly mixed down to the surface (as had been predicted by Brewer).”

        No supercomputers involved. Two men, brains, and, critically, some actual measurements.

        Try again – surely to goodness you can find something factual. Oh, it has to be useful, as well.

        I assume you can’t, but go your hardest.

        Live well and prosper,

        Mike Flynn.

      • R. Gates aka Skeptical Warmist

        Don’t be a complete ass Mike Flynn. I ever never said the models discovered the BDC. For you to either mistakenly or intentionally conflate what I wrote to suit your own predefined purposes is quite asinine.


      • Surely something useful can be demonstrated after the expenditure of millions, if not billions, of taxpayers’ funds. Hopefully, something really big and exciting!

        Live well and prosper,

        Mike Flynn.

        Flynn, I don’t care how your Aussie government spends your Aussie tax money.
        If they want to spend it on Star Trek toys to keep you happy, fine by me.

        Make no mistake that you have zero impact on how the rest of the world spends their tax money.

      • R Gates,

        I repeat : –

        You said :

        ““Climate models all about their usefulness in revealing dynamics that we could we would not understand without them.”

        I accept that your typo was unintentional. You might care to name one “dynamic” that has been revealed by a supercomputer.”

        I do apologise for accepting your typographical errors as being unintentional. Only joking.

        I assumed that when you said “I ever never said the models discovered the BDC.”, you really meant “. . . never ever said . . . “. Correct me if I have erred.

        I now accept, that in the best Warmist tradition, conforming to normal English expression is the province of deniers and asses. I’ll try my best.

        But seriously, I confess to overlooking the “enhancement” when you stated “There are many examples, but probably the one that Is easiest to understand is the enhancement of the Brewer-Dobson circulation.”, and I do apologise.

        If that’s the best example of a useful dynamic resulting from supercomputing that you can come up with, I must admit to being significantly underwhelmed. I leave it to others to judge if their contributions as taxpayers were wisely used. Personally, I find the impact of the enhancements to the Brewer-Dobson circulation far outweighed by the ristretto coffee beside me.

        I have use for the coffee, and none whatsoever for enhancement of the Brewer-Dobson circulation. Call me a Philistine if you wish.

        Live well and prosper,

        Mike Flynn.

      • WebHubTelescope,

        Silly me. I wrongly assumed that the US taxpayer funded the vast majority of this silliness. Oh well, as long as they are happy with the vast benefits accruing from enhancement to the Brewer-Dobson circulation.

        I am sure you are right that they prefer to do without a few billion dollars worth of unemployment benefits, health care, various municipal services, and so on. I agree with you, let the beggars starve in the dark. Serves them right for ignoring global warming!

        Maybe you could provide copies of your CSALT model for their relaxation and enjoyment.

        Live well and prosper,

        Mike Flynn.

      • ” Mike Flynn | October 31, 2013 at 1:46 am |

        Silly me. I wrongly assumed that the US taxpayer funded the vast majority of this silliness. “

        Now you know why no one cares what you and your Aussie buddies think. Flynn, You are not an American taxpayer and can’t complain about how we spend our money.

      • WebHubTelescope,

        I obviously support the democratic right of US taxpayers to waste their money any way they like.

        I absolutely support your right to advocate diverting as much money as possible into something that has been shown to be of no practical use whatever.

        As an Australian I also absolutely support the right of our Government to dismantle any frameworks previously imposed at great pecuniary disadvantage on the taxpayers. The dissolution of the so-called Climate Commission was, hopefully, merely the first step. The witless carbon tax should be next to be abolished.

        It is obviously your responsibility to give to your Government until you can give no more, and then borrow until you are poverty stricken beyond all hope. Good luck with that.

        And now, I must depart, laughing, into the night.

        Live well and prosper,

        Mike Flynn.

  62. I’ve often wondered what would happen if we could run an experiment to determine which forcing has a greater impact on climate, where the first test is to make the atmosphere 100% CO2 and see what happens to temperature. The second test would be to turn off the sun and see what happens to temperature. I just realized, after reading many of your posts, that one of these tests occurs every day and the result is obvious.

    • This comment was supposed to be a reply to mike flynn

      • Barnes,

        I hope you are in agreement with me – we may have differing views on what is obvious. I tend not to notice sarcasm. Maybe I’m a bit slow.

        But just in case, I believe that Nature carried out the first experiment a few billion years ago. Maybe not quite 100% (there are always a few impurities), but apparently compensated by raising the pressure to 100 bars or so.

        As we note today, perched on a reasonable thickness of solidified crust, the Earth cooled. And will continue to do so, barring something extraordinary occurring in the future.

        Live well and prosper,

        Mike Flynn.

    • If you find one of my posts where my name is a link, there are about five blog posts where I analyze NCDC data with this question in mind.

  63. the ”models” are suffering from obesity, not fit for the catwalk

  64. Perhaps we are all talking at crossed purposes because of a lack of clear definitions.
    In order to fix this, I propose the following definitions be used:

    DATA: information obtained by direct empirical measurement.

    PREDICTION: output of a MODEL or SIMULATION that is *not* contingent on the unknown future state of a variable.

    PROJECTION: output of a MODEL or SIMULATION that *is* contingent on the unknown future state of a variable.

    MODEL: a method of PREDICTING or PROJECTING future events where parts of the method do *not* rely on calculations from the equations derived from basic physical laws.

    SIMULATION: a method of PREDICTING or PROJECTING future events where *every* part of the method relies *only* on calculation derived from basic physical laws.

    Using these definitions:
    F1 CFD aerodynamics are SIMULATIONS that make PROJECTIONS of lap times based on DATA from past experience. (the driver of the vehicle being the contingent variable)

    Nuclear bomb calculations are SIMULATIONS that make PREDICTIONS of the size and other parameters of the explosion resulting from detonating a weapon of a particular design.

    Climate is PROJECTED by MODELS to be primarily driven by CO2, based on both DATA and other MODEL PROJECTIONS.

    Weather is PREDICTED by MODELS.

    I believe that the difference between a prediction and a projection is well accepted in the climate research community (after all, they have defended their projections using this very definition!). I also believe that they should be stressing the difference between a model and a simulation, but alas this does not appear to be “on message” for them.
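
Kneel's prediction/projection distinction can be sketched in a few lines. The function names and numbers below are illustrative only, not any standard API: a PREDICTION follows from known inputs alone, while a PROJECTION additionally takes a value for a variable whose future state is unknown (a scenario).

```python
# Illustrative names only. A prediction depends only on known inputs;
# a projection additionally takes a scenario for a contingent variable.

def predict_decay(n0, half_lives):
    # Prediction: nothing contingent; the answer follows from the inputs.
    return n0 * 0.5 ** half_lives

def project_warming(t0_c, decades, trend_c_per_decade):
    # Projection: trend_c_per_decade stands in for a contingent scenario
    # (e.g. an assumed emissions pathway).
    return t0_c + decades * trend_c_per_decade

print(predict_decay(100.0, 2))         # 25.0, regardless of any scenario
print(project_warming(14.0, 2, 0.2))   # one scenario
print(project_warming(14.0, 2, 0.05))  # another scenario, another answer
```

The point of the distinction: a projection cannot be scored against observations without also checking whether its contingent scenario actually came to pass.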

  65. Kneel says,
    “Climate is PROJECTED by MODELS to be primarily driven by CO2, based on both DATA and other MODEL PROJECTIONS.”

    Don’t you mean Temp is PROJECTED……..

    • In simple terms, perhaps – but since we are being told that all sorts of other variables are being changed as well, I have not limited it to just temp.

      • Agreed, but the output of the model is Temp, correct?

      • R. Gates aka Skeptical Warmist

        One of hundreds of potential outputs from a global climate model is tropospheric surface temperatures. Warming of the troposphere is but one of many effects of climate change caused by increasing GH gases in the troposphere.

        “One of hundreds of potential outputs from a global climate model is tropospheric surface temperatures. Warming of the troposphere is but one of many effects of climate change caused by increasing GH gases in the troposphere.”

        Most of which are wrong.
        http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/
        Results

        Upon preliminary inspection of the model’s simulation of surface air temperature it was noted that temperatures were much cooler over the Tibetan Plateau region than the rest of the surrounding region. (The Tibetan Plateau region, for our purposes, is defined to be the region between 14°N and 46°N latitude and 50°E and 125°E longitude.) As you can see in Figure 1, below, there was a large discrepancy in temperature that led us to investigate this region in more depth. It was decided to limit the analysis to July due to both time constraints and the fact that we are more interested in how the model handles extremities. The following variables were compared in an in-depth study of this region: precipitation, absorbed solar radiation, total cloud cover, cloud top pressure, and surface air temperature (Surf_Temp). It is expected that the low temperatures over this region can be explained by the model’s deficiencies in simulating other variables.

        [ INSERT FIGURE 1 ]

        Figure 1. Box in Surf_Temp plot shows the observed discrepancy that prompted further investigation of this region. Region also boxed out on the Primary Grid.
        Surface Air Temperature

        East of the Himalaya Mountains, the model underestimates the Surf_Temp within a range of 3°C and 30°C. Moving eastward, however, it is found that the difference between the model and the observations decreases. West of the Himalayas, towards India, Pakistan, Afghanistan, and Iran, the model overestimates the surface air temperature. Surface air temperature is directly influenced by insolation (incoming sunlight) reaching the surface. One would expect that the absorbed solar radiation at the surface should be less as well. This was confirmed by the difference plot below (See Fig. 3). For the most part, where solar radiation was overestimated, the surface air temperature was overestimated, and where solar radiation was underestimated, the temperature was underestimated.

        [ INSERT FIGURE 2 ]

        Figure 2. Difference Map (SST-OORT) Blue region shows model underestimates the temperature over the Tibetan Plateau, and overestimates it to the west of the Himalayas.
        Precipitation

        Moving towards the west away from the Himalayas, the model tends to underrepresent precipitation in India by 8 to 16 mm/day. Over the Tibetan Plateau region itself, the model estimates too much precipitation (3 mm to 6 mm). In addition, in the Bay of Bengal, the model also predicts too much precipitation. Over the South China Sea, precipitation in the model is less than the observed. The increased precipitation over Mongolia and the Gobi Desert could be linked to the exaggerated amount of low clouds in the region. (See Figure 3.) This is because most of the rainmaking clouds are low clouds.

        [ INSERT FIGURE 3 ]

        Figure 3. Difference Map (SST-LEGATES) Blue region shows model underestimates the precipitation over parts of India, while slightly overestimating precipitation over Mongolia.
        Absorbed Solar Radiation

        Generally over the Himalayas and up through Mongolia and the Gobi Desert, the model underestimates the amount of solar radiation absorbed. Over the Tibetan Plateau, the amount of absorbed solar radiation is significantly underestimated (see Figure 4). However, just above India, near the Himalayas, the model significantly overestimates absorbed solar radiation.

        Why is the absorbed solar radiation not simulated well by the model? To answer this question, a difference plot of cloud top pressure was examined. Cloud top pressure is the measure of the height of a cloud. The greater the cloud top pressure, the lower the height of the cloud. This is relevant because of the fact that low clouds (such as nimbostratus) are responsible for reflecting insolation back to the atmosphere, thereby reducing the amount of solar radiation reaching the surface. Figures 5 and 6 demonstrate the model is exaggerating the production of clouds over the Tibetan Plateau region. Figure 5 suggests these are low clouds.

        [ INSERT FIGURE 4 ]

        Figure 4. Difference Map (SST-ERBE) Blue region shows model underestimates the amount of absorbed solar radiation received at the surface of the Tibetan Plateau Region.

      • R. Gates says, “One of hundreds of potential outputs from a global climate model is tropospheric surface temperatures.”

        No doubt and as it should be, however why would you expect anyone not so informed as yourself to conclude that the models temp output is not important when compared to actual measured temps?

      • “Agreed, but the output of the model is Temp, correct?”

        ONE of the outputs is temperature. As R Gates says, there are hundreds if not more CALCULATED OUTPUTs –

        CALCULATED OUTPUT: a variable of interest NOT provided to the model/simulation, but rather derived from a combination of other inputs to the model/simulation

      • “the models temp output is not important when compared to actual measured temps?”

        He didn’t say that – as far as I can see, he didn’t even imply it. After all, he made no mention of comparisons or measured data in that post at all!

        Unlike some others I will not name, R Gates appears to me to be a reasonable fellow – I do not agree with all that he says, but in my view (based on what he says here at this blog), he is careful to discriminate between facts and opinion, data and speculation. I would suggest that PERHAPS one of the areas we MAY disagree about WRT facts, is using spacial and/or temporal averages of a variable of interest – personally, I would suggest that while this can be a useful exploratory tool, it should be avoided in published analyses, with the possible exception of notes such as “…examination of average of X indicates Y as an area of interest…” etc. Why? Several reasons, however the most relevant here would be that such procedures “throw away” valuable data that can likely help you move from MODEL to SIMULATION (ie, refine the model). If you think averages are a better measure than not averaging, than perhaps you would be content to hold onto the AC power feed to your house – average voltage is 0! (this is for illustrative purposes ONLY – I am NOT suggesting you do this. Touching live electrical wires is dangerous and may cause severe injury or death)

  66. I’m sure Obama and some of his minions can give you a perfectly valid explanation of how the models don’t have to reflect the actual temperature in order to be right.

  67. I don’t know about the jig being up, but I do know the same type of connection exists between air quality modeling and regulation. That de facto connection promotes the development of questionable policy, litigation, and slow progress. Job security for sure, cleaner air, not always.

  68. Taking a realistic look at the temperature record,
    http://www.woodfortrees.org/plot/hadcrut4gl/from:1970/mean:12/plot/hadcrut4gl/from:1970/mean:12/trend/plot/hadcrut4gl/from:1970/mean:12/trend/offset:0.1/plot/hadcrut4gl/from:1970/mean:12/trend/offset:-0.1
    we see a pattern of rises and falls, with hardly any periods paralleling the actual mean trend, but seemingly constrained within 0.1 degrees of the mean (marked as rails). We see a period just prior to 1998 that looked just like now, and we expect a rise of 0.1-0.2 degrees shortly, just as it has done every four years or so. This is the persistence forecast. It is very daring to suggest that this time it will stay below the lower rail for any length of time when that hasn’t happened in the last 40 years.
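
    The rails-and-trend reading can be made mechanical: fit a least-squares trend and check whether the residuals stay inside ±0.1 degrees. A sketch on a synthetic series (the numbers below – a 0.017 °C/yr trend plus a bounded 4-year wiggle – are made up to stand in for the linked HadCRUT4 data):

```python
import math

# Synthetic stand-in for an annual anomaly series: linear trend plus bounded wiggle.
years = list(range(1970, 2014))
series = [0.017 * (y - 1970) + 0.08 * math.sin(2 * math.pi * (y - 1970) / 4.0)
          for y in years]

# Ordinary least-squares trend fit.
n = len(years)
mx = sum(years) / n
my = sum(series) / n
slope = (sum((x - mx) * (v - my) for x, v in zip(years, series))
         / sum((x - mx) ** 2 for x in years))
intercept = my - slope * mx

# Do the residuals stay inside the +/- 0.1 degree "rails"?
residuals = [v - (slope * x + intercept) for x, v in zip(years, series)]
within_rails = all(abs(r) <= 0.1 for r in residuals)
print(round(slope, 4), within_rails)  # trend recovered; wiggles stay inside the rails
```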

  69. WHAT IF?

    I know this isn’t the subject of this thread, but I can’t help but wonder WHAT IF in the next 15 years the air temperature observations jump back up to the mean of the model projections?

    It would be a leap of more than 0.5 of a degree in a short time frame, but it still seems possible to me. All it would take is some shifts in the natural variability processes, many of which are in downward swings right now.

    • I think uncertainty is always a valid discussion topic.

      It is clearly the case that sources of noise and natural fluctuations in the temperature record have served to mask the underlying upward trend.
      One can easily remove the noise and get a better appreciation of the underlying trend.

      Consider that this last GISS release of September’s global average temperature of 0.74C put the time series right back on the model envelope.
      http://contextearth.com/2013/10/30/detailed-analysis-of-csalt-model/

      This rising-trend warming envelope hasn’t changed with 130+ years of CO2 accumulation, and I doubt it will stop anytime soon. The TCR is 2C per doubling of CO2, looking forward.

      • Chief Hydrologist

        The trend is 0.1 degrees C/decade. At least half of that is natural variation. We have known that for a long time. You can project that forward and it is not very scary – but that’s not how climate works.

        ‘The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change… Over the last several hundred thousand years, climate change has come mainly in discrete jumps that appear to be related to changes in the mode of thermohaline circulation.’ Wally Broecker

      • Chief, Science does not work by assertion.

      • Chief Hydrologist

        It is like lecturing a goldfish. I have been through this before – quite recently.

        Here is the essential – and quite simple – picture.

        http://s1114.photobucket.com/user/Chief_Hydrologist/media/rc_fig1_zpsf24786ae.jpg.html?sort=3&o=26

        The recent warming – 1976 to 1998 – is the important period.

        The theory of climate shifts is bound up with the stadium wave – there were climate shifts in 1976/1977 and 1998/2001. These are accompanied by extreme fluctuations in ENSO, known as noisy bifurcations or – more colorfully – dragon kings.

        ‘We develop the concept of “dragon-kings” corresponding to meaningful outliers, which are found to coexist with power laws in the distributions of event sizes under a broad range of conditions in a large variety of systems. These dragon-kings reveal the existence of mechanisms of self-organization that are not apparent otherwise from the distribution of their smaller siblings. We present a generic phase diagram to explain the generation of dragon-kings and document their presence in six different examples (distribution of city sizes, distribution of acoustic emissions associated with material failure, distribution of velocity increments in hydrodynamic turbulence, distribution of financial drawdowns, distribution of the energies of epileptic seizures in humans and in model animals, distribution of the earthquake energies). We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point.’ http://arxiv.org/abs/0907.4290

        The idea is not essential to understanding the past but is critical for a theory of the decadal evolution of climate.

        ENSO is implicated in the recent warming: the extreme transition from the 1976 La Nina to the 1977 El Nino, and the equally extreme 1998 El Nino. You can see the changes in temperature at these times on the graph Kyle Swanson posted at realclimate. This accounts for most of the recent warming.

        The residual when you exclude these extremes is some 0.2 degrees C. Of this – most is due to cloud changes based on the available data.

        e.g. http://s1114.photobucket.com/user/Chief_Hydrologist/media/WongFig2-1.jpg.html?sort=3&o=65

        Understanding the future evolution of climate requires understanding the nature of climate shifts. The decadal regimes – and there are much longer term regime changes – alternately warm and cool the planet over 2 to 4 decades. The only real question is what the mechanisms of the shifts are.

        It seems to involve cloud and albedo.

        e.g. http://s1114.photobucket.com/user/Chief_Hydrologist/media/Earthshine-1.jpg.html?sort=3&o=64

        Earthshine changes in albedo shown in blue, ISCCP-FD shown in black and CERES in red. A climatologically significant change before CERES followed by a long period of insignificant change.

      • Chief Wet Nappy, Did you say something?

  70. Judith, This is a timely post and I agree with virtually everything you said here. It looks to me like GCM’s may be ready for retirement in favor of some fundamental research.

    • Chief Hydrologist

      ‘Figure 12 shows 2000 years of El Nino behaviour simulated by a state-of-the-art climate model forced with present day solar irradiance and greenhouse gas concentrations. The richness of the El Nino behaviour, decade by decade and century by century, testifies to the fundamentally chaotic nature of the system that we are attempting to predict. It challenges the way in which we evaluate models and emphasizes the importance of continuing to focus on observing and understanding processes and phenomena in the climate system. It is also a classic demonstration of the need for ensemble prediction systems on all time scales in order to sample the range of possible outcomes that even the real world could produce. Nothing is certain.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

      There are fundamental problems with models that are yet to be resolved – in the only way they can be – as PDF’s from perturbed physics ensembles. From the other side of the fence – the hardware, software and data to model a ‘fundamentally chaotic’ system may be some time coming.

    • Ivory towers are like a padded cell.

    • Global climate models should be discontinued in favour of regional short-term meteorology research, aimed at improving the accuracy and duration of regional weather forecasts and short-term climate trends, which adequately track decadal natural variability and synch reasonably well with multi-decadal and longer oscillations.

      • R. Gates aka Skeptical Warmist

        “Global climate models should be discontinued in favour of regional short-term meteorology research…”
        ————
        Very short-sighted and narrow perspective. All regional weather patterns exist within a global dynamic of climate. This would be like trying to understand an ecosystem by studying just one species.

      • Yeah, that would be foolish, like trying to explain climate by referencing just one trace gas in the atmosphere.
        =============

      • R. Gates aka Skeptical Warmist

        But of course no one tries to do that (except in the conflated and confused imaginations of so-called skeptics), so except for that, in the real world of global climate study hundreds of parameters are used and studied.

      • Global climate is an oxymoron, R Gates. Even if an average of some kind were used as a reference point for global climate, this average could well show no discernible movement due to swings in regional averages.

        Even if the data being used to verify and validate GCM output were indeed anything close to what is actually being measured and recorded at the regional level – which it is not – effective verification and validation seems problematic.

        It seems that the practice of global climatology ranks alongside studies of cosmic rays and of the gravitational forces being exerted in our neck of the universe, and any conclusions are, like cosmology’s, of academic interest only.

        Communities sorely need more reliable and timely forecasts of local weather and regional climate trends, so that economic decision-making can be carried out and communities can prepare to deal more effectively with climate change.

  71. Re: the quote from Matt Briggs in part: You have it by now: if the predictions derived from a theory are probabilistic then the theory can never be falsified. This is so even if the predictions have very, very small probabilities.

    This shows a profound misunderstanding of probabilistic modelling. I illustrate with a simple example. Take a single die of the kind used in gambling or the game Monopoly. Our model of this die is that it is fair – all numbers 1 to 6 have equal probability. We can certainly devise a test of whether our model (that our real die is fair in any practical sense) is correct – say, one that lets us conclude with 0.95 confidence either that the real die is fair or that the model is false.

    But there are so many problems with the idea that climate is modeled probabilistically that I hardly know where to begin listing them. Who has such a model – not an average of some 30-odd model results?
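
    The die test above is entirely standard. A sketch using Pearson’s chi-square goodness-of-fit statistic (simulated rolls; 11.07 is the 95% critical value for 5 degrees of freedom): a fair die’s statistic stays small, while a loaded die’s blows through the threshold, so the probabilistic model is rejected in exactly the practical sense described.

```python
import random

def chi_square_stat(counts, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(counts, expected))

random.seed(42)
n_rolls = 6000
expected = [n_rolls / 6] * 6  # fair-die model: 1000 expected per face

# Fair die: every face equally likely.
fair = [0] * 6
for face in random.choices(range(6), k=n_rolls):
    fair[face] += 1

# Loaded die: face six twice as likely as each of the others.
loaded = [0] * 6
for face in random.choices(range(6), weights=[1, 1, 1, 1, 1, 2], k=n_rolls):
    loaded[face] += 1

CRITICAL_95 = 11.07  # chi-square critical value, 5 degrees of freedom, alpha = 0.05
fair_stat = chi_square_stat(fair, expected)
loaded_stat = chi_square_stat(loaded, expected)
print("fair:  ", fair_stat)    # typically well below 11.07
print("loaded:", loaded_stat)  # far above 11.07: the fair-die model is rejected
```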

    • Chief Hydrologist

      ‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

      The perturbed ensemble relies on hundreds or more runs of individual models with slightly different inputs. It is the only way that models can theoretically approach prediction. There are a couple of examples but the theory seems little more than sympathetic magic at this stage.
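
        The Lorenz result quoted above is easy to reproduce. A minimal sketch (forward Euler on the Lorenz-63 equations with the standard parameters; step size and run length are chosen here just for illustration): two runs differing by one part in 10^8 in the initial condition end up on completely different parts of the attractor.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

dt, n_steps = 0.001, 40000          # 40 model time units
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)          # perturbed by one part in 10^8

max_sep = 0.0
for _ in range(n_steps):
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(max_sep)  # grows to the size of the attractor itself: deterministic skill is gone
```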

      • Generalities, Chief. For any specific climate model, what is the forecast lead time beyond which the model cannot be treated deterministically? For Newton’s laws of motion, this time was on the order of 100+ years. And who is doing this type of analysis?

  72. I was surprised by this; “The existence of disagreement between climate model predictions and observations doesn’t provide any insight in itself to why the disagreement exists,”
    How come climate scientists are ‘baffled’ by this?
    I do, and so does every other astronomer/solar specialist.
    It’s just the 61-year solar barycenter cycle.

  73. The gig will never be up. As early as 2005, if not earlier, the propagandists were predicting unprecedented warming. Now that temperature is not tracking with CO2, they have simply changed the story, and/or are denying that it isn’t warming.

    Isn’t the story now that ENSO or El Nino has caused the pause? If you remind the propagandists, “But I thought you told us throughout the 90’s and early 2000’s that CO2 was the dominant factor,” you get a blank stare… they continue as if they didn’t hear or don’t understand… “CO2 will end the Earth. We need to act NOW!”

  74. David Springer

    Reposted from Roy Spencer’s blog:

    I too would like to know

    “where is the characterisation curve for IR forcing of water in the presence of an atmosphere, at power densities around 4W/m2?”

    Anyone? Anyone? Bueller? Anyone?

    Micky H Corbett, Ph.D. says:

    October 9, 2013 at 3:27 PM

    I actually came over here from WUWT, go figure. TonyB, your belief in AGW is admirable but if you were a scientist and more importantly an empiricist, the very first thing you would ask is “where is the characterisation curve for IR forcing of water in the presence of an atmosphere, at power densities around 4W/m2?”

    Theory is great and the models, especially Moller 1961 up to Hansen, are great attempts to try and simulate basic atmospheric processes. However they make a number of ideal assumptions about absorption of IR radiation and subsequent surface heating. No account appears to be made for surface conduction, skin effects, scattering or plain old thermal inefficiencies arising because there is an atmosphere there. A blackbody is assumed.

    Now, I’m not a retired meteorologist; I’m a physicist who has worked for over 10 years in aerospace, 9 of which were spent building and flying plasma engines in space. We had to test a lot of theoretical assumptions to ensure the engines (ion thrusters) survived long enough to fulfill mission requirements. The reason I say this is that many assumptions about engine plasmas were just plain wrong. And shown to be so only by detailed test.

    If climate science spent more time actually characterising effects instead of building assumption on assumption and then telling me that AGW is real, then maybe there wouldn’t be this distrust, or the rise of amateur scientists.

    If Willis wants to publish in a blog let him. Hopefully it might jolt a few peoples’ minds.

    So I won’t get over it. Try actually doing experiments rather than modelling them. You can call it science then.

    And sorry, by the way, if it seems I’m trying to make this an attack on you. It’s just I’ve had theorists tell me this and that all my career so far. And each time they were proved wrong by test. Science should be done to a higher standard than what we see in, at least the popular climate science. That goes too for the assumption that AGW is real, without empirical evidence of the exact detail.

  75. Tomas Milanovic

    Increasing model resolution, etc. are not going to improve the situation, IMO.
    .
    I fully agree and this statement is both simple to understand and very deep.
    Somebody in this thread mentioned CFD (even if this somebody obviously never did CFD nor has any clue about what it is).

    Now there is a megaton of papers and studies about the CONVERGENCE of CFD methods and this is the most important and fundamental question.
    “Does the numerical solution of N linear equations converge to the solution of the non linear PDEs that it tries to approximate?”
    The answer to this question obviously involves resolution, both spatial and temporal, but is not limited to it.
    .
    And what can be said about the climate models?
    One thing is true with an absolute certainty – the computed numbers do NOT converge to a solution of the underlying (non linear) equations and they certainly do NOT converge to any kind of solution of Navier Stokes.
    The theoretical proof of this is given precisely by the CFD literature, and the experimental proof is given by the different “spaghetti graphs”.
    Indeed, to what do those spaghetti converge? Obviously to nothing.
    .
    From this quite trivial insight it follows that, due especially to the resolution constraints known from CFD, a numerical model will never ever converge to anything that might be a solution of the underlying equations, the most important of which is Navier Stokes.
    The statement quoted is therefore important and legitimate, and is for me the principal reason why I don’t pay any attention to these numerical pseudo-simulations, which happen and will always happen on too coarse a grid to have any relevance or usefulness.

    • Tomas,
      Nice post!
      I spent 15 years running multiple different electronics simulators. We had different types of simulators for different types of analysis – full analog, digital, timing, signal propagation – and all had different types of models, with different levels of fidelity.
      Analog simulators also had the same step-size/convergence constraints. At the time (in the 80’s), you couldn’t just stuff a circuit into a simulator and expect it to work; you had to understand what question you asked, and how the simulation answered that question (helping engineers with this was my job for ~15 years). You had to understand what the simulator did, your model, your circuit (in this case), and the question (i.e. your input), and then, understanding all of this, interpret the results.
      When I started, the digital models were all gate level with time delay. We had a timing analyzer for synchronous circuits, but this tool didn’t do a lot of logic; it was used to study min/max delays. In the late 80’s more complex digital models came out: we built a device that would accept the physical chip, send signals to it, and return the results. Then Hardware Description Languages were developed, at which point modelers would write code describing CPUs, ALUs and other complex chips, but these models needed careful validation – they were really a model of how the modeler thought the chip worked, a pure black box. So you could run a simulation based on a gate-level model, behavioral model, physical model, transistor-level model or timing model; all had pros and cons, all could give you different results depending on how accurate the models were and what questions were asked, and all had different levels of fidelity. Usually the trade-off was fidelity vs run time. Validations were almost always done from the highest fidelity and longest run times to get the baseline, then comparing those results to lower and lower fidelity with faster run times – at least until a physical device was made and the design was tested in a lab.

      From the mid 80’s or so, all high tech electronics design went through this process. With faster computers came more complicated design issues that required doing things like adding interconnect simulation into the process, first by running batch processes with special input and output models, and then plugging the results back into the lower fidelity simulations.

      This background is part of the reason why I grew interested in this: I understand how complex models are made and how they’re used. GCM’s implement a theory of climate. 15 years ago, while I was skeptical, they could have been correct, but I did question whether the way they coupled CO2 to water vapor was anything more than just the way the modelers thought it worked. I figured we’d need 10-15 years to see if the model output actually followed measurements; we know that answer now – no, it doesn’t.
      Over the same period I was taking pictures of galaxies and nebulae with my telescope, standing outside, logging the temp so I could remove the thermal noise from my images, and it bothered me how quickly and how much it cooled once the Sun went down.

      That led me to look for actual data to see whether it showed a change in nightly cooling over time as CO2 increased, and it doesn’t.

      • Mi Cro,

        Well said. The only phenomenon I noticed that affected temps at night was water vapor. Dry air, no clouds: it got cold very fast.

    • Tomas,

      I agree with your general considerations as far as they go. There are other considerations.

      The discrete approximations to the PDEs are generally non-linear for the case of non-linear PDEs. A numerical solution method can either respect the non-linearity of the FDEs, or not.

      In some cases – integrating transient models in time to get to steady-state, for example – accuracy in time is sacrificed and a one-pass-through-per-time-step linear approximation of the FDEs is used. This approach assumes that the final state is independent of the path taken to get there. It is sometimes used even when a somewhat useful representation of the transient portion is of interest.

      Additional time accuracy is obtained if the non-linearity is respected and an iterative solution method is used.

      The former method generally requires much less computation work than the latter.

      Other considerations are also important. Conservation of mass and energy is almost always hand-waved about like apple pie and motherhood. If the one-shot approach is used and the equation of state is non-linear, mass will not be conserved, unless iteration at some level is introduced.

      The time-level at which driving potentials are evaluated at interfaces, ( mass, momentum and energy exchanges ) in the equations for each side of the interface is critically important for mass and energy and momentum conservation. Conservation requires these to be at the same time level, otherwise artificial sinks or sources will be introduced. Evaluation of the potential at the new-time level for material ‘a’ in material ‘a’ equations and the potential for material ‘b’ at the old-time level, and likewise for material ‘b’, is more-or-less straightforward. But conservation is lost.

      Coupling the equations by evaluating both potentials at the new-time level in the equations for each material generally, thereby ensuring conservation, introduces very significant amounts of additional computational work.

      Explicit methods, in which almost everything is evaluated at the old-time level, do not have driving-potential issues, but will generally require that step sizes be small relative to the characteristic time associated with the interfacial processes.

      Mass and energy conservation is very much harder than hand-waving, bumper-sticker science. The fundamental conservation principles at the continuous equation stage must be rigorously preserved all the way down.

      Oh, I wonder how all this fits into the framework of ill-posed initial-boundary value problems and spatio-temporal chaos?
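
      The conservation point can be made concrete in one dimension. A sketch (upwind differencing, periodic boundaries; the varying velocity field is invented purely for illustration): the flux-form discretization conserves total mass to round-off by construction, because interface fluxes telescope, while the superficially similar advective-form update of the same equation silently creates or destroys mass.

```python
import math

n, dt, dx = 100, 0.004, 0.01
v = [0.5 + 0.4 * math.sin(2 * math.pi * i / n) for i in range(n)]   # varying speed
u0 = [math.exp(-((i - n / 2) ** 2) / 50.0) for i in range(n)]       # initial blob

def step_flux_form(u):
    # Conservative update: u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2}); fluxes telescope.
    flux = [v[i] * u[i] for i in range(n)]  # upwind flux (v > 0 everywhere here)
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

def step_advective_form(u):
    # Non-conservative update of the same equation: u_i -= dt/dx * v_i * (u_i - u_{i-1}).
    return [u[i] - dt / dx * v[i] * (u[i] - u[i - 1]) for i in range(n)]

ua, ub = list(u0), list(u0)
for _ in range(250):                # integrate to t = 1 (max CFL = 0.36, stable)
    ua = step_flux_form(ua)
    ub = step_advective_form(ub)

mass0 = sum(u0)
drift_flux = abs(sum(ua) - mass0) / mass0
drift_adv = abs(sum(ub) - mass0) / mass0
print(drift_flux)  # round-off level: conserved by construction
print(drift_adv)   # large relative drift: mass silently lost
```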

      • Dan,
        “The former method generally requires much less computation work than the latter.”
        This is a trade-off of fidelity vs run time, and I get it, but we have to be sure such time-savers don’t give false answers.
        Now, it might also be a place where part of the model can be abstracted to a higher, more behavioral level while still leaving the non-linear “connections” in place, preserving some of the performance improvements.

    • It’s true that increasing model resolution does not make the model converge to any specific solution of the equations. That has, however, not been the idea of the climate models. The models will always produce spaghetti graphs that look similar to their present output. From those results it’s possible to calculate statistical properties of the set of model output, and the statistical output may change and agree perhaps better with the statistical properties of the real Earth system.

      Whether improving model resolution will make them better and capable of producing more correct climate projections cannot be concluded from generic arguments; that can only be found out by further research. It’s plausible that increasing model resolution is of little help as long as it’s not combined with other improvements in the model structure and equations, but only further research will tell the real answer to that.

      • Pekka,
        When looking at the suite of model outputs, they are all wrong; increasing the resolution is, IMO, very unlikely to generate more accurate results.
        In the case of GCM’s there is a fundamental flaw in the theory that is encoded, Steven: our understanding of how the physics interacts is wrong.
        I’m inclined to look at the energy of 10-15u photons, but I’m not a physicist, and when I suggest this I’m told I don’t understand. But let me point out the obvious: the modelers don’t understand it either!

      • Mi Cro,

        We have all too many “proofs” that models are bound to fail. Such “proofs” are based on arguments that are valid as theoretical arguments but that cannot by themselves tell how serious the limitations are quantitatively. There are basic limits on what can be calculated – the theoretical arguments are strong enough to prove that – but they cannot tell where these limits are. Only practical research on each particular problem can give such quantitative understanding.

      • The models will always produce spaghetti graphs that look similar to their present output. From those results it’s possible to calculate statistical properties of the set of model output, and the statistical output may change and agree perhaps better with the statistical properties of the real Earth system.

        The numbers that produce the spaghetti graphs are the results of mathematical operations on the discrete FDEs. Let’s assume that the FDEs are a somewhat realistic representation of the climate systems in which mass and energy are rigorously preserved. It is only under the condition that the numbers actually satisfy the systems of PDEs that the numbers are valid and suitable for making spaghetti graphs.

        Mass and energy conservation is considered to be an absolute necessity when modeling physical phenomena and processes. In the case of numerical solution of PDEs, consistency of the FDEs with the original PDEs, and stability of the solution method applied to the FDE system is considered to be an absolute necessity. Consistency and stability lead to convergence.

        Mass and energy conservation is a theoretical argument that has been so far validated to the extent that a Law of Nature has been established. Consistency + stability = convergence is a theoretical argument that has also been validated to the extent that a Law of Numerical Solution of PDEs has been established.

        How can it be argued that one Law is absolutely necessary but we can be a little sloppy when other equally important Laws arise?

        Use of ensembles is partially justified, I think, by proclaiming that the noisy output from GCMs is due to the chaotic nature of weather. So far as I know there have been no demonstrations of the validity of this hypothesis. Equally there have been no demonstrations that the numbers behind the spaghetti graphs are in any way related to the original PDEs.
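
        The “consistency + stability = convergence” Law (the Lax equivalence theorem) is routinely checked by grid refinement against a known exact solution – precisely the demonstration that is missing for the spaghetti numbers. A sketch on the 1D heat equation (explicit FTCS scheme with an exact decaying-sine solution; parameters chosen to respect the stability limit): halving the grid spacing cuts the error by about four for this second-order-in-space scheme.

```python
import math

def heat_error(n_cells):
    """Max error of explicit FTCS for u_t = u_xx at t = 0.1.

    Exact solution: u(x, t) = exp(-pi^2 t) * sin(pi x), with u = 0 at both walls.
    """
    dx = 1.0 / n_cells
    r = 0.4                      # dt / dx^2, kept below the 0.5 stability limit
    dt = r * dx * dx
    steps = int(round(0.1 / dt))
    u = [math.sin(math.pi * i * dx) for i in range(n_cells + 1)]
    for _ in range(steps):
        u = [0.0] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, n_cells)] + [0.0]
    t = steps * dt
    exact = [math.exp(-math.pi ** 2 * t) * math.sin(math.pi * i * dx)
             for i in range(n_cells + 1)]
    return max(abs(a - b) for a, b in zip(u, exact))

e1, e2 = heat_error(20), heat_error(40)
print(e1, e2, e1 / e2)  # refinement shrinks the error; the ratio sits near 4
```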

      • Tomas Milanovic

        From those results it’s possible to calculate statistical properties of the set of model output, and the statistical output may change and agree perhaps better with the statistical properties of the real Earth system.

        As you say. Perhaps. Or perhaps not.
        Basing science on hopes that it will perhaps somehow work out is at best a waste of time and at worst ignorance.
        Coming back to the fundamental question of convergence and the very relevant comments about mass and energy conservation by Dan Hughes: the probability that an average of a random collection of non-converged individual runs will somehow miraculously converge towards the correct solution is 0 for all practical purposes.
        To that I add what I have already said several times – there is NO warranty that an invariant PDF of future dynamical states exists at all.
        Especially for spatio-temporal chaotic regimes, both cases – ergodic and non-ergodic – exist in nature.
        .
        So far nobody has the beginning of an idea which of the two cases applies to the climate (or to the weather, for that matter).
        When one observes what meteorology gets by perturbing initial and boundary conditions as well as the subgrid equations themselves, non-ergodicity would even be the safer bet.
        In any case, the question of whether there are any invariant statistical properties at all needs much more fundamental and theoretical study than numerical model runs, which prove nothing either way.

      • Tomas,

        I agree that there’s no warranty of anything. We have open many possibilities including:

        1) both the real Earth system and the models have well defined statistical properties

        2) case (1) is not fully true, but their behavior is close enough to that of (1) for practical purposes

        3) case (1) is ultimately true over very long periods assuming given boundary conditions, but shorter term behavior is so unstable that the practical results are not useful

        4) statistical properties are not well enough defined over any period.

        The real Earth system may be in a different category from the models and some models may be in a different category than other models.

        We really do not know what the actual situation is. What we do know, from historical data and from experience with the models, is that it’s quite possible that either (1) or (2) applies. There’s enough support for that to make further work with the models justified.

        Even if the Earth system models fail over long periods, they may still be very valuable in interpreting empirical data of various types.

        As I have written so many times: only practical experience can tell what’s ultimately useful and what’s not.
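
        Case (1) versus case (4) is, at bottom, an ergodicity question, and for toy systems it can be probed directly: compare the time average of one long run against an ensemble average over many independent runs. A sketch on an AR(1) process (a stand-in chosen purely for illustration – nothing here says the climate system behaves this way): for this ergodic toy, the two averages agree.

```python
import random

def ar1_series(n, phi=0.9, seed=None):
    """AR(1) process x_{t+1} = phi * x_t + noise: a toy system with memory."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Statistic 1: time average of a single long realization.
long_run = ar1_series(200000, seed=1)
time_avg = sum(long_run) / len(long_run)

# Statistic 2: ensemble average over many independent shorter realizations
# (the final value of each run, i.e. an average across the "spaghetti").
ensemble = [ar1_series(2000, seed=s)[-1] for s in range(500)]
ens_avg = sum(ensemble) / len(ensemble)

print(time_avg, ens_avg)  # both near the true mean of 0 for this ergodic process
```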

    • In my industry, and in my personal experience, a top-down approach to theoretical considerations of model, methods, and software development has proved to be significantly more efficient than a bottom-up approach driven by executing the software. For one thing, when something goes wrong in the latter case, one almost always must go back up the process to fully understand what happened.

      By top-down I mean the following, among others.

      1. Study to gain deep understanding of the physical domains. Note that the level of understanding will generally increase by application of the software.

      2. Study to gain deep understanding of the equations in the continuous domains.

      3. Reduction of the fundamental continuous domain equations to a tractable system.

      4. Study to gain deep understanding of the mathematical properties and characteristics of the model equation system in the continuous domain.

      5. Development of discrete approximations to the continuous equations.

      6. Study to gain deep understanding of the mathematical properties and characteristics of the discrete approximations.

      7. Development of numerical solution methods for the discrete approximations.

      8. Study to gain deep understanding of the mathematical properties and characteristics of the numerical solution methods.

      9. Code a representation of the discrete equations and solution method that is sufficient to test and verify the theoretical developments up to this point.

      10. Add degrees of completeness relative to the complete model to the code and test and verify.

      Note that at any step in the process it might be necessary to retreat to previous steps. This includes going from step 10 back to step 1. And related pairs of steps will very likely be connected by an iterative loop.

      Then we get to the stage of development of a somewhat useful version of the models, methods and software that can be used as a test-bed code; a beta release, maybe.

      There will be sufficiently detailed documentation of the ten steps that the root cause of a problem in the test-bed code can be efficiently determined.

      Eventually we get to a point that calculated results feed back to Step 1.
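
      Steps 5–9 above can be illustrated with a minimal, self-contained verification check. The model problem below is hypothetical, chosen purely for illustration (it is not from any actual climate code): discretize a two-point boundary-value problem with central differences, solve the discrete system, and confirm the expected second-order convergence against the known exact solution.

```python
import math

def solve_error(n):
    """Solve u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0, on n interior
    points with second-order central differences (Thomas algorithm),
    and return the maximum error against the exact solution sin(pi x)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    # Discrete system (step 5): u[i-1] - 2 u[i] + u[i+1] = h^2 f(x[i])
    d = [-math.pi ** 2 * math.sin(math.pi * xi) * h * h for xi in x]
    # Forward elimination for the constant tridiagonal (1, -2, 1) (step 7)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = 1.0 / -2.0
    dp[0] = d[0] / -2.0
    for i in range(1, n):
        m = -2.0 - cp[i - 1]
        cp[i] = 1.0 / m
        dp[i] = (d[i] - dp[i - 1]) / m
    # Back substitution
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    # Verification against the exact solution (step 9)
    return max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, x))

# Halving h should cut the error by about a factor of 4 (second order)
e_coarse = solve_error(20)   # h = 1/21
e_fine = solve_error(41)     # h = 1/42
```

      Step 10 would then add physics incrementally, re-running this kind of check at each stage; when a later result looks wrong, the documented checks make it possible to localize which step broke.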

      Your mileage may vary.

      • Dan, that looks like the steps followed to develop the tools and libraries that someone like me applied to a large number of, in my case, consumer electronics designs.

  76. “When all you have is a hammer, everything looks like a nail.” What’s needed is a new type of model: a screwdriver not a sledgehammer.

  77. A PRWeb press release on the Canadian climate model CanESM2 is at
    http://www.friendsofscience.org/index.php?id=207

    Canadian Climate Change Predictions Fail by 590% Costing Global Consumers a Bundle says Friends of Science Study. A new Friends of Science study by research director Ken Gregory shows that the Canadian climate model CanESM2, used by the Intergovernmental Panel on Climate Change (IPCC) to predict global warming, fails to replicate temperature observations by hundreds of percent – predicting extreme heat when the reality observed is only nominal warming.

  78. The answer was in the question, of course. The real question is: how long will it take for the jig to be up?

    One way to avoid that begets another question: How long will they take to adjust the observations to match the models? I guess that without UAH being controlled by skeptics then they would have done so already. Santer already strong-armed the adjustment of radiosondes and RSS satellite data to match models for the hilarious Santer et al paper that used frequentist stats for pseudo-Bayesian-selected model outputs and concluded that the worse models actually were, the better they really were. None of the 30 co-authors or reviewers even noticed that using up to date data invalidated the farcical conclusions anyway.

  79. Tomas Milanovic

    Pekka

    This time I agree with almost everything you wrote, with two exceptions :)
    Your 3) can’t exist. The ergodicity property is independent of time scales, so it is impossible for a system to switch from one property to the other if one only waits long enough.

    Then in
    As I have written so many times: Only practical experience can tell, what’s ultimately useful and what’s not.
    I do not agree with the word “only”.
    There are also compelling theoretical considerations which allow one to distinguish immediately between the (probably) useful and the (probably) useless.
    For instance, if you proposed to us some theory that violates Lorentz invariance, almost everybody would immediately consider it useless.
    With extremely high probability it would indeed be useless, and one wouldn’t need to wait for any “practical experience”.

    I look at climate theories from this point of view, and for compelling theoretical reasons (here, non-convergence) I don’t need to wait for further megatons of practical experience to conclude that the GCMs are useless.
    Of course I may be wrong, but in IPCC-speak the validity of my position is much more likely than not.
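
    A minimal statement of the ergodicity property being invoked (the standard definition, not taken from the comment itself): for an ergodic system with invariant measure $\mu$, for almost every initial condition and any integrable observable $f$,

    \[
    \lim_{T \to \infty} \frac{1}{T} \int_0^T f\big(x(t)\big)\, dt \;=\; \int f \, d\mu .
    \]

    Either this limit holds or it does not; the definition contains no intrinsic time scale at which a system could “become” ergodic, which is the sense in which the property is independent of time scales.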

    • Tomas,

      On (3) I had in mind a somewhat different interpretation of the setup than what you describe, but that’s not essential and therefore not worth more discussion. When the question is set up as you describe, I agree on the conclusion as well.

      Naturally I do agree that there are models that are so badly wrong that nothing more is needed than pointing out the error and its large effect on the results.

      I’m sure that much of our disagreement has been due to misunderstanding what the other is trying to say. Our judgments surely differ on some issues, but that’s normal and almost unavoidable.

    • Chief Hydrologist

      Models do not have well-defined statistical properties. Theoretically, perturbed-physics ensembles might sample the entire climate state space sufficiently to enable a PDF to be determined, but that is far from certain. With ensembles of opportunity, the state space sampled is arbitrary.

      The statistics of climate are non-stationary – the fundamental characteristic of climate is abrupt shifts in state space at scales from minutes to eons.

      To paraphrase Pekka – if we could agree on the fundamental nature of climate it would be progress of sorts.

      • Chief Hydrologist

        I collect both climate graphs and quotes that best encapsulate an idea.

        James Hurrell and colleagues in an article in the Bulletin of the American Meteorological Society stated that the ‘global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ http://journals.ametsoc.org/doi/pdf/10.1175/2009BAMS2752.1

  80. Tomas Milanovic

    Well, since we are already at the ashes-on-my-head stage of the issues, Pekka, I will also admit that I am obviously more influenced by the mathematical and theoretical side of them.
    However, I must also grudgingly agree that experience is something necessary to physics, even if it sometimes (often) lacks correct mathematical grounds ;)
    Therefore I apologize if I have come across as excessively harsh and “judgmental”.

  81. The claim that probabilistic hypotheses cannot be falsified expresses such extreme ignorance that I can only call it willful. Has no one ever heard of the discipline of Population Genetics? All of its hypotheses are probabilistic. Does anyone believe that it has never falsified a probabilistic hypothesis?

    How can Population Genetics use probabilistic hypotheses? Very simple, the randomness that they study is in the world; the randomness is in the populations themselves. By reference to the actually existing populations of critters, Population Geneticists can define event spaces over the actual physical characteristics of the critters and the populations that they belong to.

    By contrast, climate scientists have made no headway in identifying physical characteristics of Earth’s climate that would enable them to define an event space for hypotheses about climate. The empirical work simply has not been done. To continue the comparison to Population Geneticists, climate scientists behave as if they were pre-Mendelian expositors and defenders of Darwin’s theory of evolution.

    In his earliest works on the philosophy of science, published in the Forties, Carl G. Hempel explained that probabilistic hypotheses that embody “objective probabilities” as opposed to subjective probabilities are falsifiable. What makes probabilities objective is that the randomness is in the world rather than in some scientist’s beliefs. Hempel’s views have dominated the philosophy of science in the US and England except for the few who have become Lotus-Eaters dreaming the Kuhnian Marxist dream.
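
    As a concrete illustration of this point, here is a textbook Hardy–Weinberg goodness-of-fit test with made-up genotype counts (illustrative numbers, not real data): the hypothesis assigns probabilities p², 2pq, q² to the genotypes, and a chi-square test can reject it at a chosen significance level even though the hypothesis is purely probabilistic.

```python
# Hypothetical genotype counts for one locus (illustrative numbers only)
observed = {"AA": 50, "Aa": 20, "aa": 30}
n = sum(observed.values())

# Estimate the frequency of allele A from the sample
p = (2 * observed["AA"] + observed["Aa"]) / (2.0 * n)
q = 1.0 - p

# Hardy-Weinberg expected counts: n*p^2, 2*n*p*q, n*q^2
expected = {"AA": n * p * p, "Aa": 2 * n * p * q, "aa": n * q * q}

# Pearson chi-square statistic; df = 3 classes - 1 - 1 estimated parameter = 1
chi2 = sum((observed[g] - expected[g]) ** 2 / expected[g] for g in observed)

# 5% critical value for df = 1 is 3.841; here chi2 is about 34, so the
# probabilistic hypothesis is rejected decisively at that level.
rejected = chi2 > 3.841
```

    “Rejected at the 5% level” is of course weaker than strict logical falsification, which is Briggs’s point, but it is exactly the operational standard by which probabilistic hypotheses get discarded in practice.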

  82. As I understand it, ALL these models ASSUME that water vapor is the real ghg culprit, generating 2 to 3 times the temperature increase caused by the prior increase in CO2. But that’s pure speculation. Nobody has a clue as to the aggregate climate feedback, not even whether it is positive or negative. What’s more, we know that CO2’s heating capacity is limited to the few narrow energy bands it can absorb. But at 20 ppmv, CO2 had already absorbed 50% of all the energy available to it. If that is the case, then since the CO2 level is now 400 ppmv, there is hardly any further influence on temperature increase. That would also seem to rule out any further feedback from water vapor (assuming that speculative guess is anywhere near accurate).

    • BFJ Cricklewood

      Denis Ables
      at 20 ppmv co2 had already absorbed 50% of all the energy available to it. If that is the case, since co2 level is now 400 ppmv, there’s hardly any further influence on temperature increase.

      The idea being that the CO2 effect falls off logarithmically, if that’s the idea – the more you add, the less difference a given amount makes?

      So, given that, when putting a figure to the effect of a doubling of CO2, one should surely specify the level of CO2 concentration to which it applies. But this never seems to be done. Why … ?
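
      The question has a standard answer. In the commonly used simplified expression for CO2 forcing (Myhre et al. 1998), ΔF = 5.35 ln(C/C0) W/m², the forcing per doubling is the same at every concentration level; that is precisely why a single per-doubling figure is quoted without specifying a starting level, even though later doublings take far more added CO2. A quick check of this property:

```python
import math

def co2_forcing(c, c0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c / c0)

# Any doubling yields the same forcing, about 3.71 W/m^2,
# regardless of the starting concentration (in ppmv):
low = co2_forcing(40, 20)
high = co2_forcing(400, 200)
```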

  83. Judith says in her article

    “If the pause continues for 20 years (a period for which none of the climate models showed a pause in the presence of greenhouse warming), the climate models will have failed a fundamental test of empirical adequacy. Does this mean climate models are ‘falsified’? Matt Briggs has a current post that is relevant, entitled Why Falsifiability, Though Flawed, Is Alluring. You have it by now: if the predictions derived from a theory are probabilistic then the theory can never be falsified….”

    These comments are misleading for two reasons:

    1) The models must be tested not just on the last 15 years, with the next 20 years as a prospect. They must be tested over at least the entire period they cover, that is, from 1860 to now.
    An extended statistical analysis of all CMIP5 models (162 individual runs plus all projection simulations) has been made in my recent paper:

    Scafetta, N. 2013. Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles. Earth-Science Reviews 126, 321-357.
    http://www.sciencedirect.com/science/article/pii/S0012825213001402

    Here, in figures 4-11, all 162 model runs are plotted against the temperature record, and in many cases discrepancies in the trends lasting as long as 50 years are seen. Numerous statistical tests confirm that these models poorly represent the climate patterns at multiple scales. Thus, in some sense there is no need to wait another 20 years to determine whether these models represent the temperature patterns well. This has already been tested, and the answer is “no”: they have not done so well at any temporal scale since 1860.
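
    The kind of multi-decadal trend comparison described here can be sketched as follows. The series and helper names below are hypothetical and purely illustrative; an actual test would load CMIP5 model output and an observational record such as HadCRUT4.

```python
def trend(y):
    """Ordinary least-squares slope of y per time step."""
    n = len(y)
    t_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    num = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def max_trend_discrepancy(model, obs, window):
    """Largest absolute difference between model and observed trends
    over any contiguous window of the given length."""
    return max(abs(trend(model[i:i + window]) - trend(obs[i:i + window]))
               for i in range(len(model) - window + 1))

# Synthetic illustration: a run warming at 0.02/yr vs observations at 0.01/yr
model = [0.02 * t for t in range(150)]
obs = [0.01 * t for t in range(150)]
gap = max_trend_discrepancy(model, obs, window=50)
```

    Whether a given `gap` falsifies anything then depends on the noise level one allows, which is exactly the empirical-adequacy question raised in the head post.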

    2) As for the second point, whether the models are therefore already falsified or not: this rests on a severe misunderstanding of science. In science a theory is falsified only when another theory is proposed and demonstrated to perform better than the former in reproducing the data.

    If no alternative to the theory implemented in the GHG-based CMIP5 GCMs had been proposed, that theory would still hold, perhaps as imperfect, but still as the only available theory to date. However, an alternative theory based on a harmonic understanding of climate change (which also includes a significantly reduced anthropogenic component) has already been proposed and demonstrated to perform better than the CMIP5 models in reconstructing climate change (since the Medieval period and beyond); see again my paper above.

    Thus, the latter (harmonic climate model) theory holds as the best current interpretation of climatic changes. Of course, the theory can be improved in the future, and further confirmed or disproved by another theory of climate change, but up to now it is the most likely explanation of climate change, independently of whether people believe in it or not.

    See my web-site for additional details.

  84. The Climate Etc caravan or circus, depending on your viewpoint, with its clowns, its actors, its morality plays and its fast-reacting, shoot-from-the-lip players, has moved on, leaving a few members of its bemused audience to wander the near-empty fairground, musing on what they have just seen and the meaning of it.

    Models are little more than an extension of the mentalities of their creators.
    As such they should be given no more credence than that given to artistic scene and portrait painters and photographers, the recorders of the visual world around us.

    The parallels between scenic and portrait artists and what is depicted on their canvases, and climate modelers and the outputs of their models, are quite striking.

    Artists have palettes, paints of various types and colours, brushes of many types and canvases of various textures and sizes.
    Climate modelers have various computer operating systems, computer codes, various computer languages and an almost infinite range of algorithms to choose from.

    Artists have schools that follow certain techniques and fashions, concentrating on scenic work, portraiture, realism and abstract interpretation, along with various individual nuances of painting.

    Climate modelers also no doubt have schools and groups who advocate differing approaches in language, codes and algorithms to the computational analysis of the data and their own interpretations of the meaning of that data.

    Artists gravitate to various schools and groups and argue vehemently that their techniques and interpretations are the superior way of depicting the subject.

    Climate modelers who follow particular modeling techniques and strategies will, in the way of all humanity, also gravitate to like-thinking modeling groups or form close links with other similarly technically inclined modelers.

    Artists are heavily influenced by others in their school of style and subject and will change and adapt to fit in with the meme of their school or group and style and subject.

    Climate modelers also suffer from me-too-ism, as they fit in with their school’s and group’s preferred analysis methods, computational programs and use of certain specific algorithms and constrained data inputs, to the exclusion of information and data that might conflict with their particular computational strategy and analysis methods.

    Artists are “commissioned”, i.e. funded, to paint certain specified subjects using their own interpretations on the canvas.
    Those commissioning and funding such artistic interpretations of a set subject expect something resembling, close to and within their expectations of what the artist is expected to portray.
    If it has some affinity and resemblance to the reality of the subject, it is accepted as an artistic interpretation and given considerable elevation in its importance if the commissioning funders are satisfied and happy with the outcome.

    Climate modellers are funded to produce models that will supposedly provide a realistic portrayal of the climate in the real world, but in reality, after all the hype is removed, they are expected to provide a modeled outcome of the climate that is close to what the funders deep down and surreptitiously expect and demand.
    Like artists, if that outcome is acceptable to the funders, then more such commissioning and funding for further climate modeling work by the climate modeler or group will be forthcoming.

    Artists see a potential scene or portrait and then, using all the techniques and current artistic fashions of their group or school and the paints and canvases at their disposal, interpret that scene according to its visual impact or its saleability to a certain type of collector or art outlet, for on-sale to like-thinking, artistically minded members of the public.

    Climate modelers, as is the case with artists and with all of humanity, are driven by deep hidden biases, and despite all their vehement protestations to the contrary, when constructing climate models they too, like the artists, select certain data inputs and observations so as to achieve the deeply hidden personal outcomes they are seeking.
    Like artists painting on their canvases, the climate modellers weight those data inputs and observations and, using certain computational techniques and strategies, attempt to provide an end result that will satisfy their funders, impress their fellow modellers [as with artists] with their skill, knowledge and capabilities, and ensure that their modeled outcomes make an impact on the funders and impress the media, the politicians and the public.
    With those same deep hidden biases, which artists readily admit to, the climate modellers also, no doubt unwittingly, try to channel their models’ outputs to boost their own personal belief systems and/or egos, as artists do on the “ego” side.

    Until the climate modeling profession has the immense amounts of hard, observed, pragmatic and proven data that engineering-based models use as inputs, climate modeling will not have graduated beyond the equivalent of the artist’s personal interpretation of a scene or portrait.
    The climate modellers will still be at the artistic stage of a non-realistic, self-oriented interpretation of the available climate data.
    They will not have graduated to the level of a skilled photographer who records the scene in the hard data pixels of the film or chip.
    The photographer, in an artistic sense, is the equivalent of the engineering modeler, who works with hard data, proven computational techniques and verified constructs in his models to develop, construct and build something on which lives, perhaps millions of lives, will depend for their very existence.

    Climate modeling, unlike engineering modeling, has cost thousands of lives and is pauperizing millions through high energy costs arising from the climate modelers’ spurious claims of ever-increasing global temperatures supposedly predicted by the climate models to result from increasing CO2.

    Now the climate modellers have another very serious strike against their personal standing and their discipline:
    the fear of the global warming / climate change / extreme weather meme is now creating mental health problems in millions of those who have placed their complete belief and trust in climate science and in the pronouncements of the climate modellers and their now entirely and utterly false predictions of an earth-destroying climatic future.

    Of such are the musings of one of the audience left behind on the fairground as the Climate Etc circus moves on.

    • +100 ROM fer content and artistry of presentation.
      Brings ter mind:

      ‘All nature faithfully! – But by what feint
      Can Nature be subdued by art’s constraint?
      Her smallest fragment is still infinite!
      And so he paints but what he likes in it.
      What does he like? He likes what he can paint.’

      H/t Nietzsche.

      Jest substitute ‘modeller’ and ‘model’ fer ‘art’ and ‘paint.’
      Beth the serf ‘s sock puppet Belinda.

    • Strange Adventure

      Another blinder from B’linda.

  85. The Emperor was becoming concerned about failed harvest, and his ability to maintain his current level of taxation upon the peasants, so he called in the High Priest.

    “I must know what next year’s harvest will be like. Will it be a good one or a bad one like the last seven years?” the Emperor demanded.

    “Sire, I will make sure that all my best forecasters, with their expert knowledge on the reading of chicken entrails are immediately dedicated to this task,” the High Priest replied solicitously.

    “Your forecasts are useless, always predicting better harvests,” thundered the Emperor. “You just tell me what you think I want to hear. How on earth can chicken entrails predict the harvest anyway?”

    “Sire, there is a 97% consensus amongst all the priests that these methods are the best Science available. The problem is, to get better forecasts, what we need is more chickens.”

  86. Indeed, climate models have already failed the fundamental test of empirical adequacy, since they proved unable to reproduce the three decades of cooling observed from 1940 to 1970.

  87. Pingback: Confronting the Fundamental Uncertainties of Climate Change | Fabius Maximus