Climate modelers open up their black boxes to scrutiny

by Judith Curry

Paul Voosen has written a remarkable article in Science about climate model tuning.

Background

In November 2009, following Climategate, I made a public plea for greater transparency in an essay published at ClimateAudit: On the credibility of climate research.

When I started blogging at Climate Etc. in 2010, a major theme was a call for formal Verification and Validation of climate models.

In my 2011 paper Climate Science and the Uncertainty Monster, I raised concerns about the IPCC’s detection and attribution methods, whereby the same observations used for validation were used either implicitly or explicitly in model calibration/tuning.  A group of IPCC authors (led by Hegerl) responded to the uncertainty monster paper [link], and I submitted a formal response to the journal.  In a blog post, I highlighted one of the Climategate emails from Gabi Hegerl:

So using the 20th c for tuning is just doing what some people have long suspected us of doing…and what the nonpublished diagram from NCAR showing correlation between aerosol forcing and sensitivity also suggested. Slippery slope… I suspect Karl is right and our clout is not enough to prevent the modellers from doing this if they can. We do loose the ability, though, to use the tuning variable for attribution studies.

In the blog post, I concluded:

To me, the emails argue that there is insufficient traceability of the CMIP model simulations for the IPCC authors to conduct a confident attribution assessment, and at least some of the CMIP3 20th century simulations are not suitable for attribution studies. The Uncertainty Monster rests its case (thank you, hacker/whistleblower).

Those of you who have followed the climate debate for the past decade will recall the climate community’s reception of my writings on this topic.

In 2013, a remarkable paper led by Max Planck Institute authors Mauritsen, Stevens, and Roeckner was published: Tuning the climate of a global model, which was discussed in a CE blog post:

“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results.”

This paper led to a workshop on climate model tuning and a subsequent publication in August 2016, The art and science of climate model tuning, which was discussed in this blog post. My comments:

This is the paper that I have been waiting for, ever since I wrote the Uncertainty Monster paper.

The ‘uncertainty monster hiding’ behind overtuning the climate models, not to mention the lack of formal model verification, does not inspire confidence in the climate modeling enterprise. Kudos to the authors of this paper for attempting to redefine the job of climate modelers.

Voosen’s article

Paul Voosen’s article in Science, Climate scientists open up their black boxes to scrutiny, follows up on the climate model tuning issue. It’s a short article, publicly available, and well worth reading. Some excerpts:

Next week, many of the world’s 30 major modeling groups will convene for their main annual workshop at Princeton University; by early next year, these teams plan to freeze their code for a sixth round of the Coupled Model Intercomparison Project (CMIP), in which these models are run through a variety of scenarios. By writing up their tuning strategies and making them publicly available for the first time, groups hope to learn how to make their predictions more reliable, says Bjorn Stevens, an MPIM director who has pushed for more transparency. And in a study that will be submitted by year’s end, six U.S. modeling centers will disclose their tuning strategies—showing that many are quite different.

Indeed, whether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held, a scientist at the Geophysical Fluid Dynamics Laboratory, another prominent modeling center, in Princeton, New Jersey.

For years, climate scientists had been mum in public about their “secret sauce”: What happened in the models stayed in the models. The taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models—and, by extension, the reality of human-driven warming. “The community became defensive,” Stevens says. “It was afraid of talking about things that they thought could be unfairly used against them.” 

But modelers have come to realize that disclosure could reveal that some tunings are more deft or realistic than others. It’s also vital for scientists who use the models in specific ways. They want to know whether the model output they value—say, its predictions of Arctic sea ice decline—arises organically or is a consequence of tuning. Schmidt points out that these models guide regulations like the U.S. Clean Power Plan, and inform U.N. temperature projections and calculations of the social cost of carbon. “This isn’t a technical detail that doesn’t have consequence,” he says. “It has consequence.”

Aside from being more open about episodes like this, many modelers say that they should stop judging themselves based on how well they tune their models to a single temperature record, past or predicted. The ability to faithfully generate other climate phenomena, like storm tracks or El Niño, is just as important. Daniel Williamson, a statistician at the University of Exeter in the United Kingdom, says that centers should submit multiple versions of their models for comparison, each representing a different tuning strategy. The current method obscures uncertainty and inhibits improvement, he says. “Once people start being open, we can do it better.”
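
For readers unfamiliar with the mechanics, here is a minimal sketch, not any modeling group’s actual procedure, of what ‘tuning to the 20th-century record’ can mean in principle: a toy zero-dimensional energy-balance model with a single free parameter (an assumed aerosol-forcing scale factor) that is adjusted until the simulated century warming matches a target value. All numbers are illustrative assumptions.

    # Toy illustration of tuning: adjust one free parameter so that a
    # zero-dimensional energy-balance model reproduces a target amount of
    # 20th-century warming. Forcings, heat capacity, feedback and target
    # are assumed values chosen for illustration only.
    import numpy as np

    years = np.arange(1900, 2001)
    ghg_forcing = np.linspace(0.0, 2.5, years.size)       # W/m^2, assumed ramp
    aerosol_forcing = -np.linspace(0.0, 1.2, years.size)  # W/m^2, assumed ramp

    heat_capacity = 8.0    # W yr m^-2 K^-1, assumed effective mixed-layer value
    feedback = 1.3         # W m^-2 K^-1, assumed climate feedback parameter
    target_warming = 0.8   # K over the century, the "observation" being tuned to

    def simulate(aerosol_scale):
        """Integrate C dT/dt = F(t) - lambda*T with yearly Euler steps."""
        temp = 0.0
        for f_ghg, f_aer in zip(ghg_forcing, aerosol_forcing):
            forcing = f_ghg + aerosol_scale * f_aer
            temp += (forcing - feedback * temp) / heat_capacity
        return temp  # warming at 2000 relative to 1900

    # Crude "tuning": sweep the free parameter and keep the best match
    scales = np.linspace(0.0, 2.0, 201)
    best = min(scales, key=lambda s: abs(simulate(s) - target_warming))

    print(f"tuned aerosol scale: {best:.2f}")
    print(f"simulated 20th-century warming: {simulate(best):.2f} K "
          f"(target {target_warming} K)")

Once a parameter has been adjusted against the 20th-century record in this way, that record can no longer serve as an independent test of the model, which is the point made in the excerpts above and in the Hegerl email quoted earlier.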

JC reflections

Well, finally, we are seeing climate modeling move in a healthy direction, one that has the potential to improve climate models, clarify uncertainties, and so build understanding of and trust in the models.

It’s about time: the response of the climatariat to my writings about climate models circa 2009–2011 was to toss me out of the tribe, dismiss me as a ‘denier’, etc.

I find it absolutely remarkable that this statement was published in Science:

The taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models—and, by extension, the reality of human-driven warming. “The community became defensive,” Stevens says. “It was afraid of talking about things that they thought could be unfairly used against them.”

This reflects pathetic behavior by the climate modelers (and I don’t blame Bjorn Stevens here; he is one of the good guys). You may recall what I wrote in my Climategate essay Towards rebuilding trust:

In their misguided war against the skeptics, the CRU emails reveal that core research values became compromised.

So the climate modelers were afraid of criticism from skeptics, and hence kept their models opaque to outsiders, to the extent that even other climate modeling groups didn’t know what was going on with their models. And hence:

  • best practices in climate model tuning were not developed
  • scientists conducting assessment reports had no idea of the uncertainties surrounding conclusions they were drawing from climate models
  • scientists doing impact assessments, and policy makers relying on that information, had no idea of the uncertainties in the climate models, and hence in their conclusions and the implications for their policies.

Well done, team climate modelers, all of this because you were afraid of some climate contrarians.

Well, it is a relief to finally see the international climate modeling community tackling these issues, thanks to the leadership by the MPI modelers.

But I wonder if these same climate modelers realize the can of worms that they are opening.  In my blog post The art and science of climate model tuning, I wrote:

But most profoundly, after reading this paper regarding the ‘uncertainty monster hiding’ that is going on regarding climate models, not to mention their structural uncertainties, how is it possible to defend highly confident conclusions regarding attribution of 20th century warming, large values of ECS, and alarming projections of 21st century warming?

 

220 responses to “Climate modelers open up their black boxes to scrutiny”

  1. Pingback: Climate modelers open up their black boxes to scrutiny – Enjeux énergies et environnement

  2. How are the damage functions used in the IAMs validated?

    • While I agree on the importance of damage functions, isn’t your comment a bit off topic?

      • Why? Do you not recognise V&V of IAMs and the damage functions in them as the basis for justification for:
        – mitigation,
        – estimating climate damages,
        – SCC,
        – comparing climate policy options,
        – carbon taxes,
        – public funding for the climate industry

        Or do you, like Mosher, argue all that is irrelevant?

    • Peter, these 300 year projections obviously cannot be validated. Public policy is not based on validated computer models. Asking for the impossible is not a valid (!) objection.

      • David Wojick,

        Thank you for your comment. I take notice of your comments because they are invariably constructive.

        The point I’ve been trying to make – tellingly avoided and largely ignored by the climate alarmists – is that the basis for saying AGW will be damaging is the projected damages. To project damages requires a valid damage function. It seems there is no valid damage function – even the IPCC says so (many times in AR5 WG3 Chapter 3). Richard Tol, one of the foremost authorities on the evidence to calibrate the damage functions, also said in a reply to me some time ago that the empirical evidence to calibrate the damage functions is sparse (or something similar to that).

        There have been 30 years of trying to calibrate the GCMs, but seemingly negligible effort to try to calibrate the damage functions. It would be informative to know the reason for the lack of research into this most important subject area.

        I am not arguing the 300 year projections can be validated. I am arguing the empirical evidence used to calibrate the damage functions needs to be well tested, and the damage functions validated. At the moment there can be no confidence in the damage functions, the damage estimates, the 2C damages threshold, SCC, the basis for carbon taxes, the EPA’s agenda to close coal, the US’s influence on UN bodies, IMF, World Bank etc. to stop poor countries using coal, or other command and control policies.

        The paleo-climate evidence plotted as GMST in the chart below shows the climate was much warmer in the past and also shows that a 3C rise in GMST wouldn’t even raise it to the middle, or even the average, of the temperature range that prevailed over the past 542 Ma. It also shows that we are currently in a rare coldhouse phase (an ice age in geologic terminology). In fact it’s only the second time it has been this cold in the past 542 Ma.

        Other evidence (including in the IPCC reports) suggests that life thrived when the planet was warmer and struggled when colder. It’s no wonder the alarmists don’t want to discuss this key issue, which provides the justification for public funding for the climate industry and the justification for damaging our economies (as the EPA is doing to the US, and as the head of the Clinton Foundation is secretly doing to undermine India’s and Australia’s coal power industries).

        It seems there is negligible valid evidence to support their beliefs that GHG emissions will do significantly more harm than good, or that proposed mitigation policies will deliver benefits that exceed their costs.

        David (and other readers), I’d like to ask you: how is it justifiable to spend so much time and resources (and public funding) on trying to calibrate ECS and the other parameters needed for projecting damages, while not spending the effort to calibrate the damage function to an equivalent level of uncertainty?

      • David,

        Here is the chart I referred to in my comment above:
        https://html2-f.scribdassets.com/9mhexie60w4ho2f2/images/1-9fa3d55a6c.jpg

      • Please explain why there is climate modelling. I thought we had to have records of past weather to establish climate (and climate change). If we are not relying on some ‘FACTS’, how can any responsible person come out and say that ‘climate will be so and so’, when the average reader should know that we are dealing with fiction? The reading of tea leaves that has produced reports of the warmest temperatures since the start of the industrial revolution ignores the fact that the thermometer had only just been invented, and that less than a century before, some clever person had found out that there was a gas called carbon dioxide in the air.
        Since then, to find the presence of CO2 you have had to purge the air sample of water vapor to get a ‘correct’ CO2 reading, thus creating a condition resembling an atmosphere we humans could not live in. Or am I correct?
        If you cannot quantify the amount of water vapor in the air at any temperature, height, or barometric pressure, then the climate change issue is dead.
        I know politics and money are involved, but the general population who pay taxes should demand that scientists tell the truth without exemptions, limitations, ignorance, etc.
        Lately HFC was added to the list of ‘dangerous greenhouse gases’. I have never heard how many ppm of this gas are in our atmosphere. It was a commercially sponsored decision, because the patent holder could not, or would not, improve on the patent.
        Water vapor would be in the vicinity of 10,000-40,000 ppm anywhere any time on the globe. Big task.
        Cheers.

      • @Peter Lang. That is a very, let’s say crucial, point Peter. It really is all about the projected damage, and we have almost no estimate of it that is even vaguely accurate.

    • Another example of Segrest’s dishonesty. He says I’ve misrepresented Richard Tol and follows that with a link (implying that the link is to something Richard Tol said to me). But the link is to a comment by Segrest, not Tol, and it is just another baseless assertion. Segrest is a serial disinformer. He does not display the required integrity or ethical standards to be a Professional Engineer, which could explain why he never was one and, perhaps, why he had to change career to farming.

    • Every time Lang is called out, either for his subject incompetence or his skewed one-sided views, all he can do is resort to name calling and/or “DEMAND” that only “HIS” strawman is valid and may be discussed.

      People can go directly to the Dr. Tol extensive interview to see how Lang is cherry-picking and misrepresenting Dr. Tol’s macro views:

      https://www.carbonbrief.org/in-conversation-roger-harrabin-and-richard-tol

  3. What is the uncertainty on the projected damages from human-caused GHG emissions?

    • Uncertainty appears to be somewhere between $0.00 and $infinity.
      From no effect to the total destruction of the earth.
      I consider that to be a fairly wide margin of error.

      • That is about right Mike, yet the issue remains on the table. Public policy is like that. Oh but actually the possible damages may be negative, which widens the margin considerably.

    • Is it time to Unleash the Uncertainty Monster on the Damage Function?

      I hope Judith will turn her attention to posting some threads on the Damage Function, the empirical evidence to calibrate it, and the uncertainties.

  4. This only goes to show that the forcing change in the 20th century is already so significant that unless a model can respond correctly to that, it won’t pass, and that means models with weak responses are ruled out too, as well as those that have too strong a response. This is all after allowing for random decadal variations due to the ocean that are small by comparison with the forcing change.

    • Recently, while preparing for the new model comparisons, MPIM modelers got another chance to demonstrate their commitment to transparency. They knew that the latest version of their model had bugs that meant too much energy was leaking into space. After a year spent plugging holes and fixing it, the modelers ran a test and discovered something disturbing: The model was now overheating. Its climate sensitivity—the amount the world will warm under an immediate doubling of carbon dioxide concentrations from preindustrial levels—had shot up from 3.5°C in the old version to 7°C, an implausibly high jump.

      MPIM hadn’t tuned for sensitivity before—it was a point of pride—but they had to get that number down. Thorsten Mauritsen, who helps lead their tuning work, says he tried tinkering with the parameter that controlled how fast fresh air mixes into clouds. Increasing it began to ratchet the sensitivity back down. “The model we produced with 7° was a damn good model,” Mauritsen says. But it was not the team’s best representation of the climate as they knew it. …

      • “The model we produced with 7° was a damn good model,” Mauritsen says.
        That’s hilarious. That model may be “elegant”, “creative”, “thorough”, “sophisticated”, or any number of other nice things. One thing it cannot be, however, is “good”.
        I’m reminded of a line from the movie Top Gun: “That was some of the best flying I’ve ever seen–right up to the part where you got killed”.

  5. If the modelers are going to open up the black box on modeling, I hope there is a similar focus on the basic data used as inputs to the modeling with regard to global land and sea temperature data sets – in particular, what are the motivations, decisions and methodologies in post-hoc correction of data sets, as in the case of Karl et al., AAAS Science Express, June 2015. NOAA strongly objected to and resisted questioning of this matter… which came in the immediate run-up to the Paris COP21 meetings. And AAAS (Science) strongly defended NOAA’s objections to requests to open the books and records for an open and balanced review and to determine if this was being politically driven from Washington. I believe this is still an open issue.

    • Tuning for satellite datasets purporting to be lower tropospheric temperatures remains opaque, and leads to much more variable results even for the same satellites than independent estimates of surface temperature. That’s where the can of worms is.

      • Sounds like we have no reliable data to model, just questionable crude estimates. The satellite estimates and the surface statistical model estimates do not agree, by a wide margin. If we do not know what the global temperatures are, how can we use climate models to explain them? No data, no science.

      • The satellite estimates don’t even agree with each other or past versions by a wide margin. They have the biggest problems, and that is because they are chaining together results from about 15 different satellites to cover nearly 40 years.

      • What is it you claim is ‘opaque’ ?

        Every nook and cranny of Spencer and Christy’s work has been thoroughly documented and published.

        If this remains ‘opaque’ to you, it’s most likely that you have not even bothered to read a single paper on what they have done over the last 25 years. Pathetic.

      • They have many decisions to make in order to get the results they do, and changing those decisions, as they have in various versions, affects the results a lot. Others would make different decisions with the same data and have. RSS versus UAH for example, but also other efforts at reproducing their results have been published where they can justify different decisions as being better. It’s all subjective with so many moving parts.

      • Tuning for satellite datasets purporting to be lower tropospheric temperatures remains opaque

        The tuning isn’t opaque. There isn’t tuning. Models get tuned. Measurements, particularly those involving indirect effects, have to compensate for factors that influence the measurement. This is a situation, like Antarctic ice mass change, where there are extraneous factors (mostly tectonic in the Antarctic) that influence the measurement and must be compensated for. Plus the process, while very transparent, is highly technical.

        There are two groups that really understand it, which is why UAH and RSS inevitably end up reviewing each other’s work.

      • We don’t see enough criticism of satellite data tuning here. Some instead take it as a gold standard, and it isn’t. That’s all I am saying.

      • Jim D, I repeat. Sounds like we have no reliable data to model, just questionable crude estimates. The satellite estimates and the surface statistical model estimates do not agree, by a wide margin. If we do not know what the global temperatures are, how can we use climate models to explain them? No data, no science.

        I would rather string together 15 satellites that were all designed to measure global temperature than a convenience sample of 1,500 heat-contaminated thermometers, plus a bunch of seawater measurements, all hugely adjusted and then rolled into a questionable area- or field-averaged statistical model. This is about as far from measurement as you can get.

        But if you are right then we simply do not know if it has warmed, or how much, or when. There is literally nothing for science to explain. Is that it?

      • On the contrary, the surface data is much more reliable and covers the whole period since 1950, when most of the warming occurred, and before. We know with a good degree of accuracy how much the warming rate has changed from the first half of the 20th century to the second, and not just the global average but regionally too. This is a major resource for checking models against.

      • “On the contrary, the surface data is much more reliable”

        Really…

        There is far more variation between the large assortment of surface data sets – especially if you include the historic “homogenised” ones – than there is between the satellite data sets, concerning which you just posted “The satellite estimates don’t even agree with each other or past versions by a wide margin”.

      • Yes, I think several skeptics still don’t believe in significant surface warming during the 20th century.

      • “Yes, I think several skeptics still don’t believe in significant surface warming during the 20th century.”

        Sorry Jimbo, you’re making stuff up again.

        I believe no sceptic denies that the 20th century warmed by around 0.5°C – 0.7°C.

        It was almost entirely caused by the World recovering from the Little Ice Age, of course.

      • Do you get these numbers from the surface temperature record that you don’t trust? Or do you trust it after all?

      • It was almost entirely caused by the World recovering from the Little Ice Age, of course.

        Ridiculous… lol. All that condescending tough talk out of you, and then you lay out that line of nonsense?

      • I would ask for a citation, but I know there aren’t any. It’s pseudoscience.

      • Maybe the rebound from the depths of the LIA constitutes a climate model? If so, I’ve heard that 17 years of PAWS is all that is required to falsify such a model.

        As you can see, there’s an 80-year PAWS in the model.

        Of course, being chaotic, complex, and nonlinear, maybe 80 is less than 17.

      • Jim D gets it right when he describes the climate models as follows: “They have many decisions to make in order to get the results they do, and changing those decisions, as they have in various versions, affects the results a lot. Others would make different decisions with the same data and have. […] also other efforts at reproducing their results have been published where they can justify different decisions as being better. It’s all subjective with so many moving parts.”.
        OK, OK, so he wasn’t actually describing climate models, but he should have been.

      • That is why IPCC relies on about 20 models that are independently developed by different groups worldwide. If satellite temps had 20 groups, they would be in a similar situation.

      • Jim D | November 6, 2016 at 12:40 pm |
        >>
        We don’t see enough criticism of satellite data tuning here. Some instead take it as a gold standard, and it isn’t. That’s all I am saying.
        >>

        Well if you think there is something to criticise, go ahead. Of course you won’t because you have not the first idea what you are talking about. You still have not explained why you describe one of the most thoroughly and explicitly documented datasets as “opaque”.

        It may be opaque to you because you are scientifically illiterate and have not even bothered to look at the relevant papers.

        So I suggest it is time to put up or shut up. Making baseless, generalised criticisms about something you have no knowledge of does not cut it.

      • The thing I would criticize is that there are not enough groups doing this independently. Two isn’t enough. One has said it is structurally uncertain or words to that effect, and he would trust surface temperatures more for trends.

      • JimD, “The satellite estimates don’t even agree with each other or past versions by a wide margin. They have the biggest problems, and that is because they are chaining together results from about 15 different satellites to cover nearly 40 years.”

        Perhaps you can do a post on how a 1/1000th of a degree adjustment in the satellite data produces a near chaotic range of satellite data products?

      • Do tell.

  6. If weather is chaotic (and I assume it is, until evidence to the contrary appears), then tuning a model to the past is a complete waste of time.

    As Lorenz showed, even a minute alteration to initial conditions may produce unpredictable results. Now, given the number of atoms in the atmosphere, their changing positions and velocities at any instant, not to mention the influence of such things as photons from the Sun, you find yourself in a bit of a pickle.

    If you decide to average the past, and hope that the future will follow the trend of the past, it is faster and cheaper to use a straight edge, a pencil, and a bit of commonsense.

    The Wright brothers, Lilienthal and all the rest managed without computers. Man flew. Thought, pencil and paper, and experimentation often produce superior results to blind faith in digital computers and all the nonsense they can too easily generate.

    Climate modellers tune their models to predict the past with varying degrees of accuracy. Predicting the future seems beyond their grasp, judging on progress to date.

    Cheers.

    • “Predicting the future seems beyond their grasp, judging on progress to date.”

      And always will be, of course.

  7. I wonder how many deniers will take this proof of uncertainty as proof that there’s no climate-mediated risk involved in digging up all that CO2 and dumping it into the system.

    • AK,

      Maybe you could be so kind as to name one person who believes the weather (and hence the climate) is unchanging.

      Are you just calling anybody who disagrees with your odd notions a “denier”?

      This would seem to fit with the standard practice of Warmists redefining words to suit their current purpose. The use of the words “climate mediated risk” seems nonsensical. I assume you mean to say that there are adverse consequences ensuing from adverse weather. Of course, adverse weather is just weather that doesn’t suit your purpose.

      The storm which scattered the Spanish Armada in 1588 was positively beneficial to the English.

      Svante Arrhenius was of the opinion that warming would lead to a more equable climate, but he was wrong about the GHE, so his thoughts about the beneficial effects of warming may have been wrong too.

      What do you think?

      Cheers.

    • Deniers might. But why do you bother mentioning that here, a sceptic site??

      • “Plenty of deniers here.”

        You, for example?

      • No, I’m with the CE majority, here. You?

      • catweazle666 | November 6, 2016 at 11:57 am |
        “Plenty of deniers here.”

        You must be new here. No, almost everyone is a sceptic. As I am. Read a bit and you’ll see.

      • Punksta: “You must be new here”

        Not by any stretch of imagination, I’ve been here several years.

        My comment “Plenty of deniers here.” was italicised and double quoted, hence referred to another post by another poster.

      • No, almost everyone is a sceptic. As I am. Read a bit and you’ll see.

        Been reading since the site came up. And many (most?) of those “sceptics” are actually deniers. Including you.

        Quite a few alarmists here are also deniers.

      • Only in the eyes of dishonest alarmists are sceptics seen as deniers. Sums you up then.

  8. “Climate modelers open up their black boxes to scrutiny”

    Oh please. Climate modelers have been publishing their methods for years. (But Curry will block this post, for conveying too much information.)

    NASA GISS GCM Model E: Model Description and Reference Manual http://www.giss.nasa.gov/tools/modelE/modelE.htm

    MPI-Report No. 349 – E. Roeckner, G. Bäuml, L. Bonaventura, R. Brokopf, M. Esch, M. Giorgetta, S. Hagemann, I. Kirchner, L. Kornblueh, E. Manzini, A. Rhodin, U. Schlese, U. Schulzweida, A. Tompkins (2003): The atmospheric general circulation model ECHAM 5. PART I: Model description.
    http://www.mpimet.mpg.de/fileadmin/publikationen/Reports/max_scirep_349.pdf
    via
    http://www.mpimet.mpg.de/en/science/models/echam.html

    “Description of the NCAR Community Atmosphere Model (CAM 3.0),” NCAR Technical Note NCAR/TN–464+STR, June 2004.
    http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

    • the issue is model CALIBRATION. Until the CESM-CAM5, I did not see any U.S. model provide any substantial information about calibration.

    • DA, we had this go-around before on other post threads. You cite the CAM3 manual (your last reference). I studied it in detail back in 2012 for the climate chapter of Arts of Truth. It describes in detail the many parameterizations needed. It says nothing about how those parameterizations were tuned to best hindcast. Tuning is the black box. You misstate this post’s issue completely. Now, the last time you said something of the sort I supposed it was lack of knowledge. Not this time. Deliberate misdirection, not good form.

    • David Appell,

      The following does not fill me with confidence –

      “Model development is an ongoing task. As new physics is introduced, old bugs found and new applications developed, the code is almost continually undergoing minor, and sometimes major, reworking. Thus any fixed description of the model is liable to be out of date the day it is printed.”

      I believe the colloquial term for this sort of thing is CYA.

      I’ve had a look at some of the code changes apparently made by one G Schmidt. As a programmer, he’d probably make a good juggler!

      I wouldn’t be surprised if intractable bugs became “features” over time. Oh, wait, . . .

      Cheers.

    • This article reminded me of models being hawked by business consultants. Many of them had parameters that had to be entered judgementally. In practice, users would try different parameter choices until they got an output projection that looked reasonable to them. In reality, the model was providing no information. The models’ only virtue was to cloak the projection in an impressive presentation.

      • David Skurnick,

        I agree with you. The model of a very complex, mostly ignored process provides more information about the modeler than about what is being modeled. They are learning/teaching tools, and when you use them to project the future they only teach you how little you know about the process.

    • I believe there is a warming.
      Have you ever been in a sauna?
      Earth is a sauna with all the water that is occupying the space. If 1/10 mm of the global water surface was boiled off by the sun, then about 36 billion (give or take a billion) tonnes of water vapor would be generated in a 24 hour cycle.
      My theory is that most probably the CO2 has a cooling effect on our daily lives. Maybe we need more of the stuff?
      What the world needs is people with experience, not book learning, and that doesn’t just mean in science, but in engineering, politics, etc.
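
      For what it is worth, the 36 billion tonne figure above is at least arithmetically consistent if the global water surface is taken as roughly 3.6e8 km²; a quick check (the area is an assumed round number):

          # Rough check of the figure above; the global water surface area is an
          # assumed round value of about 3.6e8 km^2.
          ocean_area_m2 = 3.6e8 * 1e6      # ~3.6e14 m^2
          layer_m = 0.1e-3                 # 1/10 of a millimetre
          volume_m3 = ocean_area_m2 * layer_m
          mass_tonnes = volume_m3 * 1000.0 / 1000.0   # 1000 kg/m^3, 1000 kg per tonne
          print(f"{mass_tonnes:.2e} tonnes")          # ~3.6e10, i.e. about 36 billion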

  9. All climate models have inbuilt assessments of ECS. Essential for predicting a future climate. Also inbuilt is an assessment of how the CO2 level is predicted to rise. This gives rise to variable outcomes depending on the preferred assessment of human caused CO2 rise.
    The latter is fine at the moment, since CO2 is sticking to the script, but the former is wrong.
    Despite chaos theory, there is a lot of repeatability and drunkard’s-walk behaviour in the climate, so attempting models is seriously good science.
    We can cope with a ‘this is where it should go’ device, as long as we do not fall for the ‘this is where it must go’ expectation.
    Models need to have all the available past observations built into them, and they should be updated as they go along.
    What is lacking is the time and the algorithms to try variations on input requirements like ECS and see if they help, whether by increasing negative cloud feedbacks or increasing ocean CO2 uptake.
    Still, the pressure is on. If this is the missing step, then the programmers brave enough to defy convention are the ones whose models will win the race to be VHS instead of Beta.

  10. It’s the logical fallacy called “begging the question.”

    He said he’s not guilty. Therefore he must be guilty because guilty people always say that.

  11. Already thought someone would pick up some of the papers from last week’s Science.

    My favorite is actually another paper: “Using climate models to estimate the quality of global observational data sets”.

    Which squares the circle with the groundbreaking new concept of model-based evidence. (To be fair, the paper is actually slightly less outrageous than the title suggests.)

  12. The Voosen link seems to be paywalled.

    • Shame on you that you don’t know a way to read it in full. I suspect you are not interested in science at all, or else you would know the way. Please don’t cry about paywalls, find a way around!

  13. People trying to model non-linear chaotic systems. It’s so innocent and cute, it shouldn’t be allowed really.

    Try fiddling, however minutely, with the values of x and r in the following and watch it wind rapidly out of the solution ballpark (a short numerical sketch follows below).

    https://en.wikipedia.org/wiki/Logistic_map

    A more prosaic example wivout da maffs.

    https://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/

    Pointman
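
    A short numerical sketch of the point, using the logistic map linked above; the values of r and the starting points are arbitrary choices in the chaotic regime:

        # Two logistic-map trajectories, x_{n+1} = r * x_n * (1 - x_n),
        # started a mere 1e-10 apart, part company within a few dozen steps.
        r = 3.9
        x_a, x_b = 0.4, 0.4 + 1e-10

        for n in range(1, 61):
            x_a = r * x_a * (1 - x_a)
            x_b = r * x_b * (1 - x_b)
            if n % 10 == 0:
                print(f"step {n:2d}: difference = {abs(x_a - x_b):.3e}")

    The tiny initial difference grows roughly exponentially until it is as large as the trajectories themselves, which is the sensitivity to initial conditions referred to above.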

  14. Geoff Sherrington

    As part of CMIP6, I would like to see runs from one or more models after the alleged temperature change reported from each station has been halved.
    Or, if you like, some data and statements about how projections for the next few decades alter as the input climate sensitivity is varied.
    Reasons? There are many; here is one close to home. Try as we might here in Australia, our informal group cannot see the 0.9 deg C of warming officially claimed for the 20th century. We can see about half of that at best, with the other half allocated to adjustment.
    This climate sensitivity matter is of much importance in terms of economic and social disruption. If policy makers are so stupid as to plan on unvalidated computer scenarios of climate behavior, then at least we need to get the climate models to be more correct.
    If risk of a loss of pride by modelers is a main factor holding back such tests, then priorities are too far from reality.
    Imagine how the modelling community would feel if, by halving the input of temperature change, everything else ‘fell into place’.
    More likely, such runs have been made, but the outcomes have not been revealed to the public because they show little cause to continue the global fear that has been built up at enormous expense by the UN and others.
    Geoff.

  15. Reblogged this on Wolsten and commented:
    Congratulations to JC for her leading role in bringing climate modelling out into the open. It will be fascinating to see what falls out from this. Whether climate modelling can really improve will still be a tough ask.

  16. Reblogged this on CraigM350.

  17. nobodysknowledge

    “Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. ”
    I saw a study of 20th century ocean temperatures, using models to find the development of OHC. The models could not be used for depths below 700 m because they were clearly wrong. What can a model say about climate when it is wrong for about half of the climate change? So the conclusion: no model should see publication.

    • nobodysknowledge

      Sorry. The models were unsuited for depths of more than 300 m.
      Master’s thesis in Geosciences: Meteorology and Oceanography
      Malin Jeanette Rue. Impact of ocean heat uptake and release on the climate system during the 20th century.
      “In fact, we will see in the next section that there is an
      unbalanced drift in all the models which is why we choose to disregard the deep ocean and focus on the upper 300 m of the ocean”
      So the thesis had to give up on calculating the whole-ocean heat uptake during the 20th century. The differences were too big.
      “computation of OHC from the net energy flux results in much
      greater values than the OHC computed from the potential temperature for all the models. This means that there is an imbalance in the model’s energy budget, i.e. the energy is supplied or extracted from the ocean which can not be accounted for by the air-sea energy fluxes.”
      How can a model say anything about climate change when it is unable to simulate an energy budget?
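
      A toy version of the consistency check the thesis describes, namely upper-ocean heat content change computed (a) from the time-integrated net air-sea flux and (b) from the simulated temperature change; in a closed budget the two agree, and spurious model drift opens a gap. Every number below is an illustrative placeholder, not output of any actual model.

          # Toy energy-budget consistency check; all values are placeholders.
          import numpy as np

          SECONDS_PER_YEAR = 3.156e7
          OCEAN_AREA = 3.6e14   # m^2
          RHO_CP = 4.1e6        # J m^-3 K^-1, volumetric heat capacity of seawater
          DEPTH = 300.0         # m, the upper-ocean layer discussed above

          years = 100
          net_flux = np.full(years, 0.3)   # W m^-2 entering through the sea surface
          leak = 0.1                       # W m^-2 lost to unaccounted model drift

          # (a) OHC change implied purely by the air-sea flux
          ohc_flux = np.cumsum(net_flux * SECONDS_PER_YEAR * OCEAN_AREA)

          # (b) OHC change diagnosed from temperature, which in this toy setup
          #     also feels the leak that the flux bookkeeping cannot see
          dT = (net_flux - leak) * SECONDS_PER_YEAR / (RHO_CP * DEPTH)   # K per year
          ohc_temp = np.cumsum(dT) * RHO_CP * DEPTH * OCEAN_AREA

          gap = (ohc_temp[-1] - ohc_flux[-1]) / ohc_flux[-1]
          print(f"century OHC from flux:        {ohc_flux[-1]:.2e} J")
          print(f"century OHC from temperature: {ohc_temp[-1]:.2e} J")
          print(f"imbalance: {gap:+.0%} of the flux-based estimate")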

    • nobodysknowledge

      I had hoped to find a good estimate of OHC for the 20th century in this thesis, to see the contribution to sea level rise. I have not found it anywhere else either. Does anybody know of one?

  18. If the models are being tuned to the data, how is validation or verification (I am not sure of the difference) possible?

    • Verification involves a thorough description of what has been done in the model. Validation relates to comparison with observations. Both can be done (not easily, but it is straightforward). This yields an empirically adequate model (provided that the model compares well with the validation observations). If the validation data was used in calibration, then there is no reason to have any confidence in future projections of the model.

      • Just a semantic note: Verification is making sure the model is built as specified. Validation is making sure the model produces an output consistent with reality. These are technical terms with specific engineering usage. They are often confused by non-software folks simply because their non-engineering usage is so similar.

      • I see that many discussions are about terminology.

        May I suggest using the terminology defined by the
        Bureau International des Poids et Mesures (BIPM):
        International Vocabulary of Metrology – Basic and General Concepts and Associated Terms, or any other consistent and well-defined set of terminology for that matter.

        Terms like calibration, adjustment, verification and validation are defined in that document.

      • To Counter
        Humpty-Dumpty
        arbitrary word-play
        and Cli-Sci
        chameleon games.

      • Professor Curry,

        “If the validation data was used in calibration, then there is no reason to have any confidence in future projections of the model.”

        Do any climate scientists state that validation data was NOT used in calibration? I’ve had numerous conversations with climate scientists in which they admit that it was, then produce word salads defending the model validations despite this violation of a prime rule in model use.

        At the end of the following post are links to a sample of the literature (2 dozen papers) about model validation. Almost all rely on late 20th century data, the data used in model calibration.

        https://fabiusmaximus.com/2015/09/24/scientists-restart-climate-change-debate-89635/

      • David L. Hagen

        curryja: Are there any models that tune/fit to half the data, then forecast/hindcast the other half and compare the results?

      • David,

        “Any there models that tune/fit half the data and then forecast/hindcast the other half and compare the results?”

        That’s called “out of sample testing” and is a standard method used in modeling — everywhere except climate science. I have asked similar questions of climate scientists. The usual reply is that their models are great — no need for such things.

        But that’s water over the dam; we can’t correct past errors. But there is out of sample data available: emissions and temperature data from after the creation dates of the models. Past models could be re-run using actual emissions (not scenarios), comparing their forecasts with past temperature — after the period against which they were tuned.

        This has been suggested by Roger Pielke Jr, me, and (most significantly) by David J. Frame and Dáithí A. Stone in “Assessment of the first consensus prediction on climate change“, Nature Climate Change, April 2013.

        Perhaps eventually climate scientists will listen. It would be a step towards greater acceptance of their forecasts by the public and policy-makers.
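
        A minimal sketch of that kind of out-of-sample test, using synthetic data and a deliberately trivial statistical stand-in for a climate model; the point is only the split between a calibration period and a hold-out period:

            # Calibrate a toy "model" on data up to a cutoff year, then score it
            # only on later years it never saw. The observations are synthetic.
            import numpy as np

            rng = np.random.default_rng(0)
            years = np.arange(1900, 2017)
            obs = 0.008 * (years - 1900) + 0.1 * rng.standard_normal(years.size)

            cutoff = 1980                    # calibration (tuning) period ends here
            in_sample = years <= cutoff
            out_sample = ~in_sample

            # "Tune" the toy model: least-squares fit on the in-sample period only
            toy_model = np.poly1d(np.polyfit(years[in_sample], obs[in_sample], 1))

            rmse_in = np.sqrt(np.mean((toy_model(years[in_sample]) - obs[in_sample]) ** 2))
            rmse_out = np.sqrt(np.mean((toy_model(years[out_sample]) - obs[out_sample]) ** 2))
            print(f"in-sample RMSE:     {rmse_in:.3f} C")
            print(f"out-of-sample RMSE: {rmse_out:.3f} C")

        A model tuned against the full record cannot be scored this way, which is why the years after a model’s publication date are the only true hold-out, as the Frame and Stone paper cited above argues.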

      • David,

        Three quotes about the importance of out of sample testing:

        “There is an important methodological point here: Distrust conclusions reached primarily on the basis of model results. Models are estimated or parameterized on the basis of historical data. They can be expected to go wrong whenever the world changes in important ways.”
        — Lawrence Summers in a WaPo op-ed on 6 Sept 2016, talking about public policy to manage the economy.

        “One of the main problems faced by predictions of long-term climate change is that they are difficult to evaluate. …Trying to use present predictions of past climate change across historical periods as a verification tool is open to the allegation of tuning, because those predictions have been made with the benefit of hindsight and are not demonstrably independent of the data that they are trying to predict.”
        — “Assessment of the first consensus prediction on climate change”, David J. Frame and Dáithí A. Stone, Nature Climate Change, April 2013.

        “…results that are predicted “out-of-sample” demonstrate more useful skill than results that are tuned for (or accommodated).”
        — “A practical philosophy of complex climate modelling” by Gavin A. Schmidt and Steven Sherwood, European Journal for Philosophy of Science, May 2015 (ungated copy).

      • Is there an out-of-sample climate to test against? Perhaps Mars.

      • However, kidding aside, the 1981 Hansen model with a sensitivity of 2.8 C per doubling, run through today, which was obviously out of sample for the last 30 years, got the warming spot on (1981, Science). Wally Broecker did it from 1975 with a much simpler model.

      • Jim D,

        “Is there an out-of-sample climate to test against? ”

        Yes, there is — observations from after the creation of the model, when run as input to the model (i.e., running the model with actual emissions, not scenarios) and comparing the output with actuals.

        See the comment upthread, with the Frame and Stone cite:

        https://judithcurry.com/2016/11/05/climate-modelers-open-up-their-black-boxes-to-scrutiny/#comment-821905

      • Jim D.,

        “the 1981 Hansen model with a sensitivity of 2.8 C per doubling ran through today, which was obviously out of sample for the last 30 years, and got the warming spot on ”

        That is a big claim. Do you have a supporting cite from the peer-reviewed literature or the IPCC?

        It’s not just a matter of overlaying the temperature prediction with actual temps. The emissions forecast used in the model must match actual history, or the result is just GIGO.

        Similar claims are sometimes made for Hansen’s 1988 JGR paper. Julia C. Hargreaves attempted to verify them, but reported that “efforts to reproduce the original model runs have not yet been successful” (WIREs: Climate Change, July/Aug 2010).

      • And the 2017 ENSO forecast, if about right, will probably put 2017 above the mean as well, which means the SPM’s 0.2 °C per decade for the first two decades of the 21st century will be in range:

        https://pbs.twimg.com/media/CoydY7vVIAEk__i.jpg:large

      • With reference to official usage:

        FROM JCGM 200:2012

        2.44
        verification
        provision of objective evidence that a given item fulfils specified requirements

        2.45
        validation
        verification, where the specified requirements are adequate for an intended use

        FROM WIKIPEDIA: Software verification and validation

        According to the Capability Maturity Model (CMMI-SW v1.1),

        Software Verification:
        The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]

        Software Validation:
        The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610]

        In other words, software verification is ensuring that the product has been built according to the requirements and design specifications, while software validation ensures that the product actually meets the user’s needs, and that the specifications were correct in the first place. Software verification ensures that “you built it right”. Software validation ensures that “you built the right thing”. Software validation confirms that the product, as provided, will fulfill its intended use.

        (The above paragraph is copied and pasted directly from wikipedia)

      • You can check Hansen’s 1981 emission scenario and it was also about right, so the temperature matched and the CO2 matched. It was a big claim at the time that the temperature would rise more than 0.5 C because that was outside the natural variability of the previous century. For other out-of-sample tests, you could look at previous IPCC scenarios (AR1-AR3) and how they verify past their publication date. I’m sure someone must have done this.

      • Jim D.

        I asked “That is a big claim. Do you have a supporting cite from the peer-reviewed literature or the IPCC?”

        Your reply: “You can check Hansen’s 1981 emission scenario …For other out-of-sample tests, you could look at previous IPCC scenarios (AR1-AR3) and how they verify past their publication date. I sure someone must have done this.”

        Reading your comments I often wondered if you were just making stuff up. Thank you for confirming this.

        If Hansen’s 1981 paper were confirmation of his model, it would be big news. He and others would have published p-r papers about it. Also, the model results cited in the IPCC’s ARs do not claim to have used out-of-sample data for testing.

        If you believe otherwise, please provide a cite — not just more assertions to support your previous assertions.

      • Editor Fabius, I am saying that anyone can do the out-of-sample testing with the older model forecasts that have already gone out of sample since they were published. One example was Hansen 1981, but you can take old IPCC predictions too.

      • Jim D.

        “I am saying that anyone can do the out-of-sample testing with the older model forecasts that have already gone out of sample since they were published.”

        (1) No, that’s not what you said. You made specific assertions about the result of such testing. Let’s see some p-r research confirming your assertions. If true it is very big news, and you have the idea for a great article. I suggest submitting to Eos.

        (2) No, not anyone can do such testing. It’s complex. As you can see by reading the two dozen papers cited here (with links) about climate model testing:

        https://fabiusmaximus.com/2015/09/24/scientists-restart-climate-change-debate-89635/

    • Verification means comparison with truth (veritas=truth), so this is when a forecast is compared to reality.

      • Well, you just revealed your partisan ignorance upthread, JD. Verification precisely means the model programs function as specified, AKA no software bugs. Validation precisely means verified models reproduce real-world results. Now produce any evidence that either has ever been accomplished with coupled climate models to these garden-variety industry (not ‘climate science’) specs. Without which, honest engineers would go to jail for malfeasance.

      • In weather forecasting they say the forecast verifies when a case agrees with observations. If you said a forecast validates it would sound strange. Perhaps a lot of verifications can validate a model, but a single forecast does not validate a model. That’s the difference between verify and validate in my mind.

      • JD, the difference ‘in your mind’ does not change the differences recognized in the entire liable-for-failure engineering community. Wrong.

      • That is what I said.

      • This is what the WMO means by Verification. It is checking a forecast against observed data. This is very different from the basic software check described by V&V because it is more like what they call Validation.
        http://www.wmo.int/pages/prog/amp/pwsp/qualityassuranceverification_en.htm

      • Jim D,

        It’s a pity that the verification link you provided gives the following –

        “The New Zealand verification scheme allows a maximum of eight points for each forecast for a 12-hour period, (say 6am to 6pm today), as follows:

        four points for precipitation
        two points for cloud cover
        A point each for wind direction and speed.”

        Nothing at all about temperature, or CO2.

        What a surprise!

        Maybe you could provide something relevant to your bizarre GHE forecasting claims? Or maybe not?

        Cheers.

      • MF, your question is not relevant to this thread.

      • “Validation precisely means verified models reproduce real world results. ”

        Wrong

        https://standards.ieee.org/findstds/standard/1012-2004.html

        “Verification: (A) The process of evaluating a system or component to
        determine whether the products of a given development phase satisfy the
        conditions imposed at the start of that phase. (B) The process of providing
        objective evidence that the software and its associated products conform to
        requirements (e.g., for correctness, completeness, consistency, accuracy) for all
        life cycle activities during each life cycle process (acquisition, supply,
        development, operation, and maintenance); satisfy standards, practices, and
        conventions during life cycle processes; and successfully complete each life
        cycle activity and satisfy all the criteria for initiating succeeding life cycle
        activities.”

        Validation: (A) The process of evaluating a system or component during or at
        the end of the development process to determine whether it satisfies specified
        requirements. (B) The process of providing evidence that the software and its
        associated products satisfy system requirements allocated to software at the
        end of each life cycle activity, solve the right problem (e.g., correctly model
        physical laws, implement business rules, use the proper system assumptions),
        and satisfy intended use and user needs.

        #######################

        formally, technically, specifically validation means

        MEETS THE SPEC

        not matches reality

        MEETS THE SPEC

        The reason for this is simple. Take CFD code… or any complicated simulation of physical processes. When you know in advance that you cannot PERFECTLY MATCH REALITY, you don’t require or SPECIFY the impossible. Validation means meets the spec, not matches reality.

        MEETS THE SPEC..

        repeat that until it sinks in.

        As it stands a GCM gets the temperature right to within 10%.

        20% is good enough for government work. I would SPEC “the system shall produce a temperature (etc.) that is within 20% of the actual value”.

        Easy SPEC, easy to validate..

        here’s a guy smarter than rud, take special note of “Face Validity”

        http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.51.4210&rep=rep1&type=pdf
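
        A minimal sketch of what a check against such a spec could look like; the 20% tolerance is the hypothetical requirement quoted above, and the numbers are placeholders rather than real model output or observations:

            # "Validation = meets the spec": every modelled value must be within
            # a 20% relative tolerance of the corresponding observed value.
            def meets_spec(modelled, observed, tolerance=0.20):
                return all(abs(m - o) <= tolerance * abs(o)
                           for m, o in zip(modelled, observed))

            observed_anomaly = [0.31, 0.45, 0.62, 0.70]   # placeholder values, K
            modelled_anomaly = [0.35, 0.41, 0.66, 0.77]   # placeholder values, K

            print("meets the spec:", meets_spec(modelled_anomaly, observed_anomaly))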

      • Note to Jim D: in our community, the words verification and validation have different meanings than the meanings in the software/engineering/regulatory science worlds. I tend to mix up the terms also.

      • Steven Mosher wrote:

        formally, technically, specifically validation means MEETS THE SPEC

        :o) And SPECS are determined up front. Very simple but to many a foreign concept.

      • And SPECS are determined up front. Very simple but to many a foreign concept.

        And which is the step where it’s demonstrated that a program that performs according to specification actually meets the requirements?

        (In business programming we call it User Acceptance Test. Except when the UAT is run like a system test, in which case it’s the first few months of production.)

      • Steven Mosher

        “:o) And SPECS are determined up front. Very simple but to many a foreign concept.”

        yup.

        here is the fundamental problem. There is NO SPEC for a climate model.
        no formal document, no informal document, no expectation of what the thing “SHALL DO”

        Of course they were built as tools for exploring questions.. Like
        ‘what happens to the temperature if a volcano blows… like THIS BIG
        and for this Long.. Questions that cannot be answered from history.

        What happens if X occurs?

        And they are built by scientists for scientists.. for providing insight or for testing in a gross way our understanding of the climate.

        Now.. trying to do a spec after the fact..

        political

        been there done that

      • AK wrote:

        And which is the step where it’s demonstrated that a program that performs according to specification actually meets the requirements?

        That really is an interesting question. While this answer is short I do not intend it to be dismissive. I would say both verification and validation. This of course would be sorted and specified upfront in part of the QA documentation.

        In both cases one may well need to run some simpler problems with the code being tested, configured for comparison against hand calculations and/or simpler numerical codes’ results. But to me it is also conceivable that ‘results’ derived from local outputs of tested configurations might ideally be compared with ‘results’ derived from local observations. I write ‘results’ in quotes here to suggest that further, secondary machinations with the code’s direct output and/or the observations may be needed.

        Checking the structural aspects of the model (physical models and interactions) falls under validation. Here I think that a good descriptive conceptual model of the problem is required: the component processes and interactions, and the location, nature and justification of boundary conditions and initial conditions. The conceptual model consists of text and related graphics.

        Anyway, AK , that is how I see it from the peanut gallery at this point in time.

        mwg

        This all neglects the fact that the codes are already written and in use, so as an ugly practical matter a compromise regarding the time ordering of the steps will have to be pressed into service. Messy but unavoidable.

      • Now.. trying to do a spec after the fact..

        political

        Not necessarily. I’ve done it, with systems I inherited with no documentation.

        You just start with the code, back off to pseudo-code, and continue backing off, identifying fields by business function, until you have detailed specs all the way up. Usually the response from the business manager(s) is something like "so that's why everything's f*cked up when we use it!"

        Once you have specs for the original system, it’s a lot easier to identify requirements and specs for the new (i.e. altered) system.

        What makes it political, IMO, is when somebody doesn’t want to admit that the existing system isn’t doing what people thought it was. I can imagine how it would go with climate models.

      • Steven Mosher

        Yup :O)

      • AK,

        Also yup. Sometimes it seems like being the non-descript little man walking behind the elephant in a parade. Gawd, I hated that!

  19. “They want to know whether the model output they value—say, its predictions of Arctic sea ice decline—arises organically or is a consequence of tuning.”

    Tuned to death, as the circulation models mostly agree that rising GHGs will increase the positive NAO/AO, while the AMO and the Arctic warming since the mid-1990s are negative NAO/AO driven.
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-3-5-6.html

  20. The models are produced to support the desired political outcome.

    • Indeed, to my knowledge every major model is sponsored by a government that endorses the hypothesis of dangerous human caused global warming. They get what they pay for.

      • Note that CMIP itself is run by the US Dept. of Energy.

        @David Wojick

        “Note that CMIP itself is run by the US Dept. of Energy.”

        I usually agree with all you say… but this is flat wrong. The DoE doesn't "run" anything. The DoE doesn't have scientists or labs; it simply allocates money (lots of it, in fact) to universities, labs, etc.

      • Yes, and if the purse strings are controlled by government, this strongly influences the work being done to fit the ideological outcome desired by the administration which controls the department and names the top administrators. Does anyone in the room think Obama a) understands a whit about, or b) even cares about, "climate science"?

      • It is not quite true that the Dept. of Energy doesn't have labs. It has many national labs, like Argonne and Savannah River. While technically run by entities like U. Chicago or Westinghouse (respectively), these labs have DoE staff on site and can only do work that DoE tells them to do. They do not operate in a hands-off manner like NSF.

    • The US contribution to IPCC AR4 was produced during the Bush administration, so doesn’t that dash this theory? If not, why not? Science is not determined by government fiat or an individual nation’s whims. It just is what it is, and is global.

  21. CMIP restricts the models to a specific set of causes of temperature change. Tuning the models to the past data then implicitly assumes that just that set of causes is operating and no others. This groundless assumption is the fundamental flaw in the entire exercise.

  22. “The ability to faithfully generate other climate phenomena, like storm tracks or El Niño, is just as important.”

    Faithfully generating ENSO is not an internal affairs job, but acknowledging the solar factors driving it would help create a far more faithful global climate model.

  23. Pingback: El ignorado “tuneado” de los modelos climáticos, y sus consecuencias | PlazaMoyua.com

  24. Different ways of tuning amount to different assumptions about the physical processes being modeled. These differences are therefore unresolved scientific questions. That they have been hidden is bad science indeed.

    • It will be interesting to see what these different “strategies ” are.

      Basically, all the parameter tweaking to reproduce recent climate is a crude manually driven form of multivariate regression fitting.

      They do something akin to a least-squares test, then try a new tweak to see whether it is a closer fit. The trouble is that there are so many poorly constrained variables that there are probably thousands of possible combinations that will produce equally good fits, most of which will be equally useless outside the calibration period.

      It's a classic case of over-fitting the data: too many degrees of freedom to get a meaningful result.

      This, of course, allows them a free hand in poking certain parameters to produce highly sensitive models.

      Over-fitting is a well-known problem in multivariate regression. If you have too many free parameters, the fit will likely converge quickly to a result, but you have no way to know whether it is a usefully correct one. In fact you can be pretty sure that it is not.

      This has never been a problem in the climate modelling community, it seems.
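The over-fitting point is easy to illustrate numerically. The sketch below is a toy example only, not a claim about how any modelling group actually tunes: the same noisy series is fitted with a few and with many free parameters, both fit the calibration period well, and the heavily parameterised fit is useless beyond it.

```python
import numpy as np

# Toy illustration of over-fitting: the numbers and the "model" are invented.
rng = np.random.default_rng(0)
x_cal = np.linspace(0.0, 1.0, 30)
y_cal = 0.8 * x_cal + 0.1 * rng.standard_normal(30)   # gentle trend plus noise

fit_simple = np.polyfit(x_cal, y_cal, deg=2)    # 3 free parameters
fit_over = np.polyfit(x_cal, y_cal, deg=12)     # 13 free parameters

for name, fit in [("simple", fit_simple), ("over-fitted", fit_over)]:
    rms = np.sqrt(np.mean((y_cal - np.polyval(fit, x_cal)) ** 2))
    print(f"{name}: RMS error inside calibration period = {rms:.3f}")

# Outside the calibration period the over-fitted curve runs away
x_out = np.array([1.1, 1.2, 1.3])
print("simple fit beyond calibration:     ", np.round(np.polyval(fit_simple, x_out), 2))
print("over-fitted fit beyond calibration:", np.round(np.polyval(fit_over, x_out), 2))
```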

  25. The climate models in question are all elaborations upon a basic supposition: rising CO2 causes rising GMT. The claim is that the past temperature record cannot be replicated by models without that premise. Yet these same models do a poor job in hindcasting, and in forecasting from 2005 to date.

    There is a simple mathematical model using a different premise: GMT varies as a function of solar and oceanic oscillations. And it achieves a high correlation with the record.

    https://rclutz.wordpress.com/2016/06/22/quantifying-natural-climate-change/

    • “GMT varies as a function of solar and oceanic oscillations.”

      Yes, that’s a more likely reason than a few extra ppm of plant food.

      But it would be very difficult to justify the mass movement of resources from the First World to the Third World and the absolute centralisation of power to a single Global government according to that theory, Ron.

      Now, if you can overcome that minor problem, you’ll be on a winner!

      THAT should be worth a Nobel prize if anything is!

  26. This is a huge step forward in transparency. I applaud it.

    I wonder if they re-tune models when their favorite temperature index is revised?

    • No need. The temperature index was "corrected" to be closer to the model output. That is the whole point of correcting the data: it's a lot easier than making the model work and having to change your politics.

  27. As a matter of tuning, what can compare with the Manabe and Strickler (1964) concept of equilibrium and thermal gradients set by an adiabatic constraint due to convection? Apart from Chris Essex, the entire climate community evidently accepts this notion as gospel. But I've yet to discover a value for this 'equilibrium' convective energy flux. Is it zero? I've seen papers asserting that, for sub-adiabatic gradients, convection actually reverses direction!

    Or, is this perhaps just a programmer’s assumption to simplify computation? Where is a critical evaluation of the errors introduced? Outside the climate science community, it has long been understood that the atmosphere is unstable with respect to convection when thermal gradients exceed an adiabatic level and that the close correspondence with measured gradients indicates large convective flux adjustments require minimal gradient changes. But, if this were so, then why should not minor convective adjustments overwhelm any increased impedance for energy flow due to GHGs?

    In many physical problems, potentials and potential gradients enter nearly equally into finding solutions. In climate science, gradients are dismissed as ‘equilibrium’ constraints and only temperature needs be considered.

    Abraham Lincoln reportedly once asked, “If you call a dog’s tail a leg, how many legs does a dog have?”

  28. There is an irony here. CMIP stands for the Coupled Model INTERCOMPARISON Project (emphasis added). The original idea was to highlight the great scientific differences among the models. But under Gore it became the de facto climate model coordination project, standardizing inputs for scary IPCC modeling. Perhaps it will now return to its roots.

  29. Curious George

    I would love to see an honest discussion of the accuracy of models. So far, even models using a wrong latent heat of water vaporization do not stand out from the crowd.

  30. Hi Judy – Well summarized.

    “I wonder if these same climate modelers realize the can of worms that they are opening”

    I agree – Indeed a next step is for them to actually compare model predictive skill of changes in regional climate statistics (in hindcast runs) with observations. It is not pretty when they have done this; e. g. see

    https://pielkeclimatesci.wordpress.com/2012/10/09/quotes-from-peer-reviewed-paper-that-document-that-skillful-multi-decadal-regional-climate-predictions-do-not-yet-exist/

    In my recent Physics Today article, I proposed testing these two hypotheses:

    http://scitation.aip.org/content/aip/magazine/physicstoday/article/69/11/10.1063/PT.3.3364;jsessionid=y4Tdg5G_LFvT3XztPRaIFS6s.x-aip-live-03

    “However, are CO2 levels and global averaged surface temperature sufficient to generate accurate and meaningful forecasts? Two leading hypotheses have emerged.

    “The first argues that the accuracy of climate forecasts emerges only at time periods beyond a decade, when greenhouse gas emissions dominate over other human forcings, natural variability, and influences of initial value conditions. The hypothesis assumes that changes in climate are dominated by atmospheric emissions of greenhouse gases, of which CO2 is the most important. It represents the current stance of the Intergovernmental Panel on Climate Change and was adopted as the basis of the Paris agreement.

    A second hypothesis is that multidecadal forecasts incorporating detailed initial value conditions and regional variation set an upper bound on the accuracy of climate projections based primarily on greenhouse gas emissions. According to that view, successful models must account for all important human forcings—including land surface change and management—and accurately treat natural climate variations on multidecadal time scales. Those requirements significantly complicate the task of prediction.

    Testing the hypotheses must be accomplished by using “hindcast” simulations that attempt to reproduce past climate behavior over multidecadal time scales. The simulations should be assessed by their ability to predict not just globally averaged metrics but changes in atmospheric and ocean circulation patterns and other regional phenomena.”

    These hypotheses are at the core of claims given to policymakers, the impact community, and others on the certainty of changes in climate in the coming decades. If the second hypothesis is correct, it invalidates millions of dollars of model results that have been given to those communities. As written in the post:

    "Schmidt points out that these models guide regulations like the U.S. Clean Power Plan, and inform U.N. temperature projections and calculations of the social cost of carbon. 'This isn't a technical detail that doesn't have consequence,' he says. 'It has consequence.'"

    If Hypothesis #2 is correct, these regulations are based on a defective use of science.

    Roger A. Pielke Sr.

  31. A correct climate model should be able to be started with a best estimate of conditions ten thousand years ago and run for ten thousand years, repeating cycles that look similar to the actual cycles. There is no need for an exact match, but the upper and lower bounds and the lengths of the cold and warm times should not be too far different. Or even just runs long enough to match a few cycles. The SH and NH cycles look similar, but they do not match and they do not mirror each other; the two hemispheres operate on different time schedules. I have never seen climate model output that had different temperature plots for the two hemispheres; in real life, they are different in the timing and length of the cycles. If you collect different proxies they look different, but none of them go out of bounds.

    • It seems that the models can only simulate a few years before they have to be pulled back on track by adjustments.

      « When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift. The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend. It is important to note that the systematic errors illustrated here are common to both decadal prediction systems and climate-change projections. The bias adjustment itself is another important source of uncertainty in climate predictions. There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques.»
      (Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments )

      It seems that at least some authors of the IPCC assessment report kind of noticed the need for adjustment and tuning of models. But obviously, the alarm bells did not ring. And this weakness did not find its way to the summary for policymakers.
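For readers unfamiliar with the a posteriori bias adjustment described in the quoted passage, here is a schematic sketch of its simplest (linear, lead-time-dependent) form; the numbers are invented, and no actual prediction system's code is implied.

```python
import numpy as np

# Schematic of the a posteriori bias adjustment described in the quoted IPCC
# passage: estimate the mean drift of hindcasts away from observations as a
# function of forecast lead time, then subtract it from new forecasts.
# All numbers are invented for illustration.
rng = np.random.default_rng(1)
n_hindcasts, n_leads = 20, 10

obs = rng.standard_normal((n_hindcasts, n_leads))                 # "observed" anomalies
drift = 0.05 * np.arange(n_leads)                                 # bias growing with lead time
hindcasts = obs + drift + 0.2 * rng.standard_normal((n_hindcasts, n_leads))

mean_bias = (hindcasts - obs).mean(axis=0)                        # lead-time-dependent bias

new_forecast = rng.standard_normal(n_leads) + drift               # a fresh, drifting forecast
corrected = new_forecast - mean_bias

print("estimated bias by lead time:", np.round(mean_bias, 2))
print("corrected forecast:         ", np.round(corrected, 2))
```

As the quoted passage itself warns, this assumes the bias is stable over the prediction period; a drift that changes with the background state is not removed by subtracting a fixed mean bias.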

  32. “and so build understanding of and trust in the models.”

    I have a philosophical problem with the above JC reflection. Building trust is something a salesman would do. I don't like science discussions that feel the need to include salesmanship.

    Andrew

  33. It turns out you can literally fit an elephant with four parameters if you allow the parameters to be complex numbers.

    http://www.johndcook.com/elephant.png

    Ref.: How to fit an elephant with four parameters:
    “John von Neumann famously said:
    With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

    By this he meant that one should not be impressed when a complex model fits a data set well. With enough parameters, you can fit any data set.”
    – John D. Cook

    • “… if you allow the parameters to be complex numbers.”

      i.e. you need 8, not four. That's called double dipping.

      I think von Neumann was referring to the upper profile of the elephant, not its feet, but it's just to make the point.

      • Ok, I take that point. :)

        Anyhow, literally any series of data can be fitted by a model with quite a few parameters. One example is Fourier analysis.

        http://i.stack.imgur.com/9ztB6.png

        By Fourier analysis of a data series we can get a good match between Fourier synthesis and the data series. The data series can then be extrapolated by Fourier synthesis, without the model having any predictive capabilities whatsoever.
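The Fourier point can be made concrete in a few lines. A full Fourier synthesis reproduces any finite series essentially exactly, but the fitted model is periodic by construction, so its "extrapolation" merely replays the calibration data; the sketch below uses a random walk, which has no periodic structure at all.

```python
import numpy as np

# Illustrative only: a Fourier fit reproduces the data, but its periodic
# "extrapolation" says nothing about a non-periodic process.
rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(128))          # a random walk: no periodicity to find

coeffs = np.fft.rfft(y)                          # Fourier analysis
y_fit = np.fft.irfft(coeffs, n=len(y))           # Fourier synthesis
print("max reconstruction error:", float(np.max(np.abs(y - y_fit))))

# Periodic extension = the model's "forecast" for the next steps
y_forecast = y_fit[:8]
y_actual = y[-1] + np.cumsum(rng.standard_normal(8))
print("model 'forecast':   ", np.round(y_forecast, 1))
print("actual continuation:", np.round(y_actual, 1))
```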

  34. Within any other area of importance, Industry or governments will not accept test results unless they are provided by an independent laboratory accredited in accordance with ISO 17025:

    “ISO/IEC 17025 General requirements for the competence of testing and calibration laboratories is the main ISO standard used by testing and calibration laboratories. In most major countries, ISO/IEC 17025 is the standard for which most labs must hold accreditation in order to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited.” – Wikipedia

    This is the standard imposed by governments upon European industry emitting even the tiniest amounts of CO2, while the reasoning behind this tremendous change to our societies is nowhere near being tested in the same manner.

    Just to mention it: within measurement we would not allow an instrument to be heavily tuned to the conditions of the test, because then we would know nothing about its capabilities when it has not been tuned.

  35. “Schmidt points out that these models guide regulations like the U.S. Clean Power Plan, and inform U.N. temperature projections and calculations of the social cost of carbon. “This isn’t a technical detail that doesn’t have consequence,” he says. “It has consequence.”

    Of course Schmidt is correct and the EPA’s Endangerment Finding is in fact based upon the models the modelers have constructed. US policy and POTUS Obama’s championing of these EPA’s CO2 mitigating policies could represent nothing more than the utterances of just another smooth talking magic carpet salesman.

    Envision Obama’s legacy of Obamacare and removing coal from energy production as having false premises for politically expedient outcomes that have lasting negative consequences. And this is the legacy Hillary wants to continue?

    The Uncertainty Monster is our inability to predict the future. There is a certain hubris believing that, if we collect enough information, or enough of the “right” information, we will be able to predict the future.

    For me at least, I have little interest in such predictions of the future. Life itself is a mystery of what is coming next. I would become bored knowing today what tomorrow, and tomorrow's tomorrow, will hold. Would I have any keen excitement to watch the Cubs and Indians play? Humans would no longer be human with the future laid out before them.

    Cue the cockroaches.

  36. The Climategate scandal has revealed that society is so distracted by technological, physical connections that our deeper, more fundamental spiritual connections to reality are mostly ignored – except when we are individually alone with Nature.

    • beththeserf

      I see myself in that orchestra, playing the cello, sixth chair, just out of the picture, adding to the folk stories by Mussorgsky, roaming amongst the major and minor keys.

      “C” as you portray it, seems to be for Cacophony. A suite of sour notes, written by sour artists, to play for an audience of sour persona.

  37. What the models should match, given that climate is spatio-chaotic, is statistics.

  38. This will indeed open a can of worms. Because of the 6-7 orders of magnitude of computational intractability at grid scales fine enough to adequately model essential processes like convection cells, all GCMs have to be parameterized. The specific CMIP5 'experimental design' called for hindcasting from YE 2005 back to 1975. Parameters were tuned to best do so. This inherently drags in the attribution problem. AR4 SPM fig. 8.2 specifically said the essentially indistinguishable warming from 1920-1945 could not be attributed to the GHE, while that from 1975-2000 was. Natural variation did not cease in 1975. But there is not yet enough information to attribute some part of the CMIP5 tuning period to natural variation rather than GHE forcings. More exposure of tuning methods will expose this fundamental flaw in the whole modeling enterprise.

    • “The specific CMIP5 ‘experimental design’ called for hindcasting YE 2005 back to 1975”

      Even if by some miracle this magic computer game climate model succeeds in doing so, the probability of it being able accurately to predict the future is precisely zero.

  39. The really scary conclusion is that greater transparency with regard to climate models at this late date may not matter. The train may be about to leave the station.

    Coinciding with the November 4, 2016 effective date of the Paris agreement on climate change, climate change activists have publicly highlighted the need to spend trillions of green dollars on environmental projects and to limit carbon emissions with “a big, fat price on carbon.”

    Think about this hypothetical scenario. Near the end of the current inter-glacial period, say, in 2021, the global mean temperature trend begins to decline. If Hillary wins the election, this observed temperature decline, simply by chance, would have followed massively accelerated expenditures on green initiatives to reduce carbon emissions.

    Hallelujah! The climate change activists would declare victory, and the world would be forever mired in a spiraling-out-of-control economic disaster fueled by unnecessary spending on more green initiatives. For the past 25 years, climate change activists have claimed a primary cause-and-effect relationship between rising temperatures and rising concentrations of CO2 in the atmosphere, which has been shown to be demonstrably false. They would now assert a cause-and-effect relationship between a declining global temperature and mankind-inspired initiatives intended to reduce carbon emissions, also demonstrably false.

    The message that the science must be right before wasting trillions of dollars of the nation’s wealth on ill-considered environmental projects will be left at the station. Katy bar the door!

    • “a primary cause and effect relationship between rising temperatures and rising concentrations of CO2 in the atmosphere, which has been shown to be demonstrably false”
      Where? Observations so far back it up.
      http://www.woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/scale:0.01/offset:-3.2/plot/gistemp/mean:120/mean:240/from:1950/plot/gistemp/from:1985/trend

      • ” Observations so far back it up.”

        Oh dear, here we go again…

        Write out 1,000 times “correlation DOES NOT imply causation”

      • Correlation is evidence for the AGW theory.

      • Jim D,

        You can’t even find evidence of a falsifiable GHE hypothesis, let alone an AGW theory!

        Hopin’, and a-wishin’ . . . – ain’t facts.

        Cheers.

      • Once again, what I showed is evidence to support a theory. It doesn't prove it, but it can make other theories rather dubious, such as the one that says GHGs have no effect.

      • Jim D,

        From Wikipedia (not perfect, but it’ll do) –

        “A scientific theory is a well-substantiated explanation of some aspect of the natural world that is acquired through the scientific method and repeatedly tested and confirmed, preferably using a written, pre-defined, protocol of observations and experiments. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge.”

        Maybe you could produce a copy of his theory. I’m particularly interested if climatologists have a ” . . . written, pre-defined protocol of observations and experiments.”

        Climatologists claim to be measuring surface temperatures, for example, when it transpires that their surface measurements are anything but. They assume that the air temperature above whatever is currently overlaying the Earth's surface (depending on the definition of "surface", of course) is the same as the surface temperature. Or not – who knows?

        The temperature of the majority of the surface covered by water is totally disregarded. Fabricated temperatures of the water surface, or some unknown depth below it, suffice for climatologists.

        The nonsensical GHE is wishful thinking, unsupported by normal science.

        The deluded leading the gullible. Have you a theory, or even a falsifiable GHE hypothesis? You'll have to do a little bit better than pointing out that CO2 can be heated, and will cool if allowed to do so! This might be miraculous to climatologists, but it has been known for quite some time. All matter shares this property.

        Your claim of showing evidence is just nonsense, if you can’t provide at least a falsifiable GHE hypothesis which might be supported by your so-called “evidence”.

        Cargo Cult Scientism, no more, no less.

        Cheers.

      • The bottom line of the theory is 2-3 C per doubling and the temperatures in the last century or so support that.

      • Jim D: “The bottom line of the theory is 2-3 C per doubling and the temperatures in the last century or so support that.”

        You’re telling porkies again Jimbo, they do absolutely nothing of the sort – and I’m damn sure you know it..

    • The bottom line of the theory is 2-3 C per doubling and the temperatures in the last century or so support that.

      Earth warmed into the Roman and Medieval Warm periods without manmade CO2. It is really stupid to believe something different has now caused a similar warming.
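For readers who want to see the arithmetic behind the "2-3 C per doubling" claim, the back-of-envelope sketch below uses round numbers (CO2 rising from roughly 300 to roughly 400 ppm over the last century), assumes the full equilibrium response is realised, and ignores every other forcing, ocean lag, and natural cycle; it settles nothing in the exchange above, it only shows what the per-doubling figure translates to over such a period.

```python
import math

# Back-of-envelope only: round CO2 numbers, equilibrium response assumed, and
# every other forcing, ocean lag, and natural cycle ignored.
co2_then, co2_now = 300.0, 400.0                 # approximate ppm a century apart
doublings = math.log(co2_now / co2_then, 2)      # about 0.42 doublings

for ecs in (2.0, 3.0):                           # the quoted 2-3 C per doubling
    print(f"{ecs} C/doubling -> {ecs * doublings:.2f} C implied over the period")
```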

  40. Reblogged this on Climate Collections.

  41. I’m a bit late here, sorry, but I’ll stick in my tuppence worth and hope it’s welcome. I’m not a scientist, I’m not even well-educated. I’m one of the feeble minded, great unwashed, that are either terrified of GW or puzzled that a valuable trace gas (CO2) which we were led to believe, during rudimentary secondary school science (I’m talking 70’s here folks), is the gas of life, is now a poison.

    As I represent a large majority of taxpayers (not literally of course) across the globe, I hope I have the ability to comment here. My belief is that the political, business and scientific communities have the moral responsibility to represent us thickos honestly and fairly. We don't get it, and surely the job of scientists is to investigate the world, on our behalf, then deliver results in a digestible form. If you have followed the Brexit debate recently (probably a bigger global deal than the Clinton/Trump debacle) you will have seen how badly represented the working man is here by scientists, politicians and the media.

    I digress. I dug a little further into the scientific world and I can’t find any conclusive proof that CO2 causes GW, far less anthropogenic CO2. I’m happy to be proven wrong, but I have caused many arguments on a variety of forums by asking that simple question. No one, to date, has shown me the conclusive proof, therefore I must assume the entire CO2 derived, GW/AGW debate is predicated on a hypothesis. Indeed, there seems to be considerable evidence from observational studies that CO2 has lagged temperature rise.

    I also note a study published on the NASA website which shows the planet has greened by 14%, two continents the size of mainland USA of extra greening in the last 30 years, 70% of which is attributed to extra atmospheric CO2. Observational science Vs. a hypothesis?

    My question now is, whilst the planet has reacted positively, by 14% to GW(?) increased CO2, what negative reactions to GW(?) represent a 14% increase? Temperature? Humidity? Sea level rise? Hurricanes? Droughts? etc. etc……….Again, I’m happy to be proven to be talking rubbish.

    I was also reminded of a thought that whizzed through my thick grey matter some time ago.

    I couldn’t understand why, in the face of overwhelming criticism, virtually every westernised nations government collectively, and wholeheartedly, support the concept of AGW. My opinion is that it’s nothing more than business, but not in the sense that it generates profit (although it does for many individuals) rather that it contributes to the lifeblood of business.

    World economies have been in crisis for donkey’s years. The 2008 banking crisis wasn’t unexpected, on the contrary, it was considered merely a matter of time. It exposed the west’s comprehensive mishandling of taxpayers money, many countries teetering on the verge of bankruptcy. And whilst an efficient business should always walk the tightrope of success and bankruptcy, it is unforgivable that politicians allow countries to even dip a toe into the latter.

    But the problem, it seems to me, wasn’t the ‘wealth’ of any particular country, but rather the turnover of taxpayer money. There just isn’t enough revolving round the global marketplace to sustain the global business model.

    Along comes GW, and a perfect opportunity to generate taxpayer income to inject into the system by creating 'fear of loss' of the planet. But the cunning devils (politicians) won't come out and say they are going to tax us more to save economies; they tell us the planet is at risk if we don't accept subsidies for 'clean' energy generation. They then quietly tax our energy bills, which generates business, creates employment, increases consumer spending, etc., and kickstarts the vital cashflow imperative.

    OK, a simplistic view, but I’m a simple guy and must have my information delivered in understandable chunks I can rationalise. If scientists can’t do that, I have to make it up as I go along.

    When I began working in 1976 in the UK, my Income tax bill was around 30%. However pissed off I was at that rate, it covered virtually everything you could imagine, health, water, infrastructure, local councils etc. etc. and carried many nationalised operations that needed to be privatised, coal, steel and British Leyland, British Telecom, British Gas etc. etc. to mention but a few.

    We shed nationalisation, Income tax should have dropped. We introduced VAT (8% then, 20% now), Income tax should have dropped. We shed centrally funded local councils and the council tax was introduced, Income tax should have dropped. I could go on.

    I understand from a variety of sources that the tax burden on middle-income UK is now 40%. I am worse off and made to feel like a pariah to boot.

    And whilst I don’t imagine for a moment that GW taxation is the panacea for global finance, it certainly contributes.

    Governments don’t want the GW pretence to end because it’s too big a collective earner for them.

    The planet's temperature MAY have risen over the last 100 or so years, but the very means of measuring ground temperature is a slatted white box with a thermometer in it, developed in the 19th century. Even the paint covering, never mind the location, record keeping, etc., affects the measurements and the data quality. They were designed as weather stations, not climate stations, so even the premise they are based on is misleading. Similarly, sea temperatures recorded by a ship's cabin boy, when he had the time or inclination, are hardly an accurate recording of data as we expect it today. Even satellite data from the mid-to-late 20th century is questionable, as the science was 'early days' and the damn things kept breaking down.

    So to make up for all these inaccurate data sources, the 'scientists' homogenised the data to make up for bad data, badly recorded data, and data from unreliable sources. Roll all that up and, no matter the belief in homogenisation, it is still little better than guesswork.

    The human race’s future has nothing to do with climate change. It is reliant on bad data, from scientists with a personal agenda, delivered to politicians with their personal agenda, activated by businessmen with their personal agenda, promoted by the media, with their personal agenda.

    Too many fingers in one juicy pie. We are being scammed, and we are being convinced it’s all for our own good.

    Now, the question is, not whether I’m right or not, the question is, how have I come to this conclusion. I was told by a wise businessman, many years ago “perception is reality”.

    My perception, and the perception of many, is that we are being scammed by the GW debate. That perception will grow. And when we dimwit proletariat don’t get answers we demand, we get ugly.

    Thanks for your patience. I may have contributed rubbish, but it’s my perspective.

    • Perception is more important than reality, but the first concept is multifaceted (as individuals are) while the second is vague and subject to changes in community norms (as paradigms are), so the debate on climate change is a reflection of how human ideas evolve over time.

    • ++++++++++++++++++++++

    • HotScot,

      I’m a scientist, and I can only say that I wish the common sense that you display was more common among scientists. The biggest risk to a scientist is self-deception and common sense is a mighty protection, perhaps the only one, against it. I too have examined the evidence for the CO2 hypothesis and found it lacking. On the opposite side of the alarmist view, the biosphere response to the increase in CO2 and temperatures is mostly positive. Perhaps our CO2 emissions are the only positive thing that we collectively as a species have done for the planet. Ironically, but consistent with the unwise behavior of the only “sapiens” species, we have turned that into an evil.

    • HotScot and Javier,

      Please be so kind as to have a short look at the EASY introductory section of my website. I try to describe there, in a very brief way, my perception of the observed, published NASA CERES global energy budget data. I can see patterns there: energetic constraints, strict internal flux relationships which might say something about the annual mean energy flows, cloudiness, and albedo. IF my perceptions are correct and IF those constraints are real, then we have a very stable (steady-state) internal flux structure that depends only on the incoming available solar energy (plus evident fluctuations within the ocean-atmosphere energy exchange).

      If you are familiar with the global energy budget ("Trenberth"-type) diagrams, you will know how the atmospheric longwave absorption, and hence the surface Planck radiation, is assumed to grow with increasing CO2; but with these CERES numbers projected onto those diagrams you may also find a different structure: a constrained, constant greenhouse effect.

      http://globalenergybudget.com/Easy-l2.html

  42. nobodysknowledge

    I had hoped to find a good estimate of OHC for the 20th century to see the contribution to sea level rise. I have not found it and nobody knows.

    OHC for different models:
    CCSM3 (1870 – 1999): -8.086·10^23 from the net energy balance
    -6.04·10^23 potential temperature, whole ocean
    1.21·10^23 potential temperature, 0 – 300 m
    GFDL CM2 (1861 – 2000): 2.843·10^24 from the net energy balance
    9.06·10^23 potential temperature, whole ocean
    1.30·10^23 potential temperature, 0 – 300 m
    HadCM3 (1861 – 2000): 1.082·10^24 from the net energy balance
    5.83·10^22 potential temperature, whole ocean
    1.53·10^23 potential temperature, 0 – 300 m
    ECHAM5 (1861 – 2000): 1.368·10^24 from the net energy balance
    6.20·10^23 potential temperature, whole ocean
    3.97·10^22 potential temperature, 0 – 300 m

    As I see it, OHC is the most important parameter when it comes to climate science, and it is impossible to get right. Models give us a confusing mess. Who can say that they are tuned to data from the last century?

    • OHC has only been reasonably sampled for the last decade or two. Those observations are too short, but the decadal trend shows the current and recent imbalance.

  43. Some of us already know the issue with the models; the question is whether the message/outcome of transparency will be spoken loudly enough to matter. There are many complex forcings in nature that are little understood and not quantifiable at this point in time. Until they can input such data, they will continue to be wrong. Complexities such as long-term ENSO and its effects.

  44. Pingback: Una novità inattesa: I modellisti del clima aprono le loro scatole nere : Attività Solare ( Solar Activity )

  45. Economists rely on models, and appear to have a more sophisticated understanding of verification and validation issues. Nobel laureates Milton Friedman and Paul Krugman have advice about using models which could help climate scientists. Neither can be called “deniers” or even “skeptics.”

    Larry Summers gave similar advice in a WaPo op-ed on 6 Sept 2016, talking about public policy to manage the economy — but it applies as well to climate policy.

    “There is an important methodological point here: Distrust conclusions reached primarily on the basis of model results. Models are estimated or parameterized on the basis of historical data. They can be expected to go wrong whenever the world changes in important ways.”

    • Climate Models are wrong before anything changes.
      They don’t get what already happened right. They say we warmed over the past two decades while actual data shows us in a pause.

  46. A good data fit of the past ten thousand years would forecast a future that stayed in the same bounds. A model that makes a forecast that does not stay in the same bounds that we know has happened before is much more likely to be wrong than right.
    Climate has changed in natural cycles that we did not cause. Climate will continue to change in natural cycles that we can not control.

    • The models should be able to reproduce the past 800,000 years if they are valid. However, they can’t because they believe CO2 is the control knob. This excellent post suggests otherwise: https://judithcurry.com/2016/10/02/dust-deposition-on-ice-sheets-a-mechanism-for-termination-of-ice-ages/

      • Steven Mosher

        too funny the post is pretty much debunked..

      Mosher, so says you. But as usual, another baseless, unsupported assertion, consistent with your ongoing attempts to defend and prop up your alarmist beliefs. Why don't you provide links to where it has been debunked?

      And while you're at it, why don't you provide links to valid empirical data to define and calibrate the damage functions?

    • popesclimatetheory,

      A good data fit of the past ten thousand years would forecast a colder future, as temperatures have been decreasing for the past five thousand years:

      http://i.imgur.com/z07TBhB.png

      There is a problem of scale as there are forcings that act on the centennial scale, while others act on the millennial scale, and others are only relevant in the 10,000-100,000 yr scale, and yet others in the million year scale.

      When we test our models, built on the conditions of the past 150 years, against the evidence of the past 11,200 years, as Liu et al. (2014) did (green curve), we see that they get part of the solar variability correct in the wiggles that match the orange bars (lows in the ~1000 yr Eddy cycle) and grey bars (lows in the ~2400 yr Bray cycle). However, they get the general trend incorrect because they ignore the obliquity forcing and give too much response to GHG forcing (too high ECS; blue, CH4; red, CO2).

      Obliquity forcing can be ignored for models that are asked to forecast only a few decades to a couple of centuries into the future, if the decadal and centennial forcings are correct. Solar forcing is not expected to vary very much in the next decades, but the GHG forcing is too high. This is the main problem and it is known, but there is too much political and reputational credit invested to correct it. The modelers, the climate establishment, and the politicians are just crossing their fingers that the deviation over the next couple of decades doesn't get large enough to falsify their position.

    • Steven Mosher

      here is a CLUE

      If a model only predicts global temperature… IT'S NOT A CLIMATE MODEL.

      A minimal spec for a climate model would be

      A) Represent relevant climate parameters at LEAST at a continental Scale
      B) Provide measures for

      A) High and Low temperature at the surface
      B) High and low Sea surface temperatures
      C) Sea Surface Salinity
      D) Global Ice
      E) Precipitation
      F) Temperatures at several pressure levels

      If it can't do that IT'S NOT A CLIMATE MODEL, and that's a pretty low bar (a sketch of such a checklist in code follows below).
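To illustrate what even this low bar might look like written down, here is a minimal sketch of a machine-checkable output checklist. The short names are loosely modelled on CMIP-style variable names but are invented here; a real spec would reference agreed conventions and would also pin down the continental-scale requirement and a numerical success criterion for each field.

```python
# Illustrative only: a machine-checkable version of the "what the model SHALL
# produce" list above. The short names are invented, not an actual standard.
REQUIRED_OUTPUTS = {
    "tasmax", "tasmin",        # high/low surface air temperature
    "tosmax", "tosmin",        # high/low sea surface temperature
    "sos",                     # sea surface salinity
    "siconc",                  # sea ice
    "pr",                      # precipitation
    "ta_plev",                 # temperature at several pressure levels
}

def missing_outputs(model_outputs):
    """Return the required fields the model fails to provide, sorted."""
    return sorted(REQUIRED_OUTPUTS - set(model_outputs))

print("missing:", missing_outputs({"tasmax", "pr", "siconc"}))
```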

      • Steven Mosher,

        If there aren’t any useful climate models, what’s all the money been spent on?

        if you can’t even define the present climate of a continent (say the US), how can you specify a prediction of something you admittedly know nothing about?

        I’m surprised anybody fell for the idea that people like Mann, Hansen, or Schmidt, actually knew what they were talking about.

        Their cultist jargon should have alerted any rational person to the vacuity of their argument.

        Climate Cult Scientism. Not even a GHE falsifiable hypothesis to prop up the facade of the charade.

        Cheers.

      • A minimal spec for a climate model would be […]

        I’d add a few bullets:

        •       Get the ITCZ at least halfway right

        •       Get ENSO realistic

        •       Get the monsoon effect right

        •       Get the equal albedo between hemispheres right as an emergent phenomenon

        Once a climate model does that, perhaps we can pay attention to what it says about TCR.

      • and, get the Hot Spot right.

      • Steven Mosher: A minimal spec for a climate model would be

        That is a good start. I would add: for each dependent variable (mean temp, rainfall, etc) specify the criterion to reach in order to claim success, such as 0.5% IMSE over 20 years of forecasts from the same model.

      • Steven Mosher

        Hot spot?
        Not very well measured.

        ENSO? You can't get the timing right, by definition.

        You guys realize that you just ruled out what Scafetta, Javier, and other clowns call climate models.

      • Hot spot? Not very well measured.

        Neither is the surface well measured, but that doesn’t seem to prevent the general panic.

        RAOBS/Radiometers have their own separate issues, but they tend to agree that the models have been wrong – by a lot, for the last four decades.

      • ENSO? you cant get the timing right by definition.

        This from someone who's dived feet first into statistical arguments?

        I remember something a while back (IIRC in an earlier generation of models) about models that produced El Niños at regular intervals. Well, the real thing is irregular, with a certain distribution profile. I would expect a model to get its distribution profile right (when hindcasting) without necessarily getting the exact timing the same.

        The same should be said for other definable events with such distributions.

        Of course, as the time period of our changing distributions gets longer (e.g. AMO), it becomes harder and harder to define how the global “average” temperature profile should be constrained.

        This is why the whole “global warming” thing is circular. Models that don’t match the expected warming from ~1970 forward get thrown out.

        But it certainly should be able to match ENSO, statistically.
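The "match the distribution, not the timing" idea has a standard statistical expression: compare the distribution of simulated event intervals (or amplitudes) with the observed one, for instance with a two-sample Kolmogorov-Smirnov test. A minimal sketch with made-up interval data:

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative only: compare the *distribution* of simulated event intervals
# with the observed one, instead of demanding the timing of individual events.
rng = np.random.default_rng(3)
observed_intervals = rng.gamma(shape=4.0, scale=1.2, size=30)     # "years between events"
simulated_intervals = rng.gamma(shape=4.0, scale=1.2, size=300)

stat, p = ks_2samp(observed_intervals, simulated_intervals)
print(f"KS statistic = {stat:.2f}, p-value = {p:.2f}")
# A small p-value would indicate the simulated distribution is inconsistent
# with the observed one, even though no individual event lines up in time.
```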

  47. Pingback: Roundup Nov 8 | Catallaxy Files

  48. Thank you for the essay.

  49. I gotta second Matthew. Thanks for this essay.

  50. Weather, climate, and temperature are mathematically chaotic. The idea of an "average global temperature" is highly questionable. The idea of "radiative feedback" is highly questionable. Relationships between most of the hundreds (thousands?) of variables affecting climate are not known at all, or are known only to some degree of certainty. Not all important variables are considered in the modeling (variations in the sun's output, Milankovitch cycles, and cosmic rays, to name a few really big ones). Other big ones are at best approximated (precipitation and cloud formation, for example). The assumptions that result in the equations that define the models have not been independently evaluated and verified. The best you can say about the equations defining the models is that they are very crude characterizations of a very complex climate system that climatologists are only beginning to understand.

    The numerical approximation methods used to "solve" the differential equations that define the models are exactly that: approximations. The mesh upon which the numerical approximation methods produce iterative projections is arbitrary and large relative to the complexity of the system, and it generates error at each iteration. Climate modeling is an exercise in approximations of approximations of approximations of approximations, with equations that do not take into consideration the chaotic nature of the whole system or, frankly, the limitations of computers and mathematics.

    Who is kidding whom to claim that there is any credence in projections that come out of the models – "tuned" (otherwise known as "fudged") or untuned?

    Dr. Darko Butina puts it very simply (I am paraphrasing): “What are the 2500 molecules surrounding each CO2 molecule doing while CO2 goes about the hard work of destroying the planet?”

  51. Statement from Steven Mosher re climate models –

    “And they are built by scientists for scientists.. for providing insight or for testing in a gross way our understanding of the climate”

    Partially true in a Climatological double-speak way.

    Hopefully programmed by professional programmers initially. Demonstrably altered by amateur programmers such as G Schmidt – mathematician and possibly self proclaimed scientist.

    According to NASA –

    “Model development is an ongoing task. As new physics is introduced, old bugs found and new applications developed, the code is almost continually undergoing minor, and sometimes major, reworking. Thus any fixed description of the model is liable to be out of date the day it is printed.”

    So billions of dollars have been expended as a result of a nonscientific amateur programmer’s desire to provide himself with insight (it hasn’t worked so well, so far), or to test someone’s gross understanding of an average of events which have already occurred.

    A very expensive toy computer game. Amateurs masquerading as professionals.

    It’s worth noting that the WMO uses an example of validation which doesn’t mention temperature at all. Climatologists and their GHE supporters are fixated on the least important weather parameter. Even the World Meteorological Organisation doesn’t seem to think it’s very important.

    So what’s this whole GHE amateurish toy computer model building effort worth?

    Saying it’s not worth a brass razoo is probably overvaluing it by a couple of orders of magnitude!

    Cheers.

    • This is correct. The average citizen would never understand it. However, whether or not the model suits a particular scientist, another would create a 'better model'. Do we ever hear about the limitations, etc., of these models?
      If we knew everything about the environment and had the computer muscle to crunch the information, we would still not have the programming skills to build a proper model. What scientists hate is fact. That would put them out of business.

  52. Dear Professor Curry,

    I want to express my appreciation for your effort to separate climate science from climate politics. You have done well, but ultimately the vote tomorrow, if actually counted, will decide the conclusion of the AGW story.

  53. Pingback: Dreigen atol-eilanden door de golven te worden verzwolgen? - Climategate.nl

  54. AGW is safe and sound. The models are not a problem. Nothing will come of this. On Google Scholar, the incremental march of improved understanding just keeps marching in… and climate skeptics are getting their clocks cleaned.

    I doubt Bjorn Stevens keeps a list of good guys and bad guys, as the vast, vast majority of climate scientists are good guys, and everybody knows the handful of nut jobs.

    • If it cost x trillion to prevent y degrees of warming 80 years in the future, the models matter. If 70% of the y degrees is completely unavoidable, the models really matter. As it is, the models are running hot and are tuned so that they aren’t all that useful for making trillion dollar decisions.

      You can run with the moshers and kill coal, possibly eliminating 15% of the potential problem or go with the land reclamation/adaption gang and possibly offset 50% of the problem that might be avoidable.

      • I don’t think anybody has given serious thought as to what to do.

      • “As it is, the models are running hot and are tuned so that they aren’t all that useful for making trillion dollar decisions.”

        I keep reading this in *these* sorts of places.
        I suppose if you think the following data series are fraudulent with the world out to get you, or that UAH (alone) is the answer…..

        https://pbs.twimg.com/media/CqBWoEOXgAAMSz9.jpg

      • Tony, “I keep reading this in *these* sorts of places.”

        Right, places where engineers and entrepreneurs might wander more often than activists. Tuning a model to match a wiggle and then adjusting the baseline to make it look as good as possible is pretty easy. The real climate world, though, works on thermodynamics, meaning absolute temperatures, humidities and thermal gradients (those pesky regions and hemispheres) need to match. Now, if you can change a model's initial value by 1/1000th of a degree to simulate natural variability, try changing it by 0.3 degrees to simulate uncertainty. Once you knock that out, post the results.

      • JCH, “I don’t think anybody has given serious thought as to what to do.”

        “Energy will be necessarily more expensive.” BHO
        “If they build it we will bankrupt them.” BHO

        The fearless pen and phone wielder had it all figured out and represents the gang that has all the answers in the US.

      • Go nuclear!

      • Right, places where engineers and entrepreneurs might wander more often than activists. …

        Actually, to be right, that would be places where activist engineers and, up to, maybe, mid-management libertarians might wander by more often than scientists.

    • And what would you say to the following statistic? Old thinking is not an acceptable response?

      “Contrary to reports of a ‘97 per cent consensus’, the 2014 paper by Legates et al. demonstrated that only 0.5 per cent of the abstracts of 11,944 scientific papers on climate-related topics published over the 21 years from 1991-2011 had explicitly stated an opinion that more than half of the global warming since 1950 had been caused by human emissions of CO2 and other greenhouse gases.”

      Quotation from D.R. Legates, W. Soon, W.M. Briggs, and C. Monckton, "Climate Consensus and 'Misinformation': A Rejoinder to Agnotology, Scientific Consensus, and the Teaching and Learning of Climate Change", Science & Education, August 2013.

  55. Pingback: Weekly Climate and Energy News Roundup #248 | Watts Up With That?

  56. Pingback: Year in review – Climate Etc.’s greatest ‘hits’ | Climate Etc.