by Judith Curry
Paul Voosen has written a remarkable article in Science about climate model tuning.
In November 2009, following Climategate, I made a public plea for greater transparency in an essay published at ClimateAudit: On the credibility of climate research.
When I started blogging at Climate Etc. in 2010, a major theme was a call for formal Verification and Validation of climate models:
- The culture of building confidence in climate models
- Climate model verification and validation
- Climate model verification and validation. Part II
- Verification, validation, and uncertainty quantification in scientific computing
In my 2011 paper Climate Science and the Uncertainty Monster, I raised concerns about IPCC’s detection and attribution methods, whereby the same observations used for validation were used either implicitly or explicitly in model calibration/tuning. A group of IPCC authors (led by Hegerl) responded to the uncertainty monster paper [link]. I further responded with a formal response to the journal. In a blog post, I highlighted one of the Climategate emails from Gabi Hegerl:
So using the 20th c for tuning is just doing what some people have long suspected us of doing…and what the nonpublished diagram from NCAR showing correlation between aerosol forcing and sensitivity also suggested. Slippery slope… I suspect Karl is right and our clout is not enough to prevent the modellers from doing this if they can. We do loose the ability, though, to use the tuning variable for attribution studies.
In the blog post, I concluded:
To me, the emails argue that there is insufficient traceability of the CMIP model simulations for the IPCC authors to conduct a confident attribution assessment, and that at least some of the CMIP3 20th century simulations are not suitable for attribution studies. The Uncertainty Monster rests its case (thank you, hacker/whistleblower).
Those of you who have followed the climate debate for the past decade will recall how the climate community received my writings on this topic.
In 2013, a remarkable paper was published by Max Planck Institute authors Mauritsen, Stevens, and Roeckner: Tuning the climate of a global model, which was discussed in a CE blog post:
“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results.”
This paper led to a workshop on climate model tuning, and a subsequent publication in August 2016: The art and science of climate model tuning, which was discussed in this blog post. My comments:
This is the paper that I have been waiting for, ever since I wrote the Uncertainty Monster paper.
The ‘uncertainty monster hiding’ behind overtuning the climate models, not to mention the lack of formal model verification, does not inspire confidence in the climate modeling enterprise. Kudos to the authors of this paper for attempting to redefine the job of climate modelers.
Paul Voosen’s article in Science, Climate scientists open up their black boxes to scrutiny, follows up on the climate model tuning issue. It’s a short article, publicly available, and well worth reading. Some excerpts:
Next week, many of the world’s 30 major modeling groups will convene for their main annual workshop at Princeton University; by early next year, these teams plan to freeze their code for a sixth round of the Coupled Model Intercomparison Project (CMIP), in which these models are run through a variety of scenarios. By writing up their tuning strategies and making them publicly available for the first time, groups hope to learn how to make their predictions more reliable, says Bjorn Stevens, an MPIM director who has pushed for more transparency. And in a study that will be submitted by year’s end, six U.S. modeling centers will disclose their tuning strategies—showing that many are quite different.
Indeed, whether climate scientists like to admit it or not, nearly every model has been calibrated precisely to the 20th century climate records—otherwise it would have ended up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held, a scientist at the Geophysical Fluid Dynamics Laboratory, another prominent modeling center, in Princeton, New Jersey.
For years, climate scientists had been mum in public about their “secret sauce”: What happened in the models stayed in the models. The taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models—and, by extension, the reality of human-driven warming. “The community became defensive,” Stevens says. “It was afraid of talking about things that they thought could be unfairly used against them.”
But modelers have come to realize that disclosure could reveal that some tunings are more deft or realistic than others. It’s also vital for scientists who use the models in specific ways. They want to know whether the model output they value—say, its predictions of Arctic sea ice decline—arises organically or is a consequence of tuning. Schmidt points out that these models guide regulations like the U.S. Clean Power Plan, and inform U.N. temperature projections and calculations of the social cost of carbon. “This isn’t a technical detail that doesn’t have consequence,” he says. “It has consequence.”
Aside from being more open about episodes like this, many modelers say that they should stop judging themselves based on how well they tune their models to a single temperature record, past or predicted. The ability to faithfully generate other climate phenomena, like storm tracks or El Niño, is just as important. Daniel Williamson, a statistician at the University of Exeter in the United Kingdom, says that centers should submit multiple versions of their models for comparison, each representing a different tuning strategy. The current method obscures uncertainty and inhibits improvement, he says. “Once people start being open, we can do it better.”
Well, finally we are seeing climate modeling move in a healthy direction, one that has the potential to improve climate models, clarify uncertainties, and thereby build understanding of and trust in the models.
It’s about time: the response of the climatariat to my writings about climate models circa 2009-2011 was to toss me out of the tribe, dismiss me as a ‘denier’, etc.
I find it absolutely remarkable that this statement was published in Science:
The taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models—and, by extension, the reality of human-driven warming. “The community became defensive,” Stevens says. “It was afraid of talking about things that they thought could be unfairly used against them.”
This reflects pathetic behavior by the climate modelers (and I don’t blame Bjorn Stevens here; he is one of the good guys). You may recall what I wrote in my Climategate essay Towards rebuilding trust:
In their misguided war against the skeptics, the CRU emails reveal that core research values became compromised.
So the climate modelers were afraid of criticism from skeptics, and hence kept their models opaque to outsiders, to the extent that even other climate modeling groups didn’t know what was going on inside one another’s models. And hence:
- best practices in climate model tuning were not developed
- scientists conducting assessment reports had no idea of the uncertainties surrounding conclusions they were drawing from climate models
- scientists doing impact assessments and policy makers relying on this information had no idea of the uncertainties in the climate models and hence in their conclusions and the implications for their policies.
Well done, team climate modelers, all of this because you were afraid of some climate contrarians.
Well, it is a relief to finally see the international climate modeling community tackling these issues, thanks to the leadership of the MPI modelers.
But I wonder if these same climate modelers realize the can of worms that they are opening. In my blog post The art and science of climate model tuning, I wrote:
But most profoundly, after reading this paper regarding the ‘uncertainty monster hiding’ that is going on regarding climate models, not to mention their structural uncertainties, how is it possible to defend highly confident conclusions regarding attribution of 20th century warming, large values of ECS, and alarming projections of 21st century warming?