by Judith Curry
We survey the rationale and diversity of approaches for tuning, a fundamental aspect of climate modeling which should be more systematically documented and taken into account in multi-model analysis. – Hourdin et al.
Two years ago, I did a post on Climate model tuning, excerpts:
Arguably the most poorly documented aspect of climate models is how they are calibrated, or ‘tuned.’ I have raised a number of concerns in my Uncertainty Monster paper and also in previous blog posts.
The existence of this paper highlights the failure of climate modeling groups to adequately document their tuning/calibration and to adequately confront the issues of introducing subjective bias into the models through the tuning process.
Think about it for a minute. Every climate model manages to accurately reproduce the 20th century global warming, in spite of the fact that the climate sensitivity to CO2 among these models varies by a factor of two. How is this accomplished? Does model tuning have anything to do with this?
Well, in 2014 the World Climate Research Programme Working Group on Coupled Modelling organized a workshop on climate model tuning. The following paper has emerged from this workshop.
Hourdin, F., T. Mauritsen, A. Gettelman, J. Golaz, V. Balaji, Q. Duan, D. Folini, D. Ji, D. Klocke, Y. Qian, F. Rauser, C. Rio, L. Tomassini, M. Watanabe, and D. Williamson, 2016: The art and science of climate model tuning. Bull. Amer. Meteor. Soc. doi:10.1175/BAMS-D-15-00135.1, in press. [link to full manuscript].
Abstract. We survey the rationale and diversity of approaches for tuning, a fundamental aspect of climate modeling which should be more systematically documented and taken into account in multi-model analysis. The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate sub-models. Most sub-models depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called ‘objective’ methods in climate model tuning. We discuss how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.
If you are ever to read one paper on climate modeling in your life, this is the paper you should read. Besides being a very important paper, it is very well written and readable by a non-specialist audience.
I’m not sure where to even start with excerpting the text, since pretty much all of it is profound. Here are a few selected insights from the paper:
Climate model tuning is a complex process which presents analogy with reaching harmony in music. Producing a good symphony or rock concert requires first a good composition and good musicians who work individually on their score. Then, when playing together, instruments must be tuned, which is a well defined adjustment of wave frequencies which can be done with the help of electronic devices. But the orchestra harmony is reached also by adjusting to a common tempo as well as by subjective combinations of instruments, volume levels or musicians interpretations, which will depend on the intention of the conductor or musicians. When gathering the various pieces of a model to simulate the global climate, there are also many scientific and technical issues, and tuning itself can be defined as an objective process of parameter estimation to fit a predefined set of observations, accounting for their uncertainty, a process which can be engineered. However, because of the complexity of the climate system and of the choices and approximations made in each sub-model, and because of priorities defined in each climate center, there is also subjectivity in climate model tuning (Tebaldi and Knutti 2007) as well as substantial know-how from a limited number of people with vast experience with a particular model.
Why such a lack of transparency? Maybe because tuning is often seen as an unavoidable but dirty part of climate modeling; more engineering than science; an act of tinkering that does not merit recording in the scientific literature. There may also be some concern that explaining that models are tuned, may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors.
Although tuning is an efficient way to reduce the distance between model and selected observations, it can also risk masking fundamental problems and the need for model improvements. There is evidence that a number of model errors are structural in nature and arise specifically from the approximations in key parameterizations as well as their interactions.
Introduction of a new parameterization or improvement also often decreases the model skill on certain measures. The pre-existing version of a model is generally optimized by both tuning uncertain parameters and selecting model combinations giving acceptable results, probably inducing compensation errors (over-tuning). Improving one part of the model may then make the ’skill’ relative to observations worse, even though it has a better formulation. The stronger the previous tuning, the more difficult it will be to demonstrate a positive impact from the model improvement and to obtain an acceptable retuning. In that sense, tuning (in case of over-tuning) may even slow down the process of model improvement by preventing the incorporation of new and original ideas.
The increase of about one Kelvin of the global mean temperature observed from the beginning of the industrial era, hereafter 20th century warming, is a de facto litmus test for climate models. However, as a test of model quality, it is not without issues because the desired result is known to model developers and therefore becomes a potential target of the development.
There is a broad spectrum of methods to improve model match to 20th century warming, ranging from simply choosing to no longer modify the value of a sensitive parameter when a match is already good for a given model, or selecting physical parameterizations that improve the match, to explicitly tuning either forcing or feedback both of which are uncertain and depend critically on tunable parameters. Model selection could, for instance, consist of choosing to include or leave out new processes, such as aerosol cloud interactions, to help the model better match the historical warming, or choosing to work on or replace a parameterization that is suspect of causing a perceived unrealistically low or high forcing or climate sensitivity.
The question whether the 20th century warming should be considered a target of model development or an emergent property is polarizing the climate modeling community, with 35 percent of modelers stating that 20th century warming was rated very important to decisive, whereas 30 percent would not consider it at all during development. Some view the temperature record as an independent evaluation data set not to be used, while others view it as a valuable observational constraint on the model development. Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming. The question of developing towards the 20th century warming therefore is an area of vigorous debate within the community.
Because tuning will affect the behavior of a climate model, and the confidence that can be given to a particular use of that model, it is important to document the tuning portion of the model development process. We recommend that for the next CMIP6 exercise, modeling groups provide a specific document on their tuning strategy and targets, that would be referenced to when accessing the dataset. We recommend distinguishing three levels in the tuning process: individual parameterization tuning, component tuning and climate system tuning. At the component level, emphasis should be put on the relative weight given to climate performance metrics versus process oriented ones, and on the possible conflicts with parameterization level tuning. For the climate system tuning, particular emphasis should be put on the way energy balance was obtained in the full system: was it done by tuning the various components independently, or was some final tuning needed? The degree to which the observed trend of the 20th century was used or not for tuning should also be described. Comparisons against observations, and adjustment of forcing or feedback processes should be noted. At each step, any occasion where a team had to struggle with a parameter value or push it to its limits to solve a particular model deficiency should be emphasized. This information may well be scientifically valuable as a record of the uncertainty of a model formulation.
The systematic use of objective methods at the process level in order to estimate the range of acceptable parameters values for tuning at the upper levels is probably one strategy which should be encouraged and may help make the process of model tuning more transparent and tractable. There is a legitimate question on whether tuning should be performed preferentially at the process level, and the global radiative budget and other climate metrics used for a posteriori evaluation of the model performance. It could be a good way to evaluate our current degree of understanding of the climate system and to estimate the full uncertainty in the ECS. Restricting adjustment to the process level may also be a good way to avoid compensating model structural errors in the tuning procedure. However, because of the multi-application nature of climate models, because of consistency issues across the model and its components, because of the limitations of process studies metrics (sampling issues, lack of energy constraints), and also simply because the climate system itself is not observed with sufficient fidelity to fully constrain models, an a posteriori adjustment will probably remain necessary for a while. This is especially important for the global energy constraints that are a strong and fundamental aspect of global climate models.
Formalizing the question of tuning addresses an important concern: it is essential to explore the uncertainty coming both from model structural errors, by favoring the existence of tens of models, and from parameter uncertainties by not over-tuning. Either reducing the number of models or over-tuning, especially if an explicit or implicit consensus emerges in the community on a particular combination of metrics, would artificially reduce the dispersion of climate simulations. It would not reduce the uncertainty, but only hide it.
JC reflections
This is the paper that I have been waiting for, ever since I wrote the Uncertainty Monster paper.
Recall my early posts, calling for verification and validation of climate models:
- The culture of building confidence in climate models
- Climate model verification and validation
- Verification, validation and uncertainty quantification in scientific computing
The recommendations made by Hourdin, Mauritsen et al. for CMIP6 to document the decisions and methods surrounding model tuning are a critical element of climate model validation.
For too long, the job of climate modelers has seemed to be to make sure their model can reproduce the 20th century warming, in support of highly confident conclusions regarding the ‘dangerous anthropogenic climate change’ meme of the UNFCCC/IPCC.
The ‘uncertainty monster hiding’ behind overtuning the climate models, not to mention the lack of formal model verification, does not inspire confidence in the climate modeling enterprise. Kudos to the authors of this paper for attempting to redefine the job of climate modelers to include:
- Documenting the methods and choices used in climate model tuning
- More objective methods of model tuning
- Explorations of climate model structural uncertainty
But most profoundly, after reading this paper regarding the ‘uncertainty monster hiding’ that is going on regarding climate models, not to mention their structural uncertainties, how is it possible to defend highly confident conclusions regarding attribution of 20th century warming, large values of ECS, and alarming projections of 21st century warming?
The issues raised by Hourdin, Mauritsen et al. are profound and important. Here’s to hoping that we will see a culture change in the climate modeling community that increases the documentation and transparency surrounding climate model tuning and begins to explore the structural uncertainties of the climate models.
Don’t forget to read the paper.
A quick reaction to a quick read of the post and the paper — this might be of great use to climate scientists, but more important is that it appears to radically change the public policy debate. The recommendation of massive and immediate public policy action rests upon the presumption that models’ forecasts are sufficiently reliable to justify such action. That rested largely on their successful back-testing: models’ replicating the 20th century warming (done without standard methodological safeguards, such as out-of-sample tests).
How can the public policy debate continue in its present form given this new information?
Since the 20th century predictions result to a large extent from tuning the models — as many have alleged — then other methods of validation are required. For example, evaluation of models’ predictions of future temperatures (i.e., running the model with emissions observations from after its creation, and matching the resulting predictions with observed temperatures). We have multi-decade observations for the models used in past IPCC assessments. The money to re-run them would be pocket change compared to the value of the test.
The Obama solution, huh?
If propaganda (models) fails to persuade, then the solution is better propaganda (models).
Glenn,
Science is, broadly speaking, the process of hypothesis testing.
Testing is not “propaganda”.
Editor of the Fabius Maximus website,
Science in theory is about the “process of hypothesis testing.”
Science in practice, however, is often something entirely different.
“The Clexit Campaign” founded by Viv Forbes is the answer:
http://clexit.net/wp-content/uploads/2016/07/clexit.jpg
It is indeed extraordinarily difficult to “tune” models of open-ended, non-linear, feedback-driven chaotic systems when we don’t even know the signs of some of the feedbacks we have identified – clouds, for example – let alone, in all probability, the majority of the feedbacks themselves, although I’m sure it is a nice steady number for those who have been fortunate enough to make a career of it.
It’s far, far easier to use AlGoreithms to Mannipulate the data to match the models; we even have specialists in this fascinating aspect of climate “science” posting on these very blogs, and they are some of our most entertaining contributors.
“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”
~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)
No Fab, but claiming a record by 1/100th of a degree, when half the globe is estimated data, is a political statement, and “propaganda”.
A model produced that 1/100th of a degree record.
Models are not propaganda; how their results are fed to the public at large is propaganda.
“Selfishness, self-centeredness” in both leaders and opponents of worldwide social movements is the root cause of human suffering.
Today, I am pleased to report that unselfish cooperation among a few vocal opponents of the AGW fable promises a reconnection of society to reality at the London Conference on Climate Change on 8-9 Sept 2016, “A NEW DAWNING OF PEACE & TRUTH ON EARTH!”
https://tallbloke.files.wordpress.com/2016/08/london-conference-volume.pdf
Editor, FM Website: “How can the public policy debate continue in its present form given this new information?”
Regardless of this new information, the public policy debate over AGW will continue in its present form until and unless at least one of two things happens.
Either: (1) serious economic sacrifices and a series of disruptive lifestyle changes are forced upon the voting public; or (2) global mean temperatures fall continuously for a period of fifty years or more.
Don’t bet on either one of these happening within the lifetimes of most of the people who now read this blog.
I wouldn’t bet on that timescale, Beta Blocker.
The public are demonstrably rapidly losing confidence in the political “elite” and their presumptuous punitive solutions and it is becoming increasingly evident that Mother Nature herself is refusing to co-operate with the increasingly discredited alarmist prognostications of the AGW catastrophists.
What with Brexit and an increasing probability of an extremely non-establishment incumbent in the White House, things are going to move with increasing rapidity over the next few years.
Cue much wailing and gnashing of teeth by all the right people…
catweazle666 “I wouldn’t bet on that timescale, Beta Blocker.”
The voting public has no incentive right now to look more closely at the major science questions now facing the climate modeling community. For most people, any possible downside to greatly reducing our carbon emissions in response to the climate modelers’ predictions is nothing more than an abstraction.
As long as no sacrifice is being demanded of the general public, the details of today’s climate science will remain a highly-specialized and opaque topic, one of little direct interest to most people. And, as is the case with most science topics, the public will continue to trust the experts to do their jobs competently and in the public interest.
As long as global mean temperatures continue to rise at whatever rate — small, large, or somewhere in between — and as long as most people aren’t being directly and visibly affected by climate-driven public policy decisions, the use of climate change as a justification for making significant changes to our energy policies and to the way our economy functions will continue indefinitely into the future.
“The voting public has no incentive right now…”
I beg to differ, especially about the public in the UK and Europe.
Ongoing increases in the price of energy entirely traceable to attempts to alter the climate by increasing taxation have had a very noticeable effect on the voting public.
Even the Guardian has noticed that.
The social cost of fuel poverty is massive, and growing. In the winter of 2012/13, there were 31,000 extra winter deaths in England and Wales, a rise of 29% on the previous year. Around 30-50% of these deaths can be linked to being cold indoors. And not being able to heat your home also takes a huge toll on health in general: those in fuel poverty have higher incidences of asthma, bronchitis, heart and lung disease, kidney disease and mental health problems.
https://www.theguardian.com/big-energy-debate/2014/sep/11/fuel-poverty-scandal-winter-deaths
Here are more:
http://fuelpoverty.eu/2014/07/02/energy-poverty-in-denmark/
http://fuelpoverty.eu/2014/07/09/energy-poverty-in-germany-highlights-of-a-beginning-debate/
Both those countries have greatly increased the cost of energy, causing loss of jobs and fuel poverty – especially for the poor and old.
Since Brexit, there is now a movement in the UK to withdraw from the EU energy treaties, and it is gaining momentum.
We have already lost tens of thousands of jobs due to the closure and offshoring of strategic industries such as steel and aluminium, and this has not gone unnoticed by the electorate.
So the voters this side of the Pond do indeed have incentives to react adversely to the climate change impositions, and they will vote accordingly – are already doing so, in fact.
I think you’re on the mark Beta. I might quibble about timeframes, but overall you have accurately assessed the situation in my opinion
“How can the public policy debate continue in its present form given this new information?”
This is not new information at all. It is simply revealing the man behind the curtain. Something that most model builders never bothered to reveal. Something that I have long known to be the case.
What is new is that a large group of climate scientists published this where they did, when they did, to significantly influence CMIP6. Mauritsen I expected; he published the modeled adaptive iris paper (see my and Judith’s back-to-back posts when it first came out), which says models are missing a significant negative feedback. But Watanabe???
I see it as another step in a large CAGW climb down. Models will be proven wrong by AR6, without the ability to hide that fact after the AR5 shenanigans were exposed (see my essay Hiding the Hiatus, and Climate Audit). These folks know this already, and are laying the foundation for a mea culpa excuse: parameterization is tricky, and we should not have used warming as the ‘test’ because of embedded attribution assumptions – oops, as the majority of surveyed modeling teams said they had. That is the crux of the paper, with data in the appendix.
Another crack in the CAGW edifice.
United Nations climate panel IPCC kind of knew about heavy adjustments to match observation:
«When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift. The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend. It is important to note that the systematic errors illustrated here are common to both decadal prediction systems and climate-change projections. The bias adjustment itself is another important source of uncertainty in climate predictions. There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques.»
(Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments)
Unfortunately the IPCC was unable to grasp that this implied that the models were unreliable. The IPCC was aiming for consensus in support of the United Nations Framework Convention on Climate Change (UNFCCC); scrutiny was not one of the principles governing the IPCC. The uncertainty and unreliability revealed by heavy adjustments of climate models to match observations did not find its way into the Summary for Policymakers.
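For readers curious how the lead-time-dependent linear bias adjustment described in the IPCC passage works mechanically, here is a minimal sketch. All of the numbers, array shapes, and the drift rate are invented for illustration; nothing here comes from an actual CMIP5 dataset.

```python
# Minimal sketch of a linear (lead-time dependent) bias adjustment:
# average the hindcast-minus-observation error at each forecast lead time
# over many start dates, then subtract that mean drift from new predictions.
# All data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_starts, n_leads = 20, 10          # 20 hindcast start dates, 10 lead years each

obs = rng.normal(14.0, 0.1, size=(n_starts, n_leads))          # "observed" temperature (degC)
drift = 0.05 * np.arange(n_leads)                               # model drifts 0.05 degC per lead year
hindcasts = obs + drift + rng.normal(0.0, 0.05, size=obs.shape) # biased model hindcasts

mean_bias = (hindcasts - obs).mean(axis=0)   # estimated drift as a function of lead time

new_forecast = 14.2 + drift + rng.normal(0.0, 0.05, size=n_leads)
adjusted = new_forecast - mean_bias          # a posteriori bias-corrected forecast

print(np.round(mean_bias, 3))
print(np.round(adjusted, 2))
```

As the quoted text notes, this only works if the drift is stable over the prediction period; if the bias changes with the evolving climate state, the linear correction itself becomes a source of uncertainty.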
It is not possible for tuning to overcome uncertainty. Conversely, if tuning overcomes uncertainty it is a fudge factor. As an example, it is not possible to measure the impact of fossil fuel emissions on the carbon budget given uncertainties in natural flows.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2654191
The problem with communicating uncertainty is reality vs model of reality.
The uncertainty is quantified within the boundaries of the model, not the boundary of real climatic physics, and this has intentionally not been openly put forward for discussion.
The 95% confidence is a 95% confidence within the world of the model. Given tuning, the lack of grid resolution, poorly understood processes, and the parameters, this would and should mean the real uncertainty is much, much higher.
Most outside of modeling are just not aware that the uncertainties only exist in the model version of reality. The real world uncertainty is not calculated at all.
The skill would seem to be in resisting tuning that is done with an outcome in mind, but the simplicity and data needed for such objectivity are lacking.
The models are of course complex. The propagation of accumulating errors in them does not seem to have ever been done rigorously. The measures of confidence intervals should reflect absolute uncertainty, but too often they seem to be a hopeful invention thrown in near the end of the process.
I suspect that if the error bounds were calculated and propagated by standard, classical approaches, the errors would be so large that all models would fit within them, and so widely that their further application would be useless, as it would not be credible to expert observers.
Why try to tune a discordant symphony?
It is far easier to accept it with its many lacks of harmony as the sign of the times, the “new” style of music, love it or leave it.
Geoff.
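The error-propagation point above can be illustrated with a toy Monte Carlo calculation. This is not a climate model; it is a generic demonstration, with invented numbers, of how a small per-step uncertainty compounds into wide bounds over many steps.

```python
# Toy illustration of accumulating errors: step a single state variable
# forward with a small, uncertain per-step growth factor and watch the
# spread of outcomes widen. The numbers are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_steps = 1000, 100
per_step_error = 0.01            # 1% (1-sigma) uncertainty in each step's factor

state = np.ones(n_members)
for _ in range(n_steps):
    state *= 1.0 + rng.normal(0.0, per_step_error, size=n_members)

low, high = np.percentile(state, [2.5, 97.5])
print(f"95% of outcomes lie between {low:.2f} and {high:.2f} (started at 1.00)")
```

Even a 1% per-step uncertainty yields roughly a ±20% spread after 100 steps; classical propagation of errors through a far more complex model would be expected to grow bounds at least as quickly.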
Why try to tune a discordant symphony? Because society desperately needs to know how much warming will accompany future emissions of GHGs. And because society has invested billions and scientists have invested thousands of careers in the development of AOGCMs in an attempt to provide an answer.
In the fine print, the IPCC recognizes that their models have not systematically explored parameter space and that the spread in the output from their “ensemble of opportunity” (the IPCC’s phrase) therefore cannot be interpreted as confidence intervals. So how can they report likely warming of X-Y degC under RCP8.5? They use their “expert judgment” to describe the 90% confidence interval as “likely” rather than “very likely”. Policymakers and the public don’t understand the difference.
“I suspect that if the error bounds were calculated and propagated by standard, classical approaches, the errors would be so large that all models would fit within them, and so widely that their further application would be useless, as it would not be credible to expert observers.”
Indeed there is no need to proceed down the dirt road leading only to the swamp.
” Why try to tune a discordant symphony? It is far easier to accept it with its many lacks of harmony as the sign of the times, the “new” style of music, love it or leave it. Geoff.”
There will be no acceptance of such! All must be stomped into the earth until only a grease spot remains, burned, then paved over until no such thing as ‘climate scientist’ shall ever again appear upon the surface of this Earth!
Pay no attention to the mann behind the model.
http://www-personal.umich.edu/~hersko/Photos/wizard-of-oz-curtain.jpg
You got it!
Great pun.
Thx, Glenn.
Mann ‘n Emerald City – some obvious parallels,
even ‘award’ presentations?
It’s a degrees of freedom issue, not a transparency issue. Every knob in the model makes it more of a curve fitter. Ask what good a climate model is that is simply a polynomial that exactly fits all past data, which it can easily do.
It’s no good because the physics doesn’t exist to support it.
You can formalize the model tuning with Kalman filtering. It tells you exactly how to turn the knobs to take into account each additional data point. It seems like it’s exhibiting very economical intelligence, but mathematically amounts to least squares in a recursive form.
It makes transparent how intelligent model tuning is identical to curve fitting.
Every knob is a place where no physics exists.
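The claim that Kalman-style tuning “amounts to least squares in a recursive form” can be shown in a few lines. The sketch below is a generic scalar recursive-least-squares update, not anything taken from an actual climate model; the “knob”, the data, and the noise level are all invented for illustration.

```python
# Scalar recursive least squares, the simplest Kalman-style "knob tuning":
# each new observation nudges the parameter estimate by a gain that shrinks
# as more data accumulate. The end result matches an ordinary least-squares
# fit over all the data, illustrating that the tuning is curve fitting.
import numpy as np

rng = np.random.default_rng(2)
true_knob = 3.0
x = rng.uniform(0.0, 1.0, 50)                  # invented predictor
y = true_knob * x + rng.normal(0.0, 0.1, 50)   # invented observations

knob, p = 0.0, 1e3                             # initial guess and its (large) variance
for xi, yi in zip(x, y):
    gain = p * xi / (p * xi**2 + 0.1**2)       # Kalman gain for this data point
    knob += gain * (yi - knob * xi)            # correct the knob toward the new point
    p *= (1.0 - gain * xi)                     # uncertainty shrinks after each update

batch = np.sum(x * y) / np.sum(x**2)           # ordinary least-squares answer
print(f"recursive estimate {knob:.3f} vs batch least squares {batch:.3f}")
```

The recursive and batch answers agree to within the weak prior, which is the commenter's point: sequential knob adjustment and curve fitting are mathematically the same operation.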
Climate modelers have been proven to be tone deaf. All the tuning in the world will still result in lousy music.
“Climate modelers have been proven to be tone deaf. All the tuning in the world will still result in lousy music.”
If only we could get some few good jazz musicians to hang out with some good JPL engineers for a while. So that the slow dance of solar system planets together with beats, flanges, and whacks could be converted into sound, symphony, or even horrible noise. Perhaps then some earthlings could get a clue of the ever present is :-)
It’s like the Ford Agency. You just tell the models where to go and what to do.
At the Hillman works their model is almost the same, one where she …
http://www.breitbart.com/jerusalem/2016/07/31/exclusive-nsa-architect-agency-clintons-deleted-emails/
tells the agencies where to go and what to do.
Do you suppose the real reason for the alarmists’ urgency was they knew they couldn’t keep this stuff hidden forever?
No.
Denial: it ain’t just a river in Egypt.
You would know.
That IPCC 2nd Order Draft Reconstructed Model…
https://climateaudit.org/2013/09/30/ipcc-disappears-the-discrepancy/
Oh what a tangled web they weave,
those GCM spaghetti graphs intended
to deceive.
Yes.
Maybe
Is there a press release? Will this paper be covered on the front page of the New York Times? Will Elizabeth Kolbert be writing a ten page piece for the New Yorker?
Of course
Prof. Curry…
Can you tell us something about who these people are? The only name I recognize is Andrew Gettelman, whose publications include several papers on the tropical tropopause layer that I found highly informative and thought-provoking. (IIRC I’ve linked one or two of them here.)
Is this a combination of heavy-hitters too powerful for the alarmist establishment to dismiss as “deniers”?
One of them looms large in my interpretation of the pause…
Results indicate that inherent decadal climate variability contributes considerably to the observed global-mean SAT time series, but that its influence on decadal-mean SAT has gradually decreased relative to the rising anthropogenic warming signal. … – M Watanabe
Vanishing hiatuses; surging surges.
You mean this?
Gamblers get in trouble because they pay attention to the short term and selectively recall winnings and suppress memory of losses. The house wins because it imposes a slow long term advantage.
Well, greenhouse gas forcing is a house advantage. But that does not include the short term El Nino cycle that you get addicted to. It is the slow uniform forcing.
So, you can have your global warming. Just don’t erroneously conflate it with dramatic change.
T. Mauritsen has co-authored with Bjorn Stevens… these guys are “alarmists”.
Stevens is an “alarmist” only when he says things that “skeptics” don’t like. When he says something they do like, he is a brave scientist who is willing to face the wrath of his McCarthyite colleagues, but who is also yellow-bellied because he pays homage to the religious dogma of AGW to protect his career and his funding.
Well, you could be right as Stevens didn’t sign this one:
Arctic climate change: Greenhouse warming unleashed. Acosta Navarro and Varma et al. have shown that over the past three decades, declining European sulfur emissions could have contributed substantially to Arctic warming. But ultimately, it is not the uncertain temporary aerosol cooling, but the warming induced by long-lived greenhouse gases such as carbon dioxide that has warmed the Arctic in the past — and will continue to do so. – T. Mauritsen
Questions from a simple layman. Can anyone briefly further explain the above statement: “Likewise, opinions diverge as to which measures, either forcing or ECS, are legitimate means for improving the model match to observed warming”.
Doesn’t any model have to assume something on TCR? Thanks.
AFAIK phenomena like TCR are supposed to be emergent.
Of course, there has to be an assumption that the TCR exists in the first place. An assumption I consider highly unwarranted.
The way the MSM spins things! You couldn’t make this stuff up.
The “bump” Clinton got was as follows:
Those slight differences in the numbers are within the margin of error of the poll.
I made a mistake and posted this comment here. It was meant for the politics thread.
But it’s eerily germane to this thread. The whole purpose of climate science has become to produce and spin numbers that fit a political narrative. This, of course, is exactly the same thing that the MSM does.
Science, politics, the fourth estate, it’s all the same these days.
Dr. Curry:
Many times on this blog I have mentioned my “model” of Climate Change being simply due to the reduction of SO2 aerosol emissions from the troposphere.
This model perfectly projects all of the warming that has occurred, 1972-2011 (the latest year for which SO2 data is available), yet no one takes it seriously.
Can you, or anyone, explain why my model is being completely ignored?
It projects even greater warming than that now envisioned for greenhouse gases, and urgently needs to be examined in detail.
Because it doesn’t fit the CO2=control knob paradigm?
Yes. Which means it’s wrong.
Of course: if CO2 isn’t the “control knob” you don’t have an excuse to shut down the Industrial Revolution. A much better way to evaluate right vs. wrong than actual science.
CO2 is probably one ( of many ) control knobs of global average temperature.
But, global average temperature is not a control knob of the atmosphere.
Global average temperature doesn’t even seem to control extreme high temperatures, much less all the other atmospheric processes. Global average temperature is calculable but irrelevant.
Behead those who slander the anhydride of carbon!
“Control freak” is the usual term, but “control knob” would work equally well for the AGW climate elites.
Burl, Have you sent a manuscript in for publication? Or have you posted a detailed blog post explaining the model? In order to examine any model properly one needs a fairly detailed explanation of the assumptions and formulae in it. If you have done this, please provide a link. I’m not offering to examine it in detail, but the link could be posted for others to examine.
Bigterguy:
The manuscript has been rejected by 5 publication editors, with only one sending it out for review. Here are the reasons:
Climatic Change: “Much more sophisticated treatments have inferred a green-house gas forcing in explaining the historical record”
Science: “Because your manuscript was not given a high priority during the initial screening process, we have decided not to proceed to in-depth review”
Nature Climate Change: “Among the considerations that arise at this stage are the immediacy of interest for the wider climate research community, the degree of advance provided, and the like”
Earth’s Future: “In your approach, you completely ignore the fact that Pinatubo sulfate was injected in the stratosphere, where it resided for a couple of years, while anthropogenic SO2 is injected in the troposphere and is removed by physical process (wash-out, etc) within 1-2 weeks”.
(Here, I provided data showing that the climatic effect of stratospheric and tropospheric sulfate aerosols is identical, but it was ignored)
Geophysical Research Letters: “Based on my reading of the manuscript relative to a large number of other submissions, my conclusion is that the work extends it conclusions beyond what is supported by the research methods and results”
An earlier version of my model can be read by googling “It’s SO2, not CO2”. However, I have a later version titled “Climate Change Deciphered” which factually proves everything stated in the original version. This I will try to “publish” in a blog so that it can be reviewed.
The link Burl, if you please!
SO2 aerosols have a relatively short residence time in the atmosphere and are therefore not globally distributed. Being produced principally in the Northern Hemisphere and getting washed out of the atmosphere there, they have little impact on the Southern Hemisphere unless “your” model somehow redistributes local temperature impacts globally.
Jeff Norman:
What you state is simply not true. Anthropogenic SO2 aerosols, being constantly renewed, have a very long effective lifetime.
Google my blog post “It’s SO2, not CO2” for more on this.
Burl wrote: Can you, or anyone, explain why my model is being completely ignored?
Your model doesn’t explain why warming was observed before 1972 when aerosols were rising, not falling.
By trial-and-error or “data mining”, one could find hundreds of phenomena that “perfectly” hindcast PART of the record of global warming. Most of these phenomena will have nothing to do with climate. Your reply might be that we know SO2 aerosol emissions (unlike other data) have something to do with climate; aerosols reflect SWR back to space. However, we also know from careful laboratory studies that increasing GHGs will slow down the rate at which LWR escapes to space. Those laboratory experiments define the warming effects of rising GHGs (in W/m2 of forcing) far more accurately than the cooling effects of aerosols. The indirect effect of aerosols on clouds is the biggest source of uncertainty in forcing.
So the IPCC uses models that include the combined effects of both GHGs and aerosols. FWIW, those models can replicate the entire historical record (1850 to present) far more accurately than a model based only on aerosols. In the IPCC’s projections, scenarios project falling sulfate aerosols and this does contribute significantly to future warming.
Franktoo:
Warming prior to 1972 was caused for the same reason as that which occurred after 1972: decreases in SO2 emissions, and El Ninos.
Prior to 1972 many power plants were fitted with SO2 scrubbers to combat acid rain, and this decrease in their SO2 emissions would negate some of the cooling caused by increasing SO2 emissions.
Also, hundreds of nuclear power plants were being commissioned prior to 1972, with many of them replacing older, polluting fossil fuel plants, and the demise of those plants would result in fewer SO2 emissions, and consequent warming.
Finally, there is a 100% correlation between increases in average global temperatures and business recessions (caused by reduced industrial activity and consequently fewer SO2 emissions). This is true of all 10 recessions since 1950 (and probably all prior recessions).
All of the above would explain the observed warming periods while SO2 emissions were, in the main, increasing.
Burl:
Forget anecdotal stories about nuclear power plants. The record of sulfate emissions from 1850-2005 can be seen in Figure 3 of
http://www.atmos-chem-phys.net/11/1101/2011/acp-11-1101-2011.pdf
1) The warming from 1920-1945 was not caused by falling sulfate emissions – emissions rose modestly and irregularly.
2) If you want to blame the modest 20 Gg (15%) fall in sulfate emissions for the rapid 1975-1998 warming (0.5? degC), how much cooling would you expect from the 70 Gg rise in emissions from 1950-1970? Three times as much would be -1.5 degC of cooling.
Correlation – especially for only part of a record – does not prove causation.
Looking at the whole record, sulfate emissions and temperature both rose at roughly the same time. Unless something else was causing warming, there is a positive (not negative) correlation between sulfate emissions and temperature.
Franktoo:
You wrote that “the warming was not caused by falling sulfate emissions–emissions rose modestly and irregularly”.
You need to examine Fig. 6 of the referenced report: global SO2 emissions FELL by about 29 Megatonnes in the 1930’s, and temperatures rose by about 0.3 deg. C. (per NASA). This was due to the greatly decreased industrial activity and consequently fewer SO2 emissions (no El Ninos then). My model would predict a rise of about 0.58 deg. C. However, the decade of the 1930’s was one with continuous strong La Ninas, which would have caused a lowering of average global temperatures by 0.2 deg. C. or more, thus making both values essentially identical.
The subsequent high temperatures in the early 1940’s were due to an exceptionally strong El Nino, and the 1945 recession temperature spike.
You also wrote “Correlation – for only part of the record – does not prove causation.”
I examined the temperature record back to 1920: there were 5 recessions between 1923 and 1950, and all of them coincided with spikes of temporary warming.
So we now have 15 data points with 100% correlation over nearly 100 years. In this instance, correlation clearly IS causation.
You also wrote “Looking at the whole record, sulfate emissions and temperature both rose at roughly the same time. Unless something else was causing warming, there is a positive (not negative) correlation between sulfate emissions and temperature”
The correlation is that as sulfate emissions decrease, temperatures will rise.. Look at the response to a large volcanic eruption – initial cooling from the SO2 emissions, then a temperature rise as they settle out of the atmosphere. (It can be proven that the climatic response to stratospheric and tropospheric emissions is identical).
Judith,
Thanks for highlighting this paper. The suggestion that an ensemble of ‘tuned’ models with wildly different projections for future average temperature, rainfall patterns, etc, can be used as the justification for drastic public policy has always seemed absurd to many outside of climate science. I hope that more within climate science will start to see that absurdity, but I very much doubt it will happen. While I agree with you that rational public policy should start with ‘no regrets’ actions, those actions are not what most working in climate science seem to want; they instead want people to ‘fundamentally change how they live their lives’ in order to reduce energy use. The disagreement is primarily political/philosophical, and the lack of transparency about the ‘tuning’ of GCMs is a manifestation of that political disagreement. GCM projections of extreme future warming are too valuable for advancing certain politically ‘favored’ policies to permit a frank public discussion of all the outright kludges that go into GCM projections, or a frank discussion of how those kludges make model projections dubious at best. I suspect the paper will be largely ignored within the field. GCM projections are scientifically of little use, but very useful politically.
I find this example from IPCC amazing:
“The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability….
There is some debate in the literature on how the multi-model ensembles should be interpreted statistically. This and past IPCC reports treat the model spread as some measure of uncertainty, irrespective of the number of models, which implies an ‘indistinguishable’ interpretation.”
(IPCC;WGI;AR5;Box 12.1 | Methods to Quantify Model Agreement in Maps)
I am stunned – how can the IPCC not see how silly this is? And what “an ‘indistinguishable’ interpretation” is supposed to mean, I have no idea.
They can fiddle with their models until Rome burns, but Peter Landesman put some nails in the coffin of these models in his article “The Mathematics Of Global Warming” (available on the American Thinker website). Specifically, the nature of the non-linear partial differential equations that define the models, and the numerical approximation methods used to derive “solutions”, cannot provide anything that can be even remotely regarded as an accurate projection of temperature, sea level, or any other catastrophe variable. The reasons are that: the models are not accurate representations of climate due to incomplete understanding of the interactions of the hundreds of variables involved; unverified assumptions are made in the models; climate is mathematically chaotic; and neither the computational methods nor the computers are up to the task even if the models were accurate representations. Simply put, as two physicists, Gerlich and Tscheuschner, observed, “the running of computer climate models is a very expensive form of computer game entertainment”.
The preimage of an epsilon ball around any given climate state is likely dense in the set of all states at any given sufficiently prior time.
That sounds very profound, but I have no idea what it means unless of course you are just being amusingly cynical, in which case I will have to steal that sentence for further use at some sufficiently dense future time.
Judith Curry,
According to my experience, the climate model tuning is OK, as the model proves that climate sensitivity cannot be distinguished from zero; I agree with Cripwell, Arrak and Wojick when they claim that climate sensitivity cannot be distinguished from zero, and even with Scafetta and Lindzen when they say the climate sensitivity is lower than 1 and 0.5, without claiming the lowest value.
As far as I am aware, politicians shall be forced to give up the actions agreed at the Paris Conference to cut anthropogenic CO2 emissions – I name it ‘Pexit’. Though challenging, institutional partners, including even politicians, must be made to understand that the share of anthropogenic CO2 emissions in global warming is too minimal to be distinguished from zero.
Judith Curry, on the basis of your topics here I understand that you regard it as too uncertain to prove that the recent, really complex, global warming has been dominated by anthropogenic CO2 emissions. For instance, by using temperature observations in reality instead of model results, you and your coauthors have found that the climate sensitivity is only about half of the results adopted by the IPCC. Even this means that there is no need to cut anthropogenic CO2 emissions according to the Paris agreement. As I understand, however, you still assume that the recent increase of CO2 in the atmosphere has been controlled by anthropogenic CO2 emissions, and that the increase of total CO2 content in the atmosphere has dominated the recent increase of climate temperature. But, as I understand, there are observations in reality on the basis of which one can prove that neither anthropogenic CO2 emissions nor even total CO2 emissions dominate the global warming.
UN politicians have set the IPCC the task of clarifying the scientific background of the recent global warming believed to be caused by anthropogenic CO2 emissions. As the IPCC did not manage to find any evidence in reality for global warming controlled by anthropogenic CO2 emissions, it adopted hypothetical climate model results as such. I have understood that the model results are based on parameters chosen by circular argumentation. The incompetence of the climate models is already apparent from the fact that they cannot get working results from simulations of forecasts and hindcasts. As the Paris agreement is based on these model results, they are wrong and unable to reach the working results needed.
The recent climate warming is a complex, inter-disciplinary problem. This makes, for instance, the climate sensitivity assessments difficult; one-sided expertise on that kind of complex problem may seldom be adequate to reach a working solution. One must learn to find the most essential issues in a complex problem to make any discovery of a working solution possible; the qualifications needed there are open-mindedness and the ability to take advantage of cross-disciplinary research in one way or another. That I have experienced by solving, for instance, multi-disciplinary metallurgical problems. And, in its complications, a problem of climate change can be compared to any problem of medicine, too: an appropriate diagnosis is the most essential target before any medical treatment.
Why must the Paris agreement be given up? The key question is how to make it understandable that anthropogenic CO2 emissions cannot be blamed for the recent global warming.
First, according to natural laws, the share of anthropogenic CO2 emissions in the recent increase of CO2 content in the atmosphere is so minimal, i.e., about 4%, that it cannot have any essential influence on atmospheric CO2 content or on warming of climate; http://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992 .
Secondly, as I have shown in my comment above, the recent trend of increase in CO2 content in the atmosphere has followed the warming of the sea surface, especially in the areas where sea surface sinks are; sea surface warming follows climate warming with a lag, especially in the areas where sea surface sinks are. This means that the increase of CO2 content in the atmosphere follows warming, and not vice versa.
Thirdly, it has been proved that even during glacials and interglacials, CO2 contents in the atmosphere have followed temperature changes in the atmosphere, and not vice versa.
Fourthly, geological observations prove that during the last 100 million years, in periods of 10 million years, changes of atmospheric CO2 content have followed changes of climate temperature.
Excellent piece. Unfortunately, it reinforces some of my intuition about the state of modeling. As a former budgeteer, I developed fudging to an art form. This description reeks with opportunities for fudging.
I’ve become more confident in my suspicion about CAGW with all the other data and historical accounts. This peek into modeling just adds to my confidence.
Exactly when has Judith not been correct? The world wonders.
I agree.
Polls show that about two-thirds of the public has, using its own ways of knowing, concluded that the modelers’ CAGW narrative is not to be trusted.
But the work that Dr. Curry and others do in fleshing out these doubts is invaluable.
Here is what Gavin Schmidt wrote on this today in a tweet
Gavin Schmidt @ClimateOfGavin, 41 minutes ago
Good paper on climate model tuning just out in @ametsoc BAMS http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-15-00135.1 …
More work needed to document this across CMIP6 though…
But here is what he wrote on this subject several years ago
Real Climate Misunderstanding Of Climate Models
https://pielkeclimatesci.wordpress.com/2008/11/28/real-climate-misunderstanding-of-climate-models/
Comments On Real Climate’s Post “FAQ on climate models: Part II”
https://pielkeclimatesci.wordpress.com/2009/01/20/comments-on-real-climates-post-faq-on-climate-models-part-ii/
Roger Sr.
Schmidt’s failing is partly about the parameters. They are inaccurately called effects, in a physical sense, on climate evolution in runs, but they are not representative of the real physical processes; they are not effects but simply parameters. This has been put forward in a dishonest or inaccurate way to the bogisphere. Yes, I made up a word.
This misconception, intentional or not, has misled the media, politicians, and the public.
RPSr, Thanks for those links. Had not previously captured some despite almost 6 years pounding on climate research. Now added to my personal data base of climate stuff: the good, the bad, the ugly. Put in the good category.
Think about it for a minute. Every climate model manages to accurately reproduce the 20th century global warming, in spite of the fact that the climate sensitivity to CO2 among these models varies by a factor of two. How is this accomplished? Does model tuning have anything to do with this?
This is the damning indictment.
Tunings might aid in resolving some errors, but ultimately amplify others.
Are we far enough along to conclude that the range of model results is actually larger than natural variability?
“Are we far enough along to conclude that the range of model results is actually larger than natural variability?”
I would certainly say that the model uncertainty is far too high to exclude natural variability. Here are a few relevant quotes from the paper:
«We know indeed that the system is nearly in balance but for the ocean heat uptake, believed to be of about 0.5 W/m2 in our warming climate, a value much smaller than the model and observational uncertainties.
This provides a strong large-scale constraint. A common practice to fulfill this constraint is to adjust the top-of-atmosphere or surface energy balance in atmosphere-only simulations exposed to observed sea surface temperatures (component tuning) and check if the temperature obtained in coupled models is realistic. This energy balance tuning is crucial since a change by 1 W/m2 of the global energy balance produces typically a change of about 0.5 to 1.5 K in the global mean surface temperature in coupled simulations depending on the sensitivity of the given model.»
«Clouds exert a large net cooling effect (about -20 W/m2), but this effect is uncertain to within several W/m2 (Loeb et al. 2009). A 1 W/m2 change in cloud radiative effects is only a 5% variation of the net cloud cooling effect, and 2% of the solar (or shortwave) effect, well below observational and model uncertainty (L’Ecuyer et al. 2015).»
«Some modeling groups claim not to tune their models against 20th century warming, however, even for model developers it is difficult to ensure that this is absolutely true in practice because of the complexity and historical dimension of model development. The reality of this paradigm is questioned by findings of Kiehl (2007) who discovered the existence of an anti-correlation between total radiative forcing and climate sensitivity in CMIP3 models: High sensitivity models were found to have a smaller total forcing and low sensitivity models a larger forcing, yielding less cross-ensemble variation of historical warming than otherwise to be expected. Even if alternate explanations have been proposed and even if the results were not so straightforward for CMIP5 (cf. Forster et al. 2013), it could suggest that some models may have been inadvertently or intentionally tuned to the 20th century warming.»
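The quoted rule of thumb, that a 1 W/m2 change in the global energy balance typically shifts coupled-model global mean temperature by about 0.5 to 1.5 K, is just the linear relation delta_T = lambda * delta_F with a model-dependent sensitivity parameter lambda. A trivial sketch, where the lambda values are illustrative assumptions spanning the quoted range rather than values from any particular model:

```python
# Energy-balance tuning rule of thumb from the quote: delta_T = lambda * delta_F,
# with lambda (K per W/m2) differing between models. Values are illustrative only.
delta_F = 1.0                                   # W/m2 change from tuning the energy balance
for lam in (0.5, 1.0, 1.5):                     # K per (W/m2), spanning the quoted range
    print(f"lambda = {lam:.1f} K/(W/m2)  ->  delta_T = {lam * delta_F:.1f} K")
```

This is why the radiation-balance tuning is described as crucial: a 1 W/m2 nudge, well within observational uncertainty, can move the simulated global mean temperature by a degree or more depending on the model's sensitivity.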
Not only were there CO2 tuning variations; the same models all had different aerosol tuning.
The process works like this: tune the model to hindcast [blank], extrapolate the future.
[Blank] represents the question: are the models representing reality, or falsely reproducing past climate?
So blank either represents dishonesty or a leap of faith. Neither is good for policy, which is why this has been excluded from discourse.
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk (John von Neumann). We are witnessing the fine art of wiggling the trunk.
Actually, climate models model a giraffe, and then they try to make it wiggle its trunk. The problem is the cavalier attitude of modelers, exemplified by Gavin’s comment: “If the specific heats of condensate and vapour is assumed to be zero (which is a pretty good assumption given the small ratio of water to air, and one often made in atmospheric models) then the appropriate L is constant (=L0).”
https://judithcurry.com/2012/08/30/activate-your-science/#comment-234131
Gavin shows that this “pretty good approximation” leads necessarily to an overestimation by 3% of heat transfer by evaporation from tropical seas. The average temperature of the Earth’s surface is close to 300 K, so a 3% error would be 9 degrees K. Maybe a 3% error in a heat transfer does not result in a 3% error in temperature; modelers should have shown that. But they did not.
What we really need is an honest analysis of the reliability of models and a good estimate of error bounds.
https://judithcurry.com/2013/06/28/open-thread-weekend-23/
+1 Curious. The error bounds would seem to be too large for the use of Climate Models to inform policy.
Hi, Peter. IMO large error bounds would be informing policy–or at least policy makers. Knowing what you don’t know is always good to know. ;O)
Hi mwg. Thus policy makers are skating on very thin ice when they are making economic decisions on such projections?
Thus policy makers are skating on very thin ice when they are making economic decisions on such projections?
That would depend on the actual policy, how the projections were used in formulating the policy, what other factors were used (or omitted), etc. Good models or bad models do not determine the need for policy. In spite of the beliefs and/or considerations of some people, models are not the sole or even primary indicators of warming. Also, difficulty in attribution of causes is not sufficient reason not to have a policy*. IMO there is enough known to suggest policy is needed at this time. The question is what policy. I also suspect that policies fomented in extremes such as denial or panic, or policies fomented in political ideology, are unwise. But then, we have long since gone beyond the pale, so ‘meh’.
——
* Or policies at different levels.
Making a model reproduce what has already happened can be achieved in multiple ways. The key issue is whether you are modeling how it happened, and the answer is no, the models are not doing that: if they were, they would not be running so hot, and coupled models would be able to reproduce not only temperature changes but also Arctic and Antarctic changes, monsoons, and precipitation.
There are massive shortcomings: aerosols, clouds, precipitation, airflow and grid resolution, and fudges for less well understood processes.
As such, the confidence expressed in the models is patently false.
Right now a net of runs is being cast to capture all changes within the 95% spread, and temperature is just barely captured, not because of model accuracy but because a widely cast net has more chance of catching fish.
I noted a comment on RC one day, it read “Climate is not tracking the mean of all models, it is tracking one model”.
I was stunned lol
My problem (I am no modeler or climate scientist) is the logic in the arguments for the models being able to replicate the past. It does not stack up. The claims of uncertainty do not relate to the real world.
All of the above assumes climate is a deterministic process that can be modeled and is not a chaotic process.
Not really.
Chaos is deterministic. However, I am skeptical of the ability to make useful predictions regarding future climate given even a “good” climate model. A reasonable treatment of uncertainties would need to sample model and initial conditions to “seed” the model. The result would be a group of possible trajectories so varied that they would be useless as policy tools.
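A minimal sketch of that point (my own toy example, using the Lorenz-63 system as a stand-in for a “good” model): ten runs that differ only by one-part-in-a-million perturbations of the initial state end up in very different places.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

rng = np.random.default_rng(1)
base = np.array([1.0, 1.0, 20.0])
t_end = 30.0

# "Seed" the model with tiny perturbations of the initial state and collect final states
finals = []
for _ in range(10):
    ic = base + rng.normal(0.0, 1e-6, 3)
    sol = solve_ivp(lorenz, (0.0, t_end), ic, rtol=1e-9, atol=1e-12)
    finals.append(sol.y[:, -1])
finals = np.array(finals)

spread = finals[:, 0].max() - finals[:, 0].min()
print("range of final x across the mini-ensemble:", round(float(spread), 2))
```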
“chaos” is conceptual, it translates to “I dont have all of the variables”
No it doesn’t.
Good comment, dougbadgero. The error bounds would seem to be so wide that there would be too many possible trajectories to serve as useful policy tools. The problem we all have is that, for the most part, the climate policy options under consideration are politically biased against the use of fossil fuels for the generation of energy, and the models are post hoc rationalisations.
I can only say: it is worse than I thought.
Judy, please elaborate a bit on what “formal model verification” would entail, and how that differs from current verification or validation strategies. Thanks.
Hi Marlo, see my previous posts on this:
https://judithcurry.com/2010/12/01/climate-model-verification-and-validation/
https://judithcurry.com/2011/09/25/verification-validation-and-uncertainty-quantification-in-scientific-computing/
This book
And this book
This Google search
The Patrick Roache book is indeed excellent; I purchased it several years ago based on Dan Hughes’ recommendation.
There is a large ‘hidden’ tuning bias in the attribution assumption that the ~1975 to ~2000 warming was (mostly) anthropogenic. The CMIP5 archive’s second mandatory model ‘experiment’ was a hindcast from YE 2005 back to 1975 — 3 decades. The tuning was explicitly to that period. In fact, with the model/temperature divergence from 2000 to now, we can begin to conclude that at least half this warming was natural. The statistically indistinguishable warming from ~1920 to ~1945 had to be mostly natural, as there was insufficient rise in CO2.
Start by assuming the answer, then tune to suit. That is what was done. And that is why CMIP5 has already gone badly awry with its now decade of projections. That is why Gavin hates the Christy chart so much.
Apparently, the current state of the art in climate modeling, when all else fails, is to adjust the observational data to agree with the model output. See Steve Goddard’s excellent analysis of this at
Pingback: Model tuning | …and Then There's Physics
Well, when the models are tuned to replicate the increase from around 1975 to 2000, along with the CO2 levels, it’s hardly surprising that they show a much higher sensitivity than appropriate (to longer periods, for instance).
I’m sure there are plenty of people who were really happy with those results.
” Based on the comments here, it appears that some are interpreting this as confirming their claims that climate models are tuned to give preferred results.”
Consider again the poser above:
Think about it for a minute. Every climate model manages to accurately reproduce the 20th century global warming, in spite of the fact that that the climate sensitivity to CO2 among these models varies by a factor of two. How is this accomplished? Does model tuning have anything to do with this?
Climate models are tuned to predict the past, but that doesn’t seem to help much (and might actually hurt) with predicting the future.
Twiddling with a parameterization here or a parameter there might alleviate one model infidelity only to amplify others.
At a minimum, it should cause you to reflect on your moniker.
Is tuning physics?
Have they considered using neural networks and top-down models to tune their fine-grid models? I realize their models take a huge amount of computer power, but they should aim in that direction. Sometimes I feel like the climate model workflow was designed by teenagers.
More processing power will not change the underlying problems, as in replicating poorly understood processes.
The main fault here is the lack of communication to all and sundry of the issues with the models; they are offered as highly confident analogues when there is no justification for such certainty.
As far as I know, the GCMs evolved from weather models, which only work out to a week or two. It smacks of hubris to believe that extending them to the whole globe and predicting years in advance could have any validity.
Chaos is present both spatially and temporally, over short and long time scales, and no averaging can help.
A simple energy balance would be better. These very complex models hide more than they reveal.
That said, they can be used to gain understanding of weather processes, but should have a warning sign attached: Do not use for predictions.
https://wattsupwiththat.files.wordpress.com/2013/09/josh-knobs.jpg?w=720
I read that there is a Russian model that fairly well tracks the balloon and satellite measurements illustrated in Dr. Spencer’s graph. Has there been any effort to differentiate the factors and assumptions incorporated in this model as opposed to the ones running hot?
Yes. The Russian INM-CM4. RClutz posted the differences on his blog, although you can google them for yourself. Several model comparison papers are not paywalled. Plus, AR4 WG1 has a discussion of CMIP3 (INM-CM3) at section 8.6.2.3. The two biggest parameterization differences are higher ocean thermal inertia and lower water vapor feedback. The first gets the heat sink more right; the second reflects newer observational data from corrected radiosonde and satellite measurements, ignored/dismissed by AR4 and ignored by AR5. Essay “Humidity is still Wet” has the AR4/AR5 references.
The model has the second-lowest TCR and the lowest ECS in CMIP5: ECS 2.1 versus energy-budget estimates of ~1.8.
Stuart,
You are missing the important points:
1) If any GCM appears accurate, then you must accept that they are all accurate. Nobody in climate science gives a bloody sh!t about the accuracy of the model ensemble or the accuracy of their projections.
2) The Russian GCM may actually project warming accurately, and may sit well below the rest of the GCM ensemble, but really, who cares? The Russian GCM is not going to motivate the ‘correct’ (which is to say, leftist) political response…. one which will justify forced impoverishment of most of humanity. Um, except for ‘policy elites’ who will always live large, like, for example, climate scientists.
3) The individual GCMs are irrelevant when you already know the ‘correct’ answer, which is that future warming ‘may be’ catastrophic… so the cost of public policy (both monetary and social) to combat global warming is irrelevant. Do exactly what the leftist/climate activists say, or they will prosecute you under RICO statutes.
Well, I disagree with your subcomment. The fact that one differently tuned model does not agree does not invalidate the others. They could all be invalid.
Tuning, turning, Turing testing to render reality indistinguishable from the shadows on the wall of Plato’s prison cave– Deep learning. Structured learning. Hierarchical learning. Machine learning through branching and sets of algorithms to conjure up high-level abstractions by processing layers of multiple linear and non-linear regression equations to develop relationships in data to develop prognostications of global or hemispherical mean temperature that go through the roof.
Pingback: VIP Very Important Post for climate nerds on climate model tuning | Catallaxy Files
The weather, and also the climate (the average of weather), is chaotic.
There exist methods to examine whether time series are chaotic.
I then wonder if the GCMs’ output is also chaotic, which it should be.
They sure look chaotic. Dame Slingo proposes to run them over extended time periods to map an attractor of a chaotic climate system. Even assuming that models reproduce the climate system faithfully, I don’t know what she means. The attractor is a cloud of trajectories in the climate system’s phase space; how many dimensions does that space have? 20? 50? 100? No one knows.
Yes, people know.
It is infinite dimensional because the climatic chaos is both spatial AND temporal.
It is described by a system of PDEs, and a PDE is equivalent to an infinity of ODEs.
As each ODE defines 1 degree of freedom, a PDE defines an infinity of degrees of freedom (i.e., phase space dimensions).
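For readers who want the standard argument behind that equivalence, the textbook illustration uses the 1-D heat equation u_t = ν u_xx on a periodic domain (a deliberately simple stand-in for Navier–Stokes):

```latex
u(x,t) \;=\; \sum_{k=-\infty}^{\infty} a_k(t)\, e^{ikx}
\qquad\Longrightarrow\qquad
\frac{\mathrm{d}a_k}{\mathrm{d}t} \;=\; -\,\nu k^{2}\, a_k
\qquad (k \in \mathbb{Z}).
```

One ODE, hence one degree of freedom, per Fourier mode, and infinitely many modes; for Navier–Stokes the modes couple nonlinearly, but the counting argument is the same.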
Climate is… the average of weather over time. The global temperature has been increasing and it has also been decreasing, depending on the start and stop points that are chosen.
“The art and science of climate model tuning. Bull.” While I understand that abbreviating “Bulletin of the American Meteorological Society” as “Bull. Amer. Meteor. Soc.” is common, I’d suggest that ending a line with just the first abbreviation right after the name of this paper is unfortunate. With other papers it might be more appropriate.
Of course the pause deniers were insisting that models weren’t tuned at all – based on the experience of never having modelled anything in their lives, but adept at just making stuff up.
Data must fall where it may; you can’t torture it into where you want it. Much climate science is now about torturing data until it dances the way you want.
A model that has tunable parameters (or effects) can be made to dance to your tune.
Ba dum tis
“There are no inductive inferences”
Karl Popper
Inductivists will always be left with the following challenge:
“The epistemological idea of simplicity plays a special part in theories of inductive logic, for example in connection with the problem of the ‘simplest curve’. Believers in inductive logic assume that we arrive at natural laws by generalization from particular observations. If we think of the various results in a series of observations as points plotted in a co-ordinate system, then the graphic representation of the law will be a curve passing through all these points. But through a finite number of points we can always draw an unlimited number of curves of the most diverse form. Since therefore the law is not uniquely determined by the observations, inductive logic is confronted with the problem of deciding which curve, among all these possible curves, is to be chosen.”
– Karl Popper; the logic of scientific discovery
By endorsing subjectivity and inductivism within science, the United Nations has become an international problem of a cultural character – barbarians.
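Popper’s “simplest curve” problem is easy to demonstrate numerically. The sketch below (my own toy example, not anything from Popper) builds two different polynomials that both pass exactly through the same five “observations” and then measures how far apart the two candidate “laws” are between the observation points:

```python
import numpy as np

# Five "observations"
x_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_obs = np.array([1.0, 2.0, 0.5, 2.5, 1.5])

# Any polynomial of degree >= 4 can pass exactly through all five points.
# Build two different degree-6 interpolants by imposing different extra constraints.
def exact_fit(extra_x, extra_y):
    xs = np.concatenate([x_obs, extra_x])
    ys = np.concatenate([y_obs, extra_y])
    return np.polyfit(xs, ys, deg=len(xs) - 1)

p1 = exact_fit(np.array([0.5, 3.5]), np.array([10.0, -10.0]))
p2 = exact_fit(np.array([0.5, 3.5]), np.array([-10.0, 10.0]))

grid = np.linspace(0.0, 4.0, 200)
gap = np.abs(np.polyval(p1, grid) - np.polyval(p2, grid))

# Both curves hit the five observations essentially exactly...
print(np.allclose(np.polyval(p1, x_obs), y_obs), np.allclose(np.polyval(p2, x_obs), y_obs))
# ...yet disagree wildly in between the observations.
print("max disagreement between the two 'laws':", round(float(gap.max()), 1))
```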
There are so many nuggets in that paper:
«We know indeed that the system is nearly in balance but for the ocean heat uptake, believed to be of about 0.5 W/m2 in our warming climate, a value much smaller than the model and observational uncertainties. This provides a strong large-scale constraint. A common practice to fulfill this constraint is to adjust the top-of-atmosphere or surface energy balance in atmosphere-only simulations exposed to observed sea surface temperatures (component tuning) and check if the temperature obtained in coupled models is realistic. This energy balance tuning is crucial since a change by 1 W/m2 of the global energy balance produces typically a change of about 0.5 to 1.5 K in the global mean surface temperature in coupled simulations depending on the sensitivity of the given model.»
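Read literally, the quoted numbers imply a net feedback parameter in the usual zero-dimensional energy-balance sense (my own back-of-the-envelope reading, not a calculation made in the paper):

```latex
\Delta T \;\approx\; \frac{\Delta F}{\lambda},
\qquad
\lambda \;\approx\; \frac{1~\mathrm{W\,m^{-2}}}{0.5\ \text{to}\ 1.5~\mathrm{K}}
\;\approx\; 0.7\ \text{to}\ 2~\mathrm{W\,m^{-2}\,K^{-1}}.
```

So an energy-balance tuning target of a few tenths of a W/m2 corresponds to tenths of a kelvin in the coupled global-mean temperature, which is why the adjustment is described as crucial.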
It is a pity that Policy Makers are unable to grasp that they have been had.
“It is a pity that Policy Makers are unable to grasp that they have been had.”
It is no surprise. Politicians have little or no science background and no understanding of the scientific method. This is compounded by the fact that scientists have a hard time communicating with such people. It is compounded further by the fact that once you get below the level of AGW emotion, the science gets complicated very quickly. It was clearly illustrated in a taped conference that four non-AGW scientists, including Ross McKitrick, had with a group of Canadian Senators from the Canadian Committee on Energy. The scientists’ goal was to convince the Senators that they needed to reconsider their pro global warming stance and policies. After an almost 2-hour discussion, the Chairman of the meeting said words to the effect: “there is so much overwhelming scientific consensus throughout the world from independent scientists who refute everything you say, that we have no choice but to listen to them, rather than sign on to some vast worldwide conspiracy that is presenting misleading information to the public and governments.” You can see the video at
https://www.youtube.com/watch?v=oMmZF8gB7Gs
It is 2 hours long, but fast forward to the 1:45:10 mark to hear these closing remarks – not just duped but reeled in hook, line, and sinker.
I guess my question is why it took so long to come clean? Absolutely everyone in the CFD model development community already knew this. Perhaps skeptical auditing and replication attempts had something to do with it.😀
Climate Science finally moves, slowly, toward more nearly complete, and critically necessary, documentation of the ultimate source of the numbers that are reported to be GCM results. That ultimate source is the discrete representation of the continuous model equations and the associated numerical solutions of the large system of non-linear algebraic approximations to the continuous model equations. As more nearly complete documentation of all aspects of the models, methods, code, and applications appears, it will be revealed that GCMs are, at the fundamental level, process models that are far removed from the Basic Science models that are presented to the public.
The paper illustrates the well-known fact that the beloved Climate Science mantra ( slogan, motto, maxim, catchphrase, catchword, watchword, byword, buzzword, tag (line) ) “Based on the Laws of Physics” does not begin to provide a valid summary of the actual critically important aspects of GCMs: the parameterizations and the numerical approximations. The paper directly focuses on the parameterizations. Relative to the conservation of energy Law of Physics, the paper reports:
The level of accuracy required for the global energy tuning (of a few tenths of W/m2) is for instance smaller than the error arising from not computing radiation at every time-step, as often done to save computational means (of the order of several W/m2, see e. g. Balaji et al. 2016).
Another issue relative to energy conservation is outlined starting at line 199 of the MS.
The quote above is just one example of how numerical methods can destroy Conservation of Energy. Others include (1) representation of a driving potential with the time-level evaluated at two adjacent values, instead of at the same time level, (2) failure to ensure that solutions are not a function of the convergence criteria for all iterative solutions, and (3) improper use of inter-related parameterizations; conversion of mechanical to thermal energy, for example. The quoted example above from the paper is the case of (1) when the time levels are not adjacent. Item (1) is generally encountered whenever mass, momentum, and energy exchange across interfaces between sub-systems in Earth’s climate system are modeled.
While conservation of energy for the whole might not be affected by the approximation of (1), the distribution of energy, and of mass and momentum, among the sub-systems certainly is directly affected. In fact, aphysical situations can be encountered due to evaluation of the driving potential at different time levels; exchange and transfer against the physical driving potential, for example. The parameterizations themselves also directly ensure that the distributions among the sub-systems will not agree with the physical domain, because they are at best only estimates of the physical-domain driving coefficients; some are more or less heuristic or ad hoc.
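A minimal sketch of item (1) in a toy setting (my own two-box example, not a fragment of any GCM): two reservoirs exchange heat, and evaluating the driving temperature difference at mismatched time levels makes the total energy of the pair drift, while the consistent evaluation conserves it to round-off:

```python
Ca, Cb = 1.0, 5.0          # heat capacities of the two boxes (arbitrary units)
k, dt, nsteps = 0.8, 0.1, 500

def run(consistent):
    Ta, Tb = 10.0, 0.0
    E0 = Ca*Ta + Cb*Tb                      # total energy at the start
    for _ in range(nsteps):
        q_out = k*(Ta - Tb)                 # flux leaving box a, old time level
        Ta_new = Ta - dt*q_out/Ca
        if consistent:
            q_in = q_out                    # same flux, same time level: conservative
        else:
            q_in = k*(Ta_new - Tb)          # driving potential straddles two time levels
        Tb_new = Tb + dt*q_in/Cb
        Ta, Tb = Ta_new, Tb_new
    return Ca*Ta + Cb*Tb - E0               # energy gained or lost by the pair

print("energy drift, consistent time levels :", round(run(True), 6))
print("energy drift, mismatched time levels :", round(run(False), 6))
```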
The fact that the continuous equations are simply models of the fundamental Laws of Physics, and not the Laws of Physics themselves, is proven by the presence of parameterizations. The Laws of Physics will never have terms that are represented by parameterizations of states that materials have previously attained. The Laws of Physics contain only terms that represent properties of materials. They never contain states that the materials have previously attained.
The paper seems to open another issue as follows. Is it valid to construct ensembles of convenience when the individual members have been tuned to represent different aspects of Earth’s climate system to different degrees of fidelity? It seems that this approach negates the hypothesis that the physical internal variability is somehow averaged out and climate is determined by the ensemble. Not to mention that there has been no demonstration that the calculated internal variability is in any way related to that of the physical domain. It is not Weather, for certain.
The paper is an excellent start on some of the critically important aspects of climate modeling that have heretofore been completely neglected by the Climate Science community. While deep investigation of the effects of parameterizations is a good starting point, Verification of the numerical solution methods is also critically necessary. These Verification investigations can be initially conducted by the hierarchical approach suggested by the paper: process, component, and system. In engineering these are frequently designed as single-effects, integral-effects, and system-effects tests, respectively, for both Verification and Validation.
The paper also mentions:
Because tuning will affect the behavior of a climate model, and the confidence that can be given to a particular use of that model, it is important to document the tuning portion of the model development process. [My bold]
So eventually, Verification of each application of the model is necessary.
Finally, all of the aspects of computational climate science noted in the paper, each and every one of them without exception, have been known and addressed in the engineering literature, across all disciplines, for decades. Further, the engineering literature, and some science disciplines, have advanced far far beyond Climate Science. Maybe that literature would prove useful to assist Climate Science in starting these critically important investigations.
A question: can parameter estimation methodologies be successfully applied to chaotic responses? Even to simple ODE chaos such as the Lorenz system?
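One way to pose the question numerically (my own toy experiment, not a method from the paper): score candidate values of the Lorenz parameter rho against synthetic “observations” by mean squared trajectory error, with the initial condition known only to one part in a million. Over a short window the true parameter is recoverable; over a long window, sensitive dependence destroys the fit even at the true value.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, rho):
    x, y, z = s
    return [10.0*(y - x), x*(rho - z) - y, x*y - 8.0/3.0*z]

def x_series(rho, ic, t_end, n=400):
    sol = solve_ivp(lorenz, (0.0, t_end), ic, args=(rho,),
                    t_eval=np.linspace(0.0, t_end, n), rtol=1e-9, atol=1e-12)
    return sol.y[0]

ic_true = [1.0, 1.0, 20.0]
ic_model = [1.0 + 1e-6, 1.0, 20.0]   # initial condition known only imperfectly

for t_end in (2.0, 40.0):
    truth = x_series(28.0, ic_true, t_end)          # synthetic "observations"
    rhos = np.linspace(27.0, 29.0, 21)               # candidate parameter values
    cost = np.array([np.mean((x_series(r, ic_model, t_end) - truth)**2) for r in rhos])
    print(f"window {t_end:5.1f}:  cost at true rho = {cost[10]:10.3f},  "
          f"min cost = {cost.min():10.3f} at rho = {rhos[cost.argmin()]:.2f}")
```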
Reblogged this on 4timesayear's Blog.
Are the models “based on physics”, or are they “tuned”?
Which is it now?
Basically… to satisfy the demand for raw, non-adjusted data that is measured and not estimated, and for raw climate model results, we would have Mosher’s “the raw data is hotter” and a frozen northern hemisphere. Great.
Not to mention that “the measured data is smaller”. Cold way down south too… what a crazy world we live upon.
” Maybe because tuning is often seen as an unavoidable but dirty part of climate modeling; more engineering than science; an act of tinkering that does not merit recording in the scientific literature.”
So, engineering == tinkering
That’s a pretty ignorant statement.
Reblogged this on I Didn't Ask To Be a Blog.
IPCC used circular reasoning to exclude natural variability. IPCC relied on climate models (CMIP5), the hypotheses under test, to exclude natural variability:
“Observed Global Mean Surface Temperature anomalies relative to 1880–1919 in recent years lie well outside the range of Global Mean Surface Temperature anomalies in CMIP5 simulations with natural forcing only, but are consistent with the ensemble of CMIP5 simulations including both anthropogenic and natural forcing … Observed temperature trends over the period 1951–2010, … are, at most observed locations, consistent with the temperature trends in CMIP5 simulations including anthropogenic and natural forcings and inconsistent with the temperature trends in CMIP5 simulations including natural forcings only.”
(Ref.: Working Group I contribution to fifth assessment report by IPCC. TS.4.2.)
From what we know about how climate models have been adjusted to match observations, this is really starting to look ugly:
«Formalizing the question of tuning addresses an important concern: it is essential to explore the uncertainty coming both from model structural errors, by favoring the existence of tens of models, and from parameter uncertainties by not over-tuning. Either reducing the number of models or over-tuning, especially if an explicit or implicit consensus emerges in the community on a particular combination of metrics, would artificially reduce the dispersion of climate simulations. It would not reduce the uncertainty, but only hide it.»
From: The art and science of climate model tuning
There is no doubt that there has been an explicit and implicit consensus in the climate industry about the causes of global warming. While consensus is not a valid scientific argument, it might be a valid observation in a sociological enquiry – an observation which might explain a collective bias or preference.
I’ve said it before and will say it again: natural variability is mostly driving ECS estimates, which explains the regression of estimates toward natural trends.
Judith: IMO, the best evidence concerning the reliability of climate models can be found in Tsushima and Manabe, PNAS (2013). Due to the low heat capacity of the NH, GMST (not the temperature anomaly) rises and falls 3.5 K every year. This seasonal warming produces LARGE changes in OLR and reflected SWR of 5–10 W/m2 that have been monitored from space for parts of two decades. If climate models accurately reproduce fast feedbacks, they should do a good job of modeling this seasonal warming, both on a planetary scale and a regional scale. Fast feedbacks cover everything except slow changes in ice caps, and ice-albedo feedback (which includes fast changes in seasonal snow cover) is a minor component of total feedback.
Seasonal warming was studied as part of the CMIP5 project and analyzed globally by Tsushima and Manabe. Observations vs projections were broken down into four channels: seasonal changes in outgoing LWR from clear and cloudy skies and in reflected SWR from clear and cloudy skies. (The change in reflected SWR from clear skies is due to seasonal snow cover, which changes more in the NH than the SH.) Assuming that climate models have not been tuned to match seasonal change, seasonal warming provides a much more severe test than reproducing the historical record of warming: 1) Many records of 3.5 K of surface warming monitored by each of two satellites vs 1 record of about 1 K of warming compiled from a constantly changing set of instruments (with homogenization making a significant contribution to warming). 2) Aerosols can’t be used as a fudge factor.
One conclusion I draw from this paper is that most climate models MUST be wrong, because they disagree with EACH OTHER. They all make similar predictions about the change in LWR from clear skies, which agree with the observed change (a WV+LR feedback of about 1 W/m2/K). Beyond that, they are all over the map about feedback in the other three channels, and no model agrees well with observations. IMO, the data suggest the existence of large discrepancies between models that have been tuned to produce the same total feedback – i.e. similar climate sensitivity.
Seasonal warming (warming in the NH and cooling in the SH) isn’t an ideal model for feedbacks. (Nor was the El Nino warming and La Nina cooling used by Lindzen and Choi. Nor Pinatubo.) In particular, seasonal surface albedo feedback observed as SWR from clear skies has little in common with the surface albedo change that will accompany global warming. However, seasonal warming is a phenomenon that reliable AOGCMs should be able to replicate.
Tsushima and Manabe don’t pass any judgments about the reliability of AOGCMs. They simply conclude that the data on seasonal warming can be used to improve AOGCMs.
http://www.pnas.org/content/110/19/7568.long
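A sketch of the generic idea behind such seasonal-cycle tests (the temperature and flux numbers below are invented placeholders with roughly the magnitudes quoted above; the real analysis uses ERBE/CERES observations and CMIP5 output, and the actual Tsushima and Manabe procedure is more involved than a single regression):

```python
import numpy as np

# Twelve climatological months of global mean surface temperature (K) and a
# TOA flux anomaly (W/m2) for one "channel".  These values are invented
# placeholders: a ~3.5 K seasonal cycle and flux swings of a few W/m2.
month = np.arange(12)
gmst = 287.5 + 1.75*np.cos(2*np.pi*(month - 6)/12)          # peaks in NH summer
flux = -2.1*np.cos(2*np.pi*(month - 6)/12) + 0.3*np.random.default_rng(2).normal(size=12)

# A simple per-channel "feedback" estimate: the least-squares slope of the
# flux anomaly against the temperature anomaly over the seasonal cycle.
dT = gmst - gmst.mean()
dR = flux - flux.mean()
slope = np.sum(dR*dT)/np.sum(dT*dT)
print(f"seasonal regression slope: {slope:.2f} W m-2 K-1")
```

The same slope can be computed for each model and each channel (clear-sky LWR, cloudy-sky LWR, clear-sky SWR, cloudy-sky SWR) and compared with the observed slope, which is the sense in which the seasonal cycle provides a feedback test.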
Thanks much for this link
Judith, Thx for the article, I look forward to re-reading it in detail. Process modeling from a chemical engineering perspective has been my vocation for over 30 yrs.
As everyone is acutely aware, in global climate modeling we are dealing with both observable and non-observable, complex, non-linear, multi-multi-multi variable processes with complex constraints and limits. It has been interesting to watch the progression of climate modeling, e.g. how the models are developed, tested, tuned and verified. A practitioner needs to have an intimate knowledge not only of the process involved but of how to accurately and repeatably measure data and observe perturbations and the responses of the processes to those perturbations. These skills are not typically learned in school but take decades to develop. Having this experience is essential to model development, analysis and testing.
The fact that the intent of global climate model development was to emulate a certain period of “climatic warming”, yet the models can’t replicate non-warming periods without major tweaking of tuning constants or parametric values, points to poor/inadequate model development. No measure of parametric adjustment, tuning or changing of initial states will correct systemic problems.
My experience leads me to believe that it will take decades of good data collection and observation to validate the processes involved – assuming our culture and society has the patience to do that.
When I was just starting my engineering career in the early ’70s, an elderly climatologist friend of mine said that it may take hundreds or even a thousand years or more of data collection and observation to identify the major climate processes involved in model identification. I was taken aback at first, but have now come to see the wisdom in what he was telling me.
Pingback: The art and science of climate model tuning | Climate Etc. | Cranky Old Crow
It’s time to model PDO AMO ENSO driven climate augmented by solar cycles that can make these others warmer or colder.
Too much myopia.
In the last 30 years we have experienced the 1930s heat that was absorbed by the oceans during the cooling that followed in the ’50s, ’60s and ’70s; it came back out during the ’80s, ’90s and 2000s.
The whole “radiation in, radiation out” accounting over short terms is complete junk science. Warming phase, cooling phase: solar cycles augment both.
Radiative heat balance is a complete misdirection from what is actually happening. Still, ocean heat content is the best option we have for tracking radiation in versus radiation out, not tuning.
The Southern Hemisphere ice anomaly, for me, is the canary in the coal mine. Not the Arctic; the Arctic ice levels are a residue of changes in the Southern Hemisphere.
Tuning the climate of a global model – Thorsten Mauritsen, Bjorn Stevens, et al
When the only difference between a politician and a salesperson is a matter of privacy. With that in mind, when you look at the number of deals he has done versus the number of her closes, where do you put your money? You can have all the plans in the world but if you can’t close the deal what are you?
I have no idea how he will pull this off but my money is on a neo-world anyway, so what difference does it make to me now?
With all her training for the position of Human Resource Coordinator,
Unproductive
Reblogged this on Climate Collections.
Either reducing the number of models or over-tuning, especially if an explicit or implicit consensus emerges in the community on a particular combination of metrics, would artificially reduce the dispersion of climate simulations. It would not reduce the uncertainty, but only hide it.
.
This sentence is important and shows the circularity of the problem.
I will also expand on Dan Hughes’ post.
The problem is indeed how (and even whether it is possible) to tune a spatio-temporally chaotic system with an infinity of degrees of freedom.
Let us take the Lorenz system, which has the advantage of being “only” 3 dimensional.
Of course the parameters can also be adjusted, but let us first consider only the numerical accuracy of the computation.
We can use, e.g., a 10^-3 numerical accuracy, and running the model for a time T we obtain a final state F3 defined by the 3 values of the degrees of freedom X3, Y3, Z3.
Now we run the model for the same time T but with an accuracy of 10^-6.
We obtain a final state F6 defined by the 3 values of the degrees of freedom X6, Y6, Z6.
Of course F6 may be very different from F3.
Now assume that we have measured the system over the time T and know the empirical values Xe, Ye, Ze.
Suppose that they are very near to the mixed state X3, Y6, Z6 – that is, Y and Z agree with the high-accuracy run but X agrees with the low-accuracy one.
What went wrong?
The ODEs are sure – they are the “physics” of the system.
The parameters are less sure but reasonable.
So you look for the one which has the biggest impact on X and start to tweak it while maintaining the accuracy at 10^-6.
Finally you may (but need not!) come reasonably close to Xe, Ye and Ze, and you freeze both the numerical accuracy at 10^-6 and the parameters at their tweaked values.
.
Unfortunately the model is still hopelessly wrong.
Somebody comes along, works with an accuracy of 10^-16 over a longer time 2T, and shows that the final state computed with the previously “validated” model is totally off from the measured empirical values.
The circularity is now obvious – one can start the work described above again and still fail as soon as somebody changes the accuracy and/or the time. The values of the parameters are here almost irrelevant.
This result is known for the simple 3D Lorenz system, so that nobody thinks about tuning as soon as T starts to get longer and longer.
The lesson is inherent to the chaotic nature of the system, which allows (in the best case!) only the probabilities of the final states to be predicted.
The lesson generalizes of course to infinite dimensional chaotic systems like weather/climate too.
An attempt to deterministically tune any chaotic system is isomorphic to an attempt to create a deterministic (tuned) model of quantum mechanics, and the former must fail for the same reason that the latter fails.
In both systems only probabilities are knowable.
Unfortunately we cannot measure N different climates to study the distribution of the probabilities, whereas we can measure N different quantum systems, which allows us to know the distributions.
In this way one can say that studying climate is much more difficult than studying quantum field theory, and no amount of tuning can help here because, like the quote says, “the tuning only hides the uncertainties” (i.e., the probabilistic nature of the system).
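The two-accuracy experiment described above is easy to reproduce; here is a minimal sketch with the standard Lorenz-63 parameters (the tolerances and integration time are my own arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

ic, T = [1.0, 1.0, 20.0], 40.0

# Same equations, same parameters, same initial state; only the numerical
# accuracy of the integration differs (the "10^-3 versus 10^-6" of the comment).
F3 = solve_ivp(lorenz, (0.0, T), ic, rtol=1e-3, atol=1e-3).y[:, -1]
F6 = solve_ivp(lorenz, (0.0, T), ic, rtol=1e-6, atol=1e-6).y[:, -1]

print("final state at tolerance 1e-3:", np.round(F3, 2))
print("final state at tolerance 1e-6:", np.round(F6, 2))
print("distance between the two 'answers':", round(float(np.linalg.norm(F3 - F6)), 2))
```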
Hi Tomas, thanks very much for your comment
I’m not sure I understand this:
While the real-world system may be a spatio-temporally chaotic system with an infinity of degrees of freedom, wouldn’t the models have to be just temporally chaotic? And with many orders of magnitude fewer degrees of freedom than the real-world system being modeled?
Of course, it does lead to the same conclusion: that GCM-type models are in no way a good representation of the system they’re supposedly modelling.
As soon as you model the real world with equations that represent the natural laws of said real world (that’s what people express by saying “there is real physics in the model”), the chaos is necessarily “inside”.
The GCMs use Navier–Stokes. The property of the NS equations is to be infinite-dimensionally chaotic.
It follows that the models MUST exhibit the same properties as the real world – i.e. spatio-temporal chaos.
Of course, due to the finite accuracy and the parametrisations of the numerical calculation, the calculated solution may be, and generally is, far from the real state, because Nature solves Navier–Stokes with infinite accuracy and no parametrisation, which a climate scientist cannot do :)
.
If you are familiar with QM, I always advise imagining chaotic systems as an analogy to quantum systems.
The different states (quantum or climate) are distributed according to some probability law (which can, at least in principle, be computed), but you cannot (even in principle) compute deterministically THE state that the system will achieve after a time T.
In this case any tuning whose result would be to force the probability distribution to become a delta (i.e., a huge peak with almost zero everywhere else) would only destroy and hide the real distribution of the future states and hopelessly fail to describe the real world.
NSS is bounded by its rigidity, the only natural parameters being the viscosity ν > 0 and the external forces f(x,t), which may be random.
The use of NSS in a state-of-the-art GCM is incomplete, as the last full equation of motion is replaced by Coriolis, reducing its dimension to 2.5 (where there is an absence of theory in statistical mechanics).
Models improve the understanding of the climate… adding CO2 to the atmosphere:
https://pbs.twimg.com/media/CoYiOVQWYAA3K0d.jpg
the big blue Atlantic blob off Greenland is going away, and the AMO is about to shoot to new heights…
January:
http://www.ospo.noaa.gov/data/sst/anomaly/2016/anomnight.1.4.2016.gif
August:
http://www.ospo.noaa.gov/data/sst/anomaly/2016/anomnight.8.4.2016.gif
Franktoo:
In your Aug. 4 post you stated “The warming from 1920-1945 was not caused by falling sulfate emissions – emissions rose modestly and irregularly”
You need to examine Fig. 6 of your referenced paper. It shows DECREASING SO2 emissions in the 1930s (about 29 megatonnes). Per NASA, there was a temperature rise of about 0.3 deg. C in 1938. My model would predict a temperature rise of 0.58 deg. C. However, the decade of the 1930s was characterized by strong La Ninas (no El Ninos), which would have reduced average global temperatures by 0.2 deg. C or more. When this cooling is accounted for, both values are essentially identical.
Regarding the “recession correlation”, you wrote “Correlation – especially for only part of the record – does not prove causation”.
I examined the temperature record back to 1920, and there were 5 recessions between 1923 and 1950. In every instance, they coincided with a temporary spike in average global temperatures.
So there are 15 data points with 100% correlation over nearly 100 years. In this instance, correlation IS causation – unless you can PROVE otherwise.
You also wrote “Looking at the whole record, sulfate emissions both rose at roughly the same time. Unless something else was causing warming, there is a positive (not negative) correlation between sulfate emissions and temperature.”
The correlation is that whenever net global sulfate emissions are reduced, temperatures will rise.
Consider the climatic response to a large volcanic eruption-an initial cooling due to the injection of SO2 aerosols into the stratosphere, followed by warming as the aerosols settle out of the atmosphere (it can be proven that the climatic response to stratospheric and tropospheric aerosols is identical)
Extended Multivariate ENSO Index (MEI.ext)
http://www.esrl.noaa.gov/psd/enso/mei.ext/ext.ts.jpg
Bob Tisdale has an ONI index that shows an El Nino in 1930–1931, and another starting at the very end of 1939.
http://i56.tinypic.com/i3850k.jpg
JCH:
Thank you for the link.
I was using NASA’s NOAA Research listing of “SOI ranked by year 1896-1995.”
La Nina rankings are shown as being present for every quarter of every year 1929–1939, which would exclude any El Ninos; hence my comment that there were no El Ninos in the decade of the 1930s.
This does not agree with Bob Tisdale’s data, but I may be missing something.
JCH:
With respect to the 1930’s, Bob Tisdale’s graph differs from the accompanying data listing. The graph shows El Nino warming in the mid-1930’s, but the data listing shows none.
Also, the graph shows El Nino activity in late 1929, extending to 1933. The data listing shows no El Nino in 1929, and activity extending only to 1931.
I have not looked for other discrepancies, but I would suggest using the graph with caution.
Burl – the graph I included is Wolter’s MEI. The ONI chart is Tisdale’s. NOAA’s goes back to 1950.
Burl Henry: So there are 15 data points with 100% correlation over nearly 100 years. In this instance, correlation IS causation – unless you can PROVE otherwise..
Not so: any two autocorrelated data sequences completely independent of each other can be highly correlated for short periods of time. This is well known. You can find discussions in many books on time series; one is “The Analysis of Neural Data” by Kass, Eden and Brown.
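An illustration of that point with synthetic data (the AR(1) coefficient and window length are my own arbitrary choices): two series generated completely independently of each other can still show impressively high correlations over short stretches.

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1(n, phi=0.9):
    # A simple autocorrelated (AR(1)) random sequence
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi*x[t-1] + rng.normal()
    return x

a, b = ar1(1000), ar1(1000)           # independent of each other by construction

window = 15                            # short stretches, e.g. ~15 annual values
corrs = [np.corrcoef(a[i:i+window], b[i:i+window])[0, 1]
         for i in range(0, 1000 - window, window)]

print("full-series correlation       :", round(float(np.corrcoef(a, b)[0, 1]), 2))
print("largest short-window |r|      :", round(float(np.max(np.abs(corrs))), 2))
print("share of windows with |r| > 0.5:", round(float(np.mean(np.abs(corrs) > 0.5)), 2))
```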
Mattthewrmarler:
You wrote “…any two autocorrelated data sequences completely independent of each other can be highly correlated for short periods of time” in response to my stating that “So there are 15 data points with 100% correlation over nearly 100 years. In this instance, correlation IS causation – unless you can prove otherwise.”
This was for data going back to 1920.
Since 1880, there have been 28 recessions. In every instance (except one), there has been a temporary temperature increase coincident with the dates of the recession (the Sept. 1902 – Aug. 1904 recession showed no spike because of the very strong La Nina cooling at that time).
Warming due to a business recession can only be due to the decrease in industrial activity and its consequent decrease in SO2 emissions into the troposphere.
Decreases in SO2 emissions into the troposphere due to Clean Air efforts will have the same effect, and MUST be accounted for in any study of climate change.
This I have done, and the expected warming from their removal for the 1975 – 2011 period precisely matches the warming that occurred, leaving NO room for any additional warming due to greenhouse gasses.
The continued focus on CO2 (which has no climatic effect) as the cause of climate change, as opposed to the real reason for the warming, will have serious repercussions for life on this planet!
Dr. Curry:
I have made 2 attempts to respond to Franktoo’s Aug. 4 post, but neither has shown up on this thread. Is this a technical problem, or are you blocking my posts?
A few things got caught in moderation for some reason; they have been released.
Pingback: Weekly Climate and Energy News Roundup #235 | Watts Up With That?
Pingback: Climat : l’art et la science de l’ajustement des modèles
Climate Scientists have long insisted that Verification of GCM methods is both unnecessary and not possible. Now some claim that Documentation is also not possible.
The answer, I think, is that it is very detailed, and very model / centre specific. There’s also a lot of “implicit” tuning, in the sense that if something works out, you leave it alone; and if it doesn’t, you tweak it. It is also such a multi-layered process (even picking one thing I’m vaguely familiar with, like the sea ice) that you’d be hard pressed to go back afterwards and work out what all the tweaks even were (which is why the rather naive Hourdin recommendation for “better documentation” is rather naive; that’s the sort of recommendation anything like this always comes up with). [ my bold ]
Doesn’t that comment basically indicate that even the persons who work on the GCMs don’t know what’s in the GCMs?
You are surely right, Dan, and it reminds me of working with Mandelbulb.
.
One of my hobbies is to create fractal art with Mandelbulb3D (this example is not mine, but it shows what that means: http://haltenny.deviantart.com/art/Filigree-Sphere-495642256 ).
Mandelbulb3D is the generalisation of the famous 2D Mandelbrot set to 3D.
The fractal that will be created by a given combination of transforms and their parameters in the software modelling the Mandelbulb3D is absolutely unpredictable.
Now the typical work process is that you look, say, for spheres, and you happen to catch a combination of transforms that vaguely looks like what you’d want.
Each transform typically depends on 5–10 continuous parameters, so even if you have only 3 transforms you may imagine the uncountable infinity of fractals you could create. This without mentioning the finite but extremely high number of possible combinations among some 100 available transforms.
.
So once you have frozen the combination of transforms you selected, you start to tweak their parameters to see how much each of them impacts the shapes. Some have dramatic impacts and others hardly change the result.
After a certain time you start to get a shape quite near to the sphere you imagined in the beginning, and obtain something like the image I linked.
You stop there and document the parameters that worked.
.
Now the point of this post is not to talk about fractal art but about tweaking and the documentation of processes modelling chaotic systems.
It is indeed practically impossible to document the whole process.
In my sphere example above I went through something like several dozen continuous parameter changes, and it is simply impossible to store and classify (let alone analyse!) the intermediate values and the shapes obtained with those intermediate values.
A GCM is a piece of software which looks for shapes in a deterministic chaotic system (climate), so it is identical for all practical purposes to the Mandelbulb3D software, which looks for shapes in a deterministic chaotic system (fractal sets).
It seems indeed impossible to have exhaustive documentation of all the attempts that have been made and rejected because the result was not “spherical” enough.
Yet experience showed me that sometimes a nice sphere was hiding just a 0.01 change of 1 parameter away from a shape I had rejected for not being “nice”.
Trust me. I spent quite some time trying to create a classification and documentation system, and all attempts failed.
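For readers who want to see the “0.01 change of one parameter” effect without installing Mandelbulb3D, here is a 2-D toy analogue using a quadratic Julia set (my own example; the particular c values, grid, and iteration count are arbitrary choices). It reports how many grid points change their bounded/escaped classification when the parameter c is nudged by 0.01; one would expect noticeably more points to flip for a c near the boundary of the Mandelbrot set than for one deep inside it:

```python
import numpy as np

def bounded_mask(c, n=400, iters=150):
    # Classify a grid of starting points as bounded/escaped under z -> z^2 + c.
    x = np.linspace(-1.6, 1.6, n)
    z = x[None, :] + 1j*x[:, None]
    alive = np.ones(z.shape, dtype=bool)
    for _ in range(iters):
        z = np.where(alive, z*z + c, z)    # freeze points once they have escaped
        alive &= np.abs(z) < 2.0
    return alive

def flipped_fraction(c, dc=0.01):
    # Fraction of grid points whose classification changes when c moves by dc.
    return float(np.mean(bounded_mask(c) != bounded_mask(c + dc)))

print("deep inside the Mandelbrot set (c = 0)      :", round(flipped_fraction(0.0), 4))
print("near the set's boundary (c = -0.75 + 0.10j) :", round(flipped_fraction(-0.75 + 0.10j), 4))
```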
Oooh, pretty!
Thanks for that, Tomas!
I remember when I first saw a Mandelbrot set… it took about a quarter of an hour to draw on my 8088 with a Hercules graphics card.
Things have come on a bit!
I bet you have not ‘dumped’ your data, to save space.
Thank you for linking to, and presenting, that paper.
Which is more nearly correct:
(1) GCMs are based on the Fundamental Laws of Physics
(2) GCMs are based on models of the Fundamental Laws of Physics + parameterizations + heuristic parameterizations + ad hoc parameterizations + tuned parameterizations + tweaked parameterizations + tweaked tuned parameterizations + algebraic approximations to the continuous model equations + approximate numerical “solutions” of the algebraic approximations + software bug effects + user effects
Under which of the Fundamental Laws of Physics do tuning and tweaking of parameters fall: the Law of Tuning and Tweaking?
When software developers do not know critically important aspects of the software, you’ve got the blackest of black-box software.
When constructing ensemble results, shouldn’t the candidate members be first determined to be samples from the same population?
Dan, the truth lies somewhere between your 2 alternatives. The “laws of physics” doctrine is an invention of people like ATTP, who are either fundamentally confused or, as seems more likely, dishonestly misrepresenting the truth.
One thing is certain, and that is that the uncertainty in these models is larger, perhaps vastly larger, than the IPCC reports have made it out to be, and using them as a “line of evidence” is probably misplaced.
Well said. Approximations of approximations of approximations (or averages of averages of averages) are hardly the bedrock on which projections of variables well into the future can be made, and unfortunately they are the basis upon which energy policy is already being made. In fact, according to some scientists, even the idea of an “average global temperature” is erroneous. It may very well be a meaningless number in a chaotic climate system containing chaotic temperature variations on a planet that is not in thermodynamic equilibrium. For a very elegant and straightforward analysis of temperature data to prove this point, see the article “Numerical analysis of daily temperature at the Armagh Observatory” by Dr. Darko Butina, on his website 14patterns.com. (The website is somewhat flaky. The only way I have been able to get to it is to do a google search for “Darko Butina” and look for the 14patterns.com link in the google list.) The other papers listed on this site also make for thought-provoking reading. Butina is a retired scientist with particular expertise in data analysis.
I think there is much, indeed tantalizing, motivation to be right about a substance in the climate that can be controlled to the betterment of human welfare. It would be akin to discovering the ONE thing that causes all cancers and finding that that ONE thing can be controlled. Imagine the emotional payoff one would get. I think that is why climate models are overwhelmingly tuned with anthropogenic forcings. Subconsciously, major stars in climate science are searching for the holy anthropogenic grail and think they are moments away from proving it exists and can be controlled. Tuning to that beat of the drum will never stop. The motivation is too great.
The awardless, no-funding-for-you scientists will not be allowed in this game, and they know it. So they sit back and wait for the jagged slide down to happen, even if not in their lifetime, all the while smacking their foreheads as we flush money down the toilet chasing after boogy-weather.