by Judith Curry
To solve these pressing problems, there needs to be much better recognition of the importance of probability models in climate science and a more integrated view of climate modelling whereby climate prediction involves the fusion of numerical climate models and statistical models. – Stephenson et al.
Over the last two weeks, there have been some interesting exchanges in the blogosphere on the topic of interpreting an ensemble of models. See this recent post at Climate Etc. for background: How should we interpret an ensemble of models? Part I: Weather Models.
AR4
Excerpts from the IPCC AR4, section 10.5.4:
Uncertainty in the response of an AOGCM arises from the effects of internal variability, which can be sampled in isolation by creating ensembles of simulations of a single model using alternative initial conditions, and from modelling uncertainties, which arise from errors introduced by the discretization of the equations of motion on a finite resolution grid, and the parametrization of sub-grid scale processes (radiative transfer, cloud formation, convection, etc).
While ensemble projections carried out to date give a wide range of responses, they do not sample all possible sources of modelling uncertainty. More generally, the set of available models may share fundamental inadequacies, the effects of which cannot be quantified.
Non-informative prior distributions for regional temperature and precipitation are updated using observations and results from AOGCM ensembles to produce probability distributions of future changes. Key assumptions are that each model and the observations differ randomly and independently from the true climate.
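To make the flavor of this approach concrete, here is a minimal sketch in Python of the kind of calculation being described – not the actual AR4 method – assuming a flat prior, normal errors and known variances; all numbers are invented for illustration.

# Simplified sketch of a 'truth plus error' Bayesian update (illustrative only).
import numpy as np

model_changes = np.array([1.8, 2.4, 2.1, 3.0, 2.6])  # hypothetical AOGCM projections (deg C)
model_sigma = 0.6                                     # assumed model error spread (deg C)
obs_change, obs_sigma = 2.0, 0.4                      # hypothetical observation-based estimate

# With a non-informative (flat) prior and independent normal errors, the posterior
# for the true change is normal, with precision equal to the sum of the precisions.
precisions = np.append(np.full(model_changes.size, 1.0 / model_sigma**2), 1.0 / obs_sigma**2)
estimates = np.append(model_changes, obs_change)

post_var = 1.0 / precisions.sum()
post_mean = post_var * (precisions * estimates).sum()
print(f"posterior: {post_mean:.2f} +/- {1.96 * np.sqrt(post_var):.2f} deg C (95%)")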
Recent IPCC Report
The IPCC presents a more mature perspective on interpretation of the multi-model ensemble in Report from the IPCC Expert Meeting on Assessing and Combining Multi-Model Climate Projections. Excerpts:
Climate model results provide the basis for projections of future climate change. Previous assessment reports included model evaluation but avoided weighting or ranking models. Projections and uncertainties were based mostly on a ‘one model, one vote’ approach, despite the fact that models differed in terms of resolution, processes included, forcings and agreement with observations.
The reliability of projections might be improved if models are weighted according to some measure of skill and if their interdependencies are taken into account, or if only subsets of models are considered. Since there is little opportunity to verify climate forecasts on timescales of decades to centuries (except for a realization of the 20th century), the skill or performance of the models needs to be defined.
Statistical frameworks in published methods using ensembles to quantify uncertainty may assume (perhaps implicitly):
a. that each ensemble member is sampled from a distribution centered around the truth (‘truth plus error’ view). In this case, perfect independent models in an ensemble would be random draws from a distribution centered on observations.
Alternatively, a method may assume:
b. that each of the members is considered to be ‘exchangeable’ with the other members and with the real system. In this case, observations are viewed as a single random draw from an imagined distribution of the space of all possible but equally credible climate models and all possible outcomes of Earth’s chaotic processes. A ‘perfect’ independent model in this case is also a random draw from the same distribution, and so is ‘indistinguishable’ from the observations in the statistical model.
With the assumption of statistical model (a), uncertainties in predictions should tend to zero as more models are included, whereas with (b), we anticipate uncertainties to converge to a value related to the size of the distribution of all outcomes. While both approaches are common in published literature, the relationship between the method of ensemble generation and statistical model is rarely explicitly stated.
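A minimal numerical sketch of the contrast drawn above, under toy assumptions (known spread, normal distributions, invented numbers): under (a) the uncertainty about the truth shrinks like 1/sqrt(n), while under (b) it levels off at the spread of the distribution.

# Illustrative comparison of the two statistical interpretations (toy numbers).
import numpy as np

spread = 0.8  # hypothetical spread of the ensemble / outcome distribution (deg C)

for n in (5, 20, 100, 1000):
    # (a) truth plus error: each model = truth + independent error, so uncertainty
    # about the truth is the standard error of the ensemble mean.
    se_truth_plus_error = spread / np.sqrt(n)
    # (b) exchangeable: the truth is just another draw from the same distribution,
    # so the predictive uncertainty converges to the spread itself, not to zero.
    se_exchangeable = spread * np.sqrt(1.0 + 1.0 / n)
    print(f"n={n:4d}  truth-plus-error: +/-{se_truth_plus_error:.2f} C   "
          f"exchangeable: +/-{se_exchangeable:.2f} C")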
When analyzing results from multi-model ensembles, the following points should be taken into account:
• Consideration needs to be given to cases where the number of ensemble members or simulations differs between contributing models. The single model’s ensemble size should not inappropriately determine the weight given to any individual model in the multi-model ensemble. In some cases ensemble members may need to be averaged first before combining different models, while in other cases only one member may be used for each model.
• Ensemble members may not represent estimates of the climate system behaviour (trajectory) entirely independent of one another. This is likely true of members that simply represent different versions of the same model or use the same initial conditions. But even different models may share components and choices of parameterizations of processes and may have been calibrated using the same data sets. There is currently no ‘best practice’ approach to the characterization and combination of inter-dependent ensemble members, in fact there is no straightforward or unique way to characterize model dependence.
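As a concrete illustration of the first bullet, here is a minimal sketch (invented run counts and values) of one common convention: average each model's own members first, so a model with many runs does not dominate the multi-model mean.

# 'One model, one vote' versus naive pooling of all runs (illustrative only).
import numpy as np

runs_by_model = {
    "model_A": [2.1, 2.3, 2.2, 2.0, 2.4],  # five initial-condition members
    "model_B": [3.0],                      # a single run
    "model_C": [1.6, 1.8],
}

# Pooling every run gives model_A five times the weight of model_B.
pooled_mean = np.mean([r for runs in runs_by_model.values() for r in runs])

# Averaging within each model first, then across models, weights the models equally.
one_model_one_vote = np.mean([np.mean(runs) for runs in runs_by_model.values()])

print(f"pooled over all runs: {pooled_mean:.2f}   one model, one vote: {one_model_one_vote:.2f}")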
Environmetrics paper
Statistical problems in the probabilistic prediction of climate change
David Stephenson, Matthew Collins, Jonathan Rougier, Richard Chandler
Abstract. Future climate change projections are constructed from simulated numerical output from a small set of global climate models—samples of opportunity known as multi-model ensembles. Climate models do not produce probabilities, nor are they perfect representations of the real climate, and there are complex inter-relationships due to shared model features. This creates interesting statistical challenges for making inference about the real climate. These issues were the focus of discussions at an Isaac Newton Institute workshop on probabilistic prediction of climate change held at the University of Exeter on 20–23 September 2010. This article presents a summary of the issues discussed between the statisticians, mathematicians, and climate scientists present at the workshop. In addition, we also report the discussion that took place on how to define the concept of climate.
[link]
Excerpts:
Despite their increasing complexity and seductive realism, it is important to remember that climate models are not the real world. Climate models are numerical approximations to fluid dynamical equations forced by parameterisations of physical and unresolved sub-grid scale processes. Climate models are inadequate in a rich diversity of ways, but it is the hope that these physically motivated models can still inform us about various aspects of future observable climate. A major challenge is how we should use climate models to construct credible probabilistic forecasts of future climate. Because climate models do not themselves produce probabilities, an additional level of explicit probabilistic modelling is necessary to achieve this.
To make reliable probabilistic predictions of future climate, we need probability models based on credible and defensible assumptions. If climate is considered to be a probability distribution of all observable features of weather, then one needs well-specified probability models to define climate from the single realisation of weather provided by nature. In an increasingly non-stationary climate forced by accelerating rates of global warming, the time average over a fixed period provides only an incomplete description perhaps better suited to a more stationary pre-industrial world.
We also need probability models (or more generally a statistical framework) to make inference about future observable climate based on ensembles of simulated data from numerical climate models. Such frameworks involve making simplifying, yet transparent and defensible, assumptions about ensembles of climate models and their relationship to the real world. These frameworks not only have to model dependency between different climate models, but also need to account for model discrepancies (biases) and how these might evolve in the future. Development and testing of such frameworks are a pressing interdisciplinary challenge in statistical and climate science. Furthermore, it is likely that good statistical frameworks might be useful in the sequential design of climate model experiments, which take increasingly large amounts of time on the world’s fastest supercomputers.
To solve these pressing problems, there needs to be much better recognition of the importance of probability models in climate science and a more integrated view of climate modelling whereby climate prediction involves the fusion of numerical climate models and statistical models. This challenge will require greater collaboration and understanding between climate scientists and statisticians. Details of project areas are given in the final programme report (http://www.newton.ac.uk/reports/1011/clpdraft.pdf).
JC’s take
I wrote about this issue in my paper Climate Science and the Uncertainty Monster. Excerpts:
Given the inadequacies of current climate models, how should we interpret the multi-model ensemble simulations of the 21st century climate used in the IPCC assessment reports? This ensemble-of-opportunity is comprised of models with generally similar structures but different parameter choices and calibration histories. McWilliams (2007) and Parker (2010) argue that current climate model ensembles are not designed to sample representational uncertainty in a thorough or strategic way. Stainforth et al. (2007) argue that model inadequacy and an inadequate number of simulations in the ensemble preclude producing meaningful probability density functions (PDFs) from the frequency of model outcomes of future climate. Nevertheless, as summarized by Parker (2010), it is becoming increasingly common for results from individual multi-model and perturbed-physics simulations to be transformed into probabilistic projections of future climate, using Bayesian and other techniques. Parker argues that the reliability of these probabilistic projections is unknown, and in many cases they lack robustness. Knutti et al. (2008) argues that the real challenge lies more in how to interpret the PDFs rather than whether they should be constructed in the first place. Stainforth et al. (2007) warn against over-interpreting current model results since they could be contradicted by the next generation of models, undermining the credibility of the new generation of model simulations.
Stainforth et al. (2007) emphasize that models can provide useful insights without being able to provide probabilities, by providing a lower bound on the maximum range of uncertainty and a range of possibilities to be considered. Kandlikar et al. (2005) argue that when sources of uncertainty are well understood, it can be appropriate to convey uncertainty via full PDFs, but in other cases it will be more appropriate to offer only a range in which one expects the value of a predictive variable to fall with some specified probability, or to indicate the expected sign of a change without assigning a magnitude. They argue that uncertainty should be expressed using the most precise means that can be justified, but unjustified more precise means should not be used.
And from my paper Reasoning About Climate Uncertainty:
Following Walker et al. (2003), statistical uncertainty is distinguished from scenario uncertainty, whereby scenario uncertainty implies that it is not possible to formulate the probability of occurrence of particular outcomes. A scenario is a plausible but unverifiable description of how the system and/or its driving forces may develop in the future. Scenarios may be regarded as a range of discrete possibilities with no a priori allocation of likelihood. Whereas the IPCC reserves the term “scenario” for emissions scenarios, Betz (2009) argues for the logical necessity of considering each climate model simulation as a modal statement of possibility, stating what is possibly true about the future climate system, which is consistent with scenario uncertainty.
Stainforth et al. (2007) argue that model inadequacy and an insufficient number of simulations in the ensemble preclude producing meaningful probability distributions from the frequency of model outcomes of future climate. Stainforth et al. state: “[G]iven nonlinear models with large systematic errors under current conditions, no connection has been even remotely established for relating the distribution of model states under altered conditions to decision-relevant probability distributions. . . Furthermore, they are liable to be misleading because the conclusions, usually in the form of PDFs, imply much greater confidence than the underlying assumptions justify.”
Stainforth et al. make a statement that is equivalent to Betz’s modal statement of possibility: “Each model run is of value as it presents a ‘what if’ scenario from which we may learn about the model or the Earth system.” Insufficiently large initial condition ensembles combined with model parameter and structural uncertainty preclude forming a PDF from climate model simulations that has much meaning in terms of establishing a mean value or confidence intervals. In the presence of scenario uncertainty, which characterizes climate model simulations, attempts to produce a PDF for climate sensitivity are arguably misguided and misleading.
JC summary: Interpretation of an ensemble of models remains an open question, and it is good to see this issue being discussed by mathematicians and statisticians. In the meantime, take the multi-model ensemble mean with a grain of salt.
Moderation note: this is a technical thread, comments will be moderated for relevance.
Fiery Coal Hot Heat Wave Climate Forecasting Made Easy.
At Neven’s sea ice blog: “the dangers of basing an end-of-melt-season prediction on anything as simple as the current area or extent values. The variables that determine the eventual outcome of the annual melt season are truly legion.”
Surely this applies even more so to climate models in general.
Interpreting an ensemble of models is easy: match them yearly with the actual observations, discard them when they are wrong.
BTW, where are all the new models? Should we not have had new sets every 5 years with a more up-to-date base?
Where are they and what are they showing? A 13-year-old model is like a 13-year-old car: time to get a new one.
You don’t need to wait for future observations to discard a model when it gets a year wrong. The cool thing about physical models like these is you can initialize them to conditions in, say, 1900 or 2000 and run them forward, comparing against historical observations. In that case, by your pass/fail metric, they’d all be tossed in the bit bucket. The reason there’s an ensemble in the first place is that all the ensemble members fail to reproduce historical climate change. They don’t all fail in the same way. The raison d’être of an ensemble is the hope that the individual models have most of it right, so the faults in each average out. In that way, instead of a single model that is right most of the time but spectacularly wrong once in a while, you instead have an ensemble which is a little wrong all of the time and spectacularly wrong none of the time.
This is essentially what we have, and now we know from observation that the “little wrong all the time” is that the ensemble predicts a little too much warming each year, such that after 22 years of rising CO2 and model comparison to the actual climate response, the actual global average temperature has slowly but surely drifted outside the 95% confidence bound of ensemble prediction. Notably it drifted outside the lower temperature bound, which vindicates the so-called climate skeptics in no uncertain terms.
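For what it’s worth, the kind of consistency check being argued about can be sketched in a few lines of Python; the series below are synthetic stand-ins, not CMIP output or observed temperatures, so this shows only the mechanics of the check, not the claim itself.

# Checking a series against an ensemble's pointwise 95% envelope (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1990, 2013)

ens = 0.02 * (years - years[0])[None, :] + rng.normal(0.0, 0.1, size=(20, years.size))
obs = 0.01 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

lo, hi = np.percentile(ens, [2.5, 97.5], axis=0)  # pointwise 2.5-97.5% envelope
outside = (obs < lo) | (obs > hi)
print("years where the 'observed' series falls outside the envelope:", years[outside])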
It now appears the warmist faction is hoping that a Hail Mary play involving excess heat from anthropogenic greenhouse gases sequestered in the deep ocean will violate the 2nd Law of Thermodynamics and somehow concentrate itself on the surface in the future. It’s an act of desperation.
“Surely this applies even more so to climate models in general.
Interpreting an ensemble of models is easy: match them yearly with the actual observations, discard them when they are wrong.”
The problem is that all models, I repeat all models are wrong. So, your methodology would result in always throwing out models. Let’s take a simple example. You have a CFD code to predict the flow over an aircraft surface.
The model works pretty well, but in high AOA regimes it fails miserably. It gets the vortices all wrong. Do you throw the model away? Nope. You realize that in some areas the model works well but in other areas it sucks.
So you might look at that flight regime with a water tunnel model, or a wind tunnel, but you will still not get the vortices right. You’ll understand more, but the actual plane will behave differently. Now here is the interesting part. You might say “Don’t fly the plane in this regime because the results are unpredictable.”
With a climate model you’d need to
A) identify which variables you intend to match. For example, does a model fail if it gets the size of snowflakes wrong? Note the model makes billions of predictions.
B) identify what sort of error is meaningful in the context of the decision you are making. Do I need to get within 0.5C? 1C? 0.1C? Why, why not? For example, when designing an atomic bomb, how closely does one have to predict the explosive potential? A kiloton? A gram?
In short all models are wrong, some are useful. The concept of use logically entails a concept of purpose and a concept of actor. A hammer is useful for a builder to drive nails. You need to identify the PURPOSE ( drive the nails ) and the ACTOR ( builder). A hammer is also useful for a murderer to crush a skull. Model evaluation is intimately tied to the specific user and their specific goal.
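A minimal sketch of points A and B above: the verdict depends entirely on which variable is scored and what error tolerance the decision requires (all values invented).

# Pass/fail depends on the chosen metric and the decision-relevant tolerance.
def passes(model_value, observed_value, tolerance):
    """True if the model matches the observation to within the stated tolerance."""
    return abs(model_value - observed_value) <= tolerance

modelled_trend, observed_trend = 0.25, 0.15  # deg C per decade, hypothetical
for tol in (0.05, 0.1, 0.2):                 # candidate tolerances set by the user, not the model
    print(f"tolerance {tol:.2f} C/decade -> pass: {passes(modelled_trend, observed_trend, tol)}")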
Steve
I do not understand you continuing to write all “models are wrong”. That is obviously true but not germane.
What was a particular model designed to predict, and within what margin of error was it expected to perform for that characteristic over a specific time period? How well has the model performed over those same time periods in matching observed conditions?
What is the rationale to continue to consider the outputs of models for government policy that have failed to produce results that reasonably accurately match observed conditions? In my opinion there is no good reason. Models that have not been demonstrated to produce reliable outputs matching observed conditions should not be considered in any ensemble of models being used.
Steve
I do not understand you continuing to write all “models are wrong”. That is obviously true but not germane.
####################
Well, it is germane. Read through the comments here or anywhere in the skeptosphere and you will see the mantra “the models are wrong”, “the models are falsified”. So, it bears repeating that all models are wrong. That is, we can always find some way in which they don’t ‘match’ reality.
#################################
What was a particular model designed to predict, and within what margin of error was it expected to perform for that characteristic over a specific time period? How well has the model performed over those same time periods in matching observed conditions?
#########
This starts to get to the crux of the issue. Typically, one doesn’t have a margin of expected error. Typically one has an allowable error set by user requirements. Let’s take an example. From paleo we know that modern civilization has developed in a world that has been less than 18C (hmm, that’s close enough for this example) and we decide, because we can, that this is a boundary we don’t want to cross, because we don’t know the risks on the other side of that boundary. It’s 15C now.
We can decide that we need to know if we are going to get within 1C of that boundary. And so we can decide that a model error of 1C is acceptable. The decision that 18C is a boundary isn’t a statistical decision. The decision to come no closer than 17C is not a statistical decision. You decide you want to have 6 months of monthly expenses in cash… that can be informed by probabilities, but in the end it comes down to your willingness to take risk. Do you feel lucky, punk?
#############
What is the rationale to continue to consider the outputs of models for government policy that have failed to produce results that reasonably accurately match observed conditions? In my opinion there in no good reason.
############
The rationale is simple
A) the models do produce accurate projections, given the magnitude of the system being simulated.
B) there is no better decision support tool. Look at hurricanes. You have collections of models that are routinely wrong and wrong by large margins. Would you plan to go surfing because the models sucked?
Or take something as simple as crash dummies. A crash dummy is a model of the human being. Is it perfect? Nope. Do we disregard test results because of this? Nope. You just keep making them better or more specialized.
http://www.humaneticsatd.com/about-us/dummy-history
In short, as a policy maker, you are concerned about the level of sea rise in the future. If, for example, you are planning development of a coastline, then you have a legitimate question: where will the shore be in 100 years? Now suppose you are making that decision. It’s your job to make that decision and you have the power to make it.
What information should you consider? Why, you can consider any evidence you damn well please to. So for example you could look at the historical trend. What are you doing there? You are assuming a linear model. You fit the past with that linear model and you project the future. And you know that model can never be correct, but you can still use it. You can also use a physics based model. That too will be incorrect. What you can’t do is shrug your shoulders and say “we don’t know.” It’s trivially true that we don’t know and never will.
One could say “well, let people build wherever they like, maximize freedom.” This too is a model. It’s a model that says the best results come from people freely deciding. We see the failure of this model every hurricane season. Here is the bottom line: you cannot AVOID using models. Every time you think and decide you are using models, and none of those models is correct; some are less wrong than others.
#####################
Models that have not been demonstrated to produce reliable outputs matching observed conditions should not be considered in any ensemble of models being used.
You have no choice but to use models: when you think, you are using a model. Further, we can use unreliable models to get reliable results. Simple example: my car has a distance-to-empty model.
It’s nowhere near realistic. Its prediction is highly unreliable, but I rely on it. When it says 30 miles to empty, I fill up.
Steve writes:
The rationale is simple
A) the models do produce accurate projections, given the magnitude of the system being simulated.
My perspective- Steve, that imo is bologna. Most of the models do not produce accurate projections. Only a small percentage gives reasonably accurate forecasts of a narrow list of characteristics. If you have 25 models forecasting temperature rise over X years (as an example) and you note that the observed rise was .5C and the range of the models’ forecasts was such that only 3 models forecasted a rise between .3C and .5C while the others forecasted a rise of over 1C, why do you want to continue considering the outputs of the 22 models that have been shown to perform poorly?
Steve writes:
B) there is no better decision support tool. Look at hurricanes. You have collections of models that are routinely wrong and wrong by large margins. Would you plan to go surfing because the models sucked?
My perspective- A bad decision support tool is of no value and can frequently lead to decisions worse than no tool at all. We only listen to the predictions of hurricane models because they have been shown to produce reasonably accurate results for the purpose. If we look at 5 hurricane models that predicted the paths of hurricanes and 3 of them are not consistent with the observed path the hurricane is taking, do you still advocate people making steps to prepare based on the projected path of the 3 poorly performing models? Obviously not. You are being inconsistent.
Rob Starkey | July 1, 2013 at 12:20 pm |
“I do not understand you continuing to write all “models are wrong”. That is obviously true but not germane.”
The hell it isn’t germane. It necessarily means that the ensemble is wrong too. The trite expression is “Two wrongs don’t make a right”. Write that down.
The question then becomes ‘How wrong is the ensemble and why?’ Write that down too.
Steve is writing that no model is ever perfect, and my point is that anyone who has developed or used models knows that they are never perfect, or always wrong.
What Steve is failing to acknowledge is that most of the models are so wrong that they should be removed from future consideration. This is done in all other areas.
Steven Mosher | July 1, 2013 at 1:10 pm |
“A) the models do produce accurate projections, given the magnitude of the system being simulated.”
This seems patently not true or at least highly suspect. If it were true we wouldn’t be talking about missing heat or a pause or speculating about cloud feedback or Chinese aerosol production or black carbon or any of that.
You’re in denial about model skill, and if the above doesn’t convince you of that I don’t think any reasonable evidence could persuade you. You appear to demand an instant reversion to the mean. Have patience. It’s heading back toward the mean as we speak. Rome wasn’t built in a day.
Rob:
My perspective- Steve, that imo is bologna. Most of the models do not produce accurate projections. Only a small percentage gives reasonably accurate forecasts of a narrow list of characteristics. If you have 25 models forecasting temperature rise over X years (as an example) and you note that the observed rise was .5C and the range of the models’ forecasts was such that only 3 models forecasted a rise between .3C and .5C while the others forecasted a rise of over 1C, why do you want to continue considering the outputs of the 22 models that have been shown to perform poorly?
#############################################
What you consider reasonably accurate and what I consider reasonably accurate are two entirely different things. Given the complexity of the planet I think that getting the trend over a 30-year period within 100% is pretty damn good. That is, if the model predicted .25C and we saw .5C, that would be pretty damn good. The problem of course is that your idea of accurate is different than my idea of accurate. And in reality accuracy is relative to the purpose.
##############################
“My perspective- A bad decision support tool is of no value and can frequently lead to decisions worse than no tool at all. We only listen to the predictions of hurricane models because they have been shown to produce reasonably accurate results for the purpose. If we look at 5 hurricane models that predicted the paths of hurricanes and 3 of them are not consistent with the observed path the hurricane is taking, do you still advocate people making steps to prepare based on the projected path of the 3 poorly performing models? Obviously not. You are being inconsistent.”
All decision tools are “bad”; the point is the relative “badness”. The point is that there is a continuum of value. Let’s take the climate models. They predicted .2C of warming; in reality we are seeing something on the order of .15C. It’s very easy to weight the result even if the model is biased.
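A minimal sketch of the weighting idea, using the illustrative numbers above (a crude scaling, not a published calibration method): scale the projection by the ratio of observed to simulated change over the verification period.

# Crude bias scaling of a projection (illustrative numbers only).
hindcast_trend = 0.20   # modelled warming over the comparison period (deg C/decade)
observed_trend = 0.15   # observed warming over the same period (deg C/decade)
raw_projection = 2.0    # hypothetical modelled future change (deg C)

scaling = observed_trend / hindcast_trend
print(f"bias-scaled projection: {raw_projection * scaling:.2f} deg C (raw: {raw_projection:.2f} deg C)")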
##############################
“Rob Starkey | July 1, 2013 at 2:24 pm |
Steve is writing that no model is ever perfect, and my point is that anyone who has developed or used models knows that they are never perfect, or always wrong.
What Steve is failing to acknowledge is that most of the models are so wrong that they should be removed from future consideration. This is done in all other areas.”
###################
You can only remove a model if you have something better to replace it. The reason is simple. You ALWAYS have a model. When you think about any problem or any future you ALWAYS have a model. As long as you are using the human mind you ALWAYS have a model. Now, sometimes you don’t think you have a model… but you always have a model. So, you might think you can discard all the models but you can’t. You might think you have, but you haven’t.
This comes back to the fundamental choice you have.
When you are posed the question “What will the temperature be in 2100?”, you have 3 ways to answer. “I don’t know” is one of those choices, and it always loses on a test of skill.
Steve Mosher,
The issue for me is predicting abrupt change; as I am driving to the airport, a cab driver will run the red light and crash into me. My model of transit to the airport was wrong; it didn’t predict the unknowable. True, cab drivers run red lights. True, not all cab drivers run red lights. Unknowable: which cab driver at which time will run a red light, and will I be at the intersection when that cab driver runs the red light.
The physics is simple. What is unknowable is the abrupt change in the behavior of the cab driver, and whether I will be impacted.
When we look at climate models, and I agree from my own experience that all models are wrong, abrupt changes are not predicted by the model. If models could predict that a cab driver somewhere will run a red light and that I will be at the intersection at the precise time this particular cab driver runs this red light, then my statistical model would predict the crash; if I knew, I would not drive to the airport on this trip and would take the rapid transit instead. Or I may elect to not go out at all. I’m paralyzed with fear.
My purpose is to go to the airport safely. The model that I chose (time to leave, route, speed, etc.) didn’t figure the red-light-running cab driver into the calculation. There is something beyond the inputs to my model that impacts me, although I know about red-light-running cab drivers. My assumed error in decision making is small, as I don’t have a metric for abrupt change, the split-second decision of the cab driver to run the red light.
Now if I were driving in Boston to Logan Airport, I would assume all cab drivers were going to run red lights. An easier calculation with high probability of being true. In Lawrence KS, not likely to be true.
Mosher: “The problem is that all models, I repeat all models are wrong.”
While all climate models are wrong, in applied science (called engineering) models are NOT wrong, or even modestly wrong. Why not? Because the science behind them is based on KNOWNS – knowns measured in times past and archived (by many now unknown people whose jobs and industries required it to be known). Because of those efforts – and the standardization of, say, basic materials production and off-the-shelf products – a lay person with only a reasonable amount of “skills” can design a structure that will not fall and injure people; it doesn’t take a rocket scientist. Or a climate scientist.
The reason climate models all “are wrong” is because the underlying concepts and quantifications have not been done yet. If we engineered bridges and buildings with the unknowns allowed in climate science, we’d have a populace afraid to enter buildings or cross over running water. As it is, we have alarmed people to the extent that they are afraid the seas will boil (James Hansen) or the planet dying – afraid like hell for their grandkids. What level of ignorance permits such irresponsibility?
We live in a time before sufficient facts and processes are documented. We need to admit that to our overbearing egos and realize that real understanding of the climate will have to be left to those same grandkids or even later generations. If we want to do them a service, we should be putting our nose to the grindstone and simply logging facts as we find them – not running around screaming, “The sky is falling!” and rending our clothes and smearing ash all over ourselves.
The models SUCK, and they suck because we won’t even admit that what we are putting in them is garbage (because we don’t HAVE enough facts to put into them – if we did, the models wouldn’t suck). The models are premature; they are based on formulas and constants that are not known yet. If engineering models did that, they’d string the modelers up by the yardarm.
We can’t blame the climate models “sucking” on “variability” like the IPCC does. Variability is their code word for “we don’t know WTF is going on.” Shame on them for not admitting that – and then blaming the present state of our ignorance on such a vague lawyerese thing as some undefined variability. And then they still expect everyone to hop on the wagon and fork over their many trillions of future hard-earned euros on such an iffy proposition. Shame on them.
Personally, I am sure that when the underlying science is known, stripped of its mystical, magical, religious “variability” (like the Mind of God), the maths can be worked out and the modelers’ job will be a piece of cake. Until that time, the models suck – but only the climate models and economic models.
Re: “Climate models are inadequate in a rich diversity of ways, but it is the hope that these physically motivated models can still inform us about various aspects of future observable climate.”
When will modelers take half the data to tune and the other half to validate against forecasts/hindcasts, to provide a weighting of how accurately the models can replicate nature?
Current model tuning appears to be motivated by the politically correct lemming predictions of global warming with a systematic hot bias in ALL the models.
When will there be an effort to recognize this and to validate/invalidate models? When will there be official recognition that the models are failing in predictions when tested against the reality of actual global temperatures?
As Richard Feynman said:
“If it disagrees with experiment it is wrong. That’s all there is to it.”
The longer this takes, the less trust there will be in climate science and its “projections” (not “predictions”).
Re: “take the multi-model ensemble mean with a grain of salt.”
Try a bucket of salt.
Smart people everywhere know the climate model ensemble failed validation testing, David. Now people who aren’t engineers need to be somehow brought up to speed on what happened. ;-)
Such is the problem of some “climate scientists” (aka tenured residents of ivory towers) whose promotion depended on publishing and secured grants, not on accurate predictions with minimum life cycle costs and helping the extreme poor develop.
As Thomas Kuhn observed,
Will they ever learn the Scientific Method? Or will they go to their graves with their erroneous beliefs?
When will they ever learn?
The very notion of an “ensemble of models” suggests that there is not a best model. There may still be good models, but how to prove it?
The models should not only predict climate variables, but also evaluate their own confidence levels, for example, “The predicted average temperature for a grid rectangle 214 [coordinates] for November 21, 2013, 2pm UT, is between 23C and 31C with a 95% confidence level”. It is more work but it would boost my confidence in models immensely.
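The suggestion above is straightforward to sketch if one is willing to treat the ensemble spread as the uncertainty; synthetic numbers stand in for model output for one grid cell, and of course such an interval only reflects the spread the ensemble actually samples, which is the crux of this post.

# A 95% interval for one grid cell, derived from an ensemble of runs (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
cell_forecasts = rng.normal(27.0, 2.0, size=200)  # hypothetical ensemble values for one cell (deg C)

lo, hi = np.percentile(cell_forecasts, [2.5, 97.5])
print(f"grid-cell forecast: between {lo:.1f} C and {hi:.1f} C at the 95% level")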
When do they start to show concern about the validity of the models? What good is a projection without that? We accept that weather models have limits to predicting. We assume that climate models do not, or at least we don’t talk about that in public.
In this case, observations are viewed as a single random draw from an imagined distribution of the space of all possible but equally credible climate models and all possible outcomes of Earth’s chaotic processes.
ouch.
A ‘perfect’ independent model in this case is also a random draw from the same distribution, and so is ‘indistinguishable’ from the observations in the statistical model.
ow!
‘In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.’ http://www.ipcc.ch/ipccreports/tar/wg1/505.htm
Uncertainty in climate-change projections has traditionally been assessed using multi-model ensembles of the type shown in figure 9, essentially an ‘ensemble of opportunity’. The strength of this approach is that each model differs substantially in its structural assumptions and each has been extensively tested. The credibility of its projection is derived from evaluation of its simulation of the current climate against a wide range of observations. However, there are also significant limitations to this approach. The ensemble has not been designed to test the range of possible outcomes. Its size is too small (typically 10–20 members) to give robust estimates of the most likely changes and associated uncertainties and therefore it is hard to use in risk assessments.
As already noted, much of the uncertainty in the projections shown in figure 9 comes from the representation of sub-gridscale physical processes in the model, particularly cloud-radiation feedbacks [22]. More recently, the response of the carbon cycle to global warming [23] has been shown to be important, but not universally included yet in the projections. A more comprehensive, systematic and quantitative exploration of the sources of model uncertainty using large perturbed-parameter ensembles has been undertaken by Murphy et al. [24] and Stainforth et al. [25] to explore the wider range of possible future global climate sensitivities. The concept is to use a single-model framework to systematically perturb poorly constrained model parameters, related to key physical and biogeochemical (carbon cycle) processes, within expert-specified ranges. As in the multi-model approach, there is still the need to test each version of the model against the current climate before allowing it to enter the perturbed parameter ensemble. An obvious disadvantage of this approach is that it does not sample the structural uncertainty in models, such as resolution, grid structures and numerical methods because it relies on using a single-model framework.
As the ensemble sizes in the perturbed ensemble approach run to hundreds or even many thousands of members, the outcome is a probability distribution of climate change rather than an uncertainty range from a limited set of equally possible outcomes, as shown in figure 9. This means that decision-making on adaptation, for example, can now use a risk-based approach based on the probability of a particular outcome.’
http://rsta.royalsocietypublishing.org/content/369/1956/4751.full
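A minimal sketch of the perturbed-parameter idea in the excerpt, using a toy zero-dimensional energy balance in place of a GCM; the parameter range, forcing value and screening test are all invented assumptions, not those of Murphy et al. or Stainforth et al.

# Perturbed-parameter ensemble with a toy energy-balance model (illustrative only).
import numpy as np

rng = np.random.default_rng(3)
n = 10000
feedback = rng.uniform(0.8, 2.0, size=n)  # perturbed feedback parameter (W m-2 K-1), assumed range
forcing_2xco2 = 3.7                       # W m-2

# Crude 'current climate' screen standing in for testing each version against observations:
# keep versions whose scaled hindcast response is not wildly off a synthetic target.
hindcast = 0.6 * forcing_2xco2 / feedback
keep = np.abs(hindcast - 1.5) < 0.7

sensitivity = forcing_2xco2 / feedback[keep]  # equilibrium warming per CO2 doubling (K)
print(f"{keep.sum()} of {n} versions pass the screen")
print(f"5-95% range of sensitivity: {np.percentile(sensitivity, 5):.1f} to "
      f"{np.percentile(sensitivity, 95):.1f} K")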
Oh – I was just agreeing.
Unskilled Model forecasts are really not useful.
Skilled review of data for the past ten thousand years does offer really good skill for showing that what will happen next is a repeat of what has happened in the past.
there is still the need to test each version of the model against the current climate before allowing it to enter the perturbed parameter ensemble.
That has been done and they have not got one right yet.
‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.long
You’ve not quite got the idea HAP. There is no correspondence between ‘ensembles of opportunity’ and ‘perturbed model’ ensembles. One has a collection of results of different models that are chosen arbitrarily from an unknown range of outcomes – irreducible imprecision – and the other is theoretically more applicable but still struggling with models and methods.
Understanding the problems of models emerges from understanding the nonlinear nature of the underlying maths – chaos theory in other words.
Don’t be so hard on the model ensemble, Chief. If you subtract 0.2C/decade from the 0.3C/decade prediction they come pretty close. Those modeling boys just need to let me add a single line of code and presto-chango we’re good to go. Pseudocode:
annualGlobullWarming += -0.2;
Easy peasy.
‘In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential. ‘ http://www.ipcc.ch/ipccreports/tar/wg1/505.htm
It is impossible at the time of the third assessment and remains impossible now.
We should interpret ensembles realistically.
‘Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.’ http://www.pnas.org/content/104/21/8709.long
“Such frameworks involve making simplifying, yet transparent and defensible, assumptions about ensembles of climate models and their relationship to the real world.”
In light of the tuned aggregate structure of climate data, it’s flatly unconscionable to continue saying such things. My patience for this BS has totally & completely expired. You might as well try to force feed the populace the notion that 1+1=3. I conclude that there are some darkly intransigent forces of ignorance &/or deception at play and that sensible people had better be very wary. This is the sort of thing that can eat the foundation out from under society & civilization. Modeling “science” is becoming a mean, angry beast that threatens a very large proportion of the human population and a good deal more beyond that. Organized religion is the only natural force I can see that might be capable of neutralizing the influence of deeply corrupted university & government modeling “science” that is based on patently untenable, thoroughly egregious, fundamentally false assumptions.
pv, here’s the climate modeling way to make 1 + 1 = 3:
1±0.33 + 1±0.33 = 3±0.66.
See how that works? All you have to do is introduce some uncertainty around the middle and then claim the model is performing flawlessly. At least for a while, until error accumulates beyond the amount you specified. This is in fact what happened. It took 22 years for the accumulated error to exceed the built-in uncertainty.
In defense of the usual suspects, some or most of them have at least admitted something is missing from the models. My favorite excuse so far is “The ocean ate my global warming”, after the proverbial dog that ate your homework. That’s all well and good, but the further claim is that if we wait long enough the dog will schit the homework and it will emerge unblemished. Unfortunately that violates the 2nd Law of Thermodynamics, and even schoolchildren should know that you can’t unbake a cake, i.e. once the ocean eats your global warming it cannot come out in the same form it entered. In this case the warming, which could have made a sharp temperature rise in a small volume of surface water, once diluted in the far greater volume of the whole ocean, can never concentrate back into a shallow surface layer. At least not without violating the law of entropy. So the energy that enters at 0.5 W/m2 over the course of a century takes, by my rough calculation, 2000 years to escape at 0.025 W/m2. That’s not enough to detectably increase the temperature of the atmosphere above the ocean. Judith Curry and Gavin Schmidt at least, to their credit, acknowledge this. Trenberth however appears to be in denial.
DS, the imbalance remains either until the ocean surface warms, or the land surface has some other means of warming. The imbalance rises with increasing forcing (CO2 in this case) and stays the same with OHC increasing as long as the surface temperature is not changing. So you see, it doesn’t need the heat to come out of the ocean if the land has another way of warming, but the key is that the imbalance remains or continues to rise until the surface warms somehow to equilibrate it. In economic terms, the imbalance is like a growing debt that can only be paid off with surface warming. OHC rising just postpones the payment.
No Jim. The debt can be paid by accelerated surface cooling.
Riddle me this, batman. What causes an interglacial period to end and global average temperature to plummet? And why can’t whatever causes interglacials to end suck the sequestered heat out of the ocean even while global average temperature is going down?
Models have demonstrated no skill. To take them with a grain of salt is giving them too much credit.
Skill is a mathematical measure. They have skill. The question is
A) what metrics are relevant to policy
B) what is the skill on those metrics
C) what alternative method of prediction does better
The only metric relevant to policy is regional change in weather patterns with concrete economic impact, and some agreement on how much the white on a polar bear is worth in dollars and cents – good luck on the former, and since the latter is subjective there will never be agreement.
What part of ‘a posteriori solution behaviour’ don’t you understand mosh?
“They have skill.”
Really?
Clearly the “climate scientist’s” definition of “skill” differs massively from that recognised by the vast majority of the population.
Perhaps it is such egregious alteration of the commonly understood meaning of words that is responsible for the rapid decline in public credibility which is currently afflicting your profession.
“C) what alternative method of prediction does better”
Tea leaf readings?
Black cock entrails?
Throwing darts over your shoulder?
Surely a better question would be “is there any method known to mankind that does worse”?
“In the meantime, take the multi-model ensemble mean with a grain of salt.”
This is the best advice: from Judith.
The early work of the IPCC suggests it was confusing its scientific principles with its democratic ones. Like having about 20 models when one properly validated one would do.
It is so not true – you need models with hundreds to thousands of runs to generate PDFs. No model has a single deterministic solution – there are many solutions to any model within the range of feasible inputs. To understand this you need to understand the nature of the nonlinear Navier-Stokes partial differential equations. You need to have some appreciation of chaos theory to understand the models. Chaos theory was rediscovered by Edward Lorenz using the Navier-Stokes equations in a simple 1960s convection model. This is a case where 5 plus 5 doesn’t equal 10.
A hand is five,
Another is five.
So what do you get,
Add five plus five?
A butterfly.
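For readers who want to see the point rather than take it on trust, here is a minimal sketch of Lorenz’s 1963 convection model: runs that start within one part in a million of each other spread over the whole attractor, so only the distribution over many runs carries information (parameter values are the standard textbook ones; the integration is a crude forward-Euler step, good enough for illustration).

# Sensitivity to initial conditions in the Lorenz (1963) system (illustrative only).
import numpy as np

def lorenz_step(states, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step for an array of states with shape (n_members, 3)."""
    x, y, z = states[:, 0], states[:, 1], states[:, 2]
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return states + dt * np.stack([dx, dy, dz], axis=1)

rng = np.random.default_rng(4)
n_members, n_steps = 200, 2000
states = np.array([1.0, 1.0, 1.0]) + rng.normal(0.0, 1e-6, size=(n_members, 3))

for _ in range(n_steps):
    states = lorenz_step(states)

x = states[:, 0]
print(f"spread of x after {n_steps} steps from 1e-6 initial differences: "
      f"min {x.min():.1f}, max {x.max():.1f}, std {x.std():.1f}")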
It seems you are deliberately misunderstanding the issue.
Nobody has any problem with 100/1000’s of runs of a single model, and with the ensemble mean of these runs being used as the most likely trajectory of the model output, and the range representing the noise/weather.
This in no way endorses taking an average of any two different models. It’s a different problem, and many seem to think it’s as pointless as asking what is the average gender of the pupils in a mixed school.
You’re not suggesting that these climate models have incorporated chaos theory and non-linear NS PDE’s Chief? As far as I can determine, the climate models currently in use assume ergodicity and linearity.
The core of the models is the non-linear Navier-Stokes equations. They can’t help but be chaotic. There are many divergent solutions – irreducible imprecision – possible within the range of feasible inputs. How do they know which one is right?
‘ The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.long
A posteriori solution behavior? That’s right – they pull it out of their arses.
There are 2 approaches. The ensemble of opportunity – for which there is no theoretical justification and the perturbed model ensemble. The former is an ad hoc collection of solutions that are pulled out of arses. The latter systematically changes input parameters in a single model to produce a family of solutions that can then be treated statistically. The latter is in principle a better approach but is lacking a rigorous methodology. The latter is what I was talking about here.
Peter, the models actually produce a lot of variability over long periods of time. One could hardly tell this from the projections the IPCC shows. It makes me wonder if he that tunes the models tunes the past and he that tunes the past tunes the future.
http://mms.dkrz.de/pdf/DKRZ25/Poster/poster_0582.pdf
http://www.ldeo.columbia.edu/~jsmerdon/papers/2012_jclim_karnauskasetal.pdf
http://www.agu.org/pubs/crossref/pip/2012GL052107.shtml
Do you know what statistical mechanics is, Chief? Every swinging dick on the planet knows about the butterfly effect but the fact remains that macroscopic order emerges from microscopic chaos. Get a clue.
David, “Do you know what statistical mechanics is, Chief? Every swinging dick on the planet knows about the butterfly effect but the fact remains that macroscopic order emerges from microscopic chaos. Get a clue.”
That is true. If CO2 were having a strong enough impact to obviously be driving climate, chaos would not be an issue. When climate scientists use methods that require “microscopic” precision to dig out the CO2 signal, chaos becomes a problem both in the system and in the methods.
Face it, when people are arguing over the significance of a 0.034 C trend using a +/- 0.011 confidence interval, thar be chaos.
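For the record, the kind of calculation being argued over looks like this; the series below is synthetic noise plus a small trend, not an actual temperature record, and no autocorrelation correction is applied (a real analysis would need one).

# Ordinary least-squares trend and an approximate 95% interval (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1998, 2013)
series = 0.005 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

x = years - years.mean()
slope = (x * (series - series.mean())).sum() / (x ** 2).sum()
resid = series - series.mean() - slope * x
se = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (x ** 2).sum())
print(f"trend: {slope:+.3f} +/- {1.96 * se:.3f} C/yr (normal approximation, no AR(1) correction)")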
@Steve Ta: Mostly agreed. The problem for me is, “the range representing the noise/weather.”
The range of the ensemble values is claimed to reflect the variability of nature. This simply isn’t a reasonable interpretation of what the variability actually is.
The worse the members of an ensemble do, the better it is for ensemble fans, because: 1) it makes the ensemble less falsifiable, 2) it allows them to talk about the huge (upside) worst-case scenario, and 3) the whackiest ensemble members make the poor members look reasonable by comparison.
Models are based on the non-linear equations. So we have first of all a confusion of model behavior and climate. The discussion is about models and so the question of ‘statistical mechanics’ – even if it did apply to climate – is utterly irrelevant.
Climate is however at the fundamental system level chaotic.
‘Nonlinear phenomena characterize all aspects of global change dynamics, from the Earth’s climate system to human decision-making (Gallagher and Appenzeller, 1999). Past records of climate change are perhaps the most frequently cited examples of nonlinear dynamics, especially where certain aspects of climate, e.g., the thermohaline circulation of the North Atlantic ocean, suggest the existence of thresholds, multiple equilibria, and other features that may result in episodes of rapid change (Stocker and Schmittner, 1997). As described in Kabat et al. (2003), the Earth’s climate system includes the natural spheres (e.g., atmosphere, biosphere, hydrosphere and geosphere), the anthrosphere (e.g., economy, society, culture), and their complex interactions (Schellnhuber, 1998). These interactions are the main source of nonlinear behavior, and thus one of the main sources of uncertainty in our attempts to predict the effects of global environmental change. In sharp contrast to familiar linear physical processes, nonlinear behavior in the climate results in highly diverse, usually surprising and often counterintuitive observations…’
http://www.globalcarbonproject.org/global/pdf/pep/Rial2004.NonlinearitiesCC.pdf
‘Atmospheric and oceanic forcings are strongest at global equilibrium scales of 10^7 m and seasons to millennia. Fluid mixing and dissipation occur at microscales of 10^−3 m and 10^−3 s, and cloud particulate transformations happen at 10^−6 m or smaller. Observed intrinsic variability is spectrally broad band across all intermediate scales. A full representation for all dynamical degrees of freedom in different quantities and scales is uncomputable even with optimistically foreseeable computer technology. No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned.’
http://www.pnas.org/content/104/21/8709.long
Let’s see – should I place more weight on James McWilliams or Jarhead the Jabberwocky?
So you don’t know what statistical mechanics is. Check.
In your reply you further demonstrate that you don’t know that appeal to authority is a logical fallacy.
Nice. Babble on, Garth.
Again the models are non-linear without any doubt at all. Twittering on about statistical mechanics in the context of numerical simulation is not just wrong but utterly misguided.
‘James C. McWilliams primarily does research in computational modeling of the Earth’s oceans and atmospheres.[3] McWilliams has written numerous papers from 1972 to the present,[4] attempting to construct accurate models to describe the Earth’s fluid reservoirs. One of McWilliam’s most influential papers was a paper written in 1990 titled “Isopycnal mixing in ocean circulation models”, in which together with Peter R. Gent they proposed a subgrid-scale form of mesoscale eddy mixing on isopyncal surfaces for use in non-eddy resolving ocean circulation models.[5]
McWilliams has contributed greatly to the development of accurate models of the Earth’s atmosphere and ocean, and his subjects of interest are maintenance of the general circulations; climate dynamics; geostrophically and cyclostrophically balanced (or slow manifold) dynamics in rotating, stratified fluids; vortex dynamics; planetary boundary layers; planetary-scale thermohaline convection; coherent structures of turbulent flows in geophysical and astrophysical regimes; magnetohydrodynamics; numerical methods; and statistical estimation theory.[6]
More recently, he has helped develop a three-dimensional simulation model of the U.S. West Coast that incorporates physical oceanographic, biogeochemical, and sediment transport aspects of the coastal circulation. This model is being used to interpret coastal phenomena, diagnose historical variability in relation to observational data, and assess future possibilities.[7]’ Wikipedia
The ‘appeal to authority’ is indeed a fallacy – however – an appeal to an actual authority is not a fallacy. Claiming Jarhead the Jabberwock as an authority would be a fallacy. Referencing an acknowledged leader in the field from peer-reviewed science is not.
‘No fundamentally reliable reduction of the size of the AOS dynamical system (i.e., a statistical mechanics analogous to the transition between molecular kinetics and fluid dynamics) is yet envisioned.’
An ensemble of models is like a consensus of opinions, it doesn’t really mean anything.
Well, we are talking about it, so it of course makes sense.
Operationally it is a collection of our best physical understandings of the climate system
Even if true, “our (modeled) best physical understandings” do not provide reliable enough results to inform policy. Bad information that people believe is immeasurably worse than no information at all. We would be better off preparing for those things that we know can happen because they have happened before.
New England/New York has been struck by multiple hurricane Sandys.
The Gulf Coast has been struck by multiple hurricane Katrinas.
Let’s prepare for what we know can happen before we spend billions on model outputs that have demonstrated no useful predictive skill but are nonetheless “our best physical understandings of the climate system”.
How should we interpret an ensemble of models?
It appears we have just passed the peak of solar activity for at least two or three decades to come:
http://www.vukcevic.talktalk.net/SSN.htm
My interpretation for any ensemble of models, especially the semi-naked ones: Siberian or Canadian fur numbers will be the high fashion for years of winters to come!
A model projection ensemble reflects the activities and thinking of the modelling groups and their interactions; not physical reality. It may be interesting and useful for the modellers themselves, but should not be presented to the outside world as information about climate, because it is not. Neither the mean, nor the spread, nor the correlations. It does not even represent uncertainty in any meaningful sense. Just an accumulation of choices that have been made by various people at various stages and for various reasons one may only guess.
This is the madness at the core and the inception. Had attention been paid to Nature this could have self-corrected. How come it didn’t? Good question.
=============
Kim
It would help if we could get the past right and realised the shortcomings of models. Looking at historic evidence helps, as Phil Jones agrees.
Bearing in mind the time, money and effort expended and their importance to policy makers, models are not fit for the purpose they were intended for. Perhaps in future, but not at present.
http://judithcurry.com/2013/06/26/noticeable-climate-change/#comment-339260
Tonyb
Of course, Tony; now the raw question, what is the good PHILosopher doing JONESing around in the historical temperature record for Europe?
==============
Kim
In answer to your question, he and Dario Camuffo (amongst others) are changing the early records with a large EU grant under the ‘Improv’ project auspices.
I have the results in book form as I write this; it’s called ‘Improved understanding of past climatic variability from early daily European instrumental sources.’
Have you ever heard of the Mannheim Palatine?
tonyb
Cees, you write “It does not even represent uncertainty in any meaningful sense.”
This is the key point which our hostess and all the other warmists refuse to discuss. I have noted many times, CAGW is a viable hypothesis, which no-one can prove is wrong. The work the warmists do to try and prove that their hypothesis is correct is completely valid science. I applaud it.
But what is fatally and fundamentally wrong with the warmist approach to CAGW is that, led by the IPCC, they insist all this hypothetical modelling “proves” that CAGW is correct. They will not acknowledge that their guesses as to what the value of climate sensitivity is, for example, are just that, guesses. They have absolutely no idea how accurate these guesses are.
But the IPCC still goes on claiming that it is “extremely likely”, or “very likely”, that something or other about CAGW is correct. I believe that this will be part of the AR5, and appear in the SPMs in September. That is where I feel our hostess and her warmist brethren are being scientifically dishonest.
Cees de Valk (July 1, 2013 at 7:46 am)
“It does not even represent uncertainty in any meaningful sense. Just an accumulation of choices […]”
more specifically:
fundamentally false assumptions
“A model projection ensemble reflects the activities and thinking of the modelling groups and their interactions; not physical reality. ”
No model represents ‘physical reality’. Let’s put this another way: the idea that there is a model over here and reality over there is itself a model.
Operationally we have the following question: what will the climate be like in 100 years?
There are 3 approaches to answering this
A) shrugging your shoulders.
B) statistical approaches
C) physics based approaches.
If you want to play on the field of informing policy, then option A means you give up your power. That is what many skeptics have done by claiming we can’t know, or that the climate is too complex, or by grunting “models bad”.
That means you get to choose between B and C. Option C beats option B hands down in terms of scope and accuracy.
Steven, you write “That means you get to choose between B and C. Option C beats option B hands down in terms of scope and accuracy.”
Utter and complete garbage. Option C is only of any use if you can prove that the physics is capable of solving the problem. Since there is no empirical data to support the hypothesis of CAGW, Option C is useless; as is Option B. The ONLY honest thing for proper scientists to say is “We don’t know”. This is, of course, Option A, which you dismiss.
Ancient Greek generals used to sacrifice chickens and examine their entrails in order to “predict” the outcome of campaigns. Xenophon gives many examples in the Anabasis. In all his cases, these auguries supported what Xenophon wanted to do in the first place.
Most of the anti-GCMers here believe that these simulators are in reality no better than chicken entrails and just as easily manipulated by motivated interests (in this case including ideological members of the priestly caste). It is no good to cite instrumentalist dogma about the use of models when your audience thinks that they are at best naive equivalents of chicken entrails.
Personally, I see the GCMs as lying somewhere between entrails and macroeconomic models. They are certainly no more believable and I would say less believable than the latter when scaled to the needs of policymakers and the public.
The best argument for CO2 mitigation remains the same one we started with–we know that the partial derivative of expected temperature with respect to CO2 is positive and we are on track to put a lot more CO2 into the air than we’ve seen in recorded history. The GCMs add no credibility beyond this argument, nor do they pin down with believable precision anything quantitatively, particularly at the regional level relevant for policy. So please, no more shilling for these simulators–they are neither necessary nor sufficient nor confirmatory for the Urgent Mitigationist case.
Speaking of entrails, or tea leaves, or palm lines, water-witching or whatever… they are the gimmick whose only purpose is to distract the gullible from catching on that the person “reading” the fortune is making it up himself… and you would not believe him without that gimmick.
The skeptic will suspect that climate models fall into this same category. Steven seems to believe that all (?) the climate models are serious science. I expect there is a bit of each. Until I see climate scientists critiquing each other’s models and data sets in the spirit of SM, I can’t give blind trust to the modellers. A model may be the best we have, or even the best we can do, and still be imperfect enough to be mostly useless. For example, no model will ever predict a series of random coin tosses or throws of dice. Can we prove that climate is deterministic? How are we doing on earthquake prediction? Sometimes the more you know, the harder the problem seems.
If we substitute “Hentai” for “climate” these sound like bullet points for a 12-step program.
Still trying to model massive non-linear open-ended feedback-driven (where we don’t even know the sign of some of the tiny proportion of the feedbacks we have identified – some of which even change sign periodically) chaotic systems subject to extreme sensitivity to initial conditions, are we?
And claiming this is somehow relevant to something or other?
And pretending that if we can get “ensembles” of 40, 70 or however many (which, considering that they all claim to be based on 150-year-old solid, proven physics, show a remarkable propensity to produce widely different results) that will somehow increase their predictive ability?
And setting policies costing trillions of dollars/euros/yen/GBP etc?
And being very handsomely paid for this by the taxpayer too?
Nice work if you can get it.
Jolly good.
Carry on.
+1000
Hi Judy – I recommend the set of posts at
http://www.climatedialogue.org/are-regional-models-ready-for-prime-time/
as relevant to your post.
Roger Sr.
Hi Roger, thanks for this link
Not seeing much of this one in the discussion here yet. An ensemble failure of the opposite kind. Sea ice is disappearing in the Arctic 50 years ahead of the ensemble prediction which seems to have been too conservative in this area.
http://blogs.scientificamerican.com/guest-blog/files/2012/09/naam-ice-12.jpg
JimD, ” Sea ice is disappearing in the Arctic 50 years ahead of the ensemble prediction which seems to have been too conservative in this area.”
Always the optimist. Sea ice could be disappearing because the magnitude of natural variability was underestimated by a factor of three and CO2e forcing by a factor of 2 or more. That would be the GIGO hypothesis.
Indeed, you have to check if this has happened before in the last millennium. Answer, no. I think it is real.
http://www.skepticalscience.com/pics/1-kinnard2011.jpg
Another hockey stick? How naive.
JimD, Actually the probability of a similar melt in the past millennium is quite high.
https://lh5.googleusercontent.com/-yCVnY6nXIiQ/UZmVEhGt-oI/AAAAAAAAIJs/EozQSkgn614/s817/IPWP%2520spliced%2520with%2520cru4%2520shifted%2520anomaly%2520from%25200ad.png
https://lh3.googleusercontent.com/-rRs69Ekl9Zc/T_7kMjPiejI/AAAAAAAAChY/baz0GHWEGbI/s917/60000%2520years%2520of%2520climate%2520change%2520plus%2520or%2520minus%25201.25%2520degrees.png
That second one, Tierney et al, was one that Marcott completely screwed the dating on. Both represent the majority of the world’s heat capacity, the tropics.
Just in case you still have doubts,
https://lh3.googleusercontent.com/-zz3b_nx9E_I/Ua1L-4Q65RI/AAAAAAAAIZc/hXnBPycSUio/s815/oppo%2520and%2520CET.png
Compared to CET, the IPWP seems to indicate the handle of the hockey stick is not as smooth as some insist.
Then there is a wealth of paleo ocean data that indicates a variety of recurrent frequencies plus a new paper by our host that indicates natural variability is not insignificant.
Other than that, I got nothing.
Staying away from showing anything relevant to the Arctic there, captain. This was about ensemble models and their predictions in case you forgot. Point was, it already is lower than they predicted for 50 years from now.
https://lh3.googleusercontent.com/-EitTjDkCKmU/UdGkXs5zX5I/AAAAAAAAI4Q/q1rnbX1c0ds/s912/high%2520resolution%2520sub%2520polar%2520arctic.png
JimD, since the whole heat capacity and internal variability thing eludes you, that is Sicre with the Kaplan AMO. The 25-month AMO just gives you an idea of the noise that has to be dealt with, and the matching smoothing of the AMO shows the pitfalls of Mann-o-maticing paleo reconstructions. But there is evidence that the MWP did exist in at least the Arctic regions. What a shock! Those D-O events and other squigglies might be real?
captain, Arctic Sea ice loss could be more than a squiggle this time, I would suggest.
JimD, For “Global” temperature, pretty much just a squiggle since the hemispheres tend to seesaw. On an NH temperature scale, pretty big squiggle. That IPWP reconstruction only varies +/- 0.8C. That is not a lot of change in “global” heat capacity. The Arctic ocean though is only 3.4% of the global surface area. It can have big changes that impact NH land surface temperatures and don’t do diddly in the SH. Since the tropics represent most of the heat capacity, fiddle farting around with 3.4% of the globe doesn’t make much sense. Unless of course you are volunteering to stop all industry above latitude 50N so we can grow some ice sheets.
captain, the Arctic summer albedo loss would have a noticeable impact on global temperatures. It is at the center of a major feedback on global climate that would then surely spread to the northern continental winter snow, permafrost melting, etc.
JimD, “captain, the Arctic summer albedo loss would have a noticeable impact on global temperatures. It is at the center of a major feedback on global climate that would then surely spread to the northern continental winter snow, permafrost melting, etc.”
Because of the angle of incidence and cloud variations, “late” summer arctic albedo loss would not have that great an impact, and remember it is only 3.4% of the globe.
Now if the “late” summer Arctic albedo loss were a major concern, I would not have expected 6 years to pass between records. The major SSW events and bitter winters that follow the record melts tend to indicate there are some not very well considered negative feedbacks to sea ice melt, like our host has published on in the past.
Since we are at an AMO peak, if the AMO is actually an “oscillation” it would need a trigger to reverse itself. Kinda like Arctic Sea Ice melt. Face it, this would not be the first time the models have missed on feedback estimates.
Jim D | July 1, 2013 at 10:01 am |
Indeed, you have to check if this has happened before in the last millennium. Answer, no. I think it is real.
#######################
the skeptical null hypothesis is this
“It’s all happened before”
They will never admit to any evidence that contradicts this null.
Mosh said in a sweeping statement:
‘the skeptical null hypothesis is this
“It’s all happened before”
They will never admit to any evidence that contradicts this null’
—– —
The sceptical null hypothesis is that the historical record shows it has all happened before, so show us the evidence that it is unprecedented. Models (paleo reconstructions) are not evidence, especially when they miss out the granularity of our climate.
tonyb
jimD
Spare me that Kinnard reconstruction. The models made of arctic extent back in the 1970’s did not reflect either the Russian area or the full extent of the melting partially shown in the DMI models of the time, which stopped in August.
http://judithcurry.com/2013/04/10/historic-variations-in-arctic-sea-ice-part-ii-1920-1950/
Hopefully the Back to 1870 project will get somewhere closer to the reality than was managed 40 years ago
tonyb
Are you as sure as the captain that what is happening now in the Arctic is just a natural variation? How sure, 75%, 100%? Will it all melt by September 2020? Looks that way to me when I look at PIOMAS ice volume trends. When did that last happen? Anyway, this is off the subject. It is an area where ensembles underpredicted a trend, and for some reason “skeptics” don’t like to talk about this mode of ensemble failure.
jimd
The ice extent was considerably overestimated during the 1920-1940 period. I asked Neven to help me do a proper reconstruction but he felt he was too busy. Best estimate is that ice extent back then was around that of the early 2000’s but probably not as low as 2007 and 2012.
The two warmest decades in Greenland (see article for ref) were, according to Phil Jones, the 1920 to 1940 period. We will have to wait until 2020 to see if the current warming beats it.
Natural? The arctic melts back with surprising regularity. The 1700’s and 1500’s are in my current research programme for that reason.
According to information I have seen at the Scott Polar Institute in Cambridge, there seems SOME evidence that the 1500 to 1550 period MAY have seen the first traverse of the Northern passage. Similarly, the land temperature also shows considerable warming around that time; if so, it appears there may have been arctic amplification.
But it’s early days, and as my regular huge cheques from Big Oil seem to have dried up, it may be some time before I can dig further.
See my interchange with Julienne Stroeve in the article as to what ‘ice’ actually means.
tonyb.
You should try working with Kinnard. He has 69 proxies of various kinds in his 2011 work, so he may be a little ahead of you.
jimd
I saw his proxies. He’s not.
Mann also had numerous proxies but that doesn’t make his work correct either
tonyb
jimd
I sounded more dismissive than I meant. Kinnard did a good job in bringing some proxies into the open.
http://climateaudit.org/2011/12/05/kinnard-arctic-o18-series/
Trouble comes when interpreting data. Like tree rings, certain proxies have limited value. Also they are likely to miss the granularity of the annual changes, which can be considerable. We saw this with the paleo reconstructions.
It’s a great shame that historians and modelers don’t work more closely together.
tonyb
At least a few of his proxies were called documentary, which I assume means historical documents.
It – has – all – happened – before!
A – serf – out – in – all – goddam – variable – weather.
“These issues were the focus of discussions at an Isaac Newton Institute workshop on probabilistic prediction of climate change held at the University of Exeter on 20–23 September 2010. This article presents a summary of the issues discussed between the statisticians, mathematicians, and climate scientists present at the workshop. In addition, we also report the discussion that took place on how to define the concept of climate.”
This workshop is all online in the form of videos. I’ve posted links to a few of the talks over the past 3 years; if you can’t find it using Google just ask and I’ll link again.
Invest the time in watching every presentation. It’s better than reading comments.
@Steven Mosher
Stephenson’s summary video (Dec 2010) is particularly good (IMO). [Still watching… Thanks for the heads up/prod.]
link: http://sms.cam.ac.uk/media/1083858
here are some links to worthwhile presentations
http://www.newton.ac.uk/programmes/CLP/seminars/2010082515301.html
http://www.newton.ac.uk/programmes/CLP/clpw02p.html
Judith would be interested in Dempster
http://www.newton.ac.uk/programmes/CLP/seminars/2010092209309.html
“I believe it is fair to say, however, that how physicists approach scientific uncertainty has been scarcely touched by fundamental developments within statistics concerning mathematical representations of scientific uncertainty. An indication of the disconnect is provided by the guidelines used by the IPCC in its 2007 major report, where the terms “likelihood” and “confidence” were recommended for two types of uncertainty reports, apparently in complete ignorance of how these terms have been used for more than 60 years as basic textbook concepts in statistics, having nothing whatsoever in common with the recommended IPCC language (which I regard as operationally very confusing). Another indication is that experts from the statistical research community constituted according to one source only about 1% of the attendees at the recent Edinburgh conference on statistical climatology.”
Steve, I did a previous post on Dempster
http://judithcurry.com/2011/04/11/dempster-on-climate-prediction/
I recall that now..thx
Bringing up DA is evocative of the golden age of AI. [Nice time to be around.] Maybe the important thing isn’t DS as much as the development over the past few decades of non-probabilistic approaches to uncertainty, and of attempts to incorporate conceptual problems such as ignorance. Other methodologies dealing with uncertainty that come to mind include fuzzy sets, rough sets, and GLUE (in hydrology), and these might contribute, though in policy aspects time is a factor. But in the comment above, the long quote is most notable as a concise critique with a broad scope.
On Dempster-Shafer
http://www.newton.ac.uk/programmes/CLP/seminars/2010092209309.pdf
“A detailed exposition of DS is not possible in this note, but I wish to draw attention to two basic features of the DS system. The first is that probabilities are no longer additive. By this I mean that if p denotes probability “for” the truth of a particular assertion, here some statement about a specific aspect of the Earth’s climate in the future under assumed forcing, while q denotes probability “against” the truth of the assertion, there is no longer a requirement that p + q = 1. Instead these probabilities are allowed to be subadditive, meaning that in general p + q < 1. The difference 1 – p – q is labeled r, so that now p + q + r = 1, with r referred to as the probability of “don’t know”. (Note: each of p, q, and r is limited to the closed interval [0,1].)”
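To make the arithmetic in that passage concrete, here is a minimal Python sketch of the subadditive “for / against / don’t know” triple; it is my own illustration rather than anything from Dempster’s note, and the numerical values are hypothetical:

```python
# Minimal sketch of the Dempster-Shafer style triple from the quoted note:
# p ("for"), q ("against"), r ("don't know"), with p + q <= 1 and r = 1 - p - q.
# All numbers are hypothetical placeholders.

def ds_triple(p_for: float, q_against: float) -> tuple[float, float, float]:
    """Return (p, q, r), where r is the residual 'don't know' mass."""
    if not (0.0 <= p_for <= 1.0 and 0.0 <= q_against <= 1.0):
        raise ValueError("p and q must lie in [0, 1]")
    if p_for + q_against > 1.0:
        raise ValueError("subadditivity requires p + q <= 1")
    return p_for, q_against, 1.0 - p_for - q_against

# Example: weak evidence for an assertion about a future regional climate change.
p, q, r = ds_triple(0.4, 0.3)
print(f"for = {p:.2f}, against = {q:.2f}, don't know = {r:.2f}")  # r = 0.30
```

In the ordinary additive setting r is forced to zero; allowing r > 0 is what lets the formalism carry “don’t know” explicitly instead of smearing ignorance across p and q.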
And Keynes:
‘By “uncertain” knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know. Nevertheless, the necessity for action and for decision compels us as practical men to do our best to overlook this awkward fact and to behave exactly as we should if we had behind us a good Benthamite calculation of a series of prospective advantages and disadvantages, each multiplied by its appropriate probability, waiting to be summed.’ (Excerpted from “The General Theory of Employment”, Quarterly Journal of Economics, February 1937, pages 209-223.)
Thanks for this. This is why I think we are in the realm of scenario uncertainty and not statistical/probabilistic uncertainty, and why I have been recommending going to a scenario-based possibility distribution.
More here
http://www.newton.ac.uk/programmes/CLP/clpw04p.html
http://www.newton.ac.uk/programmes/CLP/seminars/2010120711401.html
Stephenson is hilarious and well worth the time
http://www.newton.ac.uk/programmes/CLP/seminars/2010120714001.html
weighting
http://www.newton.ac.uk/programmes/CLP/seminars/2010120716301.html
And watch all the videos on decision support
“Stephenson is hilarious and well worth the time”
Finished earlier. Definitely thumbs up. Something was fun for once. That must have been a hell of a workshop to attend.
Despite their increasing complexity and seductive realism, it is important to remember that climate models are not the real world. Climate models are numerical approximations to fluid dynamical equations forced by parameterisations of physical and unresolved sub-grid scale processes. Climate models are inadequate in a rich diversity of ways, but it is the hope that these physically motivated models can still inform us about various aspects of future observable climate. A major challenge is how we should use climate models to construct credible probabilistic forecasts of future climate. Because climate models do not themselves produce probabilities, an additional level of explicit probabilistic modelling is necessary to achieve this.
To make reliable probabilistic predictions of future climate, we need probability models based on credible and defensible assumptions. If climate is considered to be a probability distribution of all observable features of weather, then one needs well-specified probability models to define climate from the single realisation of weather provided by nature. In an increasingly non-stationary climate forced by accelerating rates of global warming, the time average over a fixed period provides only an incomplete description perhaps better suited to a more stationary pre-industrial world.
I like those paragraphs. However, I don’t think it has ever been shown that the pre-industrial world was either “stationary” or “more stationary”, or that the current climate is “forced by accelerating rates of global warming.” (That is the issue that has to be resolved!) Whatever else the last 15 years might be, the data do not display an “accelerating [rate] of global warming”. At most, warming might “have accelerated” for the period of 1978-1998, as it had done earlier in the post-LIA epoch.
Indeed, this unsubstantiated claim is enough to write them off as just another exercise in AGW science:
“In an increasingly non-stationary climate forced by accelerating rates of global warming, the time average over a fixed period provides only an incomplete description perhaps better suited to a more stationary pre-industrial world.”
Who knows, but IMO you’re jumping the gun. However, Stephenson does write “a more stationary pre-industrial world” and did not write “the more stationary pre-industrial world”. There is a difference, and based on the content and tone of his video summary I think his choice here is quite deliberate.
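For readers who want the non-stationarity point in the quoted passage made concrete, here is a minimal Python sketch on synthetic series; the trend size, noise level and 30-year window are arbitrary choices of mine, not anything taken from the workshop material:

```python
# Hedged sketch: a single fixed-period time average summarises a stationary
# record reasonably well, but hides the drift inside a trending (non-stationary)
# one.  Both series are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
years = 30
noise = rng.normal(0.0, 0.2, size=years)

stationary = noise                          # no underlying trend
trending = 0.02 * np.arange(years) + noise  # hypothetical 0.02 C/yr trend

for name, series in (("stationary", stationary), ("trending", trending)):
    mean = series.mean()
    shift = series[-10:].mean() - series[:10].mean()  # change across the window
    print(f"{name:10s}: 30-yr mean = {mean:+.2f}, first-to-last decade shift = {shift:+.2f}")
```

For the trending series the single 30-year mean conceals a systematic shift of several tenths of a degree within the averaging window itself, which is the sense in which a fixed-period average is only an ‘incomplete description’.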
I believe Mosher et al might need to see some real atmospheric physics. Professionals who actually teach and understand the processes, have published widely and are respected physicists:
http://www.youtube.com/watch?v=2ROw_cDKwc0&feature=player_embedded
The attempt to model climate for humans is infantile, as major climate changes occur over 100,000 years or more. I can assure you no one here will see dramatic “climate change”; neither will your kids or theirs. LOL
he’s not credible
Oops! Not the best reply. Doesn’t help you. You want to carry the standard, then you need to have standards. Burden of leadership and all that crap. Thrust! parry! riposte! Or something like that.
Seems to be a lot of that going around.
http://hockeyschtick.blogspot.com/2013/07/swedish-scientist-replicates-dr-murry.html
If you reject models as a decision making tool, you are left with other tools like politics, which are even less reliable.
‘In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.’ http://www.ipcc.ch/ipccreports/tar/wg1/505.htm
We get back to probability models. But perhaps the truth is that we are not going to find the keys under this particular lamppost.
‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’
http://www.pnas.org/content/104/21/8709.long
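The TAR passage above reduces climate change to ‘the discernment of significant differences in the statistics of such ensembles’. As a minimal sketch of what that can look like in practice, here is a Python example on purely synthetic ensemble output; the member counts, spreads, offsets and the choice of Welch’s t-test are my own illustrative assumptions, not anything prescribed by the report:

```python
# Hedged sketch: compare the statistics of a "control" and a "forced" ensemble
# of model solutions.  All numbers are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 20 members each, e.g. end-of-century global-mean temperature anomalies (deg C).
control = rng.normal(loc=0.0, scale=0.4, size=20)
forced = rng.normal(loc=0.8, scale=0.5, size=20)

print(f"control: mean {control.mean():+.2f}, spread {control.std(ddof=1):.2f}")
print(f"forced : mean {forced.mean():+.2f}, spread {forced.std(ddof=1):.2f}")

# Welch's t-test: is the shift in the ensemble mean distinguishable from the
# ensemble spread?  (One of many possible tests one could choose.)
t_stat, p_val = stats.ttest_ind(forced, control, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.3g}")
```

None of this says anything about whether the ensemble spread is a trustworthy measure of real-world uncertainty; it only shows the mechanics of comparing ensemble statistics.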
On every dimension where the models differ, averaging assumes those differences are commutative and equivalent. But there is no average of an apple and an orange.
The four humors of Hippocratic medicine (black bile, yellow bile, phlegm and blood) served doctors from around 400 BC until about 1860.
It was a quite simple model used to explain a complex system, human health. Surely the reductionists, who rejected this model, inflicted an enormous hardship on both medical science and humanity, because they have replaced a simple and understandable model with ever more layers of complexity and uncertainty. In the old days cancer therapy was a doddle: find out which humor was out of balance and correct it; now we are mucking around with personalized medicine and treating each patient’s tumor as a single, unique, disease.
We have so lost the simple, and the means to project from a few simple observations.
So we should scrap all this modern medicine and take a leaf out of the climate science manual: measure patients’ temperature, take people’s money, offer a treatment that is both unproven and inefficient, and claim that whatever outcome is observed is within the CI.
This is an interesting and informative thread. For those of us who don’t do the modelling, it is of some importance to try to gain an appreciation of what is being done. As an end-user, however, what counts is what comes out – the spaghetti graphs. I still have problems getting past the data. The proxies seem to be a mess, with wide variations within, and different interpretations of even the same results. I watch with interest as folks “smooth” derived temperatures over 50-year or longer periods. Then we move up to historical data based on recorded literature. Then early measured data is manipulated, often changing temperatures by several degrees C. Then modern temperatures are “adjusted”. Then all of this is shoved into massive computers, subjected to high-powered statistical analysis, and out come temperature “projections” to 0.01C 100 years from now. It’s all fun, but hardly something to base policy on. Are bigger/better/faster computers really going to change anything?
Costs will go up, extramural expectations on the modeling will irrationally increase, more ‘careers’ will be made… probably some other stuff too.
Models that don’t do clouds
churning out
spaghetti graphs.
Guvuhmints that won’t do productivity
churning out
regulations and new taxes based on
spaghetti graphs.
Models that don’t do clouds churning out spaghetti graphs.
Oh I see what you mean.
http://www.ecmwf.int/newsevents/calendar/miscellaneous/EPS_Dec2012/Spaghetti-plot-pesto-sauce-map.png
Invalid assumptions using inadequate models mean poor results. The real problem is that these models, with their assumption-led conclusions, are being used for policy as if their output was actually meaningful and independent. In fact climate scientists have gravely misled policymakers as to the adequacy of these projections. The models are not fit for policy. And without the models there is no alarm. We revert to a fairly steady and probably natural rise in temperature of 0.5K per 100 years. Big deal!
Many climate scientists seem to regard their failed models as more reliable than observations of reality. They cling to their original assumptions about a climate dominated by atmospheric CO2 concentration and hope that the reality is wrong and soon things will revert back to accelerating global warming as predicted by their models.
The sceptics are right. The scientists have abandoned science and have adopted a religion that worships CO2.
Aren’t GCMs essentially simulations? They attempt to “play out” in virtual space what one possible scenario from the bottom up might look like.
To me, a “model” requires leaving things out. For example, CAPM in finance is such a model – it values all investments based on two parameters. Further, it explains why it makes sense to view an individual investor, and the market as a whole, as defined by these two parameters.
One could of course simulate the capital markets from the bottom up. Consumer preferences, population growth changing demand, technology and wages altering supply, etc… Certain aspects of these simulations might look like the real thing – “Oh look, we get a business cycle in the ECON5 runs.”
The reason nobody does this is because the output would have no significance to decision making. It could be made to produce any desired forecast or advance any possible policy, all by changing assumptions and starting points within the cushion of where they can’t be precisely measured and thus can’t be invalidated.
The segment “Statistical frameworks in published methods using ensembles to quantify uncertainty may assume (perhaps implicitly): a. that each ensemble member is sampled from a distribution centered around the truth (‘truth plus error’ view). In this case, perfect independent models in an ensemble would be random draws from a distribution centered on observations.” reveals that the writer has made the intellectual transition from modeling a problem in physics to one in mathematics by his use of “assume”.
But the underlying physics is nowhere near correct. Imagine researchers noticing that Newtonian mechanics gave (small) errors for the orbit of Mercury; rather than use these errors to motivate the discovery of relativity, they obscured the errors with statistics.
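To see what the ‘truth plus error’ assumption quoted above buys when it holds, here is a minimal Python sketch with entirely synthetic numbers; the ‘truth’ value, error spread and member counts are my own placeholders, not anything from the papers under discussion:

```python
# Hedged sketch of the "truth plus error" view: each ensemble member is treated
# as an independent random draw from a distribution centred on the truth, so the
# multi-model mean should converge on the truth as members are added.
import numpy as np

rng = np.random.default_rng(42)
truth = 3.0  # hypothetical true value of some climate quantity

for n_members in (5, 20, 100):
    members = truth + rng.normal(0.0, 1.0, size=n_members)  # independent, unbiased errors
    print(f"{n_members:4d} members: |ensemble mean - truth| = {abs(members.mean() - truth):.3f}")

# Under this assumption the error of the mean shrinks roughly like 1/sqrt(n).
# If the members share biases (errors neither independent nor centred on the
# truth), adding members does not buy this convergence.
```

Whether real multi-model ensembles come anywhere near satisfying that assumption is exactly what the comment above disputes.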
Steven Mosher
“Operationally it is a collection of our best physical understandings of the climate system”
Once upon a time, our best understanding of cows drying up was that it could be fixed by burning old women at the stake…
Science fireworks or…
http://abcnews.go.com/Technology/god-particle-higgs-boson-year/story?id=19574423#.UdXRbawtXNk
more waivey gravy?
Credibility-deficient climate models are just that: bucketfuls of crap / fodder for the zombies.
IPCC:
“Uncertainty in the response of an AOGCM arises from the effects of internal variability…”
Actually, the uncertainty is not variability. The uncertainty is our infantile, totally rudimentary level of expertise in this arena. We’ve been on this gig for HOW long? Maybe really 40 years (200 if you want to include the frontiersmen who were still beginning to try to learn basic processes).
We are SO ignorant. SO ignorant. We still observe basic stuff, like the current thinking about the Antarctic Circumpolar Circulation, and immediately posture that NOW we know what causes such and such in the formation of the Antarctic ice cap. Balderdash. It is one of maybe 50,000 pieces of evidence or processes we will need to know to understand the climate in a predictable, falsifiable way.
In time, if we have to, we may develop better maths, but the variability IS something that we can – and WILL – understand someday, including all the variables.
But that day is not THIS day. We are entering Kindergarten and posturing as if we were receiving our doctorates from Stanford or Oxford. We know next to NOTHING.
Don’t blame it on “variability”, no matter if you are the IPCC or Steve McIntyre or Kevin Trenberth or Anthony Watts – or Judith Curry.
The problem is that we are NEW to the subject and at this stage we have to collect data and other evidence. Our ignorance is legion, no matter HOW much we may think otherwise.*** I liken it to the UK butterfly/insect collectors of the 18th and 19th centuries who helped biology progress, simply by tramping around and collecting evidence: It’s simply too early to think we can understand it beyond simplified concepts. Trying to expand our concepts is a nice pastime – but ultimately all our current guesses (yes, guesses) now, will turn out to be juvenile. As in ill-formed – and as in ill-INformed.
I think Judith is doing a HELLUVA service here, pointing out how little we know and providing a forum for all of us to beat up on each other in our mutual ignorance. We are also spreading what little knowledge we have, and that can’t be a bad thing (hopefully!).
:-)
***Which is why it is monumentally insane for people to in any way make predictions about what the climate will be out beyond even a handful of years. (Ask the UK Met Office about THAT.) Do they really think SO much of their egotistical brains that they think they know any of this? To claim to know such things is the biggest fraud in the history of western science.
The Pindyck paper I mentioned is available here:
Link for the Pindyck paper at NBER (National Bureau for Economic Research):
http://www.nber.org/papers/w19244