Generating regional scenarios of climate change

by Judith Curry

This post is about the practical aspects of generating regional scenarios of climate variability and change for the 21st century.

The challenges of generating useful scenarios of climate variability/change for the 21st century were discussed in my presentation at the UK-US Workshop [link].  Some excerpts from my previous post:

At timescales beyond a season, available ensembles of climate models do not provide the basis for probabilistic predictions of regional climate change. Given the uncertainties, the best that can be hoped for is scenarios of future change that bound the actual change, with some sense of the likelihood of the individual scenarios. 

Scenarios are provocative and plausible accounts of how the future might unfold. The purpose is not to identify the most likely future, but to create a map of uncertainty of the forces driving us toward the unknown future. Scenarios help decision makers order and frame their thinking about the long-term while providing them with the tools and confidence to take action in the short-term.

Are GCMs the best tool? – GCMs may not be the best tool, and are certainly not the only tool, for generating scenarios of future regional climate change. Current GCMs are inadequate for simulating natural internal variability on multidecadal time scales. Computational expense precludes adequate ensemble size. GCMs currently have little skill in simulating regional climate variations. Dynamical & statistical downscaling add little value, beyond accounting for local effects on surface variables. Further, the CMIP5 simulations only explore various scenarios of emissions.

The challenge for identifying an upper bound for future scenarios is to identify the possible and plausible worst case scenarios. What scenarios would be genuinely catastrophic? What are possible/plausible time scales for the scenarios? Can we ‘falsify’ these scenarios for the timescale of interest based upon our background knowledge of natural plus anthropogenic climate change?

New project

With this context, my company Climate Forecast Applications Network (CFAN) has signed a new contract to develop regional climate change scenarios as part of a large, complex project.  Here is a [link] describing generally how CFAN approaches developing such scenarios.

Without disclosing the target region or the client or the specific impact issue being considered (at some point, presumably there will be a publicly issued report on the project), I will describe the relevant aspects of the project that frame my part of the project.

My team is one of four teams involved in the generation of future scenarios for the region. Our role is to use the CMIP5 simulations, observations, and other model generated scenarios (developed by other teams) to develop scenarios of high-resolution surface forcing (temperature and precipitation), that will be used to force a detailed land surface process model.

There are two overall aspects of the project that I find appealing:

  1.  The creation of a broader range of future scenarios, beyond what is provided by the CMIP simulations
  2.  The importance of a careful uncertainty analysis.

Overview of proposed strategy

In context of the caveats provided in my UK-US Workshop Presentation, how can we approach developing a broad range of future scenarios for this region?

Previous analyses of CMIP5 simulations have identified several climate models that do a credible job of simulating the current climate of this region (particularly in terms of overall atmospheric circulation patterns and annual cycle of precipitation).  Nevertheless, even the best models show biases in both temperature and precipitation relative to observations.

This suggests starting with a really good historical baseline period  (say the last 30 years), using a global reanalysis product as the base.  I prefer to use a reanalysis product as the base rather than gridded observational datasets because the reanalysis product provides a dynamically consistent gridded state estimation that includes assimilation of available surface and satellite observations.  That said, the reanalysis products can have biases in data sparse regions particularly in the presence of topography (which is the case for the project region).  Hence satellite data and surface data can be used to adjust for biases in the reanalyses.
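The simplest form of such a bias adjustment is a time-mean bias removal at co-located points. Here is a minimal sketch; the function name and toy numbers are hypothetical, and a real implementation would also handle elevation, seasonality, and spatial structure:

```python
import numpy as np

def mean_bias_correct(reanalysis, observations):
    """Remove the time-mean bias of a reanalysis series relative to
    co-located surface observations (both 1-D arrays over time)."""
    bias = np.mean(reanalysis - observations)
    return reanalysis - bias

# toy series: the reanalysis runs 1.5 K warm on average at this station
obs = np.array([10.0, 12.0, 11.0, 9.0])
rea = obs + 1.5
corrected = mean_bias_correct(rea, obs)   # time mean now matches obs
```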

The second challenge for the baseline period is downscaling to a 1 km grid.  This sounds like a crazy thing to do, but the land surface model requires such inputs, and there is topography in the region that influences both temperature and precipitation. We propose to use a statistical downscaling approach based on a previous study that used an inverse modeling approach.  The resulting high-resolution historical dataset will be used to calibrate and test the land surface model.  Further, the client requires daily forcing (at 1 km) for the land surface model.  No, I am not going to defend the need for 1 km and daily resolution for this, but that requirement is in the project terms of reference.
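For the temperature part, the crudest topographic downscaling is a fixed lapse-rate adjustment from the coarse-cell elevation to each fine-grid cell. A hedged sketch follows; the lapse-rate value and arrays are illustrative only, and this is not the inverse-modeling method of the study mentioned above:

```python
import numpy as np

STD_LAPSE = -6.5e-3  # K per metre; standard-atmosphere value, assumed fixed

def downscale_temperature(t_coarse, z_coarse, z_fine, lapse=STD_LAPSE):
    """Shift a coarse-cell temperature to fine-grid elevations with a
    constant lapse rate -- the crudest form of topographic downscaling."""
    return t_coarse + lapse * (z_fine - z_coarse)

# coarse cell at 500 m reporting 15 C; fine cells from valley to ridge
z_fine = np.array([200.0, 500.0, 1500.0])
t_fine = downscale_temperature(15.0, 500.0, z_fine)
# the 1500 m ridge point comes out 6.5 K cooler than the coarse cell
```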

The historical baseline dataset can be used in two ways in developing the 21st century scenarios from climate model simulations:

  1. To calibrate the climate model (remove bias) by comparing the observed baseline period with the historical climate model simulations, and then apply this same bias correction to 21st century simulations;
  2. Use as a baseline to which changes in the 21st century relative to the model baseline period are applied (add for temperature, multiply for precipitation).
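The delta-change method of option 2 is simple enough to state in a few lines. A sketch with toy numbers, additive for temperature and multiplicative for precipitation as described above:

```python
import numpy as np

def delta_change(baseline_obs, model_hist_mean, model_future_mean, kind):
    """Perturb an observed baseline series by the model-projected change:
    an additive shift for temperature, a ratio for precipitation."""
    if kind == "temperature":
        return baseline_obs + (model_future_mean - model_hist_mean)
    if kind == "precipitation":
        return baseline_obs * (model_future_mean / model_hist_mean)
    raise ValueError(f"unknown variable kind: {kind}")

obs_t = np.array([10.0, 12.0, 11.0])          # observed baseline temps
obs_p = np.array([2.0, 0.0, 5.0])             # observed baseline precip
future_t = delta_change(obs_t, 14.0, 16.0, "temperature")    # +2 K shift
future_p = delta_change(obs_p, 3.0, 2.4, "precipitation")    # 20% drier
```

One reason the ratio form is conventional for precipitation: multiplying preserves dry days (zeros stay zero) and cannot produce negative rainfall.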

For a variety of ancillary reasons, I prefer strategy #2, since it requires that only the observed baseline be downscaled, and it is the only approach that I think will work for more creatively imagined scenarios.

At first blush, it seems that both methods should give the same result in the end. The complication arises from chaos and decadal scale internal variability in the climate models, which was the topic of a recent post.

And then there is the issue of which, and how many, CMIP5 models to use, but that is a topic for another post.

Implications of chaos and internal variability

In CMIP5, some of the modeling groups provided an ensemble of simulations for the historical period and for the 21st century (varying the initial conditions).  The ensemble size was not large (nominally 5 members), when compared to the Grand Ensemble of 30 members described in last week’s post.

Re strategy #1. So, if you are trying to identify bias in the climate models relative to observations (say the past 30 years), how would you do this?  Average the 5 ensemble members together, and calculate the bias?  Calculate the bias of each ensemble member?  Look for the single ensemble member whose multi-decadal variability is most in phase with the observations?  Use the decadal CMIP5 simulations that initialize the ocean?  The reason this choice matters is that this bias correction will be applied to the 21st century simulations, and the bias corrections are useless if you are merely correcting for multidecadal variability that is out of phase with the observations.
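The first two of these choices are related but not interchangeable, and a toy sketch (synthetic numbers only) shows why: the bias of the ensemble mean equals the average of the per-member biases, yet any single member's bias also carries its own phase of internal variability.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = np.zeros(30)                          # stand-in 30-year observed series
# 5 members sharing a +0.5 K systematic bias plus independent variability
members = 0.5 + rng.normal(0.0, 1.0, size=(5, 30))

bias_of_ensemble_mean = members.mean(axis=0).mean() - obs.mean()
bias_per_member = members.mean(axis=1) - obs.mean()
# identical on average, but individual members scatter around it
member_spread = bias_per_member.max() - bias_per_member.min()
```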

So, is strategy #2 any better?  Strategy #2 bypasses the need to calibrate the climate models against observations.  But the challenge is then shifted to how to calculate the delta changes, and then use them to create a new scenario of surface forcing  that captures the spatio-temporal weather variability that is desired from a daily forcing data set.

How should we proceed with calculating a delta change from 5 historical simulations (say the last 10 years or so) and 5 simulations of the 21st century?  If you average the 5 historical simulations and average the 5 21st century simulations and subtract, you will end up with averaged out mush that has no meaningful weather or interannual or decadal variability. How about subtracting the average of the 5 historical simulations from each of the 21st century simulations?  Or, should we calculate a delta using each possible combination from the groups of historical and 21st century simulations? Then, how to assess uncertainty from an insufficiently large ensemble?
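The last option, taking every pairing, at least yields more than one delta sample to look at. A sketch with synthetic member means (25 deltas from 5 × 5 members, though these samples are of course not independent):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
hist = rng.normal(14.0, 0.3, size=5)   # 5 historical-period member means
fut = rng.normal(16.0, 0.3, size=5)    # 5 future-period member means

# one delta per (future, historical) pairing: 25 samples instead of 1
deltas = np.array([f - h for f, h in product(fut, hist)])
delta_spread = deltas.std()            # crude handle on sampling uncertainty
```

Note that the mean over all pairings is exactly the difference of the two ensemble means, so the gain from this approach is a spread estimate, not a different central value.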

Let’s assume that we solve the above problem in some satisfactory way.  How do we then incorporate the delta changes with the baseline observational field to create a credible daily, high-resolution surface forcing field? Perhaps the deltas should be computed for monthly averaged (or even regionally averaged) values, and then the sub-monthly variability of the 21st century simulation can be preserved?
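The monthly-delta idea can be sketched directly: compute one delta per calendar month, apply it to every observed day in that month, and the observed day-to-day variability passes through untouched. A toy example with additive, temperature-style deltas:

```python
import numpy as np

def apply_monthly_deltas(daily_obs, month_index, monthly_delta):
    """Add a month-specific delta to each observed day, preserving the
    sub-monthly (weather) variability of the baseline series."""
    return daily_obs + monthly_delta[month_index]

daily_obs = np.array([1.0, 2.0, 3.0, 4.0])   # four toy days
month_index = np.array([0, 0, 1, 1])         # first two days Jan, rest Feb
monthly_delta = np.array([2.0, -1.0])        # Jan warms 2 K, Feb cools 1 K
future_daily = apply_monthly_deltas(daily_obs, month_index, monthly_delta)
# within each month, day-to-day differences are exactly those observed
```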


Well, I obviously don’t have any conclusions at this point; the above summarizes the issues that we are grappling with as we set this up.  You may ask why we are even doing this project, given the magnitude of the challenges.  Well, we would like business from this particular client, the project is scientifically interesting and important, and I would like to establish some sort of credible procedure for developing 21st century scenarios of climate variability and a methodology for assessing uncertainty.  There is a cottage industry of people using one CMIP5 simulation, dynamically downscaling it with a mesoscale model, and then presenting the results to decision makers without any uncertainty assessment.  Even Michael Mann agrees that this doesn’t make sense [link]. Fortunately, the client for this project appreciates the uncertainty issue, and is prepared to accept as an outcome of this project a 10% change with 200% uncertainty.

I have been crazy busy the last few months, so I think I will start doing more of this sort of post, about things I am directly working on (including follow-on posts related to this project).

I look forward to your comments and suggestions on this.

Moderation note:  This is a technical thread, please don’t bother to comment unless you have some technical input that you think I would be interested in.  General discussion about this can be conducted on Kip Hansen’s thread from last week.

98 responses to “Generating regional scenarios of climate change”

  2. Give yourself a chance. Drop CMIP5. Work top-down, like you did with Marcia W. Trying to work bottom-up on a 1km grid or any grid is an exercise in futility. Check what I say, with RGBrown.

    • Seems like a great idea to me, but my intuition suggests getting it to the stage of making detailed predictions would be a really, really big project.

      Just sayin’.

  3. In Part V of the Climate of Doom Natural Variation and Chaos thread (“Natural Variability and Chaos – Five – Why Should Observations match Models?”), he makes the following point, which I had considered but never tried to articulate:

    So here’s my question:
    If model simulations give us probabilistic forecasts of future climate, why are climate model simulations “compared” with the average of the last few years current “weather” – and those that don’t match up well are rejected or devalued?

    It seems like an obvious thing to do, of course. But current averaged weather might be in the top 10% or the bottom 10% of probabilities. We have no way of knowing.

    Might there be some way of using a version/variation of the Monte Carlo Method to evaluate how robust the predicted probabilities are to changes in the assumed “climate average” for the region? (And does that question even make sense? Or does your approach already handle it and I just didn’t see it?)

    • AK – There’s a way of answering your question, as per my comment below. Take the model run that successfully matches the last few years. Then do the same run with multiple initial states altered in the same direction by unmeasurably small amounts (eg. temperatures increased by say 0.01 deg C), and again but with opposite sign. Do they still match the measured? If not, the model is useless.

      • Yes, there is a likelihood that even a good match is just a lucky draw. These models are so processor intensive that it seems no one is able to make a sufficient number of runs to properly evaluate the output.

        It is not even possible to do short runs since they need a few hundred years to ‘spin up’ even if you only want to examine 30y of output.

        There is a massive amount of calculation effort which goes into creating the detail from ‘basic physics’ models when the big picture depends heavily on assumed ‘guestimates’ of poorly constrained parameters.

        It’s rather like investing effort in calculating the result to 10 sig. figs when you don’t even know if the first sig. fig. is correct. Really, we are still arguing about whether 2xCO2 produces 1.5 or 4.5 degrees of warming! And that all depends upon the choice of the ‘parametrisations’.

        Having said that, assuming this effort is to go ahead, it should be possible to eliminate some of the more erroneous models.

        Despite being a deeply flawed paper that drew conclusions contrary to what their work actually showed, Marotzke & Forster 2015 may be informative.

        Their fig 3b, based on 60y sliding trends, shows the models dividing into two clearly separate groups in the most recent 60 years (the last years of trend data on the graph).

        Despite some communication on their methods, they did not respond to my request for the 60y trend values of individual models at the end of the graph. This would have enabled plotting a cross-section of this 3D graph, and would have shown that it is clearly bimodal: there are two groups. Since their abstract stated the opposite, they apparently did not want to cooperate.

        Whether Dr Curry would have more success with such a request, I don’t know but it may help in the triage.

        I reworked the idea of trend analysis using less distorting filters and reproduced the split into two groups even using shorter filters (whereas M&F sufficiently mangled the data that the separation was not found using their 15y sliding averages).

        The high and low sensitivity models used are listed in notes at the end.

        Greg Goodman.

      • The CMIP5 ensemble mean shows that collectively they are too sensitive to volcanic aerosol forcing. This is part and parcel of being too sensitive to radiative forcing generally, hence the over-run in the post-2000 period when there were no major stratospheric eruptions.

      • That graph also shows how they failed to reproduce the early 20th c. warming trend. Being so badly flawed across the board again raises the question of whether it’s worth investing time in using their output on any scale.

    • This is exactly the issue that I am grappling with

      • OK, please forgive me if I get some of the jargon a little off.

        In CMIP5, some of the modeling groups provided an ensemble of simulations for the historical period and for the 21st century (varying the initial conditions). The ensemble size was not large (nominally 5 members), when compared to the Grand Ensemble of 30 members described in last week’s post.

        So for models that provide a simulation, could you build a PDF of simulated historical conditions from the instances then locate the actual conditions within that?

        So if the actual conditions fall somewhere out on the “tail” (or n-dimensional equivalent), this means that, if the simulations match reality, the actual historical instance itself was a low-probability event.

        So then create a set of possible PDF’s for the actual conditions, wherein the actual historical instance falls at various distances and directions from the center. This is where some analog of Monte Carlo techniques might be useful. For each of those possible PDF’s (call them “cases”) you compare it with the PDF built from the model historical simulations, and weight the model projection outputs by their similarity.

        This leaves the issue of how you compare different models. It would be easiest if the sets of baseline simulations from different models produce similarly shaped PDF’s, since then your set of “cases” could be used among all models. Or you could start by assuming the PDF’s have similar shapes, and use the assumed shapes to simplify generating the PDF of simulated historical conditions for each model. (But I wonder if that assumption might not introduce substantial bias.)

        Perhaps you could derive a standardized shape for the PDF based on a statistical analysis of local variation correlated with global conditions? If there’s a good idea what the shape is, then all you need to “randomize” is the location of the actual historical instance within that shaped PDF. Then compare the result for each case with the PDF derived from the 5 (or whatever) historical model runs, and use the result to generate a weighting for the model’s simulations.

        If you have an idea what the shape of your PDF should be, the models’ simulations could each also be used to determine a simulation PDF, weight it by the results of the step above, and combine it with the weighted PDF’s from the rest of the models.

        The end result would be a set of “cases” based on different assumptions regarding the PDF in which the actual (observed) historical instance occurred. Each case would produce an output simulation PDF based on multiple runs of multiple models weighted by similarity of the model baseline PDF’s to the “case” PDF.

        Comparing the various “case” outputs would allow some estimation of their robustness to different assumptions regarding the PDF for historical conditions.

        I’m not sure how useful the above will be to you, but it’s the sort of thing I would try as a “straw-man” approach with a business manager, more to identify how it diverges from the desired result than a solid design proposal.

        Of course, since I’m guessing you have more design and programming talent available than I could bring to a problem, its major value, if any, would be stimulating ideas.

        Hope this helps.

      • thx, the challenge is that there are not enough simulations for a true pdf

      • Wiki has an article: Kernel density estimation, don’t know if it’s relevant (I’m playing a bit out my league here) but one of their examples uses just 6 instances.

        I don’t know how many dimensions in your PDF (space), but I found Similarity Learning for High-Dimensional Sparse Data by Kuan Liu, Aurelien Bellet, Fei Sha. Would it be feasible to translate from a similarity network to a PDF? They also have some ref’s to lower dimensionality techniques.

        The first reaction I had was to try a bunch of sample PDF’s, and for each measure the overall probability that your sample set would fall into it, then combine them weighted by that probability. Of course, I don’t know whether combining them wouldn’t be a bigger problem.

        My intuition tells me that doing this from scratch should be sort of re-inventing the wheel. That it isn’t (if I understand correctly) just reinforces the points about GCM’s not really being a good tool.

        I wonder whether there are useful tools (for creating a PDF with sparse data points) in other areas of fluid dynamics research?
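For what a kernel density estimate from only five points actually looks like, here is a minimal self-contained sketch: a hand-rolled 1-D Gaussian KDE with Silverman's rule-of-thumb bandwidth, applied to made-up ensemble-member values.

```python
import numpy as np

def gaussian_kde_1d(samples, x, bandwidth=None):
    """Minimal 1-D Gaussian kernel density estimate; Silverman's rule
    picks the bandwidth when none is given."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-0.2)
    # density = average of Gaussian bumps centred on each sample
    z = (x[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)

members = np.array([13.8, 14.1, 14.3, 14.6, 15.0])  # 5 hypothetical values
x = np.linspace(12.0, 17.0, 1001)
pdf = gaussian_kde_1d(members, x)

# rough position of an 'observed' value of 15.4 within the density
dx = x[1] - x[0]
total = pdf.sum() * dx                      # should be close to 1
obs_cdf = pdf[x <= 15.4].sum() * dx         # obs sits in the upper tail
```

Whether such a smoothed density means anything with so few members is exactly the open question in the thread; the sketch only shows that the mechanics are cheap.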

    • The highly correlated structure of model and numerical errors….just don’t see this probabilistic assumption ever going anywhere…

  4. Dr. Curry,

    I enjoyed reading this article.

    >At first blush, it seems that both methods should give the same result in the end. The complication arises from chaos and decadal scale internal variability in the climate models, which was the topic of a recent post.

    I may be missing a nuance (or completely missing something significant), but it occurs to me that the baseline for the model bias determination ought to match the planning time horizon of your client. If they’re only interested in a decade or two, then the strategy of the CMIP5 decadal experiments makes a good deal of sense to me. Averaging a number of runs in the ensemble still smears interannual variability, but it seems to me that would still leave a decent indication of decadal trend bias. Running stats over the individual members might then give a useful indication of trend uncertainty, ranges, frequency of extremes, etc.

  5. Very technical. Seems like statistical downscaling has to make a lot more assumptions than a high-resolution downscaling model would. It is hard to see how that works in areas where you don’t have a lot of data for the statistics because you can’t interpolate very well in complex terrain. Is the statistical method at least calibrated with a high-resolution model?

  6. Dr Curry, I think the four types of constraints I found in the annual global mean data (albedo, cloud, energy budget, flux pattern) will severely limit the number of possible outcome scenarios. I propose to build these constraints into the GCMs, and look how the system responds with seasonal and regional redistribution.

    • Test it, before relying on it. Bear in mind that constraints reduce the volatility of the model, but they don’t give the model any more value. They simply constrain it to match your preconceived notions. Before you do the calcs on a grid system with squillions of iterations, try simply applying the preconceived notions to the climate on something little more complex than the back of an envelope. Then do three GCM runs and compare results:

      (1) the run that you think gets everything right;
      (2) the same run with multiple initial states altered in the same direction by unmeasurably small amounts (e.g. temperatures increased by say 0.01 deg C);
      (3) as (2) but with opposite sign.

      If all three runs match your back of envelope, then it’s reasonable to drop the GCM and use the back of envelope. If the 3 GCM runs are badly different, then you can safely drop the GCM because it can never work – whether the back of envelope is worth anything is another matter. If the 3 GCM runs are very close to each other – i.e. differing by unmeasurably small amounts just like the initial states – and are different to the back of envelope, then I am wrong and please let me know. Oh, and if you decide to run the GCM over a longer period, you need to do the same tests again for the longer period.
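The failure mode this test probes is easy to demonstrate with any chaotic system. A toy sketch using the logistic map as a stand-in for a model run; the map and the nudge size are of course only illustrative:

```python
def toy_run(x0, steps=200, r=3.9):
    """Iterate the logistic map: a one-line chaotic 'model run'."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

base = toy_run(0.500000)
plus = toy_run(0.500001)    # 'unmeasurably small' nudge, positive sign
minus = toy_run(0.499999)   # same nudge, opposite sign
# after enough steps the perturbed runs diverge far beyond the nudge,
# so a match to observations by any single run is not evidence of skill
spread = max(abs(base - plus), abs(base - minus))
```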

  7. Dr Curry,

    Along the lines suggested by Mike Jonas, and maybe include a double double blind test.

    Give an independent observer data from 10 or 20 or 30 years ago, going back your choice of years, and let the observer produce a naive persistence forecast for today. Keep the location disguised – and maybe use your testers in different countries from where the data originates.

    Use the same data for your models, and then check if the models –

    a) reflect current reality, and

    b) significantly outperform the naive “blind” forecast.

    Even trying to define useful outcomes might be difficult. Are you trying to forecast winds, rainfall, drought? Would the timing of the event be more important to your client than the quantum or intensity?

    I hope that the client has a clear idea of what they want, but I have found in the past that the client might expect you to give them what they need, rather than what they ask for. It might be hard to pin them down!

    I wish you the best of luck. An intimidating task at the best of times.


  8. Stumbled across this one on model uncertainty yesterday:
    (check the link to the video)

    He seems to have a point, but I’m not sufficiently knowledgeable to really check it. It depends on whether the models were meant to simulate clouds, or whether clouds were used to “tune” the model and thus have to be seen as complementary to the models and not as their output. What the last possibility might say about the models would then be another interesting question.

    But I believe in principle it has to be done this way: specifying tolerances for the uncertainty of the input parameters and then working out the spread of the variations.
    But of course this would be too expensive, leading to umpteen model runs.
    And perhaps better not to know about it anyway.

    • On further reflection, I believe that in principle it doesn’t matter how you measure uncertainty (I guess this is what you called bias for individual runs), especially if the aforementioned kind of baseline is unknown. But I would try to specify the conditionality of that measure and make it transparent to the customer.
      Perhaps take the measure the customer prefers (or is most likely to find useful) and explain the conditionality (the uncertainty of the uncertainty measure, so to speak).

  9. Start with a region for which we have solid, raw climatic data – temperature, precipitation, wind, cloud and anything else – dating back say 100 years.
    A city will do fine, as you’ll not have to faff around with UHI corrections.
    Choose your model, stick the data into it and run.
    Does the output look like reality?
    If not, bin it.
    Find a model that will work (Good luck)
    Then repeat with another region and the same model.
    Do you get a reasonable replication with that region’s climate?
    If you have, congratulations, you’ve a working model.
    My money’s on you’ll blow through your funding before you find a working model.

    • This might not be a bad approach if one split the process into looking for single forcings at a time – no one model is going to be good at everything, but certainly will be better at something. If there is enough diversity in how the models are tuned, and therefore which forcings they emulate best, then pick each forcing separately and try a reanalysis on the resulting data to match with reality.

  10. Berényi Péter

    The broadest possible regional scenario is a hemispheric average. As annual integrated absorbed shortwave radiation is measured to be the same for the two hemispheres (in spite of the huge difference between their clear sky albedos), and this feature is replicated by no computational climate model, generating realistic regional scenarios from output of said models is currently hopeless.

    Otherwise see notes on chaos in another post.

  11. I am surprised that you gave up interest in my solar based NAO/AO anomaly forecasts. Climate forecasting cannot progress while trapped in the illusory paradigm of internal variability. If you are smart, you can extrapolate a good AMO forecast from this:

  12. Given a situation where confidence in developing useful regional climate forecasts is low, and the purpose of such work is to enable cost-effective proactive adaptation, I think it would be of more use to look at the problem from the opposite direction.

    By that I mean for each region identify what are the worst (damaging) climate (weather?) related events that have happened in the past and firstly ask what is cost effective adaptation to prevent/minimise the damage done should such an event occur again?

    Outside of storm-related damage, I’d guess it’s effects on regional agriculture, sea-level rise and other gently accumulating issues (temperature?) that are the long-term changes that can be addressed.

    As sea-level rise is global, and one knows regional high and low tides and local surge effects, the consequences of sea-level rise should be reasonably clear locally and actions planned.

    As regards agriculture, farmers have always adapted to change whether market demand, technological opportunity or changes in climate.

    This process is surely both necessary and sufficient to adapt, and should be happening anyway, i.e. business as usual. As regards local climate change prediction, I see that as a purely academic pursuit as opposed to a practical one in the foreseeable future.

  13. Temperature anomalies are generally predicted to be warmer than the 1961–1990 baseline period over much of the globe, particularly in Northern latitudes and over the African and Asian continents, with anomalies above 1.6 K in some regions. Despite the ensemble mean showing a warming over much of the Northern hemisphere, there is diversity across the ensemble members in terms of their predicted spatial patterns. The standard deviation across the ensemble members ranges from 0.1 K up to 0.5 K in some regions. A small number of ensemble members even predict cooler temperature anomalies relative to the 1961–1990 baseline in some regions (e.g. a cooling of ∼1.4 K over Eastern Europe, Asia, or in the Southern and Pacific Oceans, as shown in the middle panels of Fig. 10). However the majority of ensemble members show a warming over much of the globe for 2016.

  14. Are the uncertainties sufficiently great to include a scenario with significant cooling? If not then the project is probably unrealistically constrained.

    • I can propose various ‘what if’ scenarios. In this particular region, precip is a bigger deal than temperature itself

      • Lots of cyclones?

      • Model precipitation is presumably principally determined by projected temp rise and assumptions about what RH does.

        Temp rise relates to model sensitivity which I addressed above, re. M&F 2015 and triaging for models with more credible CS.

        R.H. seems largely to be assumed to be constant, though close inspection of documentation may reveal models that do something a little more sophisticated. Historically, RH has been dropping on average over the lower 48, IIRC. That would presumably be a key factor to inspect any candidate models for.

        I appreciate you probably have contractual needs not to be too specific about the region in question. If it is Atlantic coastal states TS activity is going to be a key factor in assessing critical precip. events. In which case comparing model output to some of the observations in my recent ‘Ace in a hole’ article may provide some insight into reliability: circa 9y periodicity and ACE drop off during ‘plateau’ periods. My guess is that no models are reproducing either of those characteristics.

        I’m with Mike Jonas on this. I doubt that models have any real predictive value at this stage. It seems that your client is convinced otherwise, so I suppose you have to do the best on that basis and provide error bars sufficiently large to account for the deviation of model runs on key parameters.

        The instability to millikelvin changes in initial conditions revealed in the recent chaos article is frightening. It’s worse than we thought, as the saying goes. :(

        For the rest son of Mulder has covered it, gradual adaptation is likely enough unless there is any observational evidence of run-away processes actually occurring. So far the respective canaries are looking fine and still a bright yellow colour.

      • BTW, what kind of timescale are they hoping to address with this work?

      • we are doing detailed analyses of time slices from 2040-2050 and 2090-2100

      • Thanks. I was afraid you might say something like that.

        As John Kennedy informed us in our discussion of HadSST3 adjustments, models are tuned primarily to reproduce the 1960-1990 period. That is the principal calibration period.

        Hindcasts even as short as back to 1900 perform very poorly in reproducing the early 20th c. warming, and that is a period where the contentious AGW was not significant.

        IMHO extrapolation 40y to 90y outside the 30y calibration period has no scientific merit whatsoever, however it is done. I cannot think of any other field of study which would accept such a practice.

        Uncertainties fan out exponentially in both directions as soon as you leave the domain of the observed data.

        ε = +/- wicked * exp ( monster * t )
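The tongue-in-cheek envelope above can be made concrete. A toy sketch, with 'wicked' and 'monster' replaced by invented placeholder coefficients, of how such a bound grows 40 y and 90 y past the calibration window:

```python
import math

def error_envelope(t_years, base_err=0.1, growth_rate=0.05):
    """Toy exponential error envelope, eps = +/- base_err * exp(growth_rate * t).

    t_years is years past the end of the calibration period; base_err and
    growth_rate are invented placeholders for 'wicked' and 'monster'.
    """
    return base_err * math.exp(growth_rate * t_years)

# Envelope at the calibration edge vs. 40 y and 90 y out
for t in (0, 40, 90):
    print(f"t = {t:3d} y  ->  +/- {error_envelope(t):.2f}")
```

The point is only qualitative: whatever the coefficients, the envelope at 90 y dwarfs the envelope at the edge of the observed record.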

        The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.

        Taking “ensembles” of single runs of different models is worthless until the probability has been established for individual models. It takes hundreds of runs to generate a probability distribution and AFAIK this has never been done for any model. At best we have 8 or 9 runs with different parameter values, which produces 8 or 9 individual results for statistically different models, not a probability distribution.

        The only evidence I have seen of this being attempted was the recent chaos article here documenting the horrendous instability to initial conditions of the order of a millikelvin.
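The initial-condition instability referred to here can be reproduced with the classic Lorenz-63 toy system. This is a sketch, not a climate model; the parameters are the standard Lorenz values and the 0.001 perturbation size is arbitrary, standing in for a millikelvin-scale change:

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (standard parameters)."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two runs differing by 0.001 in one variable
a, b = (1.0, 1.0, 1.0), (1.001, 1.0, 1.0)
max_div = 0.0
for _ in range(4000):                  # 4000 steps of dt = 0.005 -> t = 20
    a, b = lorenz_step(a), lorenz_step(b)
    max_div = max(max_div, max(abs(p - q) for p, q in zip(a, b)))
print(f"largest state divergence: {max_div:.1f}")
```

The tiny perturbation grows to a divergence comparable to the size of the attractor itself, which is the behaviour described in the chaos article.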

        What other aspect of political, economic or technological development would we be foolish enough to imagine we could anticipate that far into the future? I don’t know how we even got to a place where otherwise reasonable, educated people would ask such a question.

        Forward planning and risk assessment is highly commendable but has to be theoretically *possible* before being attempted.

        Maybe it would be worth estimating the risk of flooding due to a meteor strike, or the societal impacts of a Carrington event, and asking whether funds would be better placed in mitigating that kind of risk. That may actually be calculable.

        It really seems like what you are being asked to advise on is whether we will need to take an umbrella if we go for a walk on the 17th of March next year, and what the probability of it raining is.

        If our calibration record was a thousand times more accurate, we had computers with a hundred times the processing power that we currently have and we did not have dozens of poorly constrained parameters, we still would not be able to answer questions about rainfall in 2090.

        Paleo may be the way to bracket the problem but I don’t see how you can work from model output.

        At some stage this uncertainty bubble has to pop and it’s going to make a monster mess on the walls.

      • In terms of the ensembles, I am focusing on two different models (with different values of TCR) that both do a credible job of simulating the annual cycle in the region. I will look at all the ensemble members from the two models and process them independently; I will not combine ensembles from different models and will do no averaging of simulations.

        Each ensemble member is a possible future scenario. The ensembles from the models do not define the entire range of future possible (or plausible) scenarios.
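The per-member processing described above might look like the sketch below; the model names, numbers, and summary statistics are invented placeholders for whatever impact metrics the project actually uses:

```python
# Process each ensemble member as its own scenario; never pool or
# average across members or across models.
ensembles = {
    "model_A": [[14.1, 14.3, 14.0], [14.2, 14.6, 14.1]],  # hypothetical member series
    "model_B": [[13.9, 14.8, 14.4]],
}

scenarios = {}
for model, members in ensembles.items():
    for i, series in enumerate(members):
        # Each (model, member) pair is one possible future scenario.
        scenarios[(model, i)] = {
            "min": min(series),
            "max": max(series),
            "range": max(series) - min(series),
        }

for key, stats in sorted(scenarios.items()):
    print(key, stats)
```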

      • Judith – You simply cannot get any model up to 2040 or 2090 from today. You don’t have to take my word for it, you can test it as I have suggested (or there could be better ways of testing). But what you could do is to look at everything at a high-level – historical patterns, global and regional cycles, etc – and establish a set of possible high-level scenarios [El Nino, PDO/AMO/StadiumWave phase etc]. Then you could run the model repeatedly over the same very short period (1 or 2 days only!) for each scenario plus range variants, resetting the high-level inputs for each run, while letting the model’s imagination keep going [not sure how easy this would be to do]. That could then give you a meaningful set of possible outcomes.

        That way, you are still using the model, as is required, but you are restricting its use to something that might be within its capability. And I repeat – you can test my assertions.

  15. “Fortunately, the client for this project appreciates the uncertainty issue, and is prepared to accept as an outcome to this project a 10% change with 200% uncertainty.”

    Not a bad way to make a living.

  16. There may be some potential conceptual confusions here. Plausible and possible are very different concepts. And either there are many different kinds of worst case scenarios, or there may be no such thing.

    • A clarification on worst case scenario. This is defined by impacts – what could happen that would be catastrophic. Without specifying the region and the particular impacts, the approach I am taking is to look at regional paleo data for such catastrophic events/periods in the past, and understanding what caused them and whether they could happen in the 21st century.

      • I assume you mean whether the impacts could still happen. The precip and temp causes can certainly still happen.

      • The Central England Temperature record (CET) indicates there were periods of regional warming in central England which approached +0.3 C per decade and which lasted for three or more decades. Is there any anecdotal or physical evidence as to what kinds of impacts these periods of strong regional warming might have had in central England? Is enough known about weather patterns, climate variability factors, climate influences (and so on) during these regional warming periods in central England to draw any useful conclusions about what might happen in the future in regions that are experiencing a similarly strong warming trend?
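The screening implied by this question can be done with a simple sliding-window trend: find the 30-year windows whose warming rate approaches +0.3 C per decade. A sketch on a synthetic series (real CET annual means would be substituted):

```python
def decadal_trend(years, temps):
    """Ordinary least-squares slope, in deg C per decade."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den

def warm_windows(years, temps, window=30, threshold=0.3):
    """Start years of 30-y windows whose trend meets the threshold (C/decade)."""
    hits = []
    for i in range(len(years) - window + 1):
        if decadal_trend(years[i:i + window], temps[i:i + window]) >= threshold:
            hits.append(years[i])
    return hits

# Synthetic example: flat series with a 0.04 C/yr ramp imposed after 1970
years = list(range(1900, 2001))
temps = [9.0 + max(0, y - 1970) * 0.04 for y in years]
hits = warm_windows(years, temps)
print(hits)
```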

      • “the approach I am taking is to look at regional paleo data for such catastrophic events/periods in the past, and understanding what caused them and whether they could happen in the 21st century.”

        Grand solar minima: the next starts in the late 2090s. But we have yet to see the worst phase of this solar minimum.

      • regional paleo data?
        based on reconstructions that use observations that you don’t seem to trust. Seems like an epistemology built on sand.

        There is no issue with an analysis choosing to use reanalysis as its foundational observation dataset, with the appropriate accounting of uncertainty for that choice (structural).

        Then a consistent approach would be to redo the paleo work using reanalysis as the observations. A few simple comparisons will demonstrate that you get different paleo answers if you use different observation datasets (another structural uncertainty).

        Anyway, neat project.

      • There are also historical anecdotes related to the paleo periods of interest. Also, I find tree rings more credible when they are examining rainfall/drought

      • Judy ==> On general principles, I think that looking to the past to inform us of the spread of potential futures is the soundest approach. As in my recent essay, the chaos in the climate system has already had thousands of years to exhibit its constraining boundaries; from these we might derive a reasonable understanding of the outside limits, and look for probabilities within that set. Projecting into the future with expected perturbations of knowable parameters (temps, climate cycles, etc.) might be done by comparing against the known statistical probabilities from historic data.

        Sounds like a fascinating project.

  17. In light of the last “chaos” climate model post, it is difficult to see how they can be reliably used to predict future climate. You have a huge challenge.

    If a one-trillionth degree variation in the global mean temperature input causes such a huge variation in the result, consider what an approximation of some sub-system does. That probably represents a deviation from the actual subsystem behavior far greater than the one-trillionth degree one. Then there is always the question of unknown software bugs, which also have the potential to distort model output.

    At any rate, it sounds like a fun project for you. Congratulations on the contract.

    • Defining plausible or possible worst case impact scenarios is not a case of prediction. The challenge is to do it sensibly, given that the climate models fail to address recent long term natural variability. The results could be conceptually very useful in that regard, pointing a way, as it were. I agree it might be fun, and congratulations.

      • where is the evidence for long term natural variability?

        in observations you don’t trust?

        weird contradiction

      • There is some evidence for each of the forms of long term variability that I listed in my recent “Refocusing the USGCRP” CE post. For example, I am prepared to believe that we are emerging from the Little Ice Age. I am prepared to believe in the so-called abrupt events, which may be due to chaotic transitions in ocean circulation. The satellite record, which I tend to trust, indicates that all of the warming since 1978 has been somehow linked to strong El Ninos.

        Then of course there are the ice ages, which I believe actually happened, but which cannot yet be explained, which may well have smaller counterpart fluctuations. We just do not know.

        I could go on but time is short. Just got a new dog, which needs lots of attention and walks. In my view there is a great deal more evidence for long term natural variability than there is for AGW. I doubt that climate is naturally constant on any time scale. How could it be, given all the nonlinearities at play?

      • Strong El Niño; therefore, weak La Niña?

      • No warming 1978-1997, then the big El Nino, then warmer but no warming from 2000 until recently; now it is not clear what is happening, maybe another warmer flat trend line. So all warming is coincident with strong El Ninos.

      • When it comes to El Nino, is the recent bias to warmer El Ninos due to natural variability or due to CO2 transferring energy to the atmosphere and, subsequently, to the ocean surface?

    • “In light of the last “chaos” climate model post, it is difficult to see how they can be reliably used to predict future climate.”

      quite the opposite.

      • I see your point. If we assume the model (from the chaos post) is deterministic, then it has been reasonably demonstrated the model is chaotic. Of course, no evidence has been presented that the particular build upon which the paper was based is deterministic.

  18. “The second challenge for the baseline period is downscaling to a 1 km grid.”

    You can get the observation field for temperature at 1 km if you want. And yes, it is dependent upon topography.

  19. Some have pointed to the model made by the Russian Institute of Numerical Mathematics. Have you seen results from this model?
    It must be easier to work with a single model instead of several that in any case give different outputs.

  20. It seemed to me that this sort of work is more designed for local governments.

    Obviously business would also be interested, since climate change and weather change would have a huge impact on their operations, especially involving insurance. Things like hurricanes, rain, drought, etc. would obviously affect their bottom line. I was wondering if there is any direction for individuals?

  21. have an effect on their bottom line, especially involving insurance. I was wondering whether there is information and direction for individuals?

  22. Judith

    I have often complained that in the UK weather forecasts are too generalised (southern England, northwest England, etc.)

    That is all very well, but most of us are interested in the micro climate in which our own home or place of work is situated.

    The Met Office is some 20 miles from my home as the crow flies (over the sea), and the weather they predict for us often bears little relationship to what we actually get. This can be problematic inasmuch as I live in a tourist area: if bad weather is predicted, that keeps tourists away, and if it then turns out fine, that is a serious financial loss locally.

    For some nine years I was involved with the Environment Agency, a govt body also located in Exeter, and they and the Met Office worked closely together.

    The topography here is unusual inasmuch as there are many steep sided valleys with rivers that lead down to the sea. There can be serious consequences if a cloudburst coincides with a high tide, which caused great problems in Boscastle, a village that achieved international fame a few years ago.

    The Met Office were called in to advise, as their forecast had failed to predict that the slow moving depression that caused the rain would stall over the Boscastle catchment area.

    They agreed to try to refine their twelve hour forecast to a grid resolution of some 5 km but to my knowledge they have not yet achieved this.

    My point is that if an organisation with one of the biggest supercomputers in the world cannot forecast the conditions 12 hours ahead for a 5 km grid area, what hope is there for a long term prediction on a 1 km scale?

    Should not this aim be substituted with something more realistic?

    Incidentally, I have just returned today from Cortina in the Dolomites. The weather forecasters completely failed to predict the weather accurately for any of the 7 days we were there and spectacularly missed the snow we had this morning.

    Micro climates are great. But predicting One kilometre grids some time ahead? Really?


    • The first casualties of contact with the future are the predictions.

    • The NWS uses a fine grid computerized forecasting system that generates local forecasts that are typically quite good for the day. They even take into account the local elevation. I am 1000 feet above the nearest town, which is just ten miles away, and they accurately adjust the temperature forecast accordingly (once I gave the system my coordinates).

      • David

        How complex is your weather? Predicting weather in the middle of a continent is likely to be much easier than on an island in the Atlantic! I do not know where you are located. How local is local? Ten miles away is a lot bigger than a one km grid.


    • The issue with the 1 km grid is to account for topographic impacts on precip, plus the land surface model requires 1 km resolution. The final results/impacts are not relevant at the 1 km scale, but rather on the regional scale. The point they are arguing is that you can’t get the integral/regional scale without accounting for these small scale forcings/processes. If it were up to me, I would be doing monthly and 100 km scale, with some sort of calibration for the missed subgridscale stuff.
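For temperature, the simplest flavour of the topographic adjustment mentioned here is a lapse-rate correction from the coarse cell to each 1 km cell. The sketch below assumes a fixed standard-atmosphere lapse rate and invented elevations; downscaling precipitation is far harder and is not attempted:

```python
LAPSE_RATE = -6.5 / 1000.0  # deg C per metre, standard-atmosphere value

def downscale_temp(coarse_temp, coarse_elev, fine_elevs):
    """Adjust a coarse-cell temperature to fine cells via elevation differences."""
    return [coarse_temp + LAPSE_RATE * (e - coarse_elev) for e in fine_elevs]

# Hypothetical 100 km cell at 500 m, with four 1 km cells at various heights
fine = downscale_temp(coarse_temp=12.0, coarse_elev=500.0,
                      fine_elevs=[200.0, 500.0, 900.0, 1500.0])
print([round(t, 2) for t in fine])
```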

      • I don’t think their point is valid. Monthly and 100 km scale is the most you can hope for, and even that is surely yet to be achieved in any climate with considerable variability in its makeup.


      • If you could somehow separate out the micro-climate effects from scatter (e.g. T-storms) using recent data, could you apply that to the regional-scale model outputs?

        Maybe start with regional “average” weather for a bunch of days (by season), and correlate the km-scale variation with it for each day. Stuff that hangs together for similar days, that’s micro-climate. Stuff that doesn’t, that’s weather variation.

        Then once you’ve got a pattern for a bunch of regional data points, interpolate it throughout the space? Then apply it to regional-scale model outputs?
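One minimal version of this separation is a per-site regression of local readings on the regional mean: the stable slope and offset are the micro-climate signature, and the residual scatter is weather. All numbers below are invented for illustration:

```python
def fit_line(x, y):
    """Least-squares slope and intercept of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    slope = num / den
    return slope, my - slope * mx

regional = [8.0, 10.0, 12.0, 15.0, 11.0, 9.0]   # regional-mean temps by day
site = [9.1, 11.0, 13.2, 16.1, 12.0, 10.1]      # a hypothetical valley site

slope, offset = fit_line(regional, site)
residuals = [s - (slope * r + offset) for r, s in zip(regional, site)]
print(f"micro-climate: offset {offset:.2f}, slope {slope:.2f}")
print("weather scatter (residual spread):",
      round(max(residuals) - min(residuals), 2))
```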

      • “I don’t think their point is valid. ”

        The customer is always right.


    • “Micro climates are great. But predicting One kilometre grids some time ahead? Really?”

      It’s easy; there are many ways to do it. The question is what accuracy is required, and how that drives the final answer.

      • Mosh

        Enough accuracy for it to be a worthwhile model.


      • “Enough accuracy for it to be a worthwhile model.”

        worthwhile is NOT FOR YOU TO JUDGE.

        it astounds me that people think they have a say in what a customer has asked a contractor to provide.

        For example: I provide analysis to my customer, faced with a tough problem where they have no clue what to do. I can tell them up or down. I cannot say up 52 or down 16; I can only say up or down.

        For them, THAT IS WORTHWHILE, because they want to head in the right direction. They don’t want to stand still, and they don’t want to move in the wrong direction. They want direction. They fully understand that they won’t get a PRECISE estimate of how far to move; they just want to move in the right direction.

        So we pick an arbitrary UNIT, “5”, and I tell them up 5 or down 5.

        Then next week I tell them up 5 again… over time we refine this amount.

        People operate this way ALL THE TIME. When we cannot compute a precise answer, we compute the best we can.

        Then we watch, try again, and watch, and try again.

        Even snipers have spotters, ding dong.

      • Does it really matter? Judith has stated that the client is good with a 10% change and 200% uncertainty.

        Must be government or ENGO money.

      • Steven Mosher

        Thank you, Mark.
        I don’t think Tony b has ever had a customer.

        It’s not uncommon for customers to double or triple the estimates provided by outside advisors.

        Yes even commercial organizations.

    • Tonyb

      >They agreed to try to refine their twelve hour forecast to a grid resolution of some 5 km but to my knowledge they have not yet achieved this.

      “The latest addition is a variable resolution UK model (UKV) which has a high resolution inner domain (1.5 km grid boxes) over the area of forecast interest, separated from a coarser grid (4 km) near the boundaries by a variable resolution transition zone. This variable resolution approach allows the boundaries to be moved further away from the region of interest, reducing unwanted boundary effects on the forecasts.”

      > There can be serious consequences if a cloudburst coincides with a high tide which caused great problems in Boscastle , which achieved international fame a few years ago.

      “The 12 km model gave a weak signal of light rain across most of the Southwest Peninsula and provided little guidance. The higher resolution models showed the development of thunderstorms with much more representative intensity and location, which was very similar to the radar data from the actual time, shown below.”

      I was on duty that day and the radar echoes did indeed look alarming.
      Things (the science) move on in meteorology, as indeed they do in climatology.

      • Tony

        Thanks for this.

        I can think of around 50 steep sided valleys where the Boscastle scenario could happen, and many more where there isn’t a river involved. This possibly makes it worse, as the water would just course down the hillside to its foot, where the village and other habitation is typically located.

        How far ahead do you think a 1 km grid forecast could be made with any accuracy, as per Judith’s long term requirement?


  23. Sure, sure, why not describe all desired public works projects in terms of needed responses to climate change (e.g. raising 90 year old seawalls, dredging harbors, etc.), especially if matching federal funds are available?

  24. Prof Curry,

    Maybe just do the opposite of the Australian Bureau of Meteorology –

    “The World’s Best Practice climate models predicted Australia would be hotter than normal in September, instead the maximum temperature anomaly was 1 to 5 degrees below average across most of Australia.”

    For temperature, anyway.

    The Australian Government seems pleased with the accuracy. It hasn’t indicated any dissatisfaction at all, to my knowledge.

    Ah, the joys of having Governments as customers! Not surprising really, as democratic Governments are a collection of popularity contest winners. If your customer is Government, you can depend on them defending your selection to the death!


  25. Great project. Well done Prof Curry!
    The test of any new scientific theory is the ability to make numerical predictions which prove to be correct on independent assessment. Have we reached the point where this is required for ‘climate change’?
    Could we select a limited number (<100) of sites not subject to the urban heat effect and invite interested parties to predict: a) the annual average temperature at that site, and b) the annual temperature range at that site. The predictions could be made for any future period at one year intervals. The actual temperatures, and the accuracy or otherwise of the predictions, would be published annually.
    The average temperature at a site would be determined as the arithmetic average of hourly, automated data recorded using a platinum resistance thermometer or similar. That would give an average of nearly 9,000 readings for each site annually.
    Those who believe in the dominant influence of CO2 could qualify their longer term predictions with a CO2 forecast and a +/- allowance for any departure from that expectation.
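Scoring such predictions is mechanically simple. A sketch with synthetic hourly data standing in for the platinum resistance readings (note 24 x 365 = 8760 readings in a non-leap year, consistent with the "nearly 9,000" figure):

```python
import random

random.seed(0)
# Synthetic stand-in for a year of hourly readings at one site:
# a 10 C base with Gaussian scatter of 5 C.
hourly = [10.0 + random.gauss(0.0, 5.0) for _ in range(24 * 365)]

annual_mean = sum(hourly) / len(hourly)
annual_range = max(hourly) - min(hourly)
print(len(hourly), round(annual_mean, 2), round(annual_range, 1))
```

Comparing predicted and realised values of the two statistics, year by year, is then a one-line subtraction.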

  26. Dr Roy Spencer –

    “Poverty is a REAL threat to humanity, as demonstrated by thousands of years of human history…climate change is, so far, an imaginary threat that exists in the future.”

    Gee. Steven Mosher could take a leaf from Dr Spencer.

    Is agreeing with a GHE non-believer “handing him his ass on a plate”, or did I miss something? Maybe I’ve missed some cunning Warmist redefinition.

    Maybe a bit O/T, but Dr Spencer refers to “climate change” as an “imaginary threat”. Do I perceive a hint of a scientist starting to believe the data in preference to the dogma?


  27. Dr. Curry.

    Sorry to be late.

    Your project sounds valuable.
    Even if (tuned) models reproduce global average temperature or whatever exactly, unless regional variations also match we are likely wasting our time. The presumed characteristics of the chaotic system have not been matched.

    A comparison of principal components analyses (PCA) of model output and of data would be most interesting. The analysis might indicate whether models or data, at whatever scale is chosen, are chaotic in the sense that there exists a small set of components which describe the regional variations (using multiple meanings for the word region). If the simulations are to be believed, I would think that the components of the simulation should match those of the data. Also, I have seen situations where PCAs expressed events that would have been considered insignificant on a case by case basis, but which appeared consistently in combination with other events, and which subsequently were found to be real. This is a sort of point-to-point sharing of variances that allows greater significance to be extracted with components than is possible with isolated CDFs, and it can greatly reduce experimentation. (Significance testing does get a bit tricky.)
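A bare-bones version of the proposed comparison: compute the leading spatial EOF of a "model" field and a "data" field and measure how well the patterns match. The fields below are synthetic, sharing one planted spatial mode; real regional anomaly fields would be substituted:

```python
import random

def leading_eof(field):
    """Leading spatial EOF of a time-by-space anomaly field, via power iteration."""
    nt, ns = len(field), len(field[0])
    means = [sum(row[j] for row in field) / nt for j in range(ns)]
    anom = [[row[j] - means[j] for j in range(ns)] for row in field]
    cov = [[sum(a[i] * a[j] for a in anom) / nt for j in range(ns)]
           for i in range(ns)]
    v = [1.0] + [0.0] * (ns - 1)
    for _ in range(200):               # power iteration on the covariance matrix
        w = [sum(cov[i][j] * v[j] for j in range(ns)) for i in range(ns)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

random.seed(1)
pattern = [1.0, 0.5, -0.5, -1.0]       # a planted common spatial mode

def synth(n=200):
    """Synthetic field: a random amplitude on the shared mode, plus noise."""
    out = []
    for _ in range(n):
        amp = random.gauss(0, 1)
        out.append([amp * p + 0.1 * random.gauss(0, 1) for p in pattern])
    return out

e_obs, e_mod = leading_eof(synth()), leading_eof(synth())
match = abs(sum(x * y for x, y in zip(e_obs, e_mod)))  # |cosine|; EOF sign is arbitrary
print(f"leading-EOF pattern match: {match:.2f}")
```

A model whose leading components fail this kind of match against observations would be a candidate for triage, in the spirit discussed upthread.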
