by Judith Curry
Overall, climate modeling has made enormous progress in the past several decades, but meeting the information needs of users will require further advances in the coming decades. – NRC
The U.S. National Research Council (NRC) has just released a new study under the auspices of BASC that takes on the challenge of A National Strategy for Advancing Climate Modeling. Here is a summary of the recommendations:
The Committee recommends a national strategy for advancing the climate modeling enterprise in the next two decades, consisting of four main new components and five supporting elements that, while less novel, are equally important. The Nation should:
1. Evolve to a common national software infrastructure that supports a diverse hierarchy of different models for different purposes, and which supports a vigorous research program aimed at improving the performance of climate models on extreme-scale computing architectures;
2. Convene an annual climate modeling forum that promotes tighter coordination and more consistent evaluation of U.S. regional and global models, and helps knit together model development and user communities;
3. Nurture a unified weather-climate modeling effort that better exploits the synergies between weather forecasting, data assimilation, and climate modeling;
4. Develop training, accreditation, and continuing education for “climate interpreters” who will act as a two-way interface between modeling advances and diverse user needs.
At the same time, the Nation should nurture and enhance ongoing efforts to:
5. Sustain the availability of state-of-the-art computing systems for climate modeling;
6. Continue to contribute to a strong international climate observing system capable of comprehensively characterizing long-term climate trends and climate variability;
7. Develop a training and reward system that entices the most talented computer and climate scientists into climate model development;
8. Enhance the national and international IT infrastructure that supports climate modeling data sharing and distribution; and
9. Pursue advances in climate science and uncertainty research.
Well, almost all of these individual recommendations are good (I take issue with #4). And a few of them are very good (I especially like #1, #2, #9). But will they do the job, in terms of dealing with the fundamental problems with climate models?
The large investment in climate modeling is justified by its perceived importance in supporting decisions. Until recently, such decision making has focused on CO2 stabilization and climate sensitivity. In this report, the target users are farmers, hydropower system managers, insurance companies, the national security sector, and the building community. What do these users need? High-resolution regional accuracy, with a focus on extreme events. This is exactly where climate models have the least skill. How will this be addressed?
As climate models become more comprehensive and their gridscale becomes finer, they can provide meaningful projections of more parts of the climate response and their possible feedbacks on the overall climate system, but this does not necessarily reduce projection uncertainty about some aspects of climate change. Indeed, global climate sensitivity, defined as the global warming simulated by a climate model in response to a sustained doubling of atmospheric CO2 concentrations, still shows a similar 30 percent spread across leading models as it did 20 years ago.
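The quoted 30 percent spread traces back to the spread in the models' net feedbacks. A zero-dimensional energy-balance sketch (the feedback values below are invented for illustration, not taken from the report or from any real model) shows how a modest spread in the feedback parameter maps directly into a spread in sensitivity:

```python
# Zero-dimensional energy-balance illustration (invented numbers).
# Equilibrium climate sensitivity S = F_2x / lambda, where F_2x is the
# radiative forcing from doubled CO2 (~3.7 W/m^2) and lambda is the net
# climate feedback parameter (W/m^2/K).

F_2X = 3.7  # W/m^2, canonical forcing for doubled CO2

# Hypothetical per-model feedback parameters (W/m^2/K)
feedbacks = {"model_a": 1.00, "model_b": 1.15, "model_c": 1.35}

sensitivities = {name: F_2X / lam for name, lam in feedbacks.items()}
lo, hi = min(sensitivities.values()), max(sensitivities.values())
mean = sum(sensitivities.values()) / len(sensitivities)

print({k: round(v, 2) for k, v in sensitivities.items()})
print(f"spread: {(hi - lo) / mean:.0%} of the multi-model mean")  # ~30%
```

The point of the sketch is that the sensitivity spread is inherited from feedback uncertainty, which finer grids do not automatically resolve.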
The authors recognize that increasing resolution comes with its own problems:
However, increasing spatial resolution is not a panacea. Climate models rely on parameterizations of physical, chemical, and biological processes to represent the effects of unresolved or subgrid-scale processes on the governing equations. Increasing spatial resolution does not automatically lead to improved accuracy of simulations. Often, the assumptions in the parameterizations are scale-dependent, although so-called “scale-aware” parameterization development has been pursued recently. As model resolution is increased, the assumptions may break down, leading to a degradation of the simulation fidelity.
Even if the assumptions remain valid over a range of model resolutions, there is still a need to recalibrate the parameters in the parameterizations as resolution is refined (sometimes called model tuning), and the tuning may only be valid for the time period for which observations used to constrain the model parameters are available. The lack of understanding and formulation of the interactions between parameterizations and spatial resolution makes it hard to quantify the influence of spatial resolution on model skill. Furthermore, structural differences among parameterizations may have comparable, if not larger, effects on the simulations than spatial resolution.
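The "tuning" described above can be made concrete with a deliberately trivial sketch (the scheme, numbers, and names below are all made up, not from the report): a subgrid scheme with one free parameter is recalibrated against reference data, and the best-fit value shifts when resolution changes because part of the flux becomes explicitly resolved.

```python
# Minimal illustration of parameter tuning (all numbers invented).
# A toy subgrid scheme predicts a flux as flux = c * gradient; tuning
# picks the c that best matches reference "observations" at a given
# resolution, by brute-force search over candidate values.

def scheme(c, gradients):
    return [c * g for g in gradients]

def tune(gradients, observed_fluxes, candidates):
    """Return the candidate c minimizing mean squared error vs. observations."""
    def mse(c):
        preds = scheme(c, gradients)
        return sum((p - o) ** 2 for p, o in zip(preds, observed_fluxes)) / len(preds)
    return min(candidates, key=mse)

gradients = [1.0, 2.0, 3.0]

# Pretend the effective c is 0.8 at coarse resolution but only 0.5 at
# fine resolution, because part of the flux is now explicitly resolved.
obs_coarse = [0.8 * g for g in gradients]
obs_fine = [0.5 * g for g in gradients]

candidates = [i / 10 for i in range(1, 11)]
print("tuned c (coarse):", tune(gradients, obs_coarse, candidates))  # 0.8
print("tuned c (fine):  ", tune(gradients, obs_fine, candidates))    # 0.5
```

A real tuning exercise involves many interacting parameters and expensive model runs, which is exactly why the resolution dependence is hard to quantify.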
The authors recognize the problems associated with attempting to address the regional issues using downscaling:
Climate projections at finer scales (such as resolving climatic features for a small state, single watershed, county, or city) are typically produced using one of two approaches: either dynamical downscaling using higher resolution (50 km or finer) regional climate models nested in the global models or empirical statistical downscaling of projections developed from global climate model output and observational data sets. Neither downscaling approach can reduce the large uncertainties in climate projections, which derive in large part from global-scale feedbacks and circulation changes, and it is important to base such downscaling on model output from a representative set of global climate models to propagate some of these uncertainties into the downscaled predictions. The modeling assumptions inherent in the downscaling step add further uncertainty to the process. There has been inadequate work done to date to systematically evaluate and compare the value added by various downscaling techniques for different user needs in different types of geographic regions. However, as the grid spacing of the global climate model becomes finer, simple statistical downscaling approaches become more justifiable and attractive because the climate model is already simulating more of the weather and surface features that drive local climate variations.
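The simplest empirical statistical downscaling approach, the change-factor or "delta" method, can be sketched in a few lines (all data below are synthetic, for illustration only): the large-scale change signal comes from the coarse model, while local detail comes entirely from the observed baseline.

```python
# "Delta method" statistical downscaling sketch (synthetic numbers).
# The GCM's projected change between a historical and a future period
# is added to a locally observed baseline; the GCM supplies only the
# change signal, the station record supplies the local structure.

def delta_downscale(obs_baseline, gcm_hist_mean, gcm_future_mean):
    delta = gcm_future_mean - gcm_hist_mean  # model-projected change
    return [t + delta for t in obs_baseline]

# Observed monthly-mean temperatures at a station (deg C), Jan..Apr
obs = [2.0, 3.5, 8.0, 12.0]

# Coarse-grid GCM means over the cell containing the station
gcm_hist, gcm_future = 5.0, 6.5  # implies a +1.5 C change

print(delta_downscale(obs, gcm_hist, gcm_future))  # [3.5, 5.0, 9.5, 13.5]
```

Note that this method, by construction, cannot correct an erroneous large-scale change signal; it simply passes the global model's uncertainty through to the local scale, which is the report's point.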
What are some other sources of improvements?
As discussed in Chapter 3, model improvements to address these research frontiers will be achieved through three main mechanisms: (i) development of Earth system models (increasing model complexity), (ii) improvements to the existing generation of atmosphere-ocean models, through improved physics, parameterizations, and computational strategies, increased model resolution, and better observational constraints, and (iii) improved co-ordination and coupling of models at global and regional scales, including shared insights and capabilities of modeling efforts in the climate, reanalysis, and operational forecast communities.
The authors recognize an important point:
There is no one-size-fits-all answer, but instead the approach should be problem-driven. Some problems that are of great societal relevance, such as sea level rise and climate change impacts on water resources, require increased model complexity, and progress is likely through the addition of new model capabilities (ice sheet dynamics and land surface hydrology, in these examples). In other cases, such as improved model skill in regional precipitation and extreme weather forecasts, increased resolution and “scalable” physical parameterizations are the highest priorities for extending model capabilities. Other problems, such as water resource management, require both increased resolution and complexity.
They make the following summary recommendation:
Recommendation 4.1: As a general guideline, priority should be given to climate modeling activities that have a strong focus on problems which intersect the space where:
(i) addressing societal needs requires guidance from climate models and
(ii) progress is likely, given adequate resources. This does not preclude climate modeling activity focused on basic research questions or “hard problems,” where progress may be difficult (e.g., decadal forecasts), but is intended to allocate efforts strategically.
Here is what they have to say on natural internal variability:
Climate predictions and projections are subject to uncertainty resulting from the internal variability of the climate system. The relative role of this type of uncertainty, compared to other sources of uncertainty, is a function of the future time horizon being considered and the spatial scale of analysis (Hawkins and Sutton, 2009, 2011). Hawkins and Sutton note that internal variability dominates on decadal or shorter time scales, and is more important at smaller (e.g., regional) space scales. Natural variability is usually explored by running ensembles of climate model simulations using different initial conditions for each simulation. Traditionally the number of ensemble members has not been large (e.g., around three in the CMIP3 data set), nor has it been based on rigorous statistical considerations. In addition, estimation of natural variability using models is limited by inherent uncertainty in the models because of parametric and structural uncertainty.
In this regard, the role of internal variability has been under-investigated in the exploration of future climate change, although recent research on larger ensembles has developed improved measures of natural variability and underscored how substantial it can be, particularly on regional scales.
The above statement is excellent, I couldn’t have said it better myself. This is a FAR GREATER impediment to providing information for regional decision makers than is model resolution. I see no strategy here for actually addressing this issue.
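The initial-condition-ensemble approach described in the quoted passage is easy to demonstrate with a toy chaotic system (a logistic map, not a climate model; the perturbation size and member count below are arbitrary): tiny initial differences grow until the ensemble spread reflects the system's internal variability rather than the initial state.

```python
# Toy initial-condition ensemble (illustration only). Perturbations of
# order 1e-6 to the initial state of a chaotic system (logistic map,
# r = 3.9) grow over 100 steps until ensemble members are effectively
# decorrelated, mimicking internal variability in climate ensembles.

import random

def logistic_trajectory(x0, steps, r=3.9):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(0)  # reproducible perturbations
base = 0.5
ensemble = [base + random.uniform(-1e-6, 1e-6) for _ in range(20)]

finals = [logistic_trajectory(x0, steps=100) for x0 in ensemble]
spread = max(finals) - min(finals)
print(f"initial spread ~2e-6, spread after 100 steps: {spread:.2f}")
```

With only a handful of members (the roughly three per model in CMIP3), such a spread is badly undersampled, which is exactly the statistical inadequacy the report flags.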
While there is a section on uncertainty (a good thing), their ‘uncertainty’ strategy is woefully inadequate, focusing on:
- Model weighting in multi-model ensembles
- Parameter optimization
- Arguments that they are reducing uncertainty
- Communicating uncertainty (which received most of the emphasis)
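For concreteness, the first item, model weighting, is often sketched as a skill-based weighted mean along the lines below (this is one of many proposed schemes, and every number here is invented; it is not the report's method):

```python
# Sketch of skill-based multi-model weighting (invented numbers).
# Each model gets a weight proportional to exp(-(error/sigma)^2),
# where "error" is its mismatch against some observed quantity; the
# projection is then the weighted mean across models.

import math

# (historical error vs. obs, projected future change in deg C) per model
models = {
    "model_a": (0.2, 2.1),
    "model_b": (0.5, 3.0),
    "model_c": (1.0, 4.2),
}

sigma = 0.5  # assumed tolerance scale; results are sensitive to this choice

raw = {name: math.exp(-(err / sigma) ** 2) for name, (err, _) in models.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

weighted = sum(weights[n] * proj for n, (_, proj) in models.items())
unweighted = sum(proj for _, proj in models.values()) / len(models)

print(f"weighted: {weighted:.2f} C, unweighted: {unweighted:.2f} C")
```

The arbitrariness of sigma and of the chosen skill metric is precisely why weighting schemes alone are a thin answer to the structural uncertainty problem.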
There is a brief section on decision making under uncertainty, and one particular statement caught my eye:
The promise of uncertainty reduction, when not realized, stands as a metric of poor management, poor scientific method, or outright scientific failure.
Their overall recommendation on uncertainty has the right words, but from the preceding text it is not clear what, if anything, of use would actually emerge from this. The whole issue of verification and validation was ignored (apart from a general call to maintain the observing system).
Recommendation 6.1: Uncertainty is a significant aspect of climate modeling and should be properly addressed by the climate modeling community. To facilitate this, the United States should more vigorously support research on uncertainty, including:
- understanding and quantifying uncertainty in the projection of future climate change, including how best to use the current observational record across all time scales;
- incorporating uncertainty characterization and quantification more fully in the climate modeling process;
- communicating uncertainty to both users of climate model output and decision makers; and
- developing deeper understanding on the relationship between uncertainty and decision making so that climate modeling efforts and characterization of uncertainty are better brought in line with the true needs for decision making.
And finally, the NRC BASC Newsletter announces a new website:
Along with its new report about advancing climate modeling, the Board on Atmospheric Sciences and Climate has just released Climate Modeling 101, a website designed to help the public learn more about the basics of climate modeling: how climate models work and why they are important. The site features short videos and animations that explain everything from the difference between climate and weather to how climate models are built and verified.
I applaud their making this kind of effort. Do you think it works for the intended audience?
JC summary: If the objective of this report is to argue for increased resources for the U.S. climate modeling enterprise, it might be effective. If the objective is to improve climate projections for decision making, it is not clear how much of an impact this will have. My prediction is that #3, unified weather-climate prediction, will have the greatest impact; it will force a focus on subseasonal and seasonal timescales, where there is some hope of progress that would have great societal impact. Improving simulations of the MJO, ENSO, AO, etc. will require getting the coupling between the atmosphere and ocean correct, which is needed for climate models to get the circulations (and hence regional climates) correct.
And finally, if you missed it the first time around, check out my presentation on climate modeling strategy issues that I made to DOE BERAC about a year ago, if you want to know what I think needs to be done.