Climate Etc.

Climate model discussion thread

by Judith Curry

My perspective on climate models (uncertainty monster, DOE presentation, RS presentation) has been regarded as outside the ‘mainstream’.  Here are some new papers by leading climate modelers that provide new evidence and arguments on the concerns I have been raising.

Tuning the climate of a global model 

T. Mauritsen, B. Stevens, E. Roeckner, T. Crueger, M. Esch, M. Giorgetta, H. Haak, J. H. Jungclaus, D. Klocke, D. Matei, U. Mikolajewicz, D. Notz, R. Pincus, H. Schmidt, and L. Tomassini

Abstract.  During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system. These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields. The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution. The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters. Here we provide insights into how climate model tuning is practically done in the case of closing the radiation balance and adjusting the global mean temperature for the Max Planck Institute Earth System Model (MPI-ESM). We demonstrate that considerable ambiguity exists in the choice of parameters, and present and compare three alternatively tuned, yet plausible configurations of the climate model. The impact of parameter tuning on climate sensitivity was less than anticipated.

Citation:  Journal of Advances in Modeling Earth Systems, 4, doi:10.1029/2012MS000154  [link]
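For readers unfamiliar with what "tuning" means in practice, here is a deliberately minimal sketch in the spirit of the paper's radiation-balance example: a zero-dimensional energy-balance model whose one uncertain parameter (planetary albedo) is adjusted until the modeled global mean temperature matches a target. The procedure, parameter choice, and numbers are illustrative assumptions only, not MPI-ESM's actual tuning machinery:

```python
# Toy illustration of model "tuning": adjust an uncertain parameter
# (here, planetary albedo) in a zero-dimensional energy-balance model
# until the equilibrium global mean temperature matches a target value.
# All numbers are hypothetical; this is NOT the MPI-ESM procedure.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
EPS = 0.61         # effective emissivity (crude greenhouse effect), assumed

def equilibrium_temp(albedo):
    """Equilibrium T (K) where absorbed solar equals emitted longwave."""
    return ((1 - albedo) * S0 / 4 / (EPS * SIGMA)) ** 0.25

def tune_albedo(target_temp, lo=0.2, hi=0.4, tol=1e-6):
    """Bisect on albedo until the model matches the target temperature.
    Higher albedo reflects more sunlight, so equilibrium_temp is a
    decreasing function of albedo and bisection applies directly."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if equilibrium_temp(mid) > target_temp:
            lo = mid   # model too warm: need more reflection
        else:
            hi = mid   # model too cold: need less reflection
    return 0.5 * (lo + hi)

a = tune_albedo(288.0)  # tune toward ~288 K observed global mean
print(round(a, 3), round(equilibrium_temp(a), 2))
```

With several genuinely uncertain parameters and several targets, many different parameter combinations can satisfy the same constraints, which is the ambiguity the paper documents.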

This paper is also discussed at the Blackboard.  Here is the key point from my perspective, as discussed by Lucia:

The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen. Yet, we were in the fortunate situation that the MPI-ESM-LR performed acceptably in this respect, and we did have good reasons to believe this would be the case in advance because the predecessor was capable of doing so. During the development of MPI-ESM-LR we worked under the perception that two of our tuning parameters had an influence on the climate sensitivity, namely the convective cloud entrainment rate and the convective cloud mass flux above the level of nonbuoyancy, so we decided to minimize changes relative to the previous model. The results presented here show that this perception was not correct as these parameters had only small impacts on the climate sensitivity of our model. Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure. Most other observational datasets sooner or later meet the same destiny, at least beyond the first time they are applied for model evaluation. That is not to say that climate models can be readily adapted to fit any dataset, but once aware of the data we will compare with model output and invariably make decisions in the model development on the basis of the results.

Lucia’s comment: This seems to be an admission that modelers have known early on that their models would be compared to 20th century data. So, early models were tuned to that. We are now in a situation where models can mostly match 20th century data. So, the good match in the hindcast for the historic surface temperatures is no longer a very good metric for determining which models are good or bad.

JC comment:  This supports my circular reasoning argument in the uncertainty monster paper, whereby tuning (implicit or explicit) to 20th century time series of global average temperature anomalies makes these models not useful for 20th century attribution studies.

Lucia also pulled this tidbit related to simulations of Arctic sea ice:

We usually focus on temperature anomalies, rather than the absolute temperature that the models produce, and for many purposes this is sufficient. There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K, and details such as the years of cooling following the volcanic eruptions. Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present. To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water: Evaporation and precipitation depend non-linearly on temperature through the Clausius-Clapeyron relation, while snow, sea-ice, tundra and glacier melt are critical to freezing temperatures in certain regions. The models in CMIP3 were frequently criticized for not being able to capture the timing of the observed rapid Arctic sea-ice decline [e.g., Stroeve et al., 2007]. While unlikely the only reason, provided that sea ice melt occurs at a specific absolute temperature, this model ensemble behavior seems not too surprising.

JC comment: I first became aware of the problem two years ago, when Tim Palmer presented (at Fall AGU) a plot of absolute global temperatures simulated by the CMIP3 climate models for the 20th century.  The impact of this on the model thermodynamics (notably the sea ice) is profound; model tuning then works around this problem, so that n wrongs make a ‘right’, almost certainly torquing feedbacks in unfortunate ways.
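The Clausius-Clapeyron nonlinearity mentioned in the excerpt gives a feel for why an absolute temperature bias of a few K matters. The figures below are back-of-envelope estimates, not taken from the paper:

```latex
% Saturation vapor pressure grows roughly exponentially with absolute T:
\frac{d\ln e_s}{dT} \;=\; \frac{L_v}{R_v T^2}
  \;\approx\; \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}
                   {461\ \mathrm{J\,kg^{-1}K^{-1}} \times (288\ \mathrm{K})^2}
  \;\approx\; 0.065\ \mathrm{K^{-1}}
  \qquad (\sim 6.5\%\ \text{per K near } 288\ \mathrm{K})

% So a 3 K cold bias implies saturation vapor pressure low by roughly
e^{-0.065 \times 3} \;\approx\; 0.82
  \qquad (\sim 18\%\ \text{too little saturation vapor pressure})
```

A bias of that size feeds directly into evaporation and precipitation, which is why tuning other parameters to compensate can distort the feedbacks.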

Simulating regime structures in weather and climate prediction models

A. Dawson, T.N. Palmer, S. Corti

Abstract.  It is shown that a global atmospheric model with horizontal resolution typical of that used in operational numerical weather prediction is able to simulate non-Gaussian probability distributions associated with the climatology of quasi-persistent Euro-Atlantic weather regimes. The spatial patterns of these simulated regimes are remarkably accurate. By contrast, the same model, integrated at a resolution more typical of current climate models, shows no statistically significant evidence of such non-Gaussian regime structures, and the spatial structure of the corresponding clusters is not accurate. Hence, whilst studies typically show incremental improvements in first and second moments of climatological distributions of the large-scale flow with increasing model resolution, here a real step change in the higher-order moments is found. It is argued that these results have profound implications for the ability of high resolution limited-area models, forced by low resolution global models, to simulate reliably regional climate change signals.

Citation:  Geophysical Research Letters, 39, L21805, 6 pp., 2012, doi:10.1029/2012GL053284  [abstract]

Excerpts from the Conclusions:

Understanding gained from studies of low-dimensional dynamical systems suggests that the response to external forcing of a system with regimes is manifested primarily in changes to the frequency of occurrence of those regimes. This implies that a realistic simulation of regimes should be an important requirement from climate models. We have shown that a low resolution atmospheric model, with horizontal resolution typical of CMIP5 models, is not capable of simulating the statistically significant regimes seen in reanalysis, yet a higher resolution configuration of the same model simulates regimes realistically. This result suggests that current projections of regional climate change may be questionable.

This finding is also highly relevant to regional climate modelling studies where lower resolution global atmospheric models are often used as the driving model for high resolution regional models. If these lower resolution driving models do not have enough resolution to realistically simulate regimes, then the boundary conditions provided to the regional climate model could be systematically erroneous. It is therefore likely that the embedded regional model may represent an unrealistic realization of regional climate and variability.

The models studied here used observed SSTs for boundary conditions. However, the coupled atmosphere–ocean models typically used for climate prediction have an interactive ocean model, complete with its own errors and biases. It seems unlikely that one would see such a large improvement moving from T159 to T1279 in a coupled scenario simply due to errors in the ocean model and the two-way interactions between the atmospheric and oceanic model components. The coupling process often involves a certain amount of model parameter tuning, which may also decrease the impact of improved atmospheric resolution noted here. However, there is evidence that with modest improvements to oceanic resolution one can reduce some of the large SST biases that affect global circulation, suggesting improved atmospheric resolution may still provide considerable benefits.

JC comment:  This is a very good paper that uses an interesting technique for understanding the model circulations.  I have long argued that using low resolution global models to force regional climate models is a pointless exercise. I disagree somewhat with the authors that increasing the model resolution will solve these problems in coupled atmosphere/ocean models; I suspect that there are more fundamental issues at play in coupling of two nonlinear, chaotic fluids.
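The regime detection in this paper rests on cluster analysis of the large-scale flow, with significance testing against a Gaussian null hypothesis. A stripped-down, purely hypothetical one-dimensional analogue of that idea: cluster the data into two groups and ask whether the clustering explains more variance than it would for a single Gaussian of similar spread:

```python
# Hedged sketch of the kind of test used to detect non-Gaussian "regime"
# structure: cluster the data, then compare the quality of the clustering
# against what a unimodal Gaussian would give. The paper applies k-means
# to large-scale atmospheric flow patterns; here we use synthetic 1-D
# data and a simple two-means split, purely for illustration.
import random
random.seed(0)

def two_means_score(xs, iters=50):
    """Fraction of variance explained by a 2-means split of 1-D data."""
    c1, c2 = min(xs), max(xs)           # initialize centers at the extremes
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        if g1: c1 = sum(g1) / len(g1)   # move each center to its group mean
        if g2: c2 = sum(g2) / len(g2)
    mean = sum(xs) / len(xs)
    total = sum((x - mean) ** 2 for x in xs)
    within = sum((x - c1) ** 2 for x in g1) + sum((x - c2) ** 2 for x in g2)
    return 1 - within / total

# Two "regimes" (bimodal) vs a single Gaussian with similar overall spread:
bimodal = [random.gauss(-2, 1) for _ in range(500)] + \
          [random.gauss(+2, 1) for _ in range(500)]
unimodal = [random.gauss(0, 2.2) for _ in range(1000)]

print(two_means_score(bimodal) > two_means_score(unimodal))  # True: regimes cluster better
```

The analogous result in the paper is that only the high-resolution model produces flow statistics whose clustering beats the Gaussian null at a significant level.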

Communicating the role of natural variability in future North American climate

Clara Deser, Reto Knutti, Susan Solomon, Adam Phillips

Abstract.   As climate models improve, decision-makers’ expectations for accurate climate predictions are growing. Natural climate variability, however, poses inherent limits to climate predictability and the related goal of adaptation guidance in many places, as illustrated here for North America. Other locations with low natural variability show a more predictable future in which anthropogenic forcing can be more readily identified, even on small scales. We call for a more focused dialogue between scientists, policymakers and the public to improve communication and avoid raising expectations for accurate regional predictions everywhere.

Citation:  Nature Climate Change, published online 26 October 2012  DOI: 10.1038/NCLIMATE1562  [link]

The abstract and title hide some important scientific results.  Excerpts:

Model projections are inherently uncertain. But the results shown here suggest that often models may disagree because future changes are within the natural variability. Such natural fluctuations in climate should be expected to occur, and these will augment or reduce the magnitude of climate change due to anthropogenic forcing in many parts of the world. Such intrinsic climate fluctuations occur not only on interannual-to-decadal timescales but also over periods as long as 50 years.

Through an examination of a large ensemble of twenty-first century projections produced by the CCSM3 climate model, we have illustrated that even over the next 55 years, natural variability contributes substantial uncertainty to temperature and precipitation trends over North America on local, regional and continental scales, especially in winter at mid and high latitudes. Such uncertainty and regional variation in projected climate change is largely a consequence of the chaotic nature of large-scale atmospheric circulation patterns, and as such is unlikely to be reduced as models improve or as greenhouse-gas trajectories become more certain.

It is worth noting that downscaled information derived statistically or dynamically from global climate model output will add local detail, but remains dependent on the overlying larger-scale field, and cannot mitigate the uncertainty of projected climate trends due to natural climate variability.

JC comment:  The significance of this paper is the use of a large ensemble of simulations from a single model.  The large ensemble produces greater variability (not surprisingly).  While the simulations show that all this natural variability averages out globally (not surprising, since these simulations were not initialized) and produces a steady trend in global average temperatures, the regional and continental scale averages clearly show the dominance of the natural internal variability.  Again, this paper argues that regional downscaling from these global model simulations is not useful, and that there is very large uncertainty on regional scales, even on scales as large as the continental U.S.  It is good to see these authors paying attention to natural internal variability, and not just relegating it to ‘noise’.
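The way internal variability masks a forced trend over ~55 years can be mimicked with a toy ensemble. Every number below (noise level, forced trend, ensemble size) is a made-up illustration, not CCSM3 output:

```python
# Toy version of the Deser et al. point: the forced trend is identical in
# every ensemble member, but internal variability makes the 55-year trend
# in any single realization (think: one region) highly uncertain, while
# averaging across many members (or large areas) recovers the forced signal.
import random
random.seed(1)

YEARS = 55
FORCED = 0.02  # forced warming rate, deg/yr (hypothetical)

def trend(series):
    """Ordinary least-squares slope against the year index."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def member():
    """One realization: the common forced trend plus internal variability."""
    return [FORCED * t + random.gauss(0, 0.3) for t in range(YEARS)]

members = [member() for _ in range(40)]
single_trends = [trend(m) for m in members]
ens_mean = [sum(m[t] for m in members) / len(members) for t in range(YEARS)]

spread = max(single_trends) - min(single_trends)
print(round(trend(ens_mean), 3), round(spread, 3))
```

The ensemble-mean trend lands close to the imposed forced value, while the member-to-member spread of 55-year trends is a substantial fraction of the forced trend itself, which is the irreducible-uncertainty point the paper makes for regional projections.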

JC summary

Each of these papers makes an important contribution to understanding how we should use climate models and make inferences from climate model simulations. The manner in which climate models have been tuned makes them of dubious use in 20th century attribution studies.  In most regions, the climate models have little skill on regional projections of climate change on time scales of 50 years.

Nevertheless, we have the recent recommendations from the NRC National Strategy for Advancing Climate Models, which assumes that climate models are useful for these applications and focuses on supporting decision making.  These papers provide further evidence that climate models are not fit for these purposes, and that we are not currently on a path likely to improve this situation.
