by Judith Curry
Short answer: I’m not sure.
I spent the 1990s attempting to exorcise the climate model uncertainty monster: I thought the answer to improving climate models lay in improving parameterizations of physical processes such as clouds and sea ice (following Randall and Wielicki), combined with increasing model resolution. Circa 2002, my thinking became heavily influenced by Leonard Smith, who introduced me to the complexity and inadequacies of climate models and also ways of extracting useful information from weather and climate model simulations. I began thinking about climate model uncertainty and how it was (or rather, wasn’t) characterized and accounted for in assessments such as the IPCC. A seminal event in the evolution of my thinking on this subject was a challenge I received at Climate Audit to host a thread related to climate models, which increased my understanding of why scientists and engineers from other fields find climate models unconvincing. The Royal Society Workshop on Handling Uncertainty in Science motivated me to become a serious monster detective on the topic of climate models. So far, it seems that the biggest climate model uncertainty monsters are spawned by the complexity monster.
This post provides my perspective on some of the challenges and uncertainties associated with climate models and their applications. I am by no means a major player in the climate modeling community; my expertise and experience are in physical process parameterization, challenging climate models with observations, and extracting useful information from climate model simulations. My perspective is not in the mainstream among the climate community (see this assessment). But I think there are some deep and important issues that aren’t receiving sufficient discussion and investigation, particularly given the high levels of confidence that the IPCC gives to conclusions derived from climate models regarding the attribution of 20th century climate change and climate sensitivity.
I don’t think we can answer the question of what we can learn from climate models without deep consideration of the subject by experts in dynamical systems and nonlinear dynamics, artificial intelligence, mechanical engineering, philosophy of science, and probably other fields. I look forward to such perspectives from the Climate Etc. community.
This discussion is somewhat esoteric, and written for an audience that has some familiarity with computer simulation models. For background on global climate models, I suggest the following references:
For the latest thinking on the topic, I recommend:
- Series on Mathematical and Statistical Approaches to Climate Modeling hosted by the Isaac Newton Institute for Mathematical Sciences
- Special issue in Studies in History and Philosophy of Modern Physics
- Special issue in the Journal of Computational Physics on Predicting Weather, Climate and Extreme Events
- WIREs Interdisciplinary Reviews on Climate Modeling
Modeling the complex global climate system
The target of global climate models is the Earth’s climate system, consisting of the physical (and increasingly, chemical) components of the atmosphere, ocean, land surface, and cryosphere (sea ice and glaciers). When simulating climate, the objective is to correctly simulate the spatial variation of climate conditions in some average sense. There is a hierarchy of different climate models, ranging from simple energy balance models to the very complex general circulation models such as those used by the IPCC. It is this latter category of climate models that provides the focus for this analysis. Such models attempt to account for as many processes as possible to simulate the detailed evolution of the atmosphere, ocean, cryosphere, and land system, at a horizontal resolution that is typically on the order of hundreds of kilometers.
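As an aside, the simple end of this model hierarchy fits in a few lines of code. The sketch below is the standard zero-dimensional energy balance calculation with textbook values; it is purely illustrative and unrelated to any GCM:

```python
# Zero-dimensional energy balance model: the simplest member of the climate
# model hierarchy. Absorbed solar radiation is balanced against blackbody
# emission; all values are standard textbook numbers, for illustration only.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def equilibrium_temperature(solar_constant=S0, albedo=ALBEDO):
    """Effective emission temperature where absorbed = emitted radiation."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

print(round(equilibrium_temperature(), 1))  # about 255 K
```

The roughly 33 K gap between this 255 K emission temperature and the observed ~288 K surface mean is the greenhouse effect, which is precisely the kind of process the complex end of the hierarchy must resolve or parameterize.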
Climate model complexity arises from the nonlinearity of the equations, high dimensionality (millions of degrees of freedom), and the linking of multiple subsystems. Solution of the complex system of equations that comprise a climate model is made possible by the computer. Computer simulations of the climate system can be used to represent aspects of climate that are extremely difficult to observe, experiment with theories in a new way by enabling hitherto infeasible calculations, understand a system of equations that would otherwise be impenetrable, and explore the system to identify unexpected outcomes.
At the heart of climate model complexity lies the nonlinear dynamics of the atmosphere and oceans, described by the Navier-Stokes equations. The solution of the Navier-Stokes equations is one of the most vexing problems in all of mathematics: the Clay Mathematics Institute has named it one of its seven Millennium Prize Problems and offers a $1M prize for its solution.
Chaos and pandemonium
Weather has been characterized as being in a state of deterministic chaos, owing to its sensitivity to initial conditions. The source of the chaos is nonlinearities in the Navier-Stokes equations. A consequence of sensitivity to initial conditions is that beyond a certain time the system will no longer be predictable; for weather this predictability time scale is a matter of weeks.
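Lorenz’s (1963) three-variable convection model is the canonical demonstration of this sensitivity. A minimal sketch (a crude forward-Euler integration, for illustration only) shows two trajectories that start 1e-8 apart becoming macroscopically different:

```python
import numpy as np

# Lorenz (1963) system, the canonical example of sensitivity to initial
# conditions. Forward Euler is a crude integrator, but good enough to show
# two nearly identical initial states diverging to completely different states.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # tiny initial-condition perturbation
max_sep = 0.0
for _ in range(6000):                # ~30 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, float(np.linalg.norm(a - b)))
print(max_sep)  # O(10): the perturbation has grown by ~9 orders of magnitude
```

The separation grows exponentially until it saturates at the diameter of the attractor, which is why individual weather forecasts lose skill after a finite time no matter how small the initial error.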
One way of interpreting climate is as the distribution of states on some ‘attractor’ of weather. Annan and Connolley make the case that weather chaos averages out over time in climate simulations, and so does not preclude predictability in climate simulations. However, climate model simulations are also sensitive to initial conditions (even in an average sense). Coupling of a nonlinear, chaotic atmospheric model to a nonlinear, chaotic ocean model gives rise to something much more complex than the deterministic chaos of the weather model, particularly under conditions of transient forcing.
Coupled atmosphere/ocean modes of internal variability arise on timescales of weeks (e.g. the Madden-Julian Oscillation), years (e.g. ENSO), decades (e.g. NAO, AMO, PDO), and centuries to millennia (the global thermohaline circulation), in addition to abrupt climate change. These coupled modes give rise to bifurcation, instability, and chaos. Such phenomena, arising from transient forcing of the coupled atmosphere/ocean system, defy classification by current theories of nonlinear dynamical systems, in which definitions of chaos and attractor cannot be invoked in situations involving transient changes of parameter values. Stainforth et al. (2007) refer to this situation as “pandemonium.” I’m not sure what this means in the context of nonlinear dynamics, but pandemonium seems like a very apt word for the situation.
Confidence in climate models
Particularly for a model of a complex system, the notion of a correct or incorrect model is not well defined, and falsification is not a relevant issue. The relevant issue is how well the model reproduces reality, i.e. whether the model “works” and is fit for its intended purpose.
In the absence of model verification or falsification, Stainforth et al. (2007) describe the challenges of building confidence in climate models. Owing to the long timescales present, particularly in the ocean component, there is no possibility of a true cycle of model improvement and confirmation, since the life cycle of an individual model version (on the order of a few years) is substantially shorter than the simulation period (on the order of centuries). Model projections of future climate relate to a state of the system that has not previously been experienced; hence it is impossible to calibrate the model for the future climate state or to confirm the usefulness of the forecast.
Confidence in climate models relies on tests of internal consistency and physical understanding of the processes involved, plus comparison of simulations of past climate states with observations. Issues surrounding climate model verification and validation, and the challenges of evaluating climate model simulations against observations, will be the subject of a future post. Failure to reproduce past observations highlights model inadequacies and motivates model improvement, but success in reproducing past states provides only a limited kind of confidence in simulation of future states.
Climate model imperfections
This discussion of model imperfections follows Stainforth et al. (2007). Model imperfection is a general term that describes our limited ability to simulate climate and is categorized here in terms of model inadequacy and model uncertainty. Model inadequacy reflects our limited understanding of the climate system, inadequacies of the numerical solutions employed in computer models, and the fact that no model can be exactly isomorphic to the actual system. Model uncertainty is associated with uncertainty in model parameters and subgrid parameterizations, and also with uncertainty in initial conditions. As such, model uncertainty is a combination of epistemic and ontic uncertainties.
Model structural form is the conceptual modeling of the physical system (e.g. dynamical equations, initial and boundary conditions). In addition to insufficient understanding of the system, uncertainties are introduced as a pragmatic compromise among numerical stability, fidelity to the underlying theories, credibility of results, and available computational resources. One issue in the structural form of a complex system is the selection of subsystems to include, e.g. whether or not to include stratospheric chemistry and ice sheet dynamics.
Issues related to the structural form of the atmospheric dynamical core are of paramount importance in climate models. Staniforth and Wood (2008) give a lucid overview of the construction of the atmospheric dynamical core in global weather and climate models. The dynamical core solves the governing equations for fluid dynamics and thermodynamics on resolved scales, and parameterizations represent subgrid scale processes and other processes not included in the dynamical core (e.g. radiative transfer).
Thuburn (2008) articulates the challenge for a dynamical core of possessing discrete analogues of the conservation properties of the continuous equations. Because a numerical model can have only a small finite number of analogous conservation properties, choices must be made in the design of a numerical scheme as to which conservation properties are most desirable, e.g. mass, momentum and angular momentum, tracer variance and potential enstrophy, energy, potential vorticity. Certain conservation properties may also be incompatible with other desirable aspects of a numerical scheme, such as good wave dispersion properties and computational efficiency. The relative importance of different conservation properties depends on the timescales of their corresponding physical sources and sinks. This particular assessment of Thuburn’s caught my attention:
“Moist processes are strongly nonlinear and are likely to be particularly sensitive to imperfections in conservation of water. Thus there is a very strong argument for requiring a dynamical core to conserve mass of air, water, and long-lived tracers, particularly for climate simulation. Currently most if not all atmospheric models fail to make proper allowance for the change in mass of an air parcel when water vapour condenses and precipitates out. . . However, the approximation will not lead to a systematic long term drift in the atmospheric mass in climate simulations provided there is no long term drift in the mean water content of the atmosphere.”
Given the importance of water vapor and cloud feedbacks in determining simulated climate sensitivity, Thuburn’s analysis sounds a warning bell: mass conservation schemes that seem adequate for numerical weather prediction may be a source of substantial error when applied in climate models.
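The value of conservation by construction can be illustrated with a toy problem that is, to be clear, nothing like a real dynamical core: one-dimensional periodic advection of a tracer by a spatially varying wind, discretized once in flux form (where the interface fluxes telescope, so total tracer is conserved by construction) and once via the product rule (where that cancellation is lost):

```python
import numpy as np

# 1-D periodic advection dq/dt + d(u*q)/dx = 0 with spatially varying wind,
# solved two ways on the same grid. The flux form conserves total tracer by
# construction; the expanded (product-rule) form does not. Toy problem only.
n = 100
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(x)            # wind, positive everywhere
q_flux = np.exp(-(x - np.pi) ** 2)   # initial tracer blob
q_adv = q_flux.copy()
m0 = q_flux.sum()                    # initial total tracer "mass"
dt = 0.2 * dx / u.max()              # CFL-limited time step
dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

for _ in range(200):
    # flux form: interface fluxes telescope over the periodic domain,
    # so q_flux.sum() is conserved to round-off
    flux = u * q_flux                            # upwind donor-cell flux (u > 0)
    q_flux = q_flux - dt / dx * (flux - np.roll(flux, 1))
    # product-rule form u*dq/dx + q*du/dx: same PDE, but the discrete sums
    # no longer telescope, so total tracer drifts
    dqdx = (q_adv - np.roll(q_adv, 1)) / dx      # upwind difference
    q_adv = q_adv - dt * (u * dqdx + q_adv * dudx)

drift_flux = abs(q_flux.sum() - m0) / m0   # round-off level
drift_adv = abs(q_adv.sum() - m0) / m0     # orders of magnitude larger
print(drift_flux, drift_adv)
```

Both schemes have comparable truncation error in the tracer field itself; the difference is that the flux form makes conservation exact regardless of that error, while in the other form the truncation error leaks into the global budget, which is exactly the kind of slow drift that matters for century-scale climate runs but not for a ten-day weather forecast.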
Characterizing model uncertainty
Model uncertainty arises from uncertainty in model structure, model parameters and parameterizations, and initial conditions. Uncertainties in parameter values include uncertain constants and other parameters, subgridscale parameterizations (e.g. boundary layer turbulence, cloud microphysics), and ad hoc modeling to compensate for the absence of neglected factors.
Calibration is necessary to address parameters that are unknown or inapplicable at the model resolution, and also in the linking of submodels. As the complexity, dimensionality, and modularity of a model grows, model calibration becomes unavoidable and an increasingly important issue. Model calibration is accomplished by kludging (or tuning), which is “an inelegant, botched together piece of program; something functional but somehow messy and unsatisfying, a piece of program or machinery which works up to a point” (cited by Winsberg and Lenhard 2010; draft). A kludge required in one model may not be required in another model that has greater structural adequacy or higher resolution.
Continual ad hoc adjustments of the model (calibration) can mask underlying deficiencies in model structural form; Occam’s razor presupposes that the model least dependent on continual ad hoc modification is to be preferred. It should be noted that in a climate model with millions of degrees of freedom, it is impossible to tune the model to provide a correct 4D solution of many variables. This post at realclimate.org addresses some of these issues in more detail.
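A caricature of how calibration can mask structural inadequacy, using the zero-dimensional energy balance relation (my own toy example, not drawn from the references above): "tune" its single free parameter, the planetary albedo, so that the greenhouse-free model reproduces the observed 288 K surface temperature.

```python
# Greenhouse-free energy balance: T = (S*(1 - a) / (4*sigma))**0.25.
# "Calibrate" the albedo a so the model matches the observed surface
# temperature. Toy example only; values are standard textbook numbers.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2
T_OBS = 288.0     # observed global-mean surface temperature, K

# invert the energy balance analytically for the albedo that fits T_OBS
a_tuned = 1.0 - 4.0 * SIGMA * T_OBS ** 4 / S
print(round(a_tuned, 3))  # about -0.15
```

The tuned albedo comes out negative, which is physically impossible: the parameter is no longer describing reflection at all, but compensating for the missing greenhouse effect. In this one-parameter toy the pathology is obvious; in a model with millions of degrees of freedom, an analogous compensation can hide indefinitely.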
Ensemble methods are a brute force approach to representing model uncertainty. Rather than conducting a single simulation, multiple simulations are run that sample some combination of different initial conditions, model parameters and parameterizations, and model structural forms. While the ensemble method used in weather and climate prediction is inspired by Monte Carlo approaches, a traditional Monte Carlo approach far outstrips available computational capacity owing to the very large number of combinations required to fully represent climate model uncertainty. High model complexity and high model resolution preclude large ensembles. Stochastic parameterization methods are being introduced (see this presentation by Tim Palmer) to characterize parameter and parameterization uncertainty, reducing the need to conduct ensemble simulations to explore parameter uncertainty.
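The flavor of a perturbed-parameter/perturbed-initial-condition ensemble can be sketched with a zero-dimensional forcing-feedback model; the model, parameter ranges, and ensemble size below are illustrative assumptions of mine, not taken from any climate model:

```python
import random

# Toy perturbed-parameter ensemble for the zero-dimensional forcing-feedback
# model  C dT/dt = F - lam * T,  where T is the temperature anomaly (K).
# All numbers are illustrative assumptions, not values from any GCM.
random.seed(0)
C = 8.4e8          # effective heat capacity, J m^-2 K^-1 (~200 m ocean mixed layer)
F = 3.7            # radiative forcing, W m^-2 (nominal doubled-CO2 value)
DT = 86400.0 * 30  # one-month time step, s
N_YEARS = 200

def run(lam, t0):
    """Integrate the model forward and return the final temperature anomaly."""
    t = t0
    for _ in range(N_YEARS * 12):
        t += DT * (F - lam * t) / C
    return t

# each member samples an uncertain feedback parameter (W m^-2 K^-1)
# and a slightly perturbed initial state
ensemble = [run(random.uniform(0.8, 2.0), random.gauss(0.0, 0.1))
            for _ in range(100)]
print(min(ensemble), max(ensemble))  # spread approaches the range F/lam
```

Even in this trivial setting, sampling a single uncertain parameter across a plausible range produces more than a factor-of-two spread in the simulated warming; a real climate model has hundreds of such parameters, which is why exhaustive Monte Carlo sampling is out of reach and why stochastic parameterization is attractive.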
Fit for purpose?
George Box famously stated: “All models are wrong, but some are useful.” Some of the purposes for which climate scientists use climate models are:
1. Hypothesis testing and numerical experiments, to understand how the climate system works, including its sensitivity to altered forcing
2. Simulation of present and past states to understand planetary energetics and other complex interactions
3. Atmosphere and ocean state reanalysis, using 4D data assimilation into climate models
4. Attribution of past climate variability and change
5. Attribution of extreme weather events
6. Simulation of plausible and dynamically consistent future states, on timescales ranging from months to decades to centuries
7. Projections of future regional climate variation for use in model-based decision support systems
8. Projections of future risks of extreme weather events
The same climate model configurations are used for this plurality of applications. Choices about model structural form, level of detail in the physical parameterizations, parameter choices, horizontal resolutions, and experimental design (e.g. ensemble configuration and size) have to be made within the constraints of available computer resources. There is continual tension among climate modeling groups about allocation of computer resources to higher model resolutions versus more complex physical parameterizations versus simulation of large ensembles. One alternative is to devote all of the resources to a single best model with increased resolution, improved model parameterizations, and greater model complexity. Another alternative is to use simpler models and conduct a large ensemble simulation with varying initial conditions, varying parameter values and parameterizations, and different model structures. Different applications would be optimized by different choices.
Wendy Parker nails it in this statement:
“Lloyd (2009) contends that climate models are confirmed by various instances of fit between their output and observational data. The present paper argues that what these instances of fit might confirm are not climate models themselves, but rather hypotheses about the adequacy of climate models for particular purposes. This required shift in thinking—from confirming climate models to confirming their adequacy-for-purpose—may sound trivial, but it is shown to complicate the evaluation of climate models considerably, both in principle and in practice.”
My personal opinion is that at this stage of the game, #1 is paramount. This implies the need for a range of model structural forms and parameter/parameterization choices in the context of a large ensemble of simulations. The fitness of current climate models for task #1 is suboptimal, owing to compromises that have been made to optimize for the full range of applications. The design of climate models and their experiments needs renewed consideration for #1 in light of model inadequacy and uncertainties. Once we learn more about climate models and how to design climate model experiments, we may have a realistic chance of effectively tackling regional climate variation and extreme weather events. However, there may be no regional predictability at all, owing to the ontic uncertainty associated with internal modes of natural climate variability. The challenge of predicting emergent extreme weather events seems overwhelming to me; I have no idea whether it is possible.
Atmospheric science has played a leading role in the development and use of computer simulation in scientific endeavors. Computer simulations have transformed the climate sciences, and simulations of future states of weather and climate have important societal applications. Computer simulations have come to dominate climate science and its related fields, often at the expense of the traditional knowledge sources of theoretical analysis and challenging theory with observations. The climate community should be cautioned against overreliance on simulation models in climate research, particularly in view of uncertainties about model structural form. However, given the complexity of the climate problem, climate models are an essential tool for climate research, and they are becoming an increasingly valuable tool for a range of societal applications.
Returning to the question raised in the title of this post, we have learned much from climate models about how the climate system works. But I think the climate modeling enterprise is putting the cart before the horse in terms of attempting a broad range of applications that include prediction of regional climate change, largely driven by needs of policy makers. Before attempting such applications, we need a much more thorough exploration of how we should configure climate models and test their fitness for purpose. An equally important issue is how we should design climate model experiments in the context of using climate models to test hypotheses about how the climate system works, which is a nontrivial issue particularly given the ontic uncertainties. Until we have achieved such an improved understanding, the other applications are premature and are detracting resources (computer and personnel) from focusing on these more fundamental issues.
And finally, we should ponder this statement by Heymann (2010):
Computer simulation in the atmospheric sciences has caused a host of epistemic problems, which scientists acknowledge and with which philosophers and historians are grappling. But historically, practice overruled the problems of epistemology. Atmospheric scientists found and created their proper audiences, which furnished them with legitimacy and authority. Whatever these scientists do, it does not only tell us something about science, it tells us something about the politics and culture within which they thrive. . . The authority with which scientific modeling in climatology in the eighteenth century or numerical weather prediction, climate simulation and other fields of the atmospheric science towards the close of the twentieth century was furnished has raised new questions. How did scientists translate or transform established practices and standards in order to fit to shifted epistemic conditions? . . . How did they manage to reach conceptual consensus in spite of persisting scientific gaps, imminent uncertainties and limited means of model validation? Why, to put the question differently, did scientists develop trust in their delicate model constructions?
This post is envisioned as the first in a series on climate modeling, and I hope to attract several guests to lead threads on this topic. Future topics that I am currently planning include:
- How should we interpret simulations of 21st century climate?
- Assessing climate model attribution of 20th century climate change
- How should we assess and evaluate climate models?
- The challenge of climate model parameterizations
- The value of simple climate models
- Seasonal climate forecasts
- Complexity (guest post)
Moderation note: this is a technical thread. The thread will be tightly moderated for topicality.