Climate Etc.

Implications for climate models of their disagreement with observations

by Judith Curry

How should we interpret the growing disagreement between observations and climate model projections in the first decades of the 21st century?  What does this disagreement imply for the epistemology of climate models?

One issue that I want to raise is the implications of the disagreement between climate models and observations in the 21st century, as per Fig 11.25 from the AR5.

Panel b) indicates that the IPCC views the implication to be that some climate models have a CO2 sensitivity that is too high: they lower the black vertical bar (indicating the likely range from climate models) to account for this.  And they add the ad hoc red stippled range, which has a slightly lower slope and a lowered range consistent with the magnitude of the current model/obs discrepancy.  The implication seems to be that the expected warming over the last decade is lost, but that future warming will continue at the expected (albeit slightly lower) pace.

The existence of disagreement between climate model predictions and observations doesn’t in itself provide any insight into why the disagreement exists, i.e. which aspect(s) of the model are inadequate, owing to the epistemic opacity of knowledge codified in complex models.

What IF?

For the sake of argument, let’s assume (following the stadium wave and related arguments) that the pause continues at least into the 2030s.

Further, it is important to judge the empirical adequacy of the models while accounting for observational noise.  If the pause continues for 20 years (a period for which none of the climate models showed a pause in the presence of greenhouse warming), the climate models will have failed a fundamental test of empirical adequacy.
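To make the ‘observational noise’ point concrete, here is a minimal sketch (in Python, with made-up numbers for the forced trend and the internal variability, not values taken from the AR5 models) of how one might estimate how often a 20-year record would show a zero or negative trend purely by chance:

```python
# Illustrative only: how often would a 20-year record show a zero or
# negative least-squares trend if the underlying forced trend were
# 0.2 C/decade and internal variability were AR(1) noise?  The trend,
# autocorrelation and noise amplitude are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_years = 100_000, 20
years = np.arange(n_years)
forced = 0.02 * years            # assumed forced trend, C per year

# AR(1) internal variability: x_t = phi * x_{t-1} + eps_t
phi, sigma = 0.6, 0.12           # assumed autocorrelation and innovation std (C)
noise = np.zeros((n_trials, n_years))
eps = rng.normal(0.0, sigma, size=(n_trials, n_years))
for t in range(1, n_years):
    noise[:, t] = phi * noise[:, t - 1] + eps[:, t]

# least-squares trend for each synthetic 20-year record
slopes = np.polyfit(years, (forced + noise).T, 1)[0]
print("fraction of 20-year records with trend <= 0:", np.mean(slopes <= 0.0))
```

If that fraction is negligible, a sustained 20-year pause counts as evidence against the assumed forced trend; if it is appreciable, the pause is compatible with internal variability.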

Does this mean climate models are ‘falsified’?  Matt Briggs has a current post that is relevant, entitled Why Falsifiability, Though Flawed, Is Alluring.

You have it by now: if the predictions derived from a theory are probabilistic, then the theory can never be falsified. This is so even if the predictions have very, very small probabilities. If the prediction (given the theory) is that X will only happen with probability ε (for those less mathematically inclined, ε is as small as you like but always > 0), and X happens, then the theory is not falsified. Period. Practically false is (as I like to say) logically equivalent to practically a virgin.

With a larger ensemble, perhaps there would be ‘some’ simulations with a 20+ year pause.
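The ensemble-size point can be made with simple arithmetic: if a 20-year pause occurs with probability p in any one simulation, the chance that at least one member of an N-member ensemble shows such a pause is 1 - (1 - p)^N.  A quick sketch, with p and N as made-up placeholders:

```python
# Illustrative only: probability that at least one of N ensemble members
# shows a 20-year pause, given an assumed per-simulation probability p.
p = 0.01                          # assumed, for illustration
for N in (30, 100, 300):
    print(N, round(1 - (1 - p) ** N, 3))
```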

If the pause does indeed continue for another 2+ decades, then this arguably means that the predicted time evolution of the climate, on timescales of 3+ decades, has been falsified.  This then brings us back to the ‘time of emergence’ issue, i.e. whether the climate models are fit for the purpose of transient climate predictions, rather than merely estimating equilibrium climate sensitivity.
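For readers unfamiliar with ‘time of emergence’: it is usually defined as the time at which a forced signal first rises above the noise of internal variability by some chosen margin.  A minimal sketch, with placeholder numbers rather than model output:

```python
# Minimal sketch of the 'time of emergence' idea: the first year in which an
# assumed forced signal exceeds a chosen multiple of the internal-variability
# standard deviation.  Trend and noise values below are placeholders.
def time_of_emergence(trend_per_decade, noise_std, threshold=2.0, max_years=200):
    for year in range(1, max_years + 1):
        if trend_per_decade * year / 10.0 > threshold * noise_std:
            return year
    return None

print(time_of_emergence(trend_per_decade=0.2, noise_std=0.15))  # -> 16
```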

If the climate models are not fit for the purpose of transient climate projections, and they are not fit for the purpose of simulating or projecting regional climate variability, what are they fit for?  Estimation of equilibrium climate sensitivity?  Maybe, but nearly all signs point to climate model sensitivity being systematically too high.  Well, ok, the climate models aren’t perfect, but it is argued that we are moving on a path that will make climate models fit for these purposes, as per the National Strategy for Advancing Climate Models.  Increasing model resolution, etc., is not going to improve the situation, IMO.

The argument is then made that climate models were really designed as research tools, to explore and understand climate processes.  Well, we have long since reached the point of diminishing returns from climate models in terms of actually understanding how the climate system works; we are limited not just by the deficiencies of the climate models themselves, but also by the fact that the models are computationally very expensive and not user friendly.

So, why are so many resources being invested in climate models?  A very provocative paper by Shackley et al. addresses this question:

“In then addressing the question of how GCMs have come to occupy their dominant position, we argue that the development of global climate change science and global environmental ‘management’ frameworks occurs concurrently and in a mutually supportive fashion, so uniting GCMs and environmental policy developments in certain industrialised nations and international organisations. The more basic questions about what kinds of commitments to theories of knowledge underpin different models of ‘complexity’ as a normative principle of ‘good science’ are concealed in this mutual reinforcement. Additionally, a rather technocratic policy orientation to climate change may be supported by such science, even though it involves political choices which deserve to be more widely debated.”

If the discrepancy between climate model projections and observations continues to grow, will the gig be up in terms of the huge amounts of funding spent on general circulation/earth system models?
