by Judith Curry
Are climate models the best tools? A recent Ph.D. thesis from The Netherlands provides strong arguments for ‘no’.
The usefulness of GCM climate models, particularly for projections, attribution studies, and impact assessments has been the topic of numerous Climate Etc. posts:
- Model structural uncertainty – are GCMs the best tools?
- Limits of climate models for adaptation decision making
- Philosophical reflections on climate model projections
- What can we learn from climate models? Part II
- Broadening the portfolio of climate information
A new thesis from The Netherlands (pointed out to me by Jeroen van der Sluijs) provides a very interesting and unique perspective on this topic.
The Robustness of the Climate Modelling Paradigm, Ph.D. thesis by Alexander Bakker from The Netherlands [link]
The context for the thesis is related in the Preface:
In 2006, I joined KNMI to work on a project “Tailoring climate information for impact assessments”. I was involved in many projects often in close cooperation with professional users.
In most of my projects, I explicitly or implicitly relied on General Circulation Models (GCMs) as the most credible tool to assess climate change for impact assessments. Yet, in the course of time, I became concerned about the dominant role of GCMs. During my almost eight-year employment, I was regularly confronted with large model biases. In virtually all cases, the model bias appeared larger than the projected climate change, even for mean daily temperature. It was my job to make something 'useful' and 'usable' from those biased data. More and more, I started to doubt that the 'climate modelling paradigm' can provide 'useful' and 'usable' quantitative estimates of climate change.
After finishing four peer-reviewed articles, I concluded that I could no longer defend one of the major principles underlying the work. Therefore, my supervisors, Bart van den Hurk and Janette Bessembinder, and I agreed to start again on a thesis that intends to explain the caveats of the 'climate modelling paradigm' that I had been working in for the last eight years and to give direction to alternative strategies to cope with climate-related risks. This was quite a challenge. After one year of hard work, a manuscript had formed that I was proud of, that I could defend, and that had my supervisors' approval. Yet, the reading committee thought differently.
According to Bart, he has never supervised a thesis that received so many critical comments. Many of my propositions appeared too bold and needed some nuance and better embedding within the existing literature. On the other hand, working exactly on the data-related intersection between the climate and impact communities may have given me a unique position where contradictions and non-trivialities of working in the 'climate modelling paradigm' typically come to light. Also, not being familiar with the complete relevant literature may have been an advantage. In this way, I could authentically focus on the 'scientific adequacy' of climate assessments and on the 'non-trivialities' of translating the scientific information to user applications, biased solely by my daily practice.
The thesis is in two Parts: Part I addresses the philosophy of climate modeling, while Part II describes specific impact assessment studies. I excerpt here some text from Part I of the thesis:
In the light of this controversy about the dominant role of GCMs, it might be questioned whether the state-of-the-art fully coupled AOGCMs really are the only credible tools in play. Are there credible methods for the quantitative estimation of climate response at all? And more importantly, does the current IPCC approach with large multi-model ensembles of AOGCM simulations guarantee a range of plausible climate change that is relevant for (robust) decisions?
Another important consideration is the expense. Apart from the very large computation time (and costs), the post-processing and storage of the huge amounts of data demand much of the intellectual capacity of the researchers involved. That capacity is then no longer available for interpretation and creativity. This might come at the expense of the framing and communication of uncertainties, and of the quality of some doctoral dissertations.
The ’climate modelling paradigm’ is characterized by two major axioms:
- More comprehensive models that incorporate more physics are considered more suitable for climate projections and climate change science than their simpler counterparts because they are thought to be better capable of dealing with the many feedbacks in the climate system. With respect to climate change projections they are also thought to optimally project consistent climate change signals.
- Model results that confirm earlier model results are perceived as more reliable than model results that deviate from earlier results. Especially the confirmation of the earlier projected Equilibrium Climate Sensitivity between 1.5°C and 4.5°C seems to increase the perceived credibility of a model result. Mutual confirmation of models (simple or complex) is often referred to as 'scientific robustness'.
This chapter explores the legitimacy and tenability of the 'climate modelling paradigm'. It is not intended to advocate other methods as better, nor to completely disqualify the use of GCMs. Rather, it aims to explore what determines this perception of GCMs as the superior tools and to assess the scientific foundation for this perception. First, section 2.2 explains the origin of the paradigm and illustrates that the paradigm is mainly based on the great prospects of early climate change scientists. Then section 2.4 elaborates on the pitfalls of fully relying on physics. Subsequently, section 2.3 argues that empirical evidence for the perceived GCM superiority is weak. Thereafter, section 2.5 argues that biased models cannot provide an internally consistent (and plausible) climate response, which is especially problematic for local and regional climate projections. Next, the independence of the multiple 'lines of evidence' is treated in section 2.6. Finally, in section 2.7 it is concluded that the climate modelling paradigm is in crisis.
The state-of-the-art fully coupled AOGCMs do not provide independent evidence for human-induced climate change. GCM-based multi-model ensembles are likely to be (implicitly) tuned to earlier results. The confirmation of earlier results by GCMs is therefore no reason for higher confidence. The confidence in the GCMs originates primarily from the fact that, after extensive tuning of the feedbacks and other processes, a radiative balance is found at the Top of Atmosphere. This is indeed quite an achievement, but the tuning usually provides only one of countless solutions. Multi-model ensembles tuned to a particular response give only limited insight into the possible range of outcomes. Besides, the GCMs include only a limited selection of potentially important feedbacks, and sometimes artefacts have to be incorporated to close the radiative balance.
The founding assessments of Charney et al. (1979) and Bolin et al. (1986) did see the great potential of future GCMs, but based their likely-range of ECS on expert judgment and simple mechanistic understanding of the climate system. And even today, the IPCC acknowledges that the model spread (notably of multi-model ensembles) is only a crude measure for uncertainty because it does not take model quality and model interdependence into account. Nevertheless, in practice, GCMs are often applied as a ’pseudo-truth’.
The paradigm that GCMs are the superior tools for climate change assessments and that multi-model ensembles are the best way to explore epistemic uncertainty has lasted for many decades and still dominates global, regional and national climate assessments. Studies based on simpler models than the state-of-the-art GCMs or studies projecting climate response outside the widely accepted range have always received less credence. In later assessments, the confirmation of old results has been perceived as an additional line of evidence, but likely the new studies have been (implicitly) tuned to match earlier results.
Shortcomings, like the huge biases and ignorance of potentially important mechanisms, have been routinely and dutifully reported, but a rosy presentation has generally prevailed. Large biases seriously challenge the internal consistency of the projected change, and consequently they challenge the plausibility of the projected climate change.
Most climate change scientists are well aware of this, and a feeling of discomfort is taking hold of them. Expression of the contradictions is often countered not with arguments but with annoyance, and experienced as non-constructive. "What else?" or "Decision makers do need concrete answers" are often-heard phrases. The 'climate modelling paradigm' is in 'crisis'. It is just a new paradigm we are waiting for.
I was gratified to see three of my recent papers in his reference list:
- Climate science and the uncertainty monster
- Reasoning about climate uncertainty
- Nullifying the climate null hypothesis
There isn’t much in Part I of the thesis that is new or hasn’t been discussed elsewhere. His discussion of model ‘tuning’ – particularly implicit tuning – is very good. I also particularly like his section ‘Lines of evidence or circular reasoning’.
However, the remarkable aspect of this to me is that the ‘philosophy of climate modeling’ essay was written not by a philosopher of science or a climate modeler, but by a scientist working in the area of applied climatology. His experiences in climate change impact assessments provide a unique perspective for this topic. The thesis provides a very strong argument that GCM climate models are not fit for the purpose of regional impact assessments.
I was very impressed by Bakker’s intellectual integrity and courage in tackling this topic in the 11th hour of completing his Ph.D. thesis. I am further impressed by his thesis advisors and committee members for allowing and supporting this. Bakker notes many critical comments from his committee members. When I checked the list of committee members, one name jumped out at me – Arthur Petersen, a philosopher of science who has written about climate models. I suspect that the criticisms were more focused on strengthening the arguments than on ‘alarm’ over an essay that criticizes climate models. Kudos to the KNMI.
I seriously doubt that such a thesis would be possible in an atmospheric/oceanic/climate science department in the U.S. – whether the student would dare to tackle this, whether a faculty member would agree to supervise this, and whether a committee would ‘pass’ the thesis.
Bakker’s closing statement:
The ’climate modelling paradigm’ is in ’crisis’. It is just a new paradigm we are waiting for.
I have made several suggestions re ways forward, contained in these posts: