by Judith Curry
Last October, I introduced this topic in Part I and followed up with Part II and Part III, which formed an early draft of an argument I was using in a paper entitled “Climate Science and the Uncertainty Monster.” I have now gotten the reviews back on the paper; this post is a draft of the revised version of that particular section.
5. Uncertainty in the attribution of 20th century climate change
“Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk.” John von Neumann
Arguably the most important conclusion of the IPCC AR4 is the following statement: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.” The IPCC’s conclusion on attribution is reached using probabilistic causation, whereby an ensemble of simulations is used to evaluate agreement between observations and simulations conducted with and without anthropogenic forcing. Formal Bayesian reasoning is used to some extent by the IPCC in analyzing detection and attribution. The reasoning process used by the IPCC in assessing likelihood in its attribution statement is described by this statement from the AR4 (Chapter 9):
“The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgment is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change. The assessment approach used in this chapter is to consider results from multiple studies using a variety of observational data sets, models, forcings and analysis techniques. The assessment based on these results typically takes into account the number of studies, the extent to which there is consensus among studies on the significance of detection results, the extent to which there is consensus on the consistency between the observed change and the change expected from forcing, the degree of consistency with other types of evidence, the extent to which known uncertainties are accounted for in and between studies, and whether there might be other physically plausible explanations for the given climate change. Having determined a particular likelihood assessment, this was then further downweighted to take into account any remaining uncertainties, such as, for example, structural uncertainties or a limited exploration of possible forcing histories of uncertain forcings. The overall assessment also considers whether several independent lines of evidence strengthen a result.”
The above statement raises a number of questions about the IPCC’s attribution statement. Given that “very likely” corresponds to a probability of at least 90%, what was the original likelihood assessment from which this apparently minimal downweighting occurred? How is this minor downweighting justified in the context of substantial uncertainties in the forcings and in the models themselves? What other physically plausible explanations for the given climate change were considered? And finally, what does “most” mean – 51% or 99%? A high likelihood attached to the imprecise “most” seems rather meaningless. From the IAC: “In the Committee’s view, assigning probabilities to imprecise statements is not an appropriate way to characterize uncertainty.”
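To make the style of probabilistic causation concrete before evaluating it, here is a minimal sketch of attribution framed as a Bayesian likelihood ratio between forced and unforced ensembles. All numbers below (the observed trend, the ensemble trend distributions, the neutral prior) are hypothetical placeholders, not values from the AR4; the point is that the posterior is only as good as the assumed ensemble distributions, which is exactly where the forcing and model uncertainties discussed below enter.

```python
from scipy import stats

# Minimal sketch of probabilistic attribution as a Bayesian likelihood ratio.
# All numbers here are hypothetical placeholders, not AR4 values.
obs_trend = 0.012  # observed 50-year trend, degC/yr (hypothetical)

# Trend distributions implied by two ensembles, approximated as Gaussians:
# natural forcing only vs. natural + anthropogenic forcing (synthetic).
nat_only = stats.norm(loc=0.001, scale=0.004)
nat_plus_anthro = stats.norm(loc=0.011, scale=0.004)

prior = 0.5  # neutral prior on the anthropogenic hypothesis
like_a = nat_plus_anthro.pdf(obs_trend)
like_n = nat_only.pdf(obs_trend)
posterior = like_a * prior / (like_a * prior + like_n * (1 - prior))
print(f"posterior probability of the anthropogenic hypothesis: {posterior:.3f}")
# The result is entirely conditioned on the assumed ensemble distributions;
# errors in forcing or in simulated internal variability propagate directly.
```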
This section critically evaluates the IPCC’s attribution argument in the context of uncertainty.
5.1 IPCC’s detection and attribution argument
“What we observe is not nature itself, but nature exposed to our method of questioning.” Werner Karl Heisenberg
The problem of attributing climate change is intimately connected with the detection of climate change. A change in the climate is ‘detected’ if its likelihood of occurrence by chance due to internal variability alone is determined to be small. Knowledge of internal climate variability is needed for both detection and attribution. Because the instrumental record is too short to give a well-constrained estimate of internal variability, internal climate variability is usually estimated from long control simulations from coupled climate models. The IPCC AR4 formulates the problem of attribution to be: “In practice attribution of anthropogenic climate change is understood to mean demonstration that a detected change is ‘consistent with the estimated responses to the given combination of anthropogenic and natural forcing’ and ‘not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of the given combination of forcings’.”
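The detection step can be illustrated with a toy significance test. In the sketch below, “internal variability” comes from a synthetic AR(1) series standing in for a long unforced control simulation, and the observed trend value is hypothetical. Note that in practice the null distribution itself comes from a model control run, which is precisely the dependence examined in section 5.3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a long unforced control run: AR(1) noise mimicking internal
# variability of global-mean temperature (synthetic, illustrative only).
def ar1_series(n_years, phi=0.6, sigma=0.1):
    x = np.zeros(n_years)
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

control = ar1_series(2000)
window = 50  # detection window, years
years = np.arange(window)

# Null distribution: 50-year trends produced by internal variability alone.
null_trends = np.array([np.polyfit(years, control[i:i + window], 1)[0]
                        for i in range(len(control) - window)])

obs_trend = 0.012  # hypothetical observed trend, degC/yr

# 'Detection': how often does unforced variability produce a trend this large?
p_value = np.mean(np.abs(null_trends) >= obs_trend)
print(f"p-value under the internal-variability null: {p_value:.3f}")
```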
The IPCC AR4 (WG I, Chapter 9) describes two types of simulations that have been used in detection and attribution studies, as sketched below. The first method is a ‘forward calculation’ that uses best estimates of external changes in the climate system (forcings) to simulate the response of the climate system using a climate model. These ‘forward calculations’ are then directly compared with the observed changes in the climate system. The second method is an ‘inverse calculation’, whereby the magnitude of uncertain model parameters and of the applied forcing is varied in order to provide a best fit to the observational record.
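The distinction can be made concrete with a toy one-box energy-balance model. Everything below is synthetic (the forcing ramps, sensitivity, time constant, and “observations” are all made up); the sketch only illustrates the logic: a forward calculation fixes the forcing a priori and then compares, whereas an inverse calculation tunes an uncertain amplitude (here, an aerosol scaling factor) until the model matches the record.

```python
import numpy as np

years = np.arange(1900, 2001)
n = len(years)

# Hypothetical forcing time series (W/m^2): a GHG ramp and an aerosol ramp
# whose amplitude is uncertain. Shapes and magnitudes are invented.
f_ghg = np.linspace(0.0, 2.5, n)
f_aer = -np.linspace(0.0, 1.0, n)

def ebm(forcing, sensitivity=0.5, tau=10.0):
    """One-box energy balance: dT/dt = (sensitivity*F - T)/tau (Euler step)."""
    temp = np.zeros(n)
    for t in range(1, n):
        temp[t] = temp[t - 1] + (sensitivity * forcing[t] - temp[t - 1]) / tau
    return temp

# Synthetic 'observations': a true aerosol scaling of 0.7 plus noise.
rng = np.random.default_rng(1)
obs = ebm(f_ghg + 0.7 * f_aer) + rng.normal(0.0, 0.05, n)

# Forward calculation: fix the best-estimate forcing a priori, then compare.
forward = ebm(f_ghg + f_aer)
forward_rmse = np.sqrt(np.mean((forward - obs) ** 2))

# Inverse calculation: tune the aerosol scaling to best fit the observations.
scales = np.linspace(0.0, 2.0, 201)
errors = [np.mean((ebm(f_ghg + s * f_aer) - obs) ** 2) for s in scales]
best = scales[int(np.argmin(errors))]
print(f"forward-run RMSE: {forward_rmse:.3f} degC")
print(f"aerosol scaling recovered by inverse fit: {best:.2f}")
```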
The IPCC’s detection and attribution analysis, which is the basis of the “very likely” attribution statement in the AR4, is based upon the following argument:
- Detection. Climate change in the latter half of the 20th century is detected based upon an increase in global surface temperature anomalies that is much larger than can be explained by natural internal variability.
- Confidence in detection. The quality of agreement between model simulations with 20th century forcing and observations supports the likelihood that models are adequately simulating the magnitude of natural internal variability on decadal to century time scales. From the IPCC AR4: “However, models would need to underestimate variability by factors of over two in their standard deviation to nullify detection of greenhouse gases in near-surface temperature data, which appears unlikely given the quality of agreement between models and observations at global and continental scales (Figures 9.7 and 9.8) and agreement with inferences on temperature variability from NH temperature reconstructions of the last millennium.”
- Attribution. Climate model simulations for the 20th century climate that combine natural and anthropogenic forcing agree much better with observations than simulations that include only natural forcing. From the IPCC AR4: “The fact that climate models are only able to reproduce observed global mean temperature changes over the 20th century when they include anthropogenic forcings, and that they fail to do so when they exclude anthropogenic forcings, is evidence for the influence of humans on global climate.”
- Confidence in attribution. Detection and attribution results based on several models or several forcing histories suggest that the attribution of a human influence on temperature change during the latter half of the 20th century is a robust result. From the IPCC AR4: “Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.”
Whereas all of the IPCC AR4 models agree that the warming observed since 1970 can only be reproduced using anthropogenic forcings, the models disagree on the relative importance of solar, volcanic, and aerosol forcing in the earlier part of the 20th century (IPCC AR4 WGI Section 9.4.1). The substantial warming during the period 1910-1940 has been attributed by nearly all of the modeling groups to some combination of increasing solar irradiance and a lack of major volcanic activity. The cooling and leveling off of average global temperatures during the 1950s and 1960s is attributed primarily to aerosols from fossil fuels and other sources, a period when greenhouse warming was overwhelmed by aerosol cooling.
5.2 Sources of uncertainty
“Not only does God play dice, but sometimes he throws the dice where we can’t see them.” Stephen Hawking
Attribution of observed climate change by comparing simulated and observed responses is affected by errors and uncertainties in the prescribed external forcing and in the model’s ability to simulate both the response to the forcing (sensitivity) and decadal-scale natural internal variability.
Uncertainties in the model and forcing are acknowledged by the AR4 (Chapter 9): “Ideally, the assessment of model uncertainty should include uncertainties in model parameters (e.g., as explored by multi-model ensembles), and in the representation of physical processes in models (structural uncertainty). Such a complete assessment is not yet available, although model intercomparison studies (Chapter 8) improve the understanding of these uncertainties. The effects of forcing uncertainties, which can be considerable for some forcing agents such as solar and aerosol forcing (Section 9.2), also remain difficult to evaluate despite advances in research.”
The level of scientific understanding of radiative forcing is ranked by the AR4 (Table 2.11) as high only for the long-lived greenhouse gases, and as low for solar irradiance, aerosol effects, stratospheric water vapor from CH4, and jet contrails. Radiative forcing time series for the natural forcings (solar, volcanic aerosol) are reasonably well known for the past 25 years (although these forcings continue to be debated), with estimates further back in time having increasingly large uncertainties.
Based upon new and more reliable solar reconstructions, the AR4 (Section 2.7.1.2) concluded that the increase in solar forcing during the period 1900-1980 used in the TAR reconstructions is questionable, and that the direct radiative forcing due to the increase in solar irradiance is reduced substantially relative to the TAR. However, consideration of Table S9.1 in the AR4 shows that each climate model used outdated solar forcing (from the TAR) that was assessed to substantially overestimate the magnitude of the trend in solar forcing prior to 1980. The IPCC AR4 states: “While the 11-year solar forcing cycle is well documented, lower-frequency variations in solar forcing are highly uncertain,” and “Large uncertainties associated with estimates of past solar forcing (Section 2.7.1) and omission of some chemical and dynamical response mechanisms make it difficult to reliably estimate the contribution of solar forcing to warming over the 20th century.”
The greatest uncertainty in radiative forcing is associated with aerosols, particularly the aerosol indirect effect whereby aerosols influence cloud radiative properties. Consideration of Figure 2.20 of the AR4 shows that, given the uncertainty in aerosol forcing, the magnitude of the aerosol forcing (which is negative, or cooling) could rival the forcing from long-lived greenhouse gases (positive, or warming). The 20th century aerosol forcing used in most of the AR4 model simulations (Section 9.2.1.2) relies on inverse calculations of aerosol optical properties to match climate model simulations with observations. The only constraint on the aerosol forcing used in the AR4 attribution studies is that the derived forcing should be within the bounds of forward calculations that determine aerosol mass from chemical transport models, using satellite data as a constraint. The inverse method effectively makes aerosol forcing a tunable parameter (kludge) for the model, particularly in the pre-satellite era. Further, key processes associated with the interactions between aerosols and clouds are either neglected or treated with simple parameterizations in the climate model simulations evaluated in the AR4.
Given the large uncertainties in forcings and model inadequacies in dealing with these forcings, how is it that each model does a credible job of tracking the 20th century global surface temperature anomalies (AR4 Figure 9.5)? Schwartz (2004) notes that the intermodel spread in modeled temperature trend, expressed as a fractional standard deviation, is much less than the corresponding spread in either model sensitivity or aerosol forcing, and this comparison does not even consider differences in solar and volcanic forcing. The agreement is accomplished through inverse calculations, whereby modeling groups select the forcing data sets and model parameters that produce the best agreement with observations. While some modeling groups may have conducted bona fide forward calculations, without any a posteriori selection of forcing data sets and model parameters to fit the 20th century time series of global surface temperature anomalies, documentation of each model’s tuning procedure and of the rationale for selecting particular forcing data sets is not generally available.
The inverse calculations can mask variations in sensitivity among the different models. If a model’s sensitivity is high, it is likely to require greater aerosol forcing to counter the greenhouse warming, and vice versa for a low model sensitivity. Schwartz (2004) argues that uncertainties in aerosol forcing must be reduced at least three-fold for uncertainty in climate sensitivity to be meaningfully reduced and bounded. Further, kludging and neglect of ontic uncertainty in the tuning can result in a model that is over- or under-sensitive to certain types or scales of forcing.
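A toy calculation shows how inverse tuning can mask sensitivity differences. Reusing the one-box model sketched in section 5.1 with made-up numbers, a high-sensitivity model paired with strong aerosol cooling and a low-sensitivity model paired with weak aerosol cooling reproduce the same 20th century trajectory, because the fit only constrains the product of sensitivity and net forcing.

```python
import numpy as np

years = np.arange(1900, 2001)
n = len(years)
f_ghg = np.linspace(0.0, 2.5, n)   # synthetic GHG forcing ramp (W/m^2)
f_aer = -np.linspace(0.0, 1.0, n)  # synthetic aerosol forcing shape (W/m^2)

def ebm(forcing, sensitivity, tau=10.0):
    temp = np.zeros(n)
    for t in range(1, n):
        temp[t] = temp[t - 1] + (sensitivity * forcing[t] - temp[t - 1]) / tau
    return temp

# Two 'models' tuned by inverse calculation to the same record:
# high sensitivity offset by strong aerosol cooling, and vice versa.
t_high = ebm(f_ghg + 1.3 * f_aer, sensitivity=0.75)  # 0.75 * (2.5 - 1.3) = 0.9
t_low = ebm(f_ghg + 0.5 * f_aer, sensitivity=0.45)   # 0.45 * (2.5 - 0.5) = 0.9

# With these proportional ramps the trajectories are indistinguishable,
# despite the first model being two-thirds more sensitive than the second.
print(f"max trajectory difference: {np.max(np.abs(t_high - t_low)):.4f} degC")
# With real, non-proportional forcing histories the compensation is only
# approximate, but the tuned trajectories can still be very close.
```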
With regard to the ability of climate models to simulate natural internal variability on decadal time scales, “there has been little work evaluating the amplitude of the main modes of Pacific decadal variability in [coupled climate models]” (IPCC AR4, Chapter 8). Whereas most climate models simulate something that resembles the Meridional Overturning Circulation (MOC), “the mechanisms that control the variations in the MOC are fairly different across the ensemble of [coupled climate models].”
5.3 Bootstrapped plausibility
“If it was so, it might be, and if it were so, it would be; but as it isn’t it ain’t. That’s logic!” Charles Lutwidge Dodgson (Lewis Carroll)
‘Bootstrapped plausibility’ (Agassi 1974) occurs when a proposition that has been rendered plausible then lends plausibility to some of its more doubtful supporting arguments. As such, bootstrapped plausibility occurs in the context of circular reasoning, which is fallacious due to a flawed logical structure whereby the proposition to be proved is implicitly or explicitly assumed in one of the premises. This subsection argues that the IPCC’s detection and attribution arguments involve circular reasoning, and that confidence in the evidence and argument is elevated by bootstrapped plausibility.
Consider again the following argument:
- Detection. Climate change in the latter half of the 20th century is detected based upon an increase in global surface temperature anomalies that is much larger than can be explained by natural internal variability.
- Confidence in detection. The quality of agreement between model simulations with 20th century forcing and observations supports the likelihood that models are adequately simulating the magnitude of natural internal variability on decadal to century time scales.
- Attribution. Climate model simulations for the 20th century climate that combine natural and anthropogenic forcing agree much better with observations than simulations that include only natural forcing.
- Confidence in attribution. Detection and attribution results based on several models or several forcing histories suggest that the attribution of a human influence on temperature change during the latter half of the 20th century is a robust result.
The strong agreement between forced climate model simulations and observations for the 20th century (premise #3) provides bootstrapped plausibility to the models and the external forcing data. This strong agreement depends heavily on inverse modeling, whereby forcing data sets and/or model parameters are selected based upon the agreement between models and the time series of 20th century observations. Further confidence in the models is provided by premise #4, even though the consistency across different models and forcing data sets arises from inverse calculations in which forcings and model parameters were selected to match the 20th century time series of global surface temperature anomalies. This agreement is used to argue that “Detection and attribution studies using such simulations suggest that results are not very sensitive to moderate forcing uncertainties.”
Confidence in the climate models, elevated by inverse calculations and bootstrapped plausibility, is used as a central premise in the argument that climate change in the latter half of the 20th century is much larger than can be explained by natural internal variability (premise #1). Premise #1 underlies the IPCC’s assumption (AR4, Chapter 9) that “Global mean and hemispheric-scale temperatures on multi-decadal time scales are largely controlled by external forcings” and not natural internal variability. The IPCC’s argument has effectively eliminated multi-decadal natural internal variability as a causative factor for 20th century climate change. Whereas each model demonstrates some sort of multidecadal variability (which may or may not be of a reasonable amplitude or associated with the appropriate mechanisms), the ensemble averaging process filters out the simulated natural internal variability, since there is no temporal synchronization in the simulated chaotic internal oscillations among the different ensemble members.
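The filtering effect of ensemble averaging on unsynchronized internal variability is easy to demonstrate. In the synthetic sketch below, each ensemble member carries an identical-amplitude ~60-year oscillation with a random phase; the oscillation survives in every member but is damped by roughly 1/sqrt(N) in the ensemble mean.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(150)  # years
period = 60.0       # multidecadal internal oscillation (~60 yr), synthetic

# Each member carries the same-amplitude oscillation with a random,
# unsynchronized phase, mimicking chaotic internal variability.
n_members = 20
members = np.array([np.sin(2 * np.pi * t / period + rng.uniform(0, 2 * np.pi))
                    for _ in range(n_members)])
ensemble_mean = members.mean(axis=0)

print(f"single-member amplitude (std): {members[0].std():.2f}")
print(f"ensemble-mean amplitude (std): {ensemble_mean.std():.2f}")
# The mean retains only ~1/sqrt(20) of the oscillation, so comparing the
# ensemble mean to observations understates simulated internal variability.
```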
The IPCC’s detection and attribution method is meaningful to the extent that the models agree with observations against which they were not tuned and to the extent that the models agree with each other in terms of attribution mechanisms. The AR4 has demonstrated that greenhouse forcing is a plausible explanation for warming in the latter half of the 20th century, but it cannot rule out substantial warming from other causes, such as solar forcing and internal multi-decadal ocean oscillations, owing to the circular reasoning and to the lack of convincing attribution mechanisms for the warming during 1910-1940 and the cooling during the 1940s and 1950s.
Avoiding bootstrapped plausibility and circular reasoning in detection and attribution arguments can be accomplished by:
- using the same best estimate of forcing components, from observations or forward modeling, for multi-model ensembles;
- conducting tests with a single model of the sensitivity to uncertainties associated with the forcing data sets;
- improving understanding of multi-decadal natural internal variability and of the models’ ability to simulate its magnitude; and
- improving detection and attribution schemes to account for the models’ inability to simulate the timing of phases of natural internal oscillations and the meridional overturning circulation.
The experimental design being undertaken for the CMIP5 simulations to be used in the IPCC AR5 shows some improvements that should eliminate some of the circular reasoning in the AR4 associated with using climate model results to determine 20th century attribution. In the CMIP5 simulations, the use of best estimates of forcing for solar and aerosols is recommended. The NCAR Community Climate System Model 20th century simulations for CMIP5 (Gent et al. 2011) arguably qualify as a completely forward calculation, with forcing data sets selected a priori and no tuning of parameters to the 20th century climate other than the sea ice albedo and the low cloud relative humidity threshold. The result of NCAR’s CMIP5 calculations is that after 1970 the simulated surface temperature increases faster than the data, so that by 2005 the model anomaly is 0.4°C larger than the observed anomaly. Understanding this disagreement should provide an improved understanding of the model uncertainties and of the uncertainties in the attribution of the recent warming. This disagreement implies that the detection and attribution argument put forth in the AR4, which was fundamentally based upon the good agreement between models and observations, will not work in the context of at least some of the CMIP5 simulations.
5.4 Logic of the attribution statement
“Often, the less there is to justify a traditional custom, the harder it is to get rid of it.” Mark Twain
Over the course of the four IPCC assessments, the attribution statement has evolved in the following way:
- FAR: “The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability. Thus the observed increase could be largely due to this natural variability; alternatively this variability and other human factors could have offset a still larger human-induced greenhouse warming. The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.”
- SAR: “The balance of evidence suggests a discernible human influence on global climate.”
- TAR: “There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.”
- AR4: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
The attribution statements have evolved from “discernible” in the SAR to “most” in the TAR and AR4. The statements are qualitative and imprecise in their use of words such as “discernible” and “most,” and the AR4 attribution statement is qualified with a “very likely” likelihood. As stated previously by the IAC, assigning probabilities to imprecise statements is not an appropriate way to characterize uncertainty.
The utility of the IPCC’s attribution statement is aptly summarized by this quote from a document discussing climate change and national security:
“For the past 20 years, scientists have been content to ask simply whether most of the observed warming was caused by human activities. But is the percentage closer to 51 percent or to 99 percent? This question has not generated a great deal of discussion within the scientific community, perhaps because it is not critical to further progress in understanding the climate system. In the policy arena, however, this question is asked often and largely goes unanswered.”
The logic of the IPCC AR4 attribution statement is discussed by Curry (2011b). Curry argues that the attribution argument cannot be well formulated in the context of Boolean logic or Bayesian probability. Attribution (natural versus anthropogenic) is a shades-of-gray issue, not a black-or-white, 0-or-1 issue, or even an issue of probability. Curry argues that fuzzy logic provides a better framework for considering attribution, whereby the relative degree of truth for each attribution mechanism can range between 0 and 1, thereby bypassing the problem of the excluded middle. There is general agreement that the percentage of warming attributable to each of natural and anthropogenic causes is greater than 0% and less than 100%. The challenge is to assign likelihood values to the distribution of the different combinations of percentage contributions from natural and anthropogenic causes. Such a distribution may very well show significant likelihood in the vicinity of 50-50, making a binary demarcation at the imprecise “most” a poor choice, as illustrated below.
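One way to visualize the point is to place an explicit likelihood distribution on the anthropogenic fraction of the observed warming and ask what “most” then captures. The Beta distribution and its parameters below are invented purely for illustration, not estimates from any study.

```python
from scipy import stats

# Illustrative likelihood distribution over the anthropogenic fraction of
# observed warming (0 = all natural, 1 = all anthropogenic). The parameters
# are invented for illustration only.
dist = stats.beta(a=6, b=4)  # mean fraction 0.6

p_most = 1.0 - dist.cdf(0.5)                  # P(fraction > 50%), i.e. 'most'
p_near_half = dist.cdf(0.6) - dist.cdf(0.4)   # mass near a 50-50 split

print(f"P('most', fraction > 50%):   {p_most:.2f}")
print(f"mass within the 40-60% band: {p_near_half:.2f}")
# A large 'most' probability can coexist with substantial likelihood near
# 50-50, which is why a binary demarcation at 'most' loses information.
```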
JC note: This is a technical thread, comments will be moderated for relevance. I look forward to your comments on this, particularly since I received such helpful comments on my previous draft uncertainty paper.
Some backstory on the uncertainty monster paper. This paper was submitted to the Bulletin of the American Meteorological Society. It was a very long and comprehensive paper, addressing many of the uncertainty topics raised at Climate Etc. The comments from the reviewers suggested shortening and focusing, and hence I have deleted the section on decision making under climate uncertainty, which will be the focus of a future separate paper.
Given the kerfuffle surrounding the PNAS review of Lindzen and Choi, I’ll make a few comments here regarding the review of this paper, which has to be regarded as controversial and contentious since it includes criticisms of the IPCC. I described my concerns about potential problems with the review process in my submission letter, and requested that they not use any reviewers who were involved in the AR4 or AR5. I suggested that I would be most appreciative of reviews from some of the authors that I referenced in the bibliography. Unlike most journals, the AMS does not operate under a ‘pal review’ system; the editors select the reviewers. The editors selected two excellent reviewers, who both made extremely helpful comments that are resulting in substantial revision of the paper. This is the peer review system at its best, IMO.
And finally a comment about “blog science.” From the previous Parts I, II and III, you can see how my thinking on this has evolved, and a significant element in that evolution has been the comments and discussion on these threads at Climate Etc.