by Judith Curry
The focus of this series on detection and attribution is the following statement in the IPCC AR4:
“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
Part I addressed the IPCC’s detection strategy and raised issues regarding the IPCC’s inferences about the relative importance of the multi-decadal modes of natural internal variability (e.g. AMO, PDO). Part II addresses uncertainties in external forcing data sets used in the attribution studies and the relevant climate model structural uncertainties. Part III (finale) will address deficiencies in the overall logic of the IPCC’s attribution argument.
The primary reference used here is:
- IPCC AR4 Chapter 9: Understanding and attributing climate change (hereafter referred to as IPCC AR4)
This discussion focuses on two categories of uncertainty locations:
- external forcing data used to force the climate models
- climate model structural uncertainties
The IPCC AR4 has this to say about the uncertainties:
“Model and forcing uncertainties are important considerations in attribution research. Ideally, the assessment of model uncertainty should include uncertainties in model parameters (e.g., as explored by multi-model ensembles), and in the representation of physical processes in models (structural uncertainty). Such a complete assessment is not yet available, although model intercomparison studies (Chapter 8) improve the understanding of these uncertainties. The effects of forcing uncertainties, which can be considerable for some forcing agents such as solar and aerosol forcing (Section 9.2), also remain difficult to evaluate despite advances in research. Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.” The last sentence provides a classical example of IPCC’s leaps of logic that contribute to its high confidence in its attribution results.
A key distinction for understanding IPCC attribution analysis is the distinction between forward and inverse calculations. “Two basic types of calculations have been used in detection and attribution studies. The first uses best estimates of forcing together with best estimates of modelled climate processes to calculate the effects of external changes in the climate system (forcings) on the climate (the response).” “In the second type of calculation, the so-called ‘inverse’ calculations, the magnitude of uncertain parameters in the forward model (including the forcing that is applied) is varied in order to provide a best fit to the observational record. In general, the greater the degree of a priori uncertainty in the parameters of the model, the more the model is allowed to adjust.” “Probabilistic posterior estimates for model parameters and uncertain forcings are obtained by comparing the agreement between simulations and observations, and taking into account prior uncertainties (including those in observations).” (IPCC AR4)
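To make the forward/inverse distinction concrete, here is a minimal sketch of an inverse calculation: an uncertain scale factor on the aerosol forcing is varied until a toy zero-dimensional energy-balance model best fits a temperature record. The model, forcings, and parameter values are all invented for illustration; this is not how any actual AOGCM or IPCC study is configured.

```python
# Toy "inverse" calculation: tune an uncertain scale on the aerosol forcing
# so a 0-D energy balance model, C*dT/dt = F(t) - lam*T, best fits a record.
# All numbers here are invented for illustration.

def simulate(aerosol_scale, ghg, aerosol, lam=1.2, heat_cap=8.0):
    """March the 0-D energy balance model forward in annual steps."""
    temp, out = 0.0, []
    for f_ghg, f_aer in zip(ghg, aerosol):
        forcing = f_ghg + aerosol_scale * f_aer   # scaled aerosol forcing
        temp += (forcing - lam * temp) / heat_cap
        out.append(temp)
    return out

years = range(100)
ghg = [0.02 * t for t in years]        # ramping GHG forcing (W m-2)
aerosol = [-0.01 * t for t in years]   # uncertain negative aerosol forcing
obs = simulate(0.7, ghg, aerosol)      # pretend these are the observations

# Grid search over the uncertain parameter: keep the scale with the best fit.
best = min((s / 100 for s in range(0, 201)),
           key=lambda s: sum((m - o) ** 2
                             for m, o in zip(simulate(s, ghg, aerosol), obs)))
print(best)  # recovers the scale used to generate the 'observations'
```

The point of the sketch is that the fitted aerosol scale is whatever makes the model match the record, which is exactly why inverse-derived forcing cannot then serve as independent evidence in an attribution argument.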
External forcing data
The level of scientific understanding of radiative forcing is ranked by the AR4 (Table 2.11) as high only for the long-lived greenhouse gases, but is ranked as low for solar irradiance, aerosol effects, stratospheric water vapor from CH4, and jet contrails. Radiative forcing time series for the natural (solar, volcanic aerosol) forcings are reasonably well known for the past 25 years but estimates further back in time have increasingly large uncertainties.
There are a number of different forcing data sets available for climate modelers to choose from. The different forcing data sets used by the different modeling groups are summarized in the AR4 Chapter 9 Supplementary Material, see especially Table S9.1. In the IPCC attribution simulations, climate modelers are permitted to select whatever forcing data set, and combinations of forcing data sets, that they want from the list of published forcing data sets that are generally regarded to be within the bounds of our background knowledge. Inverse modeling is also used in the selection of forcing data sets.
The two forcings whose uncertainties arguably have the greatest impact on 20th century attribution studies are solar forcing and anthropogenic aerosol forcing.
Based upon new and more reliable solar reconstructions, the AR4 (Section 2.7.1) concluded that the increase in solar forcing during the period 1900-1980 used in the TAR reconstructions is questionable, and the direct radiative forcing due to the increase in solar irradiance is reduced substantially from the TAR. However, consideration of Table S9.1 in the AR4 shows that each climate model used outdated solar forcing (from the TAR) that substantially overestimates the magnitude of solar forcing prior to 1980 (h/t to Bob Tisdale).
Even in the satellite era, there is still debate regarding the calibration of satellite sensors and its impact on decadal scale trends.
The impact of the reduction in solar forcing in the earlier part of the 20th century is that the direct effect of solar forcing is not a convincing source for attribution of the early 20th century warming.
The greatest uncertainty in radiative forcing is associated with aerosols, particularly the aerosol indirect effect whereby aerosols influence cloud radiative properties. Consideration of Figure 2.20 of the AR4 shows that, given the uncertainty in aerosol forcing, the magnitude of the aerosol forcing (which is negative, or cooling) could rival the forcing from long-lived greenhouse gases (positive, or warming).
The 20th century aerosol forcing used in most of the AR4 model simulations (Section 9.2) relies on inverse calculations of aerosol optical properties to match climate model simulations with observations. The inverse method effectively makes aerosol forcing a tunable parameter (kludge) for the model, particularly in the pre-satellite era. In trying to sort out which models use what for aerosol forcing, I ran into a dead end (rather, a dead link) referenced in Table S9.1. Sorting this out requires reading 13 different journal articles cited in Table S9.1: an uncertainty monster taming strategy of “make the evidence difficult to find and sort out.”
But not to worry, the IPCC AR4 has sorted it out for us: “In the past, forward calculations have been unable to rule out a total net negative radiative forcing over the 20th century (Boucher and Haywood, 2001). However, Section 2.9 updates the Boucher and Haywood analysis for current radiative forcing estimates since 1750 and shows that it is extremely likely that the combined anthropogenic [radiative forcing] is both positive and substantial (best estimate: +1.6 W m–2). A net forcing close to zero would imply a very high value of climate sensitivity, and would be very difficult to reconcile with the observed increase in temperature (Sections 9.6 and 9.7).”
I do not see how the analysis associated with Figure 2.20 in Section 2.9 makes it “extremely likely that the combined anthropogenic radiative forcing is both positive and substantial” (I assume “extremely likely” is greater than the 90% associated with “very likely”?). Well, the IPCC is just flat out overconfident about this. Morgan (2006, 2009) elicited subjective probability distributions from 24 leading atmospheric scientists that reflect their individual judgments about radiative forcing from anthropogenic aerosols. Consensus was strongest among the experts in their assessments of the direct aerosol effect. However, the range of uncertainty that a number of experts associated with their estimates for indirect aerosol forcing was substantially larger than that suggested by either the IPCC 3rd or 4th Assessment Reports.
But the real head-spinner in the IPCC’s statement cited above is this sentence: “A net forcing close to zero would imply a very high value of climate sensitivity, and would be very difficult to reconcile with the observed increase in temperature.” In other words, the anthropogenic forcing has to be a net positive, otherwise we can’t explain the temperature increase in terms of external forcing. Which, after all, was determined to be necessary since they have dismissed multi-decadal natural internal variability as a possible explanation for the temperature increase. This is circular reasoning along with the logical fallacy of affirming the consequent.
Climate model structural uncertainties
Climate model uncertainty was discussed at length in a previous post, “What can we learn from climate models?” Here we discuss uncertainties in climate sensitivity, and also model inadequacy associated with possible indirect solar effects and aerosol-cloud interaction processes.
With regards to indirect solar effects, the IPCC AR4 states: “Since the TAR, new studies have confirmed and advanced the plausibility of indirect effects involving the modification of the stratosphere by solar UV irradiance variations (and possibly by solar-induced variations in the overlying mesosphere and lower thermosphere), with subsequent dynamical and radiative coupling to the troposphere. Whether solar wind fluctuations (Boberg and Lundstedt, 2002) or solar-induced heliospheric modulation of galactic cosmic rays (Marsh and Svensmark, 2000b) also contribute indirect forcings remains ambiguous.”
With regards to the indirect aerosol forcing (associated with cloud-aerosol interactions), the IPCC AR4 considers “The total anthropogenic aerosol effect as defined here includes estimates of the direct effect, semi-direct effect, indirect cloud albedo and cloud lifetime effect for warm clouds from several climate models.” This definition does not include issues related to cold (ice) clouds. The aerosol direct effect is the only one associated with some confidence; the others are highly uncertain. Improved climate model treatments of cloud and aerosol microphysics are under active development, but this is generally regarded as the source of large uncertainties in the models. (This topic will be addressed in future posts).
These model inadequacies imply errors in the sensitivity of the climate models to solar and aerosol forcing. The absence of solar indirect effects implies that model sensitivity to solar forcing is too low. The inadequacies of the aerosol and cloud microphysical parameterizations will produce an incorrect sensitivity to aerosol forcing, although the sign of the error is unknown and likely variable.
The AR4 uses the following definitions for climate sensitivity. The equilibrium climate sensitivity (ECS) is defined as the global annual mean surface air temperature change experienced by the climate system after it has attained a new equilibrium in response to a doubling of atmospheric CO2 concentration. The transient climate response (TCR) is defined as the global annual mean surface air temperature change (averaged over a 20-year period centred at the time of CO2 doubling) in a 1% yr–1 compound CO2 increase scenario. The TCR depends both on the sensitivity and on the ocean heat uptake. Climate sensitivity depends on the type of forcing, its location, and the background climate state.
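As a quick consistency check on the TCR definition, the timing of CO2 doubling under the 1% yr–1 compound scenario follows directly:

```latex
% CO2 under a 1% per year compound increase:
C(t) = C_0 \,(1.01)^{t}, \qquad
C(t^{*}) = 2C_0 \;\Longrightarrow\;
t^{*} = \frac{\ln 2}{\ln 1.01} \approx 70\ \text{years},
```

so the 20-year averaging window for the TCR spans roughly years 60 to 80 of the scenario.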
Table 8.2 in the IPCC AR4 gives values of climate sensitivity for the AOGCMs used in the attribution studies ranging from 2.1-4.4C for ECS and 1.3-2.6C for TCR. A much broader range of values for ECS is “based on large ensembles of simulations using climate models of varying complexity, where uncertain parameters influencing the model’s sensitivity to forcing are varied” (IPCC AR4 Chapter 9). Figure 9.20 compares different estimates of the PDF for equilibrium climate sensitivity. This figure reflects a large range of sensitivities, much larger than the values for the AOGCMs used in the AR4 attribution simulations.
In spite of this large range of sensitivities, the IPCC’s main conclusion is “It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C.” The basis for this narrowing of the range of sensitivities is incorporating multiple lines of evidence into a Bayesian analysis combined with expert judgment. I am not convinced by these arguments, and maintain that Figure 9.20 reflects our best understanding of equilibrium climate sensitivity. (Note: climate sensitivity will be the topic of a future series of posts).
IPCC’s attribution results
Whereas all models agree that the warming observed since 1970 can only be reproduced using anthropogenic forcings, models disagree on the relative importance of solar, volcanic, and aerosol forcing in the earlier part of the 20th century. The substantial warming during the period 1910-1940 has been mostly attributed to some combination of increasing solar irradiance and a lack of major volcanic activity. With little or no increase in solar forcing during this period, as evidenced by more recent and apparently more robust reconstructions, the observed temperature increase during 1910-1940 cannot be attributed with any confidence to solar forcing in this attribution framework. The cooling and leveling off of average global temperatures during the 1950s and 1960s is attributed primarily to aerosols from fossil fuels and other sources, when the greenhouse warming was overwhelmed by aerosol cooling, a result derived in large part from the kludged aerosol forcing.
Given that the IPCC argues that multidecadal natural internal variability is not an important factor and that external forcing can explain the 20th century variability, confidence in the attribution for the latter half of the 20th century is diminished by the lack of a robust attribution for the earlier warming between 1910-1940 (which is of the same magnitude as the warming from 1970-2000) and the mid century cooling.
Here is another issue that diminishes the confidence in the IPCC’s attribution of the warming in the latter half of the 20th century. Given the large uncertainties in forcings and different model sensitivities, how is it that each model does a credible job of tracking the 20th century global surface temperature anomalies (AR4 Figure 9.5)? Schwartz (2004) notes that the intermodel spread in modeled temperature trend, expressed as a fractional standard deviation, is much less than the corresponding spread in either model sensitivity or aerosol forcing (and this comparison does not consider differences in solar and volcanic forcing). This agreement is accomplished through each modeling group selecting the forcing data set that produces the best agreement with observations, along with model kludges that include adjusting the aerosol forcing to produce good agreement with the surface temperature observations. If a model’s sensitivity is high, it is likely to require a greater (negative) aerosol forcing to counter the greenhouse warming, and vice versa for a low-sensitivity model. Schwartz (2004) argues that uncertainties in aerosol forcing must be reduced at least three-fold for uncertainty in climate sensitivity to be meaningfully reduced and bounded.
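The compensation Schwartz describes can be illustrated with a toy calculation: a high-sensitivity model paired with strong negative aerosol forcing, and a low-sensitivity model with no aerosol forcing, produce nearly the same century-scale warming, so agreement with the temperature record alone cannot discriminate between them. The energy-balance model and all parameter values below are my own illustrative assumptions, not anything taken from the AR4 models.

```python
# Toy illustration of sensitivity/aerosol-forcing compensation: two 0-D
# energy balance models, C*dT/dt = F(t) - lam*T, with very different feedback
# parameters (lam; inversely related to sensitivity) and aerosol scalings
# end up with nearly the same warming. All numbers are invented.

def warming(lam, aerosol_scale, n_years=100, heat_cap=8.0):
    """Temperature after n_years under ramping GHG + scaled aerosol forcing."""
    temp = 0.0
    for t in range(n_years):
        forcing = 0.03 * t - aerosol_scale * 0.01 * t  # GHG minus aerosols
        temp += (forcing - lam * temp) / heat_cap      # annual Euler step
    return temp

high_sens = warming(lam=0.75, aerosol_scale=1.5)  # sensitive, strong aerosols
low_sens = warming(lam=1.5, aerosol_scale=0.0)    # insensitive, no aerosols
print(round(high_sens, 2), round(low_sens, 2))    # differ by only a few percent
```

Two structurally different parameter combinations track essentially the same warming curve, which is the sense in which the observed temperature record underdetermines both sensitivity and aerosol forcing.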
These concerns raise the issue of fitness for purpose of the IPCC AOGCMs for attribution analysis. While the kludging of model parameters and forcing produces model simulations that are empirically adequate in representing aspects of the 20th century climate (which is useful for certain purposes), these models are not fit for attribution studies to the extent that kludging (tuning) of model parameters and forcing has been done to match the 20th century temperature time series.
There are two major flaws in the design of the IPCC attribution experiments:
- inverse modeling that tunes the model and forcing to reproduce the 20th century surface temperature observations
- failure to account for uncertainty in the external forcing data
The relatively simple models used in the extensive suite of simulations for equilibrium climate sensitivity (Figure 9.20) should be used to conduct an extensive set of simulations for the 20th century with both natural and anthropogenic forcing. A large ensemble of simulations should be conducted that includes the variations in sensitivity and also different combinations of external forcing data sets.
Forthcoming: Part III
Hard to imagine that this is taking three parts, each of which exceeds 2000 words. I have developed brain strain this week from trying to sort through all this; Part II is admittedly not my best writing, but I think I have the arguments straight. Stay tuned for Part III, which is the most interesting one, on the overall logic of the IPCC’s attribution argument. It’s already written, so I have very likely confidence that Part III will be the last part in this little series :)