Overconfidence in IPCC’s detection and attribution. Part II

by Judith Curry

The focus of this series on detection and attribution is the following statement in the IPCC AR4:

“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”

Part I addressed the IPCC’s detection strategy and raised issues regarding the IPCC’s inferences about the relative importance of the multi-decadal modes of natural internal variability (e.g. AMO, PDO).  Part II addresses uncertainties in external forcing data sets used in the attribution studies and the relevant climate model structural uncertainties.  Part III (finale) will address deficiencies in the overall logic of the IPCC’s attribution argument.

The primary reference used here is:

  • IPCC AR4 Chapter 9: Understanding and attributing climate change (hereafter referred to as IPCC AR4)

This discussion focuses on two sources of uncertainty:

  • external forcing data used to force the climate models
  • climate model structural uncertainties

The IPCC AR4 has this to say about the uncertainties:

“Model and forcing uncertainties are important considerations in attribution research. Ideally, the assessment of model uncertainty should include uncertainties in model parameters (e.g., as explored by multi-model ensembles), and in the representation of physical processes in models (structural uncertainty). Such a complete assessment is not yet available, although model intercomparison studies (Chapter 8) improve the understanding of these uncertainties. The effects of forcing uncertainties, which can be considerable for some forcing agents such as solar and aerosol forcing (Section 9.2), also remain difficult to evaluate despite advances in research. Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.”  The last sentence provides a classic example of the IPCC’s leaps of logic that contribute to its high confidence in its attribution results.

A key distinction for understanding IPCC attribution analysis is the distinction between forward and inverse calculations.  “Two basic types of calculations have been used in detection and attribution studies. The first uses best estimates of forcing together with best estimates of modelled climate processes to calculate the effects of external changes in the climate system (forcings) on the climate (the response).” “In the second type of calculation, the so-called ‘inverse’ calculations, the magnitude of uncertain parameters in the forward model (including the forcing that is applied) is varied in order to provide a best fit to the observational record. In general, the greater the degree of a priori uncertainty in the parameters of the model, the more the model is allowed to adjust.” “Probabilistic posterior estimates for model parameters and uncertain forcings are obtained by comparing the agreement between simulations and observations, and taking into account prior uncertainties (including those in observations).” (IPCC AR4)
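
To make the inverse approach concrete, here is a toy sketch (entirely made-up numbers, not anything from the IPCC's actual analysis): an uncertain scale factor on an assumed aerosol forcing series is tuned so that a simple linear response model best fits a synthetic "observed" temperature record. The forcing series, sensitivity value, and observations are all hypothetical; the point is the fitting procedure itself.

```python
# Toy illustration of an "inverse" calculation: tune an uncertain aerosol
# scaling so that a linear response model best fits the observations.
# All numbers are hypothetical.
ghg_forcing = [0.1 * i for i in range(10)]       # W/m^2, assumed well known
aerosol_shape = [-0.05 * i for i in range(10)]   # W/m^2, uncertain amplitude
observed = [0.04 * i for i in range(10)]         # deg C, synthetic "observations"
sensitivity = 0.8                                # K per W/m^2, assumed fixed

def misfit(scale):
    """Sum of squared errors between modeled and observed temperature."""
    return sum((sensitivity * (g + scale * a) - t) ** 2
               for g, a, t in zip(ghg_forcing, aerosol_shape, observed))

# Brute-force search over the uncertain aerosol scaling factor
best = min((misfit(s / 100), s / 100) for s in range(0, 201))[1]
print(f"best-fit aerosol scaling: {best:.2f}")
```

Note that the "best-fit" scaling is whatever value reconciles the model with the observations; nothing in the procedure guarantees that it corresponds to the true forcing, which is exactly the concern raised below about aerosol forcing as a tunable parameter.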

External forcing data

The level of scientific understanding of radiative forcing is ranked by the AR4 (Table 2.11) as high only for the long-lived greenhouse gases, but is ranked as low for solar irradiance, aerosol effects, stratospheric water vapor from CH4, and jet contrails. Radiative forcing time series for the natural (solar, volcanic aerosol) forcings are reasonably well known for the past 25 years but estimates further back in time have increasingly large uncertainties.

There are a number of different forcing data sets available for climate modelers to choose from.  The different forcing data sets used by the different modeling groups are summarized in the AR4 Chapter 9 Supplementary Material, see especially Table S9.1.  In the IPCC attribution simulations, climate modelers are permitted to select whatever forcing data sets, and combinations thereof, they want from the list of published forcing data sets that are generally regarded to be within the bounds of our background knowledge. Inverse modeling is also used in the selection of forcing data sets.

The two forcings whose uncertainties arguably have the greatest impact on 20th century attribution studies are solar forcing and anthropogenic aerosol forcing.

Solar forcing

Based upon new and more reliable solar reconstructions, the AR4 (Section 2.7.1.2) concluded that the increase in solar forcing during the period 1900-1980 used in the AR3 reconstructions is questionable, and the direct radiative forcing due to the increase in solar irradiance is reduced substantially from the AR3.  However, consideration of Table S9.1 in the AR4 shows that each climate model used outdated solar forcing (from the AR3) that substantially overestimates the magnitude of solar forcing prior to 1980 (h/t to Bob Tisdale).

Even in the satellite era, there is still debate regarding the calibration of satellite sensors and its impact on decadal scale trends.

The impact of the reduction in solar forcing in the earlier part of the 20th century is that the direct effects of solar forcing are not a convincing explanation for the early 20th century warming.

Aerosol forcing

The greatest uncertainty in radiative forcing is associated with aerosols, particularly the aerosol indirect effect whereby aerosols influence cloud radiative properties.  Consideration of Figure 2.20 of the AR4 shows that, given the uncertainty in aerosol forcing, the magnitude of the aerosol forcing (which is negative, or cooling) could rival the forcing from long-lived greenhouse gases (positive, or warming).

The 20th century aerosol forcing used in most of the AR4 model simulations (Section 9.2.1.2) relies on inverse calculations of aerosol optical properties to match climate model simulations with observations.  The inverse method effectively makes aerosol forcing a tunable parameter (kludge) for the model, particularly in the pre-satellite era. In trying to sort out which models use what for aerosol forcing, I ran into a dead end (rather, a dead link) referenced in Table S9.1.  Sorting this out requires reading 13 different journal articles cited in Table S9.1:  an uncertainty monster taming strategy of “make the evidence difficult to find and sort out.”

But not to worry, the IPCC AR4 has sorted it out for us: “In the past, forward calculations have been unable to rule out a total net negative radiative forcing over the 20th century (Boucher and Haywood, 2001). However, Section 2.9 updates the Boucher and Haywood analysis for current radiative forcing estimates since 1750 and shows that it is extremely likely that the combined anthropogenic [radiative forcing] is both positive and substantial (best estimate: +1.6 W m–2). A net forcing close to zero would imply a very high value of climate sensitivity, and would be very difficult to reconcile with the observed increase in temperature (Sections 9.6 and 9.7).”

I do not see how the analysis associated with Figure 2.20 in Section 2.9 makes it “extremely likely that the combined anthropogenic radiative forcing is both positive and substantial” (I assume “extremely likely” is greater than the 90% associated with “very likely”?).  Well, the IPCC is just flat out overconfident about this.  Morgan (2006, 2009) elicited subjective probability distributions from 24 leading atmospheric scientists that reflect their individual judgments about radiative forcing from anthropogenic aerosols. Consensus was strongest among the experts in their assessments of the direct aerosol effect. However, the range of uncertainty that a number of experts associated with their estimates for indirect aerosol forcing was substantially larger than that suggested by either the IPCC 3rd or 4th Assessment Reports.

But the real head-spinner in the IPCC’s statement cited above is this sentence: “A net forcing close to zero would imply a very high value of climate sensitivity, and would be very difficult to reconcile with the observed increase in temperature.” In other words, the anthropogenic forcing has to be a net positive, otherwise we can’t explain the temperature increase in terms of external forcing.  Which, after all, was determined to be necessary since they have dismissed multi-decadal natural internal variability as a possible explanation for the temperature increase. This is circular reasoning along with the logical fallacy of affirming the consequent.
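
The arithmetic behind the quoted sentence is worth spelling out. If the observed warming dT must be explained entirely by external forcing, dT = lambda * F, then for a fixed dT the implied sensitivity lambda = dT / F blows up as the net forcing F shrinks toward zero. A quick sketch with illustrative numbers (the dT value is approximate and the forcing values are chosen for illustration only):

```python
# Back-of-envelope arithmetic: if the observed warming dT must equal
# lambda * F, a small net forcing F implies a huge inferred sensitivity.
# Illustrative numbers only.
dT = 0.7  # observed 20th-century warming, deg C (approximate)

for F in [1.6, 0.8, 0.4, 0.1]:  # candidate net anthropogenic forcings, W/m^2
    lam = dT / F                # implied sensitivity, K per (W/m^2)
    print(f"F = {F:4.1f} W/m^2  ->  implied lambda = {lam:.1f} K/(W/m^2)")
```

This is the sense in which the IPCC's statement runs backwards: an implausibly high implied sensitivity is taken as grounds for rejecting a small net forcing, which presumes that forced response is the only admissible explanation for dT in the first place.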

Model uncertainty

Climate model uncertainty was discussed at length in a previous post “What can we learn from climate models?” Here we discuss uncertainties in climate sensitivity, and  also model inadequacy associated with possible indirect solar effects and aerosol-cloud interaction processes.

Model inadequacies

With regards to indirect solar effects, the IPCC AR4 states: “Since the TAR, new studies have confirmed and advanced the plausibility of indirect effects involving the modification of the stratosphere by solar UV irradiance variations (and possibly by solar-induced variations in the overlying mesosphere and lower thermosphere), with subsequent dynamical and radiative coupling to the troposphere. Whether solar wind fluctuations (Boberg and Lundstedt, 2002) or solar-induced heliospheric modulation of galactic cosmic rays (Marsh and Svensmark, 2000b) also contribute indirect forcings remains ambiguous.”

With regards to the indirect aerosol forcing (associated with cloud-aerosol interactions), the IPCC AR4 considers “The total anthropogenic aerosol effect as defined here includes estimates of the direct effect, semi-direct effect, indirect cloud albedo and cloud lifetime effect for warm clouds from several climate models.”  This definition does not include issues related to  cold (ice) clouds. The aerosol direct effect is the only one associated with some confidence; the others are highly uncertain.  Improved climate model treatments of cloud and aerosol microphysics are under active development, but this is generally regarded as the source of large uncertainties in the models. (This topic will be addressed in future posts).

These model inadequacies imply errors in the sensitivity of the climate models to solar and aerosol forcing.  The absence of solar indirect effects implies that model sensitivity to solar forcing is too low.  The inadequacies of the aerosol and cloud microphysical parameterizations will produce an incorrect sensitivity to aerosol forcing, although the sign of the error is unknown and likely variable.

Climate sensitivity

The AR4 uses the following definitions for climate sensitivity.  The equilibrium climate sensitivity (ECS) is defined as the global annual mean surface air temperature change experienced by the climate system after it has attained a new equilibrium in response to a doubling of atmospheric CO2 concentration.  The ‘transient climate response’ (TCR) is defined as the global annual mean surface air temperature change (averaged over a 20-year period centred at the time of CO2 doubling) in a 1% yr–1 compound CO2 increase scenario. The TCR depends both on the sensitivity and on the ocean heat uptake.  Climate sensitivity depends on the type of forcing, its location, and the background climate state.
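
The 1% yr–1 compound scenario in the TCR definition implies a doubling time that is easy to check (a back-of-envelope sketch, not AR4 code):

```python
import math

# Under a 1%/yr compound CO2 increase, concentration after n years is
# (1.01)**n times the initial value; doubling occurs when (1.01)**n = 2.
years_to_double = math.log(2) / math.log(1.01)
print(f"CO2 doubles after about {years_to_double:.0f} years")

# The TCR is the warming averaged over the 20-year window centred there.
window = (years_to_double - 10, years_to_double + 10)
print(f"TCR averaging window: years {window[0]:.0f} to {window[1]:.0f}")
```

So the TCR is read off roughly 70 years into the scenario, which is why it is sensitive to ocean heat uptake: the system is still far from equilibrium at that point.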

Table 8.2 in the IPCC AR4 gives values of climate sensitivity for the AOGCMs used in the attribution studies ranging from 2.1-4.4C for ECS and 1.3-2.6C for TCR.  A much broader range of values for ECS is “based on large ensembles of simulations using climate models of varying complexity, where uncertain parameters influencing the model’s sensitivity to forcing are varied” (IPCC AR4 Chapter 9).  Figure 9.20 compares different estimates of the PDF for equilibrium climate sensitivity.  This figure reflects a large range of sensitivities, much larger than the values for the AOGCMs used in the AR4 attribution simulations.

In spite of this large range of sensitivities, the IPCC’s main conclusion is “It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C.”  The basis for this narrowing of the range of sensitivities is incorporating multiple lines of evidence into a Bayesian analysis combined with expert judgment.  I am not convinced by these arguments, and maintain that Figure 9.20 reflects our best understanding of equilibrium climate sensitivity. (Note: climate sensitivity will be the topic of a future series of posts).

IPCC’s attribution results

Whereas all models agree that the warming observed since 1970 can only be reproduced using anthropogenic forcings, models disagree on the relative importance of solar, volcanic, and aerosol forcing in the earlier part of the 20th century. The substantial warming during the period 1910-1940 has been mostly attributed to some combination of increasing solar irradiance and a lack of major volcanic activity. With little or no increase in solar forcing during this period as evidenced by more recent and apparently more robust reconstructions, the observed temperature increase during the period 1910-1940 cannot be attributed with any confidence to solar forcing in this attribution framework.  The cooling and leveling off of average global temperatures during the 1950s and 1960s is attributed primarily to aerosols from fossil fuels and other sources, when the greenhouse warming was overwhelmed by aerosol cooling, a result derived in large part from the kludged aerosol forcing.

Given that the IPCC argues that multidecadal natural internal variability is not an important factor and that external forcing can explain the 20th century variability, confidence in the attribution for the latter half of the 20th century  is diminished by the lack of a robust attribution for the earlier warming between 1910-1940 (which is of the same magnitude as the warming from 1970-2000) and the mid century cooling.

Here is another issue that diminishes the confidence in the IPCC’s attribution of the warming in the latter half of the 20th century.  Given the large uncertainties in forcings and different model sensitivities, how is it that each model does a credible job of tracking the 20th century global surface temperature anomalies (AR4 Figure 9.5)? Schwartz (2004) notes that the intermodel spread in modeled temperature trend expressed as a fractional standard deviation is much less than the corresponding spread in either model sensitivity or aerosol forcing (and this comparison does not consider differences in solar and volcanic forcing). This agreement is accomplished through each modeling group selecting the forcing data set that produces the best agreement with observations, along with model kludges that include adjusting the aerosol forcing to produce good agreement with the surface temperature observations. If a model’s sensitivity is high, it is likely to require greater aerosol forcing to counter the greenhouse warming, and vice versa for a low model sensitivity. Schwartz (2004) argues that uncertainties in aerosol forcing must be reduced at least three-fold for uncertainty in climate sensitivity to be meaningfully reduced and bounded.
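
The sensitivity-aerosol compensation described by Schwartz can be seen in a toy calculation (hypothetical numbers, not taken from any actual model): a high-sensitivity model paired with strong aerosol cooling and a low-sensitivity model paired with weak aerosol cooling produce essentially the same simulated warming.

```python
# Toy illustration of sensitivity/aerosol compensation across models.
# All numbers are hypothetical.
ghg = 2.5  # W/m^2 greenhouse forcing, assumed the same for both models

models = {
    "high sensitivity": {"lam": 1.0, "aerosol": -1.8},  # K/(W/m^2), W/m^2
    "low sensitivity":  {"lam": 0.5, "aerosol": -1.1},
}

for name, m in models.items():
    warming = m["lam"] * (ghg + m["aerosol"])
    print(f"{name}: simulated warming = {warming:.2f} C")
```

Both combinations yield about 0.7 C of warming, so agreement with the observed temperature record cannot discriminate between them, which is precisely why the tuned agreement in AR4 Figure 9.5 carries so little evidential weight.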

These concerns raise the issue of fitness for purpose of the IPCC AOGCMs for attribution analysis.  While the kludging of model parameters and forcing produces model simulations that are empirically adequate in representing aspects of the 20th century climate (which is useful for certain purposes), these models are not fit for attribution studies to the extent that kludging (tuning) of model parameters and forcing has been done to match the 20th century temperature time series.

JC’s recommendations

There are two major flaws in the design of the IPCC attribution experiments:

  • inverse modeling that tunes the model and forcing to reproduce the 20th century surface temperature observations
  • lack of account for uncertainty in the external forcing data

The relatively simple models used in the extensive suite of simulations for equilibrium climate sensitivity (Figure 9.20) should be used to conduct an extensive set of simulations for the 20th century with both natural and anthropogenic forcing.  A large ensemble of simulations should be conducted that includes the variations in sensitivity and also different combinations of external forcing data sets.

Forthcoming:  Part III

Hard to imagine that this is taking three parts, each of which exceeds 2000 words.  I have developed brain strain this week from trying to sort through all this; Part II is admittedly not my best writing but I think I have the arguments straight.  Stay tuned for Part III, which is the most interesting one, on the overall logic of the IPCC’s attribution argument.   It’s already written, so I have very likely confidence that Part III will be the last part in this little series :)



67 responses to “Overconfidence in IPCC’s detection and attribution. Part II”

  1. Prof Curry,

    Nice writing and very convincing post.

    There are some other factors that bug me:

1) The possible conflict of interest between producing temperature series and model projections. GISS and the Met Office, and surely others, produce both. An example of that conflict of interest is trying to make the temperature series fit the projection. GISS is the only temperature reconstruction that was in line, until recently, to establish 2010 as the warmest year, and announced almost with regret that it may not be.

2) The use of surface temperature as the metric to diagnose AGW. Thompson (2008) showed that there was a huge problem with sea surface temp. The resolution of this problem is still unknown and yet temperature series are still using the old data. There are other problems presented by surfacestation.org which won’t be resolved anytime soon either.

    • The problem with sea surface temperatures is that they have been “force fitted” to surface air temperatures via application of a stair-step correction of minus 0.4-0.5C early in World War II. This correction is applied because it has always been assumed that SSTs are valid air temperature proxies over all time scales and therefore must track air temperature series, although there is no proof that they are valid air temperature proxies over all time scales, and at least some evidence to suggest that they may not be. The correction is also justified on the grounds a) that an abrupt and permanent change from bucket to ship intake SST measurements caused a large artificial shift in the SST record early in World War II and b) that this shift is clearly detectable in the data. Assumption a) is speculative and assumption b) can be invalidated simply by examining Figure 2 of Folland et al. 1984, the paper that first identified the “shift”. This Figure clearly shows that there was no permanent shift in the SST record during World War II- only a spike (caused by bad data) that lasted only for the duration of the war. This is the “huge problem” that Thompson identified.

And huge the problem is, because what it means is that the HadSST2 series contains a large and invalid correction, and HadSST2 contributes 60-70% to the HadCRUT3 land and marine series the IPCC compared its climate models against in the AR4. In other words, the climate models were “kludged” to match a demonstrably incorrect surface air temperature series. Clearly we should fix this fundamental problem before proceeding.

• The problem is much more complicated than the few years following WW2. The problem is persistent throughout the post-WW2 period, since there have been a lot of different techniques used to gather sea surface temp.

Anyhow, the use of surface temp as the metric to diagnose AGW, promoted by the IPCC (which includes a lot of people who owe their jobs to the fact that this is the metric used), has a lot of uncertainty. This means that the basic metric used in the IPCC can be very misleading.

      • Roger Andrews

        Apologies for the delayed response, but I was away from my desk.

        The problem isn’t complicated at all. Anyone who is prepared to do the grunt work will find a) that there were no pre/postwar changes in SST measurement methods that could account for an artificial and permanent shift of 0.4-0.5C in the SST record during WWII, and b) that no such shift is visible in the SST record anyway. In other words, the shift does not exist, meaning that time series that “correct” for it – which include the IPCC series – are incorrect. This has nothing to do with metrics or with how many people owe their jobs to the use of any particular one. The simple fact is that we have gotten our basic data wrong and need to get it right before we do anything else.

        Is anyone out there listening?

• Before the war there were only 2 or 3 methods used to measure sea surface temp, one being the wooden bucket; another, which appeared later, was the engine intake. During the war mainly the engine intake was used, which led to the belief that all the sea surface temps after the war used that method. Which wasn’t the case, and in the 60s new methods started to appear, like buoys, insulated buckets and satellites. Each year the amount of data taken from one source or another is different, not considering the problem with ships being higher, etc., etc.

I think that the fact that no correction has been posted since that paper came out is a testament to how hard it is to tell what is what in the data.

  2. Thank you for nailing the circularity (rather akin to a singularity, methinks)

    It is as Roy Spencer has been pointing out for some considerable time: “Our conclusion must be correct because we assumed it to begin with – which we wouldn’t do if we were wrong”

3. JC, this was a very good read and not only am I incredibly impressed, but I am incredibly grateful that you managed to go through the relevant IPCC statements and not only tie down what they were saying, but point out the major flaws in their conclusions. I have tried similar myself, but always got lost in the deliberately vague use of language and had to give up. You’re a better man than i… as it were…

    Two pertinent points that you raised- one of which you’ve pre-empted by saying it will be part of your part 3. The other, was about the internal forcings, or the IPCC’s dismissal of these internal forcings (PDO etc).

Do ‘we’ have the rationale for the IPCC dismissing these as potential drivers, or do we think that they were dismissed simply because they were incompatible with the theory?

Finally- (and to try to curtail my penchant for massive posts I’ll be brief) Haven’t recent studies suggested that solar minimums, i.e. times of low solar output, could actually cause INCREASES in temp over time (cosmic/galactic radiation causing cloud cover variations)?? Has anyone else seen this/heard of this? I assume this was not taken into account?

  4. Roddy Campbell

    Judith, this is OT in a sense – assuming that you are right (and the head-spinner sentence is a wonder), how difficult has it been to ‘discover’ what you have ‘discovered’, or, perhaps better, how difficult has it been to arrive at a position of greater uncertainty as to the IPCC’s methodology and conclusions? After all, it’s all in the IPCC reports.

    I’ve asked this question of you before, as have others, as to why, assuming you are right, no other ‘mainstream’ scientists have observed it, or said it. Is it as basic as that there are no incentives to do so? Indeed, are there incentives to either not bother, keep shtum, or indeed take the Realclimate route of not giving an inch?

    • I think it may be more a case of the way the argument was framed.

People and mainstream scientists spend a long time discussing the minutiae of the claims, the evidence, the conclusions etc etc in the areas where they hold interest or expertise. These exchanges are a step removed from the analysis that Judith and others are now (and have been) performing on the IPCC statements themselves.

      I find it very believable that the ‘assumption’ that the IPCC statements/position are true was taken by a wide range of people/scientists and then ‘they’ just concentrated on their own areas of research assuming the foundation to be true.

      It’s also in part the way that the arguments have been framed- discussions are almost always of a technical and scientific basis on very narrow areas, without taking in the greater picture or the basic foundations of the theory.

A literal case of not seeing the forest for the trees.

    • Roddy, I already realized that I need a Part IV in this series, to address the question you raised.

      • Roddy Campbell

        Is it interesting? I hope so. It seems to me that your blogs, while often way over my head / out of my experience, without being rude are not rocket science? Is that fair? It seems to me that quite a lot of people (in climate science) would be capable of doing what you have done, observed what you have observed re IPCC and uncertainty, that might be a better way of putting it.

        And the explanation given here somewhere, that everyone just sticks to their knitting, their little area of hurricanes or whatever, doesn’t ring true to me. In my world I have a specialisation, about which I know ‘more’ than most, but I still read and explore adjacent areas, because they’re interesting, with, I like to think, a reasonable ability to understand the thrust.

        And what could be more interesting to at least a decent few climate scientists than the IPCC report?

    • The not giving an inch part is one of the most interesting of the social movement.
In Astronomy, if an Astronomer makes a bad prediction, everyone fights about it, figures it out, and moves on. Hawking turns out to have been wrong in one of his major theories. Biology, ditto. But in climate science, the scientists promoting global climate disruption can never be wrong, and those doubting it can never be right. Dr. N-G was castigated for pointing out the IPCC boner regarding the Himalaya glaciers. Our hostess here has become anathema in many circles.
      Yet Hansen can give interviews where he talked about how Manhattan should be flooding by now due to ocean levels rising and it is OK.

  5. “I’ve asked this question of you before, as have others, as to why, assuming you are right, no other ‘mainstream’ scientists have observed it, or said it. Is it as basic as that there are no incentives to do so? Indeed, are there incentives to either not bother, keep shtum, or indeed take the Realclimate route of not giving an inch?”

    I think that it is incorrect that no other mainstream scientists have observed or said what Dr. Curry points out. Certainly, the issues regarding aerosol uncertainty and their use as a “kludge” is well documented in the peer reviewed literature. It is certainly well-known among climate modelers. It is much less well-known among policy makers and the general public.
    To me, this whole issue is an example of the IPCC obscuring and downplaying major weaknesses in its analysis. I will judge the next IPCC based in a large part on whether these basic issues are brought up and adequately discussed.

  6. Judith you say
    “There are two major flaws in the design of the IPCC attribution experiments:

    ■inverse modeling that tunes the model and forcing to reproduce the 20th century surface temperature observations
    ■lack of account for uncertainty in the external forcing data”

    Let me put it another way. The models used by climate scientists are non-validated, and tell us almost nothing about what is happening in the real world.

    Solar magnetic forcings are not understood at all. By only looking at the solar forcings we understand, we are bound to get the wrong answer.

  7. Dr. Cury,
    Two questions:
    1) So tell me again why we are facing ‘global climate disruption’?
    2) Will you please make sure we hear about the publication date of the book your work deserves to be turned into?

  8. Dr. Curry,
    Sorry about the typo. It is early, our cats are using the back of my chair as a jungle gym, and I am waiting on my first coffee.

  9. Dr. Curry,
    The link for the word “debate” does not seem to take one to the debate about satellite sensors.

    “Even in the satellite era, there is still debate regarding the calibration of satellite sensors and its impact on decadal scale trends.”

  10. “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”

    Dr Curry,

I agree that this is probably the most important statement in AR4 (or at least the one most retained by the media hence the public). However, I think it is not so much the certainty, which you rightly question, but the little word “MOST” that is subject to wholesale misinterpretation.

    Nearly everyone I speak to seems to have presumed this to mean 80 or 90% !

    Indeed I think that’s probably what I interpreted it to mean to before digging into it.

    IPCC is very clear in defining the term as >50%, but few seem to have read that and somewhere along the lines this seems to have entered into the public conscience as “the vast majority”.

    Indeed the whole issue has further been reduced to a binary choice: is it due to man or not? Yes or No?

It is ironic to think where the whole argument may be today if the IPCC had taken the 5% uncertainty the other side of the line and concluded that “most of the observed increase in global average temperatures since the mid-20th century is very likely due to natural causes.”

    We could all have drawn the binary conclusion it was “not due to man” and packed our collective carbon baggage and gone home.

Indeed, your argument notwithstanding, this is within their margin of uncertainty. It is, in fact, pretty meaningless to be so certain about such a vague conclusion. But this is wordsmithery, the art of politicians and spin doctors, and why we all need to realise that above all and by conception the IPCC is an “intergovernmental” political body.

    This big lie is that it is presented as a scientific body.

  11. “This agreement is accomplished through each modeling group selecting the forcing data set that produces the best agreement with observations, along with model kludges that include adjusting the aerosol forcing to produce good agreement with the surface temperature observations.”
yet when one argues that the models are “tuned” to match 20th century temps, this is denied, by e.g. Michael Schlesinger in my debate with him, or others. Not only model tuning per se, but individual model runs that give odd results are rejected for purposes of scenario generation.

• Craig, it seems like the main tuning is done for the current climatology (say since 1980), when the best data is available. The main tuning to the 20th century temps seems to be the forcing, rather than the actual model parameters (although according to what the IPCC says, some of that is done also). The “miracle” occurs when all of the climate models reproduce the 20th century decadal-scale surface temperature trends, even though the models have factor of 2 differences in sensitivity and use wildly different forcing data sets.

• I include parameter tuning and choice of forcing data under “tuning”. I agree that the “miracle” would be simply implausible if one put together a GCM and simply ran it without tuning. Much more likely, a model that freezes the tropics alerts the modelers that “something is wrong,” which they then go fix.

  12. Judith

    I just want to echo the comments of some other posters. Your blogs clearly involve a good deal of time and effort on your part, and the fact that you engage with posters is truly commendable. And, as others have noted, many of the issues that you are tackling are blindingly obvious to anyone with an open mind. I hope you can maintain this level of excellence.

    Kind regards

    Gary

  13. David L. Hagen

    Judith
    Satellite calibration/errors have a major impact on solar vs anthropogenic causation. See:
    43) Nicola Scafetta and Richard Willson, “ACRIM-gap and Total Solar Irradiance (TSI) trend issue resolved using a surface magnetic flux TSI proxy model”, Geophysical Research Letters 36, L05701, doi:10.1029/2008GL036307 (2009). PDF; Supporting material PDF

    Both ‘mixed’ composites demonstrate a significant TSI increase
    of 0.033 %/decade between the solar activity minima of 1986 and 1996, comparable to the 0.037 % found in the ACRIM composite. . . .Current climate models [Intergovernmental Panel on Climate Change, 2007] have assumed that the TSI did not vary significantly during the last 30 years and have therefore underestimated the solar contribution and overestimated the anthropogenic contribution to global warming.

    45) N. Scafetta, “Total solar irradiance satellite composites and their phenomenological effect on climate,” In press on a special volume for the Geological Society of America. (2009).

    See also NIPCC 2009 Climate Change Reconsidered, Chapter 5, Solar Variability and Climate Cycles, 5.2 Irradiance PDF (1.1 MB)

    The results of these analyses indicated a much stronger statistical relationship between SATs and TSI, as opposed to SATs and CO2. Solar forcing generally explained well over 75 percent of the variance in decadal-smoothed seasonal and annual Arctic SATs, while CO2 forcing explained only between 8 and 22 percent of the variance.

    IPCC’s inference of anthropogenic forcing is weak compared to the stronger correlations with natural solar variations, ocean oscillations, etc.
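    For scale, the 0.033 %/decade TSI trend quoted above can be converted into an approximate global-mean radiative forcing. This is a back-of-envelope sketch only: the mean TSI value and the 0.3 planetary albedo are assumed round numbers, not figures taken from the Scafetta and Willson paper.

```python
# Convert a quoted TSI trend (percent per decade) into an approximate
# global-mean radiative forcing. Assumed round numbers, for scale only.
TSI_MEAN = 1361.0                 # W/m^2, approximate solar "constant"
TREND_PCT_PER_DECADE = 0.033      # figure quoted from Scafetta & Willson

delta_tsi = TSI_MEAN * TREND_PCT_PER_DECADE / 100.0   # W/m^2 per decade

# Divide by 4 (sphere intercepts sunlight as a disc) and multiply by
# (1 - albedo) to express the change as a global-mean forcing.
forcing = delta_tsi * (1.0 - 0.3) / 4.0

print(f"TSI change: {delta_tsi:.2f} W/m^2 per decade")
print(f"Equivalent forcing: {forcing:.3f} W/m^2 per decade")
```

    With these assumptions the quoted trend works out to roughly 0.45 W/m^2 per decade at the top of the atmosphere, i.e. on the order of 0.08 W/m^2 per decade of global-mean forcing, which gives a sense of the stakes in the ACRIM-gap dispute.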

  14. “Separate draft guidance notes on the treatment of uncertainty, presented in Busan by IPCC working group co-chairs, suggest that, where evidence and understanding are overwhelming, IPCC authors could jettison uncertainty qualifiers altogether and present research findings as statement of fact. They should proceed with extreme caution. In a politically charged policy area, such interpretation is better done by policy-makers and society at large. To emphasize a remote possibility is probably a better strategy for scientists than to gloss over it altogether.”

    http://www.nature.com/nature/journal/v467/n7318/full/467883a.html

    Dr. C, this is slightly OT, but could you please parse what Nature magazine is trying to convey by this ambiguous passage?

    • I think you have raised a very interesting point here. I don’t really understand what they are saying either.

      I find that lots of studies are just very poorly written. I mentioned here previously that even after reading the same passage four times I wasn’t sure what conclusion the authors had come to (regarding sea levels), but the possibilities were diametrically opposed to each other: they had either definitely gone up or they had definitely gone down.

      I think written scientific English generally leaves much to be desired these days; add in those technical authors more comfortable with computer graphics than words and you have a recipe for confusion. I date this lack of lucidity to around 1985, when computers replaced the need to draft your article many times before laboriously transcribing it.

      tonyb

    • There are new uncertainty guidelines for the IPCC; I’m digging into that over the next few days.

    • Uncertainty now becomes certainty. That’ll convince us.

  15. In an interview published March 29, 2010 James Lovelock said that the major climate centres around the world are more than well aware how weak their science is. Further, he says that talking to them privately, they do not understand the physics of clouds and aerosols, important drivers of climate, and they are “scared stiff” about that.

    http://www.guardian.co.uk/environment/blog/2010/mar/29/james-lovelock

    If Lovelock is correct, Judith Curry is not the only mainstream climate scientist asking important questions, but she asks them publicly.

    • Don B: Lovelock is a fascinating, outspoken fellow to be sure. I’m glad he’s around.

      However, his pronouncements are decidedly a mixed bag. For instance, in the interview you link, Lovelock speaks of the need “for a more authoritative world,” by which he means a more authoritarian world in which democracy and egalitarianism are pushed aside as society enters the war footing necessary to respond to a crisis like climate change.

      Elsewhere Lovelock declares that it’s too late to stop climate change. In twenty years the planet overheats irreversibly, no matter what we do, and most of humanity will die off by the end of the century. In the meantime, his advice is to “enjoy life while you can.” Thanks, Jim!

      Who knows? Lovelock could be right. I do agree with Lovelock that catastrophic climate change, if that is the case, is a crisis that humanity is poorly equipped to deal with, given the uncertainties, the long time scale, and the unclear solutions.

      • Lovelock, Ehrlich and Schneider are the troika who helped sell the idea of catastrophism as an underlying acceptable theme of science in the public square.
        The bizarre anthropomorphic idea of Gaia is great for SciFi story lines but should have been laughed out of any serious research. Ehrlich and Schneider made great careers and fortunes misrepresenting the problems and options facing us.
        None of them were ever right in the sense of predicting or solving anything.

      • Hunter and Huxley: To say that Lovelock is a “mixed bag” is being very polite. However, when he says what I paraphrased above, it undercuts his catastrophism theme, introducing uncertainty, suggesting that he is telling the truth. Why would he make up a lie to undercut his message?

      • Pere Lovelock may get frozen off his ass when they both get stranded at the snowy pass on the road to Damascus.
        =============

    • @Don B

      “but she asks them publicly.”

      Yes indeed – that is the key. And all I’ve ever asked :)

  16. Roger Andrews.

    The SSTs are not worth the pixels they are written on. They are incredibly sparse on the ground anyway, and this lack of data gets even worse the further back in time you go and the further you move away from the main shipping lanes, which themselves are very poorly represented anyway.

    Their accuracy is also highly suspect. I knew someone responsible for taking bucket measurements and he fell about laughing at the idea that Hadley sell data to gullible organisations based on the extremely random and inaccurate samplings taken by himself and many others like him (accepting that scientific expeditions did things differently).

    We simply can’t extrapolate data back to 1850 or even 1950 and believe they have any scientific merit.

    I reserve judgement on the SST satellite records. I know the sea level measurements well, and they are inaccurate by up to 6 cm (those of us who use them would say 15 cm). Whether there is the same potential for inaccuracy in satellite records of sea surface temperatures, I can’t say.

    Perhaps an article on SSTs, Judith? You did a post on the Southern Ocean, if I remember correctly, which you were brave enough to throw to the ravenous lions over at WUWT (I was one of them).

    Another great article; you are maintaining a very high standard.

    tonyb

    • I’m definitely digging into the SST issue, will have a post or two at some point on this

      • Judith: I look forward to your post on SST datasets. A few months ago, I put together a post about the four datasets (ERSST.v3b, HADISST, HADSST2, and Reynolds OI.v2) used in the GISS, Hadley Centre and NCDC surface temperature products, covering their basic differences.

        http://bobtisdale.blogspot.com/2010/07/overview-of-sea-surface-temperature.html

        There are some other subtle differences between the Hadley Centre and NCDC SST data that I didn’t mention.

        And thanks for the link in the post above.

      • Thanks Bob, this is a very useful summary. Most of the differences before 1950 aren’t all that subtle. I am glad to see that people are looking at the SST datasets (whereas most of the focus has been on the land data sets).

    • Roger Andrews

      I agree that SSTs aren’t top-quality data but I’m not prepared to dismiss them out of hand. Unadjusted SSTs and air temperatures in fact show some very interesting relationships, but I don’t have time to go into that right now. Maybe in a later posting.

      Regarding sea levels, I can get only about 100 mm of sea level rise since 1900 from the tide gauge records (compared to the IPCC’s estimate of, as I recollect, ~170 mm). What causes the difference? “Corrections”, again.

  17. David L. Hagen

    Judith
    In his presentation, Vaclav Klaus cited the following interesting paper:
    Fitting of Global Temperature Change from 1850 to 2009 using Random Walk Model, YAN Shao Min, WU Guang, Guangxi Sciences 2010, 17 (2) 148-150.

    … the random walk model could fit both the temperature walk model and the CRUTEM3 temperature, which provided an alternative approach for modeling of temperature, and suggested that global temperature change could be mainly due to the random mechanism.

    What would IPCC need to do in AR5 to justify their claim for >90% confidence in anthropogenic global warming causation, when by eye, this Random Walk model seems to do pretty well?
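    On the random-walk point, a quick way to gauge whether a drift like the observed 20th-century warming is surprising under a pure random walk is Monte Carlo. The record length, step size, and net rise below are assumed round numbers for illustration only, not values from the Yan and Wu paper.

```python
import random

random.seed(42)

def random_walk_endpoint(n_years, sigma):
    """Endpoint of a walk with i.i.d. Gaussian annual steps."""
    return sum(random.gauss(0.0, sigma) for _ in range(n_years))

# Assumed, illustrative numbers: ~160-year record, ~0.1 C interannual
# step, ~0.8 C observed net change since the mid-19th century.
N_YEARS, STEP_SIGMA, OBSERVED_RISE = 160, 0.1, 0.8

trials = 10_000
hits = sum(abs(random_walk_endpoint(N_YEARS, STEP_SIGMA)) >= OBSERVED_RISE
           for _ in range(trials))
print(f"Fraction of pure random walks drifting >= {OBSERVED_RISE} C: "
      f"{hits / trials:.2f}")
```

    With these numbers roughly half of the unforced walks drift at least that far, which is why a random walk can “fit” the record by eye. Of course, fitting says nothing about mechanism; it only raises the bar for how much of the trend detection studies must show is inconsistent with internal noise.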

  18. Judith: It is clear that the IPCC has built a house of cards around the flawed idea that every 1-2 tenths of a degree of change in global temperature in the 20th century can be explained by changes in anthropogenic and natural forcings. Do you really expect any significant fraction of the IPCC or your peers to acknowledge the problem in a way that reaches policymakers or the public?

    I don’t understand what type of natural variability the IPCC’s models project. (I’m guessing that cyclical or non-cyclical variability with a longer period than ENSO and amplitude similar to a strong ENSO isn’t known.) Could you suggest a reference?

    The deepwater that is generated in the Arctic and Antarctic has tremendous potential to cause fluctuations in our climate if there is any variability in its rate or location of its return to the surface, especially if the change is amplified by changes in clouds. In my ignorance of the true causes of the ENSO, I think about El Nino as a slowing down of the return of deep water to the surface of the Western Pacific and La Nina as a rebound from that process. Whether or not this is an important factor in ENSO, there is certainly no reason why periodic or chaotic changes in ocean currents can’t produce changes of longer periods than ENSO.

    • Frank, I don’t know of a single reference that discusses this overall, unfortunately. The main multi-decadal oscillations of relevance are the North Atlantic Oscillation, Atlantic Multidecadal Oscillation, Pacific Decadal Oscillation, and North Pacific Gyre Oscillation (googling these words will turn up some general info). I will do a post on these at some point.

  19. Leonard Weinstein

    Judy,
    It appears that the more you get into the details, the more skeptical you become that the case for CAGW is settled. I am sure there is some human effect, especially locally near large farming areas, cities, deforested areas, and industrial sites. I also would guess much of the atmospheric increase in CO2 since 1940 is due to human activity. I even would guess that a part of the modest average temperature increase since 1940 over the entire world is human caused. However, if, as appears to be happening, the temperature trends down over the next decade or so, would you agree the case for action is closed?

    • Leonard, the case for action is not closed. I have a post on this in the pipeline, looks like Nov 3 will be the right time in the sequence.

    • Leonard

      Even the Romans recognised the effects of UHI 2000 years ago, when the citizens implored Nero to erect high buildings and narrow streets to mitigate the effects. Bad news for the ice sellers and those who rented their hilltop villas to Rome’s leading citizens in the summer. But it illustrates that UHI can affect an urban area by up to 5 degrees or so, depending on circumstances, this figure being most likely in still weather on a winter’s night.

      Some cities would positively benefit from the warming, while others would be overloaded by it.

      I would argue that we ought to try to mitigate proven UHI rather than theoretical CO2.

      tonyb

  20. Christophorus

    I look forward to seeing a post on climate sensitivity. Over at Skeptical Science, I’ve just witnessed a more than usually acerbic debate focused on the notion that a strong MWP and LIA imply high climate sensitivity and therefore automatically predict strong warming secondary to CO2 forcing. Paradoxically, when you look at the anthropogenic and natural radiative forcings (IPCC AR4 Section 2.1), the negative aerosol and cloud albedo terms, when summed, approach the CO2 forcing in magnitude. That aside, the AR4 data show extraordinary margins of error, especially in relation to cloud albedo. Somehow, the argument that a strong LIA/MWP, with consequent high climate sensitivity, automatically forebodes a powerful CO2 forcing over coming decades seemed to oversimplify the uncertainties. The argument seemed to have a somewhat circular ‘heads I win, tails you lose’ quality.

    At any rate, any comment around climate sensitivity and its validity as a predictor of future scenarios addressed in a less adversarial spirit would be welcome.

    • This argument about a strong MWP and LIA implying strong sensitivity drives me nuts. It implies that the MWP and LIA are forced. If they are natural internal oscillations, then this would imply lower sensitivity to CO2. I am planning several threads on climate sensitivity, but I will need the xmas break to put these together.

  21. David L. Hagen

    Judith
    Robert E. Levine summarizes a number of these IPCC flaws in:
    Climate Model Deception – See It for Yourself. He coined the description “Conceal the flaws.”

    • Well, I don’t see how these statements constitute concealing the flaws. This list of statements, out of context and not presented as part of an overall argument, isn’t all that convincing to me. One of the things I want to address in Part IV is why the many critiques of the IPCC models etc. haven’t had much impact.

      • That was my reaction. I thought Levine was building up to some large refutation, then the article ended.

        I’m curious about the uncertainties and vagueness of the listed statements, but without more context and information of what depends on what, it’s hard to draw a conclusion.

        I’d love to read a good overview of the AGW arguments (not the SPM) that lays out the knowns and unknowns clearly at the level of a decent layman’s book like Stephen Jay Gould’s “Wonderful Life.”

        Dr. Curry: I think you’d be great for that task, if it were something you wanted to do.

        In any event, someone on the climate change side ought to be working on this. There’s far too great a gap between technical and summary portions of the IPCC docs. Having Gavin Schmidt & Co. dole out information in their polemical, untrustworthy style at RealClimate isn’t the answer though.

  22. Christophorus

    Thanks for the rapid reply :-)

    New year will bring some interesting reading.

  23. Judy

    Have you seen the recent posts at Jeff Id’s Tav? This seems to bear on the reliability of the models, but I have to confess the math is beyond me.

    see http://noconsensus.wordpress.com/2010/10/21/456345/#more-10680

    Gary

  24. Third Assessment Report: “In climate research and modeling, we should recognize that we are dealing with a coupled nonlinear chaotic system, and therefore that long-term prediction of future climate states is not possible.”

    Did they forget about this in AR4 (and now AR5)?

    Just because you have bigger and faster computers, doesn’t mean that this does NOT still hold…..

    (Why is it so hard to locate and download the earlier reports? The current one is quite awkward as well.)

  25. OT – I am continually impressed by this blog and Dr. Curry. If this dialogue was prevalent in the IPCC, I could be persuaded to their science as science. I truly don’t understand how you can carry out your regular duties as well as these excellent expositions!

    With Great Respect,
    A professional engineer in Texas.

  26. Everyone agrees that climate is warming and that humans have some hand in it. In his OP-Eds, E&E paper (July Aug 2007) and recent debate with Andy Dessler (11 Oct 2010, UVA Law School), Prof. Lindzen says that these findings are ‘trivial.’ The question remains about climate sensitivity. How much of the recent warming is driven by internal variability with multidecadal periods? What is the sign of the aerosol-cloud feedback?

    I find Murphy et al. (JGR, 114, 2009) very informative about the energy balance of the earth. The First Law of Thermodynamics, the satellite data and the ocean heat content data make it clear that CO2 added to the atmosphere by humans has made a huge amount of energy available to the climate and ocean systems. There is much more energy rattling around in the Earth/Atmosphere/Ocean as a result of added CO2 than is needed to explain the observed atmospheric warming. Most of that energy is in the oceans. The ways that nature deals with the energy are not completely evident to us at this time. The ocean is deep, large and largely opaque. We cannot see it all and cannot yet track it all (see Trenberth’s lament). But we do know that it is there. And we know that it will continue to accumulate. I think that figure 4 shows a negative slope to outgoing shortwave as temperature goes up. Does this not kill Prof. Lindzen’s argument (hope) for a negative cloud feedback balancing the CO2 forcing?

    So I am puzzled by some of the distinctions that Drs. Curry and Spencer are making about the internal variability (AMO, PDO….). Somehow, between the lines, we are to understand that if the warming is in phase with the AMO and PDO superposition (as if we will know the phases anytime in the next century!), then the warming is not driven by CO2-trapped energy. The excess CO2-trapped energy is there, and the vast majority is in the oceans. How exactly nature partitions it to melt ice, alter currents, heat the depths will be a mystery for some time. So who would be surprised if some of that energy were incorporated into El Nino, PDO or AMO and released to the atmosphere in phase with those cycles? Being able to correlate atmospheric warming with El Nino, PDO, AMO or other cycles does not show that the energy transferred did not result from anthropogenic CO2.

    Prof. Lindzen agreed early on that this is really a First Law problem. He blogged on the apparent discrepancy between the outgoing long wavelength radiation measured by satellite and that predicted by the models. In his debate with Dessler, he referred to the fact that those data have been revised (in the revision, they agree better with the models). He also acknowledged that the strong criticisms of his recent paper dealing with the energy balance in the tropics have caused him and his coauthor to undertake serious revisions which are not yet public. So, for the moment, the energy balance analysis does not favor the idea that the internal variability of the atmosphere will average to zero over long times.

    The energy balance analysis shows that the atmosphere/ocean/earth system is dealing with large amounts of energy added by anthropogenic CO2 emissions. Business-as-usual merely means that it will have increasing amounts of energy to deal with. The timing of the internal cycles is certainly interesting and will engage scientists for some time. But, it will not change the fundamental fact that incoming and outgoing radiation are not in balance and when they are again, we will be at higher temperature.

    How much higher? How should the body politic deal with the uncertainty (2C to 4.5C for a doubling of CO2 – IPCC and nearly everyone else -Prof. Lindzen is almost up to 0.8C)? This is a fine test for our culture.
    Chuck Wilson

    • I guess you’re one of those who do not believe that the climate has been cooling for most of the past decade. The rest, if I may borrow from religious history, is mere commentary.

      • Mark,
        There are four temperature series that are widely regarded as representing the global average temperature: Had-CRU, NASA GISS, NOAA and the Japanese Meteorological Agency. Then there are the interpretations of the NOAA Microwave Sounding Units (satellites). Folks like S. Fred Singer used to tell us that the satellites were the most accurate representations of global temperature. That was when he touted the University of Alabama retrievals that disagreed with NASA GISS etc. I don’t know what Fred says these days, now that the re-retrieved satellite temperatures show warming right up to today as well. So if you don’t like Jones, Hansen and Karl, you can click over to Roy Spencer’s web site. He is skeptical of AGW, but his satellite temperature retrievals still show warming. Divide up the last 60 years into decades any way you want, and you will find that every decade’s global average temperature is warmer than the one before it. (Spencer does not go back 60 years – or I am older than I think I am…)

        Of course you can find years that are cooler than the preceding decade, but that would not be climate, would it? That would be weather.
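        The decade-by-decade comparison above can be sketched in a few lines. The anomaly series below is synthetic (a small linear trend plus alternating noise), standing in for a real dataset purely to show the mechanics of how weather-scale wiggles average out at the decadal scale.

```python
def decadal_means(annual, start_year):
    """Average annual anomalies over calendar decades."""
    decades = {}
    for i, value in enumerate(annual):
        decade = (start_year + i) // 10 * 10
        decades.setdefault(decade, []).append(value)
    return {d: sum(v) / len(v) for d, v in sorted(decades.items())}

# Synthetic 60-year anomaly series (deg C): a 0.01 C/yr trend plus
# alternating +/-0.15 C "weather" noise. Illustrative only.
anomalies = [0.01 * i + (-1) ** i * 0.15 for i in range(60)]

means = decadal_means(anomalies, 1950)
for decade, m in means.items():
    print(decade, round(m, 3))
```

        Individual years in this toy series jump around by 0.3 C, yet every decadal mean exceeds the one before it: the noise averages toward zero while the trend accumulates, which is the weather-versus-climate distinction in miniature.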

        Regards,
        Chuck Wilson

      • Chuck, you haven’t really given us a full answer here. What is the conclusion, taking into account all the temperature series you mention?
        Is it the case that global average temperatures have flattened out over the last decade or so, or not?

    • Pardon me, I should have typed Figure 2 and not Figure 4 in my discussion of Murphy et al.
      Regards,
      Chuck Wilson

  27. Re: Chuck Wilson: again you are assuming that the PDO etc. cycles are strictly internal oscillations. What if they are driven by the sun, etc.? Especially since they predate the 1950s rise in GHG levels. An energy imbalance is not proof that it is due to GHGs.

  28. Richard S Courtney

    Dr Curry:

    Sincere thanks for your two articles on ‘Overconfidence in IPCC’s detection and attribution’.

    I notice you say you have a Part 3 prepared and perhaps the comment I now provide would be more appropriate when that appears. But I now provide my comment in case it assists your Part 3.

    I write to address the underlying assumption in all the detection and attribution studies. None of these studies can have any confidence until their main underlying assumption is validated.

    The climate models are based on several assumptions that may not be correct. The basic assumption used in the models is that change to climate is driven by change to radiative forcing. And it is very important to recognise that this assumption has not been demonstrated to be correct.

    Indeed, it is quite possible that there is no force or process causing climate to vary. I explain this as follows.

    The climate system is seeking an equilibrium that it never achieves. The Earth obtains radiant energy from the Sun and radiates that energy back to space. The energy input to the system (from the Sun) may be constant (although some doubt that), but the rotation of the Earth and its orbit around the Sun ensure that the energy input/output is never in perfect equilibrium.

    The climate system is an intermediary in the process of returning (most of) the energy to space (some energy is radiated from the Earth’s surface back to space). And the Northern and Southern hemispheres have different coverage by oceans. Therefore, as the year progresses the modulation of the energy input/output of the system varies. Hence, the system is always seeking equilibrium but never achieves it.

    Such a varying system could be expected to exhibit oscillatory behaviour. And it does: mean global temperature rises by 3.8 deg.C from June to January each year and falls by the same amount.

    Importantly, some oscillations could be harmonic effects which, therefore, have periodicities of several years. Of course, such harmonic oscillation would be a process that – at least in principle – is capable of evaluation.

    However, there may be no process because the climate is a chaotic system. Therefore, the observed oscillations (ENSO, NAO, etc.) may not be harmonic effects but could be observation of the system seeking its chaotic attractor(s) in response to its seeking equilibrium in a changing situation.

    Very importantly, there is an apparent ~900 year oscillation that caused the Roman Warm Period (RWP), then the Dark Age Cool Period (DACP), then the Medieval Warm Period (MWP), then the Little Ice Age (LIA), and the present warm period (PWP).

    All the observed rise of global temperature in the twentieth century could be recovery from the LIA that is similar to the recovery from the DACP to the MWP.

    And the ~900 year oscillation could also be the chaotic climate system seeking its attractor(s). If so, then all global climate models and ‘attribution studies’ utilized by IPCC and CCSP are based on the false premise that there is a force or process causing climate to change when no such force or process exists.

    So, the assumption that climate change is driven by variations in radiative forcing needs to be substantiated for any confidence to be placed in the detection and attribution studies of the causes of climate change.
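    The picture sketched above – irregular, bounded oscillation arising in a system whose parameters never change – can be illustrated numerically. The Lorenz equations below are a standard toy chaotic system, not a climate model; the step size and parameters are the usual textbook choices, used here only to show unforced variability around an attractor.

```python
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Advance the Lorenz system by one forward-Euler step.
    All parameters are constant: there is no time-varying forcing."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

x, y, z = 1.0, 1.0, 1.0
trajectory = []
for _ in range(20_000):          # 200 model time units
    x, y, z = lorenz_step(x, y, z)
    trajectory.append(x)

# x swings irregularly between the two lobes of the attractor (both
# signs) yet remains bounded, with nothing external driving it.
print(min(trajectory), max(trajectory))
```

    The point of the sketch is the one Richard makes: observing oscillations, even large ones, does not by itself establish that any external force or process is driving them.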

    Richard

    • In two previous threads I wrote about the NAP (North Atlantic Precursor), showing good correlation with CETs. Data for the NAP for the period pre-1600 are sparse and incomplete, so using some of the data available I plotted NAP 11-16, and it does indeed indicate that there was a MWP.

  29. Warning: off topic.

    Judith, as a climate scientist you are in a unique position to comment on what one could call the Reward System in climate science.

    I am sure you have heard the stories of careers being made and broken, grants being refused or accepted, scholarships, attention being given or refused, and so on and so forth.

    What mechanisms in the minefield of science/business/politics exist, and what impact might this have on the objectivity of science?

    It would be a very illuminating post for many of us “outsiders”, I believe.

    All the best and keep up the good work.
    Oslo.