Overconfidence in IPCC’s detection and attribution. Part IV

by Judith Curry

Last October, I introduced this topic in Part I and followed up with Part II and Part III, which formed an early draft of an argument I was using in a paper entitled “Climate Science and the Uncertainty Monster.”  I’ve gotten the reviews back on my paper, and this post is a draft of the revised version of that particular section.

5.  Uncertainty in the attribution of 20th century climate change

“Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk.” John von Neumann

Arguably the most important conclusion of IPCC AR4 is the following statement: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”  The IPCC’s conclusion on attribution is reached using probabilistic causation, whereby an ensemble of simulations is used to evaluate the agreement between observations and simulations conducted with and without anthropogenic forcing. Formal Bayesian reasoning is used to some extent by the IPCC in analyzing detection and attribution.  The reasoning process used by the IPCC in assessing likelihood in its attribution statement is described by this statement from the AR4 (Chapter 9):

“The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgment is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change. The assessment approach used in this chapter is to consider results from multiple studies using a variety of observational data sets, models, forcings and analysis techniques. The assessment based on these results typically takes into account the number of studies, the extent to which there is consensus among studies on the significance of detection results, the extent to which there is consensus on the consistency between the observed change and the change expected from forcing, the degree of consistency with other types of evidence, the extent to which known uncertainties are accounted for in and between studies, and whether there might be other physically plausible explanations for the given climate change. Having determined a particular likelihood assessment, this was then further downweighted to take into account any remaining uncertainties, such as, for example, structural uncertainties or a limited exploration of possible forcing histories of uncertain forcings. The overall assessment also considers whether several independent lines of evidence strengthen a result.”

The above statement raises a number of questions about the IPCC’s attribution statement.  Given that “very likely” corresponds to a probability of greater than 90%, what was the original likelihood assessment from which this apparently minimal downweighting occurred?  How is this minor downweighting justified in the context of substantial uncertainties in the forcings and in the models themselves? What other physically plausible explanations for the given climate change were considered?  And finally, what does “most” mean – 51% or 99%?  The high likelihood attached to the imprecise “most” seems rather meaningless.  From the IAC:  “In the Committee’s view, assigning probabilities to imprecise statements is not an appropriate way to characterize uncertainty.”

This section critically evaluates the IPCC’s attribution argument in the context of uncertainty.

5.1  IPCC’s detection and attribution argument

“What we observe is not nature itself, but nature exposed to our method of questioning.” Werner Karl Heisenberg

The problem of attributing climate change is intimately connected with the detection of climate change. A change in the climate is ‘detected’ if its likelihood of occurrence by chance due to internal variability alone is determined to be small. Knowledge of internal climate variability is needed for both detection and attribution. Because the instrumental record is too short to give a well-constrained estimate of internal variability, internal climate variability is usually estimated from long control simulations from coupled climate models.  The IPCC AR4 formulates the problem of attribution to be: “In practice attribution of anthropogenic climate change is understood to mean demonstration that a detected change is ‘consistent with the estimated responses to the given combination of anthropogenic and natural forcing’ and ‘not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of the given combination of forcings’.”
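To make the detection step concrete, here is a minimal sketch (in Python, with invented numbers rather than output from any actual climate model) of how an observed trend is tested against internal variability estimated from a control run:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a long unforced control simulation: AR(1) red noise with
# invented parameters, playing the role of internal variability.
n_years = 1000
noise = np.zeros(n_years)
for t in range(1, n_years):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(scale=0.1)

# Distribution of 50-year trends that internal variability alone produces,
# estimated from overlapping segments of the control run.
window = 50
years = np.arange(window)
control_trends = np.array([np.polyfit(years, noise[i:i + window], 1)[0]
                           for i in range(n_years - window)])

observed_trend = 0.015  # illustrative "observed" trend (deg C per year)

# Detection: the change is "detected" if internal variability alone
# rarely produces a trend at least this large.
p_value = np.mean(np.abs(control_trends) >= observed_trend)
print(f"fraction of control-run trends >= observed: {p_value:.3f}")
```

Note that the test is only as good as the control run: if the model underestimates the amplitude of internal variability (the factor-of-two caveat quoted in premise #2 below), the distribution of control-run trends is too narrow and detection is declared too easily.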

The IPCC AR4 (WG I, Chapter 9) describes two types of simulations that have been used in detection and attribution studies. The first method is a ‘forward calculation’ that uses best estimates of external changes in the climate system (forcings) to simulate the response of the climate system using a climate model. These ‘forward calculations’ are then directly compared to the observed changes in the climate system.  The second method is an ‘inverse calculation’, whereby the magnitude of uncertain model parameters and applied forcing is varied in order to provide a best fit to the observational record.
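To make the distinction concrete, here is a toy contrast (all numbers invented, not taken from any model or dataset) in which the inverse calculation treats an uncertain aerosol amplitude as a free parameter to be fit to the observed record:

```python
import numpy as np

years = np.arange(1900, 2001)
ghg = 0.008 * (years - 1900)                   # assumed GHG warming response
aer = -0.004 * np.clip(years - 1950, 0, None)  # assumed aerosol cooling response
obs = 0.006 * (years - 1900)                   # stand-in "observed" anomalies

# Forward calculation: forcings fixed a priori; the model output is then
# compared to observations as-is.
forward_misfit = np.sqrt(np.mean((ghg + aer - obs) ** 2))

# Inverse calculation: the uncertain aerosol amplitude is varied to give
# the best fit to the observational record.
scales = np.linspace(0.0, 2.0, 201)
misfits = [np.sqrt(np.mean((ghg + s * aer - obs) ** 2)) for s in scales]
best_scale = scales[int(np.argmin(misfits))]

print(f"forward misfit: {forward_misfit:.3f} K")
print(f"best-fit aerosol scale: {best_scale:.2f}, misfit: {min(misfits):.3f} K")
```

The point of the sketch is that the inverse route guarantees a good fit to the 20th century record almost regardless of the model’s other properties, which matters for the circularity argument developed in Section 5.3.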

The IPCC’s detection and attribution analysis, which is the basis of the “very likely” attribution statement in the AR4, is based upon the following argument:

  1. Detection. Climate change in the latter half of the 20th century is detected based upon an increase in global surface temperature anomalies that is much larger than can be explained by natural internal variability.
  2. Confidence in detection. The quality of agreement between model simulations with 20th century forcing and observations supports the likelihood that models are adequately simulating the magnitude of natural internal variability on decadal to century time scales.  From the IPCC AR4:  “However, models would need to underestimate variability by factors of over two in their standard deviation to nullify detection of greenhouse gases in near-surface temperature data, which appears unlikely given the quality of agreement between models and observations at global and continental scales (Figures 9.7 and 9.8) and agreement with inferences on temperature variability from NH temperature reconstructions of the last millennium.”
  3. Attribution. Climate model simulations for the 20th century climate that combine natural and anthropogenic forcing agree much better with observations than simulations that include only natural forcing.  From the IPCC AR4: “The fact that climate models are only able to reproduce observed global mean temperature changes over the 20th century when they include anthropogenic forcings, and that they fail to do so when they exclude anthropogenic forcings, is evidence for the influence of humans on global climate.”
  4. Confidence in attribution. Detection and attribution results based on several models or several forcing histories suggest that the attribution of a human influence on temperature change during the latter half of the 20th century is a robust result.  From the IPCC AR4: “Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.”

Whereas all of the IPCC AR4 models agree that the warming observed since 1970 can only be reproduced using anthropogenic forcings, models disagree on the relative importance of solar, volcanic, and aerosol forcing in the earlier part of the 20th century (IPCC AR4 WGI Section 9.4.1). The substantial warming during the period 1910-1940 has been attributed by nearly all the modeling groups to some combination of increasing solar irradiance and a lack of major volcanic activity. The cooling and leveling off of average global temperatures during the 1950s and 1960s is attributed primarily to aerosols from fossil fuels and other sources, whose cooling effect overwhelmed the greenhouse warming during this period.

5.2  Sources of uncertainty

“Not only does God play dice, but sometimes he throws the dice where we can’t see them.”  Stephen Hawking

Attribution of observed climate change by comparing simulated and observed responses will be affected by errors and uncertainties in the prescribed external forcing, and in the model’s capability to simulate both the response to the forcing (sensitivity) and decadal-scale natural internal variability.

Uncertainties in the model and forcing are acknowledged by the AR4 (Chapter 9): “Ideally, the assessment of model uncertainty should include uncertainties in model parameters (e.g., as explored by multi-model ensembles), and in the representation of physical processes in models (structural uncertainty). Such a complete assessment is not yet available, although model intercomparison studies (Chapter 8) improve the understanding of these uncertainties. The effects of forcing uncertainties, which can be considerable for some forcing agents such as solar and aerosol forcing (Section 9.2), also remain difficult to evaluate despite advances in research.”

The level of scientific understanding of radiative forcing is ranked by the AR4 (Table 2.11) as high only for the long-lived greenhouse gases, but is ranked as low for solar irradiance, aerosol effects, stratospheric water vapor from CH4, and jet contrails. Radiative forcing time series for the natural forcings (solar, volcanic aerosol) are reasonably well known for the past 25 years (although these forcings continue to be debated), with estimates further back in time having increasingly large uncertainties.

Based upon new and more reliable solar reconstructions, the AR4 (Section 2.7.1.2) concluded that the increase in solar forcing during the period 1900-1980 used in the TAR reconstructions is questionable, and that the direct radiative forcing due to the increase in solar irradiance is reduced substantially from the TAR.  However, consideration of Table S9.1 in the AR4 shows that each climate model used outdated solar forcing (from the TAR) that was assessed to substantially overestimate the magnitude of the trend in solar forcing prior to 1980.  The IPCC AR4 states: “While the 11-year solar forcing cycle is well documented, lower-frequency variations in solar forcing are highly uncertain.”  “Large uncertainties associated with estimates of past solar forcing (Section 2.7.1) and omission of some chemical and dynamical response mechanisms make it difficult to reliably estimate the contribution of solar forcing to warming over the 20th century.”

The greatest uncertainty in radiative forcing is associated with aerosols, particularly the aerosol indirect effect whereby aerosols influence cloud radiative properties.  Consideration of Figure 2.20 of the AR4 shows that, given the uncertainty in aerosol forcing, the magnitude of the aerosol forcing (which is negative, or cooling) could rival the forcing from long-lived greenhouse gases (positive, or warming). The 20th century aerosol forcing used in most of the AR4 model simulations (Section 9.2.1.2) relies on inverse calculations of aerosol optical properties to match climate model simulations with observations. The only constraint on the aerosol forcing used in the AR4 attribution studies is that the derived forcing should be within the bounds of forward calculations that determine aerosol mass from chemical transport models, using satellite data as a constraint. The inverse method effectively makes aerosol forcing a tunable parameter (kludge) for the model, particularly in the pre-satellite era.  Further, key processes associated with the interactions between aerosols and clouds are either neglected or treated with simple parameterizations in climate model simulations evaluated in the AR4.

Given the large uncertainties in forcings and model inadequacies in dealing with these forcings, how is it that each model does a credible job of tracking the 20th century global surface temperature anomalies (AR4 Figure 9.5)? Schwartz (2004) notes that the intermodel spread in modeled temperature trend, expressed as a fractional standard deviation, is much less than the corresponding spread in either model sensitivity or aerosol forcing; this comparison does not even consider differences in solar and volcanic forcing. This agreement is accomplished through inverse calculations, whereby modeling groups can select the forcing data sets and model parameters that produce the best agreement with observations.  While some modeling groups may have conducted bona fide forward calculations without any a posteriori selection of forcing data sets and model parameters to fit the 20th century time series of global surface temperature anomalies, documentation of each model’s tuning procedure and rationale for selecting particular forcing data sets is generally unavailable.

The inverse calculations can mask variations in sensitivity among the different models.  If a model’s sensitivity is high, it is likely to require greater aerosol forcing to counter the greenhouse warming, and vice versa for a low model sensitivity. Schwartz (2004) argues that uncertainties in aerosol forcing must be reduced at least three-fold for uncertainty in climate sensitivity to be meaningfully reduced and bounded. Further, kludging and neglect of ontic uncertainty in the tuning can result in a model that is over- or under-sensitive to certain types or scales of forcing.
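A toy energy-balance calculation (invented numbers) makes the compensation explicit: if each model’s simulated warming dT = lambda*(F_ghg + F_aer) is required to match the observed warming, then the aerosol forcing each model “needs” falls directly out of its sensitivity:

```python
import numpy as np

dT_obs = 0.7    # K, illustrative 20th-century warming to be matched
F_ghg = 2.5     # W/m2, assumed well-constrained greenhouse forcing
lambdas = np.array([0.5, 0.7, 0.9, 1.1])  # K/(W/m2), assumed model sensitivities

# Tuning requirement: dT_obs = lambda * (F_ghg + F_aer)
# =>  F_aer = dT_obs / lambda - F_ghg
F_aer_tuned = dT_obs / lambdas - F_ghg

for lam, fa in zip(lambdas, F_aer_tuned):
    print(f"sensitivity {lam:.1f} K/(W/m2) -> tuned aerosol forcing {fa:+.2f} W/m2")
```

Models spanning more than a factor of two in sensitivity can all reproduce the same observed warming in this way, which is why agreement with the 20th century record by itself constrains sensitivity so weakly.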

With regards to the ability of climate models to simulate natural internal variability on decadal time scales, “there has been little work evaluating the amplitude of the main modes of Pacific decadal variability in [coupled climate models]” (IPCC AR4, Chapter 8).  Whereas most climate models simulate something that resembles the Meridional Overturning Circulation (MOC), “the mechanisms that control the variations in the MOC are fairly different across the ensemble of [coupled climate models].”

5.3  Bootstrapped plausibility

“If it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t.  That’s logic!” Charles Lutwidge Dodgson (Lewis Carroll)

‘Bootstrapped plausibility’ (Agassi 1974) occurs when a proposition that has been rendered plausible then lends plausibility to some of its more doubtful supporting arguments. As such, bootstrapped plausibility occurs in the context of circular reasoning, which is fallacious due to a flawed logical structure whereby the proposition to be proved is implicitly or explicitly assumed in one of the premises. This subsection argues that the IPCC’s detection and attribution arguments involve circular reasoning, and that confidence in the evidence and argument is elevated by bootstrapped plausibility.

Consider again the following argument:

  1. Detection. Climate change in the latter half of the 20th century is detected based upon an increase in global surface temperature anomalies that is much larger than can be explained by natural internal variability.
  2. Confidence in detection. The quality of agreement between model simulations with 20th century forcing and observations supports the likelihood that models are adequately simulating the magnitude of natural internal variability on decadal to century time scales.
  3. Attribution. Climate model simulations for the 20th century climate that combine natural and anthropogenic forcing agree much better with observations than simulations that include only natural forcing.
  4. Confidence in attribution. Detection and attribution results based on several models or several forcing histories suggest that the attribution of a human influence on temperature change during the latter half of the 20th century is a robust result.

The strong agreement between forced climate model simulations and observations for the 20th century (premise #3) provides bootstrapped plausibility to the models and the external forcing data. This strong agreement depends heavily on inverse modeling, whereby forcing data sets and/or model parameters are selected based upon the agreement between models and the time series of 20th century observations. Further confidence in the models is provided by premise #4, even though the agreement of different models and forcing datasets arises from the selection of forcing data sets and model parameters by inverse calculations designed to agree with the 20th century time series of global surface temperature anomalies. This agreement is used to argue that “Detection and attribution studies using such simulations suggest that results are not very sensitive to moderate forcing uncertainties.”

Confidence in the climate models, elevated by inverse calculations and bootstrapped plausibility, is used as a central premise in the argument that climate change in the latter half of the 20th century is much larger than can be explained by natural internal variability (premise #1).  Premise #1 underlies the IPCC’s assumption (AR4, Chapter 9) that “Global mean and hemispheric-scale temperatures on multi-decadal time scales are largely controlled by external forcings” and not natural internal variability. The IPCC’s argument has effectively eliminated multi-decadal natural internal variability as a causative factor for 20th century climate change. Whereas each model demonstrates some sort of multidecadal variability (which may or may not be of a reasonable amplitude or associated with the appropriate mechanisms), the ensemble averaging process filters out the simulated natural internal variability, since there is no temporal synchronization of the simulated chaotic internal oscillations among the different ensemble members.
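The filtering effect of ensemble averaging is easy to demonstrate. In the following sketch (Python, invented amplitudes), each member carries the same forced trend plus a 60-year oscillation with a random, unsynchronized phase; averaging damps the oscillation by roughly 1/sqrt(N) while leaving the forced trend intact:

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(100)
trend = 0.01 * years        # common forced signal (invented)
n_members = 20

# Each member: forced trend + 60-year oscillation with a random phase.
members = np.array([
    trend + 0.2 * np.sin(2 * np.pi * years / 60 + rng.uniform(0, 2 * np.pi))
    for _ in range(n_members)
])

ensemble_mean = members.mean(axis=0)
osc_member = np.std(members[0] - trend)    # oscillation in one member
osc_mean = np.std(ensemble_mean - trend)   # oscillation left in the mean

print(f"oscillation amplitude: single member {osc_member:.3f}, "
      f"ensemble mean {osc_mean:.3f}")
```

The ensemble mean thus looks almost purely forced even when every individual member has substantial multidecadal internal variability, which is the sense in which the averaging filters out the simulated natural internal variability.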

The IPCC’s detection and attribution method is meaningful to the extent that the models agree with observations against which they were not tuned and to the extent that the models agree with each other in terms of attribution mechanisms. The AR4 has demonstrated that greenhouse forcing is a plausible explanation for warming in the latter half of the 20th century, but cannot rule out substantial warming from other causes such as solar forcing and internal multi-decadal ocean oscillations, owing to the circular reasoning and to the lack of convincing attribution mechanisms for the warming during 1910-1940 and the cooling during the 1940s and 1950s.

Avoiding bootstrapped plausibility and circular reasoning in detection and attribution arguments can be accomplished by:

  • using the same best estimate of forcing components, from observations or forward modeling, for multi-model ensembles;
  • conducting tests of the sensitivity to uncertainties associated with the forcing datasets using a single model;
  • improving understanding of multi-decadal natural internal variability and the models’ ability to simulate its magnitude; and
  • improving detection and attribution schemes to account for the models’ inability to simulate the timing of phases of natural internal oscillations and the meridional overturning circulation.

The experimental design being undertaken for the CMIP5 simulations to be used in the IPCC AR5 shows some improvements that should eliminate some of the circular reasoning in the AR4 associated with using climate model results to determine 20th century attribution. In the CMIP5 simulations, the use of best estimates of forcing for solar and aerosols is recommended.  The NCAR Community Climate System Model 20th century simulations for CMIP5 (Gent et al. 2011) arguably qualify as completely forward calculations, with forcing data sets selected a priori and no tuning of parameters to the 20th century climate other than the sea ice albedo and the low cloud relative humidity threshold.  The result of NCAR’s CMIP5 calculations is that after 1970, the simulated surface temperature increases faster than the data, so that by 2005 the model anomaly is 0.4°C larger than the observed anomaly.  Understanding this disagreement should provide an improved understanding of the model uncertainties and uncertainties in the attribution of the recent warming.  This disagreement implies that the detection and attribution argument put forth in the AR4, which was fundamentally based upon the good agreement between models and observations, will not work in the context of at least some of the CMIP5 simulations.

5.4  Logic of the attribution statement

“Often, the less there is to justify a traditional custom, the harder it is to get rid of it.”  Mark Twain

Over the course of the four IPCC assessments, the attribution statement has evolved in the following way:

  • FAR:  “The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability. Thus the observed increase could be largely due to this natural variability; alternatively this variability and other human factors could have offset a still larger human-induced greenhouse warming. The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.”
  • SAR: “The balance of evidence suggests a discernible human influence on global climate.”
  • TAR:  “There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.”
  • AR4:  “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”

The attribution statements have evolved from “discernible” in the SAR to “most” in the TAR and AR4. The attribution statements are qualitative and imprecise in the sense of using words such as “discernible” and “most.”  The AR4 attribution statement is qualified with “very likely” likelihood.   As stated previously by the IAC, assigning probabilities to imprecise statements is not an appropriate way to characterize uncertainty.

The utility of the IPCC’s attribution statement is aptly summarized by this quote from a document discussing climate change and national security:

“For the past 20 years, scientists have been content to ask simply whether most of the observed warming was caused by human activities. But is the percentage closer to 51 percent or to 99 percent? This question has not generated a great deal of discussion within the scientific community, perhaps because it is not critical to further progress in understanding the climate system. In the policy arena, however, this question is asked often and largely goes unanswered.”

The logic of the IPCC AR4 attribution statement is discussed by Curry (2011b).  Curry argues that the attribution argument cannot be well formulated in the context of Boolean logic or Bayesian probability. Attribution (natural versus anthropogenic) is a shades-of-gray issue, not a black-or-white, 0 or 1 issue, or even an issue of probability.  Curry argues that fuzzy logic provides a better framework for considering attribution, whereby the relative degrees of truth for each attribution mechanism can range in degree between 0 and 1, thereby bypassing the problem of the excluded middle. There is general agreement that the percentages of warming attributed to natural and anthropogenic causes are each less than 100% and greater than 0%.  The challenge is to assign likelihood values to the distribution of the different combinations of percentage contributions of natural and anthropogenic causes. Such a distribution may very well show significant likelihood in the vicinity of 50-50, making a binary demarcation at the imprecise “most” a poor choice.
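As a simple illustration of the fuzzy framing (the membership functions below are purely hypothetical shapes, chosen only to show the mechanics), the degrees of truth of “anthropogenic” and “natural” attribution can both be substantial at the same time:

```python
import numpy as np

# f = fraction of the observed warming attributed to anthropogenic causes.
f = np.linspace(0.01, 0.99, 99)

# Hypothetical fuzzy membership functions; they need not sum to 1, and
# both mechanisms can be partially "true" at once.
mu_anthro = np.clip((f - 0.2) / 0.6, 0.0, 1.0)   # degree "anthropogenic"
mu_natural = np.clip((0.8 - f) / 0.6, 0.0, 1.0)  # degree "natural"

# Near f = 0.5 both memberships are about 0.5: a 50-50 split is
# substantially true for both mechanisms, something a binary verdict
# on the imprecise "most" cannot express.
i = int(np.argmin(np.abs(f - 0.5)))
print(f"at f = 0.5: anthropogenic {mu_anthro[i]:.2f}, natural {mu_natural[i]:.2f}")
```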

JC note:  This is a technical thread, comments will be moderated for relevance.  I look forward to your comments on this, particularly since I received such helpful comments on my previous draft uncertainty paper.

Some backstory on the uncertainty monster paper.  This paper was submitted to the Bulletin of the American Meteorological Society.  It was a very long and comprehensive paper, addressing many of the uncertainty topics raised at Climate Etc.  The comments from the reviewers suggested shortening and focusing, and hence I have deleted the section on decision making under climate uncertainty, which will be the focus of a future separate paper.

Given the kerfuffle surrounding the PNAS review of Lindzen and Choi, I’ll make a few comments here regarding the review of this paper, which has to be regarded as controversial and contentious since it includes criticisms of the IPCC.  I described my concerns about potential problems with the review process in my submission letter, and requested that they not use any reviewers that were involved in the AR4 or AR5.  I suggested that I would be most appreciative of reviews from some of the authors that I referenced in the bibliography.  Unlike most journals, the AMS does not operate under the ‘pal review’ system, and the editors select the reviewers.  The editors selected two excellent reviewers, who both made extremely helpful comments that are resulting in substantial revision of the paper.  This is the peer review system at its best, IMO.

And finally a comment about “blog science.”  From the previous Parts I, II and III, you can see how my thinking on this has evolved, and a significant element in that evolution has been the comments and discussion on these threads at Climate Etc.


253 responses to “Overconfidence in IPCC’s detection and attribution. Part IV”

  1. “Attribution (natural versus anthropogenic) is a shades-of-gray issue not a black or white, 0 or 1 issue, or even an issue of probability. Curry argues that fuzzy logic provides a better framework for considering attribution, whereby the relative degrees of truth for each attribution mechanism can range in degree between 0 and 1, thereby bypassing the problem of the excluded middle.”

    It certainly isn’t marketed this way by the political wings of the AGW movement of “settled science” and “we must regulate CO2”, is it?? It’s the major reason those associated with alarmism are held in such contempt by so many.

    • You are exactly right.

      The message was promoted as “settled science” and “we must regulate CO2”!

      That is exactly why those associated with false alarmism – world leaders, Al Gore, the UN’s IPCC, the US NAS, UK Royal Society, federal science agencies, and formerly respected research journals like Nature, Science, PNAS, and Proceedings of the Royal Society – are now held in such contempt by so many.

      I am grateful to Professor Curry for her persistence in exposing this scam and the great quotes by John von Neumann et al.

      With kind regards,
      Oliver K. Manuel

    • Today’s news report from the annual meeting of the solar physics division of the American Astronomical Society illustrates the absurd overconfidence of IPCC’s reports:

      “The solar cycle may be going into a hiatus,” Frank Hill, associate director of the National Solar Observatory’s Solar Synoptic Network, said in a news briefing today (June 14).

      “This is highly unusual and unexpected”

      “But the fact that three completely different views of the sun point in the same direction is a powerful indicator that the sunspot cycle may be going into hibernation.”

      “If we are right, this could be the last solar maximum we’ll see for a few decades”

      “That would affect everything from space exploration to Earth’s climate.”

      http://www.space.com/11960-fading-sunspots-slower-solar-activity-solar-cycle.html

      With kind regards,
      Oliver

  2. Agree with cwon: when the assumptions and attributions have not been defined specifically and parameters nailed down, simply pointing to CO2, especially man-made CO2, as the only culprit is stretching the attributions and assumptions to unreasonable levels.

    • Norm Kalmanovitch

      CO2 emissions from fossil fuels have increased in a rather jagged fashion as economics dictated the level of CO2 emissions. The most prominent of these changes resulted from the rapid rise in the price of oil in 1979, which created a four year “year to year” decline in CO2 emissions. The CO2 concentration, according to the sources used by the IPCC, has increased in a smooth, slowly accelerating fashion, reaching a near perfect linear increase of 2ppmv/year for the past decade, which shows no sign of the economically induced changes in CO2 emissions. If CO2 emissions were the prime driver of the observed increase, these variations would be seen on the CO2 concentration curve, especially the four year decline, which occurred after 1958 and therefore should have been detected by the consistent measurements from Mauna Loa Observatory.
      This is not surprising considering that each year about 750 gigatonnes of CO2 is added to the atmosphere and a near equal amount is removed, resulting in this 2ppmv increase, but humans today only contribute 31Gt of CO2, with the other 96% coming from nature, so it is far more likely that observed increases in CO2 are primarily naturally sourced and not from humans. The increase in atmospheric CO2 started long before human CO2 emissions, and the year to year changes in CO2 emissions of less than one gigatonne do not match the 13 gigatonne year to year changes required for the 2ppmv annual increase, so if one were to actually do the math it can be proven that CO2 emissions from fossil fuels are definitely not the prime driver of the observed increase in atmospheric CO2 concentration, making this assumption of the IPCC completely false.

    • Indeed. To pick on one particular “hole below the waterline”, the acknowledged ignorance of cloud albedo variance and effects is enough to scuttle the whole “exclusion” logic base for the reports. I.e., the reason you (the IPCC) can’t explain climate and temperature changes without resort to CO2 increases is that you’ve set aside the prima facie strongest driver and intervening (both independent and dependent characteristics) variable.

      Major FAIL.

  3. Interesting information re: the data sets used for the CMIPs. I’m curious as to why one common data set wasn’t mandated (other than allowing subsets for models which do not include all components).

  4. Norm Kalmanovitch

    The Hadley CRU data shows that the world cooled slightly from 1880 to 1910, but CO2 emissions grew from 0.5Gt/year in 1880 to 3.5Gt/year by 1910.
    The world warmed rapidly from 1910 to 1942 but the CO2 emissions only grew by 0.5Gt/year to 4.0Gt/year by 1942.
    The world cooled slightly again from 1942 to 1975 (with fears of an impending ice age) but CO2 emissions grew by 500% reaching 20Gt/year by 1975.
    There were only 13 years from 1975 to the formation of the IPCC in 1988 when CO2 emissions and global temperature both increased at more or less compatible rates. When 95 years demonstrate no correlation and only 13 years demonstrate correlation, no honest scientist would take this as a sign that there is a scientifically viable relationship worthy of the draconian measures of the Kyoto Accord.
    The IPCC 2001 TAR shows a graph of four datasets including Hadley CRU all showing the cooling from 1942 to 1975 but the 2007 4AR report shows the same Hadley CRU dataset modified (physically altered) to eliminate this cooling to show overall warming since the middle of the century allowing the statement “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
    The IPCC 2001 TAR and the IPCC basis for the CO2 forcing parameter both use preindustrialization as the benchmark and claim that the full century of warming is due to CO2 emissions, but with virtually no increase in emissions as the world warmed from 1910 to 1942 and a 500% increase in CO2 emissions as the world cooled from 1942 to 1975, the IPCC first had the data altered to eliminate the 33 years of cooling and then changed the time frame from 100 years to fifty years, because there were no CO2 emissions increases that correlated with the first fifty years of the 20th century.
    Since 2002 the world has once again been cooling, and there is negative correlation between CO2 concentration and global temperature on both the Hadley Monthly CRUT3 and the UAH MSU datasets (R=-0.42 Hadley and R=-0.31 UAH MSU, 2002 to 2009). To hide the fact that the world was once again cooling since 2002, the IPCC applied a decadal average to the global temperature that hides the five years of cooling and fraudulently allowed the statement that the world is still warming to be made. On January 1, 2009 the spike from the 1998 el Niño was removed from the decadal average and the exceptionally cold la Niña year of 2008 was added to the average, making the claim of the 2007 IPCC Bali Declaration (“The 2007 IPCC report, compiled by several hundred climate scientists, has unequivocally concluded that our climate is warming rapidly, and that we are now at least 90% certain that this is mostly due to human activities.”) now patently false!
    This is not a question of attribution it is just plain fraud pure and simple!

    • Darryl A. Biehn

      Norm, would you please list the source(s) of all your data? Thanks

      • Norm Kalmanovitch

        Darryl,
        If you do a search using global cooling 2002 you will filter out most of the AGW garbage and find the references you request. If you add my name to the search you will find pretty well everything that I have stated.
        Go to http://www.climate4you.com and you will find all the raw data that you are looking for graphed out in consistent fashion.
        My abstract for the 2010 GeoCanada Convention http://www.geocanada2010.ca/uploads/abstracts_new/58_GEOCANADA_2010_ABSTRACT_Norm_Kalmanovitch.doc shows the Nimbus 4 satellite data depicting CO2 and ozone as the only clearly identifiable gases affecting the Earth’s radiative spectrum. I also have models showing the exponential decrease in effect from increasing levels of CO2.

      • Norm,
        I withdraw my criticisms of you, Please accept my apology.

    • Yes, in fact the deltas in the other 82 years go the wrong way: More added CO2, less warming; less added CO2, more warming.

    • Arfur Bryant

      Norm,
      Kudos on an excellent post.
      AB

    • Kent Draper

      For some reason, the term “tar and feathers” keeps coming to mind. For the life of me I don’t know why :)

    • Norm, your strawman appears to be that CO2 should be responsible for ALL warming and cooling observed in the 20th century. Even the IPCC would knock that one down with their report conclusions, as they allow for aerosols, solar and other GHG effects, so I don’t think your strawman has any backers.

      • Jim D,
        Your strawman is that you are simply dismissing his work.
        It is the AGW promoters who screech about CO2 being the climate driver of the last ~100 years.
        It takes a large amount of chutzpah to think you can turn this on the skeptics and blame them for pointing out that CO2 as the main driver of climate is incorrect.

  5. Judith,

    I understand why you don’t want scientists involved with AR4 or AR5 to review your paper. There is however also another side to this:

    For a paper that is criticizing some previous result, it is customary that at least someone involved with that previous critiqued result is a reviewer, among others. Hence Steig was a reviewer for O’Donnell et al, for example, and most scientists regarded that as entirely normal and expected.

    Since you critique the IPCC (process), isn’t it by the same reasoning logical and useful to have at least one reviewer who is intimately involved with the IPCC act as a reviewer?

    • Norm Kalmanovitch

      When six of us wrote a letter to the Secretary General of the UN, seven of the world’s leading scientists wanted their names added, including three IPCC expert reviewers of the 2007 4AR. They did this because expert reviewers’ comments which are contrary to IPCC conjecture are simply ignored, as was our letter.
      Dr. John Christy, who is the most respected expert on global temperature, was a lead author of the 2001 IPCC report who fought hard to prevent the MBH 98 hockey stick graph from being included in the 2001 TAR, but his scientific claims were overruled by political necessity, and the defunct hockey stick was presented half a dozen times in the 2001 report in spite of his objections on scientific grounds. If those intimately involved with the IPCC as lead authors and expert reviewers are ignored by the IPCC in pursuit of its political objectives, it is only fair that those with motivation outside the advancement of scientific knowledge be excluded as reviewers of honest scientific research.

    • Bart, the paper criticizes the IPCC’s attribution statement. There are many many people knowledgable about this general topic on both sides of the attribution debate that could be selected that are not involved in the IPCC. A slight clarification, my request addressed people involved in the IPCC at the level of lead author or above. The reviewers that were selected (if I am inferring their identities correctly) have written papers about the IPCC and are very knowledgeable about uncertainty.

    • Reviewers should have no vested interest in whether a paper is published or not. Recent kerfuffles have shown that the temptation is too great to sabotage valid work. Let those whose work is criticized rebut the criticism elsewhere, but not be given the opportunity to squash it. Editors don’t need the added grief of navigating political waters and readers don’t need to have nagging suspicions that ulterior motives shaped a publication. The review process ought to be concerned first with the quality of research and not so much with its conclusions. The scientific process ultimately will judge those.

      • Nebuchadnezzar

        “Reviewers should have no vested interest in whether a paper is published or not. Recent kerfuffles have shown that the temptation is too great to sabotage valid work.”

        The role of the editor is important here. If a paper directly criticises something then it would be useful to have the point of view of the people who are being criticised. The editor can then weigh that point of view along with the comments of the other reviewers. The editor does not have to bend to every reviewer’s whim.

        It would be depressing to publish a paper critical of something and *then* discover you had made a series of factual errors that that reviewer would have spotted.

        It’s up to the editor though, isn’t it? Is there more grief in publishing a series of comments and responses? Or in exercising judgement?

    • andrew adams

      Since you critique the IPCC (process), isn’t it by the same reasoning logical and useful to have at least one reviewer who is intimately involved with the IPCC act as a reviewer?

      Yes, and it seems a bit much to applaud AMS for not operating a pal review (sic) system while at the same time insisting that certain individuals perceived as hostile are not chosen as reviewers.

    • Here we see on display fear of the Climate Scientist’s greatest enemy: Objectivity. Thanks, Bart.

    • Steve McIntyre

      bart says; “For a paper that is criticizing some previous result, it is customary that at least someone involved with that previous critiqued result is a reviewer, among others.”

      Unless, of course, the paper is criticizing a result from someone like me or McKitrick, or Christy in which case the ‘custom’ is ignored.

      • Steve,
        I seem to remember that Climatic Change invited you to review Wahl and Ammann’s paper criticising MM05?

      • Steve’s statement in Climateaudit

        ” I was a reviewer of Wahl and Ammann and published my review online here. I had an adverse interest but, according to the Climategate documents, my identity as a reviewer was disclosed to them.”

      • And Steve’s further comments in Climateaudit about his experience in reviewing Wahl & Amman 05, while discussing the L&C II thread

        ” Schneider totally ignored my review comments – which I tried to be objective about – and terminated me as a reviewer. Worse, as a reviewer, I asked Wahl and Ammann to disclose verification r2 and CE statistics – a point that was in controversy – and where I knew that they had got precisely the same results as us, because their code matched ours – something that they failed to disclose. They refused and Schneider refused to make them. Shameful. Worse, they told Schneider that their parallel submission to GRL had showed us up – without disclosing to Schneider that the GRL submission had been rejected. Their conduct was totally repugnant and, as a result, I filed an academic misconduct complaint – the only time that I’ve done so. Only then did they grudgingly disclose (in an appendix) the statistics that confirmed our findings. UCAR failed to investigate the complaint according to their procedures, but I didn’t pursue the matter at the time. Perhaps I should reconsider.

        Their abstract was very deceptive. In fact, they confirmed our findings that the MBH reconstruction did not possess the advertised statistical ‘skill” and robustness.”

        So this was the story of Wahl & Ammann’s paper review and publication, an object lesson in deception, unethical behaviour and downright repugnant behaviour.

      • Venter,
        Steve said “in which case the ‘custom’ is ignored.” And it wasn’t.

        As to anonymity, the review is here. The writer’s style is unmistakeable.

      • The contents of the review speak for themselves and the reviewers identity was leaked to the authors by the corrupted pally system employed by the IPCC insiders clique.

      • “leaked”? Well, as I say, the style of the review could leave no doubt. But when it includes observations like:
        “Further, our GRL article does not contain any reference to an alternate reconstruction. Thus, any criticisms levelled by WA against the alleged reconstructions in MM03 or MM05b (EE) (regardless of their blithe ignoring of caveats) simply have no application against MM05a (GRL).”
        then I don’t think any back channels were needed to pinpoint the reviewer.

      • The reviews were sent to the editor. The editor was Stephen Schneider. He shot down fair reviews and dismissed the reviewer. You do the maths. Benefit of doubt does not apply to the Team. Neither to their defenders.

      • On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.

        from Stephen Schneider in Discover Magazine article 1989

        Nick,
        Are you being effective or being honest?

      • Teddy,
        “Are you being effective or being honest?”
        Both, I hope. How about you?

        You could try engaging the topic. Bart said it was normal when a paper is criticised, that the criticism is reviewed by the person criticised (or someone of similar viewpoint). Steve M said that didn’t apply to sceptics being criticised, and I pointed out that CC did invite him to review W&A. Do you have anything honest or effective to add?

  6. Dr. Curry was absolutely right in insisting that nobody from the AR4 or AR5 author panel should be a reviewer of her paper.

    It is not customary for someone whose work is being criticised to be an “anonymous” reviewer. What Steig and the journal did on the O’Donnell paper was unpardonable. You either notify the author that the person whose work he criticises will be a reviewer, or you give the person whose work was criticised a chance to respond to the article prior to publication. And after being an anonymous reviewer, Steig’s antics on the review of the paper are well documented, a disgraceful episode.

    Kindly do not bring justification of that here in a thread where Dr.Curry is talking about IPCC’s confidence or overconfidence levels and attributions.

    • Professor Bob Ryan

      I agree. The appropriate step for an editor is to obtain the services of two or three independent and anonymous reviewers. If those reviews support publication and the editor agrees, then those who have been criticized are given journal space for a brief rebuttal.

  7. The short form of the IPCC’s statement is: “We are making a mathematically unsupportable judgment call and disguising it as mathematically supportable science”.

    I am not without sympathy. They were politically compelled to make a statement, and they did. The “noble quest” and other “end justify the means” reasoning were in play. It is plain to see that the real answer was very likely(TM): There is insufficient data to make a confident estimate at this time. I find it hard to believe (amusing?) that JC is one of the few people challenging this dogma.

    Arguing a person is wrong and arguing they don’t know the answer are two different debates. This subtlety is often lost immediately on blogs.

    What is compelling is how this situation is going to be handled in AR5:

    1. Computer modeling in AR4 is used to justify a very likely statement for the source of warming being CO2. Other forcings or parameters are unable to reproduce the recent warming in their model simulations.

    2. However, using the same computer models in *** predictive *** mode, they perform very poorly against new actual measurements. They do not reproduce the recent plateau of temperatures, even though CO2 emissions continue unabated.

    There is no scientifically supportable way to resolve a 90% confidence level in a conclusion derived from model simulations that fail badly in predictive mode (poor predictive skill).

    Something is clearly very wrong here.

    Stop sweeping it under the rug and pretending it does not exist, and I am looking at you RC. Blogs run by modelers have not done many introspective posts on model results recently.

    • Not only recently. Blogs run by modellers have not done introspection at anytime. If they had spent a quarter of their time on that instead of plotting to prevent publication of dissenting work, maybe the model qualities could have improved.

  8. While reading the observations on aerosols, it reminded me that I’ve often wondered if anyone has made a credible stab at estimating the increase in man-made aerosols from a little kerfluffle memorialized in the history books as “World War II”? Because I feel confident that the increase in aerosols (and not just from obvious sources like London, Dresden, and Tokyo burning; tanks, planes, and ships burning; bombing of oil refineries; but even just marching armies across terrain month after month) would have been significant.

    How significant? How long would the impact persist in having a global cooling influence? Could it go a long ways towards explaining the 1940s-1950s observations? I dunno. I wonder if anyone else does.

    • . . . expedited building of factories and running them 24/7. . . atmospheric nuclear testing continuing after the war. . . clearing debris and rebuilding after the war. . . whole lotta extraordinary aerosol production going on there from 1940 into the late ’40s or early ’50s.

      • ferd berple

        The problem is that the cooling in the 50’s and 60’s was not unique. It follows a similar pattern 60 years before, and we are seeing a similar pattern 60 years later. While it is possible this is all due to coincidence, it opens up the possibility that there is a natural 60 year cycle that is not accounted for by the IPCC and the models, which has led to incorrect conclusions.

        The problem for the IPCC is that without aerosols, the AGW hypothesis fails. There is no way to explain why temperatures and CO2 have such a poor correlation, except to add a correction factor for these cooling periods. This extra parameter is what allows the model to “wiggle its trunk”.

        The problem with adjusting parameters after the fact is that every theory can be made to work this way, no matter how right or wrong it might be. So, the ability of the model to fit the past becomes meaningless and the only test is if the model can predict the future.

        Which cannot be done, as the models will simply retroactively adjust the parameters to maintain their fit, claiming that they have discovered new science. That the previous values for aerosols were not correct and these new values are more precise. Should this fail, then a new parameter will be added and the process repeated, and justified under the argument that the new parameter reflects a new discovery.

  9. patrickinken

    I understand that heat is stored more in the oceans than in the atmosphere. But we live on the surface of the Earth not in the ocean and this seems like a good reason for the IPCC to focus on the average surface temperatures when it examines detection and attribution.

    There are many statistical/ econometric techniques which could be used in such an examination, but I have not read that this is what IPCC attempts to do. Is there a reason for this?

    I have seen a number of statistical models of surface temperatures which appear to achieve good fits using explanatory variables which play little role, if any, in the coupled ocean and atmospheric models that the IPCC relies on. Has the IPCC explained why these results should be disregarded?

    • “But we live on the surface of the Earth not in the ocean and this seems like a good reason for the IPCC to focus on the average surface temperatures when it examines detection and attribution. ”
      I disagree. The greenhouse effect purports to raise the heat retained by the “globe” (Global Warming). To this bear of little brain, that would suggest that
      – the heat retained by the oceans is of interest; and
      – the heat content of the atmosphere is of more interest than surface temperature.

    • Patrickinken
    I hate to point this out to you, but the ocean makes up the great majority of the surface of the earth. Please don’t downplay the impact of the oceans on climate just because we don’t have gills. It also never ceases to amaze me that scientists think they can predict climate trends for the next 100 years based on a tiny snippet of temperature records run through a model. We are in the infancy stage in terms of understanding climate and its drivers, and it’s arrogant to claim a computer model has any accuracy 100 years out. We can revisit this 500 years from now and it’s probable mankind will not have the ability to predict climate 100 years in the future even then.

  10. Interesting essay . . . I agree with a lot of it, and some of it I am still thinking about.

    It’s striking that estimates of solar forcing have been further revised downward, suggesting a higher climate sensitivity.

    I wonder if this paper might be stronger if Dr. Curry said more about how the AR4 uses observational evidence, like the sunspot record, or nighttime warming, in the attribution process. Even if they use model runs to generate Bayesian probabilities, they are clearly using direct evidence as well (as Dr. Curry alludes to) and that might be better elucidated.

    • The problem with that argument is that a lower solar forcing than assumed in AR4 is going to make the model fit for the early part of the 20th century even worse. As it is, the models pretty much miss the rise in global temps from 1910 to 1945, the big fat red line notwithstanding.

      http://www.ipcc.ch/publications_and_data/ar4/wg1/en/fig/figure-9-5-l.png

      I suspect that trying to fix the even worse fit that would result from lower solar forcing by attempting to increase the ghg sensitivity would require some real handstands to refit the models to the late 20th century.

    • Robert – I may be wrong, but as far as I know, what has been revised downward are the values for total solar irradiance as a result of corrections to the instrumental record. However, “forcing” refers to a change in radiative flux rather than the magnitude of the flux itself, and I don’t believe the revisions alter that, because they affect TSI values from pre-industrial days onward, and so everything would be adjusted downward, with no effect on any change in values from then to now.

    • Robert – see also raises in TSI changes and indirect solar forcing:

      IPCC’s Argument from Ignorance:
      By claiming low solar variability, IPCC inferred major anthropogenic causation for warming in the last fifty years. However Scafetta’s (2009) bridging of contested calibration between satellites supports the lead investigator’s case for higher than direct Total Solar Irradiance (TSI) variation. The review by Idso et al. (2009) found growing evidence for indirect solar modulation of galactic cosmic ray (GCR) ionization influencing clouds, which IPCC has dismissed as unsupported. Indirect solar/GCR may be several times greater than TSI, with others finding further supporting evidence. The American Astronomical Society (2011) notes Solar Cycle 24 is the lowest in a century. It now warns of a very low Cycle 25, potentially with no sunspots, similar to the Maunder Minimum of AD 1645 to 1715; e.g., see Livingston & Penn (2011). Climatic uncertainty thus needs to allow for cooling and warming ranging from a “Little Ice Age” to a “Medieval Warm Period.” IPCC’s argument from ignorance undermines its >90% “very likely” anthropogenic attribution.
      See:
      Nicola Scafetta & Richard C. Willson, (2009) ACRIM-gap and TSI trend issue resolved using a surface magnetic flux TSI proxy model, Geophysical Research Letters, Vol. 36, L05701, doi:10.1029/2008GL036307, 2009
      http://www.fel.duke.edu/~scafetta/pdf/2008GL036307.pdf

      Craig Idso and S. Fred Singer, Solar Variability & Climate Models, Climate Change Reconsidered: 2009 Report of the Nongovernmental Panel on Climate Change (NIPCC), Ch 5, Chicago, IL: The Heartland Institute, 2009. ISBN: 978-1-934791-28-8
      http://www.nipccreport.org/reports/2009/pdf/Chapter%205.pdf

      N. Scafetta, (2010) Empirical evidence for a celestial origin of the climate oscillations and its implications. J. Atmospheric & Solar-Terrestrial Physics 10.1016/j.jastp.2010.04.015 arXiv:1005.4639v1

      American Astronomical Society (2011) What’s Down With The Sun? Major Drop In Solar Activity Predicted, AAS Solar Physics Div. Annual Meeting June 14
      (See also: “All three of these lines of research to point to the familiar sunspot cycle shutting down for a while.”)

      M. Penn & W. Livingston (2010) Long-term Evolution of Sunspot Magnetic Fields, IAU Symposium No. 273 arXiv:1009.0784v1 [astro-ph.SR] 3 Sep 2010

      • David L. Hagen

        You refer to IPCC’s “Argument from Ignorance” on natural (i.e. solar) forcing.

        In its list of anthropogenic and natural climate forcing components (AR4 WG1 SPM Figure SPM.2, p.4) IPCC concedes that its “level of scientific understanding” of “natural forcing components” (i.e. solar) is “low”.

        In other words, IPCC concedes its position of “ignorance” and one should go elsewhere to gain knowledge about these factors (as you have done, with the studies you have cited).

        The problem is that, operating from this conceded position of ignorance, IPCC still goes ahead with attributing essentially ALL climate change to anthropogenic forcing factors, thereby using an “argument from ignorance”, as you point out.

        If we correct for this error, the 2xCO2 climate sensitivity derived from the observed long-term CO2/temperature record (1850 to today) would be around 0.7C (half of observed warming caused by natural factors) to 1.4C (assuming all caused by CO2).

        Max

  11. Judith, the following definition is VERY incomplete:

    ‘Attribution. Climate model simulations for the 20th century climate that combine natural and anthropogenic forcing agree much better with observations than simulations that include only natural forcing.’

    As described in great detail in IPCC WG1 AR4 Chapter 9, attribution involves much stronger evidence than just model fit. It is about explaining the observations, using models to provide fingerprints, and allowing the statistical model for scaling those fingerprints up or down to assess which ones are needed to explain the observations and which ones are not, and at what significance level. It also carefully assesses the role of model and data uncertainty, and there is a section investigating alternative explanations, e.g. the role of solar forcing. I was under the impression we explained that very carefully in the chapter.

    • Gabi, thanks for stopping by. The one sentence that you cite does not imply that that is the only thing considered in the attribution studies, as discussed at length in my essay, although upon reflection I can modify that one sentence. It is my understanding that the AR4 solely uses climate models compared with 20th century observations in its detection argument. The fingerprint techniques and inferences are explained in the appendix, but unless I missed something, they all depend on the model simulations. In the Appendix, you describe the optimal fingerprinting and methods of inference by citing a few papers that use the methods you describe. Section 9.1.2 states:

      The consistency between an observed change and the estimated response to a hypothesised forcing is often determined by estimating the amplitude of the hypothesised pattern of change from observations and then assessing whether this estimate is statistically consistent with the expected amplitude of the pattern. Attribution studies additionally assess whether the response to a key forcing, such as greenhouse gas increases, is distinguishable from that due to other forcings (Appendix 9.A). These questions are typically investigated using a multiple regression of observations onto several fingerprints representing climate responses to different forcings that, ideally, are clearly distinct from each other (i.e., as distinct spatial patterns or distinct evolutions over time; see Section 9.2.2).

      It is not at all clear how this kind of methodology was factored into the AR4 reasoning, nor how complete the actual application of fingerprint analysis was; a sketch of the regression at the heart of the method follows below. In terms of the broader attribution argument, fingerprinting methodology is a detail. While individual papers are cited (e.g., Schnur and Hasselmann, Min and Hense), there is no evidence that I can see in Chapter 9 of a systematic treatment of all the forcing, model, and data uncertainties, or of the possibility of strong multidecadal natural internal variability. If Min and Hense was the foundation for the attribution statement, it should be more clearly highlighted and made explicit.
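      As a concrete illustration of the regression the quoted passage describes, here is a minimal sketch of fingerprint-style multiple regression on synthetic data. Every number here is invented; real studies use optimal (noise-weighted) fingerprinting, with fingerprints and internal-variability estimates taken from model simulations.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.arange(100)

      # Synthetic "fingerprints": the time evolution of the response to each
      # forcing (in a real study these come from model simulations).
      fp_ghg = 0.01 * t                          # smooth ramp: GHG-like response
      fp_nat = 0.2 * np.sin(2 * np.pi * t / 60)  # oscillation: natural-like response

      # Synthetic observations: a mix of both responses plus internal variability.
      obs = 1.0 * fp_ghg + 0.8 * fp_nat + rng.normal(0, 0.1, t.size)

      # Multiple regression of observations onto the fingerprints:
      #   obs ~ beta_ghg * fp_ghg + beta_nat * fp_nat + noise
      X = np.column_stack([fp_ghg, fp_nat])
      beta, *_ = np.linalg.lstsq(X, obs, rcond=None)

      # Crude amplitude uncertainties from the residual variance.
      resid = obs - X @ beta
      sigma2 = resid @ resid / (t.size - X.shape[1])
      se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

      for name, b, s in zip(["GHG", "natural"], beta, se):
          print(f"{name}: amplitude = {b:.2f} +/- {2 * s:.2f} (2 sigma)")
      # "Detection" asks whether beta_ghg is distinguishable from zero;
      # "attribution" additionally asks whether it is consistent with 1
      # (observed amplitude matching the model-expected amplitude).
      ```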

      "Careful explanations" are in the eye of the beholder, and the persons providing the explanations can have a very different perspective from the audience regarding carefulness and completeness, especially when hidden knowledge is taken for granted by the person providing the explanation.

      In terms of alternative explanations, I regard the FAQ in Chapter 9 as very superficial:
      http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-9-2.html

      • Thanks Judith. I spent a bunch of time plowing through Chapter 9 and all the supplemental info. I was left hungry.

        Personally, I'd like to see a WG1 that focused more on:
        1. A MUCH MORE complete exposition of the observation issues (land temp, ocean temp, OHC, etc.) and complete open documentation for forcing datasets.

        2. Steve Mc's engineering-quality report on the GHG effect.

        3. Chapter 9, done in much more detail.

        4. A much more complete treatment of sensitivity.

        5. Defining the best of breed for GCMs.

        And get rid of the paleo stuff that doesn't directly relate to #4.

        That’s just me.

  12. Here's an excellent post from Dr. Pielke Sr. about ocean heat content and the climate models.

    http://pielkeclimatesci.wordpress.com/2011/06/14/2011-update-of-the-comparison-of-upper-ocean-heat-content-changes-with-the-giss-model-predictions/

    It’s the same old story. Observations don’t match model predictions.

  13. On tuning to aerosols, I assume that Dr. Lindzen was able to penetrate the prose of WG1 in AR4:

    Lindzen, Richard S. 2007. “Taking GreenHouse Warming Seriously.” Energy & Environment 18 (7) (December): 937-950. doi:10.1260/095830507782616823. https://www.cfa.harvard.edu/~wsoon/ArmstrongGreenSoon08-Anatomy-d/Lindzen07-EnE-warm-lindz07.pdf

    Page 11: To be sure, current models can simulate the recent trend in surface temperature, but only by invoking largely unknown properties of aerosols and ocean delay in order to cancel most of the greenhouse warming (Schwartz et al, 2007). Finally, we note substantial corroborating work showing low climate sensitivity.

    Ultimately, however, one must recognize how small the difference is between the estimation that the anthropogenic contribution to recent surface warming is on the order of 1/3, and the iconic claim that it is likely that the human contribution is more that [sic] ½. Alarm, we see, actually demands much more than the iconic statement itself. It requires that greenhouse warming actually be larger than what has been observed, that about half of it be cancelled by essentially unknown aerosols, and that the aerosols soon disappear. Alarm does not stem directly from the iconic claim, but rather from the uncertainty in the claim, which lumps together greenhouse gas additions and the cancelling aerosol contributions (assuming that they indeed cancel warming), and suggests that the sum is responsible for more than half of the observed surface warming. What this paper attempts to do is point the way to a simple, physically sound approach to reducing uncertainty and establishing estimates of climate sensitivity that are focused and testable. Such an approach would seem to be more comfortable for science than the current emphasis on models testing models, large ranges of persistent uncertainty, and reliance on alleged consensus. Hopefully, this paper has also clarified why significant doubt persists concerning the remarkably politicized issue of global warming alarm.

  14. (Wow.) This is indeed a thought-provoking article (for those in whom thought can be provoked!).

    One key passage I think, is:

    Whereas each model demonstrates some sort of multidecadal variability (which may or may not be of a reasonable amplitude or associated with the appropriate mechanisms), the ensemble averaging process filters out the simulated natural internal variability since there is no temporal synchronization in the simulated chaotic internal oscillations among the different ensemble members.
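    The point in the quoted passage is easy to demonstrate: if each ensemble member carries a multidecadal oscillation with a random, unsynchronized phase, averaging across members cancels the oscillation while preserving the common forced trend. A minimal sketch, with all numbers invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    t = np.arange(150)   # years
    n_members = 20

    # Each member = common forced trend + multidecadal oscillation with a
    # random phase + weather noise; no member is synchronized with another.
    trend = 0.008 * t    # illustrative forced signal, deg C
    members = np.array([
        trend
        + 0.15 * np.sin(2 * np.pi * t / 65 + rng.uniform(0, 2 * np.pi))
        + rng.normal(0, 0.1, t.size)
        for _ in range(n_members)
    ])

    ens_mean = members.mean(axis=0)

    # The oscillation survives in any single member but is largely averaged
    # away in the ensemble mean, leaving (mostly) the forced trend.
    print(f"std of detrended member 0:      {(members[0] - trend).std():.3f}")
    print(f"std of detrended ensemble mean: {(ens_mean - trend).std():.3f}")
    ```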

  15. (/sarc on) The sale of umbrellas in New York City is "consistent with" egg production in South Africa. Until Somalia came along, surface temperature was inversely "consistent with" the number of active pirates. (/sarc off)
    If one attempts to correlate A with B and ignores or minimizes the effect of C, one may get absurd results. :-)

  16. “but cannot rule out substantial warming from other causes such as solar forcing and internal multi-decadal ocean oscillations owing to the circular reasoning and to the lack of convincing attribution mechanisms for the warming during 1910-1940 and the cooling during the 1940’s and 1950’s.”

    Indirect solar forcing:
    Vindberg, Christer. 2011. Scientists at Aarhus University (AU) and the National Space Institute (DTU Space) show that particles from space create cloud cover. Aarhus University press release, May 16. http://science.au.dk/en/news-and-events/news-article/artikel/forskere-fra-au-og-dtu-viser-at-partikler-fra-rummet-skaber-skydaekke/

    Enghoff, Martin B., Jens Olaf Pepke Pedersen, Ulrik I. Uggerhøj, Sean M. Paling, and Henrik Svensmark. 2011. “Aerosol nucleation induced by a high energy particle beam.” Geophysical Research Letters 38 (9) (May). doi:10.1029/2011GL047036. http://junkscience.com/climate-features/evidence-that-cosmic-rays-seed-clouds/

    Ocean Oscillation:
    Wyatt, Marcia Glaze, Sergey Kravtsov, and Anastasios A. Tsonis. 2011. “Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability.” Climate Dynamics (April). doi:10.1007/s00382-011-1071-8. http://www.springerlink.com/content/p1275t4383874p65/
    The “stadium wave” appears to have a ~60-year period, with different lags progressing through various ocean basins.

  17. On the peer review front, I have good news to report regarding my paper “Reasoning about climate uncertainty” that was submitted to the special issue of Climatic Change on framing and communicating uncertainty for the IPCC.
    http://judithcurry.com/2011/03/24/reasoning-about-climate-uncertainty-draft/

    The review history on this one is of some interest. I was invited by the editors to write this piece, with the assumption that my article would be critical of the way that the IPCC has treated uncertainty. The paper was reviewed by the two editors, Gary Yohe and Michael Oppenheimer. Plenty of comments and suggestions were made, and I was able to accommodate most of them without toning down my main points. Kudos to Yohe and Oppenheimer for organizing this special issue and for soliciting and publishing my article.

  18. There is more or less a direct contradiction in the AR4 SPM here.
    As Judith says, one of the key conclusions is the paragraph in large red letters
    “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations … Discernible human influences now extend to other aspects of climate, including ocean warming, continental-average temperatures, temperature extremes and wind patterns (see Figure SPM.4 and Table SPM.2).”

    But if we look at Table SPM.2 for supporting evidence regarding temperature extremes and winds, we see that the confidence level for human influence is only “more likely than not”, i.e. > 50%!

    Worse still, if you read the small print in footnote f, it says “Magnitude of anthropogenic contributions not assessed. Attribution for these phenomena based on expert judgement rather than formal attribution studies”!!

    • Paul, the entire structure of words such as "likely" and "significant" was all part of a shell game built over decades. The science was bought to be spun for CO2 regulations and taxes.

      There should be shame, but all I see are defensive regrets here and there at this point. Why doesn't Dr. Curry just confirm the truth directly and learn to live with being called a Holocaust denier or Nazi, like many of us? That so many in IPCC circles have toed the line, kept a low profile on peer-review abuse and data manipulation, and de facto supported outlandish summary conclusions (Goreism, Mann, Jones, Hansen, etc.) is a great shame of science.

      Thousands if not millions will in the end die premature deaths due to climate malinvestment and fraud. If fraudulent science becomes the global standard, then the growing Orwellian world state is that much closer.

  19. Stephen Singer

    Section 5.1, para 3, sentence 1

    The IPCC AR4 (WG I, Chapter 9) describes two types of simulations [that] have been used in detection and attribution studies.

  20. Pooh, Dixie

    This just in: Solar Variability
    Watts, Anthony. 2011. The major AAS solar announcement: Sun’s Fading Spots Signal Big Drop in Solar Activity. Scientific Blog. Watts Up With That? June 14. http://wattsupwiththat.com/2011/06/14/the-major-aas-solar-announcement-suns-fading-spots-signal-big-drop-in-solar-activity/

    “It looks like Livingston and Penn are getting some long deserved recognition.

    “From the WUWT solar reference page. Note: when the B gauss reading of sunspots hits 1500, they will no longer have enough contrast to be visible. That may occur at or near the years 2015-2017.

    “The American Astronomical Society meeting in Austin has just made a major announcement on the state of the sun. Sunspots may be on the way out and an extended solar minimum may be on the horizon.”

    Graphs from http://www.leif.org/research/Livingston%20and%20Penn.png

    More here:
    Solar Cycle 24 / Spaceweather; Topic: Sunspot Magnetism—Livingston & Penn
    http://solarcycle24com.proboards.com/index.cgi?board=general&action=display&thread=855&page=1

    • game-Set-MATCH

    • A possible prolonged lull in solar irradiance, if it transpires, would likely partially ameliorate any anticipated warming from anthropogenic greenhouse gases or other forcings. The magnitude of the offset is uncertain but is likely to be small even for a new Grand Solar Minimum: the linked reference estimates about 0.1 C by 2100, with considerable associated uncertainty that the authors judge could raise the offset to as much as 0.3 C, and perhaps more. If the actual GHG-mediated warming were to remain below 1-2 C, this could make a discernible difference, but the amelioration would probably be minor for anticipated rises greater than 2 C.

      • Fred: …..”The magnitude of the offset is uncertain but is likely to be small,…..”

        Yes, the authors’ findings were based on some modelling and the usual assumptions about feedbacks etc. The real world may behave completely differently of course. The anecdotes about the weather during the Maunder minimum suggest it was far colder than would be forthcoming from a 0.3 degC decrease. I expect that was why it was called the LIA.

      • RobB – I don’t think “the usual assumptions about feedbacks” are particularly relevant. The reason is that we are interested in the relative strength of, for example, anthropogenic vs solar forcings, and over the long term, feedbacks are likely to affect each more or less similarly.

        Of course I agree with you that the real world may behave differently, because that’s always a possibility. We have to judge on the best evidence we have, without expecting absolute certainty.

        The authors, and many others, have discussed the LIA, and without trying to completely summarize an extensive literature, there appears to be agreement that the LIA was not a single, uniform global phenomenon, but a series of events happening at different times in different regions, not always coincident with sunspot minima, and that multiple factors contributed to the cooling. Certainly, reduced solar activity played an important role.

      • “The anecdotes about the weather during the Maunder minimum suggest it was far colder than would be forthcoming from a 0.3 degC decrease. I expect that was why it was called the LIA.”

        I find it laughable that the same people who criticize the MODERN record of thousands of thermometers put such uncritical faith in the rather paltry record of temperatures from the LIA. And please don't point to a painting of Washington crossing the damn river. It wasn't that terribly cold that night.

      • Somebody better tell the Swedes that it really wasn’t really that cold in 1658 when they marched an army of 10,000 infantry, cavalry and artillery across frozen straits that are part of the Baltic Sea to invade Denmark.

        Probably wind chill that fooled them into thinking it was cold enough that they were marching across ice.

      • Wake me up when you find that the Swedes marched around the globe.

      • He is obviously applying a version of Hansen’s 1200km extrapolation algorithm, except his version extrapolates around the globe. Some of these skeptics forget that they criticize Mann for teleconnecting bristlecone pines. teleconnecting Swedes are much more rigorous and robust. skeptic. petard. hoist

      • You address a quote about the Maunder minimum and then refer to an anecdote about Washington’s crossing the Delaware (which was over half a century after it ended).

        I gave you an anecdote from history which actually happened during the relevant time period and which demonstrates that temperatures were substantially colder than any we see today. Those straits rarely see any ice, much less get frozen solid.

        Just a tip Mosh — when you are screwing up your history, best to avoid snark about those who have it right.

        Also — best not to butcher logic when engaging in snark. My example clearly applies to the Maunder minimum which was the subject of you post. The only way that a distance of 1200km would seem to apply in this thread is that seems about how far you have strayed from making any sense.

      • Steven –
        Read some archaeological texts – or, for light reading, try Brian Fagan’s book “The Little Ice Age”. I should warn you though – his facts don’t support his conclusions. Not just my judgment, but others’ as well.

        put such uncritical faith in the rather paltry record of temperatures from the LIA

        How about verified written records of weather, crops, ice pack, etc. from Greenland and from Europe? Or records of the towns that were overrun by glaciers?

        It might not have been that terribly cold "that" night, but overall, it was a damn cold time to be living. And it was cold enough that night to keep professional Hessian soldiers from mounting an effective guard force.

        BTW, the recent paper wrt the Viking exit from Greenland is nothing but a rerun of the archaeological record that’s been around for a lot of years.

      • The other thing I find laughable is the failure of people to apply their skepticism equally toward all evidence. For example, people who criticize a modern record that has well over 30000 stations around the globe for being too SPARSE, are always eager to cite a few reports from scattered places in the LIA. People who discount the very IDEA of a global temperature are all too eager to argue that the average temperature in the LIA was lower. I’m also amused by people who argue about the imprecision of thermometers eagerly asserting the accuracy and resolution of ‘written” records. My point is NOT that it wasn’t colder in the LIA. It was. My point is rather this. We accept the evidence of a chilly LIA without much skepticism. And some of the very people who accept this evidence, sparse evidence, low resolution evidence , are far more skeptical of evidence that is spatially denser and higher resolution than they are of sketchy evidence. To put a fine point on it: if you believe firmly in an LIA, then you have no business doubting the modern record.

      • “To put a fine point on it: if you believe firmly in an LIA, then you have no business doubting the modern record.”

        This is obviously wrong. It is not black and white.

        I believe firmly in the LIA and am very unconvinced of the accuracy of the modern record (ALW, confirmation/selection bias). It is the accuracy of the modern record that some sceptics doubt, especially the last 3-4 decades.

      • Steven –
        And I find it amusing when people underrate the written record. That is, after all, history, and from a time when fiction wasn’t exactly a major industry. I usually find that they believe that record selectively – if it confirms their biases, then it’s good, if not, then they start hand waving.

        Also – you seem to be talking about someone else, because most of your rationale doesn't fit me.

      • There is no comparison between “it was colder during the little ice age and warmer during the medieval warm period than the consensus claims” and the consensus claims to knowledge about temperatures current and past.

        If skeptics attempted to state the global average temperature of the LIA or MWP within tenths of a degree, and published trends purporting to show temperatures with such precision over decadal and century time periods, then it would be inconsistent with criticisms of the claims of the consensus regarding both current and paleo temperature sets. But that is not the case.

      • Gary –
        I found Steven's argument underwhelming – in part because he tried to put me in a box. I'm an engineer – I've spent a LOT of time with instrumentation, so telling me that the present global data set is "accurate" is laughable. In fact, the idea of using measured temps that are accurate to 1 or 2 (or 4) degrees to derive 3-decimal-place accuracy is also laughable. But as long as you do it consistently with the same sources (and don't "adjust" the data unnecessarily), it's at least relatively accurate – or maybe consistently inaccurate. It's what we've got, so for the most part I don't mess with those arguments.

        Nor do I have any illusions about the “accuracy” of the CET or the written records kept by monks in the 14th C. But dissing those because they’re not as “accurate” as today’s data set is also laughable. They’re usable because they ARE accurate records of the “conditions” at the time — crop cycles and yields, rain, heat, cold, snow, ice pack, death records (both animal and human) and a lot more – and they give pretty accurate knowledge about what was happening and the general temp range at that particular time and location. Certainly not down to the tenth of a degree, but on a scale that nails down climate changes beyond doubt. Archaeology is not comfortable reading for most alarmists. Nor, for that matter, is a lot of history. Maybe that’s why they seem to be so dense about those subjects sometimes.

        You work with what you’ve got.

      • Jim,

        Oh, I agree. I was just rejecting the suggestion that skeptics' acceptance of general historical records of the LIA and MWP is inconsistent with doubts about the exorbitant claims of precision made by the keepers of the consensus temperature record flame. We certainly have more data about current and recent temps, and better instrumentation, but I don't believe we have anywhere near enough data, or accurate enough data, to justify the claims made by the consensus scientists.

      • I believe this started when someone claimed that 0.3C was way off.

      • I'm not dissing the records. I am just noting that they are accepted by believers in the LIA without any due diligence. I am noting a DIFFERENCE in the degree of skepticism. I am noting that the same folks who show rigor in their questioning of the modern record show no such rigor when looking at the LIA. I am noting that you NEVER see anyone argue against the concept of a global average temperature when discussing the LIA. Have you never noticed that? Have you EVER seen anyone dispute the cooling in the LIA by appealing to Nyquist? Or by arguing that the written records cover small areas thousands of km apart? You've never seen those arguments. What I am pointing out is the ABSENCE of those skeptical arguments regarding a period that is less well known than the current period.

      • It is absolute idiocy to say "If you believe in the LIA you must believe in the current temperature record." The LIA is a matter of history and has been recorded with ample evidence. Go talk to archaeologists and historians. It was the "climate" that prevailed at that time which is being discussed, not the accuracy of decimal points of temperature then.

        To equate that with current temperature records claiming accuracy to decimal points, while having errors of measurement in whole degrees, is stupid.

        World history existed before you all were born. The current adjustments and hyperventilation about the biblical sanctity of the temperature record and models are being done by climate scientists and modellers who have to date not shown any skill or capability to do that job fairly and ethically, and especially to present results as they are, without spinning them, putting a slant on them, or "adjusting" them.

      • Steven Mosher

        It looks like you have gotten yourself into a hole here by discounting the fairly extensive historical record demonstrating that there was a change in climate for the worse, which ended up becoming the LIA.

        In addition to the written record there are cases of actual physical evidence, such as remains of medieval farms found in the Greenland permafrost or remains of ancient vegetation and, more rarely, even signs of human civilization that are exposed when Alpine glaciers retreat, for example.

        Climate modelers sometimes tend to start believing their models, as if they were providing scientific evidence (which, of course, they are not).

        As you know, real scientific evidence comes from empirical data, based on physical observations or reproducible experimentation. These exist in only a very limited fashion for the LIA or (even less) the MWP.

        They do not exist at all for the hypothesis that AGW, caused principally by human CO2 emissions, has been the principal cause of past warming and thus represents a serious potential threat to humanity or our environment.

        A rational skeptic will examine the data critically before coming to a conclusion.

        If this rational skeptic gets the idea that he is being bamboozled, he/she will tend to become a bit more skeptical. Goodness knows that there has been quite a bit of bamboozling going on in climate science today, mostly on the side of those trying to sell the “dangerous AGW” premise (including IPCC itself, unfortunately).

        This may be an underlying reason for your observation that skeptics tend to be more skeptical of the “mainstream” view on AGW than they are on the existence of a LIA or MWP in the distant past.

        Max

      • I see your point, but there is one factor you are not presenting, or are discounting: it is only one side of the debate that pays social and political costs for its views. Those refuting warming alarmism and all its trappings stand only to preserve their current rights. I find it disturbing that the burden of proof for a general theory has been so lowered collectively that the totally anti-science child's logic of "prove I'm wrong" is a mainstay in our culture. It's a serious decline in the general reasoning of the public, which is why so much nonsense is advanced by the alleged scientists.

        The temperature debate was always lame, but it couldn't be politically ignored. The concept of climate equilibrium, which has never existed but is an essential part of IPCC and warmist logic, also can't be ignored, for very similar reasons. It's no surprise that when so many illogical debating points are created, retorts also decline in logical quality. Many such "framings" of what is important in the climate debate will at a later time be seen as rather absurd. Some are really believed, but many informed people on both sides understand the game being played.

      • I leave it up to others to explain the ABSENCE of skeptical arguments versus the LIA. I merely note that the evidence is more sparse, more uncertain, and cannot be cross-referenced to other measurement systems. I don't doubt that it was colder. The point is this: a methodological skeptic is going to be skeptical of everything until it is established. An opportunistic skeptic is going to apply the tools of skepticism only where he sees a benefit, where he can use them to defend his preconceived notions.
        I have, in the past, had people argue that they KNOW it was colder in the LIA because of the painting of Washington crossing the Delaware. Seriously.
        And my point to him and others is NOT that it is warmer. It's rather a call to self-examination. Are you using the same skeptical tools on every claim, or only on those claims you disagree with? That's the difference between methodological skepticism and opportunistic skepticism, the difference between good faith and bad faith.

      • Steven Mosher says:

        “A methodological skeptic is going to be skeptical of everything until it is established”

        A real scientific sceptic, who knows the history and philosophy of science, will become sceptical of anything as soon as it is ESTABLISHED. "Established" correlates very well with "biased" and "corrupted".

      • Remember that precision and accuracy are different. More precision doesn't mean a measurement is more accurate. You can have high accuracy with low precision, and low accuracy with high precision (a numerical illustration follows below). 30,000 stations sounds like a lot, except when you take into account that the surface area of the earth is over 500,000,000 km2. As time goes on there are different measurement biases: changes to standards, methods, and measuring instruments. These may even out in the long run, but they may not. Perhaps they were more consistent in the LIA.

        And remember that the Heisenberg Uncertainty Principle itself is about measurement, and that the act of measuring can affect the system itself.
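        The precision/accuracy distinction above is easy to demonstrate numerically (all values below are invented): a biased but stable instrument is precise yet inaccurate, an unbiased noisy one is accurate but imprecise, and averaging helps only with the random part.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        true_temp = 15.0  # deg C, the value being measured

        # Instrument A: systematic bias +0.5 C, tiny noise -> precise, inaccurate.
        a = true_temp + 0.5 + rng.normal(0, 0.02, 1000)
        # Instrument B: no bias, large noise -> accurate, imprecise.
        b = true_temp + rng.normal(0, 0.5, 1000)

        for name, x in [("A (biased, stable)", a), ("B (unbiased, noisy)", b)]:
            print(f"{name}: mean error = {x.mean() - true_temp:+.3f}, "
                  f"spread (std) = {x.std():.3f}")
        # Averaging shrinks B's random error but cannot remove A's bias --
        # the commenter's point about time-varying measurement bias.
        ```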

      • Steve Mosher:

        “I find it laughable that the same people who criticize the MODERN record of thousands of thermometers, put such uncritical faith in the rather paltry record of temperatures from the LIA”

        I thought my use of the words ‘anecdotal’ and ‘suggest’ made it clear that there is room for doubt in our knowledge of the LIA. Carry on laughing, Steve.

      • Oh, and just to be factually clear: I do not have substantial doubts about the modern temperature record, and nothing in my remarks indicates that I do. I will await your apology.

      • Rob Starkey

        Fred

        Sometimes your extreme ability to cite "opinion articles" and state them as facts is astounding. You regularly seem to take positions regarding AGW that assume that anything that may contribute to a warmer planet is a dire threat to humanity, and that those pieces of information that would indicate it to be less of a threat are in error or of minimal impact.

        FYI – the link that you posted bases its analysis on the following (their words, not mine):
        "Here we use a coupled climate model to explore the effect of a 21st‐century grand minimum on future global temperatures, finding a moderate temperature offset of no more than −0.3°C in the year 2100 relative to a scenario with solar activity similar to recent decades."

        Fred—what leads you to accept the climate model that they used as reliable? Since I am not aware of a single GCM that has produced reliable results over the time scales referenced in the article, doesn't this make the claims of the article "somewhat doubtful"?

      • Rob – You may be reluctant to accept the conclusions of the paper, but it was not an "opinion piece". It could well be inaccurate, but probably not by several hundred percent, and so the likelihood that even a grand solar minimum would completely offset significant anthropogenic warming should not be overestimated. I think it behooves all of us to be tentative in our conclusions. In any case, interested readers can visit the paper to draw their own conclusions.

      • Rob Starkey

        Fred – that is simply WRONG. The conclusions of the paper were based upon a climate model that has been shown to be WRONG. Upon what do you base your statement? It could easily be off by several hundred percent, since it is based on a model that is invalid.

      • I can see, Rob, that arguing with you will be futile, but I repeat my suggestion that other readers visit the piece to judge for themselves. I’m not sure why your comment was so VEHEMENT – nature, not we, will be the final arbiter, and so what we wish to be true or false will ultimately be irrelevant.

      • Rob Starkey

        Fred

        I may be acting inappropriately, but I believe that you frequently cite articles or make points that are factually WRONG in trying to portray AGW as a dire threat, and you do not acknowledge when you have been demonstrated to be wrong.

        You wrote that you could prove the human contribution to total CO2 based upon radiocarbon data – wrong.

        You tried to defend the idea that the earthquakes off Japan may have been worsened due to global warming – wrong.

        You claimed that hundreds of thousands were killed due to flooding that was increased due to sea level rise – wrong.

        On all these and other occasions you have taken and twisted science to support an unsubstantiated opinion. The latest article is another example. It is a puff piece without merit.

      • Rob – I think you’ve misrepresented some of my past comments and correctly described others. I would be willing to defend the latter, but I doubt that many readers remain interested at this point.

      • Latimer Alder

        @fred

        Perhaps Rob likes to be short, sharp, and to make his points forcefully.

        Others here seem to prefer verbiage, in the mistaken hope that quantity will be mistaken for quality. In the words of Winston Churchill:

        'Forgive this being a long note. I did not have time to write a short one.'

        Maybe in academia verbosity is prized. Outside, it is shunned.

      • Rob Starkey – can you link to an article/paper that demonstrates the CLIMBER‐3a climate model is WRONG (or whatever model you are referring to)? Well, it doesn’t have to shout it, but just say it, with proof, of course.

      • JCH –
        The evidence from history and archaeology gives a good idea of the temp ranges during the LIA – and then there are, of course, temp records for the later years of the period (the CET, for example). Now, given the “accuracy” of those records, I wouldn’t take them to be the “last word”, but those, in conjunction with the verbal and written accounts provide a good picture of conditions during that time. And Fred’s paper (or rather the authors thereof) show an amazing ignorance wrt the LIA. Consider – we’re supposedly at close to the same temp as the MWP – which was the precursor to the LIA; Sun conditions (as well as other factors) are supposedly heading in the same direction as at the beginning of the LIA – and Rahmstorf & Co are telling us that we’ll cool by some VERY minor amount, when the previous LIA was a major cooling period. Sorry, but no sale.

        Notice – I didn’t even get to the model. But FYI – since the paper was based on model results, I’m not likely to be overwhelmed anyway.

        Bottom line is that it’s interesting, but REALLY unlikely to have anything to do with reality. YMMV

        Also FYI – I started predicting this over two years ago – and I don't actually want it to happen. Personal opinion is that "warmer is better". I've been COLD before – it's not that much fun.

      • What caused the so-called LIA to be global?

        Anyway, no sale on comparing a climate event with CO2 levels at or below 290 ppm, and a potential climate event with CO2 levels at 390 ppm and above.

      • JCH –
        Anyway, no sale on comparing a climate event with CO2 levels at or below 290 ppm, and a potential climate event with CO2 levels at 390 ppm and above.

        Yeah, that IS a question in my mind, too. But then I’m not a believer that CO2 is the “main driver” (or at least, the “only driver”) so I just find it “interesting.”

        What caused the so-called LIA to be global?

        If you figure that out, you may have a future in science. :-)

      • JCH,

        CO2 is irrelevant.

      • Using a different model, Song et al. arrived at the similar conclusion that a repeat of the Maunder Minimum (MM) would be expected to exert only a small cooling influence. What is interesting about their analysis, however, is that it yields substantial regional differences that appear to resemble those recorded for the MM. While the global effect is very limited, significant cooling is shown for Northern Europe, Northern Asia, and North America, with many other regions exhibiting a warming trend. The LIA itself was characterized by significant regional differences. Some of the effects appear to be mediated by shifts in the North Atlantic Oscillation to a negative phase, similar to what has been observed in winters of the past few years.

      • As with warming, we would expect higher latitudes to be more sensitive to cooling due to the ice albedo feedback, which in this case is from growing ice coverage, so this makes sense to me.

      • Dr. Moolten, what appeals to me about these papers is that if Dr. Hill and the others are right, and we do go into a Maunder-type event, then we will have real observations of real effects. I think in 10-15 years we will have a collective wisdom about climate that is missing from today's world.

      • Fred,
        The paper may be sincere, but it is not correct.
        I believe it will still be useful to discuss CO2 mitigation and the likelihood it could be an illusory goal.

      • andrew adams

        hunter,

        What is the basis for your claim that the paper is not correct?

      • aa,
        Good question.
        This thread is so nested in so many other threads, I am having a lot of trouble sorting out which comment of Fred’s I was referring to.
        What a tangled web we weave, when we post on limited blogware. ;^)
        My main interest, however, is exploring CO2 mitigation strategies and finding any that can actually work in the real world.
        So far, nothing at all seems to work as advertised.
        Do you have any insights on this?

      • andrew adams

        hunter,

        I was assuming you were referring to the Rahmstorf paper; if not, then no worries.

        Your question re mitigation will require a longer response – I’ll reply later.

      • Say we're about to enter into an MM-type situation: a solar snooze. There is this odd theory that CO2 will warm the earth. We could mitigate an MM-type event by spewing a whole bunch of CO2 into the atmosphere and avoid being victimized by another so-called LIA.

  21. heh. Like I said 50/50.

    • Rob Starkey

      LOL =/-50%

      • Rob Starkey

        meant +/- 50%

      • hehe. I bet I end up being closer to the truth than those who say 0% or 100%.

        The trick would be figuring out what kind of information would move me off this estimate.
        That's a puzzler.

      • Rob Starkey

        What will also be interesting is to see what happens if other factors actually do minimize or hide any warming due to human influence for an extended period. The reactions of people on the issue will be something like "see, it isn't warmer, let's go to sleep" vs. "good, we have more time to build better infrastructure".

      • I'd vote for better infrastructure. Jobs are a good thing. Taxes, not so good.

      • andrew adams

        How do we pay for the better infrastructure?

  22. 5.2 Sources of uncertainty

    “Not only does God play dice, but sometimes he throws the dice where we can’t see them.” Stephen Hawking
    _______

    But Hawking doesn’t seem uncertain about the risks of continued global warming from man’s activities.

    http://350orbust.wordpress.com/2010/07/23/carl-sagan-and-stephen-hawking-on-effects-of-global-warming-the-runaway-greenhouse-effect-on-venus-is-a-valuable-reminder-to-take-the-increasing-greenhouse-effect-on-earth-seriously/

    • SEE: Pooh, Dixie | June 14, 2011 at 1:44 pm

      It’s the sun, stupid.

    • nutso fasst

      There must be a more appropriate quote somewhere. What did Linus Pauling have to say about climate change?

      Pliny the Elder: “In these matters the only certainty is that nothing is certain.”

  23. Judith Curry wrote:

    The IPCC’s conclusion on attribution is reached using probabilistic causation, whereby an ensemble of simulations are used to evaluate agreement between observations and forcing for simulations conducted with and without anthropogenic forcing.

    Judith (or anyone else), I wonder if you could give your thoughts on the validity of using an ensemble of simulations to determine anything. Has it ever been used in any field of science other than climate science?

    I can understand how, if one were measuring, say, the charge of an electron, repeated measurements would converge to a value. I can see that for any particular computer program, increasing the precision of floating point numbers (e.g. going from 64 to 128, or 128 to 256 bits) would give different results for the same input, but the results would converge with increasing precision. Now, increasing the number of simulations might (or might not) converge, but I'm confused as to what the ensemble would converge to (see the sketch below for one idealized answer).

    I hope I’m not missing anything obvious

    klee12
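    One idealized answer to klee12's question, sketched below with invented numbers: the mean of unsynchronized chaotic runs does converge as the ensemble grows, but to the model's forced response (its expected trajectory), not to the single realization the real climate happens to follow.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.arange(100)
    forced = 0.01 * t  # the model's forced response (illustrative)

    def member():
        # forced response + unsynchronized internal variability + noise
        phase = rng.uniform(0, 2 * np.pi)
        return (forced
                + 0.2 * np.sin(2 * np.pi * t / 60 + phase)
                + rng.normal(0, 0.1, t.size))

    # As N grows, the ensemble mean approaches the forced response (the RMS
    # distance shrinks roughly like 1/sqrt(N)) -- not any one realization.
    for n in [1, 4, 16, 64]:
        ens_mean = np.mean([member() for _ in range(n)], axis=0)
        rmse = np.sqrt(np.mean((ens_mean - forced) ** 2))
        print(f"N = {n:3d}: RMS distance from forced response = {rmse:.3f}")
    ```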

  24. Just some quick reactions from a layperson’s read:

    The 50:50 vs PDF issue, I think, only becomes a problem if you don't carefully separate the problem of estimating a PDF from the problem of decision-making under uncertainty. The latter requires the former (and maybe the IPCC can be criticised for not paying this enough mind), and regrettably strongly bimodal distributions don't abound, but I'm not sure the two issues are quite in the tension that you seem to imply (a sketch of the two steps follows at the end of this comment).

    On the bootstrap plausibility and related issues I find it useful to consider in an abstract sense “what additional information is being added by this process”. Stepping back it is usually very simple to recognise that any refinement process that relies on the information you are trying to replicate (e.g. tuning) creates risks of contamination, and any such process that doesn’t add new information is likely sophistry.

    Finally I wonder whether explicit mention of the bias in GCMs is worthwhile in discussing ‘confidence in attribution’. For me the bias in GCMs (plus the understatement of uncertainty, which you do discuss) instinctively makes me feel uneasy, particularly given the likely non-linear nature of forcings.
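    To make that separation concrete, here is a minimal sketch with all numbers invented for illustration: step one is a PDF over climate sensitivity, step two is a decision criterion (expected loss) computed from it. A fat upper tail can dominate the expected loss even when the central estimate looks benign.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Step 1: a PDF for climate sensitivity S -- an illustrative lognormal
    # with median ~3 C and a fat upper tail (an assumption, not an IPCC product).
    samples = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=100_000)

    # Step 2: decision-making under uncertainty -- a toy convex damage
    # function, so high-S outcomes are disproportionately costly.
    def damage(s):
        return 0.3 * s ** 2  # arbitrary units

    median_s = 3.0
    print(f"damage at the median S:       {damage(median_s):.2f}")
    print(f"expected damage over the PDF: {damage(samples).mean():.2f}")
    # The expectation exceeds the plug-in value because the convex damage
    # function weights the fat upper tail (Jensen's inequality).
    ```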

  25. “The American Astronomical Society meeting in Austin has just made a major announcement on the state of the sun. Sunspots may be on the way out and an extended solar minimum may be on the horizon.”

    This is major, a potential complete game changer. Instead of warming, a possible second "little ice age." This global warming stuff, I'm convinced, will go down in history as modern man's greatest blunder. Social scientists will be discussing it for years to come under the general rubric of "The Madness of Crowds and Popular Delusions."

    But then again, I think that's probably wrong, at least for the alarmist climate scientists: in their case, the explanation is entirely obvious and direct. Money and power corrupt.

    • Pokerguy – The sunspot predictions are still very tentative, based on the articles rather than the news report. Nevertheless, a prolonged minimum would have some impact. It would be unlikely to prove a “game changer”, however, as mentioned above at Comment 75522.

      • Fred Moolten

        You have opined that the cause (solar or ?) of the most recent lack of warming of both the atmosphere at the surface (HadCRUT) and the upper ocean (ARGO) is "unlikely to prove a 'game changer'".

        Well, Fred, I'd say if the "lack of warming" continues for another few years it will definitely be a "game changer", inasmuch as it will represent a direct falsification of the premise that AGW has been the principal driver of our climate and, hence, represents a serious potential threat to human society and our environment, the premise upon which the entire IPCC storyline is based.

        Don’t you agree?

        Max

      • Max, another few years will have almost no informative value regarding basic climate principles. If you look at temperature records over the past 100 years, a few years or even a decade of flat or declining temperatures failed to alter the overall warming trend. There is nothing unusual about recent temperatures in that regard. The reason is, I believe, fairly obvious: anthropogenic greenhouse gases are not the only factor affecting temperature. They are probably, however, the major factor likely to operate over multiple decades to a century.

      • Fred Moolten

        Max, another few years will have almost no informative value regarding basic climate principles. If you look at temperature records over the past 100 years, a few years or even a decade of flat or declining temperatures failed to alter the overall warming trend.

        If by "basic climate principles" you mean the GH theory, the fact that CO2 is a GHG, and the fact that CO2 is produced by human activity, then you are correct.

        If you refer, however, to the IPCC model-based notion of high 2xCO2 climate sensitivity, and to the projections for the future which are based on this notion, then you are wrong.

        And this is the real point of debate here, Fred (i.e. overconfidence in IPCC's detection and attribution of past climate change and the resulting notion of high 2xCO2 climate sensitivity), not the basic GH theory, which no one disputes.

        Some reports have stated that if we have 15 years of no warming despite a continued increase of CO2 to record levels, this will constitute a direct falsification of the model-based high climate sensitivity estimates.

        I personally believe we are getting close to that point at which one can say that AGW has been shown not to be the principal driver of our climate.

        Other than the observed multi-decadal cooling cycles in between the observed multi-decadal warming cycles, there has been no "decade of flat or declining temperatures". Certainly there was none from the time the late 20th century warming started around 1975 until the most recent decade.

        Do you believe that if we have another decade of no warming of the atmosphere as well as the upper ocean despite continued CO2 increase to record levels, the notion of a high 2xCO2 climate sensitivity will have been falsified or not?

        How many years will it take, Fred?

        Or do you believe that the theory will continue to live long after the facts on the ground have shown it to be false?

        Max

      • Fred Moolten

        You referred to the "overall warming trend".

        This has been at a rate of 0.041C per decade since the modern record started in 1850, occurring in three statistically indistinguishable 30-year warming cycles, each followed by a 30-year cycle of slight cooling (a sketch of the trend arithmetic appears below).

        This “overall warming trend” started long before there were any human CO2 emissions to speak of (as we have been emerging from the Little Ice Age) and has continued through the most recent warming.

        No one argues that this trend exists, Fred. It just doesn't have much to do with AGW, that's all.

        Max
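        For what it is worth, a figure like 0.041C per decade is just an ordinary least-squares slope converted to degrees per decade. A minimal sketch against a synthetic series (the data below are invented; a real check would load the HadCRUT annual anomalies):

        ```python
        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic stand-in for annual global-mean anomalies, 1850-2010:
        # slow trend + ~60-yr multidecadal cycle + weather noise (all invented).
        years = np.arange(1850, 2011)
        anoms = (0.0041 * (years - 1850)
                 + 0.12 * np.sin(2 * np.pi * (years - 1880) / 60.0)
                 + rng.normal(0, 0.1, years.size))

        # OLS slope in deg C per year, converted to per decade.
        slope = np.polyfit(years, anoms, 1)[0]
        print(f"overall trend: {10 * slope:.3f} C/decade")

        # The same fit over 30-yr windows shows how sub-periods swing well
        # above and below the long-term rate (warming/cooling cycles).
        for start in (1910, 1945, 1975):
            sel = (years >= start) & (years < start + 30)
            s = np.polyfit(years[sel], anoms[sel], 1)[0]
            print(f"{start}-{start + 29}: {10 * s:+.3f} C/decade")
        ```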

      • “They are probably, however, the major factor likely to operate over multiple decades to a century.” – Fred Moolten
        Segalstad, Tom V. 1998. Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the “Greenhouse Effect Global Warming” dogma. In Global Warming: The Continuing Debate, ed. R. Bate, 184-219. European Science and Environment Forum (ESEF). Cambridge, England. http://folk.uio.no/tomvs/esef/ESEF3VO2.htm

        The three evidences of the United Nations Intergovernmental Panel on Climate Change (IPCC), that the apparent contemporary atmospheric CO2 increase is anthropogenic, is discussed and rejected: CO2 measurements from ice cores; CO2 measurements in air; and carbon isotope data in conjunction with carbon cycle modelling.

        It is shown why the ice core method and its results must be rejected; and that current air CO2 measurements are not validated and their results subjectively “edited”. Further it is shown that carbon cycle modelling based on non-equilibrium models, remote from observed reality and chemical laws, made to fit non-representative data through the use of non-linear ocean evasion “buffer” correction factors constructed from a pre-conceived idea, constitute a circular argument and with no scientific validity.

        Both radioactive and stable carbon isotopes show that the real atmospheric CO2 residence time (lifetime) is only about 5 years, and that the amount of fossil-fuel CO2 in the atmosphere is maximum 4%. Any CO2 level rise beyond this can only come from a much larger, but natural, carbon reservoir with much higher 13-C/12-C isotope ratio than that of the fossil fuel pool, namely from the ocean, and/or the lithosphere, and/or the Earth’s interior.

        The apparent annual atmospheric CO2 level increase, postulated to be anthropogenic, would constitute only some 0.2% of the total annual amount of CO2 exchanged naturally between the atmosphere and the ocean plus other natural sources and sinks. It is more probable that such a small ripple in the annual natural flow of CO2 would be caused by natural fluctuations of geophysical processes.

        13-C/12-C isotope mass balance calculations show that IPCC’s atmospheric residence time of 50-200 years make the atmosphere too light (50% of its current CO2 mass) to fit its measured 13-C/12-C isotope ratio. This proves why IPCC’s wrong model creates its artificial 50% “missing sink”. IPCC’s 50% inexplicable “missing sink” of about 3 giga-tonnes carbon annually should have led all governments to reject IPCC’s model. When such rejection has not yet occurred, it beautifully shows the result of the “scare-them-to-death” influence principle.

        IPCC's "Greenhouse Effect Global Warming" dogma rests on invalid presumptions and a rejectable non-realistic carbon cycle modelling which simply refutes reality, like the existence of carbonated beer or soda "pop" as we know it.

      • Un-peer-reviewed assertions are un-peer-reviewed.

      • And peer-reviewed doesn’t mean it’s right.

    • andrew adams

      You seem to have very high confidence in the outcome of reduced sunspot activity compared to the outcome of a change in the levels of GHGs in the atmosphere, despite the fact that AFAIK the effects of the former are not as well understood as the latter. This seems to be another case of the kind of selective skepticism which Steven Mosher points out above.
      I'm certainly not denying that such a decrease in sunspots would have an effect, but to claim it could lead to another little ice age would seem to assume fairly high climate sensitivity.
      Of course, any such effect on temperatures due to this unforeseen change in solar activity would not mean that AGW is not a threat, although it would delay the onset of that threat and give us more time to prepare, so it would certainly be welcome. Solar minimums (minima?) don't last forever though.

      • aa –
        Of course, any such effect on temperatures due to this unforeseen change in solar activity would not mean that AGW is not a threat, although it would delay the onset of that threat and give us more time to prepare, so it would certainly be welcome.

        Assuming you are right, tell me that “welcome” stuff in 20 years when the Thames is frozen again. :-)

        Solar minimums (minima) don’t last forever though.

        IIRC, the MM lasted 70-75 years. And the effects lasted a lot longer. And I’m not that “hot” on cold. :-(

      • At 280 ppm and dropping, those poor Europeans were doomed to chattering teeth.

        We're at 394 ppm and counting; we can eat MMs for breakfast.

  26. Gary Mirada

    Judith, I spent some considerable time reading AR4 a couple of years back and came to the conclusion that it is more about politics than science. As the comments of Norm Kalmanovitch make clear, the writers of the report are not interested in the science, only the message. The simple fact that NGOs participate in the process demonstrates the extent to which it is a charade. In the age of the internet this will not wash. The IPCC has zero credibility among those members of the public who have taken the trouble to inform themselves. Whilst I admire your attempts to put the record straight, my fear is that it does not matter what the science says; the politicians will continue to subvert the process to their own ends.

    But it does look like the world will actually be saved by a solar minimum. How ironic.

    Regards Gary

  27. Fred,

    I've no desire to root for another little ice age, but according to the long-range meteorologists I greatly respect, we're headed for 20-30 years' worth of moderate cooling anyway. Add a major solar minimum into the mix, and this could be potentially disastrous.

    The idea that it’s going to somehow save us from broiling to death is based on the premise that AGW concerns are valid, which I don’t buy.

  28. Judith Curry

    [Somehow the formatting for this post got screwed up, so am reposting – please delete first attempt]

    Thanks for revisiting one of the key open questions regarding the confidence level of the IPCC premise that AGW has been a principal cause of past warming, i.e. the “very likely” attribution statement in AR4, upon which everything else rests.

    You wrote:

    The IPCC’s detection and attribution analysis, which is the basis of the “very likely” attribution statement in the AR4, is based upon the following argument:

    1. Detection. Climate change in the latter half of the 20th century is detected based upon an increase in global surface temperature anomalies that is much larger than can be explained by natural internal variability.

    There is a basic error in logic here, Judith. IPCC has stated repeatedly that the models cannot explain the late 20th century warming without anthropogenic forcing.

    AR4, Ch. 3, p. 240:

    The 1976 divide is the date of a widely acknowledged ‘climate shift’ (e.g. Trenberth, 1990) and seems to mark a time (see Chapter 9) when the global mean temperatures began a discernable upward trend that has been at least partly attributed to increases in greenhouse gas concentrations in the atmosphere…The picture prior to 1976 has essentially not changed and is therefore not repeated in detail here.

    AR4, Ch. 9, p. 684 (the paragraph you cited):

    The fact that climate models are only able to reproduce observed global mean temperature changes over the 20th century when they include anthropogenic forcings, and that they fail to do so when they exclude anthropogenic forcings, is evidence for the influence of humans on global climate.

    “Evidence”? Hmmm…

    AR4, Ch. 9, p. 685:

    Climate simulations are consistent in showing that the global mean warming observed since 1970 can only be reproduced when models are forced with combinations of external forcings that include anthropogenic forcings.

    Much is made of the late 20th century warming period. Yet the statistically indistinguishable early 20th century warming period is hardly mentioned at all, except for the brief reference in Ch. 3 and this sentence in Ch. 9, p. 691:

      Detection and attribution as well as modelling studies indicate more uncertainty regarding the causes of early 20th-century warming than the recent warming. A number of studies detect a significant natural contribution to early 20th-century warming…

    So we have the following logic:

    a. Our models cannot explain the early 20th century warming.

    b. We know that the statistically indistinguishable late 20th century warming was largely caused by anthropogenic forcing.

    c. How do we know this?

    d. Because our models cannot explain it any other way.

    This is an “argument from ignorance”, Judith, and hence a basic logical fallacy.

    This basic error in logic carries through to the next paragraphs you cite:

    2. Confidence in detection. The quality of agreement between model simulations with 20th century forcing and observations supports the likelihood that models are adequately simulating the magnitude of natural internal variability on decadal to century time scales. From the IPCC AR4: “However, models would need to underestimate variability by factors of over two in their standard deviation to nullify detection of greenhouse gases in near-surface temperature data, which appears unlikely given the quality of agreement between models and observations at global and continental scales (Figures 9.7 and 9.8) and agreement with inferences on temperature variability from NH temperature reconstructions of the last millennium.”

    3. Attribution. Climate model simulations for the 20th century climate that combine natural and anthropogenic forcing agree much better with observations than simulations that include only natural forcing. From the IPCC AR4: “The fact that climate models are only able to reproduce observed global mean temperature changes over the 20th century when they include anthropogenic forcings, and that they fail to do so when they exclude anthropogenic forcings, is evidence for the influence of humans on global climate.”

    4. Confidence in attribution. Detection and attribution results based on several models or several forcing histories suggest that the attribution of a human influence on temperature change during the latter half of the 20th century is a robust result. From the IPCC AR4: “Detection and attribution results based on several models or several forcing histories do provide information on the effects of model and forcing uncertainty. Such studies suggest that while model uncertainty is important, key results, such as attribution of a human influence on temperature change during the latter half of the 20th century, are robust.”

    As you can see, the “attribution of a human influence on temperature change during the latter half of the 20th century” is anything but “robust” until the early 20th century warming can be explained and the same (natural) factors that caused it can be definitively ruled out for the latter warming period.

    To this point you wrote:

    Whereas all of the IPCC AR4 models agree that the warming observed since 1970 can only be reproduced using anthropogenic forcings, models disagree on the relative importance of solar, volcanic, and aerosol forcing in the earlier part of the 20th century (IPCC AR4 WGI Section 9.4.1). The substantial warming during the period 1910-1940 has been attributed by nearly all the modeling groups to some combination of increasing solar irradiance and a lack of major volcanic activity.

    This is simply another way of stating “our models cannot explain the early 20th century warming” – and, even worse for the “confidence level”, “our models cannot rule out that the same natural factors which may have been primarily responsible for the early 20th century warming were also primarily responsible for the late 20th century warming”.

    No matter how IPCC tries to rationalize this major discrepancy, it is there. And it represents a major weakness relating to the level of confidence of the “attribution of a human influence on temperature change during the latter half of the 20th century”.

    And until this basic weakness can be satisfactorily resolved, all the rest of AR4 (regarding model-derived climate sensitivity and projections of future climate trends, etc.) rests on a weak foundation and is highly questionable.

    Max

  29. Judy – If you are revising your paper, your principal focus will inevitably be directed toward reviewer comments rather than ours. Nevertheless, I’ll offer my impression for whatever use you may find for it during the revision process.

    Even before seeing Gabi Hegerl’s comment, I had come to a somewhat similar conclusion – that your draft gave the impression that IPCC attribution of most warming since about 1950 to anthropogenic greenhouse gases was based exclusively on the difference between model runs with and without anthropogenic influences. While those model comparisons served an important confirmatory role, additional observational data almost certainly provided a basis for the conclusion that was at least partially model-independent.

    For any apportionment of attribution, I believe the principal task would be to estimate the relative “forcing” of different climate processes, anthropogenic and natural, averaged over the 1950-2007 interval. I put “forcing” in quotes because the actual phenomena undoubtedly included internal climate processes in addition to imposed perturbations, but for these purposes, what seems most relevant is that all relevant processes were likely to invoke similar feedbacks over the long term, given that climate feedbacks are basically a response to a temperature change rather than to the cause of the change. For these reasons, precise estimates of climate sensitivity to forcings are relatively unimportant, because they will have little effect on the apportionment.

    A legitimate point you’ve made on several occasions regards the uncertainty and potentially significant magnitude of natural climate fluctuations. Because of this, it becomes important to evaluate those fluctuations that are the most plausible source of a large net effect. Of these, the most salient are probably ENSO, the AMO, the PDO, and solar irradiance, with the first three incorporating significant degrees of chaotic behavior. Small changes in stratospheric water vapor have also been noted, but increases and reductions tended to offset each other – some of these may have been anthropogenic rather than “natural”. Light-scattering anthropogenic aerosols undoubtedly played significant roles, but the dimming of the 1950’s-1970’s appears to have been offset by the subsequent brightening due to aerosol reduction.

    How much have these fluctuations altered the temperature trend of the 1950-2007 interval? ENSO events have exerted only temporary effects, but other processes have operated over longer intervals. If one looks at the AMO, the PDO, and solar irradiance, what strikes me is that each of these phenomena would likely have dominated the temperature record at some point, but averaged over the 57 years, their net overall effect has probably been small. It needn’t have come out this way, which is why it’s important to evaluate fluctuations with a specific interval in mind, but in fact, the averaging does appear to result in net effects much smaller than any peak effects.
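
    To make the averaging point concrete, here is a minimal sketch (Python; the amplitude and period are illustrative assumptions, not fitted values): a quasi-60-year oscillation sampled over the 57-year window contributes almost no net effect relative to its peak, although the residual does depend on the phase at the window edges.

        import math

        # Illustrative only: a 60-year sinusoidal oscillation with 0.2 C
        # amplitude, averaged over the 1950-2007 window.
        amplitude, period = 0.2, 60.0
        years = range(1950, 2008)
        vals = [amplitude * math.sin(2 * math.pi * (y - 1950) / period)
                for y in years]
        print(sum(vals) / len(vals))  # near zero, versus the 0.2 C peak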

    This suggests that anthropogenic trends for the interval specified by AR4 may have been altered rather slightly by other identified variables. In fact, I would suggest that probably the main competitor to the greenhouse gases cited by the IPCC (presumably mostly CO2, methane, and tropospheric ozone) was anthropogenic black carbon, which Ramanathan has estimated to have contributed significantly to the warming trend. I wouldn’t find it unreasonable to assign as much as 90 percent of the identified warming to anthropogenic factors, while limiting the role of greenhouse gases to a smaller contribution. Even so, that role might comfortably exceed 50 percent.

    None of this excludes the possibility that significant but undetected phenomena beyond those mentioned might have affected the temperature trend. While that would forbid AR4 from assigning anthropogenic factors a major role with 100 percent certainty, the lack of evidence for other factors with the long cycle length needed to challenge that conclusion would seem to me to make “very likely” a justifiable description for the IPCC conclusions.

    If you want to incorporate any of this in your revision, the two points I would most recommend are (1) to acknowledge that the AR4 conclusions are not exclusively model-based, and (2) to identify to the extent feasible major fluctuations that might compete with GHGs rather than refer to them abstractly as climate variability, so that readers can assess for themselves how important they believe these sources of variation might have been during the particular interval cited by AR4, and whether it is necessary to invoke unidentified variables to make natural variation a potent competitor to anthropogenic forcings.

    • James Evans

      “observational data almost certainly provided a basis for the conclusion”

      Are you assigning a probability of “almost certain” to the idea that the IPCC used observational data to provide a basis for the conclusion? Is it not clear in the report whether they did or not?

    • Fred, the issue about conclusions being exclusively model based is puzzling. Of course they are comparing models to observations; they are not just looking at model simulations. In chapter 9, there isn’t a single figure that doesn’t include climate model results (although 9.4 only includes an EBM). Also, I don’t see this as an averaging issue over 1950-2005, but rather a trend issue. Multi-decadal oscillations with nominally 60 year periods are completely ignored (only high frequency ENSO is considered and dismissed). Aerosols including black carbon are a total wild card. And I am not convinced we have correctly sorted out the solar forcing over this period, not to mention the possibility of indirect solar effects. And the main issue is CO2, not net anthropogenic effects.

      • “Multi decadal oscillations with nominally 60 year periods are completely ignored”

        Which oscillations are you referring to, Judy? My comment cited the PDO and AMO, which do tend to “average out” in the sense that positive and negative phases tended to balance over the interval in question. These phenomena are mentioned in AR4, although you are probably right that they aren’t specifically invoked in the attribution assessment.

        Regarding the comparison of model simulations with observations, I probably failed to clarify what I meant. Certainly, the simulations were compared with observations, and those that included an anthropogenic component fared better. However, the models, while simulating various natural fluctuations, did not, as far as I know, specifically reproduce the observed AMO and PDO curves, and so the latter are at least semi-independent data sources. This is where I don’t see a trend despite the magnitude of the peaks and troughs within the interval.

        The main issue, as you point out, is not net anthropogenic effects (with black carbon a “wild card”), but I believe it includes all anthropogenic GHGs, and not merely CO2. Haven’t methane and tropospheric ozone also contributed to GHG effects since 1950?

  30. Fred Moolten

    Your advice to Judith:

    (1) to acknowledge that the AR4 conclusions are not exclusively model-based, and (2) to identify to the extent feasible major fluctuations that might compete with GHGs rather than refer to them abstractly as climate variability, so that readers can assess for themselves how important they believe these sources of variation might have been during the particular interval cited by AR4, and whether it is necessary to invoke unidentified variables to make natural variation a potent competitor to anthropogenic forcings.

    Point 2 makes very good sense for those “major fluctuations that might compete with GHGs” which are known, but it does not cover the (very likely) much more significant category of those which are unknown. This leaves the assessment by readers wide open. There are just too many uncertainties and unknowns to make a meaningful “assessment by difference”.

    To your point 1, I would not spend too much time trying to find a few actual real-time observations used in determining the model-derived climate sensitivity reported by IPCC (if there really are any). One could add the word “almost” before “exclusively”, if that would make you happy, but let’s face it: the 2xCO2 climate sensitivity estimates of IPCC are essentially all model-based.

    Just my take on this, of course.

    Max

    • Fred and Judy

      Looks like my post crossed with your exchange on this same topic.

      Max

  31. If we focus only on the first assumption, “Detection”, we don’t have to go any further with #2 etc. VS on Bart Verheggen’s blog a year ago demonstrated mathematically, at least to my satisfaction, that the temperature time series 1880 to 2008 fell within a normal range of variability when using land based data, never mind including what was happening in the oceans. Tomas Milanovic has argued that we do not have the tools at present to assess a spatio-temporal chaotic system. Andrew Edmonds points out the perils in attempting to mathematically project chaotic events into the future: uncertainty compounds upon itself, making it highly UNLIKELY that we might be certain about any future event occurring. What is the rush to certainty when “there is still time brother.”

  32. Peter Smith

    I don’t think the IPCC is the only outfit that suffers from the problem of overconfidence in its predictions.

    http://wattsupwiththat.com/2011/06/14/the-major-aas-solar-announcement-suns-fading-spots-signal-big-drop-in-solar-activity/

    http://science.nasa.gov/science-news/science-at-nasa/2008/11jul_solarcycleupdate/

    If the AAS prediction is correct then the sensitivity of temperature to CO2 level may be the least of our worries.

    Of course it is predictions we are comparing, so obviously there has to be some question as to the confidence of the AAS prediction too. :)

  33. I am a little busy with a new water and environment consultancy – but this needs a brief comment on the major source of uncertainty in climate.

    The Royal Society says that climate has internal climate variability, as climate is an example of a chaotic system. Chaotic systems have control variables and multiple feedbacks. They are complex and dynamic and have particular properties. Small changes in control variables drive spontaneous and nonlinear responses in the state of the system. At specific junctures chaotic systems fluctuate to extremes before settling into a new state entirely. This can be seen in the Pacific Ocean – a very significant driver of interannual to decadal hydrologic and climate variability – in changes in hydrology and ocean states around 1910, the mid 1940’s, the late 1970’s and after 1998.
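
    As a minimal illustration of what a control variable means in a nonlinear system, here is the logistic map (a standard toy chaotic system, not a climate model; the parameter values are the textbook ones): a small change in the control parameter moves the system from a regular cycle to aperiodic fluctuation.

        def trajectory(r, x0=0.4, burn=200, n=8):
            # Iterate the logistic map x -> r*x*(1-x), discarding a transient.
            x = x0
            for _ in range(burn):
                x = r * x * (1 - x)
            out = []
            for _ in range(n):
                x = r * x * (1 - x)
                out.append(round(x, 3))
            return out

        print(trajectory(3.55))  # settles into a repeating cycle
        print(trajectory(3.57))  # aperiodic for a slightly larger control value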

    It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. ‘Our interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural.’ Anastasios Tsonis

    I Googled the Tsonis quote and the first hit was a 2010 Quadrant article by me – well worth reading if I say so myself – http://www.quadrant.org.au/blogs/doomed-planet/2010/02/ellison/page:printable

    The Pacific modes have feedbacks of winds, clouds, ocean currents and bottom water upwelling – going well beyond the metaphysics of complex systems to observational data. The problem goes to what sort of a handle we have on the Royal Society’s ‘internal climate variability’ and how this might be modelled. Almost negligible and with great difficulty, respectively. Attributing change – against a background of internal variability that happens spontaneously and abruptly in the absence of forcing in the accepted sense – seems a futile occupation.

    As a rider I always have to add that uncertainty in the science does not provide a compelling rationale for neglecting greenhouse gas emissions. Carbon dioxide levels are at 10 to 15 million year highs. Terrestrial botany is showing the effects on stomatal size and density – with unknown implications for hydrology. The gas must have an impact on surface pH and therefore on plankton species – the base of the oceanic food chain. The radiative effects are theoretically a control variable in the complex climate system with unknown risks.

    This should be sufficient to engender caution IMO – although I despair of a widespread and rational discussion on solutions. We are stuck at idiomatic narratives superficially in the language of science – for and against the mitigation of greenhouse gases. ‘But, as we are increasingly coming to understand, it is often not questions about science that are at stake in these discussions. The culturally potent idiom of the dispassionate scientific narrative is being employed to fight culture wars over competing social and ethical values.’ http://eprints.lse.ac.uk/24569/

    • Norm Kalmanovitch

      Before you can attribute GHGs to global temperature changes you must first demonstrate how this is possible. The Earth only radiates energy over a specific range dictated by its temperature. This varies at different places around the globe and also varies with time of day, as demonstrated by the desert surface, which is over 40°C at sunset and may drop to near freezing by daybreak. On average the Earth radiates to a close approximation to a blackbody with a temperature of 288 K. The peak energy is at about 16 microns, and the energy drops to below 10% of this peak value below 7 microns and above 100 microns.

      Water has a permanent dipole moment, so it has a rotational mode that is affected by the entire spectrum, but water is not mentioned as a GHG in the Kyoto Accord. Ozone has a measurable influence on the Earth’s radiative spectrum centred at around 9.5 microns, but it too is not mentioned as a greenhouse gas in the Kyoto Accord (because it would confuse the ozone depletion issue, which the environmentalist lobby also promotes).

      CO2 is mentioned in the Kyoto Accord, but most of the energy in the 14.77 micron band that CO2 can affect is already affected, and there is not enough energy remaining for even a ten-fold increase in CO2 to cause any more than a degree C of further warming. Methane only has an effect in the 7.7 micron range, and this is a very low energy portion of the Earth’s radiative spectrum which is already saturated by water vapour, so even a hundred-fold increase in methane would be incapable of any more than a tenth of a degree C of further warming. All the rest of the GHGs named in Kyoto have their effect outside the range of wavelengths where the Earth radiates at high enough levels for any of them to cause even a tenth of a degree C of warming, even if their concentration grew a hundred-fold.

      Since the total greenhouse effect is between 33°C and 35°C depending on who is citing the value, and clouds and water vapour account for well over 90% of the greenhouse effect, there is only 10% of the effect attributable to GHGs plus ozone (which is not mentioned as a GHG). This is only 3.3 to 3.5°C, and since CO2 is the only GHG in the range of the Earth’s spectrum that can have a significant effect, but well over 80% of the possible effect has been achieved, additional CO2 can only have an effect between 0.66°C and 0.7°C regardless of concentration, because there is only 20% of the energy remaining to be accessed.
      Paleo temperatures from oxygen isotope ratios in the Greenland ice cores show that a temperature increase of 0.7°C will not bring the current temperature up to the level of the Medieval Warm Period, let alone the Roman Optimum and the Minoan Warm Period, which were both warmer than the Medieval Warm Period. Since this is ice core data, the ice didn’t melt during these warm periods, and there was no abnormal sea level rise indicated during these warm times either. The bottom line is that it is a physical impossibility for either any detection or attribution of GHGs to be the prime cause of any observed global warming.
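
      As an aside on the quoted peak wavelength, where a 288 K blackbody spectrum peaks depends on the spectral convention, which is worth keeping in mind when comparing quoted figures. A quick check (Python, using the standard Wien displacement constants):

          T = 288.0
          b_wavelength = 2.898e-3  # Wien constant, m*K (per-wavelength form)
          b_frequency = 5.879e10   # Wien constant, Hz/K (per-frequency form)
          c = 3.0e8                # speed of light, m/s

          print(b_wavelength / T * 1e6)       # ~10.1 microns
          print(c / (b_frequency * T) * 1e6)  # ~17.7 microns

      So quoted peaks anywhere between roughly 10 and 18 microns may simply reflect the convention used.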

      • Norm – I don’t want to be mean-spirited, but you have stated elsewhere that you are a geophysicist, and I find that rather hard to believe, so I wonder whether you can state the credentials that qualify you to make that claim. (I’m referring to someone with a PhD or at least a Masters degree).

        I say this because almost every one of your comments in the various threads over the past week or so has been partially or completely wrong, including those involving some of the most basic geophysical principles (e.g., the role of window region OLR as a function of global temperature change or the rotational quantum transitions that are critical to the greenhouse properties of CO2 and its non-saturability even at high concentrations).

        It seems to me that if you want to participate in these threads from the asserted vantage point of a geophysicist, you will have to review and revise many of your perceptions that are currently contradicted by modern understanding of the field. Obviously, you needn’t be knowledgeable in geophysics to participate here, but if you’re not, you shouldn’t claim to be.

      • Fred,
        Norm may or may not be a complete fraud, but perhaps you can tell us more about yourself as well?

      • Fred has a post on the Denizens thread, I encourage everyone to post a brief bio on that thread
        http://judithcurry.com/2010/11/12/the-denizens-of-climate-etc/#comment-11223

      • Dr. Curry,
        Thank you very much.
        Fred is a fascinating and patient poster.
        I learn much from him even when I disagree, and his civility is at least as informative.

      • Hunter – I wouldn’t use the word “fraud”. I suspect that Norm does not have graduate level credentials in geophysics, or at least none based on a geophysics education in recent decades. He is certainly entitled to express his views on these subjects, but when he asks for the implied respect due a “geophysicist” while demonstrating considerable ignorance of basic principles, it’s reasonable to ask him to document his credentials.

      • Fred

        How does having a PhD or a Masters change if he is right or wrong?

        “You don’t want to be mean-spirited”?

      • Teddy – By mistake, my reply ended up below, at Comment-75823

  34. The attribution for the stated period (let’s say 1950-2000) should take into account these numbers. Approximate warming=0.6 C, CO2 added=18%.
    The no-feedback response (take 1 C per doubling) to this CO2 addition would be 0.25 degrees.
    What can we infer? That negative feedback looks more unlikely than positive feedback because more heating would be left to explain by other means. A positive feedback of 2.4 C per doubling would give 0.6 C in this period, but we suspect aerosols caused cooling, so this could be regarded as a lower limit, unless other GHGs like methane have offset the aerosols. I think the case for not only CO2 attribution, but positive feedback is strong enough to merit “very likely”, mainly because of the gap to explain between the no-feedback response and the actual warming, which is too large for any decadal internal variability so far detected.
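
    The arithmetic can be checked in a few lines (Python; the 0.6 C and 18% figures are those quoted above, and the small differences from the 0.25 C and 2.4 C cited are rounding):

        import math

        dT_obs = 0.6     # warming over 1950-2000, deg C (figure from above)
        co2_frac = 0.18  # fractional CO2 increase (figure from above)

        doublings = math.log(1 + co2_frac) / math.log(2)
        print(doublings)           # ~0.24 of a doubling
        print(1.0 * doublings)     # no-feedback response at 1 C per doubling
        print(dT_obs / doublings)  # ~2.5 C per doubling if CO2 explains it all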

    • Jim D

      You have selected a piece of the modern temperature record to estimate the 2xCO2 climate sensitivity (and end up with 2.4C).

      Let’s put on the thinking cap and widen the horizon a bit.

      Instead of looking at a selected period 1950 to 2000, let’s look at the entire modern record, from 1850 to today, to get a better picture.

      We have seen a total linear warming of 0.041C per decade or 0.66C over the entire 160-year HadCRUT record (this has occurred in 3 statistically indistinguishable 30-year warming cycles, with 30-year cycles of slight cooling in between, as Girma has shown us graphically).

      CO2 has increased from around 280 ppmv to 390 ppmv over this same period, with no 30-year cycles and most of the increase occurring after Mauna Loa measurements began in 1958.

      The temperature record is a random walk, statistically speaking, and there is no robust statistical correlation between CO2 and temperature, so it is obvious that there is something else at play here (possibly in addition to CO2).

      IPCC has estimated the various radiative forcing components (AR4 WG1 SPM Figure SPM.2.)

      Here we see that the total net anthropogenic forcing components from pre-industrial time to 2005 represent an estimated radiative forcing of 1.6 W/m^2, while that of CO2 alone is 1.66 W/m^2. This tells us that over this period all other anthropogenic forcing components (aerosols, other GHGs, land use changes, surface albedo changes, etc.) essentially cancelled one another out, so we can ignore your statement “we suspect that aerosols caused cooling”, as this is already compensated for by other anthropogenic warming besides CO2.

      So, if we assume that all warming since pre-industrial time has been anthropogenic, we have

      C1 = 280 ppmv
      C2 = 390 ppmv
      C2/C1 = 1.3929
      ln(C2/C1) = 0.3314
      dT (1850-2010) = 0.66C
      ln 2 = 0.6931
      dT(2xCO2) = 1.38C

      We know, however, that all warming has not been anthropogenic, particularly the warming observed in the first half of the 20th century and the latter 19th century, when there was hardly any increase in human GHGs.

      Several solar studies have estimated that around half of the total warming can be attributed to the unusually high level of 20th century solar activity (highest in several thousand years), with a large percentage of this warming occurring in the first half of the century (Shapiro et al. 2011, Scafetta + West 2006, Solanki et al. 2004, Shaviv + Veizer 2003, Lockwood + Stamper 1999, Geerts + Linacre 1997, Gerard + Hauglustaine 1991, among others).

      On this basis, we have the same calculation as above, but with only 0.33C warming attributed to AGW (= CO2).

      We then arrive at a 2xCO2 climate sensitivity range of 0.7 to 1.4C.
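
      For anyone who wants to reproduce the arithmetic, a short script (Python, using only the figures stated above):

          import math

          C1, C2 = 280.0, 390.0  # ppmv, pre-industrial and ~2010 (from above)

          def sensitivity(dT):
              # 2xCO2 sensitivity implied by warming dT attributed to CO2
              return dT * math.log(2) / math.log(C2 / C1)

          print(round(sensitivity(0.66), 2))  # 1.38 C, all warming from CO2
          print(round(sensitivity(0.33), 2))  # 0.69 C, half attributed to sun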

      This appears to be the best estimate based on actual observations.

      Max

  35. JC



    Knowledge of internal climate variability is needed for both detection and attribution.

    Is not the following internal climate variability?

    http://bit.ly/emAwAu

    If it is not, what is it please?

  36. Forget the overconfidence. Should one have any confidence at all in what the IPCC says?

    Look at the post below by Steve McIntyre on Climate Audit regarding IPCC WG3’s statement on “80 percent of world energy supplies met by renewables by the middle of the century”.

    http://climateaudit.org/2011/06/14/ipcc-wg3-and-the-greenpeace-karaoke/#comments

    The entire press release by IPCC was based on the upper end of a scenario of a report written by Greenpeace. And the lead author of the IPCC assessment of the Greenpeace scenario was the same Greenpeace employee, who prepared the Greenpeace scenarios and whose foreword was written by Rajendra Pachauri.

    As Steve McIntyre says: “Everyone in IPCC WG3 should be terminated and, if the institution is to continue, it should be re-structured from scratch.”

    Totally agree with that. IPCC has completely lost its credibility and is not to be trusted on anything it says.

    • Venter

      Forget the overconfidence. Should one have any confidence at all in what the IPCC says?

      My “Webster” gives me several definitions of “confidence”:

      con-fi-dence

      n
      1. faith, trust
      2. a feeling of self-assurance
      3. a relation of trust
      4. certitude
      5. secret; a communication made in confidence

      adj
      of or relating to swindling by false promises

      Which one are we talking about here?

      Max

  37. Dr. Curry,
    As you know, one of my pet peeves is the asinine claim “we know all the forcings” or “we know the basic physics.” It is simply untrue. I just found this paper, which introduces a new climate forcing that you may wish to discuss here. See http://research.myipcn.org/science/article/pii/S0277379111001181

    Perhaps even more importantly, we don’t understand solar forcing – what causes the changes, or how they are amplified in Earth’s climate. At the American Astronomical Society meeting today, an extended, perhaps decades-long, solar minimum was predicted. Space.com and WUWT have good stories on this major announcement.

  38. “IPCC has completely lost its credibility”

    I’m not sure Steve McIntyre has enough credibility to cost anyone else theirs.

    If you don’t like the IPCC, you can always read the primary literature, which the IPCC basically collects and summarizes. If you did away with the IPCC and its staff of fifty, you would still have the problem of thousands of scientists creating thousands of papers that reflect the reality of rapid global warming.

    Demonizing the IPCC doesn’t help you with your real problem, which is that your beliefs are not compatible with the observed evidence. Which is the real credibility problem.

    • Robert,
      Too bad you did not bother to read Steve McIntyre’s report before commenting. You would have saved yourself the embarrassment.

      • As usual, a fact-free post from Robert. He does not need to read through what’s written, with evidence. Today Steve McIntyre has a lot more credibility than IPCC.

    • Latimer Alder

      Robert

      Suggest that you read the linked article first, then modify your boilerplate defence of the warmist theory in light of what has been presented, not on what you think may have been shown.

      I’m sure that you would wish to do the same due diligence that you instruct others to carry out.

      As it stands you have completely missed the point and are ‘answering’ the wrong charge about the IPCC. It is not only their credibility that is in question……..

    • Robert,
      Thank you so much for a humorous insight into the true believer.

    • Robert

      Demonizing the IPCC doesn’t help you with your real problem, which is that your beliefs are not compatible with the observed evidence. Which is the real credibility problem.

      There was a brief time, when you could (almost) have been right.

      Temperature appeared to be rising (i.e. the globally and annually averaged land and sea surface temperature anomaly construct seemed to show warming of a fractional degree C).

      El Nino was playing along, and even gave us an all-time “warm” year in 1998, hailed at the time as the “warmest year on record”.

      This trend started back in 1976 and was projected to be “never ending” (unless we humans all stopped emitting CO2). Earlier warming and cooling cycles were essentially ignored, since these did not support the theory of dangerous AGW.

      But then a strange thing happened, Robert.

      All those thermometers out there, even the ones next to AC exhausts and asphalt parking lots, began telling us that the warming had stopped. Basta, finito! The trend actually showed slight cooling of our atmosphere, believe it or not!

      Even worse, the new, very comprehensive ARGO measurements showed that the upper ocean had also been cooling since they started measuring in 2003. (So much for Hansen’s “hidden in the pipeline” postulation.)

      Gadzooks! A cooling planet, despite increase of atmospheric CO2 to new record levels! (No wonder Kevin Trenberth called this new “unexplained” “observed evidence” a “travesty”.)

      Others (like you?) have simply stuck their heads in the sand and either ignored the “observed evidence” or called it “irrelevant”.

      Max

  39. Robert

    you would still have the problem of thousands of scientists creating thousands of papers that reflect the reality of rapid global warming.

    http://bit.ly/jo1AH4

    Do you mean to say “rapid global” cooling?

  40. Comes under the heading of not seeing the wood for the trees. Most radiative change happens in the shortwave – roughly an albedo of 0.25 in a blue-green planet to 0.50 in a snowball Earth, about an 85 W/m2 change due to changes in ice, cloud, vegetation, dust etc. We have a very poor handle on these and especially on how they have changed prior to satellites. Indeed – I find it amusing that the satellites continue to say the wrong thing with increasing precision – i.e. that most recent change in radiative balances occurred in the shortwave, with longwave trending in the wrong direction entirely.
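
    The 85 W/m2 figure is easy to verify with round numbers (Python; the solar constant is taken as ~1361 W/m2):

        S0 = 1361.0               # solar constant, W/m^2
        d_albedo = 0.50 - 0.25    # snowball vs blue-green planet (from above)
        print(S0 / 4 * d_albedo)  # ~85 W/m^2 globally averaged shortwave change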

    However, there are biological changes that are happening. And indeed, the chaotic element implies that small changes, say in greenhouse gases, can result in nonlinear changes in outcomes – including very cold conditions that can and have occurred in a matter of months to decades.

    You’re continuing to insist on back-of-the-envelope calcs with no consideration of multiple feedbacks and nonlinear dynamics. It just doesn’t work with any credibility at all. That doesn’t stop people trying.

  41. Pooh, Dixie

    Fred, you cite some model findings that the decline in temperatures is likely to be small. You concede that the real world may behave differently. May I suggest that the real world has behaved differently under the same circumstances (few or no visible sunspots). The historical record is fairly consistent in places where historical records were kept (absence of evidence not being evidence of absence).
    http://en.wikipedia.org/wiki/Little_Ice_Age

    Look up “Year Without a Summer” for its impact (Dalton Minimum, 1816). To be complete, the Mount Tambora eruption is a coincident driver. (You may still read about other effects: Frankenstein, or The Modern Prometheus, The Vampyre, and Lord Byron’s Darkness.)

    I really think you should take a look at Livingston & Penn’s graphs of Umbral Intensity and Umbral Magnetic Field (B Gauss).
    Both graphs available at http://www.leif.org/research/Livingston%20and%20Penn.png

    If there is a connection between Umbral Magnetic Field and Climate, one might consider the decline in Solar Wind and its magnetic field during a quiet sun (not just TSI). The Svensmark theory (low cloud generation on aerosols through GCR ionization in the atmosphere) may be the mechanism.

    The IPCC and the European Union are vocal supporters of the Precautionary Principle in the Global Warming controversy. Now, I would think that the Precautionary Principle would equally apply to a prolonged minimum, and we should be cautious about discarding 70% of our energy supply.

    More discussion on Livingston & Penn here:
    Solar Cycle 24 / Spaceweather; Topic: Sunspot Magnetism – Livingston & Penn
    http://solarcycle24com.proboards.com/index.cgi?board=general&action=display&thread=855&page=1

  42. Andrew Dessler’s comment over at Masterresource is instructive:

    “There’s one additional piece of information missing from this post: this paper was originally submitted to JGR, and it was rejected by that journal, too. When I talked to Lindzen last Oct., he railed about how unfair the reviews from that journal had been. At that point, I think Lindzen recognized that his paper was never going to make it through any kind of legitimate peer review, so he next submitted it to PNAS so he could select his own reviewers. Kudos to PNAS for not letting him select the entirely unqualified Happer or Lindzen’s wholly-owned subsidiary, Choi. But now Lindzen thinks PNAS is being unfair to him. Of course, after so many rejections by so many reviewers, there’s another possibility that Lindzen seems to not consider: his paper is not very good.”

    http://www.masterresource.org/2011/06/lindzen-choi-special-treatment/ comment 27

    • Dessler left the same comment at Climate Audit. So let’s see the responses he got for that

      a.] From Luboš Motl

      Dear Dr Dressler, I wonder whether you agree that your confusion of the names Choi and Chou – the latter hasn’t written anything with Richard for 7 years – destroys 1/2 of the evidence that you have offered us.

      Aren’t you worried that the foundations for the remaining 1/2 of the argument are not very good, either? I find the description of Prof Happer as an unqualified person amazing.

      http://scholar.google.com/scholar?hl=en&q=william-happer

      Happer has done lots of things that use qualitatively similar – but more advanced – physics as the greenhouse effect. In particular, his optical pumping paper has 1100+ citations. There are many other highly influential papers he has co-authored and he has investigated the climate topics in some detail for years.

      What’s really special about the specialized, “qualified” climate scientists whom you would prefer as referees is that they have never contributed anything genuine to the real science – and they form a clique. I don’t think that any of these two features should be presented as an advantage.

      b.] From a poster called Dan Zeise

      Andrew Dessler
      I could care less if Dr. Lindzen’s paper was originally rejected by JGR. Just another example of pal review in my mind.
      In his February 23, 2011 letter to Dr. Schekman, Dr. Lindzen wrote, “The use of simple regression over the entire record (as in the procedure in Trenberth et al, 2010 and Dessler, 2010) is shown to severely understate negative feedbacks and exaggerate positive feedbacks – and even to produce significant positive feedback for the case where no feedbacks were actually present (viz Figure 7 and Table 1 of the revised paper). Equally important, the simple regression approach leads to extraordinarily small values of the correlation on the order of 0.02. Such values would, in normal scientific fields, lead to the immediate rejection of the results of Trenberth et al and Dessler as insignificant.”
      Maybe you should address the low correlation from your published paper or address the merits of Dr. Lindzen’s paper instead of building a strawman. Your very post above, in my humble opinion, is confirmation of the bias in peer review.

      Dessler turned and ran. He never responded. Very instructive.

    • Surprised that you would post Dessler’s ‘Chou vs Choi’ confused remark, Bart.

  43. Judith,
    Your text demonstrates pretty well that the detection and attribution processes have not been as well defined and systematic as AR4 tells us. This is a point shared by the IAC. Reading the IAC report, my conclusion was that they were criticizing the correct points but not drawing the full conclusions of their critique. Either they didn’t realize what the full logical conclusions are, or they chose to leave them out of the report, thinking that this may ultimately lead to a better outcome when others are forced to see them in the process of implementing their recommendations.

    What I mean above by the full conclusions is specifically related to the content of your arguments. They mean that there is not enough objective data, and not a sufficient basis for agreement on priors, to apply the Bayesian approach systematically and formally. Thus the requirements for presenting quantitative levels of certainty are not objectively attainable, but remain highly dependent on subjective judgments. Further, it’s not possible to combine the subjective judgments of climate scientists to form an essentially more objective overall judgment, as these views are not independent but highly influenced by each other.
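
    The prior-dependence point can be made concrete with a toy calculation (Python; the likelihood and every number here are illustrative assumptions, not values from any assessment): the same likelihood for the feedback parameter yields very different tail probabilities for climate sensitivity under a prior uniform in sensitivity versus one uniform in feedback.

        import numpy as np

        S = np.linspace(0.1, 10.0, 2000)  # sensitivity grid, deg C/doubling
        F2x = 3.7                         # assumed 2xCO2 forcing, W/m^2
        lam = F2x / S                     # implied feedback parameter

        # Toy likelihood: lambda "observed" as 1.0 +/- 0.5 W/m^2/K.
        like = np.exp(-0.5 * ((lam - 1.0) / 0.5) ** 2)

        priors = {"uniform in S": np.ones_like(S),
                  "uniform in lambda": F2x / S**2}  # Jacobian |d(lambda)/dS|
        for name, prior in priors.items():
            post = like * prior
            post /= np.trapz(post, S)
            print(name, round(np.trapz(post[S > 4.5], S[S > 4.5]), 3))

    Note that the uniform-in-S result also depends on where the grid is truncated, itself a subjective choice – exactly the kind of hidden judgment at issue.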

    You propose briefly at the end of your text that fuzzy logic could be useful. On that point I remain very skeptical. My view is that this means only leaving the problems unanswered and replacing one form of vagueness with another that is only worse and less useful as a basis for reducing the gaps in understanding. Fuzzy logic means that we accept that ignorance is different from randomness, but it means also that we don’t even try to make the best possible use of what we know anyway. The rules for combining uncertainties in fuzzy logic are not optimally efficient, and they may lead to seriously misleading conclusions.

    You mention also that your original text had a section on decision making under climate uncertainty. This issue is very much related to the whole problem. My view is that the IPCC procedures for handling uncertainty are very much influenced by the fact that the authors have had decision making, and more specifically the precautionary principle, in the back of their minds. They have formulated the text thinking that the precautionary principle justifies the conclusions with less certainty, but doubting that others would in all cases understand this well enough. Therefore they have decided to support the application of the precautionary principle by stating the uncertainties as smaller than any objective systematic analysis can support, and by using subjective judgments as a more reliable basis for quantitative conclusions than they really are.

    The biases related to circular reasoning are ubiquitous in climate science. In addition to the problems that you list in testing the climate models against climate data of the instrumental period they affect certainly also most paleoclimatic studies. These studies are based on indirect evidence that is interpreted based on storylines of the past. The data series are not thermometer readings but something influenced by a multitude of factors and the interpretation requires always many supporting assumptions and models of some related processes. This kind of analysis is extremely prone to confirmatory bias, when several attempts to extract information are usually tried, and the one used in final publication is very often the one that fits best the accepted storyline.

    All the empirical information is of value, and it tells something about the climate, but formal approaches of combining information and ending up in unique objective conclusions on uncertainties are not within our present capabilities.

    • That was an excellent summation, Pekka

      • Perhaps I should emphasize that I’m not exactly a skeptic. I see all these problems making it difficult to decide precisely how far the results of mainstream climate science should be trusted. I do still have quite a lot of trust in climate science, although less trust in the estimates of uncertainty that have been presented.

        I do accept the idea of the precautionary principle as correct, but applying the principle is far more difficult than many of its promoters have told us.

        In my view the risk of climate change has to be taken seriously, but the present policies of the UNFCCC and EU are not appropriate. All the central issues should be debated openly by the people best qualified to discuss policies, taking the scientific knowledge into account as well. IPCC has collected much of the relevant knowledge. Its weaknesses in expressing uncertainties about climate science (covered by WG1) are a problem, but they do not invalidate the scientific knowledge. There are many more problems in the further steps related to WG2, WG3 and beyond.

        The problem of formulating wise policies is really difficult, but that doesn’t mean that the right choice would be to just forget the risks.

      • The problem of formulating wise policies is really difficult, but that doesn’t mean that the right choice would be to just forget the risks.

        Had the natural global warming trend at the beginning of the last century continued, we would have a global mean temperature anomaly of about 0.8 deg C, as shown in the following graph.

        http://bit.ly/lVkSkw

        The current observed global mean temperature anomaly is about 0.4 deg C instead of 0.8 deg C.

        Is this a risk?

        The risk of AGW is not supported by the data.

    • Joe Lalonde

      Pekka,

      Science has a hard time grasping that it generated mistakes.
      Take physics: it is totally based on a perfect Universe and solar system, where formulas and mathematical equations will exist forever. The planet does not rotate except to give us night and day, and the solar system is constant.
      Changes in climate and the solar system take many years/decades/centuries. This gives false theories the life of a short-term pattern in a long-term Universe.
      This is based on laboratory observations that do not take into account many areas that need to be included.

    • Pekka, thanks very much for your analysis. Your point about decision making under climate uncertainty is spot on; that was a major point of my argument. On one level I regret taking it out of this paper, but in the end it does make sense to do this. Later this summer I am going to start working on a paper on decision making under climate uncertainty that addresses these issues.

      • Joe Lalonde

        Judith,

        Has anyone looked at time of year storm patterns?
        I know these have shifted with the shifting of the ocean heat.
        Interesting that in the northern hemisphere, in winter they travel more south to north on the east coasts, and in summer they travel more at an angle, west to east, from the west coast.

  44. To people who ask about evidence of the LIA:

    Feulner G., Rahmstorf S. (2010),
    “On the effect of a new grand minimum of solar activity on the future climate on Earth”,
    Geophysical Research Letters, 37, L05707.
    doi:10.1029/2010GL042710

    Abstract

    The current exceptionally long minimum of solar activity has led to the suggestion that the Sun might experience a new grand minimum in the next decades, a prolonged period of low activity similar to the Maunder minimum in the late 17th century. The Maunder minimum is connected to the Little Ice Age, a time of markedly lower temperatures, in particular in the Northern hemisphere. Here we use a coupled climate model to explore the effect of a 21st‐century grand minimum on future global temperatures, finding a moderate temperature offset of no more than −0.3°C in the year 2100 relative to a scenario with solar activity similar to recent decades. This temperature decrease is much smaller than the warming expected from anthropogenic greenhouse gas emissions by the end of the century.

  45. Joe Lalonde

    Judith,

    There is a huge confusion that generated a great deal of uncertainty.
    Scientists have generated perfect models and perfect science in a Universe that is totally different from the perception of the past.
    Many parameters were not included in the past, as the technology and research were still young, and at the time the conclusions sounded reasonable.
    Build on many years of passed-down teaching of the same formulas and mathematics, plus added-on theories, and you have a giant mess of science.

    Knowing what mistakes were made, and where, helps to clear away some of the uncertainty and opens up new areas of understanding.

  46. Judith, there is a word missing in the last sentence: “may very show”

  47. Actually I find this thread totally pathetic. Water vapor is mentioned only twice, and then only as stratospheric rather than atmospheric. Has Judith Curry ever read John Tyndall (1861)? Apparently not, as he rates no mention in her book cited below.

    I would have thought the co-author of Thermodynamics of Atmospheres and Oceans would have had more to say about it. I am sorry to say this, but it appears to be symptomatic of the creeping corruption of ALL climate science even from the obviously well-intentioned curryja. Back to basics please!

    • Have you bought my book or otherwise read it? Water vapor is mentioned in each of the 13 chapters (except the one on ocean thermohaline processes), and is a dominant topic in about a third of the chapters. So in terms of the basics, read my book.

  48. Yes I have, but you don’t cite Tyndall there, and on this thread atmospheric WV is absent. Bring it back, please!

    • I cited very few references in the thermodynamics text, mostly other textbooks, and the greenhouse effect plays a very small role in my book. I agree that WV is a very interesting and important topic, and I plan on addressing these issues in other threads.

  49. Teddy – He is wrong in almost all of the many comments he has made in recent threads. I doubt that any geophysicist with a recent PhD (or even a Masters) in a relevant discipline would make these errors, and at such a basic level. My point was that if someone advertises himself as a geophysicist, he is implicitly asking others to respect his level of knowledge. If Norm had not referred to himself in that manner, it would be unnecessary to discuss credentials.

  50. Matt Skaggs

    Dr. Curry,
    These threads get long fast! This is the best piece on uncertainty that I have seen so far. I just wanted to point out that the way the IPCC has handled uncertainty was the primary trigger for what I call “the rebellion of the engineers.” Many of the genuine skeptics are or were engineers, and I believe that is because we just are not allowed to get away with anything anywhere near as weak as what we find in IPCC publications with respect to uncertainty. There are systematic ways of handling hierarchical uncertainty. Admittedly, in climate science, one must eventually get to some form of Bayesian reasoning, since empirical results with carefully controlled variables are beyond reach. But that need alone puts 95% confidence beyond reach. And by the way, I suspect there will never be open debate about AGW for precisely the reason that attribution with high confidence is beyond reach. It seems that the rebellion of the engineers has waned, and climate science must open a new front to combat the rebellion of the statisticians! Meanwhile, where are the dendros?

  51. Since I always like to know a bit about who’s commenting:

    “Norm Kalmanovitch is a geophysicist with over 35 years of experience, and recently retired from Penn West, a Canadian oil and natural gas energy trust based in Calgary, Alberta.”
    http://www.thepeoplesvoice.org/TPV3/Voices.php/2009/10/20/the-way-ahead

    Robert Ellison: An Australian hydrologist. Climate page:
    http://www.earthandocean.robertellison.com.au/

    Bart Verheggen:
    “I am an atmospheric scientist working in the Netherlands (and living 1 ft below sealevel), and I am particularly interested in science communication.”
    http://ourchangingclimate.wordpress.com/about/

    Fred Moolten:
    A musician whose current interests include climate change, folk music, and politics.
    Weblog: http://tpmcafe.talkingpointsmemo.com/talk/blogs/fredmoolten/

    HTH, Pete Tillman
    Consulting Geologist, Arizona and New Mexico (USA)

    • Pete Tillman

      Your listing of “qualifications” of the various posters here does reinforce one conclusion I have drawn.

      When it comes to the logic behind the many scientific, political and economic issues relating to “global climate” which exist today, academic qualifications play a small role.

      While I agree with hardly any of the statements of Fred Moolten, it is not because he lacks academic qualifications as a climate scientist. He appears to have a very good grasp of the “science” out there (in particular the “science” that supports his position in the ongoing AGW debate) and, as a writer, he knows how to use words to express himself very eloquently.

      Pekka Pirilä has a lot of academic qualifications, which still does not mean that I agree with everything he writes.

      The same goes for some of the others you have mentioned.

      A basic grasp of technical and scientific principles, a notion of economics plus an interest in checking out the literature out there and an ability to think logically and objectively are the most important qualifications needed to engage meaningfully in this debate.

      It appears to me that this blog (Climate Etc.) has been able to attract posters who generally share these attributes, no matter which side of the debate they stand on. Many other sites are infested with trolls who use ad hominem attacks on those with whom they disagree, but I have seen very little evidence of this on Climate Etc.

      I also think that a good part of this can be attributed to our host, who has brought (mostly) interesting topics, which she has presented objectively (as opposed to dogmatically) and encouraged an open dialog.

      Max

  52. Pekka Pirilä: Finnish physicist (D.Sc. 1973), specializing in energy economics and energy systems analysis.
    http://pirila.fi/energy/about/

    • Peter – curryja provides a link above to the denizens thread where you can read about Fred the musician, as well as many other regulars who have science or engineering backgrounds.

  53. On the issue of to what extent attribution “evidence” derived from GCMs/AOGCMs (the validity of which is dependent on their climate sensitivities being realistic) can be relied on, three academics who have published extensively on climate sensitivity, Chris Forest, Peter Stone and Andrei Sokolov, wrote about GCMs in “Constraining Climate Model Parameters from Observed 20th century Changes” (Tellus A, 2008) as follows:

    “Much of the work has focused on evaluating the models’ ability to simulate the annual mean state, the seasonal cycle, and the inter-annual variability of the climate system, since good data is available for evaluating these aspects of the climate system. However good simulations of these aspects do not guarantee a good prediction. For example, Stainforth et al. (2005) have shown that many different combinations of uncertain model sub-grid scale parameters can lead to good simulations of global mean surface temperature, but do not lead to a robust result for the model’s climate sensitivity.

    A different test of a climate model’s capabilities that comes closer to actually testing its predictive capability on the century time scale is to compare its simulation of changes in the 20th century with observed changes. A particularly common test has been to compare observed changes in global mean surface temperature with model simulations using estimates of the changes in the 20th century forcings. The comparison often looks good, and this has led to statements such as: ”…the global temperature trend over the past century …. can be modelled with high skill when both human and natural factors that influence climate are included” (Randall et al., 2007). However the great uncertainties that affect the simulated trend (e.g., climate sensitivity, rate of heat uptake by the deep-ocean, and aerosol forcing strength) make this a highly dubious statement. For example, a model with a relatively high climate sensitivity can simulate the 20th century climate changes reasonably well if it also has a strong aerosol cooling and/or too much ocean heat uptake. Depending on the forcing scenario in the future, such models would generally give very different projections from one that had all those factors correct.”

    The “Randall et al., 2007” study from which the statement quoted above as “highly dubious” comes in fact constitutes the complete Chapter 8, “Climate Models and Their Evaluation”, of the IPCC AR4 Working Group 1 report.

    I think that Chris Forest et al. are exactly right on this issue, and that no reliance should be based on attribution or climate sensitivity estimates derived from AOGCM simulations. Certainly, there is considerable evidence (see, e.g., Hansen’s recent paper) that AOGCMs’ rate of ocean uptake is substantially too high in most cases, which (if the forcings used are accurate) must mean that they overstate climate sensitivity – otherwise they would not match 20th century temperature trends in the way that they do.

    • See my comment below to Dr Curry and the linked reference. Forest’s argument is correct, I believe, but does not go far enough. Because of the mathematics of the problem, even if the forcings are known exactly, it is possible to match the mean temperature series and OHC with a wide range of ECS values. Skill in hindcasting these series tells us almost nothing about the validity of estimates of sensitivity, and hence of future projections.

      • Paul
        Noted, thank you. I have accessed both your articles at Lucia’s Blackboard and will study them with interest later today.
        I have, incidentally, found using a multilayer diffusive ocean model that there is a near-complete identity in the path of the model surface temperature response to a step forcing, for the better part of a century, over a wide range of equilibrium climate sensitivities, if effective ocean diffusivity is varied to compensate. Alternatively, varying the aerosol forcing can achieve the same effect, of course.
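
        The compensation shows up even in a crude two-box energy balance sketch (Python; all parameter values are illustrative assumptions, and a two-box ocean only roughly mimics a diffusive one): a high-sensitivity case with strong deep-ocean uptake tracks a low-sensitivity case with weak uptake to within a few tenths of a degree for decades after a step forcing.

            def surface_path(ecs, kappa, years=70, F=3.7):
                # Two-box energy balance: mixed layer over a deep ocean.
                # ecs: equilibrium sensitivity to F (deg C);
                # kappa: surface-to-deep exchange coefficient (W/m^2/K).
                lam = F / ecs            # net feedback parameter, W/m^2/K
                C_s, C_d = 8.0, 500.0    # heat capacities, W yr m^-2 K^-1
                Ts = Td = 0.0
                path = []
                for _ in range(years):   # simple one-year Euler steps
                    Ts += (F - lam * Ts - kappa * (Ts - Td)) / C_s
                    Td += kappa * (Ts - Td) / C_d
                    path.append(Ts)
                return path

            low = surface_path(ecs=1.5, kappa=0.50)   # low ECS, weak uptake
            high = surface_path(ecs=4.5, kappa=2.14)  # high ECS, strong uptake
            print(max(abs(a - b) for a, b in zip(low, high)))  # ~0.25 deg C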

  54. Speaking of climate sensitivity:
    Pat Frank posted an interesting analysis at Jeff Id’s recently:
    http://noconsensus.wordpress.com/2011/05/24/future-perfect/

    — in which he suggests a sensitivity of 0.45 deg C (or less). Fully empirical negative feedback model: 0.42 C warmer by 2100, at current delta-CO2 rate.

    I didn’t see any show-stoppers in a quick read-through. It is interesting that all the empirical studies of sensitivity come in at the low end. Anyway, as a geologist, the whole idea of the climate system balanced on a positive-feedback hilltop is just too silly for words. Since the climate system has been basically stable for a billion years or more, now…. I mean, doh? Think about it, fellows…

    Cheers — Pete Tillman
    Professional geologist, amateur climatologist

    • “the whole idea of the climate system balanced on a positive-feedback hilltop is just too silly for words. Since the climate system has been basically stable for a billion years or more, now…. I mean, doh? ”

      For what it’s worth, this is pretty much the position from which I started looking into Climate “science”. Good to see it expressed by someone whose scientific credentials so greatly surpass my own!

      Since then, I still haven’t seen a satisfactory rebuttal to this position, but I’ve found an awful lot more fallacy besides.

  55. The substantial warming during the period 1910-1940 has been attributed by nearly all the modeling groups to some combination of increasing solar irradiance and a lack of major volcanic activity. The cooling and leveling off of average global temperatures during the 1950’s and 1960’s is attributed primarily to aerosols from fossil fuels and other sources, when the greenhouse warming was overwhelmed by aerosol cooling.

    Let us assume the “natural” global warming during the period 1910-1940 continued. Here is what the current global mean temperature anomaly trend would look like.

    http://bit.ly/lVkSkw

    The current observed global mean temperature anomaly is about 0.4 deg C instead of the projected 0.8 deg C.

    Why is the world talking about global warming when it has actually cooled?
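
    For concreteness, here is a minimal sketch of the extrapolation being described, using assumed round numbers rather than the actual HadCRUT3 series (a 1910-1940 trend of about 0.13 deg C per decade, continued for the seven decades after 1940):

      trend = 0.13          # deg C per decade over 1910-1940 (assumed)
      anomaly_1940 = -0.1   # deg C in 1940 (assumed)
      decades = 7.0         # 1940 to 2010

      projected = anomaly_1940 + trend * decades   # ~0.8 deg C if the trend continued
      observed = 0.4                               # deg C, the figure quoted above
      print(f"projected {projected:+.2f} C vs observed {observed:+.2f} C")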

    • Girma

      Your “woodfortrees” graph shows clearly that the warming rate in the first half of the 20th century was around twice that over the entire century, apparently telling us that the warming rate has decelerated (if, that is, we use the same “smoke and mirrors” approach the IPCC used in its infamous AR4 WG1 FAQ 3.1, Figure 1 (p.253), with trend lines drawn over ever-decreasing time periods in an attempt to demonstrate an acceleration in the warming trend).

      Max

    • Why do you start with HadCRUT temperature data for 1910 when it goes all the way back to 1850?

      http://www.woodfortrees.org/plot/hadcrut3vgl/plot/hadcrut3vgl/trend

      My guess is you start with 1910 because if you started with 1850 you would have to acknowledge that nature both cooled and warmed the globe between 1850 and 1940, and there would go your argument down the drain.


      • M. carey

        In the woodfortrees graph I posted I simply showed that “the warming in the early part of the 20th century was around twice that over the entire century”.

        You probably missed my point here when you threw in:

        Why do you start with HadCrut temperature data for 1910 when it goes all the way back to 1850?

        My guess is you start with 1910 because if you started with 1850 you would have to acknowledge that nature both cooled and warmed the globe between 1850 and 1940, and there would go your argument down the drain.

        I was paraphrasing IPCC AR4 WG1 SPM (p.5), which compared a 100-year warming trend from 1906 to 2005 with a steeper 50-year warming trend from 1956 to 2005 to claim that there had been an acceleration in the warming rate.

        This is a “bogus” comparison of shorter and longer-term trends, as is the IPCC chart in AR4 WG1 Ch.3 (FAQ 3.1 Figure 1), which carries this “bogus” comparison to extremes, in an attempt to show the same (non-existent) acceleration in warming.

        My comparison was also “bogus” – only it showed an apparent deceleration in warming rate over the 20th century.

        BOTH COMPARISONS ARE BOGUS, M carey, but one of the two is in the supposed “gold standard” AR4 climate report by IPCC while mine was just a joke to point out how silly the IPCC comparison really was.

        Have you understood this now?

        Max

      • Why do you need 1850-1910 data to show that the warming rate from 1910 to 1940 was at least as great as that of 1970 to 2000?

  56. Max

    To claim that the evidence for man-made global warming is robust when the recent warming rate is no greater than the previous “natural” one deserves physicist Hal Lewis’s characterisation of AGW:

    It is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist.

    • Here we go again with that logical fallacy. If nature has caused global warming, man can’t.

      • Logical fallacy here (straw man):

        “If nature has caused global warming, man can’t.”

        Girma is not claiming that. I don’t know any sceptic who claims that.

  57. M. carey

    You state:

    Here we go again with that logical fallacy. If nature has caused global warming, man can’t.

    There is no logical fallacy here at all. The sentence you quote is rubbish.

    There is absolutely no question that nature can change our climate. It has been doing so for millions and millions of years.

    There is also very little question that man can change local weather patterns and, hence, our climate on a local or possibly even a regional scale.

    However, there is a question whether or not man can make a significant change to the global climate by emitting CO2 into the atmosphere at the rate of a few ppmv per year.

    The jury is still out on that one, M. carey.

    Max

    • Probably not a significant change in our lifetimes, or at least mine. But the emission of CO2 into the atmosphere adds up over time. So the notion it can never make any significant change is wishful thinking.

      • M.carey

        I would not call it “wishful thinking” to conclude that human CO2 additions have not caused most of past warming and, hence, do not represent a serious potential threat to humanity and our environment.

        It is rather “doomsday thinking” to believe otherwise.

        Max

  58. I think Girma’s trying to imply it, or at least suggesting that perhaps man isn’t responsible for any or much of the recent global warming because nature has caused all the warming in the past. Look at his post on June 15, 2011 at 10:34 pm.

    • No. You are trying to shift the burden of proof. Consensus scientists claim that the “global warming” is unprecedented. It is obviously not unprecedented. Girma is just disproving that point in order to falsify the AGW hypothesis. You know, the scientific method.

      • Can you cite scientists who say recent global warming is “unprecedented” in the history of the world?

        Girma using part of the HadCRUT global temperature series (1910-2000) rather than the entire series (1850-2000) is not scientific method; it’s cherry-picking.

      • Not history of the world! Have you already forgotten about hockey sticks?

        If it’s not unprecedented, then great! We agree. We can move on to falsifying the next point of “evidence” for AGW. One by one.

      • You haven’t shown that scientists claim that the warming is unprecedented.

        Evidently you can’t.

        Now you have added another dubious claim — that the warming being unprecedented (which you can’t find a source for) is a point of evidence for AGW. But you haven’t offered any examples of scientists arguing that either.

        You seem to be addicted to these straw men; when confronted about one straw man you answer with another. How about you cite the specific statements which you claim to be referencing.

      • M carey

        You ask:

        Can you cite scientists who say recent global warming is “unprecedented” in the history of the world?

        No, but I can get pretty close.

        IPCC AR4 WG1 SPM (p.9) tells us:

        Paleoclimate information supports the interpretation that the warmth of the last half century is unusual in at least the previous 1,300 years.

        Max

      • Thanks Edim

        The issue is whether human CO2 emissions have enhanced the natural global warming. However, there is no evidence for that.

        http://bit.ly/lVkSkw

  59. The question is not whether man causes the climate to change.
    We have for millennia, and will continue to do so unless the “Time’s Up!” xenocide becomes a reality.
    The question is: will CO2 from human industry cause a climate catastrophe?
    I believe the answer is no, and I believe the claims that it will have been made without evidence to support them.

  60. If by “catastrophe” you mean wiping out mankind, probably not. Even the plague didn’t do that. What is your measure of catastrophe?

  61. “unless the “Time’s Up!” xenocide becomes a reality”

    [xeno- + -cide (as in homicide, suicide, etc.)] the killing or attempted killing of an entire alien species; the killing of a single alien. Hence xenocidal, adj.

    Read more: http://www.answers.com/topic/xenocide-1#ixzz1PTP86cwp

    What you are trying to say is idiotic; what you are actually saying is nonsensical.

    • What you are trying to say is idiotic; what you are actually saying is nonsensical.

      Xenos is the Greek word for “stranger” or “other”, although that only conveys part of the meaning. Thus xenocide implies the killing of “others”, which could include alien species but is not restricted thereto. In the sense and context in which hunter uses it, I believe it means all those “others” who do not share one’s biases/convictions. hunter can correct me if I’m mistaken. Robert doesn’t have that privilege because what he’s saying is ignorant.

      • Back so soon after your last spanking, Jimmy?

        So even with the definition of “xenocide” in front of you, you don’t know what the word means, and feel it is important to share your ignorance with the rest of us.

        Obviously, what you wish the word would mean is irrelevant.

        You lose (again).

      • lol!! I’ve never lost anything to you. Nor is it likely to happen.

        The only way you’ll ever spank me is to drag my dead body out of the coffin. And then you’d have to deal with my sons and daughter (and her husband). They’re all shooters. You wouldn’t enjoy the experience.

        BTW, your definition is incomplete– but your ignorance IS complete. Get some education – it might help you to keep up with me. But I doubt it.

  62. I think you are misusing IAC’s “imprecise” language. Here is the context from the IAC:

    Many of the 71 conclusions in the ‘Current Knowledge About Future Impacts’ section of the Working Group II Summary for Policymakers are imprecise statements made without reference to the time period under consideration or to a climate scenario under which the conclusions would be true. Consider, for example, the statement:

    In Central and Eastern Europe, summer precipitation is projected to decrease, causing higher water stress. Health risks due to heatwaves are projected to increase. Forest productivity is expected to decline and the frequency of peatland fires to increase. (High confidence; IPCC, 2007b, p. 14)

    There is no indication about when these events are expected to occur or under what conditions. What changes in climate would give rise to these results? What is assumed about adaptation? It could be argued that, given the imprecision of the statement, it has an 80 percent chance of being true under some set of circumstances.

    I would argue that in comparison, “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations” is sufficiently precise. “Most” = “more than 50%”. Observed increase since the mid-20th century (since the 1950s or so). “Very likely”: 90-99%. Could they have been more specific? Yes, but not without adding false precision (which I’m sure you would lambast them for if they had). Would you have preferred “At least 56% of the observed increase in temps since March 1961 is very likely due to human emissions of GHGs”? More precise, but not in a helpful way.

    I admit, had I been writing this section, I might have tried to place an upper bound as well as a lower bound – and my guess is that the upper bound would have been more than 100% (given that natural variability, aerosols, and land use change could all have contributed cooling from 1950 to present). This would have been important because I do not think that there is actually general agreement that “the percentages of warming each attributed to natural and anthropogenic causes is less than 100% and greater than 0%”: I think it is entirely possible, possibly even probable, that natural causes contributed less than 0%, and anthropogenic causes more than 100%, to warming since the mid-20th century. (For example, if aerosols and natural variability together offset 0.2 deg C of an underlying 0.8 deg C greenhouse warming, anthropogenic causes would account for about 133% of the observed 0.6 deg C rise.)

    • M: Actually this is not (a) precise or (b) true: ‘“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations” is sufficiently precise. “Most” = “more than 50%”. ‘

      “Most” only means more than 50% if there are only 2 factors, and even AR4 WG1 admits there are more than 2, so “most” could be as little as 40% if factors B and C account for 30% each.

      But in fact Chap. 9 of WG1, where the “most” claim originates, leaves out atmospheric water vapor ([H2O]) from its “non-human” or “natural” prime movers of radiative forcing, which would have surprised Arrhenius and Tyndall; what it asks us to believe is that [H2O] originates ONLY from rising temperature caused by CO2, and that the role of the sun in producing 99% of atmospheric water vapor is irrelevant and not a natural forcing. Moreover, sun-induced evaporation varies with cloud cover at any given time and place.

      It is true that [H2O] is not a long-lived GHG, with a residence time of perhaps only a week before it precipitates, but the average level is a major player, given that evaporation is continuous at all times and places except Antarctica. Tyndall (1861) showed in the lab that [H2O] is THE prime mover, 13 times more so than an atmosphere containing only CO2 as a forcing agent or radiation absorber. He was a real scientist, unlike any of those involved in WG1.

      The truth is the “Detection and Attribution” of climate change “mostly” to human CO2 in WG1 Ch. 9 is bogus and devoid of any scientific basis whatsoever.

      MOST temperature variation is due to natural evaporative effects, as I have shown in my forthcoming Econometrics paper (advance copy at http://www.timcurtin.com).

      • simon abingdon

        You say
        “Most” only means more than 50% if there are only 2 factors, and even AR4 WG1 admits there are more than 2, so “most” could be as little as 40% if factors B and C account for 30% each.
        and
        MOST temperature variation is due to natural evaporative effects, as I have shown in my forthcoming Econometrics paper (advance copy at http://www.timcurtin.com).

        What do you mean by MOST?

  63. In NCAR’s CMIP5 calculations, the simulated surface temperature increases faster than the observations after 1970, so that by 2005 the model anomaly is 0.4 deg C larger than the observed anomaly.

    Is this 0.4 deg C overestimate somehow related to the following projection?

    http://bit.ly/lVkSkw

  64. Dr Curry,
    Excellent article. You wrote…
    “This agreement is accomplished through inverse calculations, whereby modeling groups can select the forcing data set and model parameters that produces the best agreement with observations.” … [and] “The inverse calculations can mask variations in sensitivity among the different models. If a model’s sensitivity is high, it is likely to require greater aerosol forcing to counter the greenhouse warming, and vice versa for a low model sensitivity. Schwartz (2004) argues that uncertainties in aerosol forcing must be reduced at least three-fold for uncertainty in climate sensitivity to be meaningfully reduced and bounded. Further, kludging and neglect of ontic uncertainty in the tuning can result in a model that is over- or under-sensitive to certain types or scales of forcing.”

    The use of aerosols as a matching parameter has been well documented since Kiehl (2007). However, far less well documented is the true magnitude of uncertainty arising from the formulation of feedback mechanisms, often termed “structural uncertainty” in the IPCC reports. There are a number of statements in the IPCC report which are quite correctly qualified with respect to structural uncertainty, and yet the conclusions, very often, do not take these qualifications into account.

    I would strongly urge you to take a few minutes, before you finalise your paper, to read this article which I posted recently on Lucia’s site.
    http://rankexploits.com/musings/2011/equilibrium-climate-sensitivity-and-mathturbation-part-2/
    It shows, using the GISS-E model data as a ‘proxy’ for real-world data, that EVEN IF THE FORCING DATA ARE PREDEFINED, it is still possible to match GMST and OHC series over a very wide range of ECS values, merely by varying the order of terms accounted for in an Energy Balance Model. I believe that the implications are profound, and like all good science, this knows no friends or foes. It destroys many arguments which have been put forward by sceptics and warmists alike. Amongst other things, I believe that it calls into question Schwartz’s assertion that bounding aerosol forcing would per se meaningfully bound sensitivity.
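
    The Kiehl (2007) trade-off mentioned above can be illustrated in a few lines. Under a simple transient energy balance, dT = F_total / (lambda + kappa), the aerosol forcing a model must assume in order to hindcast a given 20th-century warming becomes more strongly negative as its ECS rises. The numbers below are illustrative assumptions, not CMIP diagnostics.

      F2X = 3.7     # W/m2 per CO2 doubling
      KAPPA = 0.7   # ocean heat uptake efficiency, W/m2/K (assumed)
      DT_OBS = 0.7  # 20th-century warming to be matched, deg C (assumed)
      F_GHG = 2.6   # greenhouse-gas forcing, W/m2 (assumed)

      for ecs in (1.5, 2.0, 3.0, 4.5):
          lam = F2X / ecs                    # feedback parameter, W/m2/K
          f_total = DT_OBS * (lam + KAPPA)   # total forcing consistent with DT_OBS
          f_aero = f_total - F_GHG           # aerosol forcing left to be assumed
          print(f"ECS {ecs:.1f} C -> required aerosol forcing {f_aero:+.2f} W/m2")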

    • Paul, very nice article, I’ve bookmarked it. And thanks for reminding me of Kiehl (2007); I will definitely add that reference.

    • I am neither a computer modeler nor a climate scientist, but from the concluding remark:

      In conclusion then, we can say that the fact that a simple model with a climate sensitivity of 1.3 deg C for a doubling of CO2 has the ability to hindcast as well as a GCM does not prove that the GCM is in error nor does it prove that the climate sensitivity is actually 1.3. On the other hand, we can also conclude that the fact that a GCM can match temperature and OHC data at any level of ECS tells us quite literally nothing about the validity of the ECS effective in that GCM. This stems directly from the mathematics of the problem. To claim superiority over any other estimate of ECS, the GCM would have to demonstrate that its estimate is better constrained by its ability to match other critical data. At the present time, the GCMs all singularly fail to do this, and hence do not form a sensible basis for assessing the likely range of ECS values.

      I drew the very simplistic conclusion a) that GCMs cannot really determine ECS, and b) that an ECS value of 1.3C hindcasts as well as any other value.

      If this simplistic interpretation is correct, does this mean (by extrapolation) that the model-based climate sensitivity estimates used by IPCC to project future temperature trends are essentially meaningless?

      That’s the way it sounds to me.

      Max

  65. I am an engineer and not a climate scientist or ‘scientist’ but I have been an active blog reader and have been educating myself and my friends relying on the likes of McIntyre, Watts, and, more recently, yourself. I am fortunate to have found this thread via Bishop Hill today because I too have been looking at AR4 Section 9 and have a question that I hope you find the time to answer.

    My basic understanding of the IPCC argument for CO2 alarm is this:
    First, we assume that there are four main climate forcings: solar irradiance, greenhouse gases, volcanos, and aerosols (presumably human-sourced). The 1970-2000 warming is attributed to CO2 because nothing else in the model can account for it. Aerosol cooling assumptions are modified as necessary to account for all non-warming periods.

    Now, what I don’t see in this discussion are feedbacks. As I understand it, for the IPCC to predict a 3 degree increase for a doubling in CO2 requires 2 degrees from feedback. The feedback temperature effects are caused by increased water vapor. However, nobody claims to be able to accurately model the temperature effects of increased water vapor.

    Since there doesn’t seem to be a way to directly incorporate accurate feedback effects into the IPCC models, how does feedback fit in the IPCC attribution argument? In other words, do they use feedback as a fudge factor in the 1970-2000 warming in a way that is analogous to their use of aerosols for cooling periods?

    Thank you for any help you can provide on solving this for me and correcting any of my misunderstandings.

    David Fura
    Engineer
    ae911truth Petition Signatory

    P.S. I noticed that you have mentioned the 60-year PDO as a parameter missing from the IPCC models. Do you also see a role for the multi-century warming trend since the LIA?

    • David, feedbacks are simulated in the global models through parameterizations of physical processes such as convection. So sensitivity is a result of the model, which responds to forcing and includes feedbacks.
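
      To illustrate the standard feedback algebra behind the “2 degrees from feedback” figure in the question above (a textbook relation, not how a GCM actually computes its sensitivity): the equilibrium 2xCO2 response is the no-feedback response amplified by 1/(1 - f), where f is the combined feedback fraction.

        DT_NO_FEEDBACK = 1.2   # deg C for 2xCO2 from the Planck response alone

        for f in (0.0, 0.3, 0.5, 0.6):
            ecs = DT_NO_FEEDBACK / (1.0 - f)
            print(f"feedback fraction {f:.1f} -> ECS {ecs:.1f} deg C")

        # A combined feedback fraction near 0.6 gives ECS ~ 3 deg C, i.e.
        # roughly 2 of the 3 degrees come from feedbacks, as the question notes.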

      • Thank you Judith.

        Do they tune the parameterizations to match the temperature record or is the parameterization based on an understanding of the physics? If the latter, then is the understanding considered complete and the parameterizations accurate?

        Dave

      • David, I found the seminars run by the UK Isaac Newton Institute a useful resource in thinking about models; see http://www.newton.ac.uk/programmes/CLP/index.html and http://www.newton.ac.uk/programmes/CLP/seminars/ . On the particular issue you raise here, have a look at Tim Palmer’s presentation of 23/8/10, http://www.newton.ac.uk/programmes/CLP/seminars/082310001.html – it explicitly looks at the issue. In a later summary presentation of the seminar series (24/11) he notes “..we do not know how to close the equations with deterministic bulk formulae and produce a model which has no significant biases against observations …. A major component of climate prediction uncertainty lies in the mathematical formulation of the closure term”.

      • Some of both. The parameterization is based on establishing the correct degrees of freedom and relations with other variables, but then there is tuning. For example, cloud parameterizations are tied to relative humidity, but the threshold relative humidity to form a cloud gets tuned to produce reasonable cloud properties.
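
        For concreteness, a minimal sketch of the kind of threshold scheme being described, loosely following the Sundqvist-style diagnostic form; the functional shape and the value of rh_crit are illustrative assumptions, and rh_crit is exactly the sort of knob that gets tuned.

          import numpy as np

          def cloud_fraction(rh, rh_crit=0.8):
              """Diagnostic cloud fraction from grid-mean relative humidity:
              zero below the tunable threshold rh_crit, rising to 1 at saturation."""
              x = np.clip((rh - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)
              return 1.0 - np.sqrt(1.0 - x)

          rh = np.array([0.70, 0.85, 0.95, 1.00])
          print(cloud_fraction(rh))                # default tuning
          print(cloud_fraction(rh, rh_crit=0.7))   # retuned threshold: more cloud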

  66. Judith Curry

    A point that keeps coming up here is “decision making under uncertainty”.

    I know that I am repeating myself here, but it seems that a logical and rational decision-making process should be followed rigorously. (A simplified diagram of this process is here.)
    http://farm6.static.flickr.com/5267/5695851735_713e9422ee_b.jpg

    We have too many people who want to re-define or even by-pass this process completely, citing the precautionary principle as a justification for a sense of urgency to act now despite all the uncertainty that still exists.

    It appears to me that this approach is motivated by an underlying fear (undoubtedly a strong emotion) of horrible consequences unless we do something now, rather than by a rational, emotion-free thought process. In some cases, it appears that this fear has even manifested itself as hysteria.

    The more rational (and hence more logical) approach is to follow the decision-making process step-by-step.

    This means we should reduce the uncertainty as much as possible before making the decision, rather than reversing the “ready, aim, fire” sequence by shooting ourselves in the foot.

    Just my thoughts on a topic I know you and many others have wrestled with.

    Max

    • Max, I hope to get back to this topic soon. But many of the uncertainties in the climate change problem are irreducible (see my original uncertainty monster post), and not acting is a decision also. So the situation is decision making under deep irreducible uncertainty

      • Act or not to act?
        Easy answer, don’t decide.
        Wait, a watching cat.
        ============

      • Hmmm, I now prefer

        Easy answer, can’t decide.

        I think I’ll go for a game of adage with the cats.
        ================

      • Judith Curry

        Thanks for your response.

        You know much better than I do that many of the complexities and uncertainties relating to our planet’s climate are irreducible (at least with today’s knowledge).

        There is no doubt, however, that we will gain more knowledge. I believe that this will not come from model simulations or interpretations of reconstructed data from selected paleoclimate events or periods, but rather from reproducible experimentation or actual real-time physical observations.

        Examples are the CLOUD experiment at CERN or the various satellite observations being made and interpreted continuously.

        So there is no doubt in my mind that we will know more about what makes our climate work in 10 years than we do today. My guess is that we should know at least 10 times as much as we do today (but that may be optimistic). I am not sure exactly what role the IPCC will play in this (if any), but that is another story.

        You stated to a US congressional committee last fall:

        Anthropogenic climate change is a theory whose basic mechanism is well understood, but whose magnitude is highly uncertain.

        and

        The threat from global climate change does not seem to be an existential one on the time scale of the 21st century even in its most alarming incarnation.

        This tells me a lot about your thoughts regarding the uncertainties and urgencies regarding global climate change.

        You have also stated here that

        not acting is a decision also. So the situation is decision making under deep irreducible uncertainty

        By “not acting” I presume you are referring to “not rushing today to implement mitigation actions whose consequences we cannot really predict” rather than “stopping all research aimed at better understanding the many uncertainties related to our planet’s climate”.

        In that case I would agree with you that the prudent and more cautious action today is to continue this research to gain more knowledge concerning the many uncertainties rather than to rush into implementing mitigation actions.

        This is also what I understood from your statement below to the committee:

        It seems more important that robust policy responses be formulated rather than to respond urgently with policies that may fail to address the problem and whose unintended consequences have not been adequately explored.

        You wrote you hoped to get back to this topic soon on a new thread. I am looking forward to that, in order to see how other posters here see this.

        Max

    • Change “trash” to “mistake” or “learning experience” or something. And if the outcome is positive then it’s not necessarily trash.

      “No experiment is a complete failure. It can always serve as a negative example” -Arthur Bloch

      • I don’t think that diagram can be saved…it’s far too simplistic in places (it’s unlikely that warming would be universally dangerous or inconsequential, same for negative or positive…there will most likely be both winners and losers) and dead wrong on item number two (if anthropogenic warming could be dangerous, so too could natural…it just changes the options for mitigation and adaptation).

      • Gene

        Agree with you that the diagram is a simplification and that there will be “winners and losers” and to the other points you made.

        But before we start to consider “mitigation” actions, we need to know that the net overall result of these actions will be beneficial. And before we know that, we will have to know: whether our climate has changed primarily due to anthropogenic forcing (as assumed by the IPCC models) or natural factors (as some others have concluded); whether the net impact of potential future human forcing of our climate (i.e. the 2xCO2 climate sensitivity) will be positive or negative, inconsequential or substantial; and whether the specific mitigation actions we propose will have any perceptible impact on our climate.

        There is a decision process here. And it can be drawn up.

        There is also a final step in our democratic society (which is not shown), which involves the ratification of the “mitigation” proposals by the voting public (who will end up paying for these) either directly or indirectly through the elected representatives.

        And there are efforts underway to by-pass this decision process and jump right into “mitigation”.

        If you have a better suggestion for how to expand this diagram to incorporate your objections, I’d be very interested in your thoughts.

        But to just write it off as “too simplistic” would be a cop-out.

        Max

      • Max,

        We’re agreed on a great many points, but there is one I’d dispute.

        That point of contention would be worrying about the net result of responses (I’m certainly not limiting responses to just mitigation). The problem here is the old saying of “stick one foot in a bucket of boiling water and the other in a bucket of ice; on average you’re comfortable”. It’s demonstrably (though I’d decline to be the test subject) false. Effects are local and concentrating on global results is misleading. Those affected by the actions need to consider the relative costs and decide accordingly. I find it hard to fault those who would be in the “winner” category for failing to take actions that would shift them into the “loser” category.

        We should also take into account those actions that may or may not be significant from a climate change aspect, but that carry collateral benefits. Promoting efficiency and conservation (in terms of avoiding waste as opposed to mandating that people learn to live with less) are good examples. Another would be better controlling our methane emissions. Regardless of whether it affects temperatures, escaping methane is wasted fuel (and a potential physical danger).

        I wrote the diagram off as simplistic because of the rather limited pathways through it. Certainly the conditions you listed are important, but their effect on the decisions is not binary. I suspect that if I tried to make a one-pager that captured the actual nuances, I’d end up with a spider’s web.

        Lastly, I’d say we’re in complete agreement that any suggestion that actions be imposed without the consent of the electorate is a non-starter. Attempting to short circuit that is wrong, regardless of the motivation.

        Gene

      • Gene

        Thanks for your post.

        Sure, the “decision process” diagram is simplified and too “binary”. But the key point is that we still have several steps in the process before we should start talking about implementing actions to “mitigate” against human-induced climate change, as some are already proposing, invoking the “precautionary principle”. Our host here has made the same point last fall in testimony before a US congressional committee.

        My point on the proposed “responses” (in this case primarily regional or global mitigation steps aimed at reducing human GHG emissions) is that they should be actionable and subjected to a cost/benefit analysis.

        By “actionable” I mean that there should be a specific course of action with a specific result. An example: The proposal by James E. Hansen et al. to shut down all coal-fired power plants in the USA by 2030, replacing them with non-fossil fuel fired plants.

        A cost/benefit analysis of this proposal shows that it would theoretically reduce global warming by year 2100 by 0.08°C at an investment cost by 2030 of $1.5 trillion.

        The WWF has recently come up with an energy proposal to phase out essentially all fossil fuel (plus nuclear) power generation by 2050, replacing this with renewable sources (primarily solar and wind). I have calculated that this proposal will involve a global investment of around $13 trillion by 2050 in new solar plus wind installations plus gas-fired standby plants (to cover the 70+% of the time, when there is no sun or wind). The proposal will reduce global temperature by 2100 by around 0.6°C. [WWF have claimed that the long-term costs could be offset by energy savings in all areas, but this appears to be based largely on wishful thinking and, at any rate, no discounted cash flow analysis was made to include the investment cost, nor was any estimate provided for the amount of global warming that would be averted.]

        A third specific proposal I have seen was in an article by Rutt Bridges on an earlier thread here:
        “Economic Challenges for Carbon Capture-Storage and the Role of Natural Gas.”
        This article raised several open technical questions regarding the safety of CO2 sequestration schemes. In addition, a cost/benefit analysis showed that this proposal would achieve hardly any reduction in warming at very high investment cost.

        So far, all the specific actionable proposals suffer from this same basic problem: they provide essentially no “bang” for a lot of “bucks”. On the figures above, the investment cost works out to roughly $2 trillion invested now per tenth of a degree of warming theoretically averted by year 2100.
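
        A quick check of that arithmetic, taking the cost and warming-averted figures quoted above at face value (they are this comment’s estimates, not vetted numbers):

          proposals = {
              "Hansen US coal phase-out": (1.5e12, 0.08),  # $ invested, deg C averted by 2100
              "WWF renewables plan":      (13.0e12, 0.60),
          }
          for name, (cost, averted) in proposals.items():
              per_tenth = cost * 0.1 / averted
              print(f"{name}: ${per_tenth / 1e12:.1f} trillion per 0.1 deg C averted")
          # Both work out to roughly $2 trillion per tenth of a degree averted.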

        What I do not consider to be an “actionable” proposal is a blanket pledge by a political leader to “reduce carbon emissions of his/her nation to X% of the level they were in year Y by year Z.” Even worse is a pledge to “hold global warming to no more than 2°C”. These statements are no more than hollow political posturing.

        The silliest of all is to consider imposing direct or indirect carbon taxes. These will have no impact on our climate (no tax ever did).

        Shuffling “guilt payments” around to the underdeveloped world while at the same time blocking these nations from developing a reliable, low-cost energy infrastructure to pull themselves out of abject poverty is hypocritical nonsense.

        You point out that we should promote improved energy efficiency, reduced waste and real pollution abatement. I fully agree. These steps are obvious “no-brainers”.

        I also agree fully with your statement that adaptation actions will, by definition, be local or regional. These should be considered and implemented if and when it appears that they will become necessary. As you say, there will be “winners” and “losers” from a slightly warmer climate, so many regions will need no measures at all.

        I also believe (as you do) that, in a democratic society, it will eventually be up to the voting taxpayer and consumer to decide whether or not he/she wishes to make the “investment” in order to see the “benefit”. Whether this ratification is done by open referendum or through the representative government makes little difference.

        So it appears that we are in general agreement on all points.

        Max

      • So it appears that we are in general agreement on all points.

        Max,

        Indeed it does.

        I think your comment above would serve as the outline for a pragmatic way forward. I’d be curious to see who agrees and who considers it “delaying meaningful action”.

  67. There appears to be a simple way to demonstrate that AR4’s attribution statement is unreasonable. AR4’s estimate of anthropogenic forcing is 0.6-2.4 W/m2 (95% ci). Most of this forcing occurred in the 20th century. AR4’s estimate of equilibrium climate sensitivity is 2-4 degK/2xCO2 (90% ci), or about 0.45-0.90 degK per W/m2. Multiplying gives an equilibrium temperature change of 1.3 ± 1.2 degK (95% ci). We have experienced a transient temperature change that is some fraction of this expected equilibrium change. Whatever this fraction may be, the confidence interval will be much too large for attribution.

    How does the AR4 obtain a much smaller uncertainty? They use a group of climate models – characterized as “an ensemble of opportunity” in AR4 – that don’t reflect the full range of uncertainty in our knowledge of climate sensitivity. They use forcings with no uncertainty. In some cases, they apparently let the modelers input smaller forcings into models with high sensitivity and larger forcings into models with low sensitivity.

    Large uncertainty in climate sensitivity and anthropogenic forcing makes accurate calculations of past or future warming impossible.
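
    A minimal Monte Carlo sketch of that multiplication (the distributional choices, independent lognormals fitted to the quoted intervals, are assumptions made for illustration, not AR4’s method):

      import numpy as np

      rng = np.random.default_rng(0)

      def lognormal_from_interval(lo, hi, z, n=100_000):
          """Lognormal samples whose central interval, +/- z sigmas in log space, is (lo, hi)."""
          mu = 0.5 * (np.log(lo) + np.log(hi))
          sigma = (np.log(hi) - np.log(lo)) / (2.0 * z)
          return rng.lognormal(mu, sigma, n)

      forcing = lognormal_from_interval(0.6, 2.4, z=1.96)         # W/m2, 95% ci
      sensitivity = lognormal_from_interval(0.45, 0.90, z=1.645)  # K per W/m2, 90% ci
      dT_eq = forcing * sensitivity                               # equilibrium warming, K

      print(np.percentile(dT_eq, [2.5, 50.0, 97.5]))
      # The 95% interval spans roughly a factor of five: far too wide to
      # attribute "most" of the observed warming with high confidence.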
