What climate sensitivity says about the IPCC assessment process

by Rud Istvan

If climate sensitivity is high, then modest GHG increases cause significant warming. If it is low, then significant GHG increases will not. Analysis of the IPCC assessment of sensitivity provides another window into the ‘government-climate research’ complex and its propensity to overstate future warming, misrepresent findings, and dismiss challenging evidence.

Equilibrium climate sensitivity (ECS) is the total eventual temperature response of surface temperature to a radiative perturbation. For anthropogenic global warming (AGW), the principal perturbation is from rising atmospheric CO2 concentration. The IPCC comparison norm is a doubling of CO2 (roughly the range of scenario SRES A2), the so-called ΔT2x. Doubling CO2 produces a direct forcing increase of 3.7 W/m2 according to the IPCC Third Assessment Report (TAR). The zero-feedback response ΔT0 can be calculated using the Stefan-Boltzmann law, and has canonically been taken as 1°C. [1]
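
As a quick check on that canonical number, here is a minimal Python sketch of the Stefan-Boltzmann calculation (my own illustration, not from the post). It assumes the textbook effective emission temperature of about 255 K and the TAR 3.7 W/m2 doubling forcing; the exact figure shifts a little with the ‘black’ versus ‘grey’ Earth choices discussed in endnote 1.

```python
# Minimal sketch: zero-feedback (Planck-only) response to doubled CO2.
# Assumes an effective emission temperature of ~255 K and the TAR 3.7 W/m^2 forcing.
SIGMA = 5.670e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0                     # effective emission temperature, K
F_OUT = SIGMA * T_EFF**4          # outgoing longwave flux, ~239 W/m^2

# Linearizing F = sigma * T^4 gives dT/dF = T / (4F), the no-feedback lambda_0
lambda_0 = T_EFF / (4.0 * F_OUT)  # ~0.27 degC per (W/m^2)
dT_2x = lambda_0 * 3.7            # ~1.0 degC for doubled CO2

print(f"lambda_0 = {lambda_0:.2f} C/(W/m^2), deltaT_2x = {dT_2x:.2f} C")
```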

GCMs simulate multiple positive and negative feedbacks such as water vapor, lapse rate, and clouds. ECS is therefore an emergent property of the overall climate system, and of its models. Different feedbacks operate on different time scales. Since there can be no certainty about the future of any complex nonlinear dynamic system, ECS may be described as a probability density function (PDF). Substantial work since IPCC AR4 has gone into better PDF estimation (e.g. using improved Bayesian approaches). Recent research suggests that the high-ECS ‘fat tail’ of IPCC AR4 is extremely unlikely, and that ΔT2x is not more than 4°C with 95% confidence.[2]

ECS provides an important check on overall GCM realism. IPCC AR4 WG1 9.6 describes several ways climate observations can be used to constrain (i.e. ‘quality control’) GCM ECS. There is a revealing incongruity in the last paragraph of summary 9.6.4: “Results from studies of observed climate change and the consistency of estimates from different time periods indicate that ECS is very likely larger than 1.5°C with a most likely value between 2°C and 3°C. …Nevertheless, constraints from observed climate change support the overall assessment that the ECS is…a most likely value of approximately 3°C.” Which is the same value essentially since the 1979 Charney Report. The AR4 consensus selected the top value from the most likely range, perhaps because the IPCC AR4 GCMs mean ECS is 3.2. [3] This suggests consensus ‘high’ bias in several interesting ways.

The simplest energy balance description of ECS is a climate sensitivity parameter λ such that

ΔT = λ * ΔF

where ΔT is the change in equilibrium temperature in °C, the sensitivity λ is a constant expressed in °C/(W/m2), and ΔF is the change in radiative forcing in W/m2. The zero-feedback value λ0 is about 0.3°C/(W/m2).[4]

The other simple way to calculate climate sensitivity is by netting all positive and negative feedbacks into a single fraction ƒ applied to ΔT0; ƒ plays the role of the climate system’s amplifier feedback gain.[5]

ΔT = ΔT0 / (1-ƒ)

Net positive ƒ amplifies ΔT nonlinearly, while net negative ƒ damps the response toward zero. Because this model of ECS diverges as ƒ approaches 1, ƒ must be observationally bounded well below 1.

The gain parameterization can be rearranged for an ECS ‘S’:

S = ΔT/ΔT0 = 1/(1-ƒ)   (numerically equal to ΔT, since ΔT0 = 1°C)
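
To make the divergence near ƒ = 1 concrete, here is a small sketch (my own illustration) tabulating S = 1/(1-ƒ) for a few net feedback values; ƒ ≈ 0.67 reproduces the consensus S = 3, while ƒ ≈ 0.44 corresponds to the S ≈ 1.8 values that appear in the observational studies discussed below.

```python
# Sensitivity S = 1 / (1 - f) for a range of net feedback fractions f.
# Shows the damping for negative f and the blow-up as f approaches 1.
def sensitivity(f: float) -> float:
    """ECS in units of the no-feedback response (numerically degC when dT0 = 1 C)."""
    if f >= 1.0:
        raise ValueError("the simple gain model breaks down for f >= 1")
    return 1.0 / (1.0 - f)

for f in (-0.5, 0.0, 0.44, 0.67, 0.8, 0.9):
    print(f"f = {f:+.2f}  ->  S = {sensitivity(f):.1f}")
```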

The consensus 4th IPCC value for SRES A2 is ΔT of 3.4°C for the 21st century. [6] SRES A2 has CO2 slightly more than doubling, specifically from about 380ppm in 2000 to about 860ppm in 2100.[7] That is about a (860/380)=2.26 times increase in CO2 over the present atmospheric concentration.

Linear approximation then gives an AR4 consensus of ([2.0/2.26] * 3.4) ≈ 3°C for a doubling of CO2. S = 3 corresponds to ƒ = (2/3) ≈ 0.67, and λ = (3/3.7) ≈ 0.8°C/(W/m2).
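
The same back-calculation written out as a short sketch, using only the linear scaling above (footnote 8 notes that Fiedler gives a more precise method; CO2 forcing is closer to logarithmic in concentration, so this is an approximation):

```python
# Back out the AR4 consensus doubling sensitivity from the SRES A2 numbers in the text.
CO2_2000, CO2_2100 = 380.0, 860.0   # ppm, per the EPA summary cited in endnote 7
DT_A2 = 3.4                         # degC, AR4 21st-century warming for SRES A2
F_2X = 3.7                          # W/m^2, TAR forcing for doubled CO2

ratio = CO2_2100 / CO2_2000         # ~2.26 times the year-2000 concentration
S = DT_A2 * (2.0 / ratio)           # linear scaling to a pure doubling, ~3.0 degC
f = 1.0 - 1.0 / S                   # net feedback implied by S = 1/(1-f), ~0.67
lam = S / F_2X                      # lambda in degC per (W/m^2), ~0.8

print(f"S ~ {S:.1f} C, f ~ {f:.2f}, lambda ~ {lam:.2f} C/(W/m^2)")
```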

I use ECS point estimates while the IPCC dealt in PDFs and ‘likely’ values. WG1 10.5.1 Box 10.2 Figure 2 has the most likely IPCC S (green) at 3 (P(0.5)=3) and the GCM S (blue) at 3.2. I just calculated that the IPCC’s consensus ECS must be S = 3. WG1 9.6.4 gave the best estimate as 3, yet 9.6.4 also said the most likely value was between 2 and 3, which the AR4 10.5.1 graphic definitely does not show. In a lawyerly way, we have again just shown the consensus ‘high’ bias, and a bit of graphical misrepresentation.

S (and translation to/from λ[8]) can be used to evaluate the overall net GCM feedback ƒ. Modeled values of ECS should correspond to values inferred from observation.

There are at least three ways that S can be estimated:

  1. Bayesian GCM
  2. Inferred ƒ
  3. Inferred S

There may be studies in addition to those cited below where this has been or can be done. Please email them, as I wish to avoid selection bias. There are also papers that are very tempting, but which cannot fairly be used. [9]

Bayesian GCM

Annan (footnote 2) showed that uniform priors do not take full advantage of actual climate knowledge, and result in greater probability of a high S.

Annan published his resulting PDFs in his Figure 2. He did not discuss the resulting most likely S of 1.8-1.9. Enlarge the figure digitally to verify this; dotted is the prior, solid is the posterior. These values are well below 3, and outside the AR4 9.6.4 most likely band. The paper gives a long discussion of the method and how it constrains unrealistically high values, but says nothing about what the improved estimation otherwise implies for the consensus S. (I read the paper twice to make sure.)

Using a blend of uniform and informed priors with the Bern 2.5D climate model, Tomassini found S of around 2 to 2.3.[10] Case f is with (dashed) and without (solid) ocean heat (Levitus 2005) in the prior.

Using expert priors, Aldrin estimated a mean S of 2°C with a mode of about 1.9.[11] The resulting PDF is shown in that paper.

A possible criticism is that Aldrin uses a simpler climate model than those above.

Inferred ƒ

Forster’s study of ERBS radiation budgets estimated the observed net climate feedback parameter at 2.3 W/m2 per °C.[12] Against the 3.7 W/m2 forcing from doubled CO2, this implies S = 3.7/2.3, about 1.6.[13] A much more sophisticated evaluation of this paper reached essentially the same conclusion using very different methods.[14]
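
For reference, the standard conversion from a feedback parameter Y (in W/m2 per °C) to a sensitivity is S = F2x/Y. A short sketch using the numbers above (assuming that reading of the 2.3 figure), which also shows the implied λ and net feedback fraction:

```python
# Convert an observed climate feedback parameter Y (W/m^2 per degC) to sensitivity.
# Y = 2.3 is the Forster and Gregory estimate discussed above.
F_2X = 3.7          # W/m^2, forcing for doubled CO2
Y = 2.3             # W/m^2 per degC, observed net feedback parameter

S = F_2X / Y        # ~1.6 degC per doubling
lam = 1.0 / Y       # ~0.43 degC per (W/m^2)
f = 1.0 - 1.0 / S   # implied net feedback fraction, ~0.38

print(f"S ~ {S:.1f} C, lambda ~ {lam:.2f} C/(W/m^2), f ~ {f:.2f}")
```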

This is important because GCMs are not involved. The authors said,

“If our results are accurate, it could mean either that there is little or no positive water vapor feedback and a neutral cloud feedback or it could imply that the longwave cloud feedback is negative… [compared to GCMs]”. Both suppositions can be shown to be directionally correct compared to observation, since all GCMs have roughly constant upper-tropospheric relative humidity (UTrH) and positive cloud feedback. (Both are at variance with observed trends since 1975. The directionally correct answer is lower but still positive water vapor feedback and neutral to negative cloud feedback.) Climate Etc. followers also know this important paper’s result was improperly ‘transformed’ by AR4 to misleadingly indicate higher sensitivity and a worrisome ‘high S’ fat tail.

A different study of radiative satellite data found the observed imbalance was 0.9 W/m2.[15] The 4th IPCC’s consensus net imbalance is +1.6 W/m2. Using the same reasoning as for Forster, the inferred S is about 1.7. A potential criticism is this study’s short observational period.

Inferred S

Chylek used the Vostok ice cores to estimate that λ was about 0.49, and that S was 1.3-2.3 with 95% confidence.[16] The analysis covered both a warm-to-cold period and a cold-to-warm period, each thousands of years long and capturing all feedbacks. It followed their paper using satellite data to infer a similar ECS.[17] Chylek’s study was criticized in a comment by Hargreaves and Annan for using single paleoclimate starting and ending points.[18] Hargreaves and Annan argued that the accepted method was to use period averages, did so themselves, and found S = 2.4. They then said, “The resulting climate sensitivity is in line with most energy balance analyses of paleoclimate data”. That is simply not true. The comment specifically referenced Hansen 1993 [19], which says ice age paleoclimate gives S = 3. So does Knutti’s review, below (footnote 27), available prior to the comment. Even more telling, Hargreaves and Annan closed with: “The authors have not presented any significant evidence to challenge existing estimates of climate sensitivity”. Yet the comment’s own recalculation is such evidence, a fact it ignores. Hargreaves and Annan also ignore their own Bayesian finding of S ≈ 1.9. Chylek published a 2009 open letter to the climate research community saying it had “substituted the search for truth with an attempt to prove one point of view.” [20]
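
Under the same linear approximation used throughout (footnote 8), a published λ translates to S simply as S ≈ λ × 3.7. A quick consistency check of these numbers (my own sketch, not from either paper):

```python
# Consistency check: convert published lambda values to S under the linear approximation.
F_2X = 3.7                                        # W/m^2 forcing for doubled CO2
lambdas = {
    "Chylek (Vostok)": 0.49,                      # degC per (W/m^2)
    "Schwartz (corrected, discussed below)": 0.51,
}

for name, lam in lambdas.items():
    # 0.49 gives ~1.8, inside Chylek's 1.3-2.3 range; 0.51 gives ~1.9
    print(f"{name}: S ~ {lam * F_2X:.1f} C")
```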

Schmittner’s analysis of paleoclimate data from the last glacial maximum estimates a median sensitivity of 2.3°C and constrains the likely range to 1.7-2.6.[21] There is now the usual critique on grounds that the findings don’t agree with the consensus value of S = 3, and so are suspect.[22] The 2011 ‘consensus’ paper cited in support of this view[23] gives improper values and has many things biased ‘high’. As a specific example, the forcing for doubled CO2 is given as 4 W/m2 when the IPCC TAR value is 3.7. As another specific example, the peak Eemian temperature is shown as 1°C above the present when other proxies have it regionally as much as 6°C higher, and Vostok shows 3°C.[24] (Yet again showing proxy selection bias.) They even criticize Schmittner for only going from cold to warm (like AGW itself).

Schwartz used ocean heat data from Levitus (2005) to calculate S observationally (and, as it turned out, erroneously) as 1.1°C ± 0.5, without reliance on GCMs.[25] The period covered was 125 years. The finding was so surprising that he critiqued several possible error sources himself.

This provoked three responses. One pointed to ocean data quality to basically dismiss the result. Levitus (2009 and 2012) continues to refine the ocean heat data; those refinements do not alter the corrected conclusion of this paper.

The second pointed out a problem in the calculation of relaxation time, and offered a corrected version (that also agrees with other findings). Schwartz republished with the correction and found λ = 0.51 and S = 1.9. That is how good science should work.

The third was a response from Foster et al. (with Mann as senior author).[26] They plugged Schwartz’ observational findings into the GCM GISS-ER, and reproduced his results! So they argued Schwartz’ data were wrong, because “this model is known to exhibit a true equilibrium climate sensitivity of 2.7 C under doubled CO2 conditions.” They then did a similar thing for his time scale parameter using a 14-GCM ensemble. They said, “the estimates of time scale produced by this method are generally unrealistically low in comparison to the known behavior of the models in response to changes in GHG forcing.” They in effect said twice in one comment that the GCMs are trustworthy, and evaluation of 125 years of actual climate observations isn’t. Schwartz’ reply says much: “It is questionable whether measurements should be rejected because they do not agree with models.”

Finally, the comment referenced a (then new) sensitivity review by Knutti, saying Schwartz’ result was inconsistent with other observational data.[27] Knutti’s review has the most likely S from a century of instrumental observation at about 2.3, and from the last millennium’s proxies at about 2.1. Only the review’s paleoclimate data supported S = 3. The new Chylek (corrected) and Schmittner paleoclimate work suggests S = 2.3-2.4. All three classes of observations are closer to Schwartz’ corrected 1.9 than to the consensus value of S = 3. It apparently never occurred to these commenters that the review’s observational judgments implied that GCMs are oversensitive.

If you have been checking footnotes, you know I have excluded Lindzen and Choi’s corrected 2011 paper, which finds S = 0.7. Sea surface temperature is not just a response to radiative imbalance; it is also a function of ocean oscillations like the PDO and AMO. Over most of their observational period, the AMO was in a cold minimum and the PDO was declining. Both would (if uncorrected) lead to underestimating S, which appears to have happened to some degree.

Conclusion

GCMs are oversensitive.  IPCC future warming forecasts are too high. IPCC AR4’s own sensitivity discussion irrefutably illustrates this ‘high’ bias.

The consensus places unshakable faith in simulation models that provably do not correctly reproduce the two most important feedbacks. For both of the principal atmospheric feedbacks there is a ‘high’ bias in simulation compared to observation, which collectively explains why S is also too high. Forster and Gregory pointed this out half a decade ago. Don’t damn the data. Fix the models.

JC comment:  Rud Istvan emailed me this post several days ago, we have gone back and forth on this several times and I have done some light editing.   Rud is author of the forthcoming book Arts of Truth.  This is a guest post, and the views presented here are those of Rud Istvan.


ENDNOTES

[1] For details see for example, CO2 no feedback sensitivity posted on Climate Etc 12/11/10. This is not settled. Fiedler derived 1.1°C in METR 5223, GCM Sensitivity Analysis, School of Meteorology, University of Oklahoma (2009), hereinafter Fiedler.  4th IPCC WG1 8.6.2.3 says GCMs calculate 1.2°C, which is a possible red flag. Differences involve ‘black’ versus ‘grey’ Earth. I use 1°C here for simplicity even though it may slightly overstate S.

[2] Annan and Hargreaves, On the generation…of probabilistic estimates of climate change, Climate Change 104:423-436 (2011)

[3] AR4 WG1 8.6.2.3 table 8.2. The median is also 3.2. The mode is 3.4.

[4] Cess (1990) at pubs.giss.nasa.gov

[5] Nothing new. See, for example, Torn and Harte, Missing Feedbacks, Asymmetric Uncertainties, Geophys. Res. Lett. 33: L10703 (2006). Fiedler, GCM Sensitivity Analysis (METR 5223), University of Oklahoma School of Meteorology, (2009). Lindzen and Choi, On the Observational Determination of Climate Sensitivity, Asia-Pacific J. Atmos. Sci. 47: 377-390 (2011). Curry, chapter 13 galley proofs (Thermodynamic Feedbacks), 2012.

[6] 4th IPCC Table SPM 3

[7] EPA.gov/climatechange/science/future

[8] Fiedler gives a precise method. The text uses simpler linear approximation.

[9] For example, Murphy et al., An observationally based energy balance, J. Geophys. Res. 114: D17107 (2009). The authors expressly said, “the values used in this paper should not be interpreted in terms of equilibrium climate sensitivity.” Fair enough. (They followed the methods of Forster, but with a different goal.) But if one unfairly did, it would be about ([1.25/1.6]*3) an S of 2.3.

[10] Tomassini et al., Robust Bayesian Uncertainty Analysis of Climate System Properties, J. Climate 20: 1239-1254 (2007).

[11] Aldrin et al., Bayesian estimation of climate sensitivity, Environmetrics doi:10.1002/env.2140 (2012). See also the presentation from Norsk Regnesentral at nr.no/~Aldrin (2010)

[12] Forster and Gregory, The Climate Sensitivity…,J. Climate 19: 39-52 (2006)

[13] Fiedler’s method via λ calculates 1.5.

[14] Nic Lewis, IPCC’s Alteration of Forster and Gregory… posted on Climate Etc 7/5/2011

[15] Trenberth et al., Earth’s Global Energy Budget, Bull. Am. Meteor. Soc. 90: 311-323 (2009)

[16] Chylek and Lohmann, Aerosol radiative forcing and climate sensitivity…, Geophys. Res. Lett. 35: L04804 (2008)

[17] Chylek et al., Limits on Climate Sensitivity derived from recent satellite and surface observations, J. Geophys. Res. 112: D24S04 (2007)

[18] Hargreaves and Annan, Comment on CL08, Clim. Past. 5:143-145 (2009)

[19] Hansen et al., How Sensitive is the World’s Climate? available at pubs.giss.nasa.gov

[20] His letter is available, for example, at thegwpf.org/opinion-pros-s-cons/218

[21] Schmittner et al., Climate Sensitivity from …Reconstructions, Science 334: 1385-1388 (2011)

[22] See, for example, Skepticalscience/schmittner.

[23] Hansen and Sato, Paleoclimate implications… available at pubs.giss.nasa.gov

[24] For an excellent book length detailed overview compared to Hansen’s 2011 paper, see Prof. Antón Uriarte’s new e-book, Earth’s Climate History (2011). Worth every penny.

[25] Schwartz, Heat Capacity, Time Constant, and Sensitivity of Earth’s Climate System, J. Geophys. Res. 113: D15195 (2008) [corrected]

[26] Foster et al., Comment on …Schwartz, J. Geophys. Res. 113: D15102 (2008)

[27] Knutti and Hegerl, The Equilibrium Sensitivity…, Nature Geoscience/ngeo3387 (2008)

204 responses to “What climate sensitivity says about the IPCC assessment process”

  1. “If climate sensitivity is high, then modest GHG increases cause significant warming. If it is low, then significant GHG increases will not.”

    The premise is false. Significant warming has occurred and is continuing, and even a very low climate sensitivity implies further significant warming going forward.

    • David Wojick

      Significant is not a scientific term, unless you mean statistically significant. Moreover, given the significant differences between the various methods for estimating global temperatures, you need to specify your method and time period. Otherwise “significant warming” is meaningless hype.

      I prefer UAH as the best instrumental record, vastly superior to the Jones type surface statistical models. The only significant warming UAH shows over the last 32 years is a single step up coincident with the big 1998-2000 ENSO cycle. This looks like a small abrupt event, with no evidence of GHG warming over the entire period. CO2 sensitivity thus appears to be zero, making your claims incorrect.

      • Wow, rebuttal of the day:

        “That premise isn’t false! It can’t be true or false because it’s meaningless!”

Your perspective is welcome, but I’m going to stick with the commonsense definition of “significant,” in which case the premise is simply false.

      • Who are you quoting? I said nothing about premises. Try replying to what I say when you reply, instead of fabricating quotations.

      • Robert should contemplate the “fountain of energy” Copernicus discovered at the core of the Solar System:

        1543: http://tinyurl.com/7qx7zxs

        NASA: http://tinyurl.com/d2eesm

Facing trial in 1633 for promoting the 1543 discovery, Galileo insisted the “fountain of energy” is simply part of the Divine order that persistent observers can detect.

        http://www.pbs.org/wgbh/nova/physics/galileo-sobel.html

        Space-age research showed this “fountain of energy” created our elements, controls Earth’s climate and sustains life, including the Divine rights and creative talents acknowledged in the US Declaration of Independence in 1776

        http://www.ushistory.org/declaration/document/

        The confrontation between creativity and authority was:

        Reality versus Religious Dogma in 1633
        Reality versus Scientific Dogma in 2012

Today world leaders, Al Gore and the UN’s IPCC are playing the role the Austrian Emperor played in 1782 when he suggested the composition by Mozart had “too many notes.”

        http://tinyurl.com/bnhuom2

        – Oliver K. Manuel
        Former NASA Principal
        Investigator for Apollo
        http://www.omatumr.com or
        http://omanuel.wordpress.com/about/#comment-284

      • Steven Mosher

david. The physics used to calculate the temperature for UAH is called radiative transfer. That physics says that doubling CO2 will add 3.7 watts to the budget.

If you rely on UAH you accept as a logical consequence all the math and physics required to calculate those temperatures. That physics says doubling CO2 will add 3.7 Watts. You are disagreeing with yourself.
That physics is used to design and build the instruments of war that protect your freedoms. You are un-American and subversive.

      • 3.7 watts of physics doesn’t add catastrophe to the human future. We are stumbling slowly into a refrigerator. More CO2 and more warmth will stave off catastrophe.

        The alarmists are merely stumbling backwards into the refrigerator. Poor Ol’ Jeff Dog.
        ============

      • Steven Mosher

        I have no issue with you kim. you are at least consistent. but david uses the results of radiative physics to argue against radiative physics

      • Steven, I have no argument with the radiative physics. You have consistently misunderstood this. My point is (1) that physics is just part of a complex nonlinear system, such that the radiative forcing need not actually cause warming, and (2) the UAH data shows that it does not cause warming. AGW is therefore falsified by observation, for now anyway. It is very simple.

        There is nothing odd about this. Nonlinear systems often respond to a forcing in ways other than that suggested by the forcing, including by nullifying the forcing. This seems to be what the climate system is doing. The scientific question is then how it does it? But to pursue that question the community first has to admit that the forcing has not caused any warming. Not likely, alas.

      • ” radiative forcing need not actually cause warming,”

        Did this guy actually go to school?
        Grade: F-

      • Webster that is not as stupid as you make it sound. Heat capacity may change, but temperature does not have to change. By “warming” he is most likely referring to temperature.

        A good example is every year the Southern hemisphere gets over 40Wm-2 more forcing, but the oceans and clouds quickly respond to limit temperature. There are a variety of thermostats.

        Oh, never mind. You only consider one phase change as a limit.

      • Steven/David/Kim,

        If you accept the assumptions of any particular dogma, including

        a.) The SSM (standard solar model) of the Sun’s steady output and Earth’s steady input of heat, and

        b.) The physics of radiative transfer in Earth’s atmosphere,

        You will be led to the conclusion promoted by that dogma.

        The basic problem: Assumption a.)The SSM (standard solar model) of the Sun and the steady input of energy into Earth’s atmosphere have been repeatedly falsified by:

        c.) Precise space-age isotope measurements on samples from the Sun, solar wind, solar flares, Earth, Moon, meteorites, Jupiter and Mars [1], and

        d.) Observations of hidden magnetic portals connecting Earth to the stormy Sun itself, solar flares squirting from rigid, iron-rich structures beneath the fluid photosphere, and stars forming on pulsar remains of older stars:

        _ Earth/Sun video: http://tinyurl.com/bnvdrwo
        _ Solar flare video: http://tinyurl.com/dynchr4
        _ Formation of star: http://tinyurl.com/373697v
        _ Solar flare photo: http://tinyurl.com/d2eesm

        Reference: 1. ”Neutron Repulsion”, The APEIRON Journal 19, 123-150 (2012)

        http://redshift.vif.com/JournalFiles/V19NO2pdf/V19N2MAN.pdf

        With kind regards,
        – Oliver K. Manuel
        Former NASA Principal
        Investigator for Apollo
        http://www.omatumr.com

      • thisisnotgoodtogo

        ” ” radiative forcing need not actually cause warming,”

        Did this guy actually go to school?
        Grade: F-”

        It appears that you, WHT, are indulging in efforts to misinform.

      • Again, how can an El Nino permanently warm the earth? It’s an obstinate cycle. In the ~1998 cycle heat left the ocean and the temperature of the atmosphere went way up: hottest year in the record and by a fairly wide margin. Then that heat dissipated into space. It can’t come back. Relaxed back to trend. Gone, long gone. After 1998 it appears to me UAH shows the atmosphere continued to gradually warm from its pre-1998 temp.

      • No JCH, after the big ENSO (not the El Nino) the trend is again flat, as it was before, but at a higher temperature. The pattern is a simple step function. It needs to be explained, but first it must be accepted.

      • An inclined sinusoid will produce steps.

      • Flat, step, flat can be produced by a gradual warming plus noise.

      • David Wojick

        AJ & Nil, these are interesting conjectures, and anything is possible, but I see no evidence for them. Tamino (Foster) has actually produced a conjecture where natural noise has by coincidence completely masked the gradual warming. It just looks like theory saving to me.

This is what is wrong with over-investing in UAH. It starts in a period of El Nino and neutral conditions and then more El Nino, so it started a bit hot, and that has skewed the long-term trend for the series.

        If UAH had started in the early 1970s, its trend would be very close to that of GISS, which is pretty much what UAH’s post-1998 trend has been.

      • Not sure what you are saying. There is no physical trend in UAH, just a single step function. The physical pattern is flat-step-flat. The mathematical upward trend is due to the step. Plus of course it is an oscillator.

      • Your steps are a mirage. You are deceived by when the series starts. You have fooled yourself. The planet is gradually warming.

      • David Wojick

        Actually I start the first flat line at the first point where the temperature is close to the average, so as to avoid being biased by a partial oscillation. It is very flat from that beginning until 1997. This was well known at the time, but people forgot about it when the big 1998 El Nino hit.

      • Add a UAH trend line from 1982 to present.

Look at the three trends at the top of the graph:

        Brown – Giss, 1970 to present
        Yellow – the UAH trend
        Green – UAH, 1982 to present

        It is almost identical to the long-term GISS trend. If you want to know what UAH would have looked like if it had started in 1970, that is exactly what it would have looked like. UAH and GISS are remarkably alike.

    • MattStat/MatthewRMarler

      Robert: The premise is false.

      Which premise is false?

      Significant warming has occurred and is continuing, and even a very low climate sensitivity implies further significant warming going forward.

      Granting some ambiguity to the word “significant”, a problem is to determine whether any of the observed warming was caused by GHG increases. If the increase since 1979 or since 1850 was caused entirely by processes other than GHG increases, then the sensitivity can’t be very high, and may be 0, and future GHG accumulation will not cause significant warming going forward. If the entire increase since 1979 is due to GHG increases, then the sensitivity must be “high”, and future GHG accumulations must cause “significant” increases above what the other natural processes cause (either warming, cooling, or nothing.) Putting it the other way around, if we knew the exact value of the sensitivity, we could estimate how much of the increase since 1979 was caused by the GHG accumulation, if we also knew the TCS (or some other way to estimate the rate of warming induced by GHG.) Then we could compute how much future increases in GHG will alter the future temperature course, but not whether the other background processes will produce warming or cooling, or over what time span.

      • Scientists have proven that a majority of the warming since 1970, with a central estimate of 100% (could be more, could be less), is caused by human activities. That’s the scientific fact of the matter.

        If you have discovered a natural forcing that can explain the observed warming, you should write it up and get to a journal right away. The burden of proof is on you.

      • You could be on stronger ground if you would say since 1950 or 1960, but 1970 may well be close to the minimum of natural variability and thus warming since that year may have more contribution from natural variability.

        Whether that’s true or not, the word “proven” does not fit what scientists have achieved.

      • lurker passing through, laughing

Claiming that science has ‘proven’ that ‘a majority of the warming since 1970 is caused by man’ is a really funny thing for someone who claims to be smart and informed to assert.
        And even funnier is to claim that stating otherwise is what has to be proven.
        Thanks,

      • dennis adams

Scientists haven’t proven anything. They only theorize that it is so.

      • Latimer Alder

        @robert

        ‘Scientists have proven that a majority of the warming since 1970, with a central estimate of 100% (could be more, could be less), is caused by human activities.’

Why the very strange construction ‘a majority……with a central estimate of 100% (could be more, could be less)’, which doesn’t seem to say anything at all?

        Why not just say ‘most of the warming’ if that is what you mean? And if you don’t mean that, what do you mean? And where was it ‘proven’?

        Methinks you speak with forked tongue.

      • MattStat/MatthewRMarler

        Robert: Scientists have proven that a majority of the warming since 1970, … .

        Sooner or later, everyone here goofs up, and that is your goof. No harm done if you rewrite it. Even the IPCC only claimed that no model without a dominant CO2 term could adequately model the observed change, not that they had proven that no such alternative was possible.

      • Primum non nocere.
        ==============

      • Steven Mosher

Robert, you are over-egging the pudding. Evidence indicates that a majority of the warming since MID CENTURY can be explained by anthropogenic forcing. That warming cannot be explained by our current understanding of non-anthropogenic forcing. That doesn’t amount to proof. Nothing is proved in science.

      • Steven, you say, ” Evidence indicates that a majority of the warming since MID CENTURY can be explained by anthropogenic forcing.” I am quite happy to follow you, Lukewarmer San, if you provide links that support: 1. the evidence you indicate. 2. what precisely is majority. 3. When in mid century (perhaps 1950). This is trickier than you might think – beware of confirmation bias.

      • A simple regression model with global temperature as the dependent variable and including GHG forcing and the AMO as independent variables suggests two things:
        1. Most of the warming since 1856 has been due to GHG forcing.
2. Superimposed on the warming trend is the influence of the AMO (or whatever it represents). Over its cycle of around 65 years temperatures are elevated/depressed by ± 0.2 °C. This suggests that over the period of rapid warming from 1976 to 2005 (when the AMO went from a minimum to a maximum) half the warming was due to GHG forcing and the other half to the influence of the AMO.

        See http://www.climatedata.info/Discussions/Discussions/opinions.php .

      • I dunno about half but this is a reasonable hypothesis otherwise (and not too far off from what Isaac Held has blogged). Yes, what Robert is saying is that it could be >100% AGW in the sense that anthropogenic aerosols could have been the second strongest (and negative) source of forcing/variability. Depending on your hypotheses about aerosols, maybe the AMO doesn’t exist, which I doubt. Mosher is right to call Robert on the over-egging, and provides the better context (timeframe) for the paraphrase of the AR4 statement. But there is still plenty of uncertainty to go around….if the internal variability was strong enough, and aerosols weak enough, the feedbacks due to water vapor/clouds could still turn out to be a lot lower than in any of the models (I don’t see any reason though that water vapor by itself could ever be a negative feedback).

      • I think I’ve never heard so loud
        The quiet message in a cloud.
        =======================

  2. “There is a revealing incongruity in the last paragraph of summary 9.6.4: “Results from studies of observed climate change and the consistency of estimates from different time periods indicate that ECS is very likely larger than 1.5°C with a most likely value between 2°C and 3°C. …Nevertheless, constraints from observed climate change support the overall assessment that the ECS is…a most likely value of approximately 3°C.” Which is the same value essentially since the 1979 Charney Report. The AR4 consensus selected the top value from the most likely range, perhaps because the IPCC AR4 GCMs mean ECS is 3.2. [3] This suggests consensus ‘high’ bias in several interesting ways.”

    Interesting use of ellipses there. The second ellipses only save about a half dozen words. Funnily enough those half dozen words turn out to be very inconvenient to your narrative. Here’s the full quote:

    “Nevertheless, constraints from observed climate change support the overall assessment that the ECS is likely to lie between 2°C and 4.5°C with a most likely value of approximately 3°C”

    Properly expanded we now see the likely value of 3C is being taken from a different ECS range, not the 2C-3C range mentioned earlier. In other words what you call a ‘high’ bias is nothing of the sort. The report is saying the 2C-3C range is compatible with the 3C+-1.5C range.

    • You are right, there’s no incongruity at all in that paragraph. At the beginning one class of evidence was considered, at the end it was combined with other evidence to get a consistent but slightly different conclusion.

This kind of false argument certainly makes one suspicious of the whole Rud Istvan article.

      • Steven Mosher

I certainly think they could have worded it better, and I would argue that there is evidence that the models do run a bit hot. There is, you have to admit, evidence of that. Whether that is the cause of the difference between observations and models is another matter. But they clearly and objectively run hotter than surface temps.

      • Steven,

        I agree that there are issues with wording of WG1. I have commented earlier that most of the caveats seem to be mentioned in the report but often in a way that leads one to believe that some authors wish to deemphasize them intentionally.

      • By quite a bit and that is the key to the whole issue actually.

  3. ‘In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’ AR4 S 3.4.4.1

One possibility emerging from the satellite data is that very little of the recent warming is the result of greenhouse gases. I can’t disentangle natural variability from some anticipated greenhouse gas response. The sensitivity notion can’t imply that temperatures will actually be 2 or 3 degrees warmer as that would require no other changes in the system in the interim. That seems very unlikely and thus the utility of the notion itself seems questionable. It seems unlikely also that we can discern with sufficient discrimination the fine details of climate variability in paleoclimatic data. Especially as we can’t seem to do it with satellite data. So while there may be numbers applied the error bounds seem at least as relevant.

Model results are not merely questionable but profoundly unreliable as a result of sensitive dependence and structural instability.

    So the argument continues about whether this ‘sensitivity’ is 1 degree or 10 degrees. Angels on a pinhead as far as I am concerned.

  4. It’s a pretty analysis, but it is, as Robert has pointed out, susceptible to the problem of the faulty premise.

    My own view is that every projection by models may represent a real possible outcome. Yet projections by models have widely divergent outcomes in terms of global temperature in the long run. This is not a failing of the models. Each of these outcomes is realistic. Any of them could happen. They may have varying probabilities, but even then there are multiple modes or ranges of outcomes at significant probability levels, too.

    The failing is in our premise of one ECS. There isn’t such a thing. It is a hydra in an Imaginary future. There are multiple solutions that satisfy the constraints. Further, temperature is only a single measure — and we’re coming to appreciate that surface temperature alone is an inadequate one more and more — among many vectors that adding CO2E will perturb the climate system through.

    The extremes, the unusual, the very different world we must now accept will happen in consequence of our actions but without our ability to know in advance its exact shape or dimension, those are our doing; the outcome of lucrative commercial and government policy actions affecting the Carbon Cycle through ignorance, neglect, indifference or maniacal compulsion.

It appears likeliest the Arctic transformation will include a summer that is essentially ice free, certainly enough to allow use of the Northwest Passage and to simultaneously encourage the illusion that the Arctic is more, not less, hospitable to drilling. The obvious is of course true: a high arctic summer with unstable and active weather will have more and deeper swells and more energy in currents and wind. Rigs will falter more, not less, under such conditions. And as we don’t know more than a few years ahead (if at all) the particular range of climate on the Arctic Ocean, but rig builders must anticipate decades ahead the conditions to design to, we’ll have disaster after disaster in that region.

This is not catastrophism, nor alarmism, but simple engineering and economic fact, about a zone that seven highly competitive nations are clamoring to control and exploit. Someone’s going to do something stupid up there, and in the Arctic, stupid gets people killed fast. Risky. Expensive. Not necessary, as the world can now more cheaply produce solar and wind than oil or natural gas, in particular from such remote and difficult locales, so it doesn’t even make business sense to do this thing, but international politics and greed, graft and nationalism, will trump good business sense.

    • BartR, I agree that exploiting the Arctic natural resources is not a great move, but this, ” Not necessary, as the world can now more cheaply produce solar and wind than oil or natural gas, in particular from such remote and difficult locales, so it doesn’t even make business sense to do this thing, but international politics and greed, graft and nationalism, will trump good business sense.” is not a valid point. Wind and solar do not replace transportation fuels and electric cars have a long road before they can replace internal combustion.

      There are other ways to replace transportation fuels, but they tend to get pushed aside in favor of limited and expensive alternatives that are the darlings of the left.

      • captdallas2 0.8 +0.2 or -0.4 | July 22, 2012 at 4:09 pm |

        And yet, remove entirely oil and gas from other sectors than transportation, build vehicles to more efficiently use fuel, and even without a single new electric or hybrid or alternate fuel vehicle built you will drop fossil fuel consumption potentially to some 13% of its current level.

        Take coal out of the system, and you overshoot the CO2E reduction targets of politicians by technology innovation that makes better business sense on price today (as in right now, currently, not in some far-flung future) alone.

        The only reasons people are still even talking about pipelines and new drilling is because of the vested interests and sunk costs of a few who invested badly in what we knew five years ago was outmoded capacity and what we knew twenty years ago was immoral.

        I have no sympathy for immoral idiots. Let them lose their shirts.

By that logic, just improve efficiency in all sectors and there is no need to change anything :) I am up for that, but you have the messy ROW situation. Perhaps leading by example and thinking outside of the box might be useful.

      • captdallas2 0.8 +0.2 or -0.4 | July 22, 2012 at 4:42 pm |

        The way to obtain that logical improvement in efficiency is to stop subsidizing inefficiency. :D

        As Fee & Dividend systems with price set by the Law of Supply & Demand achieve this when combined with an end to direct extrinsic and intrinsic subsidy by government and leveling the playing field in international trade, I’m as far outside the old box as you please. See what a wonderful example I’ve been setting? (And yes, I do drive a vehicle in the efficient end of what’s available, and have driven such fuel stingy CO2E sparing machines for most of my adult life, as well as avoid fossil fuels in non-transport applications.Is that leading by example enough for you?)

        Right now, today, for the USA all non-transportation energy needs could be met more cheaply by wind and solar technology (see Makani for wind and ZenithSolar for solar) than by tar sands or Arctic oil. There is zero real business need for additional pipelines in America, period. The best of the solar is market ready for commercial deployment today in the best of the locales for solar generation. The best of the medium-altitude wind technology looks to be within three years of deployment ready in the best of the wind locations. As economies of scale, further innovation, and marginal improvements in technology evolve, by the time the current fossil energy facilities are ready to be decommissioned, wind and solar would be more than ready to replace them — and more cheaply — to the predicted demand level and beyond.

        If you want real figures, and real supporting information, I invite you to check out the leading dozen or so companies in wind and solar, information from Stanford and MIT and any three or four leading academic or research facilities, and what various bodies like the EIA say. Makani’s predicting 60% the price of coal for its wind power. Zenith seems reluctant to divulge, but appears to be about 95% the price of coal. That’s unsubsidized, zero emission, price. (Zenith is using the trick that most of its generation is thermal, not electric, but the comparison remains valid, because coal uses its thermal cogeneration to sweeten its price for these comparisons, too.)

        http://www.eia.gov/forecasts/aeo/electricity_generation.cfm gives the figures for comparison of the current installed US technologies. Reduce the figures for wind and solar to one quarter and one third, respectively, to reflect late developments in medium-altitude wind generation and multijunction hybrid concentrated photovoltaic/thermal solar. That’s where the market will stand in five years on the technology in the pipeline. What is the business justification for government to invest in further fossil infrastructure, given these factors?

      • “What is the business justification for government to invest in further fossil infrastructure, given these factors?”

        Diversity. Developing a pipeline specifically for tar sands is nonsense. Developing an infrastructure for mixed energy is intelligent. There are a lot of other products that can be pumped through those lines more cost effectively than transmission over a power grid. Right now the price of nat gas drives most of the USA electrical market but solar and wind still need a storage means that is cost effective. You want to plan for the real future not your desired future.

      • captdallas2 0.8 +0.2 or -0.4 | July 22, 2012 at 8:16 pm |

        Uh huh. Pull the other one. Tar sands megagiants Enbridge and Trans Canada Pipelines — who together have paid over a billion dollars in the past half decade to repair oil spills — are expropriating 5,000 miles by 100 feet of US soil to diversify and carry natural gas, while electric transmission lines are inefficient.

      • Latimer Alder

        @bart r

        ‘And yet, remove entirely oil and gas from other sectors than transportation, build vehicles to more efficiently use fuel, and even without a single new electric or hybrid or alternate fuel vehicle built you will drop fossil fuel consumption potentially to some 13% of its current level.’

        This is a remarkable proposition.

        Are you discussing this being possible locally, nationally or worldwide? Can you justify it with some numbers? And how about the practicality? What energy sources would you use instead? How long would it take to do so? How much would it cost? And how much reduction in GAT would it achieve by what timescale?

      • Lattie, You are forgetting that you are in England. MYOB.

      • Phil Cartier

        Bart, if you remove oil and gas entirely from all sectors of the economy except transportation you no longer have an economy. No energy no steel in the electric car, no copper for the electric motor, no oil no plastic for all the lightweight parts, no aluminum for the batteries, or lithium for that matter.

        Re wind and solar, if we depend on them to power the economy its efficiency would drop about 75%. The hidden cost of wind and solar is that they require a kWh for kWh backup energy supply with nearly instantaneous response. If you build that much advanced combined cycle electric generation why bother with the wind and solar? The money saved there would more than make up for the energy they supply by being put to more productive use elsewhere.

        You can already see good old capitalism going to work to maximize profits and efficiency. Shell is in the throes of trying to build an ethylene cracker in western PA to use the cheap natural gas now available to produce a large array of much more valuable chemicals.

        We ought to be going LFTR nuclear as soon as possible. Use cheap thorium and spent nuclear fuel rods for electricity, including producing transportation fuels, and save the oil, gas, and coal for the materials industry.

      • Phil Cartier | July 27, 2012 at 5:11 pm |

        I’d disagree, but industry has done it for me.

        Steel, while it can’t be made without carbon at all, can be made with electricity from many non-carbon sources. Niagara Falls produced the electricity to make much of America’s steel for decades. Likewise, Aluminum takes enormous electricity reserves.. generally hydroelectric. To make Natural Gas into LNG takes electricity, often from hydro, too.

        Copper needs coal like a fish needs a bicycle. Likewise, lithium.

        Indeed, if you BURN THE PETROCHEMICALS YOU CAN’T MAKE THEM INTO PLASTIC AT ALL!

        You assert the efficiency of the economy would drop to “about 75%” on wind and solar. What the heck does that even mean? If wind and solar are cheaper — which right now, today, the best of the new wind and solar technologies are better than tar sands by far, and by coal, and once the EWE’s hit the demand for fertilizer through the Haber Cycle, below half the highly volatile price of natural gas no matter how much you frack.

        And this “hidden cost” argument.. how does it not apply to any energy generating facility? Do you really expect the whole world to think there’s machines that never cycle down for maintenance? Sure, a cunning accountant could make it seem like anything’s “too expensive”, using Hollywood accounting practices, but that just doesn’t make it so. The advanced solar facilities have thermal storage schemes that keep them producing long after sundown, and medium-altitude kites are projected to be more reliable than gas-powered plants in terms of uptime. Where did you come up with your claims? A textbook printed in 1952?

        And calling Shell a capitalist is rich irony indeed. Here’s a hint: if it takes charity from the government, it’s not a capitalist. It’s a rent-seeker.

  5. lurker passing through, laughing

    The faulty premise the consensus extremists hold to with a death-grip is that we have had significant, unusual and/or dangerous warming.
    The funnest part is how they deny (ahem) that they are stuck on catastrophic alarmism.

  6. David Wojick

    The concept of ECS is worse than irrelevant; it is highly misleading, as climate is a far from equilibrium system, with additional independent forcings to boot. It is like noting that basic physics says that in a vacuum a feather falls as fast as a baseball, but so what? We are not in a vacuum. My feather did not fall at all, rather it blew up into a tree. Note that basic physics is not falsified by this complex reality, it is just misleading.

    Here is a question I have asked many times and never yet got an answer. Suppose CO2 doubles but we fall into the next ice age, so global temperature drops 10 degrees. Is the CO2 sensitivity minus 10 degrees? If not then what use is it? All we want to know is what the climate is going to do, not what it would do if we ignore what it will do, which is what sensitivity does.

    I also note that none of these PDFs go below zero. Nonlinear dynamical systems often exhibit counter intuitive behavior, which in this case means that adding a CO2 forcing triggers nonlinear feedbacks that cause cooling. Until I see this case I must assume that the modeling community is not taking nonlinear dynamics seriously, or is even repressing it.

    In short arguing about the precise form of the positive ECS PDF is arguing about something that does not matter, as though it did. This is probably the most basic fallacy in AGW.

    • “Suppose CO2 doubles but we fall into the next ice age, so global temperature drops 10 degrees. Is the CO2 sensitivity minus 10 degrees?”

      No, because of CO2 we would be 1.5 to 4.5 C less frozen :) Since the oceans are an infinite heat sink with no thermal capacity restrictions of any kind, that of course cannot happen.

      • tempterrain

        “Since the oceans are an infinite heat sink with no thermal capacity restrictions of any kind, that of course cannot happen.”

Not infinite. Of course ice ages, or more correctly glacial and interglacial periods, have happened in the past and could again in the future if CO2 levels were low enough. You could argue that a small increase in CO2 levels is a good thing, to maybe 350ppmv or so.

      • Temp, I take it you are claiming that the next ice age cannot come on as long as the CO2 levels remain above 350. Good to know, not that I believe it. Speculation presented as fact is the state of the science.

      • Wojick said:

”I also note that none of these PDFs go below zero”

Hey F- boy, in your last comment you said that you don’t believe radiative forcing can warm. If radiative forcing will cool, according to your logic, then of course it is impossible for the PDF abscissa to go above zero.

        Is this Wojick the guy that was going to prepare educational material for the next generation of reactionary zombies? How perfect, it will keep them from getting jobs in science.

      • David Wojick

        Web, it is a different argument. Do try to keep up. I study, and present, all the arguments.

      • thisisnotgoodtogo

        WHT said
        “Hey F- boy, your last comment you said that you don’t believe radiative forcing can warm. ”

        That is untrue. You seem intent to provide disinformation.

      • tempterrain

        David Wojick,

        I’m not claiming that personally. I’m not a climate scientist. :-)

        My comment was based on work done by Dr Luke Skinner at Cambridge Uni. It could well be that 325 ppmv rather than 350ppmv of atmospheric CO2 is all that’s needed to defer the onset of the next ice age. But as the level is now 390ppmv, and rising fast, it is not a decision we’ll have to make any time soon!

        http://www.cam.ac.uk/research/news/ice-age-interrupted/

    • Here is a question I have asked many times…

      Questions without satisfactory answers may be ill-conceived.

      • Yes, that is the point of the question, that the concept of sensitivity is ill conceived. Hence the question has no sensible answer.

      • The question is ill conceived, David, not climate sensitivity.

        Some organized and regular study of the science might help you understand where you went wrong. Sadly, people who invoke “nonlinear dynamical systems” without understanding the math behind them or basic physics are often prone to that sort of confusion.

      • Robert, you are confusing my first paragraph, about the incoherence of sensitivity, with the second one, about nonlinear dynamics. They are not related. And I understand the latter quite well thanks. If you have a point of substance to make, which I doubt given your record, please make it.

      • One can define sensitivity in a meaningful way, but your question may be ill-conceived because its premise is incompatible with that meaningful definition.

      • What premise is that Yikess?

    • Steven Mosher

      david

      “Suppose CO2 doubles but we fall into the next ice age, so global temperature drops 10 degrees. Is the CO2 sensitivity minus 10 degrees? ”

      you are confused.

Climate sensitivity is the ratio between TOTAL FORCINGS and the total response in temperature.

If CO2 doubled that would mean an additional 3.7 Watts of forcing. If the sun dimmed by 6 watts, the net forcing would be -2.3 Watts.

You cannot conclude anything by merely looking at the change in CO2 and the change in temperature, because the physics says that the temperature is the result of ALL FORCINGS: not just CO2, not just the sun, not just methane, not just changes in albedo, ALL FORCINGS.

Doubling CO2 adds 3.7 watts to the budget. This is known and tested science. The physics required to calculate this is the same physics used in the defense of our country. Climate sensitivity is a different beast altogether. Look at the equations: sensitivity is the response in °C to a change in watts, whether those additional watts come from CO2, land use change, a change in the sun, a change in clouds, a change in dust, etc.

      • But that is not how the concept is defined or used Steve. It is CO2 sensitivity, not total forcing sensitivity, right? Nor is temp change merely a matter of ALL FORCINGS ( why shout?), but also feedbacks.

        Please answer my simple question. If CO2 doubles and temp goes down by 10 degrees, what is the sensitivity?

      • Steven Mosher

        “But that is not how the concept is defined or used Steve. It is CO2 sensitivity, not total forcing sensitivity, right? Nor is temp change merely a matter of ALL FORCINGS ( why shout?), but also feedbacks.”

You are wrong. When we speak about the sensitivity to CO2 doubling, what is meant is the response to a change of 3.7 Watts, plain and simple. Read the article above.

Climate sensitivity is the change in °C for a change in Watts, whether those watts are increased solar output or increases from GHG forcing.

It is all forcings (yes, and the climate system feedbacks) that determine the response.

        READ

        “Equilibrium climate sensitivity (ECS) is the total eventual temperature response of surface temperature to a radiative perturbation. ”

What is the response to a change of 3.7 Watts? Roughly 1.2°C with no feedbacks, 3°C if you buy all the feedbacks of AGW.

Doubling CO2 produces 3.7 watts. That entails ~1.2°C warming with no feedbacks, more if you buy all the feedbacks of AGW.

        I can explain this to you but I cannot understand for you.

      • Steven Mosher

        “If CO2 doubles and temp goes down by 10 degrees, what is the sensitivity?”

The sensitivity of the climate is not determined by CO2 doubling.
You cannot estimate the sensitivity by looking at one forcing; it’s not even defined in those terms.

        ““Equilibrium climate sensitivity (ECS) is the total eventual temperature response of surface temperature to a radiative perturbation. ”

Nothing about CO2 is required to estimate the sensitivity. As I have pointed out to you many times, we can estimate the sensitivity of the system by looking at the relaxation response, for example, after volcanic forcing.

If CO2 doubles and the temp goes down by 10°C, the sensitivity of the system is still what it always is. For example, if CO2 doubles (+3.7 Watts) and the sun decreases by 16 Watts, you’d expect around a 10°C cooling. Sensitivity to CO2 doubling is still the same as it ever was.
If CO2 doubles and other forcings don’t change you’d get around 3°C of warming. Sensitivity is still the same.

      • Rob Starkey

        Steve wrote-

“If CO2 doubles and other forcings don’t change you’d get around 3°C of warming.”

        Steve – There are a lot of assumptions in that sentence beyond the basic math, and that number is largely speculation.

      • David Wojick

        Exactly Steve, but do you not see that you are using two very different concepts of sensitivity? One is the total system response to everything that happens. In this case the sensitivity is minus 10 degrees. The other is the pure response to CO2 doubling, with no other forcings, which you estimate at plus 3 degrees. It is the confusion between these two very different concepts, which happen to have the same name, sensitivity, that I am pointing to.

        Everywhere I look I see people mistakenly thinking that solving the CO2 sensitivity problem solves the total system sensitivity problem, but it clearly does not. In the present case it is these PDFs, which are about CO2 sensitivity.

      • David Wojick

        I agree with Rob. I note that the CO2 sensitivity estimates do not allow for a cooling response. Given the nonlinear negative feedbacks that is probably a mistake. For a modest sum I am pretty sure I can construct a simple climate model that cools when CO2 is doubled.

        How that relates to the radiative perturbation I do not know, because I do not know what perturbation you are referring to. But if everyone is assuming that warming must occur, that cannot be correct. There is always the possibility that the negative feedbacks will negate or even reverse the forcing. This is a common occurrence in systems like this. The probability may be small but it cannot be zero. Nor do we know it is small, I think. In fact the satellite data suggests it is happening.

      • tempterrain

        David,
        You should think of climate sensitivity in mathematical terms, i.e. the partial derivative
        Delta Temp / Delta [CO2]

        So all other variables do have to be held constant to obtain the correct answer.

        Furthermore the answer will only be correct at one particular value of [CO2]. It is possible, but unlikely, that there could be an inflexion in the curve which would indicate a local negative sensitivity.

      • lurker passing through, laughing

        Yes, the budget potential of 3.7 watts, in a system that is operating in the hundreds of watts, is acknowledged.
        So what?
        Tying that budget increase to catastrophic, much less unusual weather manifestations is the question.
        And so far, the answer is pretty clear that except for heroic abuses of stats and rewrites of history, not much is happening.

    • CO2 sensitivity is an illusion. Correlation is not necessarily causality and as you and many others have said: science proves nothing. But neither does polemics. Back from an extended tour of the UK and a great holiday to boot!

  7. Lance Wallace

    As a review article, it lacks the expected final graph showing all results with error bars. For completeness, it probably ought to include the Lindzen and Choi estimate (your reason for not including it seems to be little more than speculation).

    • How do you get error bars on a PDF?

      • Wojick said:

        “How do you get error bars on a PDF?”

        Are you scientifically retarded? A PDF defines aleatory and/or epistemic uncertainty which includes all known error bars.

      • David Wojick

        But the uncertainty is already expressed by the distribution. As you put it, the PDF already includes all known error bars. Adding error bars to the PDF would express some additional uncertainty, uncertainty in the form of the distribution. That is, the bars would be on the curve, not in the curve. Where do they come from?

    • Rudyard Istvan

      It was not and never will be a review article. Why review nonsense? This was intended as a de novo examination of the assertions being made. It will stand (as the book will) as nothing more than that. As for Lindzen and Choi, I spent a personal half day with Prof. Lindzen at MIT in May getting his pointed critique of my own thinking. Have you?
      Regards

  8. To me, the problem with ECS is the albedo uncertainty due to glaciers and sea ice changes. The last time we had 560 ppm was around the end of the Cretaceous when there were no glaciers on Greenland and Antarctica, and Arctic summer sea ice didn’t exist. It was 6-8 C warmer. Does that make the double-CO2 sensitivity 6-8 C? I would suggest that the difficulty is with time scales and hysteresis. The next time we hit 560 ppm around mid-century, these glaciers will still be there increasing albedo, so it won’t be that much warmer, but what about summer sea ice? I suggest that the difference between sensitivities of 2 and 3 C is how much of that is assumed to be gone. Later, over the next few centuries (short on climate timescales), the continental glaciers will be gone, and the Cretaceous iceless hothouse conditions would return, even if CO2 was maintained at 560 ppm without going higher, though most would expect an equilibrium rather higher the way things are going.

    • The last time we had 560 ppm was around the end of the Cretaceous when there were no glaciers on Greenland and Antarctica, and Arctic summer sea ice didn’t exist. It was 6-8 C warmer. Does that make the double-CO2 sensitivity 6-8 C?

      It certainly raises that possibility. So often, people who condemn climate model outputs as alarmist do not realize that, A) Lots of climate science is about investigating the paleoclimate for empirical evidence of how it behaves in different circumstances, and B) Many of those investigations of ancient climate are far less reassuring than the model studies.

      • The change in forcing from a snowball Earth to a blue green planet is about 60 W/m^2 (an albedo of 0.5 to 0.25). Does that make CO2 almost entirely irrelevant?

        The models are well – unreliable. ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’ James McWilliams

        What maths of complex dynamical systems would that be Robert? Really – let’s have your take on it.

      • It makes albedo a potent feedback to changes in atmospheric composition.

      • The usual theory is that orbital changes set up conditions for runaway ice and snow expansion. But whatever floats your boat, Jim. Personally – I would think about multiple factors. Cloud, dust, thermohaline circulation, dimethyl sulphide emissions, yada, yada.

        ‘The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial.’ Hurrell et al 2009

        Still I know that some people prefer simplicity to complexity – despite reality. Somehow webby comes to mind.

      • As I have mentioned when you brought this quote up before, this was about decadal prediction, not long-term climate-scale trends.

      • Jim D, I somewhat disagree. In the context of the Holocene, the distinction between chaotic weather and stable climate has a tenuous hold on existence. In the context of the rest of earth’s climate history, perhaps not so much.

      • Not true Jim. The paper was called ‘A Unified Modeling Approach to Climate System Prediction’. It considered a number of scales for initialised prediction to 2050 and projections beyond that. See figure 1.

        ‘The accurate representation of this continuum of variability in numerical models is, consequently, a challenging but essential goal. Fundamental barriers to advancing weather and climate prediction on time scales from days to years, as well as longstanding systematic errors in weather and climate models, are partly attributable to our limited understanding of and capability for simulating the complex, multiscale interactions intrinsic to atmospheric, oceanic, and cryospheric fluid motions.’

        It is difficult to imagine that the ‘complex, multiscale interactions’ cease after a decade or so and don’t go on influencing climate. But whatever takes your fancy Jim.

      • Heh. Citing “climate scientists” who investigate paleoclimate just reminds us of the Hockey Stick, Yamal, Upside Down Tijlander, and all the other frauds that “climate scientists” have put forth.

        Let me remind you that real scientists follow the Scientific Method. “Climate scientists” refuse to do so.

    • tempterrain

      Judith has already agreed that these high figures are quite possible, though not over the course of a single century. It’s worth looking at this posting, which includes discussion of climate sensitivity too.

      http://judithcurry.com/2011/02/26/agreeing/

    • This is an appealing idea for the lukewarm hypotheses. If one accepts ice albedo as a true positive feedback, but one that takes a long time to reach equilibrium, I think you can go a long way towards explaining glacial delta T without the atmospheric feedbacks, which are usually far more significant in producing alarming estimates.

  9. Rob Bradley

    “GCMs are oversensitive. IPCC future warming forecasts are too high. IPCC AR4’s own sensitivity discussion irrefutably illustrates this ‘high’ bias.”

    Interesting–and victory to the nonalarmists! Gerald North of Texas A&M has long been in the 2C for 2x camp. I finally got his confidence interval, 1.75–2.25C, at a lunch where Andy Dessler, his colleague, sitting next to North, said he feared that future generations would have to live underground because of all the anthropogenic warming and such.

    North does not have a model but has his intuition. He also has been suspicious of models: http://www.masterresource.org/category/climate-change/north-gerald-texas-am/.

    For us amateurs listening to the experts, it is polarized indeed!

  10. Let me get on my hobby-horse. In the end, if climate sensitivity is high, then sooner or later global temperatures will have to climb much more than is expected from the rise seen in the data from about 1850 to the middle of the 20th century. This must then show up as a CO2 signal in the temperature/time graph. If no CO2 signal appears, then we know the climate sensitivity is small.

    Since, to date, there is no CO2 signal at all in temperature/time graphs of data from the 20th and 21st centuries, we can conclude that the actual climate sensitivity of CO2 is indistinguishable from zero.

    The question no proponent of CAGW will attempt to answer is, “How long do we wait for a CO2 signal to appear, and no such signal comes, before we conclude that there is never going to be a CO2 signal at all?”

    In the end, the empirical data on climate sensitivity will prove that CAGW is just plain wrong. And all these hypothetical estimations will finish up in the garbage cans of history.

  11. Stephen Wilde

    Climate sensitivity seems to me to be high but in a negative direction.

    Any factor that tries to upset the system energy content as set by surface atmospheric pressure (combined with the level of solar input) simply results in a change in the atmospheric circulation instead of a change in system energy content.

    Thus more GHGs do have a climate effect in terms of a change in the atmospheric circulation but it is infinitesimal as compared to the effects on the atmospheric circulation of ocean and solar cycles.

    Ultimately the ocean cycles depend on the amount of solar energy that gets into the oceans so it is solar effects that dictate the system energy content at a given surface atmospheric pressure.

    The sun controls the amount of energy getting into the oceans by varying the vertical temperature profile of the atmosphere via variations in the mix of wavelengths and particles which interact with the chemical composition of the atmosphere, especially ozone.

    Those solar variations alter the slope of the tropopause height between equator and poles thus allowing the climate zones to shift latitudinally either poleward or equatorward beneath the tropopause. That shift alters the degree of zonality or meridionality of the jetstreams.

    A zonal jetstream scenario reduces global cloudiness and allows more energy into the oceans to fuel El Nino events at the expense of La Nina events. It also allows faster loss of energy to space.

    A meridional jetstream scenario increases global cloudiness and allows less energy into the oceans to weaken El Nino events as compared to La Nina events. It also reduces the rate of energy loss to space.

    An active sun causes increased zonality with warming of the oceans and troposphere but faster energy loss to space.

    An inactive sun causes increased meridionality with cooling of the oceans and troposphere but slower energy loss to space.

    The Pacific Multidecadal Oscillation (PMO not PDO) is overlaid on the solar effect so that we get upward temperature stepping from one positive PMO phase to the next during a warming period such as LIA to date and downward temperature stepping from one negative PMO phase to the next during a cooling period such as MWP to LIA.

    The question today is whether the recent solar inactivity will last long enough to convert the recent upward stepping to long term downward stepping.

    We know from anecdotal evidence that the climate zones and jetstreams shifted a great deal between MWP and LIA and LIA and today.

    In some regions such as the western approaches to Europe the latitudinal shift was 100 miles or so.

    Maybe human emissions of CO2 contribute less than a mile unless someone can prove otherwise.

    • Rudyard Istvan

      Might be true. Extensive evidence in Arts of Truth suggests slightly positive (best guess of S is now <1.9).

  12. Dave Springer

    There’s some heated debate (pun intended) on how apropos it is to apply S-B law to a water planet with exceedingly uneven heating. S-B law applies to an evenly illuminated massless plane in thermal equilibrium.

    • Stephen Wilde

      Hi Dave.

      Good point, and you will know that I said elsewhere that one should only apply S-B at the point where energy in equals energy out, i.e. at the top of the atmosphere, where all disruptive effects of the atmosphere can be excluded.

      Applying it at the so called ‘effective radiating height’ is just plain wrong because that point is still within the atmosphere and so subject to disruption caused by the atmosphere.

    • The S-B law cannot be used for much more than descriptive purposes; it’s too integrated for anything else and it really applies to black bodies only (it would apply to real gray bodies, but those do not exist even to the extent black bodies do).

      For real calculations something more detailed is needed; Planck’s radiation law is the next step and much more applicable.

      I don’t think that S-B law is used for any serious calculation in climate science.

      • Michael Hart

        Does anyone use an “ECS” constant for any serious calculations in climate science?

      • Dave Springer

        The infamous “greenhouse gases make the earth 33C warmer than would be otherwise” is straight from S-B.

        Are you saying that’s not a serious calculation, Pekka? Or that it’s not taken seriously by the climate community?

        I look forward to seeing how you weasel out of this one, Pekka “The Weasel” Pirila.

      • I’m saying that the Stefan-Boltzmann law is not used in that calculation, as it’s not detailed enough for that use.

        The Stefan-Boltzmann law can be used in the derivation of results like

        – The effective radiation temperature of the Earth is 255K.

        That’s, however, nothing more than a restatement of the fact that the total IR emission of the Earth is as large as that of a black body of the same size that has the uniform temperature of 255K. The S-B law is used for the calculation of the temperature of the imaginary black body when the IR flux is known, but it’s not used for the calculation of the flux or for any related calculation that concerns the real Earth. Those calculations are done with more detailed formulas.
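
        For what it’s worth, that restatement is a one-line calculation; taking an outgoing IR flux of roughly 239 W/m2 (a commonly quoted round number, treated here as an assumption), inverting σT^4 gives the 255 K figure:

          # Sketch: effective radiating temperature of the equivalent black body
          sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
          F_out = 239.0            # assumed outgoing IR flux, W/m2 (round number)
          T_eff = (F_out / sigma) ** 0.25
          print(round(T_eff, 1))   # ~254.8 K, i.e. the familiar 255 K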

      • Dave Springer

        You da weasel!

        Very entertaining. Thanks.

  13. The easiest way is to measure the damned thing.
    Plot the log of [CO2] vs the temperature anomaly and the slope (scaled to a doubling) gives you the sensitivity; 2x[CO2] = 2.2 C

    http://i179.photobucket.com/albums/w318/DocMartyn/LNCO2vstemp.jpg

    It is not hard to do.
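
    For anyone who wants to reproduce that kind of plot, a minimal sketch of the regression is below. The CO2 and temperature arrays are placeholders to be replaced with real annual values (Mauna Loa CO2, a surface temperature anomaly series); they are not my data, and the number printed is simply whatever the fit implies, with all the usual caveats about other forcings and lags.

      import numpy as np

      # Placeholder annual series -- substitute real CO2 (ppm) and temperature
      # anomaly (C) values; these arrays are purely illustrative.
      co2_ppm = np.array([315.0, 330.0, 345.0, 360.0, 375.0, 390.0])
      temp_anom = np.array([0.00, 0.08, 0.18, 0.30, 0.42, 0.50])

      # Slope of T against ln(CO2); multiply by ln(2) to express it per doubling.
      slope, intercept = np.polyfit(np.log(co2_ppm), temp_anom, 1)
      sens_per_doubling = slope * np.log(2.0)
      print(round(sens_per_doubling, 2))   # "sensitivity" implied by the fit, in C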

    • Steven Mosher

      Problem is you are assuming all other forcings to be equal, and you are assuming that the time span is long enough for both fast and slow feedbacks to kick in.

      • “assuming that the time span is long enough for both fast and slow feedbacks to kick in”

        You think the change in temperature at any place on the Earth is going to be greater than the diurnal cycle temperature change? If the answer is no, then the maximum lag in the system is going to be one seasonal cycle, 1 year.
        If you treat the system as a heat sink starting at 5C sitting on a 15C hot plate, then you can observe a lag, due to thermal inertia.
        If your hot plate cycles between 5 and 25 degrees, the steady state temperature, according to (Tmin+Tmax)/2, is going to be about 15 degrees. Change the cycle from 5 to 25 into 6 and 26 degrees, and the second cycle will be the same as the 3rd, 4th, 5th, ….
        No major lag, beyond a cycle transition. Just like changes in your blood flow from a pulsing heart.
        This is the biggest difference in a dynamic system, compared with an equilibrium system.

      • Steven Mosher

        You’ll want to see the work that Lucia and Arthur Smith did on estimating lags using a two-box model. It’s not a year, not by a long shot.

      • Mosh is on the right track, as usual. Compartment or box models that feature any kind of diffusional factor will in general lead to a characteristic lag that has a long tail. What confuses people is that an initial transient often shows up that can mimic a fast exponential decline. The more slab layers that go into the compartment model, the easier it is to see this behavior.
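
        A minimal two-box sketch (a shallow mixed layer coupled to a deep layer; every number below is an illustrative assumption, not Lucia’s or Arthur Smith’s fitted values) shows exactly that behaviour: a quick initial rise followed by a long, slow tail.

          # Two-box response to a step forcing: fast surface box + slow deep box.
          F, lam = 3.7, 1.2            # step forcing (W/m2) and feedback (W/m2/K), assumed
          C1, C2, k = 8.0, 100.0, 0.7  # heat capacities (W yr m^-2 K^-1) and coupling, assumed
          dt, years = 0.1, 300
          T1 = T2 = 0.0
          surface = []
          for _ in range(int(years / dt)):
              dT1 = (F - lam * T1 - k * (T1 - T2)) / C1
              dT2 = k * (T1 - T2) / C2
              T1, T2 = T1 + dt * dT1, T2 + dt * dT2
              surface.append(T1)
          # Fast transient is nearly done after a decade, but the response keeps creeping up.
          print(round(surface[int(10 / dt) - 1], 2), round(surface[-1], 2))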

      • Which is why box models are the wrong approach.
        The Mediterranean is a good place to look at the dynamics of heat transfer. The surface shows a large seasonal oscillation in temperature; as one descends in depth the oscillation becomes attenuated, until it disappears. The deeper bottom areas of the Mediterranean have a temperature which is lower than the seasonal Tmin.
        During the summer, solar evaporation makes the surface highly saline. When the surface cools during the winter, this water sinks, about 36 meters per hour, to the bottom. The water temperature at the bottom is less than the surface winter (Tmin+Tmax)/2.

      • Dave Springer

        DocMartyn is correct. The lag time can be determined by the actual response of the system in question to increased and decreased forcing. Maximum insolation in the northern hemisphere happens June 21. Maximum surface temperature happens Aug 1. Minimum insolation occurs December 21 and minimum temperature occurs Feb 1. The lag time is about 6 weeks in both cases. There’s no need to use a fictional hypothetical model for this when we have The Real McCoy to look at instead.

        QED

      • Doc, since the Med has a fairly deep current through Gibraltar, maybe you can compare with landlocked seas like, say, the Caspian?

        http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA522817

      • Just because something has a duty-cycle attached to it doesn’t mean you can disobey the laws of physics.

        Wow, this is argument by assertion at its finest.

      • ” WebHubTelescope
        Just because something has a duty-cycle attached to it doesn’t mean you can disobey the laws of physics”

        Averaging the peak to trough of an oscillating system and then treating the resultant figure as an average influx of heat is obviously a more realistic representation of the thermodynamics of the system, is it?

        The temperature and salinity of surface water on the coldest winter night and on the hottest summer’s day represent actual physical states, whereas the averages of the two do not.
        Would you plunge your head into boiling water and your feet into liquid nitrogen, happy in the knowledge that your average body temperature will be unaffected?

      • Dave Springer

        Yes. The vast bulk of the ocean/atmosphere system sits below the oceanic mixed layer and its temperature is a pretty constant 3C. When the easily warmed surface layer above it eventually equilibrates with the frigid bulk of the ocean, short term CO2 sensitivity will fall to some undetectable value.

        Oh wait, that’s not what you wanted to say, was it? You wanted some fairy story about how the tail wags a dog. Sorry.

      • Mosh, it may interest you to know that climate sensitivity has now completely changed.
        Using the latest version of GISS, but keeping everything else exactly the same, the [CO2]x2 is now 2.7.
        Half a degree added by increasing the post-1980 slope.
        The difference between the version that ended in 2008 and today’s version is 0.04 degrees per decade from 1951.
        Now am I a suspicious bastard, or are we dealing with a bunch of charlatans?

    • DocMartyn,

      Your graphs are interesting, and useful, in that they indicate a 2.2 degC lower limit.

      But as Steven says, time lags have to be taken into consideration and these are going to act in the direction of increasing this figure. One seasonal year, as you’ve later suggested, is likely to be much too short to allow for the effects of ocean heating and changes in the Earth’s albedo.

      • Alexej Buergin

        It is true that DocMartyn calculates from concentration and temperature increasing to c&t increasing instead of equilibrium to equilibrium. But the lag is at the beginning as well as at the end, and that cancels out:
        http://www.ipcc.ch/ipccreports/tar/wg1/345.htm
        ECS goes from year 0 to year “much later”, but it is the same for year 70 to year 140.

      • How can you have an equilibrium when you have both a diurnal cycle and seasonal cycle?
        In an equilibrium there is no overall exchange of energy between the system and the external bulk environment. The temperature of the surface of the oceans shows both diurnal cyclic and seasonal cyclic changes in temperature.
        The amplitude of the peaks and troughs is attenuated with depth, but is noticeable until we reach the thermocline.
        Looking at the Mediterranean, where we know that all the water sources going in are warmer than the bottom temperature, we can observe the rapid heat exchange over 24 hours:-

        See Fig 2 and 3.
        Diurnal changes in the bio-optical properties of the phytoplankton in the Alborán Sea (Mediterranean Sea)
        http://www.sciencedirect.com/science/article/pii/S0272771406001879

      • Dave Springer

        DocMartyn | July 23, 2012 at 2:18 pm | Reply

        “How can you have an equilibrium when you have both a diurnal cycle and seasonal cycle?”
        .
        :shock:
        .
        Add my name to those interested in an answer to that.

      • Alexej Buergin

        I agree with you; but if you look at the IPCC DEFINITION of TCR and ECS, you see that both start at equilibrium, the first ends when concentration is constant, and the second has a lag and ends after 1000 years or so. Remember: this is just the (very simple) MODEL T=x*ln(C2/C1), that has no other parameters than CO2.
        Since CO2 has been increasing for quite some time, we cannot go from eq. to eq. and use measured temperatures.
        What climatologists (including Mosher) do understand is that there is a lag at the end; but they seem not to understand that there is the same lag at the beginning, too. So, if you look at IPCC Fig. 9.1, ECS should be read from zero to year 250 (green curve), but that is the same as from years 70 to 140 (red or green straight line).
        http://www.ipcc.ch/ipccreports/tar/wg1/345.htm

      • tempterrain

        Alexej,

        I can’t see how your link confirms your line of argument at all. If CO2 concentrations were to stabilise, at the exact point when the temperature rise measured 2.2 deg C, the Earth would still carry on warming slightly afterwards.

        If the IPCC’s figure of 3 degC is correct it will warm by another 0.8 degC.

        So Doc Martyn’s graph isn’t at all inconsistent with IPCC findings.

      • Alexej Buergin

        I absolutely agree. What I dispute is the assumption (of others) that the lags will change anything.

      • Alexej Buergin

        IPCC Fig. 9.1 shows TCR=2°C and a lag-rise of 1.5°C, which results in ECS=3.5°C. The assumption is an exponential CO2-increase of 1% per year, which amounts to a doubling in 70 years and a linear rise of the temperature.
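
        A crude way to see that TCR/ECS distinction numerically is a one-box model driven by 1% per year CO2 growth; the heat capacity and sensitivity below are assumptions chosen only to mimic the figure, not values taken from the IPCC.

          import math

          ECS = 3.5          # assumed equilibrium sensitivity per doubling, C
          C_eff = 30.0       # assumed effective heat capacity, W yr m^-2 K^-1
          lam = 3.7 / ECS    # implied feedback parameter, W m^-2 K^-1
          dt, T, co2, co2_0 = 0.05, 0.0, 280.0, 280.0

          for _ in range(int(70 / dt)):              # 1%/yr growth doubles CO2 in ~70 yr
              co2 *= 1.01 ** dt
              F = 3.7 * math.log(co2 / co2_0) / math.log(2.0)
              T += dt * (F - lam * T) / C_eff
          print(round(T, 2))   # transient response at doubling (TCR), well below the assumed ECS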

  14. Several authors have attempted to estimate climate sensitivity by comparing global average temperature (GAT) at the last glacial maximum to the GAT at pre-industrial conditions using estimated forcings and the known difference in CO2 concentration. These are reviewed and summarized in a pdf report available at http://www.spaceclimate.net (scroll down to “ancient climates”). Hansen and Sato obtained the canonical value 3°C (surprised?). However, when more realistic data are used, this drops to 2.2°C. Chylek and Lohmann obtained 1.8°C. Kohler et al. estimated 2.4°C but when I carried out their calculation, I ended up with 1.3°C.
    In addition this pdf report summarizes data on CO2 and GAT back to 550 million years ago seeking a relationship between CO2 and GAT. Unfortunately the data are too sparse and noisy to draw conclusions.
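
    The arithmetic behind these LGM-based numbers is essentially a scaling of the glacial ΔT/ΔF ratio to the 3.7 W/m2 of a CO2 doubling. A sketch with round illustrative values (not the specific inputs used by Hansen and Sato, Chylek and Lohmann, or Kohler et al.):

      # Sketch: ECS from a glacial-to-preindustrial temperature and forcing difference.
      dT_lgm = 5.0     # assumed LGM-to-preindustrial warming, C (illustrative)
      dF_lgm = 6.5     # assumed total forcing change (ice sheets, GHGs, dust), W/m2 (illustrative)
      F_2xCO2 = 3.7    # forcing from a CO2 doubling, W/m2

      ecs = F_2xCO2 * dT_lgm / dF_lgm
      print(round(ecs, 1))   # ~2.8 C with these inputs; a larger dF or smaller dT lowers it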

  15. Steven Mosher

    Chylek used the Vostok ice cores to estimate that λ was about 0.49

    http://rankexploits.com/musings/2011/a-simple-analysis-of-equilibrium-climate-sensitivity/
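
    In the λ notation of the head post that converts directly to a per-doubling figure, roughly the Chylek and Lohmann value quoted in the previous comment:

      lam = 0.49       # Vostok-based sensitivity parameter, C per (W/m2)
      F_2xCO2 = 3.7    # forcing from a CO2 doubling, W/m2
      print(round(lam * F_2xCO2, 1))   # ~1.8 C per doubling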

  16. Rud,

    They plugged Schwartz’ observational findings into GCM GISS-ER, and reproduced his results! So they argued Schwartz’ data were wrong, because “this model is known to exhibit a true equilibrium climate sensitivity of 2.7 C under doubled CO2 conditions.” They then did a similar thing for his time scale parameter using a 14 GCM ensemble. They said, “the estimates of time scale produced by this method are generally unrealistically low in comparison to the known behavior of the models in response to changes in GHG forcing.” They in effect said twice in one comment that the GCM models are trustworthy, and evaluation of 125 years of actual climate observations isn’t.

    It appears you haven’t understood the point here. Schwartz’s method was meant to be able to discern climate parameters empirically from measurements of Earth system responses – it shouldn’t matter whether the “Earth system” is the real Earth or the representation of Earth in AOGCMs. However, unlike on the real Earth, the parameters Schwartz was trying to calculate through his method are known values in GCMs. The fact that it could not get close to these known values indicates a problem with the method. Note that Knutti et al. actually advance the same argument in their comment.

    Schwartz’ reply says much: “It is questionable whether measurements should be rejected because they do not agree with models.”

    If you understood the point above you would realise that this doesn’t make sense as a reply. And there’s a reason it doesn’t make sense: this sentence by Schwartz is actually responding to an entirely different matter in a different paper (Knutti et al.).

    • Rudyard Istvan

      Actually, I do understand it after having studied the article, replies, and corrections for over one year. Paul, there is no need to debate ‘ships passing in the night’. I provide a clear logic. You should do likewise. Any time you assert that observations cannot get close to GCM values (and charitably I do not exactly quote you above), you have at least gotten to the essence of the problem.
      Lest you think I do not understand the GCMs, please refer to the NCAR/Tn-464-SRE (2004) precise technical description of GCM NCAR-CAM3. Now, email me your interpretation of what might be wrong or right in that GCM given Schwartz, by section. I will respond only with exact ‘quotations’ from a previously written summary of where I think its problems are, mathematically–and why. Previously provided to others as a verification. Deal? Hints. Start with section 3.2.2, then 3.3.6, then 4.1.1…(and then 4.1.2 and 4.2….).
      Bring your best logic, facts, and data. Not opinions. Which are a big problem on this and all other climate blogs.
      Regards

      • Rud,

        You’re still talking about this issue as if it is about comparing observations with GCMs which indicates that you really don’t understand the point.

        I’ll provide this quote from the Knutti et al. comment to see if it helps:

        An obvious test for the method proposed by SES is to apply it to the simulation of the 20th century by AOGCMs and compare the estimated sensitivity with the known sensitivity of the models, determined from simulations in which atmospheric CO2 is doubled.

        Imagine we have two “Earths”. On one “Earth” we don’t know it’s climate sensitivity and can only attempt to estimate it from observations. On our second “Earth” we (somehow) know it’s climate sensitivity. Now, let’s say someone comes up with a model for estimating the climate sensitivity of Earth 1 from various measurements. Someone else then tests this model by using the same methods to estimate Earth 2’s sensitivity, a value which is already known. What would you think of the methods if the estimate produced was different from the known sensitivity?

      • Please excuse apostrophe fail.

    • As far as I can tell, what Schwartz has is a simple model with one parameter for the earth’s heat capacity, and another for a time constant. He proceeds to fit this simple model to observations to obtain climate sensitivity, and calls that ‘observations’. This can’t be compared with a GCM with a full 3d ocean model, and if I had to trust one, it would be the GCM rather than Schwartz’s 1-parameter ocean.

      • “This can’t be compared with a GCM with a full 3d ocean model, and if I had to trust one, it would be the GCM rather than Schwartz’s 1-parameter ocean”

        Yes, a much more complex model with many assumptions is always bound to be more accurate and useful than a simple one with far fewer assumptions.
        That is why Copernicus’ model was so very clearly inferior to those wonderful epicycles.

      • He had one parameter to represent the whole earth’s heat capacity. There is a use for simple models, but you can only go so far with them, not claim it is better than a GCM. At least Spencer used multiple levels, but that would be epicycles to you.

      • If it was a one-parameter model of a first-order response function (i.e. an exponential lag), that is indeed a huge failure in applying thermal physics.

        Hansen understood this wouldn’t work at least 30 years ago.

        So DocMartyn, it is not a question of whether it is a simple model or not, it is a matter of applying the right master equation that can do the spatio-temporal energy balance properly. James Hansen knows how to do it, but you apparently can’t.

      • Web, can you tell me when the total photon efflux at the top of the atmosphere EXACTLY matches the influx?

      • Dave Springer

        Ptolemian Epicycles

        That’s a first attempt at spelling without looking it up. How’d I do?

      • Dave Springer

        Hmmm… well it’s safe to say Ptolemaic is the preferred version according to the google fight.

        http://googlefight.com/index.php?lang=en_GB&word1=ptolemian&word2=ptolemaic

      • Basically, I think this characterises Schwartz’s model, though I can’t see any cause for a slanging match between GCMs and simple models. Both have their place, advantages and disadvantages. However, the point I’m making, taken from the two comments on Schwartz’s paper, isn’t about comparing GCMs with observations or comparing GCMs with simple models.

        Both GCMs and the Earth can produce surface temperature and ocean heat content time series. Schwartz purports to be able to discern the climate sensitivity of the Earth by processing these time series through his model. A reasonable test of this claim, therefore, is to run Schwartz’s model with the same outputs (surface temperature, ocean heat content) but from a GCM, for which climate sensitivity is known through controlled experiment. This was what was done for both the Knutti et al. and Forster et al. comments on Schwartz’s paper, and both found the performance of the model to be poor – it didn’t get close to the real sensitivity of the GCMs.

        If the model is unable to accurately discern climate sensitivity in a system where the true value is known, why would we believe it can provide any meaningful information about Earth?
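
        For concreteness, the kind of procedure being tested looks roughly like the sketch below. This is only a schematic stand-in (the detrending, lag choices and heat-content handling in the actual paper and comments are more involved), and the series fed to it are left as placeholders rather than real data.

          import numpy as np

          def single_box_sensitivity(temp, ohc, dt_years=1.0):
              """Rough single-box estimate: time constant from the lag-1 autocorrelation
              of detrended temperature, effective heat capacity from the regression of
              heat content on temperature, sensitivity = 3.7 * tau / C per doubling."""
              t = np.arange(len(temp))
              temp_d = temp - np.polyval(np.polyfit(t, temp, 1), t)   # detrend
              r1 = np.corrcoef(temp_d[:-1], temp_d[1:])[0, 1]
              tau = -dt_years / np.log(r1)                            # years
              C = np.polyfit(temp, ohc, 1)[0]                         # W yr m^-2 K^-1
              return 3.7 * tau / C                                    # C per CO2 doubling

          # temp: annual surface temperature anomalies (C); ohc: heat content in W yr m^-2.
          # Fed GCM output, the estimate should recover the model's known sensitivity;
          # the point of the comments is that it did not.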

  17. “Scientists have proven that a majority of the warming since 1970, with a central estimate of 100% (could be more, could be less), is caused by human activities. That’s the scientific fact of the matter.”

    This is a canard. Pure and simple.

    • Facts are facts, whether you like them or no.

      Feel free to publish a paper explaining the majority of the recent warming by as yet undiscovered natural forcings.

      Just burying your head in the sand and wishing the facts away won’t help. ;)

      • Alexej Buergin

        Great moments in American philosophy:
        A rose is a rose is a rose (Gertrude Stein)
        An inch is an inch (Richard Petty)
        Facts are facts (Robert)
        A canard is a duck, pure and simple (pokerguy)
        You cannot prove science (Feynman and many others)

  18. And if this is a representative sample of the way you think, and from what I’ve seen it is, you might want to discard everything you think you “know” about CAGW and start over again.

  19. I am not sure that climate sensitivity means much as a predictor in such a non-linear system as climate. For example, climate sensitivity calculated on 1939 observational values would be vastly different from those calculated in 1941.

    IMO water vapour should not be classified as a feedback. It absorbs heat from the earth’s radiation directly, just as CO2 does, the difference being that CO2 is a narrow band absorber at its resonant frequency of IR whereas water vapour is broad band, mostly at longer wavelengths than the Stefan-Boltzmann peak.

    “Since there can be no certainty about the future of any complex nonlinear dynamic system”

    I have never found non-linearity to be adverse to the predictability of dynamic systems. It just makes the construction of simulators harder.

    However I agree with the author that AGW has been exaggerated.

    • ” Alexander Biggs
      I am not sure that climate sensitivity means much as a predictor in such a non-linear system as climate. For example, climate sensitivity calculated on 1939 observational values would be vastly different from those calculated in 1941.”

      What worries me is that the temperature of location A measured in 1939 changes in 1985, then again in 1999 and again in 2011. In a few decades the central problem in Climate Science is not going to be the value of the ECS, but why the USA was not covered by glaciers in the 1930’s.

      • Doc, “What worries me is that the temperature of location A measured in 1939 changes in 1985, then again in 1999 and again in 2011. In a few decades the central problem in Climate Science is not going to be the value of the ECS, but why the USA was not covered by glaciers in the 1930′s.”

        I was looking at some climate wagers. The toughest part is predicting the adjustments :)

      • In 1930 the world was going through an unusually rapid rise in temperature (0.15C per decade). But I see your point: if you work backwards in a linear fashion from the present exaggerated estimates then you would conclude it should have been much colder in 1930.

  20. Rud
    Thanks for the review and evaluation. I encourage you to list ALL climate sensitivity estimates, then add discussion, error bars etc. e.g., see:

    Chapter 2 Forcings and Feedbacks PDF (0.6 MB) Climate Change Reconsidered, Interim Report 2011 NIPCC

    Chapter 2 Feedback Factors and Radiative Forcing PDF (0.6 MB)
    Climate Change Reconsidered 2009 NIPCC

    Climate Sensitivity links, NIPCC

    • It seems to me that water vapor and CO2 levels rise with increasing temperatures. But in terms of a control knob, clouds have the biggest and most immediate effects. So increasing water vapor or CO2 doesn’t have as measurable an effect as increasing or decreasing the amount of clouds.

      What is important in terms of average global climate is solar energy on the tropical ocean area, as affected by clouds. Clear skies in the tropics will cause “global warming”; *excessive* cloudiness in the tropics will cause global cooling. Cloudiness in other regions is not as significant in terms of global temperature, though it obviously affects regional temperatures.

      So if you want to increase or decrease global temperature, one would control the amount of clouds during the day, in the tropics and in particular over tropical oceans. By having no clouds one could add somewhere around +10 C to global temperatures, and by having complete cloud coverage during the day over the ocean, subtract 10 C from global temperature.
      To get such temperature extremes requires a long time, hundreds if not thousands of years, because what is occurring is heating up the ocean, and you are only adding energy by a fairly small amount per day or year.
      One can also include summertime heating of the ocean above the tropics; cloudiness in wintertime in these regions is less important.
      During an icebox period the other large knob is snow cover in temperate regions, which significantly reduces warming if snow covers large areas during summer.
      The other main knob is not very movable: it’s the locations of land masses on a mostly ocean-covered world.

      • “It seems to me that water vapor and CO2 levels rise with increasing temperatures. But in terms of a control knob, clouds have the biggest and most immediate effects. So increasing water vapor or CO2 doesn’t have as measurable an effect as increasing or decreasing the amount of clouds.”

        But clouds start out as water vapour in the atmosphere. In general, more humidity results in more clouds. When water evaporates from forests, lakes or the oceans, it cools both due to the latent heat of evaporation. When clouds turn into precipitation, the same amount of heat is added to the resulting water, so they cancel? No, because the heat can more easily escape at high altitude, there is a net cooling effect.

      • “It seems to me that water vapor and CO2 levels rise with increasing temperatures. But in terms of a control knob, clouds have the biggest and most immediate effects. So increasing water vapor or CO2 doesn’t have as measurable an effect as increasing or decreasing the amount of clouds.”

        “But clouds start out as water vapour in the atmosphere. ”
        Yes, clouds are formed by water vapor.

        “In general, more humidity results in more clouds. ”
        Yes, generally. And to form into water droplets, H2O gas generally condenses on some liquid or solid.

        “When water evaporates from forests, lakes or the oceans, it cools both due to the latent heat of evaporation.”
        Generally most evaporation occurs when there are warmer conditions, such as during the day [and mostly over ocean surfaces].

        ” When clouds turn into precipitation, the same amount of heat is added to the resulting water, so they cancel?”
        Cancel and/or transferred.
        ” No, because the heat can more easily escape at high altitude, there is a net cooling effect.”

        When clouds form into droplets, heat is released, and clouds are a few miles up. Though much of the heat is heating up the air [convection, rather than radiant heat].

        I am not going into why one has more clouds or fewer clouds, rather just stating the obvious: less cloud coverage in the tropics during the day results in warming the Earth, and more clouds would be a large factor in cooling the Earth [in terms of average global temperature].

      • Dave Springer

        Clouds are a negative feedback, obviously. If they were positive the tropical ocean would never be exposed to sunlight because of an unbroken layer of clouds that kept ocean temperature maximized and air saturated with water vapor.

        In fact what happens is that clouds break which allows sunlight to reach the water which raises evaporation which causes clouds to form which block sunlight to the ocean which lowers evaporation which decreases clouds. Lather, rinse, repeat. This yields a very stable equilibrium point of some fixed percentage of cloud cover for any given amount of insolation. Anyone who would try to dispute that clearly isn’t playing with a full deck.

  21. ‘In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’ AR4 S 3.4.4.1

    ‘The overall slight rise (relative heating) of global total net flux at TOA between the 1980’s and 1990’s is confirmed in the tropics by the ERBS measurements and exceeds the estimated climate forcing changes (greenhouse gases and aerosols) for this period.’ NASA/GISS ISCCP-FD

    IR up at TOA in fact increased (cooling) in both ISCCP-FD and ERBS. I would publish but I am not sure that this is news to anyone except Robert.

    I am sure there is some explanation for this that rescues AGW – I await with bated breath.

  22. Say, Tempterrain, July22 @7.33pm, Captain CO2 fighting the good fight, using his powers ter keep the next ice age at bay. Sigh of relief, i was getting worried there fer a while.

    • tempterrain

      Yes, this is a typical climate denier’s argument: if a little extra CO2 may be a good thing for keeping the next ice age at bay, then a lot of extra CO2 should be even better.

      I wouldn’t recommend you follow that kind of ‘logic’, if that’s the correct word for it, next time you have a headache. You shouldn’t think that because one or two aspirin tablets, or whatever, have made you feel better, it would be a good idea to swallow the entire contents of the bottle.

    • Beth and tempterrain

      The good “cap’n CO2” may be fightin’ the noble fight to keep us all from freezin’, but if Rud Istvan is right, he’s runnin’ out of muscle power (as he did the past 10-15 years).

      Max

  23. Rud Istvan

    A well written article.

    Thank you.

  24. On the IPCC assessment process:

    …the IPCC is not any more an assessment of published science (which is its proclaimed goal) but production of results. The softened condition that the models themself have to be published does not even apply because the Japanese model for example is very different from the published one which gave results not even close to the actual outlier version (in the old dataset the CCC model was the outlier). Essentially, I feel that at this point there are very little rules and almost anything goes.

    http://bit.ly/p5pWWs

    • Girma

      Do you still believe that 2012 will be cooler than was 2011?

      • Back-to-back-to-back La Nina dance needs work.

      • “Rob Starkey
        Do you still believe that 2012 will be cooler than was 2011?”

        I believe that 1930 will be colder in 2020 than it was in 2011, which is colder than it was in 1999.

  25. Rud’s message concerns, he says, Truth. Or truth.

    That’s his story. His book. His narrative.

    Like anything interesting enough to base a narrative on, it’s more complex than it appears on its unexamined face: a coin with more than one side, if flipped better seen for what its opposite tells us than itself alone reveals.

    Rud’s not talking about truth. Or Truth. He’s making points about truth/Truth through the power of its opposites, Truth’s tesseract spun on some axes through the dimensions we can see to expose the dimensions hid from us.

    He uses, ably and with a practiced hand, the language of business, and of technology; his clarity, straightforwardness, forthrightness are admirable and fitting to the topic of truth and of Truth. But he has the problem that those thorny opposites are not forthright, straight or clear.

    It’s the language he doesn’t use, the things he artfully omits, that illustrate in the whitespace what he’s really talking about. Propaganda, deception, FUD, not where he points them out but where he doesn’t.

    A very cunning device.

    • Ever thought you may be over analyzing things?

      I believe Rud mentioned that he has some agricultural interests. Those interests would give him a different perspective. A lot of engineers are skeptical, probably because they have a different perspective.

      I look at what the models miss. That is what I was taught to do, look for the problems and fix them, don’t invent problems to fix.

      To me it looks like land use impact is grossly underestimated. I dig a little bit, look at Soviet agricultural expansion, dig up a few tidbits of information, since the Soviets were not all that willing to share information.
      “Science and advanced practical work recommend, in such cases, prompt breaking down of the ice incrustation and the settled snow with trailer implements. The fields should be sprinkled with peat dust for rapid snow thawing.” http://www.dtic.mil/dtic/tr/fulltext/u2/a337161.pdf

      Think about what northern hemisphere industrial expansion might do,

      https://lh6.googleusercontent.com/-osd5eiqGKEE/UAtS_rxcADI/AAAAAAAACkc/6fG_pkrEXdM/s800/black%2520carbon%2520greenland.png

      And I really don’t see as much CO2 specific impact as I do other impacts.

      One main point Rud made is that the sensitivity estimates are generally decreasing. That is also a touch odd, doncha think?

      http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/whatsnormal.png

      Just about all the satellite data indicates that there has been a climate shift; a shift is expected in a bi-stable system from time to time.

      So you ignore the heart of the message and creatively discredit the messenger as being deceptively cunning. Brilliant!

      • captdallas2 0.8 +0.2 or -0.4 | July 23, 2012 at 10:25 am |

        Over. Under. Doesn’t make it wrong. I’ve mentioned my own agricultural interests from time to time.
        Could be I have some insight into his perspectives.
        I’ve also done a fair bit of work with computer-generated constructs, and once made the mistake of removing the pixels that should be missing from a fractal curve.
        Which made the whole thing vanish.
        If you look for what should be missing, you’ll often overlook what’s right there in front of your eyes.
        Whereas with Rud’s piece, there’s plenty that shouldn’t be missing that is.
        So, yes, land use impact might be significant.
        However, CO2 is the major forcing that perturbs the complex system and moves it to a new state. Land use comes up at a fraction of CO2, in terms of energy effects. It’s big, and it’s a serious issue, but just sit down and do the math yourself and tell me what you come up with, instead of just citing what some of the impacts might be. Parameterize. Maximum land use impact compared to minimum CO2 radiative transfer impact.
        If you get more than 1:3, you’ve used some very, very unexpected numbers in your calculations.
        And sensitivity estimates amount to this: all future sensitivities are equally true until they have been observed, and the universe of possibilities is collapsed. There is no one right sensitivity.
        And I ignore nothing. I just know the difference between true and false.

  26. The truth is out there.

  27. Rud, good article. In light of how the IPCC handled Forster’s studies, do you think they will be more honest and open minded for their next publication?

  28. Your reading of the Foster et al comment [26] is wrong, as Schwartz’ response was. They did not compare model-based estimates versus observation-based estimates. They tested the method designed by Schwartz to estimate climate sensitivity in artificial systems (climate models) in which the correct answer (each model’s sensitivity) is known. They found that the method underestimates each model’s sensitivity. Whether this conclusion can be extrapolated to the real world is another matter. But what they did show is that, when applied to controlled systems, the method fails.

    • I’ve pointed this out above. However, it turns out that Rud Istvan understands perfectly everything that is in the original paper, three comments and response, having studied them for over a year. For daring to suggest otherwise you are now compelled to write a full technical report on the NCAR-CAM3 atmospheric model. He’ll be expecting it post-haste.

      I think you are incorrect to say that Schwartz’s response was wrong (assuming you’re referring to the statement which Rud presented at the bottom of the paragraph: “It is questionable whether measurements should be rejected because they do not agree with models.”). While that is a direct quote from Schwartz’s response paper, it is actually from an entirely different section dealing with Knutti et al.’s criticism of his handling of observational ocean heat content data. Given Rud’s immaculate understanding of everything in these papers I can only conclude that he misattributed the target of that sentence deliberately.

  29. Beth Cooper

    Kim 23/07 1.23pm: “I think I’ve never heard so loud …”
    And as it gets cloudier,
    The message’ll get loud(i)er.

  30. Judith Curry

    Thanks for a very informative paper.

    After reading the Rud Istvan post carefully and checking out some of the cited references, it becomes clear to me that the IPCC GCM-based mean value estimate for 2xCO2 equilibrium climate sensitivity (ECS) of 3.2°C is a “can of worms” with a high bias.

    Observational data seem to point to a much lower value – probably somewhere between 0.7°C (Lindzen + Choi, 2011) and 2.4°C (Chylek, Schmittner), or around half of the IPCC value.

    IPCC has also estimated CO2 increase by 2100 for various model-based “scenarios” and “storylines”. Whether or not these are also exaggerated is a moot point.

    The lower estimate B1 (560 ppmv) seems reasonable enough (continuation of the current exponential rate of increase), but the higher estimates A2 (850 ppmv) or A1FI (960 ppmv) do not – there is only enough carbon in all the optimistically estimated remaining fossil fuels on our planet (WEC 2010) to arrive at around 1,030 ppmv when they are all completely consumed, so it is highly unlikely that these estimates are realistic.

    At any rate, the “take home” from all this is that the IPCC projections of 21st century warming are based on exaggerated estimates for ECS as well as most likely also for atmospheric CO2, so should be taken with a large grain of salt.

    Max

  31. The actual simple fact is we don’t know what the sensitivity is, and we are not going to know for another 5-10 years at the earliest. End of discussion. CO2 is now at a level where it has not been for 10 million years.

  32. Rud Istvan

    …when I carried out their calculation, I ended up with 1.3°C.

    Please show us how you got 1.3 deg C for climate sensitivity?

  33. I’m not sure we should be so ready to throw hadcrut3 out yet. Has anyone done an analysis on the comparison between the water vapor found in the Vonder Haar paper and the different temperature indices? My eyeball approach seems to indicate a better match between hadcrut3 and water vapor than hadcrut4 and water vapor if you assume, as I have, that water vapor is a good proxy for temperature.

  34. Lauri Heimonen

    A quotation from ‘manacker’ http://judithcurry.com/2012/07/22/what-climate-sensitivity-says-about-the-ipcc-assessment-process/#comment-221648 :
    ”At any rate, the “take home” from all this is that the IPCC projections of 21st century warming are based on exaggerated estimates for ECS as well as most likely also for atmospheric CO2, so should be taken with a large grain of salt.”

    I agree with what ‘manacker’ says. In fact, the expression ‘exaggarated’ can be regarded still as too mild. Based on findings in reality, the recent increase of CO2 content in the atmosphere is dominated by natural warming, where any influence of anthropogenic CO2 emissions cannot be found. By calculating you can find only 4% anthropogenic CO2 in the recent increase of CO2 in the atmosphere; http://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-221259 .

  35. Lauri Heimonen | July 24, 2012 at 9:57 am |

    4% compound interest over 260 years with a starting capital of 280 (as in ppmv CO2 in 1752, the start of the Industrial Revolution) would result in a total interest of 7512942.48 (http://math.about.com/library/blcompoundinterest.htm)

    By calculating, you find that only a mathematically blinkered logic could call 4% anthropogenic CO2 ‘exaggarated’ [sic].
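
    The figure being quoted is just compound growth, easy to check; the 4% per year rate is taken from the comment being answered purely to reproduce that number, not as a claim about actual emission growth:

      principal = 280.0    # ppmv CO2 taken as the 1752 starting "capital"
      rate = 0.04          # 4% per year, as in the comment above
      years = 260

      total = principal * (1 + rate) ** years
      interest = total - principal
      print(round(interest, 2))   # on the order of 7.5 million, the figure quoted above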

    This much external forcing perturbing the Carbon Cycle is enormous; its amount is so great as to be unprecedented by orders of magnitude for any single species on any timescale beyond a single lifetime. Herds of buffalo on the Great Plains were legendary for kicking up enough dust to block the sun, and must have belched phenomenal volumes of methane, and yet do not scale to even a fraction of a percent of human influence. Algae blooms are thought to be capable of bringing on an ice age over the course of tens of thousands of years, but they work at a fraction of a percent of the timescale of this one single human-caused factor.

    Dismissing AGW because subjectively 4% sounds small to you is, simply, an aesthetic argument fraught with faulty premises and implicit assumptions about what is and is not significant, and further appeals to the Salbyist Fallacy that the inputs to an iterative cycle vanish after the cycle has integrated them. The whole point is they’ve already changed the state of the system in that iteration as an external perturbation, and their continuation over many iterations only makes it more of a perturbation.

  36. Beth Cooper

    Kim’s motif 23/07 12.23pm … “I think I’ve never heard so loud…’
    and add ons:

    ‘ And as it gets cloudier
    The message ‘ll get loud(i)er,

    ‘ … but could it get any rowdier?’

    ‘It won’t in a climate journal
    Because It ain’t allowed in there.’

  37. Chad Wozniak

    Mr. Oliver K. Manuel:

    The term “scientific dogma” would seem to be an oxymoron, rather like “Soviet journalist,” or, as the academic establishment, the New York Times and Josef Goebbels would define it, “free speech” (i.e., you’re free to say whatever we tell you to say).

    Other than that I agree with your comment 1,000 percent.

  38. Chad Wozniak

    Postscript to my last comment: today’s contest is between reality and criminal-reactionary-leftist political ideology.
