Climate sensitivity follow-up

by Nicholas Lewis

JC note:  Pursuant to Nic’s post on “The IPCC’s alteration of Forster & Gregory’s model-independent climate sensitivity results,” he has sent a letter to Gabi Hegerl, who was coordinating lead author on chapter 9 of the IPCC AR4.

Dear Gabi,

I asked you yesterday whether you were aware that two of the climate sensitivity PDFs in Figure 9.20 of the IPCC AR4 WG1 report were not in fact based on a uniform prior in equilibrium climate sensitivity (ECS or S), despite it being stated in Table 9.3 that they were so based.  I stated that one PDF was effectively based on a uniform prior in the climate feedback parameter (Y or λ), which rather detracted from the IPCC’s case for transforming the Forster/Gregory 06 results from that basis to a uniform prior in ECS basis.

You responded saying that you would be surprised if there was an error in the Figure as the Table and Figure had been double checked with paper authors, but if there was an error you would like to know.

The Figure 9.20 climate sensitivity PDF that is effectively based on a uniform prior in the climate feedback parameter Y is that for Gregory 02.  As it happens, the arguments for that being the correct basis are very similar to the arguments applying in the case of Forster/Gregory 06.  In both cases the (assumed linear) physical relationship between observables that is being estimated relates to Y (or λ), not S, and the error characteristics are similar – assumed Gaussian, with errors in the temperature data dominated by errors elsewhere.

Why do I say that the PDF for Gregory 02 in Figure 9.20 – which is almost the same as that in the original paper – is effectively based on a uniform prior in Y or λ, not a uniform prior in ECS as stated?  Gregory 02 estimated λ from the equation Q − F = λ ΔT, where Q is imposed radiative forcing, F is heat flux into the ocean and ΔT is the change in global average surface temperature, each quantity being the difference in its mean between 1861–1900 and 1957–1994 (indicated by underbars in the original).

As one would expect, due to errors in Q and F being dominant and being assumed to have a Gaussian distribution, the PDF for the climate feedback parameter has an almost Gaussian distribution.  The situation is very similar to that in Forster/Gregory 06, where it was correctly stated:

“For the uncertainty analysis we assume, like Gregory et al. (2002), that errors in the observable parameters (N, Q, and ΔTs) all have Gaussian distributions. In a Bayesian statistics framework, this is equivalent to assuming Gaussian observational errors and a uniform “prior” in each of the observables. Since the uncertainties in Q and N are much larger than in ΔTs (a factor influencing our choice of regression model; see appendix), uncertainty in Q – N is linearly related to uncertainty in Y, so our assumption is also approximately equivalent to assuming a uniform prior in Y.”

In Gregory 02, the estimate of Y is not obtained by regression in the usual sense of the word: as there are only two observations (the means for 1861–1900 and for 1957–1994) a perfect straight line fit results.  But, as in Forster/Gregory 06, uncertainty in Q − F is linearly related to uncertainty in Y, and it is equally the case that the error assumptions in Gregory 02 are approximately equivalent to assuming a uniform prior in Y.

The estimated PDF for the climate feedback parameter Y was not shown in Gregory 02.  Figure 1 below shows what I calculate it looks like. The PDF has been computed in the same way (apart from the reciprocal relationship) as the climate sensitivity PDF in Figure 2 in the original paper, using the same data and error distribution assumptions but with a larger number of random samples to improve accuracy.
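In outline, that computation can be reproduced along the following lines – a minimal Python sketch with placeholder central values and 1-sigma errors (illustrative only, not the actual Gregory 02 inputs), assuming Gaussian errors in each observable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # large sample, to keep the estimated PDF smooth

# Placeholder central values and 1-sigma errors; each quantity is the
# difference in means between 1861-1900 and 1957-1994.
Q  = rng.normal(0.50, 0.30, n)    # radiative forcing change, W m-2
F  = rng.normal(0.15, 0.10, n)    # ocean heat flux change, W m-2
dT = rng.normal(0.335, 0.033, n)  # surface temperature change, K

Y = (Q - F) / dT                  # climate feedback parameter, W m-2 K-1

# Histogram estimate of the (near-Gaussian) PDF of Y
density, edges = np.histogram(Y, bins=200, range=(-2.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
```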

Figure 1

Gregory 02 in fact computed the probability distribution of the climate sensitivity directly from the random samples of the variables.  The PDF from their paper (Fig. 2) is shown as the green line in Figure 2, below.  The PDF from Figure 9.20 of AR4 WG1 is shown as the magenta line.

The blue line in Figure 2 shows my calculation of the PDF for climate sensitivity S, derived by a simple inversion of the PDF for Y shown in Figure 1.  By simple inversion I mean that given by the standard rules of probability for converting PDFs between two parameters, being multiplication by the absolute value of the derivative of the old parameter with respect to the new one, or by 3.7/S² in this case, and renormalization. That procedure does not involve any change to the effective prior distribution used, which remains approximately uniform in Y.  My blue line is virtually identical to the green line from the Gregory 02 paper, except that as I used a larger sample size my line is less jagged.
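The inversion described above is mechanical. A sketch, assuming a Gaussian approximation to the Y PDF (placeholder mean and standard deviation) and the 0–10°C range used in the figures:

```python
import numpy as np
from scipy.stats import norm

F2x = 3.7                    # forcing for doubled CO2, W m-2, so S = F2x / Y
pdf_Y = norm(1.0, 0.5).pdf   # placeholder Gaussian approximation to the Y PDF

s = np.linspace(0.1, 10.0, 500)
# p_S(S) = p_Y(F2x/S) * |dY/dS| = p_Y(F2x/S) * F2x / S**2
p_S = pdf_Y(F2x / s) * F2x / s**2
p_S /= p_S.sum() * (s[1] - s[0])   # renormalize over the truncated range
```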

The small differences between the three lines are due partly to digitization errors, partly to fluctuations caused by the restricted sample size used in the Gregory 02 paper, and partly to smoothing in Figure 9.20.

Figure 2

As in the actual IPCC Figure 9.20, climate sensitivity in Figure 2 has been truncated to the range 0 – 10°C.  In the case of Gregory 02, doing so involves a substantial truncation of the distribution of S at the top end.  In fact, there is a considerable area of the PDF for Y below Y=0, which can be ruled out by the long term stability of the Earth’s climate.  That suggests to me that the Gregory 02 estimate of Q may be too low and/or its estimate of F too high.  I will come back to this point later.

The orange line in Figure 3, below, shows what the Gregory 02 climate sensitivity PDF would have looked like in IPCC Figure 9.20 had it actually been computed on the basis stated in Table 9.3.  As you know, that involves multiplying by S² and renormalizing, reversing out the effect of the multiplication by 3.7/S² on converting the parameter from Y to S.  I have computed the PDF using my larger-sample simulation of the Gregory 02 results, as using the transcribed Figure 9.20 graph is not very accurate at high sensitivity levels and the PDF computed from it fluctuates considerably.  I have shown the uniform prior in Y based PDF for comparison – the blue line.
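Continuing the sketch above, the basis stated in Table 9.3 is obtained by multiplying the S PDF by S² – which, up to a constant absorbed in the renormalization, undoes the F2x/S² Jacobian factor – and renormalizing:

```python
import numpy as np
from scipy.stats import norm

F2x = 3.7
s = np.linspace(0.1, 10.0, 500)

# S PDF on the (approximately) uniform-in-Y basis, as in the earlier sketch
p_Y_basis = norm(1.0, 0.5).pdf(F2x / s) * F2x / s**2
p_Y_basis /= p_Y_basis.sum() * (s[1] - s[0])

# Uniform-prior-in-S basis: multiply by S^2 and renormalize over 0-10 K
p_S_basis = p_Y_basis * s**2
p_S_basis /= p_S_basis.sum() * (s[1] - s[0])
```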

Figure 3

I think it a pity that the IPCC failed to show in Figure 9.20 the sensitivity PDF for Gregory 02 on the uniform prior in S basis on which Table 9.3 stated it was computed.  Had it done so, many more people might have realised just how inappropriate the uniform prior in S basis is.  I am not suggesting that the failure to transform the Gregory 02 sensitivity PDF to a uniform prior in S basis, in the way that was done with Forster/Gregory 06, was deliberate.  I presume that it was just a simple error.  But it is surprising that neither the IPCC writing team nor the paper author realised that the Gregory 02 sensitivity PDF embodied an approximately uniform prior in Y, not in S.  That point seemed fairly obvious from reading the paper.  And it does raise doubts about how informed was the consent of the Forster/Gregory 06 authors (or at least that of Jonathan Gregory) to the transformation of the results of that paper onto a uniform prior in S basis.

I said earlier that the fact that a substantial portion of the distribution of the estimate for Y was below zero – a physically unrealistic result – suggested that the estimate of Q may be too low and/or the estimate of F too high.  And I think that there are grounds for that suspicion.  The central value of 0.32 W m-2 (per unit area of the Earth) for the change in heat flux into the ocean, F, over 1957-1994 was based on the Levitus et al. 2000 dataset. That dataset was corrected in 2005, giving a lower figure of 0.20 W m-2 (over the same period), or possibly slightly less.  The corrections made to the Levitus dataset in 2009 would reduce the ocean heat flux figure further, of course, but that post-dates AR4.

Also, the Gregory 02 central estimate for the total change in forcings between 1861–1900 and 1957–1994 of 0.36 W m-2 looks on the low side to me.  I note that the key sulphate aerosol component was derived using the HadCM3 AOGCM, rather than being based directly on observational estimates.  If one uses instead the well known GISS observational forcings dataset, from 2007, used in NASA GISS model E, the total change in forcings between the two periods is twice as high, at 0.72 W m-2 [i]. The comparable net change in radiative forcings illustrated in AR4 WG1 Figure 2.23, as used by another GCM, seems to be even higher, at around 1 W m-2 between 1861–1900 and 1957–1994.

If the corrected 2005 Levitus dataset ocean heat flux data and the GISS change in radiative forcings estimates were used, (Q − F) in the Gregory 02 equation (3) would be centred on 0.68 W m-2 instead of on 0.20 W m-2.  Recomputing the Gregory 02 results, simply substituting those corrected/revised central estimates for the original ones, would produce a substantially lower estimate of S, with a far thinner upper tail to its PDF – much more in line with the Forster/Gregory 06 results.  Figure 4, below, illustrates this.  The PDFs are all given on the original, approximately uniform in Y, basis.
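In outline, that recomputation can be mimicked with a toy Monte Carlo by simply shifting the centre of (Q − F); the uncertainties below are placeholders, not the actual Gregory 02 inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
dT = rng.normal(0.335, 0.033, n)      # placeholder temperature change, K

for centre in (0.20, 0.68):           # original vs corrected/revised (Q - F), W m-2
    QF = rng.normal(centre, 0.25, n)  # assumed (placeholder) 1-sigma uncertainty
    S = 3.7 * dT / QF                 # S = 3.7 / Y, with Y = QF / dT
    S = S[(S > 0) & (S <= 10)]        # truncate to 0-10 K, as in Figure 9.20
    print(centre, round(np.median(S), 2))
# The higher centre pulls the whole S distribution down and thins its upper tail.
```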

Figure 4

I look forward to hearing from you once you and your colleagues have investigated this matter.  I appreciate that if you need to call in external statistical expertise, that may take some time.

Yours sincerely

Nicholas Lewis

7 July 2011


[i]  This is treating missing 1861-1879 values as the same as for 1880, except for stratospheric (volcanic) aerosols.

Moderation note:  this is a technical thread and comments will be moderated for relevance.

370 responses to “Climate sensitivity follow-up”

1. The discussion of the previous thread brought up in several comments the observation that Figure 9.20 of WG1 might most appropriately be considered a representation of likelihood functions that describe the results of various studies based directly or indirectly on empirical data. This interpretation means that there is nothing strange even in the curve presented above by Nic for the Gregory 02 paper. The likelihood functions need not be normalizable and they may very well provide information on either a lower or an upper limit.

    Neither is there anything strange in the case, where the likelihood function extends to an unphysical range, as that tells only that the empirical data is not accurate enough to tell more.

The likelihood functions are not PDFs in Bayesian inference, but combining them with the prior PDF results in a posterior PDF that takes the empirical data into account.
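In code, the distinction being drawn here is simply the following (a schematic sketch with a toy likelihood curve, not any of the actual Figure 9.20 curves):

```python
import numpy as np

s = np.linspace(0.01, 10.0, 1000)    # grid over climate sensitivity S, K

# A toy likelihood curve (Gaussian in Y = 3.7/S): unnormalized, and it need
# not be normalizable over S at all.
likelihood = np.exp(-0.5 * ((3.7 / s - 1.0) / 0.5) ** 2)

prior = np.ones_like(s)              # a prior PDF, e.g. uniform in S
posterior = likelihood * prior
posterior /= posterior.sum() * (s[1] - s[0])   # only the posterior is normalized
```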

    • The orange line in Fig. 3 shows how strongly the IPCC’s favoured use of a uniform prior in sensitivity dominates a not very informative likelihood function, giving the appearance of a high probability of a very high climate sensitivity that is very largely a reflection of the prior distribution used.

      I doubt whether most people have as extreme Bayesian views as you have, Pekka.

      • Nic,

        You might like my interchange with Pekka on the other thread.

        I write to myself that science investigates “assertions” about the “shared world”, where “assertions” are propositions that are intended to be true or hortatory (“the world is warming because of CO2 and so we must stop burning coal” is an example). When Bayesians make assertions about the shared world they encounter all the problems that they claim frequentists encounter, but in higher dimensional spaces and their inferences are harder to compute (indeed, the difficulties in computation introduce other approximations that reduce accuracy.)

        I think the history of science shows that assertions in the absence of evidence are likely to be false. If someone has evidence in support of some of the priors discussed in your threads, it has not been presented. So experience shows (cf Samaniego) that a chosen prior is likely to be wrong — any assertion based on it is unlikely to be accurate.

        No evidence has been presented that the MLE can be improved upon.

      • Theo Goodwin

        Mattstat,
        You write very politely. Could you or someone tell me who believes that Bayesian statistics can be used to assess evidence for a claim about the objective world, the world that we all share? Bayesian statistics were designed and can be used very effectively to learn about one’s own gambling behavior. That is the extent of their usefulness. Bayesian statistics are inherently subjective. They can work wonders in situations popular among game theorists where decisions are made “under uncertainty:” that is, under conditions where objective evidence is clearly inadequate. So, to use them for assessment of evidence in the real world is to misapply them.

• Theo – That’s certainly incorrect. Bayesian statistics are extremely useful in medical diagnosis, among other areas, including the interpretation of test results. For example, suppose a man has a PSA test for prostate cancer that yields a result >4 ng/ml. What is the probability he has prostate cancer (the test’s so-called positive predictive value)? This can be computed by knowing the likelihood of a PSA >4 in prostate cancer, combined with a prior knowledge of the prevalence of prostate cancer, along with prior knowledge of the frequency of PSA>4 in normals and men with prostate cancer combined. (A numerical sketch of this computation follows below.)

        It turns out that the positive predictive value of that test is rather meager, which is one reason, among others, why the popularity of PSA testing has declined in recent years.

        Where I would tend to agree with you involves Bayesian inference utilizing priors that are arbitrarily based on subjective preferences. Some subjectivity may be inevitable in circumstances where frequency or probability data are sparse, but even then, it’s desirable to have some objective basis for what ends up as a subjective prior.
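As a quick numerical check of the PSA example above, with made-up illustrative rates rather than actual clinical statistics:

```python
# Bayes' theorem for the positive predictive value (all numbers illustrative)
prevalence  = 0.04   # P(cancer) in the screened population
sensitivity = 0.80   # P(PSA > 4 | cancer)
false_pos   = 0.10   # P(PSA > 4 | no cancer)

p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(cancer | PSA > 4) = {ppv:.2f}")   # about 0.25 with these inputs
```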

      • Well, I borked the URL coding, so here’s one without. Hopefully, others can find their way to the references.

        IMHO, using Bayesian techniques is equivalent to spreading BS—because it implies “that uncertainty and degrees of belief can be measured as probabilities” (http://en.wikipedia.org/wiki/Bayes'_theorem). I’m a frequentist and give no credibility to Bayesianism.

        See these for a quick review: http://en.wikipedia.org/wiki/Frequency_probability and http://en.wikipedia.org/wiki/Bayesian_probability. And the debate goes on, and on, and on….

Here are a few more: “Leonard J Savage’s Foundations of Statistics” http://www-history.mcs.st-andrews.ac.uk/Extras/Savage_Statistics.html, “Facts versus Factions: the use and abuse of subjectivity in scientific research” http://chetday.com/healthhoax.html, and Savage’s works at http://cepa.newschool.edu/het/profiles/savage.htm and http://www-history.mcs.st-andrews.ac.uk/References/Savage.html.

        As for using Bayesian methods WRT “climate sensitivity,” I answer: NOT VIABLE.

      • Theo Goodwin

        Thank you for this information. My guess is that getting Bayesians to fully explicate their reasoning will be tantamount to pulling teeth. Scientific Method forces one to make assumptions explicit.

      • Since there is one Earth, there is no value in frequentist methods, unless you have another couple of examples in your pocket.

      • Fred wrote: Bayesian statistics are extremely useful in medical diagnosis, among other areas, including the interpretation of test results.

        Excellent example. And for those cases, considerable effort and funds are expended to determine what prior distribution to use, and the relevant prior is actually the marginal distribution of the desired quantity.

For Theo Goodwin, consider the paper by J. B. Kadane “Toward a more ethical clinical trial”, where Prof Kadane presents the case that the use of Bayesian inference can reduce the misallocation of risk in clinical trials. Prof Kadane worked with clinical trialists at Johns Hopkins University to implement his recommendation.

      • Theo Goodwin

        It is truly difficult to defend Bayesian reasoning. And I do not mean to criticize you. But to defend Bayesian reasoning, you select a case where Rules of Thumb have been deployed and developed for generations. That is what explains the success, not the Bayesian reasoning. In any case, you agree with me that Bayesian reasoning does not force one to make explicit their assumptions, unlike the method of hypothesis, inference, and test.

      • Theo Goodwin, you wrote: But to defend Bayesian reasoning, you select a case where Rules of Thumb have been deployed and developed for generations. That is what explains the success, not the Bayesian reasoning.

        I disagree with that. The Bayesian reasoning substantially improves the use of medical diagnostics, and shows exactly where, and by how much, the rules of thumb are inaccurate. Bayesian reasoning is strongly to be preferred when there is actual reliable information to substantiate the choice of a prior probability density. For a somewhat antique text, check out “Signal Detection Theory and ROC Analysis” by John A. Swets, published about 1972.

        Some Bayesians do (Donald Berry) and some Bayesians don’t (Joseph Kadane) insist that the prior distribution be strongly and explicitly supported by shareable evidence. Francisco Samaniego showed in his book that the claimed philosophical superiority of Bayesian inference depends in practice on the prior being at least accurate enough.

      • That’ll do me as a summary. And you believe what about current climate sensitivity estimation, Matt? That there isn’t “reliable information to substantiate the choice of a prior probability density”, I take it.

      • Theo Goodwin

        Mattstat writes:
        “Bayesian reasoning is strongly to be preferred when there is actual reliable information to substantiate the choice of a prior probability density.”

        I am criticizing Bayesian reasoning because it is used when it is not the case that “there is actual reliable information to substantiate the choice of a prior probability density.” I thought that was obvious. If you eliminate all cases where there is not reliable information then the IPCC would be out of business. Agreed?

        By the way, I meant to reply to Fred. Pardon me for mistakenly replying to you. Now, I am replying to you.

      • Richard Drake asked me: And you believe what about current climate sensitivity estimation, Matt?

        On present evidence for priors, the Bayesian estimate can not justifiably be said to be more accurate than the MLE.

        A paper by Kass and Greenhouse on the ECMO study (published in Statistical Science in about 1988) presented Bayesian analyses with 75 priors. You can always pick a “prior” so that the posterior distribution matches your belief.

        We don’t need more evidence and discussion to justify a best prior — it’s too late for that. We need more experiments in the future to produce more MLEs.

One more thing about “climate sensitivity”: there has been an assumption, implicit or explicit, that the climate sensitivity is constant. However, there is a possibility that, starting from now, increased earth temperature might cause increased cloud cover which might reduce insolation and cause a reduction in temperature. If the climate sensitivity is a function of temperature, then the current discussion about the best estimate of climate sensitivity is misdirected. That is what I believe about current climate sensitivity estimation.

      • Rats. Somehow I misclosed the italics around the quote from Richard Drake. Sorry.

      • Don’t worry about rogue italics. Great answer, thank you.

      • Doug Badgero

        MattStat said:

“One more thing about “climate sensitivity”: there has been an assumption, implicit or explicit, that the climate sensitivity is constant. However, there is a possibility that, starting from now, increased earth temperature might cause increased cloud cover which might reduce insolation and cause a reduction in temperature. If the climate sensitivity is a function of temperature, then the current discussion about the best estimate of climate sensitivity is misdirected. That is what I believe about current climate sensitivity estimation.”

Finally! We begin to discuss the possibility that climate sensitivity is not constant and *gasp* it may not be simply linearly variable either. I think I hear the hallelujah chorus.

MattStat wrote: One more thing about “climate sensitivity”: there has been an assumption, implicit or explicit, that the climate sensitivity is constant. However, there is a possibility that, starting from now, increased earth temperature might cause increased cloud cover which might reduce insolation and cause a reduction in temperature. If the climate sensitivity is a function of temperature, then the current discussion about the best estimate of climate sensitivity is misdirected. That is what I believe about current climate sensitivity estimation.

A similar thing has crossed my mind about feedback mechanisms, which are very much related to climate sensitivity. Take for instance snow cover. Suppose for the sake of the argument that higher global temperatures cause more precipitation globally, which in winter would be snow. Snow increases the albedo of earth, and that may be considered a negative feedback. However, the mechanism only occurs below zero (not entirely true, but bear with me). Lower the temperature from, say, 15 C to 5 C and there is no feedback at all.

      • ferd berple

Given that average paleo temperatures for the earth over the past 600 million years have been almost entirely within the range of 11C – 22C, over a wide range of CO2 levels, and the current average temperature of the earth is 14.5C, what possible justification is there to assume a uniform prior of 1C – 18.5C?

        The only prior that could be assumed would be from -3.5C to 7.5C. (11-14.5, 22-14.5). The net effect of choosing 1C to 18.5C is to skew the sensitivity towards the higher range, creating a false picture.

      • ferd berple

ps: the choice of 1–18.5C may not have been accidental. Reading the IPCC review notes, I find that the original choice for the upper limit was 10C, but from my reading it appeared that a reviewer, Mann, suggested this be increased. From what I read it was increased from 10C to 18.5C, but truncated at 10C. This would, it seems to me, increase the sensitivity for CO2 in the IPCC report, above what was in the original data.

      • You be careful how you step in them pies.
        ==================

• We are – again – plagued by badly defined question. MattStat is presenting facts and even theorems on certain issues; I have been looking at the same basic settings with different questions in mind; and Fred has a third set of questions.

As long as we present parametric results as a way of describing the outcome of the analysis without specifying precisely why we are interested in the data, we continue to face this problem.

        I could say also: It’s semantics again.

To give an example of a rather well posed question, we might ask: what does the F+G 06 paper tell us about the likelihood that S is between 3.00 and 3.01, compared to it being between 2.00 and 2.01?

This is so well posed because the narrow ranges guarantee that the same result is obtained almost independently of the prior used, if Bayesian inference is used. The result would be changed by a prior with a singularity at the end point of either range or some other really extreme prior, but not by any prior considered plausible by most.

It’s also well posed because similar arguments can be presented for the choice of variable. Any variable that has a smooth functional relationship to S over those two narrow ranges will also give the same answer in the frequentist approach. (A numerical check follows at the end of this comment.)

Widening the question to something likely to be of more interest to most will expose its dependence on often arbitrary choices made in the analysis.

The Bayesian approach is open about the influence of these choices; other methods are mostly not as open, but no less dependent on them. For every Bayesian choice of prior, there is an equivalent choice of variables for the frequentist method. Using a uniform prior in the variable used to represent the results is often easiest to interpret, but even that involves assumptions that may be difficult to defend.

Often it would help if it were emphasized that the results are only likelihoods or conditional probabilities that summarize the empirical analysis, and that these may have a very different functional form from the PDF that describes the best judgement of some specified people taking into account all information available to them.
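A numerical check of the “well posed question” above: the Bayes factor between the two narrow ranges barely moves when the prior is swapped, because any smooth prior is effectively flat within each range (toy likelihood, placeholder parameters):

```python
import numpy as np

def bin_likelihood(prior_pdf, lo, hi, n=2000):
    """P(data | S in [lo, hi]): prior-weighted average likelihood over the bin."""
    s = np.linspace(lo, hi, n)
    L = np.exp(-0.5 * ((3.7 / s - 1.0) / 0.5) ** 2)  # toy likelihood, Gaussian in Y
    w = prior_pdf(s)
    return (L * w).sum() / w.sum()   # uniform grid, so the spacing cancels

for prior in (lambda s: np.ones_like(s),   # uniform in S
              lambda s: 1.0 / s**2):       # ~uniform in Y
    bf = bin_likelihood(prior, 3.00, 3.01) / bin_likelihood(prior, 2.00, 2.01)
    print(bf)   # essentially the same ratio for both priors
```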

      • Pekka Pirilä, 7/8/11, 3:54 am, sensitivity II

        You have written perhaps the most profound point posed on Dr. Curry’s blog:

        We are – again – plagued by badly defined question.

        Examples:

(1) It applies to Bayesian analysis. The Bayes decision criterion has a rigorous and important role in science. There it is fully objective. See Van Trees, H. L., Detection, Estimation, and Modulation Theory. Used subjectively, it has no role in science. It employs subjective probability, a concept expressly excluded from science. See Papoulis, A., Probability, Random Variables, and Stochastic Processes. Most of the discussion on Bayes is hopelessly vague or uselessly subjective. Fred Moolten’s recent discussion of Bayes used in deciding drug efficacy seemed to be founded in science.

(2) It applies to the discussion of Lindzen and Choi in the recent thread. No one seemed to care that L&C used a unique definition of feedback (a rigorous though elementary one from system science). They compared their work to that of Trenberth and Dessler, who applied two other, differing definitions of feedback. In addition, IPCC had yet another three definitions, one explicit and two implicit. Altogether we had six different, unresolved definitions of feedback for the discussion on the implications of no-feedback sensitivity. Again we have a thread on climate sensitivity with no recognition by the posters of the problem that measurements are closed-loop while the models are open-loop with respect to the most powerful feedback in climate (albedo, and esp. cloud cover).

        (3) It applies to all discussions relying on equilibrium, or equivalently balance, as in temperature, radiation, CO2, and the ocean or its surface layer. To rely on equilibrium, one needs to invoke a governing physical principle, such as the 2d Law, Minimum Kinetic Energy, or Zero Torque. None of these apply, but the modeling and discussion proceeds apace, making equilibrium assumptions willy-nilly, ignorant of the facts and physics. What regulates the climate is feedback, so once we get that defined correctly, we might begin to converge on the problem of climate. See Callendar (1938), and esp. the remarks of George Simpson, director, Met Office, 1920-1938.

        (4) It applies to the matter of the accuracy of Sea Surface Temperature. This is a scientific problem in the field of estimation theory. To have a meaningful discussion on SST accuracy, one must first specify what is being estimated. Tonyb lamented the inaccuracy as if he was trying to estimate SST(latitude, longitude, time), perhaps resolved at the level of a 5º x 5º cell on a secular time scale. Posters wrote about the SST as if they meant SST (time|average over the whole ocean). The climate problem, whenever it is well-posed, involves the Global Average Surface Temperature, which includes SST integrated into a sampled GAST(time). Each of these estimators has its own accuracy. SST seemed unusable to Tonyb, but the resulting GAST(time) is excellent, and much to the credit of the investigators.

        When we’re discussing subjective climate, none of this matters. Just don’t expect your climate predictions to be validated.

      • Another good contribution. Concordant with what I wrote citing Swets and Samaniego.

I try to avoid arbitrating what “is science” and what “isn’t science”, and focus on the claim that with solid evidentiary backing for the prior (and, of course, the likelihood function), Bayesian inference will produce more accurate inferences than likelihood-based inferences (which also assume an accurate likelihood). Thus I concur with the well-phrased conclusion: When we’re discussing subjective climate, none of this matters. Just don’t expect your climate predictions to be validated.

      • John F. Pittman

        MattStat:

From the discussion, I would assume that many of the authors did engage in subjective climate, and have claimed that their predictions will be validated. Do you agree? J Glassman points to 6 differing feedback definitions in his 2). At least two of them have made forecasts or likelihood projections. Should the authors not expect their projections to be validated?

      • John F. Pittman wrote: Should the authors not expect their projections to be validated?

Yes they should, as I understand the foundations of Bayesian inference: even the most subjective of Bayesians, I think, would assert that Bayesians believe their own beliefs. For B. de Finetti and L.J. Savage, probability theory is based on expectation. However, a Bayesian can make a forecast based on a prior with very broad support on a parameter (and broad support on a family of models), which results in a very diffuse prediction (or “range of predictions”) which can be nearly nondisconfirmable on a century’s worth of data. With a uniform distribution on [0, 18] for S, and a uniform distribution across the models, nearly any outcome provides some degree of “validation”.

Consider that there were alarming warnings in 1998. Thirteen years of subsequent non-warming have not changed the beliefs of anyone (anyone? well maybe not) who had a belief in AGW at that time. For at least one example, read tamino’s analyses, in which he shows that imprecision in the model parameter estimates produces imprecision in the model predictions, such that no model has been disconfirmed; combine that with broad uniform priors on all the parameters, and the range of predictions becomes even broader.

A prominent Bayesian told me a humorous joke: he said that frequently, as more and more data are accumulated, the competing parties create more and more divergent priors, so that the posterior distributions always match their unchanging beliefs. It’s difficult to tell what anyone truly expects.

And to your first question, I do not know. Some of the writing, including sections of the IPCC writing, is in the form of “warnings” rather than “predictions.” Paraphrasing, “It could be as bad as the end of civilization by 2050, so the world needs to spend $140 trillion to prevent that from happening.” I am not sure that there is a real prediction in there.

      • Nic,

        It may already be clear, but I repeat:

The problem with Figure 9.20 is that it’s presented as PDFs based on a specific prior rather than as a set of likelihood functions, which are not PDFs.

If we are interested in what the listed analyses tell us about the value of S, that figure is the right way of presenting the data, but its nature should be emphasized so strongly that all the misinterpretations seen in the previous thread and this one (and perhaps also in the AR4 report) would have been prevented.

Were the figure described correctly, it would also be clear that the total evidence is not the average of the curves, but the product curve formed by multiplying the values of independent curves for each value of S, as sketched below. The requirement of independence makes the process difficult, but the idea should be described clearly.

        Unwillingness to combine the evidence in this way might be justified by the difficulties of estimating the full range of uncertainties of each analysis, but if the likelihood curves are taken seriously, combining all independent evidence is a natural procedure that should be done.
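A sketch of the multiplicative combination described above, with two made-up independent likelihood curves (toy shapes only):

```python
import numpy as np

s = np.linspace(0.01, 10.0, 1000)

# Two hypothetical, independent likelihood curves for S
L1 = np.exp(-0.5 * ((3.7 / s - 1.2) / 0.4) ** 2)   # e.g. an energy-budget study
L2 = np.exp(-0.5 * ((s - 3.0) / 1.5) ** 2)         # e.g. a second, independent study

combined = L1 * L2             # the product, not the average, of the evidence

posterior = combined * np.ones_like(s)   # combined with a uniform-in-S prior
posterior /= posterior.sum() * (s[1] - s[0])
```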

      • Pekka,
        I can see an argument for presenting the information from the various studies in the form of likelihood functions, as you suggest, but only if that is done in the natural parameter space for the likelihood function. To obtain a likelihood function by estimating the climate feedback parameter and then to present it as a likelihood function in climate sensitivity, a reciprocal parameter, alongside other likelihoods that may have been derived in the sensitivity parameter space, seems to me misleading.

        I quite agree with the desirability of combining multiple lines of evidence multiplicatively, although there is a real problem of obtaining truly independent evidence about climate sensitivity. I think it much preferable to combine evidence from all lines of observational evidence and to use an appropriate uninformative prior than to use a supposedly expert prior.

      • Nic,

When we know what the parameter is that we wish to learn about, that’s the only right parameter to use on the x-axis for looking at likelihoods. The likelihood value is independent of the choice of parameter as long as it represents the same physical situation.

Thus, if we wish to learn about S, we present the likelihood as a function of S. If we wish to learn about Z = S^2, we plot it versus Z and get a different shape, but the same value, when the value of Z is indeed S^2. Plotting the same values against different parameters on the x-axis, some of the curves may be normalized to one, but most are not, and need not be. We should be interested only in relative values, and normalize after combining with a prior to form a posterior PDF.
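The invariance point in a few lines (toy likelihood; the parameter Z = S² is just an example):

```python
import numpy as np

def like_S(s):   # toy likelihood as a function of S
    return np.exp(-0.5 * ((3.7 / s - 1.0) / 0.5) ** 2)

def like_Z(z):   # the same likelihood transported to Z = S^2
    return like_S(np.sqrt(z))

s = 3.0
print(like_S(s), like_Z(s**2))   # identical: likelihoods carry no Jacobian factor
# A PDF, by contrast, would pick up |dS/dZ| = 1 / (2*sqrt(Z)) under this change.
```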

• I should add that Fig 9.20 would have been much more instructive if they had not made that error with Gregory 02. The figure should have been supplemented by the information that the curve representing Gregory stays at a relatively high value all the way to infinity. This could have been used to explain that likelihoods need not be normalizable and that the figure actually represents likelihoods over a range that does not cover the whole range where the likelihood is defined.

• If the likelihood function has a portion in a non-physical range, it is not a comment on the quantity or quality of the data, but on the choice of distribution used to best represent it.

      If an uncertain quantity X cannot have negative values, then the use of a normal distribution is simply inappropriate, regardless of its tractability.

The use of a 0-18 uniform distribution as an a priori distribution is a transparent use of the IPCC’s “Thumb on the Scale”. As far back as 1979 there was a far narrower estimate of the range. And there is no reason to assume uniform where trapezoidal or triangular or lognormal would be far more physically reasonable.

      The standard modern estimate of climate sensitivity — 3°C, plus or minus 1.5°C — originates with a committee on anthropogenic global warming convened in 1979 by the National Academy of Sciences and chaired by Jule Charney. Only two sets of models were available; one, due to Syukuro Manabe, exhibited a climate sensitivity of 2°C, the other, due to James E. Hansen, exhibited a climate sensitivity of 4°C. “According to Manabe, Charney chose 0.5°C as a not-unreasonable margin of error, subtracted it from Manabe’s number, and added it to Hansen’s. Thus was born the 1.5°C-to-4.5°C range of likely climate sensitivity that has appeared in every greenhouse assessment since…”[13]
      http://en.wikipedia.org/wiki/Climate_sensitivity
…Well, at least in every greenhouse assessment where it preserved their desired outcome.

      • Stephen,

The likelihood curve does not represent a probability density distribution. The curve tells only what information the particular empirical analysis has produced. If the empirical work cannot exclude values that are unphysical for other reasons, the curve may extend to that region. Whether the curve is presented also for the unphysical region or not doesn’t matter for inference, because the other arguments exclude that range. It’s often informative on the uncertainties of the empirical work to present also that part of the curve.

• I do not think you are correct. The empirical work can be in the physical region, but an ill-chosen distribution may extend tails into the non-physical space. The choice of distribution matters, whether for inference or other purposes.

        A while ago there was a thread here on Black Swans and Dragon Kings. My belief then as now is these are manifestations of poorly chosen distributions.

• I may be barking up the wrong tree, but… The likelihood of sensitivity being below zero is very low. The likelihood of global temperature response being below zero is greater. So while it may be semantics, the uniform prior starting at zero misrepresents the probabilities. The Bayesian with expert priors focuses on the most likely range. Results outside that range are not discounted completely, just de-emphasized as highly unlikely. So the Bayesian with expert is a tool for realistic planning or wagering.

        The use of the uniform prior of 0 to 18 is also an expert prior discounting near and below zero response. If the data was limited to only CO2 response, the uniform prior would make sense. But natural variability, which is also ignored, makes the 0 – 18 disingenuous.

No method would eliminate long tails. The use of the uniform prior virtually eliminates the low end tail while emphasizing the high end tail. The Bayesian with expert emphasizes the maximum likelihoods; the improbable tails are not discounted.

So if you just want to chase tail, the frequentist method is for you. If you want to make informed decisions, you should be Bayesian.

  2. we’re basically getting a taste of what the pre-publication review of AR4 could have been but was never allowed to be.

• Profoundly true, four years after publication. This got me thinking about the Third Assessment Report (TAR) in 2001 and the launch of Climate Audit in January 2005. The next milestone I immediately think of is this on hurricanes in August 2006:

      If Curry is unaware of these issues, what does that say? If she is aware of these issues and ignored them, what does that say?

      Judith’s gameness in arriving at CA and replying changed history. But AR4’s account of climate sensitivity is no sideshow. All these issues should have been thoroughly gone through: James Annan’s comments, and even Michael Mann’s, show that the coordinating lead authors were not left completely in the dark on the matter. Nic’s spotlight now is upon them. We eagerly await what they have to say.

    • omno,

      The pre-publication discussion and review were unnecessary. The science was already settled.

      And apparently it remains settled as we are informed that the UN has already decided that AR5 is going to tell us that the climate catastrophe is much worse. In keeping with their renewed determination to pursue a ‘better communication strategy’ I seriously doubt that any unnecessary discussions and reviews such as this one will be allowed to cause problems for the new strategy.

  3. Impartial, as opposed to whom?

  4. I will be very interested to read Hegerl’s response to this letter. If he engages positively and provides an explanation for what was done then I will retract my comments in the previous post concerning the seeming lack of IPCC accountability. Never too proud to eat some humble pie.

    • I’ve always thought of Gabi Hegerl as of similar gender to our esteemed host (Bayesian adjustments permitting, of course) – as the photo on the second link from the top seems to confirm.

    • Kent Draper

I wouldn’t expect a quick reply. Although, if it really warms up then maybe quicker than later. I see it snowed in a very dry area in South America and there was some snow skiing here in sunny California on the 4th of July, so it may take a little longer than I hope :)

    • Steve Fitzpatrick

      Don’t hold your breath. I will be surprised if there is any reply at all, and shocked if there is a reply which addresses the substantive issue. We have seen this movie before, and we know how it ends.

      • The bigger the temblor the longer the shearing forces are resisted.
        =================

  5. Not sure if this meets the technical-relevance bar or not. I’ll repost on another thread if it is deemed more appropriate.

    I am not suggesting that the failure to transform the Gregory 02 sensitivity PDF to a uniform prior in S basis, in the way that was done with Forster/Gregory 06, was deliberate. I presume that it was just a simple error.

    I acknowledge the clarity of that language (although I will say that I believe the previous post would have been improved with similarly clear language).

    My question below is to understand more clearly how to reconcile another part of today’s statement with part of your statement the other day.

    From today:

    But it is surprising that neither the IPCC writing team nor the paper author realised that the Gregory 02 sensitivity PDF embodied an approximately uniform prior in Y, not in S. That point seemed fairly obvious from reading the paper. And it does raise doubts about how informed was the consent of the Forster/Gregory 06 authors (or at least that of Jonathan Gregory) to the transformation of the results of that paper onto a uniform prior in S basis.

I acknowledge the clarity of that statement as well, but this seems to me to be somewhat different from yesterday’s statement (ellipses used for relevance).

    “… Forster and Gregory … were Contributing authors for chapter 9 of AR4:WG1 and, presumably, accepted (at least tacitly) the IPCC’s treatment of their results.

    Putting the two statements in relief, it seems that you are saying that their tacit acceptance was based on a lack of information – is that correct?

    I’ll add that I am not doubting your motivations (which I assume are to improve the statistical analysis of climate change).

    • Have you contacted Forster and Gregory and asked them what happened? The IPCC probably deleted all of the notes about the changes. Maybe Forster and Gregory kept their emails and would be willing to share?

    • “it seems that you are saying that their tacit acceptance was based on a lack of information – is that correct?”

      Objection, your honor. Counsel for the defense is asking the witness to speculate about matters for which the witness obviously lacks personal knowledge.

Counsel is speculating about the motive of the witness and seeks to cause the witness to speculate about the motives of the authors. Of course, we can only speculate as to counsel’s motive in seeking to learn of the witness’ motive as he tries to force the witness to speculate about the authors’ motive.

      Move to strike on the grounds that the various speculations regarding motives and the motives regarding the speculations are beyond the technical jurisdiction of the court.

  6. Impartial? Since Bayesian statistics is an approach for learning from evidence as it accumulates and since there has essentially been no global warming since 1979, avoiding preconceptions will be very, very difficult.

  7. The irony I like is that the uninformed priors were actually informed by the ignorance of the policy makers. They bought a pig in a poke, and now that all the county fairgoers are guessing at the weight of the pig, they are looking like fools.
    ============

    • Do you see
      Richard T.
      How the science
      Bad so be?
      ======

• The use here for the Bayesian approach is to include an adjustment for missing data; however, the integrity of the data to begin with is not beyond question. Additionally, the use by climatologists of opaque computational algorithms run through supercomputers using Markov Chain Monte Carlo sampling to simulate distributions of random quantities does not inspire confidence, given the prior history of fraud and collusion and the fact that the climate scientists keep coming up with a different answer to the same problem. The Bayesian approach is not a substitute for sound science.

      • Brian G Valentine

Interestingly, someone from China sent me a preprint to look at (that nobody would publish, evidently) that showed he could reproduce any decade period of global temperatures he wanted by a random walk generation started from a “seed” number that eventually produced the (observed) path of temperatures.

The interesting conclusion was, the distribution of these “seed” numbers was evidently random [throughout the interval (0,1)] and so there was evidently no a priori method of selecting a seed value that worked.

        This suggests to me anyway that Bayesian approaches are not going to produce anything meaningful

    • As Surowiecki relates in “The Wisdom of Crowds” about the experience of scientist Francis Galton at the country fair in 1906 and the weight-judging competition for an ox, the fairgoers as a group (but acting independently) will be more accurate than the ‘experts’.

  8. Climate sensitivity follow-up, 7/7/11

From the standpoint of what a good review might have done for AR4, for IPCC, and for AGW, consider the difference between probability density functions and probability distribution functions, both unfortunately given the same abbreviation as Adobe’s file format. The densities are great subjectively and for non-technical types, the distributions great for objectivity. Distributions have certain properties of convergence that densities do not enjoy, whether the convergence is between probability functions or between a probability function and data. The rule should be: always work in distributions first, and forget histograms.

Another helpful hint is to graph in standard deviations and to analyze MMSE estimation in sigma space, not probability space. A Gaussian distribution is a straight line in sigma space, so that is a great space for determining whether the power in the tails is a problem. (A short sketch of such a plot follows this comment.)

    If one is mentally stuck in the density domain, at least run a cross check on comparative distributions. As it is, IPCC’s trimming and normalizing of the data to make the densities match is like the radiologist who touches up your x-rays.

    Similar rules hold in radiation transfer and spectral analysis, and in measurements in general. Sometimes a change in coordinate system can tame ill-behaving probability functions, as in converting from displacement to energy.

    IPCC was careful in AR4, Chapter 9, to refer uniformly to the densities, but it should have analyzed distributions. The tails and mid-range slopes of the distribution are critical, and IPCC ruined them by invisible truncation and normalization. The fact that all the tails of the treated densities seem to cluster is meaningless. The part that was trimmed away was vital. It contains the answer to the question of whether the probability is realizable. Some apparently good fits are not.

Nicholas Lewis presented several charts comparing densities, mislabeling them distributions. They should all be redone and analyzed as distributions. IPCC made the same mistake, breaking with its own standardization, in AR4, caption to Figure 2.20, p. 203.

    But in the last analysis, why perfect open loop climate sensitivity modeling when all the real world data are closed loop? (For the uninitiated, climate is not controlled by an imagined equilibrium with arbitrary preferred states, but, confidently, by albedo: surface albedo in the cold state, and cloud cover albedo in the warm state (Callendar (1938)).
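One way to implement the “sigma space” suggestion above is a normal probability plot: pass the empirical CDF through the inverse normal CDF, so that Gaussian data fall on a straight line. A minimal sketch with synthetic data:

```python
import numpy as np
from scipy.stats import norm

x = np.sort(norm(loc=3.0, scale=1.5).rvs(size=5000, random_state=0))  # toy data

cdf = (np.arange(1, x.size + 1) - 0.5) / x.size  # empirical CDF at the sorted points
sigma = norm.ppf(cdf)                            # probit transform: the "sigma" axis

# Plotting x against sigma gives a straight line iff the data are Gaussian;
# curvature at the extremes exposes heavy (or trimmed) tails.
```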

    • I take it by (probability) distribution function you mean what I referred to in the previous post as a cumulative distribution function (CDF)? Unfortunately, the term probability distribution function is ambiguous nowadays, even if it used not to be. I titled my charts probability distributions, which term is sometimes used to describe PDFs, so as not to put off people who didn’t know what a PDF is. But I made clear in the Y axis labels that what was represented was probability density.

      • Nic Lewis, 7/8/11, 1:22 am, sensitivity II

Yes, re. the equivalence between CDF and the probability distribution function. Criticizing your titling of the density charts was trivial and nit-picking. Sorry for that. So too was criticizing IPCC for mere inconsistency. It has been quite good about standardization in its Assessment Reports. It just fumbled in AR4 Figure 2.20 (but also Box TS.5: Extreme Weather Events, p. 53; Figure 3.38, page 301; referring to densities as distributions in Box 10.2, p. 798) compared with the AR4 Glossary definition of PDF, defining it as density, and its brief use of the (redundant) Cumulative Distribution Function (CDFs), Table 11.3, p. 922.

The reason for the nit-picking is to provide a consistent background for criticizing IPCC for failure to use the best analytical techniques. Investigators need to be pressured to rely on the distribution domain for analyses.

        What IPCC did with Figure 9.20, however, is off the bottom of the scale for principled science and objectivity. It threw away important information to create an unwarranted image of agreement. It did that by truncating and normalizing data, and then by presenting the results in density space. This verges on intentional misrepresentation of data. It raises suspicion, especially in light of a pattern of IPCC practices. It created false charts of human fingerprints on atmospheric CO2, creating parallel data by graphical shenanigans. It calibrated CO2 data records from different stations in the network, making them agree without publishing the calibrations. It misrepresents MLO data by selecting, separating, filtering and reconstituting the data, missing wind data, and turning data measurements into unverifiable, subjective proxies.

        Much to IPCC’s credit, it appears to have fixed part of the Figure 9.20 presentation in AR4, Figure TS.25, Cumulative Distributions of Climate Sensitivity, p. 65. Unfortunately, TS.25 is missing a legend for its dozen traces (compared to only 9 in Figure 9.20), and it appears to inherit the problems of truncation and normalization.

  9. I appreciate that if you need to call in external statistical expertise, that may take some time.

    I hope I am not considered overly cynical by the Bayesians present when I say that I don’t think it’s just statistical expertise that is going to be needed on receipt of this missive. Smelling salts and emergency public relations consultancy are two of the other ingredients that spring to mind.

• When you consider that different choices of prior information, or different choices of the model, can produce different results, you begin to understand why a scientific skepticism has become the only rational approach to the use of the Bayesian approach. Who will buy off on a change in the prior information or the model when the scientific validity of the later stage information and models is in question?

  10. I object to this posting. Are we all invited to send challenging letters to whomever we please and post them as technical threads? This is not right.

    • Welcome to the internet.

      • In case anyone hasn’t noticed, this is Judith Curry’s slice of the Internet. She chooses or disposes – and is judged by her readers accordingly. Her humility in giving Nic this opportunity, in the original post and in this brilliant follow-up, with no substantive commentary from herself, but incurring as always the wearisome weeding of comments … is for this reader absolutely awesome. YMMV as they say elsewhere and that of course is your Internet right.

    • You are kidding, right?

    • David – I partly agree with you. I wouldn’t say I object to the post, but I would have preferred a separation between two disparate elements (and perhaps intentions) in Nic’s letter to Gabi Hegerl. The first was to raise the issue of inadequate “housekeeping” by the IPCC in ensuring the accuracy of information in its tables. I expect that Gabi will welcome that effort, or at least appreciate its necessity.

The second element, however, was the use by Nic of a communication to her to promote his own judgments about climate sensitivity by recalculating the results of Gregory-02, substituting his own input values for those of the authors. The original paper reports a climate sensitivity range with a lower 90% CI boundary of 1.6 K, a median of 6.1 K, and a modal value of 2.1 K, putting it on the higher side of climate sensitivity estimates (Fig. 2 above). Nic’s recalculations (Fig. 4) lower that value. That struck me as a legitimate use of comment space in this blog, but not necessarily the subject of a letter on “housekeeping” to an IPCC lead author – or at least, not the same letter.

It might have been more effective as a communication, if the intention was to have her respond sympathetically to his concerns, for the letter to Gabi to have limited itself to the actual numbers in Gregory-02.

      That’s a personal impression, but I don’t feel strongly that it’s wrong to have this post.

      • I notice that IPCC Fig. 9.20 shows a different climate sensitivity median for Gregory-02 than cited by the authors – about 3.2 K. Whether that reflects the truncation at 10 K or some other disparity isn’t clear, but 3.2 K looks like a better value to me than 6.1 K.

      • Fred,
The reduction in the median reflects truncation at 10 K – the IPCC cut off more than 40% of the total probability, which lay above 10 K. To me, the fact that so much of the total probability in the original Gregory 02 results lies at such high levels – nearly 60% above a sensitivity of 4.5 K, the top of the IPCC’s ‘likely’ range – suggests that the data it is using are likely to have been erroneous. Indeed, the ocean heat data was known to be wrong well before AR4 was put together. And, as I wrote in the letter, the forcings data (not all observational) was well out of line with that used elsewhere – and way below that cited in AR4 as an example of the historic evolution of forcings. I don’t see why you should object to my having pointed out these facts, and their implications, in the last section of my letter. It is surely desirable that the people who are responsible for the IPCC report should be aware of them, not just readers of this blog.

        PS I think that it is misleading to claim that I substituted my own input values – the ocean heat data I used came from the main such dataset, and the forcings data from one of the best known forcings datasets.
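A toy illustration of the truncation effect described above, using placeholder Y parameters (not the actual Gregory 02 values):

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.normal(0.5, 0.45, 1_000_000)   # placeholder feedback-parameter samples
S = 3.7 / Y[Y > 0]                     # sensitivity; Y <= 0 is unphysical

print(np.mean(S > 10))                       # mass discarded by a 0-10 K cut
print(np.median(S), np.median(S[S <= 10]))   # truncation pulls the median down
```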

• Looking at the paper of Gregory et al 2002, Fig. 1 tells very clearly what the analysis could conclude.

        The range allowable by the experiments is to a large extent (about 40%) beyond the limit of instability. The only thing this analysis could produce is a lower limit somewhere in the range 1.2 – 2 C for the climate sensitivity.

      • Hi Nic,
While there was a substantial basic maths error in Levitus et al 2000, which was pointed out by James Annan long ago, I think we need to be careful in assuming that the XBT data is so bad that it needs the multiple subsequent downsizings after correction of that error. It seems to me that the real problem is with attribution. We have the ISCCP and Earthshine data which show a reduction in tropical cloud cover from the start of the satellite age to 1998 and an increase afterwards. As Roy Spencer points out, it doesn’t take much of a change in cloud cover to account for global warming due to increased insolation *at the ocean surface*. My suspicion is that there is a bias in interpretation of XBT data to maintain the idea that the warming of the upper ocean since 1976 is due to increased CO2, and the rescaling of XBT data works to reduce the impact of the ARGO data, which shows a ‘slight cooling’ according to Craig Loehle and Josh Willis (before his arm was twisted), and only a very slight increase according to Levitus 2010. However, a further complication is the big jump in the OHC reconstruction just prior to the splice between the XBT and ARGO datasets.

        I think you will be interested in this post on my blog. Please take the time to read it:

        http://tallbloke.wordpress.com/2010/12/20/working-out-where-the-energy-goes-part-2-peter-berenyi/

      • Thanks, tallbloke. I will read your post when things have got a bit quieter.

      • Nic – I think you missed my point, or perhaps I didn’t explain it carefully enough. If you believe the IPCC mislabeled a table, it’s reasonable to contact Gabi Hegerl about it, provide the evidence, and explain why it makes a difference.

        On the other hand, using that device to introduce your own opinions about climate sensitivity unrelated to a mislabeled table struck me the same way it struck David Wojick – as a means to promote your views publicly rather than to communicate a concern about the ostensible reason for your letter. That’s fine here, but questionable if your main interest is for an IPCC lead author to correct a mistake. The impression it leaves is that addressing the letter to Gabi is being used as an excuse to gain wider attention for your opinions about climate sensitivity than you would enjoy simply by making comments in this blog. In any case, that was how it came across to me, although I don’t think the matter is important enough to dwell on.

        I do think that you’re struggling a bit hard to make climate sensitivity come out lower than what can be deduced from the authors you quote. Even truncated, the Gregory-02 median is 3.2 C, but more importantly, the most recent offering from these authors (GF-08) is incompatible with the lower estimates from either the 2006 paper or Gregory-02 with the recalculations you applied to it. That may be one reason why Gabi Hegerl, Jonathan Gregory, or others may be less sympathetic to your letter than they would be if it focused exclusively on the mislabeling.

      • This is an issue a policy maker can understand. It is also an issue that gives insight into how the threat from CO2 got exaggerated. Particularly simple to understand is the tension between observations and modeling.

        It’s time to ‘fess up, before the policy makers start getting really interested.
        ===============

      • Fred,

        The point that Nic made about the representation of the Gregory 02 results is that it is not based on a uniform prior in S: the Gregory et al paper presents the data using a prior that is essentially uniform in Y, and this representation is carried over to Fig 9.20 of WG1. That is, the IPCC did change the F+G 2006 results to a prior different from the one used in that paper, but didn’t change Gregory 02, in spite of the fact that it should have been changed.

        Cutting the prior at S = 10 has a sizable effect on the average value given by Gregory 02 when using the prior uniform in S, but this value is not of interest, because the Gregory 02 data gives effectively no information on the high end of the distribution and therefore no sensible expectation value. It produces only a lower bound. Nic seems to think that there is something wrong with that, but I disagree. It tells only that the Gregory 02 methodology was not efficient in determining S, and in particular it would allow values close to 1.5 for the feedback factor, when the value 1.0 is the stability limit.

        I think Nic is right that there is an error, but in practice it means that the Gregory 02 paper has so little power that it could have been dropped from Fig 9.20 altogether with good justification. When it is included, it should of course be represented correctly.

        My present revised view is that Fig. 9.20 is a good way of presenting an overview of empirical evidence (except for the error in Gregory 02), but the text is not as clear as it could be. The difference between the concepts of likelihood and PDF should have been presented differently and the prior brought into the discussion in a different way.

        The caption mentions both terms (likelihood and PDF), but the text discusses PDFs, while the best choice might have been to use only likelihood and explain how the likelihood is the concept that describes empirical evidence.
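
        To make the reciprocal relationship concrete, here is a minimal sketch of the change of variables from a Gaussian PDF in Y to the implied PDF in S = F2x/Y. The Gaussian parameters are hypothetical, chosen only for illustration.

        ```python
        import numpy as np
        from scipy.stats import norm

        F2x = 3.7                    # W m-2 for doubled CO2
        y_mean, y_sd = 0.9, 0.3      # hypothetical Gaussian posterior in Y (uniform prior in Y)

        def pdf_S(s):
            """PDF of S = F2x/Y implied by the Gaussian PDF in Y.
            Change of variables: p_S(s) = p_Y(F2x/s) * |dY/dS| = p_Y(F2x/s) * F2x / s**2."""
            return norm.pdf(F2x / s, loc=y_mean, scale=y_sd) * F2x / s**2

        s = np.linspace(0.5, 10.0, 1000)
        p = pdf_S(s)                 # right-skewed, but renormalisable
        print(f"probability mass captured in 0.5-10 K: {np.trapz(p, s):.2f}")
        ```

        Re-plotting the same likelihood under a uniform prior in S instead – i.e. dropping the Jacobian factor F2x/s² – is exactly the transformation at issue, and it fattens the high-S tail.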

      • Pekka – I think those are good points. In particular, a Gregory-02 lower bound estimate is more reliable than their median value, given the upper bound uncertainties. This observation also underlines the fact that what Gregory-02 reports is an estimate of “effective climate sensitivity” derived under non-steady state conditions from estimates of ocean heat uptake as a function of forcing. This introduces considerable uncertainty due to uncertainties in the ocean data, particularly pre-1900. It tells us that ocean uptake data that overestimate upper ocean heat uptake will overestimate sensitivity, but conversely, data that inadequately account for transfer to the deep oceans will overestimate equilibration rates and correspondingly underestimate sensitivity.

      • Fred – I don’t think that the Gregory 02 lower bound estimate of sensitivity is at all reliable, because it is highly sensitive to the particular values of the ocean heat and forcings data used. Using corrected ocean heat data and the GISS forcings data, the bottom of the 10%-90% confidence interval is a sensitivity of about 0.8 C. And using the forcings in AR4 WG1 Fig. 2.23, it would be about 0.6 C. The upper bound is even more dependent on the ocean heat and forcings data used – the data actually used in Gregory 02 provide no real upper bound.

        The ocean data used by Gregory 02 goes down to 3000m, hardly what most people think of as the upper ocean. But I agree that ocean heat flux pre-1900 is extremely uncertain – there is no real data. Another reason why Gregory 02 doesn’t provide a reliable lower bound.
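
        How strongly those bounds depend on the inputs can be seen with a minimal Monte Carlo sketch of a Gregory 02 style calculation (Q − F = Y ΔT, S = F2x/Y). All the Gaussian inputs below are hypothetical stand-ins, not the actual values; shifting the central values of Q or F by a few tenths of a W m-2 moves the 10% bound substantially.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        n = 1_000_000
        F2x = 3.7

        # Hypothetical Gaussian inputs (W m-2, except dT in K) standing in for the
        # Gregory 02 observables; the real analysis used different values.
        Q  = rng.normal(0.7, 0.4, n)     # change in forcing between the two periods
        F  = rng.normal(0.35, 0.2, n)    # change in ocean heat uptake
        dT = rng.normal(0.335, 0.03, n)  # change in mean surface temperature

        Y = (Q - F) / dT                 # feedback parameter, as in Q - F = Y dT
        S = F2x / Y                      # implied equilibrium sensitivity

        stable = Y > 0                   # discard the unstable branch
        lo, hi = np.percentile(S[stable], [10, 90])
        print(f"10-90% interval for S: {lo:.1f} K to {hi:.1f} K")
        ```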

    • Latimer Alder

      As a non-academic, can you please explain what you see as ‘not right’?

      The original contributors have been paid by the public to do a job of work on our behalf and have put their conclusions in the public domain, with benefit (presumably) to their careers and citation indices. And received wide publicity for their conclusions. They are not shy little church mice hiding away in a corner.

      There appear at least to be some serious questions to be answered with that work. And it is clear that ‘normal channels’ have failed to raise them, let alone answer them.

      What is wrong with giving the questions decent publicity and allowing a multiplicity of minds to consider them? If they are stupid questions, then an online ‘peer review’ like this will quickly establish that. If not, then the recipients have a case to answer. They’ve had several years to think about it already, so should be able to provide the relevant info pretty quickly.

      • What I think is not right is this. This is basically what is called an “open letter.” It is a well known journalistic device and highly adversarial. As such I believe it violates the standards of civility that we have all worked hard to maintain here.

        The open letter has an adversarial logic that is reminiscent of the well known question “have you stopped beating your spouse?” That is, if one does not reply then one is condemned for not replying. If one replies then one is forced into a public confrontation, in this case with a multitude of adversaries. I would not reply.

        So, simply put, I believe this open letter tactic is beneath us.

        My secondary concern is that we are giving credence to a very bad pro-AGW study.

      • The irony here is that Nic’s letter offers Gabi et al their best opportunity in the most righteous manner to settle the matter quickly and effectively, by admitting that the priors should not be uninformed. Will they have the courage to strike through the horns of their public and private dilemma?

        Really, they should take it. The tide is ebbing on the hysteria. The globe is cooling; for how long even kim doesn’t know.
        ==================

      • This is not about Gabi et al, it is about us. Are we now to be a forum for open letters, as opposed to discussion? What is the submittal process?

        Given that Nic’s letter is an internal AGW issue, sensitivity being assumed, we need some skeptical letters fast. To whom shall we send them? Get ready to write.

      • David
        If there was a formal mechanism for addressing IPCC scientific errors then I would agree that the blogosphere might not be the best forum for this letter. However, there is no formal mechanism so this seems as good a place as any. It is also in keeping with Judith’s idea about scientific review and the role of the internet.

      • “Are we now to be a forum for open letters, as opposed to discussion?”

        David, this question sets up a false limitation of choice. Why can’t it be a forum for both discussion and open letters?

        And what is the point of discussion, unless it leads to action? We discussed Nic’s original post. He was emboldened by the positive reception his ideas got. It crystallized his need for a response. He then wrote a brilliant letter. Judith decided it was good enough to be a new post in its own right. The inspired reformulation has led to an even more focussed discussion. Gabi Hegerl pretty much has to respond or make the IPCC look even more irrelevant to the progress of science.

        What is not to like?

      • tallbloke, 7/8/11, 4:51 pm, sensitivity II

        What’s not to like?

        AGW is not based on a theory, i.e., a model with validated predictions. That would be science.

        AGW is based on consensus. Open letters are rebellious and threatening to the claim of a consensus. Hence defenders of the movement rise to urge that the slightest skepticism (e.g., science), even questioning, be quashed. They would have the Internet subject to the same peer censorship as practiced in so many professional journals.

        IPCC may address our political leaders, i.e., the Policymakers, but we may not participate or be seen to respond. Would I be permitted to state which political system features this tenet?

      • Jeff, I hear what you are saying. If the IPCC choose not to be part of the scientific progress we are making, then we will surely supersede the IPCC as the cutting edge of climate science.

        When people are empowered by their own emancipation through self education, they lose their fear of totalitarian regimes, and after they have finished ridiculing them, they overthrow them and then ignore their erstwhile leaders.

        Something analogous is happening in climate science. The journals are losing their grip, and the blogosphere is becoming the place to have studies peer reviewed and to exchange ideas and publish rebuttals etc.

        I look forward to Judith devoting a section of her excellent blog to that purpose.

      • I regard the SPM as an open letter to the governments of the world too…

      • A UN report is not an open letter. This is a serious issue in my mind, not a metaphorical one. Is this blog now to be a publisher of open letters?

      • Given the reluctance of the IPCC to engage with the valid criticisms its critics have, you have to expect the clamour for answers to get louder, more strident, and more insistent.

        If Judith sees fit to use this blog for the purpose of providing a platform to someone she sees as being in with a chance of getting an answer, then that’s her call.

        If you want to control the action, start your own blog. That’s what I did when I found that it was a problem to discuss a subject I am researching on the blog I was most active on.

      • Latimer Alder

        OK

        How would you have recommended Nic proceed otherwise?

        Please illustrate your reply with various examples from ‘climate science’ where following the general theme of your advice has brought the matter to a satisfactory scientific conclusion.

      • If Nic wanted to send an open letter he should have published it on his own blog, or in a newspaper, or even as an informative comment on his own first post. But making it a post in itself makes us a publisher of open letters, which I think changes the nature of this blog.

        To put it another way, if this blog is going to become a publisher of open letters I just want to know how many I can submit? We need some skeptical open letters fast. Nic’s is pro-AGW because it only questions the form of a significant climate sensitivity estimate, not its validity.

      • David,

        I think you are attributing parameters/standards to this blog that it never had. Dr. Curry has posted a lot of (frankly) doo-doo topics, so I’m not sure why you think this topic has violated some kind of undisclosed MO of the blog. That it’s in the form of a letter means nothing significant that I can see.

      • David,
        “making it a post in itself makes us a publisher of open letters, which I think changes the nature of this blog.”

        What you mean ‘we,’ kemosabe?

        Your use of the first person plural may be a bit presumptuous. I was of the understanding that this blog was Dr. Curry’s. If she posts the letter, I’m not sure how it reflects on you.

      • I, like many others, have invested a great deal of time and effort into this blog, and so consider it very important. This gives us the right to an opinion as to how it is run. We are the content and the content has a voice, many voices in fact. How this blog is seen reflects on all of us.

      • tempterrain

        David,

        I can’t see that there is anything wrong with open letters per se. St Paul wrote a few and they ended up in the Bible.

        Yes, they are adversarial. But, then again, so is the whole AGW debate, in both political and scientific terms. I’d just ask that those making strong arguments on the latter, on both sides of the so-called debate, would come clean on their motivations on the former too.

      • David Wojick:

        As such I believe [Nic Lewis’ post] violates the standards of civility that we have all worked hard to maintain here.

        That is really funny, for two reasons. A few have worked hard on civility here and a number of others haven’t. But that isn’t the main problem with this sentence and the whole of your message.

        I know I’m not alone in thinking that Nic has delivered both posts, especially this one, with outstanding courtesy as well as clarity. In a tour de force of stating the opposite of the truth, you seem to be trying to seed the idea in the public mind that he has committed some kind of irredeemable faux pas of internet and scientific etiquette, justifying this extraordinary conclusion:

        I would not reply.

        Let’s just ask ourselves if IPCC lead authors, as a general rule, respond well when they feel they are being unfairly treated. You seem intent on wet-nursing a sense of false grievance that oozed out of every page of the Climategate emails. I can’t possibly know your motive for this. But I have to say I find it very, very strange.

      • Richard, I do not object to Nic’s actions, but rather to Dr. Curry’s. I myself have been a Web publisher and an editor, and I have an ideal for this blog that does not include publishing open letters. This has nothing to do with the content of the letter, or the IPCC, etc. It is a matter of editorial principle, a matter of who we are here.

      • Open question,
        Open letter,
        Open science,
        Open mind.
        =========

      • Richard, I do not object to Nic’s actions, but rather to Dr. Curry’s.

        So you are saying that if Nic had posted this in a more obscure place you would think it essential for Gabi Hegerl to reply. But that because it was posted at Judy’s blog:

        I would not reply.

        Please do explain why ‘an ideal you have for this blog’ should so drastically affect the reaction of the person addressed in the open letter.

      • Richard, I am not saying that at all. You are pulling my sentences out of context, even out of different comments. I would not reply to an open letter in any case, because it is adversarial. I do not think we should be publishing open letters here, because they are adversarial.

        My ideal for this blog is as a place of civil debate among the members. That does not include publishing open letters, which is an entirely different enterprise. Seems simple to me.

      • So you do object to Nic’s actions as well as to Dr. Curry’s. I think I’ve been entirely fair in quoting you. Our judgment on this matter differs almost 180 degrees. As long as the recipient of Nic’s excellent letter isn’t influenced by this false view I’m happy to leave it there.

      • RichieRich

        I do think David has a point. No matter how polite and courteous an open letter, it amounts to publicly calling someone out, and thus is (to my mind at least) inescapably adversarial. Why couldn’t Judith and/or Nic simply have e-mailed Gabi, offering her a space to respond here when she’s good and ready, and notified the readership that this offer had been made?

      • What’s that about trying the same thing over and over and expecting a different result? Sophistry flourished immediately, it didn’t need warning.
        =======================

      • RichieRich

        Not sure I follow. Are you suggesting that, on various topics, Gabi has been offered space on this blog to respond and has consistently declined?

      • Insensible flower.
        =============

      • Latimer Alder

        And the history of climatology is littered with polite requests for clarification that have gone unanswered for many years. Especially if made by somebody viewed as being an ‘outsider’.

        Nic could have e-mailed the authors a zillion times a day for a year and nothing would have happened. The IPCC is structurally and institutionally incapable of admitting error without great pressure being applied, and then only very grudgingly (see Himalayagate). It believes in its own infallibility.

        At least this way the subject gets a proper airing, and more than just a few people will be aware that there is a case to answer. Better than a polite e-mail that the recipients can ignore until doomsday.

        And by having the ‘peer review’ of Nic’s comments visible for all to see, the arguments get a good airing too, not hidden behind closed doors in pal review. He is putting his reputation on the line too. And very visibly.
        Surely senior members of the IPCC should have equivalent cojones.

      • Keith Jackson

        Exactly.

      • RichieRich, why didn’t Gabi email them instead? To disagree with someone is adversarial, so either way there is no escaping something adversarial here. Being open can help start a wider dialogue: it clarifies Lewis’s position, and reduces the scope for anyone to say it was covered up, or to claim later that Lewis didn’t bring this issue up.

      • I haven’t had a chance to think through this properly; I have been dealing all week with the death of my colleague Kurt Frankel and just returned from the memorial service in Cleveland. My interest in this post is airing an interesting and important scientific issue. Nic raises valid issues that need airing. Making things public is preferable in my mind.

        The “Dear Gabi” part of this post was arguably unneccessary, but I didn’t think of this in advance. I may be naive, but I did not view the “Dear Gabi” as adversarial, but as a gesture of respect in that she is regarded as someone who can be trusted to sort this out.

        And no one is obligated to respond to an open letter. But I sure hope that these issues are being considered by the authors of AR5. While I have no interest in putting Gabi Hegerl into an uncomfortable position, the IPCC in the past has not always behaved in an accountable way. In the end, I hope this exchange improves the analysis of this issue in the AR5. Which is the priority.

      • Nic raises valid issues that need airing. Making things public is preferable in my mind.

        Check and check.

        The “Dear Gabi” part of this post was arguably unneccessary

        My goodness, even the spelling of the last word shows that you are fallible!

        But that’s the only mistake I can see here. The moment I read this post I felt it was immensely appropriate for it to be addressed to Gabi Hegerl. It’s entirely respectful in tone and it focuses on some crucial issues that should have been dealt with in detail before AR4 WG1 was published.

        I am hopeful that Hegerl will take responsibility and respond positively and honestly. But I also think it’s entirely right for the discussion to have begun here, even before her response is received. The issue of climate sensitivity, used as the basis for advocacy of immediate policy making, is far too important to be wrapped in any kind of red-tape or false etiquette.

      • Latimer Alder

        I seem to remember that way back when (eg at the foundation of the Royal Society) members were invited to present their work in person and in public. Debates – sometimes very lively and distinctly hostile – followed and the value of the work was, in part, determined by the outcome of such debates.

        All the internet and ‘open letters’ such as Nic Lewis’s do is to update that tradition for the modern age of instant communications and the possibility of an instant global readership and knowledgeable contributions independent of geography and affiliation.

        It is not good enough for publicly paid scientists to scrape through pal review by a couple of chums, publish a paper and wash their hands of any further involvement. They should be capable not only of producing the original work but of publicly defending and justifying it too. Not just against their friends worrying about minor details, but against those who are not so well-disposed.

        Even twenty years ago such instant debate would have been impossible. We are lucky indeed to live at a time when technology is starting to bring the motto ‘Nullius in Verba’ back to life once more.

      • Bad Andrew

        “It is a well known journalistic device and highly adversarial”

        So?

        The ice cream social part of this ended a long time ago. In fact, I don’t think it ever began.

        Andrew

      • Wojick: “The open letter has an adversarial logic that is reminiscent of the well known question “have you stopped beating your spouse?””

        Kind of like this one?

        “We are deeply disturbed by the recent escalation of political assaults on scientists in general and on climate scientists in particular. All citizens should understand some basic scientific facts.”

        “Many recent assaults on climate science and, more disturbingly, on climate scientists by climate change deniers, are typically driven by special interests or dogma, not by an honest effort to provide an alternative theory that credibly satisfies the evidence. ”

        http://www.guardian.co.uk/environment/2010/may/06/climate-science-open-letter

      • Bruce, 7/8/11, 5:16 pm, sensitivity II

        Nice catch on last year’s US National Academy of Sciences open letter in defense of their shameful endorsement of AGW.

        We should view the Academies as holding a professional duty to the public, a fiduciary duty, to endorse scientific claims only after thorough, published review. In the process and afterwards, they should provide a generous, open forum for dissent.

        USNAS compares AGW to the Big Bang, the now failing model that extrapolates beyond the measurable to the infinitesimal and to infinity. They actually write that atmospheric gasses trap heat! They endorse fear, the hysteria of climate catastrophe, of a rising sea level caused by man soon to swamp coastal communities. They equate skepticism, a virtue among scientists, with McCarthyism. They argue that because evolution is overwhelmingly accepted, AGW deserves endorsement. This group isn’t worthy of the name Science. It is certainly not engaged in scientific work.

        They not only blindly certified what is scientifically unsupportable, but did so in the names of their vast membership rolls. They contributed nothing to science except to endorse by their actions that science is determined now by consensus, the everything-out-of-nothing Big Bang of Epistemology. They could have at least been consistent in their model of consensus science by putting the matter to the open vote of their members.

      • Odd how the believer community needs to make false claims about ‘assaults’ and ‘threats’ and then complains when skeptics write respectful letters.

      • David,
        You are being silly. Science is supposed to be open and transparent. And people are supposed to get credit for the work they do. Steve McIntyre has found plenty of problems in the work of climate scientists. Sometimes they remember to thank him and sometimes they do not. By posting the error on a blog, it is more likely to be acknowledged and corrected, and the person who found the error is guaranteed to get credit for his contribution to the science. Nic has made a valuable contribution. Don’t try to muddy his contribution by making worthless accusations.

    • Peter Davies

      I agree that Richard’s remarks should have been moderated as non-technical and therefore irrelevant to this thread.

      However, I have been most interested in this discussion, especially on the appropriateness of the Bayesian approach (in which relative probabilities are assumed to be linearly additive and each statistic capable of inversion) for studies of climate data sets when not much is known at all about the likely shape of the probability distribution function.

  11. Relationship between observed CO2 concentration and observed global mean temperature data.

    http://bit.ly/qesusQ

    AGW has failed for 13 years!

    Is climate sensitivity a negative?

    • Rattus Norvegicus

      No.

      • I take out 2011 as it’s not yet a baked pie. It’s just starting to bake. I don’t think you’re going to like this 2011 pie as much as the year continues to pass by.

        And the one-note HADCRUT is getting old. Let’s try UAH

      • JCH, take a look at Dr. Spencer’s latest post. Seems that UAH is now getting funding from Greenpeace!

      • Well, as an ExxonMobil shareholder, I can’t say I think we’re getting our money’s worth!

        If Dr. Roy keeps this up, he’s going to end up on RC’s blogroll.

      • It’d be funny to see what Girma does when we get a couple more hot months.

      • It’ll be even funnier when the current spike subsides and the UAH anomaly falls right back down below zero. I’ll give it five months.

        Well, funny for climateers, not funny for farmers or hungry people.

      • Compare the daily temps here – http://discover.itsc.uah.edu/amsutemps – 2011 is distinctly cooler than 2010. Cooling occurs in a La Niña – so where is ENSO going?

        ‘Stay tuned for the next update (by July 9th, or earlier) to see where the MEI will be heading next. With its unprecedented one-month rise for this time of year, La Niña conditions have actually almost expired in the MEI sense, making ENSO-neutral conditions the safest bet for the next few months. However, a relapse into La Niña conditions is not at all off the table, based on the reasoning I gave in September 2010 – big La Niña events have a strong tendency to re-emerge after ‘taking time off’ during northern hemispheric summer, as last seen in 2008.’ http://www.esrl.noaa.gov/psd/enso/mei/#discussion

      • I don’t know where ENSO is going. Some models indicate a return to La Niña and some to El Niño. That’s helpful.

        On back-to-back events, two-peats appear more common for El Nino.

        Does Chief Hydro do secret voodoo ceremonies to influence ENSO?

      • As usual – incoherence and insults. Two-peats? I assume you mean peaks – but I fail to see the meaning or relevance in that either.

        The models are predicting neutral conditions – http://www.bom.gov.au/climate/ahead/ENSO-summary.shtml – pretty much as expected for the SH winter.

        But I quoted the world’s leading expert on ENSO – and you are an idiot.

      • Roy Spencer has a precursor dataset which tells him SSTs will be heading from La Nada to La Nina in the next couple of months. This is in line with what my own simple model predicted last year.

        I know you don’t like predictions Chief, but farmers do.

      • I read somewhere a paper comparing ENSO models with a random walk – the walk performed better 3 months out. If Roy can predict ENSO as more than a seasonal probability, he is worth a Nobel Prize.

        The SOI seems to be a leading indicator some 7 months in advance – and this was in record positive territory this year.

        There are physical conditions that increase the probability of a return to La Niña in the southern spring. The cool phase PDO, which is associated with more intense and frequent La Niña. The large La Niña – Klaus Wolter puts 2011 at 2nd in the record, I believe – tends to carry over into the second year, as was said in my quote. I would wager on a return to La Niña this year – it seems much better than even odds.

        I keep telling clients that it is going to flood in north east Australia for the next decade or so. It is good for my business – but seems to me to be simply an acknowledgement of the current state of the Pacific and that these states tend to last 20 to 40 years.

      • Klaus Wolter has updated his MEI page.

        ‘Stay tuned for the next update (by August 6th, or earlier) to see where the MEI will be heading next. La Niña conditions have at least briefly expired in the MEI sense, making ENSO-neutral conditions the safest bet for the next few months. However, a relapse into La Niña conditions is not at all off the table, based on the reasoning I gave in September 2010 – big La Niña events have a strong tendency to re-emerge after ‘taking time off’ during northern hemispheric summer, as last seen in 2008. I believe the odds for this are still better than 50/50.’

        I think I said that yesterday. I’m impressed with my own perspicacity.

      • This is bad news for the US regional drought, as well as for the flooded regions created by the deflection of typical weather patterns.
        If this La Niña extends into the next growing season, we may very well see a significant impact on US grain production.
        This year will hardly be good.

    • I think there are so many variables in climate like CO2, SO2, carbon soot, bright sunshine, aerosols, clouds, orbital dynamics etc …

      For anyone to claim they can single out ONE variable that influences climate … they would have to be insane. Or dishonest.

      • Or, a statistician. Honest.

      • Thankfully the only people who single out a single variable are people who say it’s all the sun. Read AR4, or read a GCM: you’ll see CO2, SO2, soot, aerosols, clouds (and hence sunshine), orbital dynamics, etc. They all influence climate. The questions are

        1. Do we have them all
        2. are they all represented with enough accuracy

      • 1) No
        2) No

        For the record I don’t think it’s all the Sun, and I believe the other variables mentioned all play a part. I do think the solar contribution has been heavily underestimated though, and the co2 contribution heavily overestimated.

      • Agreed.

      • Guess what?

        No and No is the answer and you’ll find a good number of climate scientists who agree. The question is what do you do about it?

        throw rocks or build houses

      • Throw rocks until the shabby foundation has been excavated and rebuilt, then build the house.
        ==============

      • Throwing rocks will not excavate a foundation; it will make a pile of rocks.

      • Steve,
        they need to step up and say so. Once the uncertainty is recognized, accepted and publicly admitted, progress can be made.

      • Steven Mosher:
        “…2. are they all represented with enough accuracy…”

        Tallbloke
        “…2) No…”

        Steven Mosher
        “No and No is the answer and you’ll find a good number of climate scientists who agree.”

        I am just curious how many is a “good number”? I don’t think a single member of the “consensus” would agree that climate variables as a whole “are not represented with enough accuracy” to justify the policy prescriptions they favor. I haven’t read of a single scientist one could describe as a member of the consensus who doesn’t think our knowledge of the variables is sufficient to justify carbon taxes, cap and trade, regulation of CO2 by the EPA, etc.

        I guess we could quibble about the meaning of the word “enough,” do we ever really know enough metaphysically speaking? But I think in the context of the climate debate, belief that we do not know the variables with “enough accuracy” is limited to the skeptic and lukewarmer camps.

      • Mosh at 3.40

        Answers are probably No and No, but they’re great questions.

        tonyb

      • IPCC: “Carbon dioxide is the most important anthropogenic greenhouse gas”

        There we go … a big huge lie right off the bat. No mention of water vapor.

        http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf

        If only they had considered bright sunshine hours, instead of just TSI. There were lots of global dimming/brightening papers for them to consider.

        I do believe the fix was in and they really only did consider CO2. And they did a piss poor job of pretending to investigate other variables.

      • Like the idiots who believe that it is only CO2 or only the Sun, there are always the morons who think that it is only the sunshine.

        takes all kinds of idiots.. next up, iron sun idiot, magnetic field idiot, air pressure idiot, sea current idiot, sea salt idiot, cow fart idiot.

        Every hobby horse you can imagine.

        I’m interested in the mental shift people make.. it goes like this.

        Neglected: “hey, you forgot to mention the one thing that I really know a lot about”
        Science: “It’s in the footnotes”
        Neglected: “hey, it’s way more important than that”
        Science: “shut up and publish”
        Neglected: “The Thing I know is THE MOST IMPORTANT THING.”

        It’s how the “marginalized” react that interests me.

        Watch what happens now. It’s highly predictable, because the marginalized can be manipulated rather easily. Most will act exactly as I predict.

      • Can’t find bright sunshine mentioned anywhere in IPCC when I search.

        Can you find it?

        I’ll go read it if you find it.

      • Science: “shut up and publish” (if we let you!)

        Science: “go ahead, we’ll publish your crap even if Tijlander is contaminated and upside down”

      • Okay, I am a cow-fart idiot. It’s in my DNA. My dad was a veterinarian. We used to go to farms to treat cows for bloat. If there were any farm kids around, dad would entertain them by striking a match to ignite the gas venting out of the bloat needle. The cow would flare off like a Kuwaiti oil rig, dad would dive for cover as though the cow was about to blow, and the kids would be rolling on the ground laughing their heads off. Because of this I have decided the kill off of the American buffalo in the last half of the 19th Century reduced GHGs, and led to a temperature drop. This is, of course, a crazy theory, but I’m a cow-fart idiot and I’m sticking to it.

      • A cow-fart surely has the unique advantage of being both the foundation of the theory and the appropriate reaction to everyone else’s.

      • DeNihilist

        JCH really? If true, man I woulda loved to have been around for one of your dad’s visits!

      • Ya gotta love the sophistication and eloquence of the technical threads. So much better than when the low life non-scientists are allowed to comment.

      • DeNihilist

        Bruce, the way I understand that statement is that water vapour is not really anthropogenic. So what heating gas has risen due to man more than all others? CO2.

        Now in light of the paper by Dr Curtin that I brought to Dr. Loehle’s attention the other day, about mankind releasing upwards of 300 Gtons of water vapour annually into the atmosphere, then just maybe we have to start to look at water vapour as an anthropogenic GHG now.

      • Absolutely. Think about all those massive sprinklers in the fields spraying water into the air and always some of it will increase humidity.

        http://wattsupwiththat.com/2011/04/04/an-investigation-of-ushcn-station-siting-issues-using-a-cleaned-dataset/

        “This study has established that there are significant differences in temperature trends between weather stations sited at urban, agricultural, and low impact locations. In addition, it has been shown that urban stations exhibit significantly higher trends, which are created by numerous urban factors that are not related to global CO2 levels. Evidence has also been provided that agricultural locations experience a warming trend related to increases in humidity.”

      • steven mosher, 7/7/11, 3:40 am, sensitivity II

        One principle of system science, or modeling if you prefer, is that given enough variables, a model can be tuned to match any record in its domain, or to match any other model. Another principle of system science is parsimony. The trick is not to add variables, but to decrease the number of variables to a sufficiency for the desired predictive power.

        Do we have them all? AR4 describes a surfeit of variables, some quite wrong (especially feedback and equilibrium), and at the same time it is missing essential variables (especially dynamic cloud cover) and observations (esp. the amplification of the Sun).

        Are they all represented with enough accuracy? This is a “beating your wife” question, because the GCMs have no predictive power.

        The Sun is sufficient. It’s as good as the best smoothed estimate of temperature over the history of thermometers, and it requires only five parameters. If anyone doubts that read SGW by clicking on my name, and do get back to me if you find an error. You’d be the first.

      • And of course the idiot who uses the wrong datasets always shows up.
        Let’s not forget him.

        here’s a clue. To explain the climate you need to do more than manufacture correlations between TSI and a metric (a global temperature INDEX), especially when that metric has no physical meaning.

        You have to explain temperature, OHC, currents, rain, winds.. Lots of stuff.

        The first error you made was easy to spot, and it wasn’t a dataset error.

      • The biggest idiot of all is the one who thinks the IPCC got important things right.

      • steven mosher, 7/8/11, 11:46 am, sensitivity II

        I infer from your first sentence that you welcomed me as the idiot using the wrong datasets. Your last sentence implied that you caught an error in my model, but that it wasn’t a dataset error. If you had actually read SGW instead of guessing about it, you’d have realized that the datasets I used were the Wang et al. (2005) TSI model and the HadCRUT3 GAST model. IPCC explicitly endorsed these sets in AR4 as the best of their class. I also adopted the same running average analysis technique featured by IPCC in its analysis of Global Temperature Trends. AR4, Technical Summary, Figure TS.6, p. 37.

        You misunderstand correlation. Science doesn’t manufacture correlations, and of course I manufactured none. Correlation, and especially the correlation function, are measurements. In science, measurements are observations quantified and compared to standards. These are also called facts.

        You may not realize that science is about creating models of the real world, models which embody cause and effect. Correlation guides a scientist where to look for C&E, and provides information on how to model it. Candidate causes for any model need to replicate the same correlation function. Studying correlations is a fertile field IPCC has left mostly unplowed.

        When a scientist develops a Fourier transform of a dataset, did you know that each coefficient is determined by correlation? The Fourier transform contains information about the frequency response of a system and how it might be modeled. Correlation is the method by which the 11 and 22 year solar cycles are analyzed. It’s the way we decide whether climate is influenced by orbital cycles. These are just some of the applications of correlation. It expresses the relationships between datasets and the internal structure of a single dataset. It is mathematical, it occurs in nature, and it is never as you imagined manufactured by an investigator.

        Nor did I create anything so simple as to develop a metric. So, there could be no physical meaning for your imagined metric. Investigators of modest skill will frequently express relationships between datasets in terms of a correlation number. This is usually regarded as a metric. What must be developed in modeling is a correlation function, which contains the critical information for causation: that cause must lead effect is a principle of science. The usual metric is one point on the correlation function, that for zero lag.

        That is not to say that the correlation function is sufficient. One needs to analyze the relationship between datasets, which is best visualized by scatter plots. A functional fit between a pair of datasets is a sound basis for establishing a transfer function between the two. The functional fit is determined by correlation techniques. A transfer function is the mathematical relationship between the inputs and outputs of a stage of a thermodynamic flow model, as distinguished from IPCC’s radiative transfer model.

        What I demonstrated was that Earth’s 140 year GAST, which I’d estimate might require roughly 30 parameters to represent (the number will depend on the efficiency of the orthogonal function set), can be represented as accurately as IPCC’s smoothed estimator itself with a linear transfer function with just 5 parameters. Each of those parameters is estimated by correlation.

        The physical meaning for the transfer function is likely in the ocean currents, which IPCC has neglected. Developing that relation between ocean currents and the lags in the transfer function is the next task.
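
        For what it’s worth, the lead-lag use of the correlation function described above can be illustrated in a few lines of Python on synthetic data (the series and the imposed 5-sample delay are invented for illustration):

        ```python
        import numpy as np

        def lagged_corr(x, y, max_lag):
            """Correlation of x against y at each lag; positive lag means x leads y."""
            out = {}
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    a, b = x[:len(x) - lag], y[lag:]
                else:
                    a, b = x[-lag:], y[:len(y) + lag]
                out[lag] = np.corrcoef(a, b)[0, 1]
            return out

        # Synthetic test: y is a noisy copy of x delayed by 5 samples, so the
        # correlation function should peak at lag +5 (cause leading effect).
        rng = np.random.default_rng(0)
        x = rng.normal(size=500)
        y = np.concatenate([np.zeros(5), x[:-5]]) + 0.3 * rng.normal(size=500)

        corrs = lagged_corr(x, y, max_lag=20)
        best = max(corrs, key=corrs.get)
        print(f"peak correlation {corrs[best]:.2f} at lag {best}")
        ```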

      • Mosh, I want to publicly apologize for being so wrong about you!!

        ” To explain the climate you need to do more than manufacture correlations between TSI and a metric ( Global temperature INDEX)
        Especially when that metric has no physical meaning.”

        Yup, there you are admitting that the Global Temperature Index has no meaning!! I could kiss you!!

      • –> “Thankfully the only people who single out a single variable are people who say its all the sun. Read AR4, or read a gcm you’ll see C02, S02, soot, aerosols, clouds (and hence sunshine) orbital dynamics.. ect. They all influence…”

        Care to restate that? No one is saying ‘it’s all the sun.’ What is being said by those who understand what causes global warming is simple–e.g., nominally, it’s the sun, stupid.

        We know, for example, that gamma rays have an influence on the climate. And, the sun plays a role in that. Essentially, the sun is the only independent variable.

        Since most of the rise in temperature during the previous period of global warming occurred during the recovery from the LIA, and occurred with no assistance from humans, it can be considered ‘a natural component’ of global warming. That amount is equal to about 0.5°C/100 years. This estimate is based on the Earth’s experience from 200 years ago to the present.

        Accordingly, we can estimate the temperature 100 years from now to be 0.5°C warmer than it is today, plus or minus 0.2°C, depending on the effects of multidecadal oscillation (or not – the Earth also could descend into another ice age. It’s happened before and may be overdue).

        Science tells us, that, “the temperature in 2100 depends greatly on the combination of both effects,” i.e., “an almost linear increase of about +0.5°C/100 years and a multi-decadal oscillation of amplitude 0.2°C.”

        Given this relationship, there is no room left for any other causal factors, and the hysteria of impending global warming catastrophe is exposed for what it is: nothing but a hoax and a scare tactic.

        Although fear of global warming or cooling is more a sign of personal mental problems and mass mania, it nonetheless is far more logical to fear global cooling (which kills people) than global warming (warm periods having been the most favorable times for all forms of life over the last 10,000 years).

        Moreover, climate change is nominally due to solar activity. It’s the sun, stupid. And, the sun is anomalously quiet and has been for quite a while now. And, we’re now in a period of global cooling. The cooling ocean temperatures are a fact. Given these facts, fear of global warming is simply a sign of neurotic behavior by the neoecologists.

        No one can hide the decline. Have you heard about the freezing cold going on in South America of historical proportions?

        In one of the great examples of cerebral contortionism I’ve ever seen, the authors of the study below conclude, saying:

        “Lastly, one can invert the title of this paper and ask `Does the occurrence of lower/higher solar activity make a cold/warm winter in Europe more likely (than the climatological mean)?’ Our results strongly suggest that it does, which has implications for seasonal predictions.” (Ibid.)

        We must do our best to cut through this obfuscation and filet the mumbo-jumbo if we want to get to the meat of the matter, Ok?

        What we see is yet another example of erstwhile authors being forced to dance and essentially proclaim, without the slightest scientific backing, that the role of the sun on the weather in the UK may have a bearing even on the rest of Europe but certainly not on the rest of the Northern Hemisphere and certainly not the rest of the globe. Accordingly, these dancing authors have tapped out a fiat license allowing global warming alarmists to endlessly continue to propagate groundless fears of runaway global warming due to human causes – even though the UK and possibly the rest of Europe may freeze over.

        Nevertheless, even this pitiful example of fiat science does not allow the authors to dismiss or explain away the essential and incontrovertible fact that all climate realists know:

        “Studies of isotopes generated in the atmosphere by galactic cosmic rays show that the Sun has been exceptionally active during recent decades. This grand solar maximum has persisted for longer than most previous examples in the cosmogenic isotope record and is expected to end soon.” (Ibid.)

        As a result, the authors essentially are begging you to use your own common sense. They’re actually telling you to turn the title of their own study on its head if you want to discover the real truth. In other words, you must provide your own answer to the real question, the one that fascist academia will not allow the authors to print in big block letters, e.g.,

        THE BIG QUESTION: Were the relatively warmer European winters over the last several decades associated with global warming the result of relatively higher — and indeed ‘exceptionally’ higher — solar activity?

        THE ANSWER: If you are a climate realist, your answer will of course be: ‘Yes, it’s the sun, stupid.’

      • 1. Do we have them all
        2. are they all represented with enough accuracy

        Exactly. Write them in beautiful type, print them on high quality paper, frame them, and hang them on the wall.

      • Mosher: “clouds (and hence sunshine)”

        “It is surprising that a statistically significant increase in cloudiness is not accompanied by a simultaneous decrease in sunshine at Ebro Observatory over the past century. The explanation may lie in a change in the proportions of the cloud types. We have shown how high clouds, less dense and optically more transparent than low clouds, have increased during the last part of the century, with perhaps little effect on the sunshine records.”

        At Ebro, cloudiness is not a replacement for sunshine.

        http://www.iac.es/folleto/research/preprints/files/PP08038.pdf

      • Hi

        Nice call and interesting paper. Rainfall in the Ebro Valley seems to have declined by about 4% since 1950 – as well as elsewhere in Spain. Although everyone there is now complaining about the rain increasing.

        I would look to the North Atlantic Oscillation rather than to global warming as a primary cause – which is driven by the long term drift of solar UV warming the ozone layer. Mid level temperature changes translate into pressure changes in the troposphere changing the tracks of storms spinning off the polar front.

        It is an interesting cloud complexity – if you get high cloud instead of low there is more sunshine.

        Cheers.

      • I kind of like how the Ebro sunshine hours graph on page 3 tracks the up and down of HADCRUT (for example).

        UK Sunshine hours does as well.

        Pick UK, Sunshine, Annual

        Up 100 hours per year from 1929

        http://www.metoffice.gov.uk/climate/uk/actualmonthly/

      • steven,
        When did clouds get equated with sun?

      • He brings it up when I ask whether the change in bright sunshine hours over the 20th century was analyzed by the IPCC (it wasn’t), and after he tries to claim I’m talking about TSI (which I’m not).

  12. Dear Dr. Curry:

    In the equation that you presented in this post, ΔT is taken to be the surface temperature. Dr. James Hansen, on the other hand, takes ΔT to be the Earth’s temperature for the same equation, which is not the same as the surface temperature. So which one is right?

    • Nabil,
      I posted an article here on the derivation of the feedback equation in high-order form, which explains how it can be related to average surface temperature.
      http://rankexploits.com/musings/2011/equilibrium-climate-sensitivity-and-mathturbation-part-1/
      The linear form, used above, just entails truncation of the higher order terms in temperature. I hope you find it useful.
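
      As a quick numerical check of how small the truncated higher-order terms are, here is a sketch of the linearisation of a grey-body Stefan-Boltzmann outgoing flux about a reference temperature. The emissivity and reference temperature below are illustrative choices, not values taken from the linked article.

      ```python
      sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m-2 K-4
      T0 = 288.0               # illustrative reference surface temperature, K
      eps = 0.61               # effective emissivity, chosen so eps*sigma*T0**4 ~ 240 W m-2

      def flux(T):
          """Outgoing flux under the grey-body Stefan-Boltzmann model."""
          return eps * sigma * T**4

      lam = 4 * eps * sigma * T0**3   # linearised feedback parameter, ~3.3 W m-2 K-1
      for dT in (0.5, 1.0, 3.0):
          exact = flux(T0 + dT) - flux(T0)
          print(f"dT = {dT:3} K: exact {exact:6.3f}, linear {lam * dT:6.3f} W m-2")
      ```

      Even at a 3 K perturbation the linear form differs from the exact quartic by only a percent or so, which is why the truncation is usually harmless.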

      • Paul,

        Good math. But is the model correct? Does the Stefan-Boltzmann equation apply for the Earth as a whole, as you have assumed? I used the equation and it works fine for land, but not for surface water, because radiation from surface water is controlled by the convection heat transfer coefficient and not by the surface water temperature. Please take a look at Earth’s Magic posted on http://www.global-heat.net and compare the mathematical models.

  13. Nic Lewis,
    Another excellent article. This does seem to rule out the possibility that the adjustment of the F&G06 results was due to lazy or unthinking application of a standardised procedure.
    I do believe, however, that it would be a good idea to avoid “piling on”. I, and I suspect quite a lot of others who visit this site, am truly interested in the response from Gabi Hegerl. There is a danger of closing down any hope of a dialogue if she is forced into a corner. At the moment, Dr Hegerl still has the possibility of saying:- “OK, there was a serious error in QC, and we will fix it OR the explanation for the apparent discrepancy is…”
    It would be a shame to see another “circling the wagons” response to this, given its importance.
    Paul

    • Paul

      I agree. There have been two excellent articles by Nic, and now is the time to hear a response before such as Gabi Hegerl are painted into a corner and perhaps feel unwilling to make a response.
      tonyb

      • Please understand that in July people may be on vacation or traveling to conferences. And I also expect there to be internal discussion and analysis before a response is provided. So far, I would say that Gabi Hegerl has been honestly engaging on this issue, she made a comment here on a previous thread and has been interacting with Nic. So we need to give this some time.

      • I second Judy’s comment

      • Judith and Nick
        Fair enough.

        tonyb

      • While we wait on the response, we may note what Knutti and Hegerl wrote in their 2008 review article in Nature Geoscience. This excerpt is from the section “Limitations and ways forward”:


        What are the options for learning more about climate sensitivity? Before discussing this, a methodological point affecting estimates of S needs to be mentioned: results from methods estimating a PDF of climate sensitivity depend strongly on their assumptions of a prior distribution from which climate models with different S are sampled [Frame 2005]. Studies that start with climate sensitivity being equally distributed in some interval (for example 1–10 °C) yield PDFs of S with longer tails than those that sample models that are uniformly distributed in feedbacks (that is, the inverse of S (refs [Murphy 2004 , Forster 2006])). Truly uninformative prior assumptions do not exist, because the sampling of a model space is ambiguous (that is, there is no single metric of distance between two models). Subjective choices are part of Bayesian methods, but because the data constraint is weak here, the implications are profound. An alternative prior distribution that has been used occasionally is an estimate of the PDF of S based on expert opinion [Forest 2002, 2006, Morgan 1995] (Fig. 3). However, experts almost invariably know about climate change in different periods (for example the observed warming, or the temperature at the LGM), which introduces concern about the independence of prior and posterior information.

        Another option that makes use of this Bayesian framework is to combine some of the individually derived distributions to yield a better constraint [Hegerl 2006, Annan 2006]. Combining pieces of information about S that are independent of each other and arise from different time periods or climatic states should provide a tighter distribution. The similarity of the PDFs arising from various lines of evidence shown in Fig. 3 substantially increases confidence in an overall estimate. However, the difficulty in formally combining lines of evidence lies in the fact that every single line of evidence needs to be entirely independent of the others, unless dependence is explicitly taken into account. Additionally, if several climate properties are estimated simultaneously that are not independent, such as S and ocean heat uptake, then combining evidence requires combining joint probabilities rather than multiplying marginal posterior PDFs [Hegerl 2006].

        Several of the references are to papers discussed in this and the previous thread.
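
        As a toy illustration of the combination argument in the excerpt – independent lines of evidence multiply as likelihoods and tighten the range – consider two hypothetical Gaussian constraints on S (centres and widths invented for illustration):

        ```python
        import numpy as np
        from scipy.stats import norm

        s = np.linspace(0.5, 10.0, 2000)

        # Two hypothetical independent likelihoods for S.
        L1 = norm.pdf(s, loc=3.0, scale=1.5)
        L2 = norm.pdf(s, loc=2.5, scale=1.2)

        def normalise(p):
            return p / np.trapz(p, s)

        def range_5_95(p):
            cdf = np.cumsum(p) * (s[1] - s[0])
            return s[np.searchsorted(cdf, 0.05)], s[np.searchsorted(cdf, 0.95)]

        combined = normalise(L1 * L2)   # independence => multiply the likelihoods
        for name, p in (("first", normalise(L1)), ("second", normalise(L2)), ("combined", combined)):
            lo, hi = range_5_95(p)
            print(f"{name:>8}: 5-95% range {lo:.1f}-{hi:.1f} K")
        ```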

      • That is a very good comment. It adds perspective. It presents some difficulties of actually using Bayesian inference, and shows that the authors attempted to address those difficulties.

        Thank you for adding it to the discussion.

  14. I would like to make clear that Jonathan Gregory, the lead author of Gregory 02, is away at present, and therefore nothing should be read into his not responding to my points.

    • Point taken. I was struck again just now by the gap between Steve McIntyre starting Climate Audit in Jan 05 and proper engagement with an establishment climate scientist (not that she’s seen that way now!) in Aug 06. Patience remains a virtue in this game.

  15. Joe Lalonde

    Nic,

    There is a great deal of bias here, with the data being played around with to produce a warmer conclusion. Hence a funding bias has been created, and the peer review system has been biased along with it.
    The majority of monitoring stations are in the northern hemisphere, where the populations are.
    Over the years many stations have changed, closed, moved, etc., which gives a different reading in every different region.
    Creating a single formula for temperature is a fool’s errand given the actual movement of regional events.

    The majority of storms or weather anomalies occur with the movement of heat and cold to different areas, which then generates events either not seen before or more violent.
    Currently colder events are showing up with precipitation events that seem to be moving more slowly, which gives more regions longer precipitation, and that definitely affects areas for plant life and growth.

  16. Nic,
    The most appropriate formulation for the distribution function, especially for policy decision-making, is an open question. There was no trickery in the IPCC report – unless you consider full acknowledgement of the mathematical approach in that particular graph, the absence of quantitative judgment, the reference to Frame et al 2005, and the current literature discussing all of this, to be trickery.

    Choosing a prior that is uniform in the estimated quantity means that the need to choose bounds (and, similarly, the selection of an upper bound) can generate inconsistent results. That is not a new discovery (of yours). The scientific literature continues to evolve toward the most rational and objective approach to yielding stronger constraints.

    You would do better to discuss why FG were not sure of their result for various practical reasons, e.g. the net radiative flux imbalance at the top of the atmosphere has only been measured for a very short time, and their study doesn’t include albedo forcings from melting ice – if you’re actually as interested in their results as you pretend to be.

    Regardless, the question is that of the most useful approach for policymaking. You and Judith Curry seem to get very excited when you discover what has already been discussed in the literature for years. If you were current, you would know that you have made much ado about nothing.

    Your email is ridiculous.

    • Martha

      “There was no trickery in the IPCC report”
      I don’t believe Nic said there was – strawman

      “……if you’re actually as interested in their results as you pretend to be.”
      What makes you think he is pretending – snark

      “Regardless, the question is that of the most useful approach for policymaking”
      No – that is your most useful question. Nic is talking about the science. Diversion.

      Usual stuff, Martha, and as usual, pretty weak and uninspiring. However, welcome back – you’ve been away for a while.

    • “Choosing a prior that is uniform in the estimated quantity means that the need to choose bounds (and, similarly, the selection of an upper bound) can generate inconsistent results. That is not a new discovery (of yours). The scientific literature continues to evolve toward the most rational and objective approach to yielding stronger constraints.”

      Would that rational and objective approach involve using an upper bound for climate sensitivity of 18.5C and deeming it as equally likely as say, 1C or 2C?

      • Bingo! tallbloke.

        Where the heck did 18.5C per doubling come from, anyway? I daresay the history of the last 30 years would be enough to reject that value from consideration.

        Using a uniform distribution from 1–18.5 as representative of the prior (best current information) distribution was either stupid, ignorant, or dishonest. That people in the IPCC must resort to these shenanigans is additional evidence they cannot be trusted.

    • Steve Fitzpatrick

      “The most appropriate formulation for the distribution function, especially for policy decision-making, is an open question.”

      Please explain two things:
      1. Why do you think that an appropriate distribution function ought to be different for “policy decision-making” than for, say, a scientific analysis?

      2. Why do you think it is OK for AR4 to use different distribution functions for different studies?

      IMO, your comment is what is ridiculous here, not Nic’s observation of inconsistency.

    • John Carpenter

      “Choosing a prior that is uniform in the estimated quantity means that the need to choose bounds (and, similarly, the selection of an upper bound) can generate inconsistent results. That is not a new discovery (of yours). The scientific literature continues to evolve toward the most rational and objective approach to yielding stronger constraints.”

      I suppose you have literature references showing whatever it is you’re trying to say here?

    • Martha, imagine you believe that your child is sick. You wish to measure the child’s temperature.
      In your house you have two thermometers, one that reads temperatures from −100 °C to +150 °C and the other from 35 °C to 42 °C.

      Which one do you use?

  17. Martha | July 8, 2011 at 9:15 am
    “There was no trickery in the IPCC report” Good grief!

    “You and Judith Curry seem to get very excited when you discover what has already been discussed in the literature”.

    Martha, you have a pop about Judith and Nic! What do you think the internet has been aghast about for the last two days? “2005…the absence of quantitative judgment”! Climate sure works slow!

    We have seen that statistics in climate science is very weak, and the sure thing is that had the scientists involved used top-flight, real statisticians, most of the arguments here would not exist!

    Maybe this is not a technical comment, Judith, but Martha thinks she can get away with that?

    • Gary Mirada

      Martha’s sole purpose in life is to try to discredit anything that attacks her belief in AGW. But like most zealots she simply undermines the case she wishes to promote. RobB has identified the content of Martha’s offering to the debate. A bit less than zero according to my credibility meter. But I like it when Martha drops by. It reminds us all that we must not give up the fight.

      Seguimos luchando (we keep on fighting)

  18. Keith Jackson

    Nic – Another excellent post. For what it’s worth, I’ve developed a 2-hemisphere analog of the equation Q – F = λ·ΔT. It allows aerosol forcing changes to be estimated from observations, using the fact that aerosols are mostly in the NH. For changes from 1861–1900 to 1957–1994, with non-aerosol forcing change = 1.41, ocean heat uptake change = 0.08 (new Levitus data), and temperature change = 0.31, it computes an aerosol forcing of −0.42, for a net forcing of 0.99 and a computed climate sensitivity of 1.3 (forcings in W/m² and temperatures in deg C). It is consistent with your speculations, and is also consistent with the results of a broader Monte-Carlo survey based on estimated uncertainties in all the input parameters. (A paper which yields these results was, not surprisingly, rejected for publication, but I can e-mail it to you if you’re interested.)
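
    For anyone wanting to check the arithmetic, it goes through under the usual assumption of about 3.7 W/m² of forcing per CO2 doubling (the labels below are mine, not Keith’s):

    F2X = 3.7                 # assumed forcing per CO2 doubling, W/m^2
    non_aerosol = 1.41        # non-aerosol forcing change, W/m^2
    aerosol = -0.42           # inferred aerosol forcing change, W/m^2
    ocean_heat = 0.08         # ocean heat uptake change, W/m^2
    dT = 0.31                 # surface temperature change, deg C

    net_forcing = non_aerosol + aerosol            # 0.99 W/m^2
    S = F2X * dT / (net_forcing - ocean_heat)      # 3.7 * 0.31 / 0.91
    print(round(net_forcing, 2), round(S, 1))      # -> 0.99 1.3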

    • Keith
      Many thanks for your comment and for sending the paper

    • Steve Fitzpatrick

      Could you send a copy to me as well (or ask Nic to)?

      • Keith Jackson

        Steve,
        I’ll be happy to send you a copy if you’ll post your address here, or if Nic has your address it’s fine with me if he sends you a copy.

      • Steve Fitzpatrick

        sfitzpa at comcast dot net

  19. CLIMATE SENSITIVITY BY A SKEPTIC.

    1) Skeptics Position on Global Warming
    The last 130 years global mean temperature pattern continues with a global warming rate of 0.6 deg C per century: http://bit.ly/pmOEot

    2) IPCC position on Global Warming
    The last 30 years global mean temperature pattern continues with a global warming of 1.5 deg C per century: http://bit.ly/qHDBZJ

    We live at an interesting time, in which the above IPCC position on global warming is supported by the world science community and by most world governments.

    The ratio between the two projections is 1.5/0.6 = 2.5.

    As IPCC’s climate sensitivity is 3, the skeptics’ climate sensitivity is 3/2.5 = 1.2

    CLIMATE SENSITIVITY = 1.2

    • Recently at ICCC6 in Washington, Dr. Roy Spencer suggested in his debate with Dr. Scott Denning that the value seems to be 1.3°C per 2xCO2 based on his analysis of temperature observations.

      http://bit.ly/nqKNTU

  20. Here’s the problem with the Bayesian approach as we see it being used here: prior information is being used to incorporate results from previous studies for the purpose of demonstrating that later data—despite being a reduced sample size—effectively predicts global warming. The Bayesian approach is being used to confirm a controversial preconception—i.e., personal opinions derived from supposed experts—about the causes of global warming. Nothing in the Bayesian approach helps us get around the problems of bias, lack of controls and demonstrable imprecision involved in the gathering of both the prior and the later information.

    • I’m not so sure that a Bayesian approach is a liability.

      In theory, properly used, a Bayesian analysis ought to be able to combine the uncertainty distributions from two experiments (the new one and the cumulative history of prior ones) into one posterior distribution that best fits all the data. A big drawback, however, is that prior bad estimates pollute the process and take a while to dilute. But if you compute a PDF using the new study in isolation, then a PDF from a Bayesian combination, and compare them, some things might be learned.

      However, Bayesian analysis can be useful to flush out those of questionable honesty or competence. When ridiculous prior distributions are used in analysis, everyone’s hands on that analysis are now indelibly tainted. Tell me there is no benefit in that.

  21. Alexander Harvey

    Some or much of this is in Gregory et al (2002).

    I think that the caption to the figure of interest which starts:

    “FIG. 2. Probability distribution for the effective climate sensitivity …”.

    is certainly misleading: what it asserts is unfortunately not the case.

    As I read it, Figure 2 is an error distribution for the observable S*, not a distribution for the unobservable parameter of interest S. These are quite different things. One could convert the distribution of S* into a likelihood for S, given a model for the errors in the observable S*.

    The way they seem to have produced the distribution for S* is model-free, but Nic has supplied a model that he, the authors and the IPCC WG1 could have used.

    If the caption were to be believed and the curve represented the PDF of the sensitivity S, division by its prior would have yielded a likelihood function. Whereas the correct approach is to determine the relative likelihoods for differing values of S by asking the question: “Given S, what is the likelihood of the error distribution for S*?” This is what the authors did not do, yet they captioned the figure in a way that indicates otherwise.

    To that degree the authors must, I think, be at fault.

    If they commented on the precise nature of the figure somewhere in the body of the text I did not catch it. Even if they did, I feel that their result would be defective and ambiguous as opposed to just defective.

    To digress:

    Missing this step is commonplace as it “normally” doesn’t matter as the error distributions are commonly “normal”.

    To miss this step when the distribution for S* is obviously highly skewed is regrettable. I think that some alarm bells should have gone off at some stage prior to publication, or in the minds of the readership.

    In my mind this makes for a very long list of people who did not see this to be an error. The authors, the reviewers, the readership, WGI, and all those few that actually read that chapter and referenced the sources.

    Nic did spot it so all of the credit goes one way.

    However I do think that due blame needs to be apportioned appropriately, and that means all of those who let it stand unchallenged.

    Alex

    • Hoi "Bodge" Polloi

        However I do think that due blame needs to be apportioned appropriately, and that means all of those who let it stand unchallenged.

      You don’t find it if you don’t look for it. As far as I know there was no heretic peer in the review team, let alone in the IPCC.

      • Alexander Harvey

        If I be right in believing that some or much of this goes back to the original paper, then the publisher of the “Journal of Climate” is definitely in the loop. As are the reviewers, who ought to be able to perform their professional task irrespective of their opinions.

        I am old fashioned enough to believe that along with protecting the editor from publishing garbage, they must act to hone, polish, and correct the paper under review.

        Personally I found this paper to be difficult to follow, which is to say that I found its style unhelpful and not to my liking. I do not think it is clearly written, but I am not the target readership. I suspect I am not alone, else someone would have picked up on this before Nic did.

        I think clarity and correctness are issues that reviewers need to address. Whoever they were and whatever their views I think that either I am wrong or they failed to discharge their duty to the editor, publisher, authors and readership.

        Alex

      • “To miss this step when the distribution for S* is obviously highly skewed is regrettable.”

        The reason S* is skewed is that it is the reciprocal of the random variable Y, i.e. a ratio distribution. The skewing is completely natural in this case.
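
        In symbols: if Y has density f_Y, then S = 3.7/Y has density f_S(s) = f_Y(3.7/s)·3.7/s², and the 1/s² Jacobian is what skews a symmetric error in Y. A short Monte Carlo check (assuming, purely for illustration, Y ~ N(2.3, 0.7) and a 3.7 W/m² doubling forcing):

        import numpy as np

        rng = np.random.default_rng(0)
        Y = rng.normal(2.3, 0.7, 1_000_000)   # symmetric errors in the feedback
        S = 3.7 / Y[Y > 0]                    # drop the negligible Y <= 0 tail
        print(np.median(S), np.mean(S))       # mean > median: right-skewed in S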

  22. Is the use of the Bayesian approach merely climatology’s way of further undermining man’s hope and confidence in the usefulness of the scientific method to help overcome fear, superstition and ignorance through knowledge? It makes no sense that we can increase the precision of current information by augmenting it with prior information in a Bayesian analysis when we know that all of the information, past and present, has been corrupted. A Bayesian analysis cannot make bad data relevant.

    Moreover, it is the Likelihood Principle embodied in the Bayesian approach that flips the science of global warming on its head to begin with. If “[t]he inconvenient truth remains,” according to Philip Stott, that “climate is the most complex, coupled, nonlinear, chaotic system known,” then, like flipping a coin, it will not matter if we devise a mathematical model to combine the data of the last 100 flips with a dataset reflecting the 100 flips before that—even if you want to consider how many tails you got on the previous 1,000 flips—the odds for the next flip still will be 50-50.

    Unless we are purposefully trying to deceive the public the Bayesian approach will never be useful in the field of climatology for anything more than providing some insight into the probability of an exceedingly unlikely event. For example, while it may not seem very likely that we at this very moment are on the precipice of the next Ice Age, it nevertheless is very likely that all life on Earth will someday face just such an event—whether or not that fact is appreciated when it happens—and irrespective of the fact that there will be nothing that any living thing can do about it.

  23. Nic,

    I think that this might be a good paper for “The Teachers’ Corner” of The American Statistician, or perhaps for the journal Chance.

  24. David L. Hagen

    Judy, Nic Lewis, MattStat
    Why little reduction in IPCC’s projected warming uncertainty?
    See ferd berple | July 8, 2011 at 12:51 pm

    “average paleo temperatures for the earth over the last 600 million years have been almost entirely within the range of 11C – 22C ; current average temperature of the earth is 14.5C; . . .The only prior that could be assumed would be from -3.5C to 7.5C. (11-14.5, 22-14.5). The net effect of choosing 1C to 18.5C is to skew the sensitivity towards the higher range,”

    Ferd appears to give a key insight into why IPCC’s uncertainty range has not dropped much:
    1) Using the 1C to 18.5C prior rather than paleo temperature range of -3.5C to 7.5C prior.
    Suggest showing the impact of this difference in prior on IPCC’s projected warming.

    2) The second major issue would be underestimating natural variations (PDO, indirect solar, cosmic, etc.), resulting in overestimating anthropogenic contributions.
    (Recommend a major post on these issues.)

    • Paleo temperatures within that 11-degree range correspond to maximum CO2 concentrations of around 295 ppm at the top end. We are entering uncharted territory here. Do you not see that?

      • David L. Hagen

        WebHubTelescope: “uncharted territories” only applies to recent history. Bob Carter and Don Easterbrook show an order of magnitude higher CO2 in the geological record, correlated with very lush, highly productive periods. In the geological record, CO2 does not always track with temperature, and typically lags temperature changes; e.g., in The Myth of Dangerous Human-Caused Climate Change Carter reviews:

        Third, the ice core data show conclusively that, during natural climate cycling, changes in temperature precede changes in carbon dioxide by an average 800 years or so (Fischer et al, 1999; Indermuhle et al, 2000; Mudelsee, 2001; Caillon et al, 2003); similarly, temperature change precedes carbon dioxide change, in this case by five months, during annual seasonal cycling (Kuo, Lindberg and Thomson, 1990). Boucot, Xu and Scotese (2004) have shown that over the Phanerozoic little relationship exists between the atmospheric concentration of carbon dioxide and necessary warming, including that extensive glaciation occurred between 444 and 353 million years ago when atmospheric carbon dioxide was up to 17 times higher than today (Chumakov, 2004).

        See Carter’s keynote presentation at ICCC6 2011 & previous presentations on Youtube.

      • It is indeed interesting that much higher CO2 levels are possible.
        That section you quoted is contradicted by later findings, in this Nature article (they reference the findings specifically):
        http://www.deas.harvard.edu/climate/seminars/pdfs/Came_et_al_2007.pdf

        “Our results indicate that tropical sea surface temperatures were significantly higher than today during the Early Silurian period (443–423 Myr ago), when carbon dioxide concentrations are thought to have been relatively high, and were broadly similar to today during the Late Carboniferous period (314–300 Myr ago), when carbon dioxide concentrations are thought to have been similar to the present-day value. Our results are consistent with the proposal that increased atmospheric carbon dioxide concentrations drive or amplify increased global temperatures”

        This is important because during the more recent history where maximum CO2 levels correspond to Milankovitch cycles and hit rails of around 295 ppm, the fact that something can cause both a higher CO2 level and a higher temperature means that we might have punched through a natural limit cycle into this uncharted territory. It is only uncharted because we have to go way back in time to see this.

      • David L. Hagen

        WebHubTelescope
        Correspondingly see:
        “Atmospheric CO2 concentrations during ancient greenhouse climates were similar to those predicted for AD 2100,” Breecker, Sharp & McFadden,
        PNAS, 12 January 2010, Vol. 107, No. 2, pp. 576–580.

        The CO2 paleobarometer suggests the CO2atm values exceeded 3,000 ppmV during Permian (289–251 Ma) and Mesozoic (251–65 Ma) greenhouse climates. (Others) result in much lower estimates … the warmest episodes of the Mesozoic were associated with CO2atm equal to ~1,000 ppmV rather than 2,000–3,000 ppmV (5,8) … The relatively low CO2atm of 1,000 ppmV during greenhouse episodes suggests that either Mesozoic warmth was partially caused by a factor unrelated to CO2 or that the Earth’s climate is much more sensitive to atmospheric CO2 than previously thought.

        Either way, much greater than today’s ~300 ppm.
        At least they recognize that solar/cosmic/clouds may have been a factor.

        By “similar” they mean IPCC’s projection of about 730 ppm CO2 – which assumes a much faster rise with exponential growth of 468 ppmv/century compared to the current linear trend in CO2 increase of 200 ppmv/century extrapolating to 570 ppmv.
        e.g. See SPPI Monthly CO2 Report January 2011 vol 3 #1

        For statistical issues involved, see Singer NIPCC v IPCC – Addressing the Disparity

    • tempterrain

      David Hagen,

      This seems a good point. However, Judith has recently claimed a 1-10 deg C range (90% confidence limits) for 2x C02 so, presumably, there must be reasonable grounds for thinking that the upper limit prior must be higher than 7.5 degC?

      Is this because the sun is now more active than it was?

  25. Richard Saumarez

    Being a medic (and an engineer), I expect you will probably regard me as irretrievably stupid, but I have two questions:

    1) Is not the “Feedback” equation a little naive? Is the process stationary? Is it linear?

    2) Why use Bayesian statistics at all to estimate parameters?

    In my field, which is dominated by spectacular non-linearities and non-stationarities, there is a hard mathematical core that uses, in my view, inappropriate linearised models to explain certain physiological effects (on the basis that one can gain an “approximate solution”, rather as one can be approximately pregnant). The difference between climate “science” and the biomedical sciences is that in the latter one can do experiments, which generally reject the linearists’ predictions – not that they are particularly concerned with data.

    If the range of climate sensitivities has been overestimated by a contentious use of statistics, should we not look at the predictions of these sensitivities? As a non-expert, it appears to me that the climate is not responding, at least in the short term, as predicted by the derived sensitivities.

    Therefore, is there something wrong with the data, and does it mean what we think it means? Is the data sufficient to support the analysis, and is the underlying theory correct?

    • True, true… we really must take the time to consider if some climatologists have switched to Bayesian statistics because Frequentist statistics fails to yield conclusions about reality that AGW True Believers are willing to accept.

    • you have described the issues cogently

    • Richard,
      Nice comment!

    • Richard Saumarez: 2) Why use Bayesian statistics at all to estimate parameters?

      If the prior distribution of the parameter is accurate, the Bayesian estimate has a smaller mean square error than the maximum likelihood estimate.

      I model non-stationary multivariate biological time series using non-linear differential equations. An applied subset of this kind of modeling is pharmacokinetics (PK), the science of drug dosing. In PK, judicious use of Bayesian statistics has gotten better estimates of the rate parameters than maximum likelihood and least-squares. I’ll forego references because this is so far away from climate science, but you can find examples if you search PubMed on NONMEM, a popular package for PK.
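
      The mean-square-error claim is easy to check by simulation. A minimal sketch with invented numbers (a parameter drawn from a known Gaussian prior, one noisy measurement each; nothing PK-specific):

      import numpy as np

      rng = np.random.default_rng(1)
      mu0, tau, sigma = 10.0, 2.0, 3.0                 # prior N(10, 2), noise sd 3
      theta = rng.normal(mu0, tau, 100_000)            # true parameter values
      y = theta + rng.normal(0, sigma, theta.size)     # one measurement each

      w = tau**2 / (tau**2 + sigma**2)                 # posterior weight on the data
      bayes = w * y + (1 - w) * mu0                    # shrink toward the prior mean

      print(np.mean((y - theta) ** 2))                 # MLE MSE: ~9 (= sigma^2)
      print(np.mean((bayes - theta) ** 2))             # Bayes MSE: ~2.8, smaller
      # The advantage shrinks (and can reverse) if the assumed prior is wrong.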

    • Richard Saumarez: If the range of climate sensitivities has been overestimated by a contentious use of statistics, should we not look at the predictions of these sensitivities?

      That’s clearly the best test, but an adequate result may take 3 more decades. It’s desirable to have the best possible estimates in the meantime.

      It would be unfortunate if we finally learned between 2039 and 2040 that the true value is close to 8.

    • Richard Saumarez: (on the basis that one can gain an “approximate solution”, rather as one can be approximately pregnant)

      More pertinent to this discussion would be a laboratory test result that gave you an approximate blood concentration of progesterone, HDL and LDL, and insulin. In most cases you could get more accurate estimates by “shrinking” the estimates toward the population means (that is, by Bayesian estimation, though it is not the only method that produces “shrinking”.)

      • Richard Saumarez

        My main concern is that we are arguing about the statistical methods of estimating something that we cannot even define.

        Taking your example of pharmacokinetics/dynamics, Bayesian methods are useful because we have a “good” understanding of the system under study, i.e. the number of compartments (roughly), their effective volumes, the rates of transfer between them, etc. This model supports a sophisticated analysis of what we can easily measure (concentrations of a drug/metabolite in plasma), which can be augmented by radio-isotope measurements and so on; that would not be possible if the measurements were considered in isolation.

        Looking at feedback, the most striking thing is that it is wholly REAL. I think this may give a method of statistical attack on the problem. Since, by definition, the response is instantaneous, this may allow a bootstrap approach: randomly segmenting the data into a hypothesis set and a test set, and using the former to predict the latter.

        One could object to this as follows:

        a) There isn’t enough data to do this, in which case I would suggest that the huge range of sensitivities means that sensitivity is not a useful measurement for making short-term predictions.

        b) There is lag in the system, which seems likely given the huge amount of water, ice, vegetation etc. sloshing around. In that case the sensitivity equation doesn’t reflect this and is formally wrong. I guess this would be defended on the grounds that what they are measuring is the modulus of the feedback, which, again, is wrong.

        This seems to reflect a problem that I am always encountering: the difference between a measurement that is significant in testing a hypothesis but of no value in prediction. This is extremely relevant to the prediction of sudden cardiac death (my field), where many things are significantly associated with SCD in the statistical sense but have practically no predictive capacity.

        I am struck by the similarities between some of the climatological arguments on this thread and those in medicine. One can make precise, or not so precise, measurements, but the incorporation of these measurements into a model is uncertain, because of the lack of useful models.

        The challenge is to produce models that can be tested, rather than derived parameters that have uncertain interpretation. Perhaps the methods of medicine might give a clue as how to do this?

      • Many climate scientists are indeed concentrating on building models. The climate sensitivity is not an input to the models; rather, the models are compared more directly with the data. Often the modellers tell us what the climate sensitivity of their model is, and much of the estimation of climate sensitivity comes through this route. You describe in your message why this is the case.

        But then we are facing the problem of how good the models are, and how valid the tests are that they are subjected to.

      • Richard Saumarez: My main concern is that we are arguing about the statistical methods of estimating something that we cannot even define.

        OK, but that is not what you wrote.

        This seems to reflect a problem that I am always encountering: the difference between a measurement that is significant in testing a hypothesis but of no value in prediction. This is extremely relevant to the prediction of sudden cardiac death (my field), where many things are significantly associated with SCD in the statistical sense but have practically no predictive capacity.

        I like that. It seconds Steve Mosher’s 2 questions above.

  26. Brian G Valentine

    1. Is not the “Feedback” equation a little naive? 2. Is the process stationary? 3. Is it linear?

    1. The understatement of the year. As an engineer, I’m sure you’re familiar with the time and frequency domain analysis of an amplifier circuit. First and foremost in the analysis is the assumption that the “forcing” can be decoupled from the “feedback.” That’s not even remotely true here, but it has been done, despite the objections raised since it was proposed.

    2. The process is usually described as approaching some stationary state, and is continually readjusted in the description of what the final (or stationary) state is. Moving the target around gives the leeway never to be wrong in the projection.

    3. It’s not even clear that the first (linear) term of a Taylor expansion (of more than one variable) of such a thing has meaning, because there are no conditions of convergence on it.

  27. Why the heck is everyone talking about uninformed priors and so on?
    I suggest everyone go back and look up the notion of “ratio distributions”.
    The issue here is that IPCC is converting a distribution with a given uncertainty, a rate sensitivity of units per temperature change, into a distribution of temperature change. This is essentially manipulating random variables as a ratio, in this case a reciprocal.

    The other issue is that the uncertainty is fairly large for Y: a value of 2.3 +/- 1.4, as stated by Forster & Gregory. I would NOT recommend doing this as a Normal distribution, as the tails won’t get normalized correctly. The variance is way too large compared to the mean, and you aren’t taking the left-hand tail in negative territory into account. Try something like a Gamma distribution of an order that gives a similar variance; the Gamma is nice because it won’t cause problems with normalization.

    I bet that if these two issues are understood, you will find that the IPCC is correct and you will see these fat tails just as they show. Nic is almost doing things right, but almost doesn’t count when one is dealing with probabilities. One has to be ruthless in making sure that the cumulatives sum to unity and that one avoids computing probabilities in the non-physical negative range. Just do the ratio distribution off a PDF that matches the 2.3 +/- 1.4 with a doubling of 3.7 and see what it looks like. You can look up how to do this in some applied Prob/Stat textbooks; I have referenced Alberto Leon-Garcia’s book in the past.
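
    As a quick sketch of the suggestion (my assumptions: the +/- 1.4 read as roughly 2 sigma, the Gamma matched to Y’s mean and variance by moments, and S = 3.7/Y):

    import numpy as np

    rng = np.random.default_rng(2)
    mean, sd, F2X = 2.3, 0.7, 3.7          # assumed values, not Forster & Gregory's own

    y_norm = rng.normal(mean, sd, 1_000_000)
    y_norm = y_norm[y_norm > 0]            # a Normal must be clipped to Y > 0
    y_gamma = rng.gamma((mean / sd) ** 2, sd ** 2 / mean, 1_000_000)  # Y > 0 built in

    for name, y in (("normal", y_norm), ("gamma", y_gamma)):
        print(name, np.percentile(F2X / y, [2.5, 50, 97.5]).round(2))
    # The two candidate distributions for Y agree near the median of S but
    # differ visibly in the tails, which is exactly the point at issue.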

    I do this stuff all the time and if someone wants me to do it, I will.

    • David L. Hagen

      WebHubTelescope
      My compliments on very insightful modeling on fuels
      “If someone wants me to do it, I will.”
      Please, be our guest. I would love to learn from your expertise.
      Is this (ratio distributions, at Wolfram MathWorld) what you were referring to?

    • Brian G Valentine

      I believe you, but don’t you need some kind of, um, physical basis for choosing another PDF? What would that be?

      • Back of the envelope then. Nic said the numbers were 2.3 +/- 1.4 for the mean and two-sided sigma. So on the low side it is 0.9 and the high side it is 3.7. Take the reciprocal of these numbers and you get 1.1 and 0.27. Either way, the spread is about 4X in the width.

        For Figure 4 above, the broadest of the curves looks like it has an approximate relative half-width of 4X. That looks like the IPCC curve.

        Now what if the +/- 1.4 was actually a 3-sigma value? Then perhaps Nic has a point and the result would look more like his thinner PDF curves.

      • Your point that using the normal distribution should be questioned is valid. Whether it’s a good choice or not depends on the details of the empirical analysis, i.e. both on the actual data series used and on the methods used to obtain the estimate of Y from the data. The outcome should be a likelihood function that tells in relative terms, how likely are the actual empirical observations for each value of Y. The determination of the correct likelihood function is an inverse problem, and such problems are often difficult to solve and may lead to highly asymmetric results – like the Gregory 02 paper does, when the results are represented correctly as likelihoods.

        When all uncertainties of the original data are presented well by relatively narrow normal distributions, the proper likelihood function is usually also close to a Gaussian in shape, but when the random or other errors in the original data are large, the correctly derived shape of the likelihood function may differ essentially from Gaussian. Its shape is even then determined solely by the data and the processing of the data, not by knowledge about the physically allowable range of the parameter that we wish to learn about.

        The Forster and Gregory 06 paper discusses the uncertainty analysis at some length, and I’m not ready to say anything specific about the comprehensiveness or correctness of their error analysis. I note only that there is no general reason to expect any specific form for the function.

      • I think one of the main points of discussion is how the uncertainties in our understanding (i.e. parameter uncertainty) compare to uncertainty in the ultimate possible physical outcome, which is essentially determining how natural dispersion plays in. Dispersive effects are many times fat-tailed and you only need to look at the extremely fat-tailed CO2 residence times to understand this. Does everyone that reads up on climate science even realize that a “mean” residence time for CO2 is barely calculable? And since the mean doesn’t exist, by definition a variance is really out of the question. The 95% confidence interval of CO2 residence times might be 10,000 to 100,000 years in our current non-equilibrium conditions.

        That gets into the question of what range of ultimate effects we will see. Arguing over the details of the normal Gaussian statistics in the face of these huge natural dispersive effects therefore pales in terms of our comprehension of what may actually happen. I always wonder why the climate science skeptics do not take on the natural fat-tail assumptions that make up the bulk of the arguments. All the normal Gaussian arguments essentially get thrown out and you really have to understand the statistical physics.

      • “Nic said the numbers were 2.3 +/- 1.4 for the mean and two-sided sigma. So on the low side it is 0.9 and the high side it is 3.7. Take the reciprocal of these numbers and you get 1.1 and 0.27. Either way, the spread is about 4X in the width.”

        I think that you have gone wrong somewhere. The narrow green curve for Forster & Gregory in my Figure 4 above is correct.

        My first post stated that +/- 1.4 was a 95% confidence interval – approximately two-sided 2 sigma. Dividing the lower and upper levels of 0.9 and 3.7 into 3.7 gives a 95% CI of 1.0 to 4.1 C. The 2.5% and 97.5% cumulative distribution function values for the narrow green curve in Fig. 4 are 1.0 C and 3.9 C. The upper value would be 4.1 C had the PDF not been truncated at 10 C to match the IPCC figure 9.20, with the CDF renormalised to unity over the 0-10 C range.
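
        (Nic’s figures are straightforward to reproduce by simulation, taking Y ~ N(2.3, 1.4/1.96) as he describes, S = 3.7/Y, and truncating S at 10 C as in Figure 9.20:)

        import numpy as np

        rng = np.random.default_rng(3)
        Y = rng.normal(2.3, 1.4 / 1.96, 2_000_000)     # +/-1.4 as a 95% interval
        S = 3.7 / Y[Y > 0]
        S = S[S < 10]                                  # truncate and renormalise
        print(np.percentile(S, [2.5, 97.5]).round(1))  # -> approx. [1.0, 3.9]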

      • Well that probably explains it. They probably assumed a single-sigma uncertainty level when they generated the curve. It is kind of reflexive for many scientists and engineers, whenever they see a result stated as X +/- Y, to assume that Y is the standard deviation (and that is also assuming a normal distribution).

        One of the common approaches in analyzing aggregated sets of models is to collect the means of the individual results. Then one assigns a spread of the means to give the uncertainty in our collective understanding. From Figure 1 of your previous post (IPCC AR4 WG1 Figure 9.20 [Hegerl et al, 2007]), I can obviously see the large degree of uncertainty in the results from the different studies. I think the intent is to show the aggregated uncertainty and a “wisdom of the crowds” interpretation.

        So the fact that they missed the 95% reading of 2.3 +/- 1.4 is important in terms of generating the curve but it will still need to be interpreted in the context of all the studies done, which is the point of Figure 9.20.

        That last paragraph that you wrote on confidence intervals is the critical information. What you should do is to take the Fig 9.20 Forster curve and place these confidence intervals as vertical bars to show that the IPCC tails have too much weight. That would make it more obvious and tell everyone where the calculation may have gone haywire.

      • The empirical analysis produces relative likelihoods, but it doesn’t produce confidence intervals, when interpreted properly using rules of inference. Getting confidence intervals requires using a prior and taking that at face value.

      • Are you saying that to project any kind of confidence level, you need an underlying model? I would strongly agree with that approach. Physicists refer to that as a model, whereas statisticians call it the prior evidence.

        Without a model that really gets at the “ground truth”, all these relative likelihoods are relatively meaningless. That is why the mean value (or median value for fat-tail data) is so important, since that is what drives to the ultimate ground truth.

      • To calculate confidence levels for a physical parameter, one must be able to interpret the results as probabilities for the value of that parameter. It’s very common that assuming a uniform prior in that parameter feels so natural that it’s not even noticed that it has been done. When the peak of the likelihood function is narrow, the influence of the chosen prior is small over a wide set of functions. Thus the issue that I raised is of little practical significance and is forgotten.

        In this thread we have discussed situations where that is no longer true. Reasonable alternatives really do differ. That is the origin of this controversy, but there is nothing exceptional in this case. The same is true for any estimate of a physical parameter based on a method with a large range of uncertainty and no well defined theory or earlier data to define the prior.

        The empirical work produces some results. When we wish to infer about alternative physical models, we must study how consistent they are with the empirical result, taking all uncertainties of the empirical analysis into account. The level of consistency is described by likelihood (or conditional probability), but moving from that to a PDF for the physical parameter requires the step of inference that depends on the prior. Without the PDF we don’t have confidence intervals either.

      • Again I add further explanation to my previous message.

        All the above concerns most clearly situations where we cannot repeat experiments with an identical setup to obtain an empirical distribution of results, but where the function that describes the uncertainties is based on an analysis of the empirical method. If the analysis yields 3.0, we must use this analysis of the method to tell how much more likely it is that the corresponding real parameter is 3.0 rather than 2.5 or some other value. The analysis of the method tells us these relative likelihoods, but it doesn’t tell us the PDF.

        If we were to look at the square of the original variable, the empirical result would be 9.0. Compared to that, the relative likelihood at 6.25 would be the same as the relative likelihood at 2.5 above, but the likelihood function would have a different shape, with a longer upper tail and a shorter lower tail. Thus the likelihood function doesn’t provide unique probabilities or confidence intervals. Looking at the areas under the curves based on the original parameter and on its square would put the limits of the confidence intervals at points that do not correspond to the same physical system.
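
        A toy numeric version of this (hypothetical: a measurement of 3.0 with Gaussian error 0.5, reparameterised as z = x²) shows the likelihood staying put while the “areas” move:

        import numpy as np

        def like(x):                         # likelihood from a 3.0 +/- 0.5 result
            return np.exp(-0.5 * ((x - 3.0) / 0.5) ** 2)

        x = np.linspace(0.01, 6.0, 200_000)
        z = np.linspace(0.01, 36.0, 200_000)
        px = like(x) / np.trapz(like(x), x)                    # "uniform in x" prior
        pz = like(np.sqrt(z)) / np.trapz(like(np.sqrt(z)), z)  # "uniform in z" prior

        # The same physical region, x > 3.5 versus z > 3.5**2 = 12.25:
        print(np.trapz(px[x > 3.5], x[x > 3.5]))        # ~0.16
        print(np.trapz(pz[z > 12.25], z[z > 12.25]))    # ~0.20
        # The likelihood values match point by point, but the implied
        # probabilities differ once areas are read as probabilities.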

      • A uniform prior is very common and natural because it agrees with Jaynes’ approach of maximum entropy. In other words, if all one knows is the maximum and minimum over some range of values, then a uniform PDF is the choice that maximizes the entropy over that interval. Since Forster and Gregory specified a mean and a standard deviation (couched as a 95% interval), one can only take this at face value and assume a normal Gaussian, as that is what maximizes the entropy.

        Now one could argue over Forster’s decision to treat that +/- 1.4 C as meaningful in any way, but that is the hand we are dealt, under the assumption that they know what they are doing.

        So the inferences that Forster and Gregory made to get to the +/- 1.4C is really at the crux of the matter. They may not have assumed the normal Gaussian as the PDF to explain their uncertainty in the underlying model and parameters, but we must because we have no other choice with the numbers that they have given us. Applying the maximum entropy principle, we have to assume it is a Gaussian because they gave us a mean and a standard deviation. That may be misguided but we are not mind-readers into everything that has gone into Forster and Gregory’s analysis.

      • All that is easy and simple for a finite discrete set of possible states, but cannot be transferred to continuous variables without the choice of measure. With a properly chosen measure any prior can be made uniform. Thus freedom to choose the measure is equivalent to freedom to choose the prior.

        When F+G present their results, they have chosen the uniform distribution in Y as their prior. That choice does not have any fundamental special status from the point of view of the real physics of the atmosphere. It has a special status only for that particular experiment, as it’s the one in which the likelihood function is, according to them, essentially Gaussian.

      • Interesting discussion relating discrete versus continuum. In statistical mechanics the transition from counting of discrete states to that of a continuum has stood the test of time, and really the only issue that can arise is one of counting noise (as in Poisson statistics, etc).

        “When F+G present their results they have chosen the uniform distribution in Y as their prior. That choice does not have any fundamental special status from the point of view of the real physics of atmosphere. It has a special status only for that particular experiment, as it’s the one where the likelihood function is according to them essentially Gaussian.”

        I think I see where you may be going. If F&G used just a few points in establishing their interval, which in essence is a discrete version of a uniform distribution, and then say that this gives them 95% confidence levels, that can’t be right. Making the connection to a distribution with higher moments would suggest that they would need many, many more points to reach that level of confidence.

      • Statistical mechanics is first of all very insensitive to many details of the input, because of the huge number of particles in any macroscopic system. As a second important factor, we have physical arguments in support of certain specific measures for the phase space. This is even more clear in Quantum Mechanics, where the number of states can be calculated for any finite volume, and many results are found to be the same for any finite volume.

        Going back to my earlier comments. Some of them can be reformulated as:

        For a case of distributions in one continuous variable, every prior distribution is uniform under one mathematically possible change of the variable (the new variable can be defined using the cumulative probability distribution of that prior). Thus saying that the prior is uniform carries zero information as long as the choice between possible variables has not been made.
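
        This is the probability integral transform; a short check with a lognormal standing in for “any prior” (the parameters are arbitrary):

        import numpy as np
        from scipy.special import erf

        rng = np.random.default_rng(4)
        mu, sig = 0.5, 0.8                           # an arbitrary lognormal prior
        theta = rng.lognormal(mu, sig, 100_000)      # samples from that prior
        u = 0.5 * (1 + erf((np.log(theta) - mu) / (sig * np.sqrt(2))))  # u = F(theta)
        print(np.histogram(u, bins=10, range=(0.0, 1.0))[0])  # ~10,000 per bin
        # In the variable u the prior is exactly uniform, whatever it was in theta.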

      • I think I see where you may be going. If F&G used just a few points in establishing their interval, which in essence is a discrete version of a uniform distribution, and then say that this gives them 95% confidence levels, that can’t be right. Making the connection to a distribution with higher moments would suggest that they would need many, many more points to reach that level of confidence.

        I don’t think that I had anything like that in mind.

        What I want to say is only that empirical evidence of the type produced by F+G 06, or any other of these analyses of climate sensitivity and related variables, is information on the likelihood function (or, equivalently, the conditional probability), and that this information alone cannot provide any PDFs or confidence intervals for the climate sensitivity or a functionally related parameter like Y. To get a PDF or a confidence interval, a prior must be assumed, and plausible alternative priors give in this case significantly different results.

        In addition I want to say that the prior that happens to be most natural from the point of view of the empirical procedure is not any better justified for that reason.

        I happen to like the prior of F+G 06 as I have explained elsewhere, but not because it’s used by them, but because it’s smooth in Y. (My intuitive preference does not go far enough to tell that it should be uniform, but it tells that it should not have the singularity in Y that the prior uniform in S has.)

        When (If) we are interested in S, it’s natural to plot the likelihood versus S, but then we should not equate equal areas with equal probabilities as that would imply a prior uniform in S.

      • We may be talking past each other. The fundamental relation is S = 3.7/Y. This post is all about having an uncertainty in the value of Y and then translating that into an uncertainty in S. That is the reason for evaluating the reciprocal ratio distribution. Mathematically we would consider this as simply a function of a random variable. The value 3.7 is taken as fixed, so we are simply looking at what effect the reciprocal has. If the uncertainty in Y happens to be uniform along an interval, the reciprocal distribution would look different than if it had a Normal distribution. Yet some of the features would look the same, such as the asymmetry and the appearance of a fat tail.

        Whether this is contrived or not, we have to consider why we chose Y as the random variable.

        Y is defined as the change in radiation flux per degree of temperature change. We are trying to make the connection from a doubling of CO2 to a change of flux, and then to a change in temperature. If we have concerns over how much CO2 affects the flux, which makes it an overall climate response or feedback parameter, then we might also consider the uncertainty in the 3.7 number.

        dFlux = Y * dTemperature
        or
        dTemperature = dFlux / Y
        which leads to
        S = 3.7 / Y

        So the 3.7 comes from dFlux. Is there also uncertainty in this number?

        or should we consider a situation like this:
        dTemperature = G * dFlux

        where G is a gain parameter which gives the sensitivity of temperature change with flux. Or why not this?
        dTemperature = G * (dFlux/dCO2) * dCO2

        which would remove all the problems with ratio distributions and make it a problem of a product of random variables.

        The point is that we are analyzing the math from the perspective of the model that they have set up. If it is not the right model then we should argue that. Why are we working with Y and not with G?

        Sorry to sound pedantic but that is what I excel at.
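
        For what it’s worth, the product formulation sketches out like this (the distributions of G and dFlux are invented purely for illustration):

        import numpy as np

        rng = np.random.default_rng(5)
        G = rng.normal(0.8, 0.2, 1_000_000)    # hypothetical gain, deg C per W/m^2
        dF = rng.normal(3.7, 0.4, 1_000_000)   # hypothetical 2xCO2 forcing, W/m^2
        S = G * dF                             # product, not ratio: no 1/Y blow-up
        print(np.percentile(S, [2.5, 50, 97.5]).round(2))
        # Only mildly skewed: the fat upper tail of 3.7/Y comes from the
        # reciprocal itself, not from the data.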

      • What are your thoughts on how to avoid ratio games? At what point is CO2 significant? The ppm numbers are very small; is <600, <450, or <300 ppm, or whatever, (in)significant anyhow? What about the other chemicals and other limits in the atmosphere; does this affect the relationship of CO2 and heat?

      • That is essentially the set of questions to ask. How much will CO2 actually change (it is on a somewhat predictable upward track), what is the sensitivity of the heat flux to CO2 changes, and what is the lag time of this effect? And then there is the set of other externalities to add into the mix.

      • You don’t get my point. Let’s accept that Y = 3.7/S. Then Y and S are two parameters that have the same informational content, due to the exact relationship between them. Neither is a random variable, but they represent one property of the Earth system in two differing ways. We are interested in learning about this property. For that we do empirical work. It so happens that the empirical analysis leads to results that have a Gaussian form when expressed as a function of Y. The same empirical likelihood result can be expressed as a function of S. The likelihoods are equal for the same physical system, i.e. the likelihood at Y is the same as at S = 3.7/Y.

        We have done exactly the same empirical analysis and we have the same result, but we plot it in two different ways. One way gives a symmetric Gaussian curve in Y, the other an asymmetric curve in S with a long tail on the side of high S. The empirical work cannot tell us how we should look at the data; that depends on how we wish to use the information. Both ways are correct and equivalent, but they look different, and one is more suitable for some purposes and the other for other purposes. It doesn’t matter that one may integrate to one while the other gives some other integral, because the integral has no physical significance.

        Now we may take the next step and introduce the prior: multiply the prior by the likelihood and normalize the resulting distribution. Now we have a posterior PDF. I may use one prior and you may use another. We get different PDFs and different confidence intervals. We get different confidence intervals even if we transform them to the same variable.

        We have both done a valid operation assuming that we indeed believe that our prior is the right one based on all other information that you or I have on the values of S or Y. The results are different, but both have been obtained using a valid method.

      • We have both done a valid operation assuming that we indeed believe that our prior is the right one based on all other information that you or I have on the values of S or Y. The results are different, but both have been obtained using a valid method.

        I don’t have any problems with anything you say. I refer to that as good modeling, which takes into account uncertainties in our understanding and measurements and randomness in the observed process.

        One can make it more complicated than that but the idea of propagation of uncertainty is still one of the main issues that this post is addressing.

    • –> “This is essentially manipulating random variables as a ratio.”

      “For some, declaring a problem to be “randomized” is an incantation with the same purpose and effect as those uttered by an exorcist to drive out evil spirits; i.e. it cleanses their subsequent calculations and renders them immune to criticism. We agnostics often envy the True Believer, who thus acquires so easily that sense of security which is forever denied to us.” (Jaynes)

      • If you believed in Jaynes you would apply the maximum entropy principle and first assume you only knew the mean. That would give you the greatest uncertainty in your estimate. For the reciprocal ratio distribution, that would look like this using Alpha:
        http://www.wolframalpha.com/input/?i=derivative+exp%28-1%2Fx%29+from+x%3D0+to+x%3D4

        You can see how the IPCC tends to a broader function, as they assumed that the +/- 1.4 was a fairly large uncertainty.

      • intrepid_wanders

        Two issues still remain:

        1. Why, during normalization of multiple probability sets, does the probability distribution change?

        2. What effect does normalization (treat one, treat all) have on the other datasets?

        In my experience, normalization to another prior affects all groups and distributions. Maybe we need a contribution from Matt Briggs to shed light on this…

      • I am an admirer of the great Bayesian statistician and physicist, Edwin Jaynes. It is the IPCC’s approach that conflicts with Jaynes’ views, not mine, I think.

        Very pertinently, Jaynes wrote “In any field, the Establishment is seldom in pursuit of the truth, because it is composed of those who sincerely believe that they are already in possession of it.” A point that is very relevant in relation to the climate science establishment !

      • David L. Hagen

        WebHubTelescope
        Any suggestions on how to handle a bipolar distribution of observations vs models? See: David L. Hagen July 8, 2011 at 10:49 pm on climate sensitivity
        My feeling is that the models incorporate major errors through ignorance: not including substantive issues, and having the sign and/or amplitude wrong on others.

      • This is a great question, David.
        I read your comment and I think I understand what you are saying. The bipolar distribution of observations is essentially grouped by two modes, the ones that are on the low side (< 1C) and the ones that are on the high side, upwards of 4C.

        I have noticed the same situation occurring with estimates of the CO2 residence time. Segalstad tried to pull together all the estimates of the mean residence time in the literature (http://www.co2web.info). He found values ranging from 2 to 20 years. This is interesting because the current thinking is that CO2 residence times have a very fat tail that pushes the actual expected time much higher, to even thousands of years under non-equilibrium conditions (like we have now). This is really a characteristic of fat-tail phenomena in that you can look at the initial drop-off and assume it is first-order kinetics explained by a simple damped exponential — yet the tail is more important because it actually has a huge weighting. That leads to the view of a bimodal or bipolar view of the actual data. Skeptics look at the initial drop-off, not understanding the actual implications of the fat-tail. The climate scientists look at the fat-tail because they understand the deeper implications of convolution on a fat-tail forcing function.

        I suggest that climate skeptics look at the CO2 residence time issue as something to understand and resolve.

    • Gil Grissom

      Uninformed priors? Uniform priors? lol

  28. Yet another of my predictions coming true. I’ll need to keep a tally. Mind you it will still take another 5 years, if ever, for the climate clique to get round to realising the bleeding obvious.

  29. Brian G Valentine

    I hope FDA or somebody will add guidance against “tasting” “arsinacal salts and spirits” as Newton evidently did

  30. I heard someone learned how to turn lead into gold years ago, but the mining industry bought up the IP and buried the idea.

    • Brian G Valentine

      Given that the EPA is about three steps from closing down the mining industry in the US, they might have to drag the concept back out to stay afloat. Plenty of lead around, from recycling old car batteries and storage batteries for solar panel arrays.

      I understand EPA has recently classified “carbon dioxide” as a toxic waste of the same hazardous class as cyanide waste dumped into a river. I’m not sure that’s fair, considering that the lifetime of CO2 in the air is four hundred years or so.

  31. I would like to throw out a question to the group. Given the discussion that we have been having on these two threads in the context of Figure 9.20 of AR4, what do you think of the IPCC’s conclusion:

    “The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. It is not a projection but is defined as the global average surface warming following a doubling of carbon dioxide concentrations. It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values.”

    • Brian G Valentine

      My thoughts about IPCC’s conclusion are pretty obvious, but there may possibly be an escape hatch here:

        The “sensitivity” is technically defined as a radiant flux, isn’t it – one that corresponds to a surface temperature increase (all other things being equal). If in fact this results in increased cloud cover (for example), which produces less temperature increase than stated, it is still not possible to rule out the stated value of “sensitivity”.

    • The IPCC is simply giving a name to something they do not understand as if something important has been solved: “The equilibrium climate sensitivity of the climate system.” Such hubris is sort of funny.

    • tempterrain

      I “think” the IPCC perhaps should have made clearer the difference between short- and longer-term forcings. Observations can only indicate what the short-term forcing, measured over decades or one or two centuries, can be, not the longer ones, measured over several hundred years or even millennia. This is possibly why the agreement of models with observations for the higher levels is not so good. We just haven’t really started to see the long-term effects yet. We haven’t yet passed any tipping point, as James Hansen would probably say.

      The empirical evidence of the past 40 years would indicate a potential warming of about 1.5 deg C per century on a BAU scenario. CO2 levels were 325 ppmv in 1970 and about 390 ppmv now. This doesn’t seem inconsistent with a figure of 3 deg C for 2x CO2.
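
      That consistency check can be roughed out explicitly (assuming warming scales with the logarithm of CO2, and that the realised warming runs below the equilibrium value):

      import numpy as np

      fraction = np.log(390 / 325) / np.log(2)   # fraction of a doubling since 1970
      print(round(fraction, 2))                  # ~0.26
      print(round(3.0 * fraction, 2))            # ~0.79 C equilibrium at S = 3 C
      # Observed warming over those ~40 years at 1.5 C/century is ~0.6 C,
      # plausible if the transient response lags the equilibrium response.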

    • Why do they say it’s not a projection but a global average?

      • Brian G Valentine

        I dunno. To the climate people, “projections” are future conditions (with certain probabilities) based on modeling of fundamental principles. “Predictions” are future conditions based on past conditions coupled with known periodic climate cycle conditions. It seems that taking “projection” out of it takes the “probability” aspect out of it, which doesn’t make sense.

    • tempterrain

      Judith,

      You ask what we think. This is what you thought in February 2011

      “I think we can bound this [2x CO2] between 1 and 6C at a likely level, I don’t think we can justify narrowing this further.”

      “To bound at a 90% level, I would say the bounds need to be 0-10C.”

      http://judithcurry.com/2011/02/26/agreeing/#comment-49648

    • Judy – An encyclopedia of evidence won’t answer your question to everyone’s satisfaction, but I’d like to offer a few personal perspectives that lead me to think that the estimated range is probably about right.

      I’m more comfortable with some of the reasoning in IPCC Chapter 8 than with the greater level of Bayesian subjectivity in some parts of Chapter 9 (although not all). Certainly, the two chapters complement each other. For an IPCC perspective different from Chapter 9, I would recommend Chapter 8 to readers unfamiliar with it, and would additionally recommend several of the posts in Isaac Held’s Blog that elaborate on aspects of the modeling (e.g., 3 through 9).

      Basically, climate sensitivity in Chapter 8 is approached mainly in piecemeal fashion – a response to CO2 forcing is a baseline, and feedbacks are superimposed to yield an estimate of sensitivity. The principal feedbacks are water vapor, snow-ice albedo, clouds, and lapse rate. The positive water vapor feedback is strongly linked to a partially offsetting negative lapse rate feedback, the albedo feedback is modeled as positive, and cloud feedbacks are also positive in almost all models but with very large variability. It is well recognized that we can’t just add these together, but in fact, they tend to reinforce each other, and the estimated amplification of the direct response to CO2 forcing is expected to exceed their sum.
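
      To see why reinforcement beats simple addition, here is a minimal sketch – the feedback fractions are invented for illustration, not taken from Chapter 8:

      ```python
      # With feedback fractions f_i, the standard amplification 1/(1 - sum(f_i))
      # exceeds the naive additive estimate 1 + sum(f_i) whenever sum(f_i) > 0.
      f = {"water_vapor_minus_lapse_rate": 0.30, "snow_ice_albedo": 0.10, "clouds": 0.15}
      dT0 = 1.2                 # approximate no-feedback response to 2xCO2, deg C
      g = sum(f.values())       # total feedback fraction (0.55 with these numbers)
      print(dT0 * (1 + g))      # additive estimate: ~1.9 C
      print(dT0 / (1 - g))      # with mutual reinforcement: ~2.7 C
      ```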

      What I like about this approach is that it can be matched to observational data on humidity, ice extent, cloud extent and type (low, cooling clouds and high cirrus clouds that can exert a warming effect). As I interpret the evidence, the observational data tend to confirm the modeling for these individual feedbacks at least semiquantitatively, and this suggests to me that the climate sensitivity estimates are probably not grossly in error, even if precise quantitation still eludes us.

      Finally, I’m referring here to long term climate responses to persistent CO2 forcing. The term “climate sensitivity” is sometimes used as though the source of the forcing is irrelevant. This tendency to generalize may sometimes apply well to the physical nature of the forcing (at least if it is global), but it is more problematic in terms of forcings of different duration. I suspect it is questionable how well we can extrapolate from short term forcings to CO2, and particularly from those that are regional in nature. The same principle applies to internal climate fluctuations utilized for climate sensitivity estimates, such as ENSO events and volcanic eruptions. This notion also applies to observational data. For sensitivity to doubled CO2, knowing long term trends in humidity, cloud cover and type, ice, etc., would be particularly useful in comparison with trends of only a few years or less in these phenomena. The short term studies can be informative, but must ultimately be confirmed over the longer term.

      • correction: a volcanic eruption is short term, but imposes a forcing, not an internal “fluctuation”.

      • David L. Hagen

        Stephens finds low clouds have significant optical depth and negative feedback:
        Graeme L. Stephens, Is There a Missing Low Cloud Feedback in Current Climate Models? Global Energy & Water Cycle Experiment (GEWEX) News, Vol. 20, No. 1, Feb 2010, pp. 5-7

        The implication of this optical depth bias that owes its source to biases in both the LWP and particle sizes is that the solar radiation reflected by low clouds is significantly enhanced in models compared to real clouds. This reflected sunlight bias has significant implications for the cloud-climate feedback problem. The consequence is that this bias artificially suppresses the low cloud optical depth feedback in models by almost a factor of four and thus its potential role as a negative feedback.

      • Have the data by Stephens been published? My interpretation from the news article is that he concludes that low clouds can exhibit a lower albedo than sometimes modeled, but then an important question is how low cloud cover is changing over recent decades of warming. Recent HIRS and ISCCP data show long term trends indicating reductions in low cloud cover and/or increases in the high cloud/low cloud ratio, consistent with positive feedback in the models. This is not always true over short intervals, and needs to be monitored further.

      • ISCCP and ERBE show changes in SW that are much larger than the calculated CO2 forcing in the satellite era. The LW changes go in the direction of increasing LW out in ERBE, ISCCP and HIRS (ahem!!!) – showing low level cloud changes.

        The question of CO2 feedback is moot as we were cooling (ahem!!!) in the LW and warming in the SW.

        One source of decadal change in low level cloud is the Pacific decadal variation – decreasing after 1976/77 in surface observations and increasing again after 1998 in surface, satellite and ‘Earthshine’ observations. They are decadal changes associated with decadal changes in ocean conditions. A cloud radiative forcing, and not some odd idea of ENSO for instance as ‘internal climate variation’ with no radiative dimension and therefore no impact on the global energy balance.

        Observational evidence of decadal change in cloud cover is seen in a 2009 study by Amy Clement and colleagues using surface observation of clouds from the Comprehensive Ocean Atmosphere Data Set (COADS). ‘Both COADS and adjusted ISCCP data sets show a shift toward more total cloud cover in the late 1990s, and the shift is dominated by low-level cloud cover in the adjusted ISCCP data. The longer COADS total cloud time series indicates that a similar magnitude shift toward reduced cloud cover occurred in the mid-1970s, and this earlier shift was also dominated by marine stratiform clouds…

        Our observational analysis indicates that increased SST and weaker subtropical highs will act to reduce NE Pacific cloud cover.’ (Clement et al 2009)

        In an analysis of global warming cloud feedbacks, Dessler (2010) used short term variations in surface temperature and CERES data to determine that cloud cover was negatively correlated with temperature. Dessler also plotted ENSO against surface temperature leaving no doubt that ENSO was the primary cause of the short term temperature variations. Leaving aside anthropogenic global warming – the finding of a positive feedback here is in the first instance an ENSO feedback. As Dessler says, ‘the climate variations being analysed here are primarily driven by ENSO, and there has been no suggestion that ENSO is caused by cloud variations.’ The Dessler (2010) study takes a statistical approach that may gloss over the difference in processes in play in ENSO and from global warming.

        Zhu et al (2007) found that cloud formation for ENSO and for global warming have different characteristics and are the result of different physical mechanisms. The change in low cloud cover in the 1997-1998 El Niño came mainly as a decrease in optically thick stratocumulus and stratus cloud. The decrease is negatively correlated to local SST anomalies, especially in the eastern tropical Pacific, and is associated with a change in convective activity. ‘During the 1997–1998 El Niño, observations indicate that the SST increase in the eastern tropical Pacific enhances the atmospheric convection, which shifts the upward motion to further south and breaks down low stratiform clouds, leading to a decrease in low cloud amount in this region. Taking into account the obscuring effects of high cloud, it was found that thick low clouds decreased by more than 20% in the eastern tropical Pacific… In contrast, most increase in low cloud amount due to doubled CO2 simulated by the NCAR and GFDL models occurs in the subtropical subsidence regimes associated with a strong atmospheric stability.’

        Burgmann et al (2008) discuss this in terms of a Pacific Decadal Variation (PDV) – and describe the sea surface temperature signature as ‘characterized by a broad triangular pattern in the tropical Pacific surrounded by opposite anomalies in the midlatitudes of the central and western Pacific Basin.’ Their study uses a variety of data sources to examine decadal variability of surface winds, water vapour (WV), outgoing longwave radiation (OLR) and clouds. They conclude that the ‘most recent climate shift, which occurred in the 1990s during a period of continuous satellite coverage, is characterized by a ‘La Niña’ SST pattern with significant signals in the central equatorial Pacific and also in the northeastern subtropics. There is a clear westward shift in convection on the equator, and an apparent strengthening of the Walker circulation. In the north-eastern subtropics, SST cooling coinciding with atmospheric drying appears to be induced by changes in atmospheric circulation. There is no indication in the wind speed that the changes in SST or WV are a result of changes in the surface heat flux. There is also an increase in OLR which is consistent with the drying. Finally, there is evidence for an increase in cloud fraction in the stratus regions for the 1990s transition as seen in earlier studies. Together, these results suggest that there are decadal-scale changes in the atmosphere involving circulation, water vapor, clouds and radiation that may play a role in PDV, and are worthy of further study.’

        And please – a failure to substantively address the issues – waffling off on tangents – will be deemed to be obfuscation and ignored. The use of convoluted (tortured) syntax peppered with meaningless scientific jargon will likewise be ignored – I am Australian and we recently deposed a Prime Minister because of a regrettable turn of phrase.

        I know that this might seem to be the kettle calling the pot black – but you have to crawl before you can walk. If you have something to say – keep it simple, short and to the point.

      • Chief Hydrologist, 7/9/11, 3:21 am, sensitivity II

        You wrote,

        In an analysis of global warming cloud feedbacks, Dessler (2010) used short term (i.e., not climate) variations in surface temperature and CERES data to determine that cloud cover was negatively correlated with temperature. Dessler also plotted ENSO against surface temperature leaving no doubt that ENSO was the primary cause of the short term temperature variations.

        How did you determine that Dessler used the parameter cloud cover? He doesn’t use that term or any of its synonyms – cover, amount, fraction, extent, with or without “cloud” in front, or cloudiness.

        Dessler said,

        The cloud feedback is conventionally defined as the change in ΔR_cloud per unit of change in ΔT_s.

        Conventionally? Better uniquely. The term ΔR_cloud is his own derivation from ΔCRF, formed by subtracting ΔR_all-sky from ΔR_clear-sky, and by adjustments for water vapor, temperature, and surface albedo anomalies used to convert ΔCRF into ΔR_cloud. How can one deduce from these machinations whether cloud cover is on the cutting room floor or might be silently concealed as the principal parameter of his analysis?

        IPCC separates total cloud albedo into two primary parameters, cloud albedo effect, sometimes abbreviated as simply cloud albedo, and cloudiness, including its many synonyms. (Just for completeness, a secondary IPCC parameter is cloud lifetime.) By removing cloudiness from total cloud albedo, IPCC makes cloud albedo into a specific reflectivity per unit cloud area. Strictly speaking, the units would be horrible, perhaps (Watts/m^2)/m^2, but IPCC never discusses that little complication. With common units for specific cloud albedo and total cloud albedo, the reader can’t tell which of the two is being reported – whether it includes cloud cover or not – and such is the case with Dessler’s paper.

        Not surprisingly, Dessler’s result supports IPCC’s results. (His paper was published in Science, a conforming journal advocating for AGW and consensus science.) His first reference is to AR4, where IPCC admits

        In spite of this undeniable progress, the amplitude and even the sign of cloud feedbacks was noted in the TAR as highly uncertain, and this uncertainty was cited as one of the key factors explaining the spread in model simulations of future climate for a given emission scenario. This cannot be regarded as a surprise: that the sensitivity of the Earth’s climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. 4AR, ¶1.5.2 Model Clouds and Climate Sensitivity, p. 114.

        and

        The model intercomparisons presented in the TAR showed no clear resolution of this unsatisfactory situation. Id. P. 116.

        The physics of cloud formation, the Clausius-Clapeyron relation, and elementary observations dictate that cloud cover will increase with increasing surface temperature, notwithstanding short term, non-climate or regional effects like ENSO oscillations. The sign should be crystal clear: cloudiness is a dominant force regulating climate, a strong, short term positive feedback to TSI and a strong negative, climate scale feedback to surface temperature.

        However a full measure of doubt remains as to whether ENSO caused short term temperature variations, or whether some other event, e.g., TSI variations, caused both ENSO and the temperature changes. This could be resolved by application of the correlation function to determine which term was leading, and hence a candidate cause.

        For those reading here who might urge that IPCC showed that TSI variations are too small to have caused significant climate changes, I would remind them that Stott, et al. (2003) (pre AR4, and rejected by IPCC) and Tung, et al. (2008) (post AR4) separately showed that Earth’s climate contains a sizeable amplifier of solar variations. If IPCC had the clouds right, it might have discovered the missing amplifier.

      • Here is more on HIRS and ISCCP Cloud Trends, indicating that low clouds (which predominantly cool) have tended to decline or remain unchanged, while the ratio of high clouds (which tend to warm) to low clouds has tended to increase. These trends are what one might expect to produce a net positive cloud feedback, but do not prove it. They do tend to exclude a strong long term net negative feedback.

      • ‘This paper gives an update on the observed decadal variability of the earth radiation budget (ERB) using the latest altitude-corrected Earth Radiation Budget Experiment (ERBE)/Earth Radiation Budget Satellite (ERBS) Nonscanner Wide Field of View (WFOV) instrument Edition3 dataset. The effects of the altitude correction are to modify the original reported decadal changes in tropical mean (20°N to 20°S) longwave (LW), shortwave (SW), and net radiation between the 1980s and the 1990s from 3.1, 2.4, and 0.7 to 1.6, 3.0, and 1.4 W m-2, respectively. In addition, a small SW instrument drift over the 15-yr period was discovered during the validation of the WFOV Edition3 dataset. A correction was developed and applied to the Edition3 dataset at the data user level to produce the WFOV Edition3_Rev1 dataset. With this final correction, the ERBS Nonscanner-observed decadal changes in tropical mean LW, SW, and net radiation between the 1980s and the 1990s now stand at 0.7, 2.1, and 1.4 W m-2, respectively, which are similar to the observed decadal changes in the High-Resolution Infrared Radiometer Sounder (HIRS) Pathfinder OLR and the International Satellite Cloud Climatology Project (ISCCP) version FD record but disagree with the Advanced Very High Resolution Radiometer (AVHRR) Pathfinder ERB record. Furthermore, the observed interannual variability of near-global ERBS WFOV Edition3_Rev1 net radiation is found to be remarkably consistent with the latest ocean heat storage record for the overlapping time period of 1993 to 1999. Both datasets show variations of roughly 1.5 W m-2 in planetary net heat balance during the 1990s.’

        Reexamination of the Observed Decadal Variability of the Earth Radiation Budget Using Altitude-Corrected ERBE/ERBS Nonscanner WFOV Data
        TAKMENG WONG, BRUCE A. WIELICKI, AND ROBERT B. LEE III

        Your use of a single and ambiguous source is a problem. Check Table 1 of the Wong reference for a numeric comparison – all but the AVHRR show cooling in the LW and warming in the SW. And also the change in the late 1990’s to more cloud – ‘consistent with’ decadal changes in SST. HIRS shows an increase in emitted IR of 0.2 W/m2 between the 1980’s and 1990’s. ISCCP shows 0.5 W/m2 and ERBE shows 0.7 W/m2. Now really – the only way that makes any sense is if there are low level cloud changes – which is also seen in the SW. Either that or dismiss the satellite evidence entirely. The net changes for ISCCP and ERBE in the period are 1.8 and 1.4 W/m2 respectively – warming overall but entirely in the SW. That is, cooling in the LW and warming in the SW.

        High and low cloud have different optical properties – so it matters less to know the quantity of cloud than to know the changes in cloud radiative forcing. And global change in cloud matters less than the change in marine cloud in the tropics and sub-tropics. While cloud changes might be the proximate cause of radiative changes – it is the radiative changes that cause climate change.

      • “the only way that makes any sense is if there are low level cloud changes – which is also seen in the SW”.

        Yes, I agree. It’s a good paper. My only other comment is that the flux changes are described as anomalies vis-a-vis 1985-1989, and so they don’t tell us the sign of the TOA flux imbalance. Given the heat storage changes (Fig. 7), the imbalance is presumably a warming one on average during the studied interval.

      • Yes Fred – most of us accept that the planet was warming in the period. The simple energy balance equation shows the relationship.

        Ein/s – Eout/s = d(H)/dt – the average energy in per second less the average energy out per second equals the rate of change of global heat storage H.

        d(H)/dt was positive in the period as we know – implying that Ein > Eout in the period, a warming energy imbalance. Ein changed little in the period. Eout changed quite a bit in both the SW and LW – more IR emitted (cooling) and less SW reflected (warming) over the period.
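
        In rough numbers (the fluxes below are assumed round values, purely to illustrate the sign of the balance):

        ```python
        # Energy balance per unit area: E_in - E_out = dH/dt (all in W/m2).
        E_in = 240.0            # assumed globally averaged absorbed solar flux
        dH_dt = 0.6             # assumed positive heat storage rate while warming
        E_out = E_in - dH_dt    # implied outgoing flux
        print(E_out)            # 239.4 W/m2: E_out < E_in during a warming imbalance
        ```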

      • Not to quibble too much, Robert, but I was referring primarily to the TOA flux imbalance. It’s theoretically possible for that imbalance to be zero, or even slightly negative (i.e., more energy out than in) while the surface is still warming. The evidence from the OHC is pretty good confirmation that it was positive, but even there, unless we know what the entire ocean is doing and not just the upper part, the surface could theoretically be gaining heat from the ocean while losing heat to the atmosphere. For example, during phases of a strong El Nino, upper ocean warming could theoretically shift the TOA balance to a net energy loss, although this would be unlikely to persist throughout the El Nino.

        It’s a quibble because we know these things don’t explain what was happening over the entire interval. Rather, surface and ocean were both gaining heat due to more energy in than out.

      • Fred,
        I completely disagree about the relative strength of evidence as to climate sensitivity from simulations by AOGCMs (Chapter 8 of WG1 “Climate Models and Their Evaluation”) and from observational evidence that is either direct or intermediated through simple Energy Balance Models. I am not alone in believing that AOGCMs may be getting things quite wrong. Hansen recently concluded that most AOGCMs get heat flux into the ocean very wrong, which means that if their forcings are correct then they substantially overestimate sensitivity (he instead thinks it means their forcings are wrong, of course). And one of the best known teams of climate parameter investigators, Chris Forest, Peter Stone and Andrei Sokolov, who use the sophisticated MIT 2D climate model with a GCM-like atmospheric module, wrote about AOGCMs:

        “Much of the work has focused on evaluating the models’ ability to simulate the annual mean state, the seasonal cycle, and the inter-annual variability of the climate system, since good data is available for evaluating these aspects of the climate system. However good simulations of these aspects do not guarantee a good prediction. For example, Stainforth et al. (2005) have shown that many different combinations of uncertain model sub-grid scale parameters can lead to good simulations of global mean surface temperature, but do not lead to a robust result for the model’s climate sensitivity.

        A different test of a climate model’s capabilities that comes closer to actually testing its predictive capability on the century time scale is to compare its simulation of changes in the 20th century with observed changes. A particularly common test has been to compare observed changes in global mean surface temperature with model simulations using estimates of the changes in the 20th century forcings. The comparison often looks good, and this has led to statements such as: ”…the global temperature trend over the past century …. can be modelled with high skill when both human and natural factors that influence climate are included” (Randall et al., 2007). However the great uncertainties that affect the simulated trend (e.g., climate sensitivity, rate of heat uptake by the deep-ocean, and aerosol forcing strength) make this a highly dubious statement. For example, a model with a relatively high climate sensitivity can simulate the 20th century climate changes reasonably well if it also has a strong aerosol cooling and/or too much ocean heat uptake. Depending on the forcing scenario in the future, such models would generally give very different projections from one that had all those factors correct.”

        As you no doubt know, the “Randall et al., 2007” study that the above-quoted statement referred to as “highly dubious” in fact constitutes the complete Chapter 8 of AR4:WG1.

        That team published results a few years ago showing that all AOGCMs they studied mixed heat into the ocean faster than their best estimate of the actual rate, by a factor of several times at least.

        Another revealing comment on investigating climate sensitivity using AOGCMs was made in a 2005 study by James Annan et al.:

        “In previous research the value of the models and methods in generating useful estimates of climate sensitivity has generally been assumed and not directly tested.”

      • Nic – I think you missed my main point – again it may be I wasn’t clear enough. The chapter 8 justification for a climate sensitivity range between about 2 and 4.5 does not depend on the modeling you refer to, but rather on something much more subject to confirmation. It entails predicting what will happen to water vapor, lapse rate offsets to water vapor, ice, snow, and clouds – and then looking at observational data (mainly from satellites) to see how these phenomena are actually behaving. When that is done, the evidence for substantial positive feedback from water vapor minus lapse rate, and snow/ice becomes almost inescapable, and the evidence from clouds suggests a positive feedback and excludes a strong negative feedback (long term – see below). This makes a sensitivity less than 1.5 very unlikely and a sensitivity less than 2 also fairly unlikely. This has nothing to do with heat transport into the ocean, although that phenomenon, in my view, also supports fairly high sensitivities once the evidence for significant rates of deep ocean transport is factored in (but that’s a different topic).

        Again, it’s important to recognize that we’re discussing long term trends in the above phenomena as related to long term CO2 forcing. How any of these climate phenomena behave in response to short term climate perturbations is a different matter – in fact, a “sensitivity” to short term perturbations may have a value very different from the long term one.

      • Fred,
        It has a lot to do with heat transport into the oceans. If there has been only a fairly small change in ocean heat flux over the last century and the ratio of global increase in surface temperature to increase in forcing is low (as the evidence certainly suggests), then it follows that climate sensitivity is low – perhaps of the order of 1.5 C.
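
        As a minimal sketch of that arithmetic (all inputs are assumed placeholder values, not actual estimates), using the Gregory 02 relation Q – F = Y ΔT and S = F_2x / Y:

        ```python
        # Energy-balance sensitivity: Y = (dF - dQ) / dT, then S = F_2x / Y.
        F_2x = 3.7           # forcing from doubled CO2, W/m2
        dF = 2.0             # assumed increase in forcing over the period, W/m2
        dQ = 0.3             # assumed (small) change in ocean heat flux, W/m2
        dT = 0.7             # assumed rise in global surface temperature, deg C
        Y = (dF - dQ) / dT   # implied climate feedback parameter, W/m2 per deg C
        print(F_2x / Y)      # implied sensitivity: about 1.5 deg C
        ```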

        I also disagree with what you say about cloud feedback.

      • I’m not sure this disagreement is resolvable, Nic. Generally accepted evidence gives us a good idea of the direct temperature response to CO2 forcing in the absence of feedbacks. I then refer to the effect of superimposing feedbacks that are subject to observational confirmation, and which indicate fairly substantial amplification of the direct CO2 response.

        Yes, of course there are other ways to look at climate sensitivity. Ocean heat uptake is a separate topic, and its measurement is still plagued by many technical issues. It has much potential future utility in narrowing down our understanding of TOA flux imbalances, and perhaps in helping us estimate climate sensitivity, but the limitations of current and historical measurements restrict our ability to do this – particularly over short timescales. In any case, it doesn’t change our observations about feedback phenomena. To me, these are strong pieces of evidence in their own right, which was why I emphasized them as complementing evidence from other sources.

    • Steve Fitzpatrick

      Since you asked:
      It seems to me that the AR4 group may be going through some painful contortions to discount/discredit/tilt data and analysis which are contrary to the 2C to 4.5C range. Nic has pointed out a rather glaring example where a prior (which can be most generously described as ‘questionable’) is used to drastically modify the conclusions of a published work; one in clear disagreement with the 2C to 4.5C range.
      Based on the rather vast uncertainties in aerosol forcing, and the substantial discrepancies between model projections of ocean heat uptake and measured heat uptake (ARGO), it strikes me as bizarre that the IPCC insists on excluding the possibility of quite low sensitivity, when there is a wealth of evidence for fairly low sensitivity. As someone said earlier: their thumb appears to be on the scale, always pressing for high sensitivity values. They should be much more circumspect in their analysis: more openness about conflicting data/studies, less certainty about climate sensitivity, and most important of all, clear and accurate statements about the real level of uncertainty.

    • Keith Jackson

      As a best guess I’d say the sensitivity is most likely to be in the range 1.5 +/- 0.5 degrees C. I think it is very unlikely to be as high as 3 degrees C. Limitations on fossil fuel availability and a sensitivity in this range will limit further “anthropogenic” warming to below about +1.2 degrees C forever. One hundred years from now the world will be far more concerned about the advent of the next ice age than about global warming.

      • tempterrain

        Keith, Do you know when the next glacial period is due – naturally? In about
        a: 800 years
        b: 8000 years
        c: 80000 years
        d: longer than the above

      • tempterrain,
        Who cares and why?

      • tempterrain

        Who cares? You might think scientists are strange animals but they do care and are curious about all sorts of oddities which aren’t related to concern about the price of gasoline. They don’t start from the political viewpoint that the consequences of accepting CO2’s role are unacceptable, conclude it must therefore be wrong, and work backwards through the available scientific evidence to try to justify an initial “instinctive reaction”.

      • tempterrain,
        I know scientists.
        You are no scientist.
        I asked where you got your answer.
        You answer, as usual, with an information free meander.

      • Keith Jackson

        tempterrain – No, I don’t know when the next glacial period is “due”. In comparison to the last few interglacials it could be within a few hundred to a few thousand years. Current conventional thinking is that because of the low orbital eccentricity it may be as long as 30,000 years; however, the current solar insolation at 65 degrees N in the summer is low and will remain so for a long time. I believe it is quite possible that a combination of low solar activity and high volcanic activity could push the world over the edge. Also, since the world seems to need something to worry about, an ice age seems like a better candidate than most as the “warming” is seen to fade…

      • tempterrain

        Try option d

      • tempterrain

        Sorry: Try option C.

      • And you know it is answer ‘C’ exactly how?

      • tempterrain

        I’m just a know-all I suppose! Seriously, if you look up ice ages in Wikipedia you’ll see that the glacial intervals have been about 100,000 years for the last 10 cycles. Also, the last minimum was 20,000 years ago, which means that the next one is due …………

        Furthermore, CO2 levels always fall as temperatures fall. Even if you claim that is a result rather than a cause of falling temperatures, the fact that it is rising quickly means you don’t have to worry about glaciers in London or New York just yet.

      • Rattus Norvegicus

        It’s called Milankovitch, but our current antics appear to have put it off for a few tens of thousands of years…

      • tempterrain,
        The record shows CO2 goes up after temperatures go up.
        Can you possibly be more pointless?

    • David L. Hagen

      Judith
      Re IPCC’s “it. . . is very unlikely to be less than 1.5°C.”
      I do not see any evidence for “very unlikely”. IPCC ignored (or excluded) ocean oscillations and indirect solar impacts, including modulating cosmic rays and thus clouds. Consequently, much of the anthropogenic fitting to global warming models is an argument from ignorance which overestimates anthropogenic and underestimates natural causes. There appear to be upwards of 15 estimates below 1.5 C. Incorporating the numerous lower climate sensitivity estimates would require rephrasing to something like:
      “Climate sensitivity is likely to be in the range 0.5°C to 4°C with a best estimate of 0.6°C for measurements and 3°C for models. Values less than 0.5°C may be possible.”
      (Based on my gut feel of papers read without calculation.)

      Idso evaluated eight naturally-occurring phenomena:

      (1) the change in the air’s water vapor content that occurs at Phoenix, Arizona, with the advent of the summer monsoon, (2) the naturally-occurring vertical redistribution of dust that occurs at Phoenix between summer and winter, (3) the annual cycle of surface air temperature that is caused by the annual cycle of solar radiation absorption at the earth’s surface, (4) the warming effect of the entire atmosphere caused by its mean flux of thermal radiation to the surface of the earth, (5) the annually-averaged equator-to-pole air temperature gradient that is sustained by the annually-averaged equator-to-pole gradient of total surface-absorbed radiant energy, (6) the mean surface temperatures of Earth, Mars and Venus relative to the amounts of CO2 contained in their respective atmospheres, (7) the paradox of the faint early sun and its implications for earth’s thermal history, and (8) the greenhouse effect of water vapor over the tropical oceans and its impact on sea surface temperatures. . . .
      These eight analyses suggest, in the words of Idso, that “doubling of the atmosphere’s CO2 concentration could raise the planet’s mean surface air temperature by only about 0.4°C.”

      Idso, S.B. 1998. CO2-induced global warming: a skeptic’s view of potential climate change. Climate Research 10: 69-82.
      There are forty citations to Idso 1998.

      Lindzen (slide 22 notes):

      the errors in the feedback factors make it impossible to narrow the range of (model) sensitivity, thus explaining why this range has not diminished since 1979. However, for the low sensitivity obtained from the actual climate system, we see that sensitivity is narrowly constrained to about 0.5C . . .

      summary of: Lindzen, R.S. and Choi, Y.-S. 2009. On the determination of climate feedbacks from ERBE data. Geophysical Research Letters 36: 10.1029/2009GL039628. Corrected/updated to 0.5K in:
      Lindzen, R.S. and Choi, Y.-S. 2010. On the observational determination of climate sensitivity and its implications. J. Geophys. Res.
      This suggests a bipolar grouping of climate sensitivity estimates – those from nature and those from models.

      Kyoji Kimoto finds 0.75K: On the Confusion of Planck Feedback Parameters. Energy & Environment, Vol. 20, No. 7, 2009.

      Douglass et al. find 0.8K
      Douglass, D.H., Knox, R.S., Pearson, B.D. and Clark, Jr., A., 2006: Thermocline flux exchange during the Pinatubo event. Geophysical Research Letters.Vol.33, L19711, doi: 10.1029/2006GL026355

      Spencer is finding low sensitivity
      Spencer, Roy W., William D. Braswell, 2008: Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration. J. Climate, 21, 5624–5628. doi: 10.1175/2008JCLI2253.1

      More Evidence that Global Warming is a False Alarm: A Model Simulation of the last 40 Years of Deep Ocean Warming “the simple model required a feedback parameter equivalent to a climate sensitivity of only 1.3 deg. C in response to a doubling of CO2.”

      Spencer on solar/cosmic ray/cloud feedback

      the total (direct + indirect) solar forcing on climate associated with the solar cycle could be 3.8 times that most mainstream climate scientists believe.

      etc. etc.

      • David L. Hagen

        “Models based on observations vs GWM models” might be better categories.

      • David L. Hagen

        Contrast this evidence with what the IPCC has already announced (before the reports are written!)
        Next climate warming report will be dramatically worse: UN (AFP) – Nov 22, 2010

        UN Secretary General Ban Ki-moon would make it clear to world leaders in Cancun “that we should not take any comfort in the climate deniers’ siren call.” . . .
        scientists agree that climate change “is happening in an accelerated way.” . . .
        just about everything that you will see in the next report will be more dramatic than the last report, because that is where all the data is pointing.”

        What hope is there for objective evidence based reporting rather than model “projections”?
        Could the following be the reason?

        Thirty billion dollars of emergency funding over three years was agreed at Copenhagen and a UN panel on how to raise 100 billion dollars a year from 2020 has already delivered its report.

    • The climate sensitivity has been measured in the industrial age to be approximately 1.2C. I see no reason to believe it strays far from the measured value.

      • tempterrain

        No. It [2 x co2] hasn’t happened. So it can’t have been measured.

      • I have no doubt climate scientists are capable of calculating the sensitivity before CO2 exactly doubles. It seems you do. I can assure you they are actually smarter than you appear to believe.

      • tempterrain

        Yes, you are probably right about them being smarter and they’ve estimated the most likely value to be about 3 degC.

      • One minute you doubt their capability of doing basic math problems and the next you are putting your faith in their ability to make calculations of incredible complexity with many uncertainties. Please do make up your mind.

      • steven,
        Asking tempterrain to make up his mind has a very large assumption in the question.

      • David L. Hagen

        steven
        Any references/reviews for that 1.2C?

      • Is it a coincidence that the ratio of the last 130-years global warming rate to the last 30-years global warming rate is 0.6/1.5 = 0.4 (your 40% value)?

        http://bit.ly/oxjXaL

    • Dr. Curry,
      The IPCC conclusion is reduced to gibberish when the underpinnings are actually reviewed with an eye towards meaningful advice.

    • The true climate sensitivity value must be determined from observation.

      If the climate follows the following pattern, the sensitivity value will be about 3.

      http://bit.ly/qHDBZJ (IPCC position)

      If the climate follows the following pattern, the sensitivity value will be about 1.2.

      http://bit.ly/pmOEot (Skeptics position)

      • Skeptics Climate Sensitivity = IPCC Climate Sensitivity * Last 130-years global warming rate/ Last 30-years global warming rate

        Skeptics Climate Sensitivity = 3*0.6/1.5 = 1.2

    • The IPCC conclusion is not even wrong.

    • With respect to the IPCC numbers on climate sensitivity, and at the risk of sounding like a broken record, the only numbers I think have any relevance are the actual values of climate sensitivity which are observed. I have no interest whatsoever in any numbers derived from the use of non-validated models. How much has the concentration of CO2 increased, and how much has global temperature increased, due solely to this increase in CO2? So far as I can tell, we do not know what the contribution of factors other than CO2 is, so we have very little idea of how much the rise in CO2 has contributed to the recent rise in global temperatures. I think the conclusions stated by the IPCC have no basis in what I understand as “the scientific method”. The conclusion the IPCC should have come to is we just don’t know.

    • curryja, 7/8/11, 8:50 pm, sensitivity II

      Here are some comments from the viewpoint of systems science on your passage du jour:

      “The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. It is not a projection but is defined as the global average surface warming following a doubling of carbon dioxide concentrations. It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values.” IPCC, AR4, SPM, p. 12

      ECS is not a measure of anything in the climate. IPCC defines it in its Glossary thus:

      equilibrium climate sensitivity refers to the equilibrium change in the annual mean global surface temperature following a doubling of the atmospheric equivalent carbon dioxide concentration. Due to computational constraints, the equilibrium climate sensitivity in a climate model is usually estimated by running an atmospheric general circulation model coupled to a mixed-layer ocean model, because equilibrium climate sensitivity is largely determined by atmospheric processes. Efficient models can be run to equilibrium with a dynamic ocean.

      Directly or indirectly, ECS is an output of GCMs. It is fictional because the real climate is never in equilibrium. Callendar (1938), see especially the discussion with George Simpson, Director of the Met Office, 1920-1938. See also Lindzen and Choi (2011), p. 6, lines 121-125, for an alternative explanation. Earth’s climate is also known never to be in thermodynamic equilibrium, as a minimum because it is heated by the Sun, and mechanically agitated by gravitational forces. ECS is a prediction from an approximation to (i.e., a model of) Earth’s real climate. ECS is both a prediction and an estimation, but never a measurement, because Earth has not been subjected to a doubling of CO2 since Joseph Black identified it at the start of the Industrial Era.

      [We should rename CO2 Black Gas. Cosmologists hypothesized Dark Matter and Dark Energy to balance their equations of Relativity. Not to be shortchanged, Climatologists deserve recognition for their equivalent mysterious elements of Black Matter and Black Energy that, since the Industrial Era, equilibrate Earth’s climate in ever warmer states.]

      IPCC confirms that the ECS is not a measure of Earth’s real climate as stated above, but is found in models:

      Substantial progress has been made in understanding the inter-model differences in equilibrium climate sensitivity. AR4, Ch. 8, Executive Summary, p. 591.

      Also, AR4 Chapter 10 is Global Climate Projections. Bold added. Section 10.5 is Quantifying the Range of Climate Change Projections, (Bold added), and contains Box 10.2 Equilibrium Climate Sensitivity. Thus IPCC contradicts the subject paragraph by including ECS among its projections.

      What is IPCC trying to say in the SPM by declaring it is NOT a projection when by definition it is to be projected? Could IPCC have meant that it is not a data point but a rate to be projected? Earlier, IPCC created an explanation with this definition:

      When a projection is branded “most likely,” it becomes a forecast or prediction. TAR, WG2, Box 3-1.

      And now

      Equilibrium climate sensitivity is likely to be in the range 2°C to 4.5°C with a most likely value of about 3°C, based upon multiple observational and modelling constraints. Bold added, AR4, Technical Summary, ¶TS.6.4.2, p. 88.

      This is a stronger statement than the subject paragraph because it raises the 3ºC figure from a best estimate to a most likely value. Thus ECS appears not to be a projection because it is a prediction. This is reinforced in the following IPCC definition:

      Climate projections are distinguished from climate predictions in order to emphasize that climate projections depend upon the emission/concentration/ radiative forcing scenario used, which are based on assumptions concerning, for example, future socioeconomic and technological developments that may or may not be realised and are therefore subject to substantial uncertainty. AR4, Glossary, p. 943.

      Finally, IPCC admits that GCMs predict ECS:

      Modelling assumptions controlling the cloud water phase (liquid, ice or mixed) are known to be critical for the prediction of climate sensitivity. AR4, ¶8.6.3.2.2 , p. 638.

      This is neither pedantry nor semantics. Scientific models predict real world effects from postulated real world causes. Science assesses these models according to whether future measurements (observations reduced to facts) validate their predictions. Validation is a persistent test of model predictions by measurements, which are two distinct entities. This fundamental principle of science needs to be kept forefront by using consistent, unambiguous terminology.

      Where IPCC speaks of agreement between models and observations, above, it must be referring to other parameters than climate sensitivity. Only now, and since AR4, is the ECS being estimated from measurements. Lindzen & Choi (2011) conclude,

      Finally, with f_Total, the equilibrium climate sensitivity for a doubling of CO2 is obtained via Eq. (3). Here the statistical confidence intervals of the sensitivity estimate at 90%, 95%, and 99% levels are also calculated by the standard error of the feedback factor f_Total. This interval should prevent any problems arising from limited sampling. As a result, the climate sensitivity for a doubling of CO2 is estimated to be 0.7K (with the confidence interval 0.5K−1.3K at 99% levels). This observational result shows that model sensitivities indicated by the IPCC AR4 are likely greater than the possibilities estimated from the observations. Bold added.

      According to L&C, the equilibrium climate sensitivity test is complete and in the language of science, their result invalidates the GCMs.

      • Thank you Mr. Jeff Glassman, Perhaps your observation will now save us, a hole bunch of money.

        http://www.deseretnews.com/article/705351459/Wyoming-gets-supercomputer-for-climate-studies.html

        Hope; we still have the PO…

      • Tom, 7/9/11, 9:06 pm, sensitivity II

        Thanks for hole bunch of money, as in rat-hole, eh?

        But let Wyoming have its little super computer. Maybe they ought to install it outdoors where the programmers might discover the Sun and the clouds. The cost is better than spending it on welfare, and besides the price is lost in the noise. The Kyoto price tag is around $30 trillion in pre-Obama dollars.

      • I wouldn’t worry too much about anything other than the transient climate sensitivity. The earth has been warming for over a century. Have you ever seen an attribution of recent warming that included the long term response to earlier forcings? Or has all the warming been attributed as part of the transient response of recent forcings? Obviously anything past the transient response is so negligible it needn’t be included.

      • Steven, 7/10/11, 12:23 am, sensitivity II

        Perhaps transient climate sensitivity would be better, but Judith Curry didn’t ask for comments on that. Besides, it’s new and not much is known about it yet.

        IPCC reports that GCMs are initialized as of about 1750 in steady state with constant temperature and constant CO2 concentration. This quashes the on-going temperature and CO2 increases evident from the Vostok record. As a result, Industrial Era rises in these parameters are attributed to man (correlation establishing cause), wrongly including on-going natural processes. Based on the Vostok record, Earth should about now be stabilizing (briefly in geological terms) at an anomaly of about 3ºC ± 1ºC, the temperature of the preceding four peaks. So the prediction from Vostok is for a natural temperature response about the same as IPCC predicts for man doubling atmospheric CO2 content after turning off the on-going natural processes.

        If we could see ahead to 2111, when the temperature (anomaly) is 3º C instead of 0ºC and the CO2 concentration is approaching 600 ppm, when the ice caps are gone and Greenland is called whatever might be the Chinese word for breadbasket, we couldn’t tell whether the climate change was natural or anthropogenic.

        By the way, for those who might observe that a CO2 prediction from Vostok would never reach 580 ppm, remember that Vostok is extensively low pass filtered (between about 20 years and 1500 years) so it is heavily attenuated and cannot be compared directly with MLO data. Also, Vostok sits inside the CO2 sink of Antarctic waters while MLO sits in the plume of the massive outgassing from the Eastern Equatorial Pacific, and Keeling himself warned about the use of such data. The temperature record might suffer from the same low pass filtering, but the source/sink bias is corrected by use of anomalies, and besides, the Vostok temperature record fits the modern temperature record without software bending and blending.
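
        The attenuation point can be illustrated with a toy smoothing example (every number below is invented; this is not Vostok data):

        ```python
        import numpy as np

        # A 100 ppm spike lasting ~100 years, smoothed by a crude 600-yr moving
        # average: the smoothed record retains only a fraction of the excursion.
        t = np.arange(2000)                         # years
        co2 = 280.0 + 100.0 * ((t > 950) & (t < 1050))
        window = np.ones(600) / 600.0
        smoothed = np.convolve(co2, window, mode="same")
        print(co2.max(), round(smoothed.max(), 1))  # 380.0 vs ~296.5: spike mostly gone
        ```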

      • Jeff, transient equilibrium is just the equilibrium achieved in a reasonably short amount of time. The reason it is of importance is that if you claim a large equilibrium sensitivity and a small transient sensitivity then you have to account for the response to earlier forcings, which would also have a small transient but a large equilibrium sensitivity. There is only so much warming to attribute, and the warming can’t be both the equilibrium sensitivity from earlier forcings and the transient sensitivity from current forcings. Bottom line is the larger the equilibrium sensitivity in comparison to the transient sensitivity, the less warming which can be placed on recent forcings and the less warming we can expect in the future, since the ratio of transient to equilibrium would remain the same. I did find your comment on the Vostok temperatures interesting when I looked at the chart.
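
        A minimal one-box sketch of that transient/equilibrium gap (all parameters assumed, purely illustrative):

        ```python
        # One-box model: C dT/dt = F - lam*T. Equilibrium warming is F/lam; the
        # transient value lags it because the assumed heat capacity soaks up energy.
        C = 100.0     # assumed effective heat capacity, W yr m-2 K-1
        lam = 1.2     # assumed feedback parameter, W m-2 K-1
        F = 3.7       # forcing after an instantaneous CO2 doubling, W m-2

        T, dt = 0.0, 0.1
        for _ in range(int(70 / dt)):   # integrate 70 years
            T += dt * (F - lam * T) / C
        print(T, F / lam)               # transient ~1.75 C vs equilibrium ~3.08 C
        ```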

      • steven 7/10/11, 2:02 pm, sensitivity II

        You bring in a new, undefined term, transient equilibrium, that I could find used nowhere. TAR and AR4 combined use transient simulation (37), transient climate sensitivity (5), transient sensitivity (2), and transient global climate sensitivity (1).

        IPCC reports appear not to claim that transient simulations run to equilibrium. I found two external references which say unambiguously that they do not.

        As greenhouse gases build up to the equivalent of twice the pre-industrial level (2xCO2 level) in a transient simulation, the ocean and ice masses will not be in equilibrium with the heating of the lower atmosphere: they will be lagging behind the rate of forcing due to their longer time scales. We can then speculate that the global-mean temperature at the time the transient simulation reached 2xCO2 greenhouse levels would not be as high as that for an equilibrium 2xCO2 simulated climate.

        http://www.meteor.iastate.edu/gccourse/model/co2/transient.html

        The transient sensitivity is lower than the equilibrium sensitivity, due to the “inertia” of ocean heat uptake. Fully equilibrating ocean temperatures would require integrations of thousands of model years.

        http://en.wikipedia.org/wiki/Climate_sensitivity

  32. David L. Hagen

    Judith
    How will we handle “climate sensitivity” to “global warming” when global average temperatures start cooling while CO2 continues to increase?
    E.g. Easterbrook (2001) predicted cooling beginning about 2007, “cooling of about 0.3-0.5° C until ~2035.”
    Numerous others are providing similar model predictions.
    Nils-Axel Mörner: Arctic Environment by the Middle of this Century

    The date of the New Solar Minimum has been assigned at around 2040 by Mörner et al. (2003), at 2030-2040 by Harrara (2010), at 2042 ±11 by Abdassamatov (2010) and at 2030-2040 by Scafetta (2010), implying a fairly congruent picture despite somewhat different ways of transferring past signals into future predictions.
    The onset of the associated cooling has been given at 2010 by Easterbrook (2010) and Herrara (2010), and at “approximately 2014” by Abdassamatov (2010). Easterbrook (2010) backs up his claim that the cooling has already commenced by geological observations facts. At any rate, from a Solar-Terrestrial point of view, we will, by the middle of this century, be in a New Solar Minimum and in a New Little Ice Age (Figure 7).

    Empirical evidence for a celestial origin of the climate oscillations and its implications Nicola Scafetta

    It is found that at least 60% of the global warming observed since 1970 has been induced by the combined effect of the above natural climate oscillations. The partial forecast indicates that climate may stabilize or cool until 2030–2040.

    Syun-Ichi Akasofu, On the recovery from the Little Ice Age, Natural Science, Vol.2, No.11, 1211-1224 (2010), doi:10.4236/ns.2010.211149

    Craig Loehle, Trend Analysis of Satellite Global Temperature Data, Energy & Environment Vol. 20, No. 7, 2009, 1087-1098.
    etc etc.

    • David L. Hagen

      Does the UK Met Office now expect cooling?

      the Met Office has an intensive programme to make its seasonal and other long-range forecasts accurate enough for presentation to the public again in the future. Adrian Scaife, who is in charge of the research [says]. . . .“We now believe that [the solar cycle] accounts for 50 per cent of the variability from year to year,” []. With solar physicists predicting a long-term reduction in the intensity of the solar cycle – and possibly its complete disappearance for a few decades, as happened during the so-called Maunder Minimum from 1645 to 1715 – this could be an ominous signal for icy winters ahead, despite global warming.

      So, will it rain tomorrow? FT Mag.

  33. It is cold here. Your statistics, however well intentioned, do not explain why my furnace is heating my house in July. I pay a CARBON TAX because burning fossil fuels gives me a carbon “Footprint”. This year I will pay more due to the cold climate that increases my carbon foot print because of the global increase in temperature. ??? Please explain .. ??????

  34. As a non-stats type, I must say that much of this discussion seems to revolve around “taste,” “experience,” and other subjectivae: “But you Bayesianists don’t realize . . .”, etc.

    If that is really the case then our knowledge must be really weak. How can any policy decision be justified?

  35. I think the best use of a Bayesian prior in this sort of situation is in an inverse fashion: Given the evidence, how strong would your prior have to be to get a sensitivity number in the range [a,b]? That actually makes the nature of disagreements clearer and lets you know when the data are really too weak to arbitrate between alternative reasonable priors.
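
    One way to make that inverse question concrete is to compare how much posterior probability lands in the target range under alternative priors – a sketch only, with an assumed Gaussian-in-Y likelihood and invented numbers:

    ```python
    import numpy as np

    # Posterior probability that sensitivity S lies in [2, 4.5] under two priors,
    # given an assumed Gaussian likelihood for the feedback parameter Y = F2x/S.
    F2x = 3.7
    S = np.linspace(0.1, 10.0, 5000)                    # uniform grid in S
    like = np.exp(-0.5 * ((F2x / S - 2.3) / 0.7) ** 2)  # invented likelihood

    def prob_in_range(prior, lo=2.0, hi=4.5):
        post = like * prior
        post /= post.sum()                              # normalise on the grid
        return post[(S >= lo) & (S <= hi)].sum()

    print(prob_in_range(np.ones_like(S)))   # uniform prior in S
    print(prob_in_range(1.0 / S**2))        # uniform prior in Y (density 1/S^2 in S)
    ```

    If the two outputs are close, the data dominate; if they differ widely, the choice of prior is doing the work.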

  36. This morning I’ve had a chance to read through this thread and reflect a bit. It seems to me that the issue is not so much that the IPCC AR4 chapter 9 authors have made an error in determination of the sensitivity in Fig 9.20, but rather that there is unacknowledged structural uncertainty in their methods for determining climate sensitivity (both statistical and physical/conceptual).

    The field has matured sufficiently that the structural uncertainty needs to be explored, not just with regards to sensitivity calculations, but most other aspects of climate science as well. Failure to examine structural uncertainty should preclude high confidence conclusions (e.g. very likely, unequivocal) by the IPCC.

    • “Failure to examine structural uncertainty should preclude high confidence conclusions (e.g. very likely, unequivocal) by the IPCC.”

      The opposite has the same outcome too, i.e. an objective appraisal of structural uncertainty should also preclude high confidence conclusions.

    • Indeed, on one side we have the valid results of climate science, on the other side we have the need for well justified policy advice. The gap between the scientific knowledge and the knowledge needed for well justified policy advice is wide open. People try to bridge the gap creating islands of knowledge that would form a chain to support the bridge, but in vain. All those islands are illusions created by hopeful thinking rather than solid analysis. The step from empirical likelihood functions to PDFs with the help of a well defined prior is one of those illusory attempts, and even more so when the likelihood functions themselves are dependent on doubtful assumptions concerning systematic uncertainties and lack of knowledge.

      It would be so easy to accept the skeptics’ solution of postponing all actions, but that’s not any better justified than the alternatives.

      • You may recall that we discussed the issue of creating pdfs for climate sensitivity in a thread discussing annan and hargreaves entitled “Probabilistic(?) estimates of climate sensitivity.” http://judithcurry.com/2011/01/24/probabilistic-estimates-of-climate-sensitivity/

      • Yes, I remember, and I have actually referred to their paper and discussion on it in several recent comments. My feeling is that there are now several people contributing to this site, who are better prepared to discuss those issues than they were at that time. (I’m certainly one, and hardly the only one.)

        Solving a problem studied for years by many competent scientists is likely to be too much for any site, but at least we might still learn a bit more. My present view of the recent discussion is that we might be able to give useful comments on how some parts of AR4 could have been written better, providing the same valid information but getting rid of misleading or unsupported formulations. Such discussions might, optimistically thinking, influence AR5 and make it a little better (or some other overviews of the state of science, if we do not extend our thoughts all the way to AR5).

      • Agreed, in fact this was Nic’s motivation for publishing his paper here first, rather than waiting for the much slower journal publication process, in the hope that the discussion would influence the AR5.

      • Your last sentence doesn’t mention the cost of the actions. If this is large and certain this changes the overall equation significantly. If the cost is large, certain and borne in large part by the poor an ethical dimension is also unavoidable. This isn’t the place, no doubt, to go into details. But the radical uncertainties in climate science are not for me matched by the uncertainties of the cost of action.

      • The cost of action is one of the most important factors in wise decision making. Low cost, low risk decisions can be made with little positive evidence, but increasing costs and risks add progressively to the required evidence.

        Similarly, the size of the benefits (certain, or possible through reduction of risks) raises the level of acceptable costs. Something in the spirit of cost-benefit analysis is needed even when a full analysis is beyond our capabilities, but the problem of decision making often lies in just this step.

      • IMO likelihoods and scenarios in principle would be much more useful in decision making re climate change than erroneous PDFs. Cost-benefit for each strategy in the context of each scenario (with some sense of likelihood) would provide the basis for robust decision making.

      • I don’t think I have to understand exactly what this would look like in practice to hope fervently that such a view also influences AR5.

      • My view is that every analysis that tries to estimate costs and benefits over a long period (say 30 years or more) is hopelessly imprecise. It’s necessary to make long term considerations and scenarios, but more to learn about the significance of near term actions in enabling possibilities for future developments, or inadvertently closing paths that might be important, than to calculate net present values. Then we should make decisions bearing in mind that they should be the most useful choices over the shorter term that we can really envision.

      • Agreed. Note that option pricing and option theory introduce ideas like learning value and information value which are much more helpful in cases like this than NPV.

      • With all the uncertainties, how do you know that there are valid results from climate science? That is why there are a bunch of us skeptics out here. There are no proven valid results from climate science. There is valid climate data, but it does not validate the unstable climate forecasts.

    • It seems to me that the issue is not so much that the IPCC AR4 chapter 9 authors have made an error in determination of the sensitivity in Fig 9.20, but rather that there is unacknowledged structural uncertainty in their methods for determining climate sensitivity (both statistical and physical/conceptual).

      Judith, is the distinction Nic makes above between evidence for climate sensitivity “from simulations by AOCGMs … and from observational evidence that is either direct or intermediated through simple Energy Balance Models” relevant to this ‘structural uncertainty’?

    • curryja, 7/9/11, 9:14 am, sensitivity II

      You wrote,

      It seems to me that the issue is not so much that the IPCC AR4 chapter 9 authors have made an error in determination of the sensitivity in Fig 9.20, but rather that there is unacknowledged structural uncertainty in their methods for determining climate sensitivity (both statistical and physical/conceptual). Bold added.

      The data in Figure 9.20 also appear in Box 10.2, Figure 1a, p. 798. Using those data, IPCC constructed cumulative distributions for Box 10.2, Figure 2, p. 799. Thus Figure 2 of Box 10.2 contains the cumulative probability versions of the probability density curves of Figure 9.20. The authors involved are not just Gabriele Hegerl and Francis Zwiers for Chapter 9, but include Gerald Meehl and Thomas Stocker for Chapter 10. IPCC gives no explanation for its decision at least to try the cumulative distribution functions.

      In the First-Order Draft of Chapter 10, IPCC said,

      There is no formal way of estimating a single PDF from the individual results due to different assumptions in each study. Nevertheless, an expert judgement can be based on the average of the nine PDFs shown in Box 10.2, Figure 2 such that best agreement with observations is found for a sensitivity of 3.0°C, with a median value of 3.4°C, similar to the centre of the TAR range and close to the AOGCM average. The average of the nine PDFs suggests that climate sensitivity is very unlikely below 1.5°C (8% probability) and unlikely above 4.5°C (28% probability). Bold added.

      The issue gets confused here because IPCC had defined PDF to mean the probability density functions shown in Figure 9.20, but it averaged the probability distribution functions of Box 10.2, Figure 2. Reviewers disapproved, pointing out the confusion over the meaning of PDF and, regardless, challenging the statistical meaning of averaging PDFs, whatever they might be. This is a structural problem with the statistics.
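      As an aside, pointwise averaging of density curves is at least a well-defined operation, whatever its statistical merits. A minimal sketch in Python (the three densities are invented stand-ins, not the nine AR4 curves):

        import numpy as np

        S = np.linspace(0.0, 10.0, 2001)    # climate sensitivity grid, K
        dS = S[1] - S[0]

        def gaussian_pdf(mean, sd):
            p = np.exp(-0.5 * ((S - mean) / sd) ** 2)
            return p / (p.sum() * dS)       # normalise on the grid

        # Three made-up studies with different central estimates and spreads.
        pdfs = [gaussian_pdf(2.5, 0.8), gaussian_pdf(3.0, 1.2), gaussian_pdf(3.5, 1.8)]
        avg = np.mean(pdfs, axis=0)         # the pointwise average is still a density

        print("P(S < 1.5):", avg[S < 1.5].sum() * dS)
        print("P(S > 4.5):", avg[S > 4.5].sum() * dS)

      Averaging the density curves pointwise and averaging the cumulative distribution curves pointwise describe the same mixture distribution, so the terminological confusion does not change the arithmetic; the open question is whether the averaging itself is statistically meaningful.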

      Accordingly, IPCC revised this section for the final version to say,

      There is no well-established formal way of estimating a single PDF from the individual results, taking account of the different assumptions in each study. Most studies do not account for structural uncertainty, and thus probably tend to underestimate the uncertainty. … [¶] … Progress has been made since the TAR in diagnosing and understanding inter-model differences in climate feedbacks and equilibrium climate sensitivity. … cloud feedbacks (particularly from low-level clouds) have been confirmed as the primary source of climate sensitivity differences … . [¶] … Basing our assessment on a combination of several independent lines of evidence, as summarised in Box 10.2 Figures 1 and 2, including observed climate change and the strength of known feedbacks simulated in GCMs, we conclude that the global mean equilibrium warming for doubling CO2, or ‘equilibrium climate sensitivity’, is likely to lie in the range 2°C to 4.5°C, with a most likely value of about 3°C. Equilibrium climate sensitivity is very likely larger than 1.5°C. Bold added.

      So IPCC retained its conclusion reached by alleged PDF averaging, simply concealing its faulty methodology and pretending to have used a lesser-known, advanced methodology by introducing the weasel words “well-established”. IPCC did explicitly and forthrightly introduce the question of structural uncertainty. Moreover, IPCC identified cloud feedbacks as the primary intermodel structural uncertainty. However, the structural uncertainty with cloud feedbacks runs far deeper than intermodel variability in estimating climate sensitivity.

      The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. … Cloud feedbacks remain the largest source of uncertainty. {8.6, 9.6, Box 10.2} AR4, SPM, p. 12.

      In spite of this undeniable progress, the amplitude and even the sign of cloud feedbacks was noted in the TAR as highly uncertain, and this uncertainty was cited as one of the key factors explaining the spread in model simulations of future climate for a given emission scenario. This cannot be regarded as a surprise: that the sensitivity of the Earth’s climate to changing atmospheric greenhouse gas concentrations must depend strongly on cloud feedbacks can be illustrated on the simplest theoretical grounds, using data that have been available for a long time. … Clouds, which cover about 60% of the Earth’s surface, are responsible for up to two-thirds of the planetary albedo, which is about 30%. An albedo decrease [increase] of only 1%, bringing the Earth’s albedo from 30% to 29% [or vice versa], would cause an increase [decrease] in the black-body radiative equilibrium temperature of about 1°C, a highly significant value, roughly equivalent to the direct radiative effect of a doubling of the atmospheric CO2 concentration. Bold, square brackets added, 4AR, ¶1.5.2, p. 114.
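      The quoted albedo arithmetic can be checked against the standard black-body equilibrium relation, T_e = [S0 (1 − α) / (4σ)]^(1/4). A minimal sketch with nominal values (my numbers, not IPCC’s):

        SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
        S0 = 1368.0        # solar constant, W m^-2 (nominal value)

        def t_equilibrium(albedo):
            # black-body radiative equilibrium temperature for a given albedo
            return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

        dT = t_equilibrium(0.29) - t_equilibrium(0.30)
        print(round(dT, 2))    # about 0.9 K for a 0.30 -> 0.29 albedo change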

      The structural uncertainties above are not expressed in trivial intermodel variability, but lie at the core of IPCC climate modeling, including its reliance on the radiative forcing paradigm. If AR5 settles no other issue, it needs to resolve the highly significant structural uncertainties in IPCC’s statistics and in the modeling of cloud albedo.

  37. Eliza Lynch

    From the British Met Office; refer to WUWT. Source: http://www.ft.com/cms/s/2/35145bee-9d38-11e0-997d-00144feabdc0.html

    “We now believe that [the solar cycle] accounts for 50 per cent of the variability from year to year,” says Scaife. With solar physicists predicting a long-term reduction in the intensity of the solar cycle – and possibly its complete disappearance for a few decades, as happened during the so-called Maunder Minimum from 1645 to 1715 – this could be an ominous signal for icy winters ahead, despite global warming.

  38. It seems we are missing an element in the analysis. We need to determine the probability of a climate scientist making a correct prediction. Then, apply Bayesian analysis to any result of their papers to determine the likelihood of it being right, and then that knowledge can be used to determine if anything should be done based on any given paper.

    • David L. Hagen

      Could unrecognized systemic bias from excluded or unrecognized physics be causing the major disconnect between observations of climate sensitivity and projections from global climate models?

      • This is what gave me this idea – the story of the weather man and the wedding. The probability of the weather man being correct is factored into the determination of whether to have the wedding or not. I really don’t see why this wouldn’t apply to climate scientists also.

        Near the bottom of the page:

        http://stattrek.com/Lesson1/Bayes.aspx

  39. This may belong to the Uncertainty Monster idea or at least it may be related to it, but the idea is to use Bayesian statistics to determine if policy should be enacted based on a prediction of a climate scientist.
    The analogy is here: http://stattrek.com/Lesson1/Bayes.aspx

    From the link:
    Example 1
    Marie is getting married tomorrow, at an outdoor ceremony in the desert. In recent years, it has rained only 5 days each year. Unfortunately, the weatherman has predicted rain for tomorrow. When it actually rains, the weatherman correctly forecasts rain 90% of the time. When it doesn’t rain, he incorrectly forecasts rain 10% of the time. What is the probability that it will rain on the day of Marie’s wedding?

    Solution: The sample space is defined by two mutually-exclusive events – it rains or it does not rain. Additionally, a third event occurs when the weatherman predicts rain. Notation for these events appears below.

    * Event A1. It rains on Marie’s wedding.
    * Event A2. It does not rain on Marie’s wedding
    * Event B. The weatherman predicts rain.

    In terms of probabilities, we know the following:

    * P( A1 ) = 5/365 = 0.0136985 [It rains 5 days out of the year.]
    * P( A2 ) = 360/365 = 0.9863014 [It does not rain 360 days out of the year.]
    * P( B | A1 ) = 0.9 [When it rains, the weatherman predicts rain 90% of the time.]
    * P( B | A2 ) = 0.1 [When it does not rain, the weatherman predicts rain 10% of the time.]

    We want to know P( A1 | B ), the probability it will rain on the day of Marie’s wedding, given a forecast for rain by the weatherman. The answer can be determined from Bayes’ theorem, as shown below.
    P( A1 | B ) = P( A1 ) P( B | A1 ) / [ P( A1 ) P( B | A1 ) + P( A2 ) P( B | A2 ) ]
    P( A1 | B ) = (0.014)(0.9) / [ (0.014)(0.9) + (0.986)(0.1) ]
    P( A1 | B ) = 0.111

    Note the somewhat unintuitive result. Even when the weatherman predicts rain, it rains only about 11% of the time. Despite the weatherman’s gloomy prediction, there is a good chance that Marie will not get rained on at her wedding.
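    For anyone who wants to check the arithmetic, here is a minimal sketch of the same calculation in Python (the numbers are exactly those stated above):

      # Bayes' theorem for the wedding example: probability of rain given a
      # rain forecast, from the prior rain frequency and the forecast accuracy.
      p_rain = 5.0 / 365.0          # prior: it rains 5 days a year
      p_dry = 1.0 - p_rain
      p_fc_given_rain = 0.9         # forecasts rain when it rains
      p_fc_given_dry = 0.1          # forecasts rain when it stays dry

      p_fc = p_rain * p_fc_given_rain + p_dry * p_fc_given_dry
      print(p_rain * p_fc_given_rain / p_fc)   # 0.111..., about 11%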

    • Jim – Using the same reasoning and stats for the weatherman, but starting with the proposition that in the absence of the forecast, we don’t know whether a day is more or less likely to be rainy (i.e., a 50% likelihood), the forecast of rain will be right 90% of the time. Get some umbrellas and set up a tent, Marie.

      Here’s a more interesting question, though. Suppose we’re very skeptical about the existence of rain – we just think it’s unlikely and nothing to worry about even though we can’t say exactly how unlikely. How unlikely would rain have to be in reality for the weatherman’s forecast to have less than a 50% chance of being right?

      If you do the same math, it turns out that even if there were only a 10% probability for rain to be a concern before the forecast, the weatherman’s forecast, with the accuracy you stipulated, would correctly predict rain 50% of the time. Whether to get umbrellas when there’s a 50% chance of rain is of course a policy decision, not a scientific one.
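      A short sketch of that break-even calculation (same accuracy figures as stipulated above):

        def posterior(prior, hit=0.9, false_alarm=0.1):
            # P(rain | rain forecast) for a given prior probability of rain
            return prior * hit / (prior * hit + (1.0 - prior) * false_alarm)

        for prior in (0.0137, 0.05, 0.10, 0.50):
            print(prior, round(posterior(prior), 3))
        # prior 0.10 gives posterior 0.500, the break-even point: solving
        # p * 0.9 = (1 - p) * 0.1 analytically yields p = 0.1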

      • Ah, but we do have predictions from climate scientists and we do believe that the climate has warmed. But is it due to the small amount of CO2 emitted by mankind and will it result in a catastrophe?

      • Also, based on this discussion, if we don’t know the probability of a given set of predictions being right or wrong, we can make up a few PDFs just to see what they might look like. Apparently, Bayesian statistics doesn’t require a lot of certainty to get a result.

      • Let’s leave the last sentence off from your message:

        Also, based on this discussion, if we don’t know the probability of a given set of predictions being right or wrong, we can make up a few PDFs just to see what they might look like.

        When the Bayesian approach is understood correctly, it indeed tells us that we may end up with many different PDFs even after the empirical data has been taken into account. That is an indication of the fact that empirical data is often not as informative as it appears at first sight.

        One empirical analysis of the type of F+G 06 does not tell us that the climate feedback parameter Y is 2.3 ± 1.4 W m^-2 K^-1 with 95% certainty, or that the equilibrium climate sensitivity is in the corresponding range 1.0–4.1 K. Those limits are obtained only when the additional assumption of a uniform prior in Y is made. This assumption is totally independent of the empirical work, and Bayesian inference tells us that it is so.

        The same empirical data leads to the different limits 1.2–14.2 K with 90% certainty when a different prior is used, as in the AR4 WG1 report.

        The first conclusion is that the total uncertainty is larger than that presented in either analysis, unless we really have valid reasons to use a specific prior. The correct interpretation of the Bayesian method is that without additional information the uncertainty is larger, not smaller, than that obtained by a naive interpretation of the empirical work.

        One example of such naive interpretation is to take the numbers presented in F+G 06 as valid confidence limits for Y or S. The paper states explicitly that the numbers correspond to a uniform prior in Y. That sentence is essential, and the limits are not valid without it. The prior is used in the paper as an additional subjective input; it is not a result of the analysis.
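        To make the prior dependence concrete, here is a minimal numerical sketch (my own construction, not a calculation from F+G 06 or AR4), taking S = F_2x / Y with an assumed F_2x of 3.7 W m^-2 and an assumed upper cutoff for S, which any uniform-in-S analysis must impose:

          import numpy as np

          F2X = 3.7                        # forcing for doubled CO2, W m^-2 (assumed)
          Y_MEAN, Y_SD = 2.3, 1.4 / 1.96   # 95% range quoted above, as one sigma

          S = np.linspace(0.1, 18.5, 5000)   # sensitivity grid, K; cutoff assumed
          like = np.exp(-0.5 * ((F2X / S - Y_MEAN) / Y_SD) ** 2)  # Gaussian likelihood in Y

          def interval(post, lo=0.025, hi=0.975):
              # central 95% interval of a gridded, unnormalised posterior over S
              cdf = np.cumsum(post)
              return np.interp([lo, hi], cdf / cdf[-1], S)

          # Uniform prior in Y: the Jacobian |dY/dS| = F2X / S**2 appears on the S axis.
          print("uniform prior in Y:", interval(like * F2X / S**2))
          # Uniform prior in S (up to the cutoff): the likelihood is re-read along S.
          print("uniform prior in S:", interval(like))

        The uniform-in-Y interval comes out near the 1.0–4.1 K quoted above, while the uniform-in-S interval is far wider, with an upper end controlled largely by the cutoff. That the answer moves this much under the same data is exactly the point.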

      • Thanks Pekka. What do you think of the idea of factoring in the accuracy of climate scientists’ predictions as part of an evaluation process to determine if their results should be acted upon?

      • My view is that the decision making process is plagued by a number of uncertainties, and by arguments about how decisions should be made under such uncertainties. Some of these arguments favor doing less as uncertainties increase, while other arguments support doing more when the uncertainties are large in certain characteristic ways.

        When the situation is so complex, one should not pick any single uncertainty and use it as the decisive factor. The complex should rather be accepted as a whole and analyzed starting from specific questions about alternative real world decisions, looking then at what the most important factors are in deciding on specific choices, and further at what can be said about those factors.

        I.e., we should approach the process from the need for information and what support science can provide, rather than imagining that science by itself could tell us what to do. The right basic approach is essential when the data has many holes in the range from basic science to wise decisions.

        Sometimes the expression top-down is used for what I recommend (or at least something similar), while the alternative approach is bottom-up.

    • Jim – I think your result appears unintuitive because of the somewhat odd way that you have set it up, kind of like those quirky word problems on so many standardized tests.

      All we are really looking for is how often the weatherman is right when he predicts rain, so we just need the total number of times he predicts rain (in this case 40.5 days a year) versus the number of those forecasts on which it actually rains (4.5 of the 5 rainy days). Not a very good forecaster for a desert area.
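      In code, the counting view gives the same number as the Bayes formula (a small sketch using the yearly frequencies stated in the example):

        rain_days, dry_days = 5.0, 360.0
        predicted_rain = rain_days * 0.9 + dry_days * 0.1   # 40.5 rain forecasts a year
        correct = rain_days * 0.9                           # 4.5 of them verify
        print(correct / predicted_rain)                     # 0.111..., matching Bayes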

  40. The temperature of the earth is sensitive to Albedo. CO2 in the atmosphere is sensitive to the temperature of the oceans. Add in the manmade CO2, if you like. Albedo is sensitive to the temperature of the oceans. Warm oceans melt Arctic Ice and result in Arctic Ocean Effect Snow which increases Albedo and cools the earth. When the earth and oceans are cool, the Arctic freezes and the Snow is turned off and Ice melts and sublimates and Albedo decreases and the earth warms.
    This is the THERMOSTAT FOR EARTH’S TEMPERATURE!
    The temperature of the earth has been extremely stable for the past ten thousand years. The only thing that is that stable and that has a set point is ICE and WATER. None of the other forcing could have maintained the temperature stability that earth has had during the past ten thousand years.

    • The whole point is that the difference from the last 10000 years is that CO2 is heading towards a value that is twice as high as any time in that period, and that just might have an effect. Especially if you compare it with the last time CO2 had these values, which was more than 20 million years ago and it was distinctly warmer, not by coincidence. You have to look at the paleo evidence.

      • The paleo-evidence says the temperature rose first, followed by the rise of CO2 – not exactly compelling.

      • You are taking a short-sighted look at the ice ages. Earlier CO2 was dropping slowly for millions of years due to natural processes, and it was getting cooler as a result.

      • How about offering us some data to go with that opinion?

      • http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-6-1.html
        You can see how it cools during the Eocene as CO2 decreases. Is this news to you?

      • CO2 increases after temperatures increase historically, according to the record.

      • However, while the starting points were different, both the Eemian and the Holocene have had 100ppm CO2 increases.

        Eemian started at 190 and maxed out at 290ppm

        And CO2 followed delta T

        http://www.ferdinand-engelbeen.be/klimaat/eemian.html

      • During the past 7000 years, CO2 did go up while temperature was stable in a narrow range. The paleo evidence does not show any indication that CO2 is driving temperature. CO2 follows temperature at times and CO2 goes its own way at times, but at no time does CO2 drive temperature. Earth’s Oceans are the hugest Carbonated Drink that anyone could imagine. CO2 in the atmosphere, goes up and down with temperature, just as the fizz in a Carbonated Drink.
        It does not matter how high CO2 goes, if it warms or if it does not. The warmer it gets, more Arctic Sea Ice will be melted and it will snow more.
        This is a very stable system! Look at actual data, not at computer model projections. They have been consistently wrong.

      • How about the Cretaceous when it was 1000 ppm and there was no polar ice? Was that stable too?

      • A stable 80 million year time span? Represented by one CO2 concentration?
        http://en.wikipedia.org/wiki/Cenomanian-Turonian_boundary_event#cite_note-findarticles1-3
        Despite the opening of the S. Atlantic, it was an example of punctuated equilibrium.

      • Exactly, stable but warmer and with higher CO2. It is not as though there is only one stable climate for the earth, and we could be shifting towards a more Cretaceous-like situation.

      • “discrepancies between pCO2 and climate proxies suggest that during the Cretaceous, pCO2 was not the primary control on climate change. Rather than questioning the primary role of the atmospheric CO2, our modelling results allow us to put forward that the atmospheric CO2 is not the whole story and that, owing to the overwhelming effect and interplay between the paleogeography, the water cycle and the seasonal response, the climate system may undergo subtle climatic changes (as the 4 °C global warming simulated here between the Aptian and the Maastrichtian runs). The identified atmospheric feedbacks including changes in planetary albedo, in water vapour distribution and in meridional latent heat transport are all poorly represented in zonal energy balance model as the one used in [7] whereas they appear to be of primary importance when focusing on ancient greenhouse climates. Our study implies that the use of a global relationship between pCO2 and temperature independent of the geography in long time scale carbon cycle model [37] and [38] may induce significant errors.”

        http://ff.org/centers/csspp/library/co2weekly/20060918/20060918_16.html

      • In other words, pCO2 might be driven by temperatures (or by things that affect temperature), rather than being a principal driver of climate change.

      • I would also expect CO2 to follow warming, as there will also be more life, which makes more CO2, with more warming.

      • You just made that up.

      • Yeah, that’s what I meant when I said that’s what I would expect. Show me it’s wrong if you know otherwise.

      • Life has a tendency to sequester carbon below ground causing coal and oil deposits on long time scales, removing it from the atmosphere/ocean system, and therefore acting as a net sink. The fossil fuels return this sequestered carbon to the atmosphere, as can certain volcanic events.

      • But aren’t those long time scales millions of years? The reason I would expect that is because CO2 levels are higher in summer than winter and my understanding is that is due to the increase in vegetation. So there is a kind of homeostasis effect with CO2. More heat and more CO2 and precipitation will lead to more life.

      • CO2 decreases most quickly as plants grow, which is in the summer season and in the daytime. Growing plants are net users of CO2 which is why planting more forests should help keep amounts down.

      • Thus, Dyson’s miracle carbon eating trees.

        I have a problem with his trees. They’ll be beautiful giants. When we need to release the carbon, tree huggers will chain themselves to those majestic giants and cause another so-called LIA.

      • JCH –
        The solution to your dilemma is a match in the forest. Sometimes happens when some fire fighter needs overtime money. Or a controlled burn decides to be uncontrolled. Or a lot of other ways.

        Although if one waits long enough Mama Nature will take care of it via lightning ignition.

      • ok, higher in the winter than the summer. You were right about that.

  41. Jeff, yes, sloppy writing on my part. In my defense it was late at night for me, I was in a hurry to call it a night, and it caught me off guard to read that transient sensitivity was a new concept. Hopefully the intent of my comment managed to make itself known, and if not, that’s fine; we can skip it.