The IPCC Fourth Assessment Report of 2007 (AR4) contained various errors, including the well-publicised overestimate of the speed at which Himalayan glaciers would melt. However, the IPCC’s defenders point out that such errors were inadvertent and inconsequential: they did not undermine the scientific basis of AR4. Here I demonstrate an error in the core scientific report (WGI) that came about through the IPCC’s alteration of a peer-reviewed result. This error is highly consequential, since it involves the only climate-model-independent instrumental evidence cited by the IPCC as to the probability distribution of climate sensitivity, and it substantially increases the apparent risk of high warming from increases in CO_{2} concentration.

In the Working Group 1: The Physical Science Basis Report of AR4 (“AR4:WG1”), various studies deriving estimates of equilibrium climate sensitivity from observational data are cited, and a comparison of the results of many of these studies is shown in Figure 9.20, reproduced below. In most cases, probability density functions (PDFs) of climate sensitivity are given, truncated over the range of 0°C to 10°C and scaled to give a cumulative distribution function (CDF) of 1 at 10°C.

Figure 1. IPCC AR4:WG1 Figure 9.20. [Hegerl et al., 2007]

Of the eight studies for which PDFs are shown, only one – Forster/Gregory 06 [Forster and Gregory, 2006] – is based purely on observational evidence, with no dependence on any climate model simulations. Forster/Gregory 06 regressed changes in net radiative flux imbalance, less net radiative forcing, on changes in the global surface temperature, to obtain a direct measure of the overall climate response or feedback parameter (Y, units Wm^{-2} °C^{-1}). This parameter is the increase in net outgoing radiative flux, adjusted for any change in forcings, for each 1°C rise in the Earth’s mean surface temperature. Forster/Gregory 06 then derived an estimate of equilibrium climate sensitivity (hereafter “climate sensitivity”, with value denoted by S), the rise in surface temperature for a doubling of CO_{2} concentration, using the generally accepted relation S = 3.7/Y °C.

Measuring radiative flux imbalances provides a direct measure of Y, and hence of S, unlike other ways of diagnosing climate sensitivity. The method is largely unaffected by unforced natural variability in surface temperature and uncertainties in ocean heat uptake, and is relatively insensitive to uncertainties in fairly slowly changing forcings such as tropospheric aerosols. The ordinary least squares (OLS) regression approach used will, however, underestimate Y in the presence of fluctuations in surface temperature that do not give rise to changes in net radiative flux fitting the linear model. Such fluctuations in surface temperature may well be caused by autonomous (non-feedback) variations in clouds, acting as a forcing but not modelled as such. The authors gave reasoned arguments for using OLS regression, stating additionally that – since OLS regression is liable to underestimate Y, and hence overestimate S – doing so reinforced the main conclusion of the paper, that climate sensitivity is relatively low.

Forster & Gregory found that their data gave a central estimate for Y of 2.3 Wm^{-2} °C^{-1}, with a 95% confidence range of ± 1.4 Wm^{-2} °C^{-1}. As they stated, this corresponds to S lying between 1.0°C and 4.1°C with 95% certainty – what the IPCC calls ‘extremely likely’ – and to a central (median) estimate for S of 1.6°C.
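The arithmetic of that conversion is simple to verify. A minimal sketch, using the S = 3.7/Y relation quoted above (the variable names are mine; note that the reciprocal relation swaps the upper and lower bounds):

```python
# Convert Forster & Gregory's estimate of the climate feedback parameter
# Y (W m^-2 per degC) into climate sensitivity S (degC) via S = 3.7 / Y.
F_2XCO2 = 3.7                        # forcing from doubled CO2, W m^-2
Y_CENTRAL, Y_HALFWIDTH = 2.3, 1.4    # central estimate and 95% half-width

s_median = F_2XCO2 / Y_CENTRAL                  # ~1.6 degC
s_low = F_2XCO2 / (Y_CENTRAL + Y_HALFWIDTH)     # high Y -> low S: ~1.0 degC
s_high = F_2XCO2 / (Y_CENTRAL - Y_HALFWIDTH)    # low Y -> high S: ~4.1 degC
print(s_low, s_median, s_high)
```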

Forster & Gregory considered all relevant sources of uncertainty, and settled upon the standard assumption that errors in the observable parameters have a normal distribution. Almost all the uncertainty in fact arose from the statistical fitting of the regression line, with only a small contribution from uncertainties in radiative forcing measurements, and very little from errors in the temperature data. That supports use of OLS regression. The Forster/Gregory 06 results were obtained and presented in a form that accurately reflected the characteristics of the data, with error bands and details of error distribution assumptions, so permitting a valid PDF for S to be computed, and compared with the IPCC’s version.

It follows from Forster & Gregory’s method and error distribution assumption that the PDF of Y is symmetrical, and would be normal if a large number of observations existed. Strictly, the PDF of Y follows a t-distribution, which when the number of observations is limited has somewhat fatter tails than a normal distribution. But Forster & Gregory instead used a large number of random simulations to determine the likely uncertainty range, thereby robustly reflecting the actual distribution. The comparisons given below would in any case be much the same whether a t-distribution or a normal distribution were used. From the close match to the IPCC’s graph (see Figure 5, below) achieved using a normal error distribution, it is evident that the IPCC made a normality assumption, so that has also been done here. On that basis, Figure 2 shows what the PDF of Y looks like. The graph has been cut off at a lower limit of Y = 0.2, corresponding to the upper limit of S = 18.5 that the IPCC imposed when transforming the data, as explained below.

Figure 2

Knowing the PDF of Y, the PDF of the climate sensitivity S can readily be computed. It is shown in Figure 3. The x-axis is cut off at S = 10 to match Figure 1. The symmetrical PDF curve for Y gives rise to a very asymmetrical PDF curve for S, due to S having a reciprocal relationship with Y. The PDF shows that S is fairly tightly constrained, with S ‘extremely likely’ to lie in the range of 1.0–4.1°C and ‘likely’ (67% probability) to be between 1.2°C and 2.3°C.
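The change of variables behind this computation can be replicated numerically. A sketch, assuming – as the post does – a normal error distribution for Y, reading the 95% range of ± 1.4 as ± 1.96 standard deviations, and truncating at S = 10°C (the grid resolution is an arbitrary choice of mine):

```python
import math

F = 3.7                        # forcing from doubled CO2, W m^-2
MU, SIGMA = 2.3, 1.4 / 1.96    # normal PDF for Y implied by 2.3 +/- 1.4 (95%)

def pdf_y(y):
    return math.exp(-0.5 * ((y - MU) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

def pdf_s(s):
    # Change of variables: S = F/Y, so f_S(s) = f_Y(F/s) * |dY/dS| = f_Y(F/s) * F/s^2
    return pdf_y(F / s) * F / s ** 2

# Integrate on a grid over (0, 10], renormalise for the truncation at 10,
# and read off quantiles of the truncated distribution.
ds = 0.001
grid = [i * ds for i in range(1, int(10 / ds) + 1)]
dens = [pdf_s(s) for s in grid]
total = sum(dens) * ds

def quantile(p):
    acc = 0.0
    for s, d in zip(grid, dens):
        acc += d * ds / total
        if acc >= p:
            return s

print(quantile(0.025), quantile(0.5), quantile(0.975))  # roughly 1.0, 1.6, 4.1
```

The asymmetry of the resulting density in S, despite the symmetric density in Y, comes entirely from the F/s² Jacobian factor.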

Figure 3

However, as Figure 4 below shows, the IPCC’s Forster/Gregory 06 PDF curve for S (as per Figure 1) is very different from the PDF based on the original results, shown in Figure 3. The IPCC curve is skewed substantially to higher climate sensitivities and has a much fatter tail than the original results curve. At the top of the ‘extremely likely’ range, it gives a 2.5% probability of the sensitivity exceeding 8.6°C, whereas the corresponding figure given in the original study is only 4.1°C. The top of the ‘likely’ range is doubled, from 2.3°C to 4.7°C, and the central (median) estimate is increased from 1.6°C to 2.3°C.

Figure 4

So what did the IPCC do to the original Forster/Gregory 06 results? The key lies in an innocuous-sounding note in AR4:WG1 concerning this study, below Figure 9.20: the IPCC have “transformed to a uniform prior distribution in ECS” (climate sensitivity), in the range 0–18.5°C. By using this prior distribution, the IPCC have imposed a starting assumption, before any data is gathered, that all climate sensitivities between 0°C and 18.5°C are equally likely, with other values ruled out. In doing so, the IPCC are taking a Bayesian statistical approach. Bayes’ theorem implies that the (posterior) probability distribution of an unknown parameter, in the light of data providing information on it, is given by multiplying a prior probability distribution for it by the data-derived likelihood function[i], and then normalizing to give a unit CDF range.
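In grid form, that Bayesian machinery is a one-liner: posterior ∝ prior × likelihood, then normalize. A toy illustration (all the numbers here are invented), showing that a uniform prior leaves the posterior proportional to the likelihood, while a non-uniform prior reshapes it:

```python
# Bayes' theorem on a discrete grid: posterior ∝ prior × likelihood,
# then normalise so the probabilities sum to 1.
def posterior(prior, likelihood):
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

likelihood = [0.05, 0.60, 0.30, 0.05]   # invented data likelihoods
uniform    = [0.25, 0.25, 0.25, 0.25]
skewed     = [0.10, 0.10, 0.30, 0.50]   # prior favouring high values

post_u = posterior(uniform, likelihood)
post_s = posterior(skewed, likelihood)
# With the uniform prior the posterior is just the normalised likelihood;
# the skewed prior pulls probability toward the values it favours.
```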

Forster/Gregory 06 is not a Bayesian study – the form of the PDF for its estimate of Y, and hence for S, follows directly from the regression model used and error distribution assumptions made. It is possible to recast an OLS-regression, normal-error-distribution based study in Bayesian terms, but there is generally little point in doing so since the regression model and error distributions uniquely define the form of the prior distribution appropriate for a Bayesian interpretation. Indeed, Forster & Gregory stated that, since the uncertainties in radiative flux and forcing are proportionally much greater than those in temperature changes, their assumption that errors in these three variables are all normally distributed is approximately equivalent to assuming a uniform prior distribution in Y – implying (correctly) that to be the appropriate prior to use.

Returning to the PDFs concerned, can we be sure what exactly the IPCC actually did? Transforming the PDF for S to a uniform prior distribution in S over 0°C to 18.5°C, and then truncating at 10°C as in Figure 1 (thereby making the 18.5°C upper limit irrelevant), is a simple enough mathematical operation to perform, on the basis that the original PDF effectively assumed a uniform prior in Y. At each value of S, one multiplies the original PDF value by S^{2}, and then rescales so that the CDF is 1 at 10°C. On its own, doing this is not quite sufficient to make the original PDF for S match the IPCC’s version. It results in the dotted green line in Figure 5, which is slightly more peaked than the IPCC’s version, shown dashed in blue. It appears that the IPCC also increased the Forster/Gregory 06 uncertainty range for Y from ± 1.4 to ± 1.51, although there is no mention in AR4:WG1 of doing so. Making that change before transforming the original PDF to a uniform prior in S results in a curve, shown dotted in red in Figure 5, that emulates the IPCC’s version so closely that they are virtually indistinguishable.
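This emulation – widening the 95% range for Y to ± 1.51, multiplying the density at each S by S^{2}, and renormalizing over 0–10°C – can be sketched numerically. Reading the 95% half-widths as 1.96 standard deviations is my assumption; the quantiles that result shift in the directions described above:

```python
import math

F = 3.7  # forcing from doubled CO2, W m^-2

def s_quantile(half_width, weight, p):
    """p-quantile of S = F/Y on a (0, 10] grid, where Y is normal with mean 2.3
    and 95% half-width `half_width` (read as 1.96 sigma), and the density at
    each S is multiplied by weight(S) before renormalising."""
    sigma = half_width / 1.96
    ds = 0.001
    grid = [i * ds for i in range(1, int(10 / ds) + 1)]
    dens = [math.exp(-0.5 * ((F / s - 2.3) / sigma) ** 2) * F / s ** 2 * weight(s)
            for s in grid]
    total = sum(dens)
    acc = 0.0
    for s, d in zip(grid, dens):
        acc += d / total
        if acc >= p:
            return s
    return grid[-1]

median_orig = s_quantile(1.40, lambda s: 1.0, 0.5)      # uniform prior in Y
median_ipcc = s_quantile(1.51, lambda s: s ** 2, 0.5)   # uniform prior in S
p975_ipcc = s_quantile(1.51, lambda s: s ** 2, 0.975)
# The S^2 reweighting pushes the median up and fattens the upper tail
# markedly, consistent with the shifts described in the post.
print(median_orig, median_ipcc, p975_ipcc)
```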

Figure 5

Altering the probability density of S, by imposing a uniform prior distribution in S, also implicitly changes the probability density of Y, as the two parameters are directly (but inversely) related. The probability density of Y that equates to the IPCC’s Figure 9.20 PDF for Forster/Gregory 06 is shown as the pink line in Figure 6, below, with the observationally-derived original for comparison.

Figure 6

The IPCC’s implicit transformation has radically reshaped the observationally-derived PDF for Y, shifting the central estimate of 2.3 to below 1.5, and implying a vastly increased probability of Y being very low (and hence S very high).

Use of a uniform prior distribution with an infinite range is a device in Bayesian statistics intended to reflect a lack of knowledge, prior to information from actual measurement data, and any other information, being incorporated. Such a prior distribution is intended to be used on the basis that it is uninformative and will influence the final probability density little, if at all. And, when used appropriately (here, applied to Y rather than to S), that can indeed be the effect of a very wide, symmetrical uniform prior.

But the IPCC’s use of a uniform prior in S has a quite different effect. A uniform prior distribution in S effectively conveys a strong belief that S is high, and Y low. Far from being an uninformative prior distribution, a uniform distribution in S has a powerful effect on the final PDF – greatly increasing the probability of S being high relative to that of S being low – even when, as here, the measurement data itself provides a relatively well constrained estimate of S. That is due to the shape of the prior, not to the imposition of upper and lower bounds[ii] on S – or, equivalently, on Y. Since the likelihood function is very small at the chosen bounds, they have little effect.

What the ‘uniform prior distribution in S’ transformation effected by the IPCC does is scale the objectively determined probability densities at each value of Y, and at the corresponding reciprocal value of S, by the square of the particular value of S involved. Mathematically, the transformation is equivalent to imposing a prior probability distribution on Y that has the form 1/Y^{2}. So, the possibility of the value of S being around 10 (and therefore Y being 0.37) is given 100 times the weight, relative to its true probability on the basis of a uniform prior in Y, of the possibility of the value of S being around 1 (and therefore Y being 3.7).
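That factor-of-100 weighting follows in one line from the change of variables. A sketch using unnormalised densities (only the ratio matters, so normalisation constants are dropped):

```python
F = 3.7  # forcing from doubled CO2, W m^-2

# A uniform prior in S, mapped back to Y via S = F/Y, becomes an
# (unnormalised) prior density in Y proportional to 1/Y^2:
#   p_Y(y) = p_S(F/y) * |dS/dY| = const * F / y**2
def implied_prior_y(y):
    return F / y ** 2

# S ≈ 10 corresponds to Y = 0.37; S ≈ 1 corresponds to Y = 3.7.
ratio = implied_prior_y(0.37) / implied_prior_y(3.7)
print(ratio)  # 100, up to floating-point rounding
```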

The IPCC did not attempt, in the relevant part of AR4:WG1 (Chapter 9), any justification from statistical theory for restating the results of Forster/Gregory 06 on the basis of a uniform prior in S, nor did it quote any authority for doing so. Nor did the IPCC challenge the Forster/Gregory 06 regression model, analysis of uncertainties or error assumptions. The IPCC simply relied on statements [Frame et al. 2005] that ‘advocate’ – without any justification from statistical theory – sampling a flat prior distribution in whatever is the target of the estimate – in this case, S. In fact, even Frame did not advocate use of a uniform prior distribution in S in a case like Forster/Gregory 06. Nevertheless, the IPCC concluded its discussion of the issue by simply stating that “uniform prior distributions for the target of the estimate [the climate sensitivity S] are used unless otherwise specified”.

The transformation effected by the IPCC, by recasting Forster/Gregory 06 in Bayesian terms and then restating its results using a prior distribution that is inconsistent with the regression model and error distributions used in the study, appears unjustifiable. In the circumstances, the transformed climate sensitivity PDF for Forster/Gregory 06 in the IPCC’s Figure 9.20 can only be seen as distorted and misleading.

The foregoing analysis demonstrates that the PDF for Forster/Gregory 06 in the IPCC’s Figure 9.20 is invalid. But the underlying issue, that Bayesian use of a uniform prior in S conveys a strong belief in climate sensitivity being high, prejudging the observational evidence, applies to almost all of the Figure 9.20 PDFs.

[i] The likelihood function represents, as a function of the parameter involved, the relative probability of the actual observational data being obtained. The prior distribution may represent lack of knowledge as to the value of the parameter, with a view to the posterior PDF reflecting only the information provided by the data. Alternatively, it may represent independent knowledge about the probability distribution of the parameter that the investigator wishes to incorporate in the analysis and to be reflected in the posterior PDF.

[ii] There is logic in imposing bounds: as Y approaches zero climate sensitivity tends to infinity, and S becomes negative (implying an unstable climate system) if Y is less than zero. This situation is unphysical: the relative stability of the climate system over a long period puts some upper bound on S – and hence a lower bound on Y. Similarly, physical considerations imply some upper bound on Y, and hence a lower bound on S. The actual bounds chosen make little difference, provided the observationally-derived likelihood function has by then become very low. The appropriate form of the prior distribution for the Bayesian analysis is more obvious if the bounds are thought of as applying to the likelihood function, rather than to the prior. Equivalently, the bounds may be viewed as giving rise to a separate, uniform but bounded, likelihood function representing independent information, by which the posterior PDF resulting from an unbounded prior is multiplied in a second Bayesian step.

**Biosketch.** Nic Lewis’ academic background is mathematics, with a minor in physics, at Cambridge University (UK). His career has been outside academia. Two or three years ago, he returned to his original scientific and mathematical interests and, being interested in the controversy surrounding AGW, started to learn about climate science. He is co-author of the paper that rebutted the Steig et al. Antarctic temperature reconstruction (Ryan O’Donnell, Nicholas Lewis, Steve McIntyre and Jeff Condon, 2011, Improved methods for PCA-based reconstructions: case study using the Steig et al. (2009) Antarctic temperature reconstruction, Journal of Climate – print version at J.Climate or preprint here).

Even if the foundations are strong you will be there to chip away at them in the hope they will fall down. Because you don’t like the conclusions they support.

Mirror, meet lolwot.

Lolwot, meet mirror.

I was always led to believe that that is how science is supposed to work. Sure did when I was writing my Masters thesis and the pesky experimental data disagreed with my beautiful theory.

Or does climatology work in a different way? Prayer perhaps? Divine inspiration? Sacrifices to Gaia? Or just a nice comforting huddle together to come up with a consensus?

But no actual science. :-(

The entire deceptive fabric is rapidly unraveling,

The climate scandal has rounded the last corner to Reality:

“Truth is victorious, never untruth.”

Mundaka Upanishad 3.1.6; Qur’an 17.85

and numerous other scriptural verses

Thanks, Judith!

Oliver

‘Even if the foundations are strong…’ sounds like you are channeling Gavin Schmidt. Well it takes one to know one.

There is a close relationship between this post and the paper of Annan and Hargreaves discussed here in an earlier thread.

I don’t think that there are objective arguments that can tell, what is the correct prior in Bayesian analysis. Neither is it possible to avoid the difficulty of selecting the prior by declining to use Bayesian approach, because the choice of prior is there, whether it’s acknowledged or not.

After I got over some stupid errors in reading the Annan and Hargreaves article, I certainly prefer priors that are rather uniform in Y to those uniform over a range in S. Thus I basically agree with the message of the post, while considering this agreement to be only a subjective judgment. The Annan–Hargreaves article seems to take the same general view, although they formulate the choice of prior somewhat differently.

I think that there is much valid analysis in Annan & Hargreaves’s papers. I support the scathing criticisms they made of the Frame 2005 paper advocating, in many cases, the use of a uniform prior in S.

The arguments for selecting a prior in Y rather than in S, and then transforming the resulting PDF for Y into a PDF for S, have not been explored by Annan & Hargreaves, so far as I am aware. I would like to see these arguments explored much more fully.

In physical sciences, where an OLS regression model with normally distributed errors is validly used to estimate a slope parameter between two variables with observational data, with errors in the regressor variable contributing only a small part of the total uncertainty, it is usual to accept the uniform prior in the slope parameter (here Y) implied by the regression model. That prior is the appropriate non-informative reference prior in that case.

I agree that they didn’t study the issue thoroughly, but they did discuss the influence of priors. Their proposal of using a Cauchy distribution for the prior in S happens to be closely related to a rather uniform prior in Y, whether they thought about this point or not. It’s closely related because the Cauchy distribution has a 1/S^2 tail, similar to what a uniform distribution in Y produces.
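That tail correspondence is easy to check numerically: the Cauchy density falls off as scale/(π S^2) for large S, i.e. the same 1/S^2 shape that a uniform prior in Y induces on S. A sketch with illustrative location and scale values (3 and 2 are arbitrary choices of mine, not values from Annan & Hargreaves):

```python
import math

def cauchy_pdf(x, loc, scale):
    # Standard Cauchy density, shifted by loc and scaled by scale.
    return 1.0 / (math.pi * scale * (1.0 + ((x - loc) / scale) ** 2))

LOC, SCALE = 3.0, 2.0
# For large s, s^2 * cauchy_pdf(s) -> scale / pi, so the tail behaves
# like (scale/pi) / s^2 — the 1/S^2 shape discussed above.
for s in (100.0, 1000.0, 10000.0):
    print(s, s ** 2 * cauchy_pdf(s, LOC, SCALE))  # approaches 2/pi ≈ 0.6366
```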

I should add that it’s an overstatement to say that there is a clear error in the WG1 report at this point. No principle forbids using that experimental data in the way it was used in WG1. I find the approach highly questionable, but that’s my subjective judgment. The essential importance of the choice of prior should have been emphasized, but I think this importance had not really been widely appreciated, as it’s specifically related to the fact that the climate becomes unstable when the feedback factor reaches +1, and that, on present thinking, all high values of climate sensitivity correspond to a feedback factor not far below +1.

Pekka, one way I thought about the uniform prior was like this: what happens when S=17–18? Waterworld – the polar ice caps are melted and people are forming the last civilizations on rafts. Right? How about the range S=16–17? Still Waterworld, for every interval all the way down to about S=10. Then it’s just people fighting it out in the streets for water, as massive hurricanes wipe us off the coast. If I’ve been hearing correctly.

Then we get down to the conventional IPCC estimate of 3–5°C; still pretty bad, but not nearly as bad as that additional 5–10°C.

But if we go with a uniform prior, aren’t we effectively saying that the “Waterworld scenario” is actually more likely to occur than what the models predict? Because by cutting off S at 18.5, you’re saying that almost half the probability is that S > 9.

So I know I can’t try to think which Hollywood movie is going to be the best emulation of the next century. But somehow it just seems more likely that the planet is set up in a way that avoids falling into a runaway positive feedback scenario. This is how I would want my prior set up.

Pekka:

Contrary to your assertion, a principle forbids using the experimental data in the way it was used in WG1. This principle is the law of non-contradiction.

While the uniform prior is non-informative about the climate sensitivity, it can be shown that infinitely many non-uniform priors are equally non-informative. The proposition that probability density function ‘a’ is the non-informative prior and probability density function ‘b’ is the non-informative prior and so forth is false, by the law of non-contradiction.

Yeah, pretty much shows the IPCC massaging the data for a preferred result. The paper was peer-reviewed but the IPCC method of re-reporting the results was not (except by the IPCC process itself). If you can’t call it an error (technically), it is at least very bad (and biased) practice. It feels like these kinds of leanings saturate this science, even unashamedly in the face of obvious criticism.

@SUT, the normal-looking PDF curve for Y is not at all a reflection of the real Earth climate Y. Besides the fact that models that do a good job over large time spans also suggest otherwise, we can see that the temperature rises have been very large in the last few decades and are due for a pullback. The Earth (perhaps the faster-than-expected melting ice absorbing much heat, not reflected in current temperatures, in reaction to the greenhouse effect), like all systems, eventually speeds up its reaction too far above average levels and then ends up effectively pulling in the reverse direction to slow down the runaway train (i.e. oscillation behavior around an implied average point that we see in many systems, including human systems like the stock market). This leads to the cycles we see in many systems. It seems we are near the top of some cycle in Earth surface temperatures.

Forster/Gregory 2006 measured data not going that far back and focused on this time period where the net increase was very large, so we should expect Y to have been larger than the climate’s actual Y, since Y is a reflection of the rate of growth. As we near a local maximum, we are approaching values of Y that are much higher than average (corresponding to values of S that are much lower). [Note as well that Y is a linear approximation that only reflects the immediate curve in a local region, perhaps even less than 1 year from the central point.]

So, in short, the study was not one to give a good estimate of S for the climate. Now, I don’t know why that study was included, but an accurate reflection of Earth Y would not lead to a normal looking PDF for Y as we naively draw from the very simplified FG2006 study.

I don’t know why the IPCC included the modified FG2006 instead of simply omitting that data altogether, or including it only after a clearer explanation of the transformation.

As for the 0-18.5 range for S, well, FG2006 recognized at the beginning of the report that we have not been able to rule out high values of S. Our S=3 might actually be too low. To quote:

“In 1990 the Intergovernmental Panel on Climate Change (IPCC) suggested a range of 1.5–4.5 K for the global surface equilibrium temperature increase associated with a doubling of CO2 (Houghton et al. 1990). Since then, despite a massive improvement in models and in our understanding of the mechanisms of climate change, the uncertainty in our projections of temperature change has stubbornly refused to narrow (Houghton et al. 2001). …In particular, it has proved extremely difficult to rule out very high values of climate sensitivity using observations (Gregory et al. 2002).”

In any case, just as 0–18.5 for Earth climate (long-term) S could conceivably be an unrealistic prior, it is also an unrealistic prior for Earth climate (long-term) Y. A Y of 18.5, for example, corresponds to an S near 0.2, which is much lower than what the physical models tied to historical data, and believed by most climatologists, predict.

The FG2006 should perhaps have been left out of the IPCC report or a clear transformation of that data (likely through a different paper) should have been undertaken to produce better data. But accepting FG2006 without a transformation to deal with the bias of measuring Y at a time when we are due for (and have been undergoing) a pullback in rate of growth of T would be inaccurate.

@Nic Lewis: Is it possible that there were sufficient weaknesses in the methods, measurements, analysis, etc, employed in Forster/Gregory 06, but that the data itself is deemed useful? The IPCC isn’t based upon one scientific paper. It takes lots of views into account, so that might justify the particular Bayesian approach taken to transform the Forster/Gregory 06 data points.

As an example, if I set out to map the major underwater features of a particular lake by looking carefully from a particular point 50 feet above the surface level from one of the banks and performing naive calculations, I may very well end up with results that will be skewed; however, the data might still be very useful and can be transformed into something much more accurate by applying the proper mathematical transformations with physical justifications.

That said, the “massage” they undertook, while defensible to some extent based on models believed to be reliable and somewhat questionable but normally defensible mathematical considerations, might not have been the soundest approach to have taken although perhaps the best they could achieve from the data and understanding they had at that time.

Rather than saying they were dishonest (especially since they did point out the transformation), you at least understand what they did, have told the public in much clearer terms than they did, and can similarly use the Forster/Gregory 06 raw results to argue for a different sensitivity range.

@Nic Lewis: Let me add that in the future I may try my hand at understanding the details of the FG06 paper and the statistical analysis you cover here. It would be neat to come up with a more refined analysis as a learning exercise.

In the meantime, let me add that back (over a month.. so the details are hazy in my mind today) when I read the wikipedia page on this topic, I concluded that based on the details you give, that you might have assumed the values from the FG06 paper described an implied prior PDF that is a normal distribution (iirc). However, I think an accurate distribution would have 0 probability or very very low values to the left of 0 for S (so it might be more like Poisson dist). And this would imply a Y PDF (prior?) that is not normal and perhaps between what you assumed and what the IPCC assumed. Also, we should be able to calculate approx details of the Y PDF instead of assuming, right, since we have data points? So that appears to be the way to go. And we might want to also consider a data set that goes farther back in history (beyond satellite data).

@Nic, I have a question, and then basically disagree with your view that the IPCC failed to apply justifiable approaches, or was trying to hide its approach. Perhaps you didn’t notice that the IPCC report says quite a lot on this issue.

But first, I realize after rereading your article (and contrary to what I stated in an earlier comment) that you were explicit about the FG06 Y value having a normal distribution.

Now, the question. Did you realize that the IPCC graphs you are looking at are on equilibrium climate sensitivity values, which the FG06 paper specifically stated it was not deriving? The Y they use comes from a particular simple linear model. The S from the IPCC comes from a definition that assumes we know the future of the full behavior of the nonlinear system and can let the 2xCO2 effect run to completion (or absent that ideal value, can approximate it using models and possibly also using various simplified assumptions and calculations to speed how long it takes to calculate the value). Thus, the Y from FG06 and the IPCC S are not related as you stated but only approximately so.

If that equation is used to convert, the IPCC is justified in using some other form of processing as well. They state clearly that they use Bayesian methods, do use a uniform prior, and discuss this in at least various paragraphs in section 9.6.1, acknowledging that certain studies have not used a uniform prior, although it is common to do so.

This is a quote where they acknowledge that some papers use nonuniform priors.

“..Note that uniform prior distributions for ECS, which only require an expert assessment of possible range, generally assign a higher prior belief to high sensitivity than, for example, non-uniform prior distributions that depend more heavily on expert assessments (e.g., Forest et al., 2006).”

As I said in the earlier comment, a normal distribution for Y is not likely accurate at all. Given that this Y is not the Y of the accepted nonlinear models of the earth system and that they clearly state that they perform Bayesian methods using uniform prior on various studies, I see neither bad science nor dishonest science from the IPCC as regards FG06 manipulation. Imperfect, yes, but so is the FG06 paper.

BTW, note that we can’t get S from just measuring values. We may or may not approximate it, but FG06 does have to resort to models as does anyone. The effects of 2xCO2 cannot be measured, as you appear to state*, since we can’t know that the equilibrium has been reached because we don’t and will never fully understand the earth system (which certainly isn’t described fully by the model FG used).

[*]You had said:

“is based purely on observational evidence, with no dependence on any climate model simulations … to obtain a direct measure of the overall climate response or feedback parameter … Measuring radiative flux imbalances provides a direct measure of Y, and hence of S, unlike other ways of diagnosing climate sensitivity.”

>> It appears that the IPCC also increased the Forster/Gregory 06 uncertainty range for Y from ± 1.4 to ± 1.51, although there is no mention in AR4:WG1 of doing so.

Three items (the most applicable is likely the last one below).

First, they state in 9.6.2, “Wigley et al. (1997) pointed out that uncertainties in forcing and response made it impossible to use observed global temperature changes to constrain ECS more tightly than the range explored by climate models at the time (1.5°C to 4.5°C), and particularly the upper end of the range, a conclusion confirmed by subsequent studies.”

So it is possible the IPCC changed the error bars for that reason (I haven’t done the math). Essentially, this would suggest that perhaps FG06 did not treat errors appropriately (maybe one of their assumptions or normal errors was inconsistent with the Wigley ’97 report). On the other hand, fig 9.20 suggests that the original errors were likely fine.

Second, (again I have not done the math), the error change might have been necessary in order to encompass the confidence level the IPCC sought.

Finally, look at what else they had to say. Among the points here is that they renormalized after clipping. Nic, did you carry out that step?

And particularly, note how they reason that below 0 climate sensitivity must be clipped (or adding more co2 would be equal to cooling the system). FG06’s normal dist is unclipped.

“All PDFs shown are based on a uniform prior distribution of ECS and have been rescaled to integrate to unity for all positive sensitivities up to 10°C to enable comparisons of results using different ranges of uniform prior distributions (this affects both median and upper 95th percentiles if original estimates were based on a wider uniform range). Thus, zero prior probability is assumed for sensitivities exceeding 10°C, since many results do not consider those, and for negative sensitivities. Negative climate sensitivity would lead to cooling in response to a positive forcing and is inconsistent with understanding of the energy balance of the system (Stouffer et al., 2000; Gregory et al., 2002a; Lindzen and Giannitsis, 2002).”

So they explain the reasoning and applied certain transformations to everyone if necessary (and they stated if they applied such).

At the bottom of 9.6.2, there are a few lines on FG06, where the IPCC clearly gives all the exact values derived by FG06:

“Some studies have further attempted to use non-uniform prior distributions. ..Forster and Gregory (2006) estimate ECS based on radiation budget data from the ERBE combined with surface temperature observations based on a regression approach, using the observation that there was little change in aerosol forcing over that time. They find a climate feedback parameter of 2.3 ± 1.4 W m–2 °C–1, which corresponds to a 5 to 95% ECS range of 1.0°C to 4.1°C if using a prior distribution that puts more emphasis on lower sensitivities as discussed above, and a wider range if the prior distribution is reformulated so that it is uniform in sensitivity (Table 9.3).”
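As an aside, the arithmetic in that passage is easy to verify. A minimal sketch in Python, reading the ±1.4 as the 5–95% bounds of a normal distribution in Y (an assumption, but one consistent with the quoted ECS range) and using the relation S = 3.7/Y:

```python
from statistics import NormalDist

# AR4 9.6.2 quotes FG06's feedback parameter as Y = 2.3 ± 1.4 W m^-2 °C^-1.
# Assumption for this sketch: ±1.4 marks the 5-95% range of a normal distribution.
y_mean, y_half = 2.3, 1.4
sigma = y_half / NormalDist().inv_cdf(0.95)   # implied standard deviation of Y

# S = 3.7 / Y is monotone decreasing, so Y percentiles map straight across to S:
s_low = 3.7 / (y_mean + y_half)    # 5th percentile of S
s_high = 3.7 / (y_mean - y_half)   # 95th percentile of S
print(round(s_low, 1), round(s_high, 1))   # 1.0 4.1, matching the quoted range
```

So the 1.0°C to 4.1°C range follows directly from the ±1.4 bounds on Y, with no Bayesian machinery needed.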

I would suppose that the authors of the study in question might have some insight as to a potential misrepresentation of their data and findings.

Have they been asked to comment on Lewis’ analysis?

I didn’t ask Forster and Gregory to comment on my analysis in advance. I thought that would put them in a difficult position, as they were Contributing authors for chapter 9 of AR4:WG1 and, presumably, accepted (at least tacitly) the IPCC’s treatment of their results. The thrust of my post is very much consistent with what they wrote in their 2006 paper.

I have drawn the post to their attention.

I’m having trouble with your logic.

You didn’t ask them to comment because you didn’t want to put them in a difficult situation.

So to avoid that, you write this post and then you say that you believe that they tacitly accepted a misuse of their data and findings?

Do you think that stating that you assumed that they tacitly accepted a misuse of their data and findings

does not put them in a difficult situation?

Joshua, I am having trouble with your logic as well. In a nutshell… why do ‘you’ support all of this sloppy science, which has been shown to be the hallmark of the UN efforts & their AGW camp? Is this the type of work that you would be doing if you were a scientist involved in these projects? Would you care enough to be more accurate?

Sorry – I missed this earlier.

I’m not supporting sloppy science. If Nic’s thesis is correct, it seems to me to represent a significant problem re: estimates of sensitivity. If his implication is correct – that this was deliberate misuse of Forster/Gregory 06 for the purpose of deception – then I think that it’s a bigger problem. If his further implication is correct – that not only was this a deliberate misuse of Forster/Gregory 06 for the purpose of deception, but that they tacitly accepted such a deceptive distortion – then it is even more of a problem.

Nothing that I’ve posted here should be construed otherwise.

Really?

Ever wonder why others misunderstand you?

Joshua, it would be unethical for Nic Lewis not to make public his finding on such an important issue.

Directly requesting a comment from Forster and Gregory would have forced them to go on the record. By drawing Forster and Gregory’s attention to the problem, he’s allowed them to respond in their own time, if they wish to.

No disagreement.

They could (1) choose to not respond. Obviously, that would put them in a “difficult situation.” But he has publicly stated that he has called their attention to his analysis. They are in the same “difficult situation” that they would have been in had they declined to respond.

Alternately, they could respond by saying: (2) They disagree with Nic’s analysis, (3) they agree with Nic’s analysis, (4) they agree with Nic’s analysis but think that the error was based in an inadvertent mistake in advanced statistical analysis, (5) they agree with Nic’s analysis and agree with his implication that the mis-use of their data and findings was intentional with the purpose to deceive.

I think that any of those first four options would put the authors in less of a “difficult situation” than they are in now, having been accused, on a public forum, of tacitly approving of an (implied deliberate) mis-use of their data and findings.

Even the fifth option would be less of a difficult situation than they are in now, having been accused of tacitly approving of (an implied deliberate) mis-use of their data and findings.

I don’t find it plausible that Nic was basing his course of action on a desire to keep from putting the authors in a difficult situation.

(5) they agree with Nic’s analysis and agree with his implication that the mis-use of their data and findings was intentional with the purpose to deceive.

There’s no such implication in Nic’s analysis.

He wrote: “In the circumstances, the transformed climate sensitivity PDF for Forster/Gregory 06 in the IPCC’s Figure 9.20 can only be seen as distorted and misleading.”.

A “distorted and misleading” result can be produced unintentionally.

You may be misinterpreting this passage : “[..] Bayesian use of a uniform prior in S conveys a strong belief in climate sensitivity being high, prejudging the observational evidence [..]”

That is a statement about the significance of a uniform prior in S, not about the beliefs or intentions of the IPCC authors.

You may be troubled by the phrase “innocuous sounding note”. Merriam-Webster’s definition no. 1 for “innocuous” is “producing no injury : harmless” (the other definition, no. 2, doesn’t apply). The inappropriate imposition of a uniform prior in S has indeed “harmed” the analysis. The phrase doesn’t signify intent, only that the straightforward simplicity of the note belies the inappropriateness of the procedure.

Joshua, I suggest that these are side-issues.

Joshua,

I agree. There is just no logic to Nic Lewis’s explanation.

If Nic Lewis had a problem, either with F& G’s paper, or the way F& G allowed their paper or data to be used by the IPCC, the proper course of action would have been to raise the matter with them first.

Of course, it would be fair enough to challenge them, later, if their explanation had been inadequate, but to go public with this kind of implied criticism without giving them an opportunity to provide additional background information is a breach of normal procedure, not to say a breach of good manners.

Of course following ‘normal procedure’ and having especial regard to some possible hurt feelings about ‘implied criticisms’ must be our overriding consideration when thinking about climatology.

It’s not as if the subject that Nic Lewis has raised is an important one, is it?

Do you have link or something that explains this “normal procedure”?

Do you have a link that explains good manners? Normal procedure is for Nic Lewis to treat F&G in the same way as he would like to be treated if he were in their position.

If he were to now say that he’d prefer not to be consulted in advance, I do have to say that I just wouldn’t believe him.

Are you suggesting F & G never read the IPCC report and didn’t notice this alteration? Shouldn’t the IPCC lead authors have told them?

If alleged bad manners is the best argument you have then by all means go with it. I wonder if F&G feel as vexed as you.

It’s not just a matter of manners, it’s a matter of integrity.

If Mr. Lewis wishes to be looked at as an honest analyst, then he should try to find out the truth of the questions he raises, and not simply assume the worst of those he dislikes.

Lewis’ major claim is that the IPCC has taken liberties with the authors’ work. The authors themselves were involved in writing the work that Lewis thinks misrepresents them. The rational thing to do before pointing fingers is to ask the authors what happened.

Lewis’ failure to do this suggests he’s afraid of the answer; he doesn’t think the authors themselves are going to echo his outrage. If he thought they would, obviously he would greatly strengthen his case with a quote from the authors on this terrible betrayal.

But of course Lewis wouldn’t want to contact the authors if he thought they might say something like “Yeah, we discussed it, and we decided all the sensitivity studies should use similar statistical assumptions” or “On reflection, the choice we made in 2006 seems unwise to us now” or, worst of all, “We don’t think they should have changed the analysis, but it doesn’t affect our results in any substantial way.”

Lewis got just enough facts to accuse the IPCC of wrongdoing, then carefully avoided learning anything more. In this case manners illuminate character.

Robert, you really have a knack for putting your foot in your mouth. Have you been reading any of the reply posts by Lewis here? He is clearly trying to find the truth. He makes no statement that he dislikes F&G.

tempterrain:

If Nic Lewis had a problem, either with F& G’s paper, or the way F& G allowed their paper or data to be used by the IPCC, the proper course of action would have been to raise the matter with them first.

Nic is only obligated to let them know of the comment on their peer-reviewed work after the publication of his comment, and give them a chance to respond. Sorry but you are making up a criterion for responsible behavior in research that simply doesn’t exist. (Having been the subject of numerous peer-reviewed comments on my own work, as well as having produced a number of my own, I can assure you, the way you describe the process to work is unrelated to how it actually works.)

The idea that we are required go through the authors, before we can publish a critique of theirs (or others) work, is risible.

Carrick,

I’m not sure what field you work in, but in my more genteel field of electronics that is exactly how it’s done! At least in my experience it is – and I’ve never once known any substantial error not to be subsequently corrected, and never once known anyone start mouthing off on the net via some totally uncontrolled blog!

tempterrain, I’m not sure why posting an article on the internet is suddenly “mouthing off”.

I’m in physics (basic and applied sciences). Electronics suggests a very applied field (does it even involve peer-reviewed publications?), so things may get done very differently over there, but in peer-reviewed physics research, getting it published is just the first step… that’s a bit like painting a red target symbol on your work. What you publish with your name on it is fair game for any and all (reasonable) criticism.

The idea we’re supposed to hide the fact we find problems with other people’s work or otherwise paper over their mistakes is just…weird. And it’s very anti- the philosophy of science. If somebody has screwed up, everybody deserves to know that, as well as who caught the error.

Carrick, I don’t think anyone has any problem with aggressive discussion, but only if there are genuine, rather than beat-up, concerns over the integrity of any particular paper.

Robert summarises it all very well in his earlier post writing : “The rational thing to do before pointing fingers is to ask the authors what happened. Lewis’ failure to do this suggests he’s afraid of the answer”

D-Wave’s claim of the world’s first commercially available quantum computer was heavily criticized in many blogs, such as Shtetl-Optimized.

Or were you being sarcastic about electronics being genteel and having superior manners?

Would you mind elaborating on what you mean by tacitly accepted?

Putting your paper out in its format is the appropriate thing; now F & G have the opportunity to analyse, think and reply to your thoughts. If they think you are on the mark, they can say so without demeaning anyone. Reconsideration on the basis of new information or insight is a hallmark of scientific thinking.

One problem with this manner of thinking and rethinking, however, is that the bottom-line reader (like me) wants to know what the impact is if/as you are right. By dropping the non-observational data to supporting argument status, your work dominates. With a climate sensitivity as low as you suggest, what does that do for the IPCC 2100 scenarios? Would the high end become not 4C but 1.5C?

By being professional and courteous and socially sensitive, we can fail to achieve the best part of science – usefulness – and the best part of determining a truth: the communication of it. This is not a complaint about you, but (from my experience as a geologist in private industry) a note that often we put observations out with the implications left for others. It is the implications, not the observations, that are generally useful. The observations are backup.

Very true, Doug, but then the implications get hijacked politically and everyone argues the toss about that rather than the utility of the science that underpins your thesis!

RobB, another good point. Makes it hard to get anything done, doesn’t it? You put the facts on the table AND the implication, and the fight about the implication causes the facts to be ignored or dismissed. You put the facts on the table and leave the implication for the powers-that-be, and either the implications are ignored because they are not supportive of what-is-happening, or the implications ARE understood and your work (and the project) disappears because it is an embarrassment.

Case in point: back in 1977 I was working as a recent geology grad in the field, looking to find a second silver-lead-copper deposit in the Northwest Territories of Canada. We were supposed to follow a glacially dispersed mineral trail near a competitor’s discovery onto our claims. I’d just finished an undergrad glacial course and some field work, and was keen to put my new knowledge into action. Using simple observations of scratches and the “plucking” of rocks on the downside of glacial flow directions, I concluded on my second day in the field that the glaciers moved not towards the competitor’s property but FROM the competitor’s property. Of course the Exploration Manager was excited about the richness of his samples: they were from the place that caused all the fuss in the first place!

That night I went to the EM with my maps, pointed out what was happening, and offered to show the rest of the party how to do these readings. Not only did he not stop our summer’s project, but he took all my readings off my map and told me to stop doing them, as they were unnecessary and the head office wouldn’t understand them anyway. I was young and naive and figured he had his good reasons. Now I know better. He must have lain in his cot that night and wondered how he would get out of the mess.

By the way, the problem was that he was following the assumption that the continental glacier moved in all places in the same way as it does on a large scale. In fact, glaciers move as rivers do: though generally downhill in a given direction, locally they follow the contours of the local topography. In this case, 90° west of the “known” direction. Sort of like global warming: a “general” rise has local cooling, suggesting that, as with glaciers, the LOCAL situation (cloudiness/albedo/ocean currents) is the significant factor, and not an equally distributed, universal force (like CO2).

The only information from the review I can find from either author is here:

“Great read i particularly liked your use of appendices and tables”

[Piers Forster]

Nic, this is an interesting case and I will add it to my list of IPCC misleads.

But it is quite long and technical and really needs a less technical summary of the main point.

PaulM

As I read it, the beauty is that it is based on physical observations, rather than simply model simulations backed by theoretical deliberations or subjective interpretation of questionable paleo-climate data taken over cherry-picked periods of our geological past.

I realize that this may be an oversimplification, but could one simply state that we have physical observations which lead to a much lower 2xCO2 climate sensitivity (mean value 1.6C) than the model-derived estimates used by the IPCC (mean value 3.2C)?

Max

Max, you are summarizing the importance of the F&G study, which is that it is both model independent and shows a relatively low sensitivity. Nic’s point is that the IPCC applied an ad hoc Bayesian procedure that skewed the F&G result strongly to the warm side. Nic’s is an audit result, and a very good one at that. It has that ad hoc “hide the decline” flavor.

Note also the final line: “But the underlying issue, that Bayesian use of a uniform prior in S conveys a strong belief in climate sensitivity being high, prejudging the observational evidence, applies to almost all of the Figure 9.20 PDFs.”

And the justification offered is that it was “considered” this was a good thing to do.

No doubt.

Brian H:

The proposition that “use of a uniform prior in S conveys a strong belief in climate sensitivity being high” is false. Use of a uniform prior in S conveys belief in the proposition that information about the climate sensitivity is absent. The problem of justification results from the fact that an infinite number of non-uniform priors convey the same belief. The existence of more than one equally uninformative prior violates the law of non-contradiction. Thus, when the uniform prior is used as a premise of an argument for CAGW, this premise is false, by the law of non-contradiction. As the premise is false, the argument for CAGW is unproved.

My take on the implication was that any data-constrained set of priors ends up generating a lower sensitivity, and hence the choice of a flat set amounts to biasing the sensitivity high.

The particular flat set chosen is arbitrary, as you say. Which means, I think, that Bayesian analysis cannot be useful here in the absence of informative priors.

But then nothing is useful, as every analysis is equivalent to some Bayesian one.

Terry,

That argument doesn’t help in the least. Whatever you do, you use a prior, knowingly or not. It’s better to know and recognize the fact than to hide it.

Excellent point, I agree 100%

Pekka:

It sounds as though you have a strongly held belief in a false proposition. The proposition is that the subjective Bayesian approach to extracting a model from observational data is the only possibility. That your belief is false is proved by the existence of the alternative which I sketched in the series of articles that were entitled “The Principles of Reasoning” and published in this blog. Under this alternative, conformity to Bayes’ theorem is enforced but all of a model’s prior PDFs are selected objectively.

Terry,

I don’t believe that the argumentation you presented is valid. As far as I remember I did argue so also in that thread.

Pekka:

I don’t believe you’ve falsified any of the claims that are made by “The Principles of Reasoning.”

Pekka:

Your recollection is wrong.

I have written some comments in those threads. You made three lengthy posts, and I certainly didn’t comment very extensively on that scale, but I did comment.

In any case I’m absolutely convinced that it’s not possible to determine which prior is favored without some additional input that determines the measure in the phase space of continuous variables. One has to use external information to get around the unquestionable mathematical fact that the shape of the PDF can be modified arbitrarily by a change of variables, and that mathematics alone cannot favor one set of variables over another.

Collecting more empirical data related to the system may provide information on the appropriate phase-space measure, but we need either a theory of the system or data to restrict the choice of the variables.

When we are looking at a well-defined and theoretically understood physical system, the theory does provide this additional information (quantum mechanics provides many good examples). In the absence of such a full theory, we may still use intuition to tell what is likely to be consistent with the physical nature of the system. My intuition tells me in this case that the distribution in Y should be smooth and without singularity. This requirement is enough to tell that the distribution in S falls like 1/S^2 or faster for large S. All this is, however, subjective intuition, as the explicit and implicit assumptions that influence my intuition have not been formalized and verified.

All my claims on the impossibility of choosing a unique favored prior are based on the above. The more theory we have on the issue the more we might be able to say about the prior, but that is question of knowledge of physical nature. Mathematics alone cannot say anything about the prior.

I second that. It was too technical for me to get the gist of the science, though I got the gist of potential implications.

Three people have written non-technical summaries at Bishop Hill.

There is a one-sentence summary by woodentop, followed by longer contributions from James and Jeremy Harvey. (None of these have yet received official endorsement from Nic!)

OK, those summaries now have the Nic Lewis seal of approval.

Nicholas Lewis-

Thank you for your efforts on this subject.

My only meager suggestion is that you might want to explain the difference between “likelihood” and “confidence” for those not familiar with the difference.

Again, thank you for your efforts!

Thanks for your comment. A quick answer to your query:

A confidence interval is intended to indicate the reliability of an estimate, in terms of the probability that the true value of the parameter being estimated falls below the lower confidence limit, inside the confidence interval, or above its upper limit, as the case may be.

I use likelihood as part of the statistical term “likelihood function”, being a function of the parameters of a statistical model. The Wikipedia definition of the likelihood function is reasonable: the likelihood of a set of parameter values given some observed outcomes is equal to the probability [density] of those observed outcomes given those parameter values.
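To make that Wikipedia-style definition concrete, here is a toy illustration (the data values and the error standard deviation of 0.5 are purely hypothetical):

```python
from statistics import NormalDist

# Hypothetical observations, with a normal error model whose sd is assumed known.
data = [2.1, 2.5, 2.2]

def likelihood(mu, sd=0.5):
    """Likelihood of the parameter mu given the data: the probability density
    of the observed outcomes under that parameter value."""
    d = NormalDist(mu, sd)
    prod = 1.0
    for x in data:
        prod *= d.pdf(x)
    return prod

# The likelihood is a function of the parameter, not a probability distribution
# over it: it need not integrate to 1 in mu. A value near the sample mean
# makes the observed data more probable than a distant one does:
print(likelihood(2.27) > likelihood(1.0))   # True
```

Turning such a likelihood function into a probability distribution for the parameter is exactly where a Bayesian prior enters, which is the point at issue in the post.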

One of the true travesties inflicted on Physics by “Climate Science”. Far from being “extremely likely”, a 95% confidence level, outside the social pseudo-sciences, is barely worth treating as a possible hypothesis, much less a confident conclusion. 3 or 4 9s is about the real starting point for that kind of talk.

Important and clearly-written article – thank you to Nicholas Lewis.

Let me get this straight, The IPCC fudged the only climate sensitivity study based on observations, and the rest are from tautological models?

Oh, my Sainted Aunt. This, climate sensitivity to CO2, is only the single most important number and they play ‘Ring Around the Rosie’ with it?

==============

Just to clarify: Forster/Gregory 06 was the only study using instrumental, observational data featured in IPCC Figure 9.20 that was independent of any climate model. Most of the other studies depended directly or indirectly on simulations using complex AOGCMs. The two instrumental studies with peculiar multi-peaked PDFs in Figure 9.20 used only rather simpler climate models, but IMO had other severe shortcomings, as is probably evident from the strange shapes of their PDFs.

The age of (IPCC) enlightenment. Use the following words in a single sentence…statistics, drunks, lamp posts, support.

And gyptis444, don’t forget lost keys…

“By using this prior distribution, the IPCC have imposed a starting assumption, before any data is gathered, that all climate sensitivities between 0°C and 18.5°C are equally likely, with other values ruled out.”

18 is just as likely as 1? Strange prior.

Insane is a better term than strange.

I prefer about 0.1, myself.

>;)

Note, also, that they de facto exclude negatives. How ’bout a range from -18 to +18? A flat S prior distribution would then have at least minimal justification.

The IPCC assumes 18C is as likely as 1C, but 18.000001C is completely unlikely, as is 0.9999999999.

Why? If 1C is as likely as 2C, 3C, 4C … 8C, then surely 0.999999999999C cannot be completely unlikely. This assumption on the part of the IPCC is a clear error.

Why was it not caught? Because it skewed the results in line with the bias of the authors and thus was not noticed.


The choice of the range matters only if the empirical data doesn’t also exclude higher and lower values with high certainty. Thus it may be acceptable from this point of view or not. In the case of the climate sensitivity the outcome depends on how the empirical data is used. In the IPCC interpretation the combined evidence of the data is restrictive enough to make the results have very little sensitivity to a widening of the range. Thus the problem is not in these limits.

Agreed: as I wrote in endnote [ii] “The actual bounds chosen make little difference, provided the observationally-derived likelihood function has by then become very low. ”

The reason why the IPCC’s use of a uniform prior in S distorts (in the sense of overriding a lot of information in the data) the probability distribution is its shape, not its bounds.
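The shape point can be made concrete with a change-of-variables sketch: since S = 3.7/Y, a uniform prior in S over (0, 18.5] corresponds to a prior in Y proportional to 1/Y², which piles prior weight onto low Y, i.e. high sensitivity. A minimal illustration (not AR4’s actual procedure, just the Jacobian arithmetic):

```python
# Under S = 3.7 / Y, a uniform prior on S over (0, s_max] implies a prior on Y
# proportional to the Jacobian |dS/dY| = 3.7 / Y**2.
def implied_prior_Y(y, s_max=18.5):
    """Density in Y implied by a uniform prior in S on (0, s_max]."""
    s = 3.7 / y
    if not (0.0 < s <= s_max):
        return 0.0
    return (1.0 / s_max) * 3.7 / y ** 2   # uniform density in S times the Jacobian

# The implied prior puts four times the density at Y = 1 as at Y = 2,
# strongly favouring low feedback (high sensitivity):
print(implied_prior_Y(1.0) / implied_prior_Y(2.0))   # 4.0
```

This is why the choice of which variable gets the "uninformative" flat prior is substantive rather than cosmetic: flat in S and flat in Y are very different statements about Y.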

Junkink:

What you’ve said is not quite right, for while a particular value of the equilibrium climate sensitivity possesses a probability density under the IPCC’s model, it does not possess a probability. The entity that possesses a probability is an interval in the equilibrium climate sensitivity, e.g., the interval that is bounded by 2 and 3.

Nick wrote :

“Indeed, Forster & Gregory stated that, since the uncertainties in radiative flux and forcing are proportionally much greater than those in temperature changes, their assumption that errors in these three variables are all normally distributed is approximately equivalent to assuming a uniform prior distribution in Y – implying (correctly) that to be the appropriate prior to use.”

If the IPCC’s assumption of a uniform prior distribution in S is accepted, what does it imply about the distribution of errors in the three variables above in Forster and Gregory? Is such a distribution one that could be picked by a reasonable subjective choice?

Above (July 5, 2011 at 9:11 am) is a reply to Pekka ‘s comment (July 5, 2011 at 8:10 am).

A uniform prior in S would, I believe, be an appropriate reference (uninformative) prior if errors in measurements of the surface temperature were very much greater than combined errors in the measurements of forcings and net radiative balance. That is the opposite of the actual error magnitudes as reported by Forster & Gregory. Therefore IMO it would not be a reasonable subjective choice of prior.

Thank you Nick. My phrasing was a bit ambiguous, and you’ve kindly answered a slightly different (and surely more sensible) question than the one I meant to ask, which was:

If we assume a uniform prior distribution in S (and stay with the OLS regression method), what is the set E of error distributions such that for any D member of E, one could modify the statement of yours I quoted above to state :

.. since the uncertainties in radiative flux and forcing are proportionally much greater than those in temperature changes, their assumption that errors in these three variables are all D-distributed is approximately equivalent to assuming a uniform prior distribution in S ..

Are any of these D error distributions at all sensible? Have these ever been used as error distributions in the scientific literature?

This is proposed as a roundabout way of evaluating the validity of assuming a uniform prior distribution in S, without getting tangled in the Bayesian subjectivity problem (instead we consider the hopefully better-understood subjectivity concerning error distributions).

Thank you, _Nic_ .. my apologies, force of blasted habit.

My understanding is that a uniform prior in S (and hence, equivalently, a 1/Y^2 prior in Y) would be the correct uninformative reference prior (that which has least effect on the posterior PDF) if we stayed with Forster & Gregory’s OLS regression method to estimate Y, if and only if the magnitude of the errors in measurements of the surface temperature were much greater than the combined errors in the measurements of forcings and net radiative balance – the opposite of what Forster & Gregory’s error analysis showed. [In addition Normal error distributions would be required, but I don’t think that there have been any claims that the error distributions are very far from Normal.]

Having least effect on the posterior PDF is not a valid argument as that depends on issues of experimental methodology rather than on the phenomenon being studied.

There may also exist empirical data from different sources and with different statistical properties. Which of them should then define the uninformative prior?

Actually the uninformative prior does not exist, and in a problem like this one, there isn’t anything even approaching that ideal. There are only informative priors of variable kind. Choosing one of them is subjective, but arguments can often be presented to support the choice. Such arguments can in my view be based only on pre-existing general understanding of the phenomenon considered. In this case the general understanding that I prefer is the linearized model of forcing + feedbacks. Nonlinearities are a problem, but in absence of knowledge on them, I pick the linearized feedbacks as the most natural basis for choosing the prior.

Pekka, the approximately uniform prior in Y is, according to F&G, a consequence of a) the choice of an OLS regression type, b) the fact that the combined errors in the measurements of forcings and net radiative balance were very much greater than errors in measurements of the surface temperature, and c) the assumption of normal distributions in the errors of the three observables.

Note that a, b and c aren’t concerned with Bayesian analysis.

In order to justify imposing a uniform prior in S, it seems to me that one would need to challenge one or more of a), b) and c), and replace them with a new regression method and/or ratio of errors and/or choice of error distributions, a consequence of which should be a 1/Y^2 prior in Y.

If one doesn’t do this, one hasn’t justified the imposition, and has committed an error (at least an error of judgement). This is at least how I see it, but I’m not an expert.

What you are describing are issues related to the empirical work, but no feature of this work is allowed to influence the selection of the prior. That’s the whole idea of a prior. The prior describes what we knew about S and Y before this empirical analysis was done, or what we would know without this work.

It’s the IPCC that’s attempting to derive a PDF, so they need to justify their prior. They use a uniform prior for FG06 of 0 deg C to 18.5 deg C. According to this choice, what we knew about S prior to FG06’s analysis was that it was more likely than not that S > 9 deg C. In no way does this actually represent our knowledge before FG06 – at least this is not supported by any empirical (or indeed GCM-based study) that I’m aware of, let alone taken altogether, so I don’t see the choice as being a reasonably debatable point – it seems wholly inappropriate.
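The arithmetic behind that point is simple enough to spell out: under a uniform prior on [0, 18.5] °C, the prior probability mass above 9 °C is just the fraction of the interval above 9.

```python
# Prior probability that S exceeds 9 °C under a uniform prior on [0, 18.5] °C:
s_max, threshold = 18.5, 9.0
p_above = (s_max - threshold) / s_max
print(round(p_above, 3))   # 0.514: S > 9 °C is judged more likely than not, a priori
```

So before any data are consulted, this prior asserts odds slightly better than even that sensitivity exceeds 9 °C.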

As has been noted, James Annan made the above point to the IPCC with some emphasis in the chapter 9 review comments :

Another query: five of the studies (including FG06) have been “transformed to a uniform prior distribution in ECS”. Why have they all been given uniform priors? Once the result of one study (say FG06) is known, it doesn’t seem to make sense to apply a uniform prior to the next study, since we now have new evidence, and the prior applied to the next study should reflect that, shouldn’t it?

oneuniverse,

Your comment brought up in my mind one good and valid reason for drawing Figure 9.20 as it’s drawn, even if the prior used is not considered valid. This reason should, however, have been explained in the report to prevent misleading conclusions.

The figure is drawn using a flat prior for S, which is the variable on the x-axis. That means that each of the curves gives the relative factor that each analysis provides in transforming a prior into a posterior distribution in S. Consequently, where two of the analyses are based on independent data, the values of those two curves can be multiplied to form the combined effect that the two analyses have on the prior. Not all the analyses are independent, but some of them are. If the curves had not been presented in this way, multiplying their values would not be a valid procedure. On this interpretation the flat prior is a tool of presentation rather than a final assumption.

Specifically, some of the curves are not based in any way on the same information that Forster/Gregory 06 uses. (By “not in any way” I mean that no earlier version of the information is used either, directly or indirectly.) We may then consider such a curve as a possible prior for applying the Forster/Gregory 06 results.

Furthermore, we can take into account the influence of an alternative prior by multiplying its representation as a function of S into the distribution obtained by combining all the independent evidence of the curves of Figure 9.20. As the distribution obtained by combining all independent evidence is narrower than any single piece of evidence, the choice of prior will have a lesser influence on the outcome, but it may still have a significant effect.

In light of the above comments, it’s not satisfactory that the figure is described as a comparison of the PDFs. It’s not only a comparison of alternative estimates, but also a compilation of estimates that should be combined.

There is some discussion of the significance of the choice of prior in the main text as well. In this discussion the results of Forster/Gregory 06 are specifically listed based on the prior flat in Y. All in all, however, I do not find the discussion of the influence of priors, or of the effect of combining independent studies, to be satisfactory. There is not much space for any particular topic in IPCC reports, but this one is so central that a more comprehensive discussion would have been justified in the main text too. The supplementary material also skips some important points.
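Pekka’s multiplication argument can be sketched numerically: if each curve is in effect a likelihood presented against a flat prior in S, curves based on independent data can be multiplied pointwise and renormalised, and the combined distribution is narrower than either input. The two likelihood shapes below are invented purely for illustration:

```python
import numpy as np

s = np.linspace(0.0, 10.0, 1001)  # climate sensitivity grid, deg C
ds = s[1] - s[0]

# Invented likelihood curves for two hypothetical independent studies
like_a = np.exp(-0.5 * ((s - 2.5) / 1.5) ** 2)
like_b = np.exp(-0.5 * ((s - 3.0) / 2.5) ** 2)

def to_pdf(curve):
    """Normalise a curve on the S grid so it integrates to 1."""
    return curve / (curve.sum() * ds)

pdf_a = to_pdf(like_a)
pdf_b = to_pdf(like_b)
# Pointwise product, valid only because both curves share the same flat prior
combined = to_pdf(like_a * like_b)

def sd(pdf):
    """Standard deviation of a PDF defined on the S grid."""
    mean = (s * pdf).sum() * ds
    return (((s - mean) ** 2 * pdf).sum() * ds) ** 0.5

# The combined constraint is tighter than either study alone
print(sd(combined), sd(pdf_a), sd(pdf_b))
```

This is also why the presentation matters: had the curves been drawn against different priors, the pointwise product would not be a meaningful combination.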

Why not just show each study in its own graph? I find the spaghetti graphs annoying anyway.

Pekka:

You err when you assert that “Actually the uninformative prior does not exist” and “There are only informative priors.” In the course of the following remarks, I prove that uninformative priors are of infinite number. One of these priors is uniform. The others are non-uniform.

Proof:

Let Y designate a real-valued variable. Let X designate a finite interval in Y. Within X, the probability density of Y is non-zero. Outside X, the probability density of Y is zero.

Let Z designate a partition of X into segments. Each element of Z is of infinitesimal length.

Let x designate the value of Y that is associated with a specified segment in Z. Let Pr(x) designate the probability of x. By definition, the “probability density” that is associated with this segment is Pr(x) divided by the length of this segment. The probabilities of the various segments in Z sum to 1.

Pr(x) is a function that maps the various values of Y within X to the values of the associated probabilities and is an example of a probability distribution. By the definition of “information,” when a constant value is assigned to Pr(x) this assignment is uninformative about the value of Y.

Going forward, I assume that a constant value is assigned to Pr(x) and thus that Pr(x) is uninformative about the value of Y. Two cases are of interest.

In Case 1, the segments in Z are of equal length. In this case, the probability density within X is uniform or “flat.” The associated probability density function is both uninformative about the value of Y and uniform. In Case 2, the segments in Z are not of equal length. In this case, the probability density within X is uninformative about the value of Y and non-uniform.

Partitions of X into segments of infinitesimal length are of infinite number. Thus, uninformative prior probability density functions are of infinite number. Of the infinity of uninformative priors, one is uniform and the remainder are non-uniform.

Q.E.D.

It follows from the above proof that the problem of determining the prior and posterior probability density functions belonging to the equilibrium climate sensitivity lacks a solution. The IPCC purports to have found a solution, but it has found one by the illegitimate method of failing to reveal the existence of the infinity of uninformative, non-uniform priors.

The lack of a solution is generally a barrier to the use of the Bayesian procedure in the estimation of parameters. However, this barrier is overcome in the circumstance that the set of partitions of X containing segments of unequal lengths is empty. This circumstance arises in the case of an experiment consisting of a set of Bernoulli trials. In a single trial, the relative frequency with which the system is observed to be in a particular state will be 0 or 1. In two trials, it will be 0 or 1/2 or 1. In N trials, it will be 0 or 1/N or 2/N or … or 1. The trials partition the interval between 0 and 1 into segments of equal length 1/N. Now let N increase without limit. The “relative frequency” becomes known as the “limiting relative frequency.” The segments are of equal and infinitesimal length. The set of segments that are of unequal length is empty, and thus the uniform prior probability density function is unique. It follows that the problems of determining the prior and posterior probability density functions have solutions.

That these problems have solutions provides a portion of the basis for the entirely logical solution to the problem of extracting a model from observational data that I describe in “The Principles of Reasoning.” As I have just shown, the IPCC’s procedure for extracting a probability density function over the equilibrium climate sensitivity is illogical.

Terry,

We are clearly in semantics again.

You choose your variables and you choose your definition of an uninformative prior. With suitable semantics, your definition uses the same words as some other definition.

For me an uninformative prior is something that does not contain information. Your statement that there are an infinite number of uninformative priors, which actually carry different information, means in my semantics that there are no uninformative priors.

About semantics one can argue forever. Thus I guess it’s better to stop here, when it appears that the reason for the differing views is understood.

Furthermore, I have already stated in several messages that the experimental setup is not a basis for choosing the prior. That makes your additional arguments void for me.

Pekka:

In the case of the limiting relative frequency, the uniform prior is, in fact, unique. Through the use of this fact plus the entropy minimax formalism, it is possible to extract a model from observational data and natural laws without arbitrariness. The arbitrariness is replaced by optimization of the model’s inferences, using their unique measure – the conditional or unconditional entropy – as the optimized quantity. Your belief in the inevitability of arbitrariness seems to be a consequence of a lack of understanding of the potential of this option.

There may be different ways of adding an external constraint to make the outcome unique, but an external constraint, or an externally defined measure, or something equivalent, is always needed.

This seems intuitively obvious, taking into account the very wide freedom offered by possible changes of variables, and I have seen enough statements equivalent to it in high-quality sources to confirm that the intuitive conclusion is indeed true. Making me believe that I’m in error here would require references to really good sources.

I should add that there may be limiting behaviour that is approached when the experiment is repeated more and more times, but the question of interest here is how to infer from empirical data of a historical type from the real world, without the possibility of repeating experiments with identical setups.

Pekka (July 8, 2011 at 12:35 pm):

One can extend logic beyond its deductive branch to its inductive branch by replacing the rule that every proposition has a truth-value with the rule that every proposition has a probability. This produces “probabilistic logic.”

It can be proved that, in the probabilistic logic, an inference has a unique measure. This measure is the (conditional or unconditional) entropy. The entropy of an inference is the missing information in it for a deductive conclusion. The entropy of a deductive inference is nil.

In view of the existence and uniqueness of the measure of an inference, the problem of how to select the inferences made by the extracted model may be solved by an optimization in which the inference of least or (depending upon the type of inference) greatest measure is selected.

In the circumstance that the inference of greatest measure is selected, maximization of the entropy is accomplished under constraints expressing the available information. A consequence of these constraints is that the extracted model precisely expresses the information available in informational resources such as observational data.

The rules under which the entropy minimizing or maximizing inferences are selected are the principles of reasoning for the probabilistic logic. Thus, while these principles are constraints that are external to the observational data, they are not external to the system that is comprised of the observational data plus the probabilistic logic.

As the selected inferences are the product of an optimization, one theoretically expects a model extracted under entropy minimax to outperform. This expectation has been experimentally validated.

You can satisfy yourself on nearly all of the above claims by reading the published literature. There is a bibliography at http://www.knowledgetothemax.com/#Bibliography.

The most essential comment here is that I have been considering inference when there is no possibility of repeated experiments with the same setup.

The second comment is that, as far as I know, entropy maximization requires a well-defined measure, which is not available for a continuous variable without some extra information. In the limit of an infinite number of repeated experiments such a measure may be available, but that is not what is of interest in these problems. The practical problems that I have been discussing disappear with sufficient empirical data. My statements have all been formulated with the present problems in mind, where the extent of empirical data is very limited. Usually I have not even thought that someone could apply the statements to situations of plentiful data.

Pekka (July 9, 2011 at 12:09 pm):

Regarding entropy maximization, as you say it requires a well defined measure. This measure may be called “Shannon’s measure.” Its existence is guaranteed by the content of the probabilistic logic. The entropy is Shannon’s measure of an inference.

Entropy maximization does not solve the problem of the posterior PDF of the equilibrium climate sensitivity (ECS), for as I’ve shown “non-informative” (entropy maximizing) prior PDFs are of infinite number. Faced with this infinity of prior PDFs, a member of the “subjective Bayesian” school selects one of them and justifies this selection as being an expression of his/her subjective belief.

A shortcoming is evident in the reasoning of the subjective Bayesians. This is that for person A, prior PDF a can be the true prior while for person B, prior PDF b can be the true prior. The proposition that “a is the true prior and b is the true prior” is false, by the law of non-contradiction.

A proposition of this type is a premise to the IPCC’s argument for the possibility of a catastrophically high equilibrium climate sensitivity (ECS) and this argument must be regarded as unproved in view of the falsity of this premise. However, in arguing for this possibility, IPCC Working Group I has buried all of the unselected priors from public view through a failure to recognize their existence in AR4. Burying the non-selected priors has created the appearance that the IPCC’s argument for the possibility of a high ECS is true when it is unproved.

A decision to mitigate CO2 emissions is apt to be expensive. Such a decision should have a stronger basis than subterfuge.

In the provision of a stronger basis, it should be recognized that the magnitude of the ECS has several shortcomings as a basis for public policy. One, which I’ve already covered, is that the selection of the prior PDF is arbitrary. Another is that the ECS is not an observable feature of the real world. In view of this latter feature, that the ECS has this or that numerical value will forever remain a matter of conjecture. Thus, debate over the magnitude of the ECS must be recognized as scientific nonsense, and an alternative basis for making public policy on CO2 emissions must be found. In the discovery of such a basis, a crucial step would be to replace the magnitude of the ECS by the magnitudes of various limiting relative frequencies as the subjects of Bayesian inferences, for unlike the ECS, the limiting relative frequency has but a single uninformative prior.

Terry,

Does the Shannon measure have any potential, when we have no possibility of repeated experiments?

Is anything related to entropy maximization of value in the situation that has been described here, where the empirical evidence is as limited as it is and where the uncertainties are of the nature they are in this analysis?

All my comments are about this case and similar cases; they are not about cases where experiments can be repeated arbitrarily many times, or where the physical system consists of a large number of particles controlled by identical constraints, as gas molecules are in a macroscopic volume of gas.

Do you think that defining and determining a unique measure is possible in those situations that are of interest here?

I sent the message before I had read it through, but I think you can understand what I am trying to say.

Pekka (July 10, 2011 at 1:16):

Your question: Does the Shannon measure have any potential when we have no possibility of repeated experiments?

My answer: Yes. One’s inability to repeat experiments is no barrier to the construction of an information-theoretically optimal model. A model of this type provides the maximum possible information to its user about the outcomes of statistical events. The maximum possible information is precisely what is needed by a maker of public policy. In contrast, a probability density function over the equilibrium climate sensitivity conveys no information to a maker of public policy regarding the outcomes of his/her policy decisions. Speculations about the magnitude of the equilibrium climate sensitivity are useless for the purpose of making policy decisions.

Your question: Is anything related to entropy maximization of value in the situation that has been described here, where the empirical evidence is as limited as it is and where the uncertainties are of the nature they are in this analysis?

My answer: Yes. Entropy maximization can be used in the assignment of numerical values to probabilities. For example, it can be used in the assignment of numerical values to the probabilities of the outcomes of statistical events.

Your question: Do you think that defining and determining a unique measure is possible in those situations that are of interest here?

My answer: Yes. You should understand that the probabilistic logic contains only two measures. One is probability. The other is Shannon’s measure.

Terry,

Is your conclusion now that you can obtain significant information by those methods from the set of available data, or is it that you cannot at present, and therefore you don’t know anything?

Terry,

I must add that I have tried to find at least some hints of how the maximum entropy principle or the Shannon measure would provide something useful to the understanding of the problems being considered here, and failed. You have made some rather generic statements, which may well be true in their proper area of application, but I cannot see how they could be of any significance here. Nothing in your further comments addresses my main concern, which I discuss again in the following paragraph.

The lack of a unique natural measure in the space of continuous parameters like climate sensitivity S or feedback strength Y, or any of the infinite number of equivalent functions, is an essential problem with the present amount of empirical data. With increasing data the problem becomes less severe in Bayesian inference, as the relative power of the data grows and the choice of prior affects the outcome less. Other approaches may have different formulations of the same effect, but the conclusions cannot change much as long as each method is used correctly.

Pekka (July 12, 2011 at 11:49 am):

Climatologists have not yet identified the set of statistically independent events that underlies the IPCC’s conjecture regarding the cause of global warming. According to the WMO, climate variables are averages of weather variables over 30 years. Under the WMO’s guideline, the duration of a climate event can be no less than 30 years.

If the events underlying the conjecture were to be identified and if each of these events were to have the minimal duration of 30 years then the HADCRUT3 global temperature time series would provide us with no more than about 5 observed statistically independent events for the combined purposes of building and statistically validating the model to be used for regulatory purposes. This number of events is insufficient to support construction of a model that predicts the outcomes of events with statistical significance. Thus, under the constraints of climatology and the HADCRUT3, we have no information and can have none about the outcomes from policy decisions.

Currently, environmentalists are pushing us into making policy decisions on the basis of no information. It seems to me that many are doing so from a position of statistical naivety. I notice that in blogs such as this one, the topic of the identity of the statistical events underlying the conjecture does not arise unless I introduce it into the discussion.

Going forward, if we stick with climatology and its 30 year averaging period then in order to provide policy makers with information about the outcomes from their policy decisions we need to come up with independent variable and dependent variable time series that are of much greater duration than the HADCRUT3, for 150 observed events is about the bare minimum for a statistically validated model that predicts with statistical significance. Alternatively, climatology can be dumped in favor of meteorology for the purpose of jacking up the event count.
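The arithmetic behind the “about 5 events” and “about 150 events” figures above is simple enough to make explicit; the 1850–2010 span assumed here for HADCRUT3 is an approximation:

```python
# Independent 30-year climate "events" available in the instrumental record
record_years = 2010 - 1850   # approximate span of the HADCRUT3 record, years
event_length = 30            # WMO climatological averaging period, years

events_available = record_years // event_length
print(events_available)      # 5

# Years of data needed for ~150 events at the same 30-year event length
years_needed = 150 * event_length
print(years_needed)          # 4500
```

On these assumptions, a 150-event validation set would require a record some 4,500 years long, which is the point of the suggestion to look for much longer proxy series or to shorten the event definition.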

Wow, Terry. If the WMO says we need to look at 30-year averages to define climate events, then we have barely enough good data to cover a handful of periods. Unless of course we do the more statistically rational thing and look at very long-term geological evidence, which reveals the earth has changed a whole lot in its history over thousands to millions of years.

Why hasn’t anybody been saying much about this before?

Terry,

Your latest message appears to confirm what I have thought all the time: your texts on maximum entropy, the Shannon measure and the supposed solution of the problem of induction are not of any relevance to the present problems.

You are not alone in thinking that the statistical evidence for theories of climate science is weak. I’m not now arguing about that. Actually, I was discussing the problems of using Bayesian inference in practice, when the outcome is so strongly dependent on the choice of prior, but you entered this discussion to say that you have a method that can solve the problem. Now we have learned that this possibility is theoretical and of no value with the present knowledge of climate.

You have claimed that Christensen has solved the problem of induction using the principle of maximum entropy. I cannot, however, find anything on that other than your texts, which are really far from convincing. If the claim were true, it would certainly be very widely known, and I would not have difficulty finding better presentations. Without better evidence I do not really believe that there is any truth in the claim. (I have a strong feeling about the most likely point of error, but your text is not detailed enough to conclude for sure.) Even less do I believe that it would solve those problems with a continuous distribution of possible parameter values that I have emphasized as an obstacle to defining objectively what an uninformed prior is.

Pekka (July 13, 2011 at 2:13 am):

Regarding a solution to the problem of induction, when you say “I cannot, however, find anything on that other than your texts,” it appears to me that you have not bothered to access the peer-reviewed literature that I cited for you via a bibliography. It sounds as though you are saying that you cannot find information when I’ve laid out a path by which this information is available to you, and you have not accessed it. If you have read any of it, what documents have you read, and what, if anything, is wrong with the logic that is expressed? What’s this about?

BLouis79 (July 12, 2011 at 9:57):

“Why hasn’t anybody been saying much about this before?” It appears to me that fraud or incompetency regarding statistical matters is widespread among professional climatologists. To reach general conclusions on the basis of a sample size of less than 6 observed events is simply absurd.

Terry,

Statistical incompetence is generally common amongst non-statisticians, which is why statisticians are usually engaged by serious researchers.

It looks like climatologists are unhappy with climate normals computed as 30-year averages every decade (the WMO standard). They are looking at changing this, perhaps on a parameter-by-parameter basis.

http://journals.ametsoc.org/doi/abs/10.1175/2010BAMS2955.1

For interested readers, the WMO document on climatological normals is here:

http://www.wmo.int/pages/prog/wcp/wcdmp/documents/WCDMPNo61.pdf

via

http://www.wmo.int/pages/prog/wcp/wcdmp/GCDS_1.php

It explains the history and standards and definitions and statistical issues.

Terry,

Your bibliography is not very helpful. Most of the papers are difficult to obtain, and I cannot find anything that would really serve as a starting point. Without a better, easily obtainable presentation, I’m not motivated to make any major effort to find them. (This doesn’t apply to some well-known references, which don’t, however, discuss this theory specifically.) If there is a paper that better describes the basic ideas that are new and go beyond the well-known applications of the theory based on Shannon’s work, and also beyond well-known statistical physics, please tell me what it is.

There is some relationship to the papers of Jaynes, but not to the point that his work would provide support for your most basic claims. In statistical physics such additional input on phase space does exist, and it helps with the problem of continuous variables that I have emphasized in my earlier comments. In statistical physics we are also discussing systems of a really huge number of particles, and it’s even possible to divide the whole system into an extremely large number of subsystems, each of which still contains a huge number of particles (or degrees of freedom).

Your own texts are confusing and vague when the critical points of the argument are approached. When a claim is made that one of the great classical problems has been solved, the ideas of the solution must be expressed in a more approachable way, as every weakness of the exposition is certain to be taken as an indication that an error is hiding in the badly presented details.

The fact that this kind of result of fundamental significance has not been widely recognized in more than 40 years is also a strong indication that more or less everybody else has the same feeling: that the claims are wrong, or at least so misleading that once everything that is not supportable is dropped, what is left is no longer of interest.

In all the above I did not have climate scientists in mind at all, but rather theoretical physicists (like myself) and philosophers of science. Anybody who is capable of presenting well ideas of fundamental interest in this area will be noticed, at least in the present environment. When wider recognition is not observed, it is very strong evidence that either the ideas are wrong or their correct content is not so significant. The third possibility is that there is a correct idea behind them, but it has been presented so poorly that it’s not understood by anybody; however, the lack of a proper presentation is almost always due to bad ideas and reflects gaps in the thinking of the author.

I read that to say that you’re holding back on stating the obvious: the IPCC is deliberately throwing available precision out the window because it would lead in unacceptable directions.

I would just like to emphasize that the method used in Forster and Gregory, by ignoring internal oscillations in clouds etc., will, as they acknowledge, overestimate sensitivity. Thus the mean 1.6 deg C is probably closer to the neutral-climate (no feedback) 1.3 deg C value. Excellent analysis. They were sure there must be a fat tail, so they forced one. In the estimates of sensitivity based on models, the adiabatic lapse rate assumption and the assumption of no cloud change are baked in, and so do not give an independent estimate of sensitivity. And they should know that quite well.

Many thanks. I quite agree with your point about clouds and model-based estimates of sensitivity. I tried to bring out the point about internal cloud oscillations in writing: “The ordinary least squares (OLS) regression approach used will, however, underestimate Y in the presence of fluctuations in surface temperature that do not give rise to changes in net radiative flux fitting the linear model. Such fluctuations in surface temperature may well be caused by autonomous (non-feedback) variations in clouds, acting as a forcing but not modelled as such.”

Yes, I was just adding emphasis.
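The attenuation mechanism in the quoted passage – OLS underestimating Y when part of the temperature variation produces no corresponding flux response – is the classic regression-dilution effect, and it can be demonstrated with synthetic data. Every number below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

Y_true = 2.3  # assumed true feedback parameter, W m^-2 K^-1 (made up)

t_feedback = rng.normal(0.0, 0.15, n)  # temperature variation driving a flux response
t_cloud = rng.normal(0.0, 0.10, n)     # non-feedback (e.g. cloud-driven) variation
t = t_feedback + t_cloud               # observed surface temperature anomaly

# Observed net flux: responds only to the feedback component, plus weather noise
flux = Y_true * t_feedback + rng.normal(0.0, 0.5, n)

slope = np.polyfit(t, flux, 1)[0]  # OLS regression of flux on temperature
print(slope)  # systematically below Y_true: Y is underestimated
```

Since S = 3.7/Y, an attenuated estimate of Y inflates the implied sensitivity, which is the direction of bias both comments describe.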

I think I’ve never heard so loud

The quiet message in a cloud.

=======================

….. and clarity for us non scientific types, Craig. Thanks.

History repeats. Someone with mathematical and statistical expertise – welcome Nicholas Lewis! – and no agenda does what McIntyre and others have done: examine the claims of the IPCC scientists.

It will be very interesting to see what the IPCC comes up with in their 5th assessment and whether they are able to back off from their over-reaching claims in light of both empirical (temperature) data to the contrary and the volume of thoughtful criticism of their assumptions and methodologies.

I sincerely hope so. A fully impartial and non-politically driven analysis of the current state of climate science would be very welcome.

I suspect the IPCC next reports and SPMs will downplay science (since that has given them so much grief) and emphasise instead social and societal topics, perhaps under the chilling headline of ‘sustainability’.

Well, if Pachauri’s “vision” for AR5 (as he articulated it in July 2009) continues to hold sway, I suspect that your suspicions may be very well-founded, John. Indeed, “sustainability” may well be “pervasive” and “overarching”.

Oh, yes, and there’s also the “Cross Cutting Theme” which pertains to Article 2 of the UNFCCC (which is, according to Pachauri, the IPCC’s “main customer”).

My impression to date is that AR5 “data” is already set in concrete, and almost fully baked in its analysis and conclusions, and all that’s being worked on is the weasel-wording. To “communicate better” with the marks.

There are two points that the article makes:

1) That there exists at least one error in the AR4 that is neither inadvertent nor inconsequential.

2) That the likely range of sensitivity to CO2 is doubled by this deliberate and consequential manipulation.

The first indicates that there should be, if anything, more scrutiny and scepticism applied to conclusions that are predicated on statistical manipulation: all reconstructions demonstrating the “unprecedented” nature of recent changes would fall comfortably into that category. The second is a non-trivial alteration to a figure that’s already inflated to an enormous degree, based on unwarranted assumptions and ignoring primary sources of negative feedback.

Every time CO2 sensitivity is put to the knife, it seems to be easily sliced in half.

Hang on – doesn’t the article suggest an improper statistical method has been applied to the ONLY empirical, model-free data on climate sensitivity included in AR4? Further, that this has added significant extra weight to the higher S ranges, bringing the observational results more ‘in line’ with the modelled?

I.e., they’ve fudged the data.

I may have misunderstood something along the way (I’ve literally signed myself up for a statistics course, as my stats are getting a tad rusty), but that’s how I read it.

Inadvertent errors should occur with the same regularity in both the over and under categories. Can anyone recall an inadvertent IPCC error which underestimated the threat of CO2 emissions?

I’m sure they occurred, but were caught and corrected by the eagle-eyed senior editors at the WG2 and 3 levels.

Or SLT.

Nick,

You say: “But the underlying issue, that Bayesian use of a uniform prior in S conveys a strong belief in climate sensitivity being high, prejudging the observational evidence, applies to almost all of the Figure 9.20 PDFs.”

Does that mean that the same procedure has led to the same skewing of the PDF of S in the model-derived results too? Or does the assumed prior of S appear as part of the model rather than part of the post-processing for those other studies?

Tom

That’s a very interesting point – my reading was that this approach had been applied to the empirical results only, to bring them more in line with the modelled. If it was applied uniformly, maybe that gives more justification for why they did it (whether it is the correct application or not is moot on this point; if it was applied uniformly, it moves from ‘deliberate manipulation’ to ‘lazy mistake’).

Tom,

The other studies featured in Figure 9.20 mainly use simpler climate models than AOGCMs – models in which climate sensitivity S can be adjusted to known values using a single parameter. Typically there is also a single parameter controlling the rate of ocean heat uptake, and forcings (from aerosols in particular, this forcing being quite uncertain) can of course readily be adjusted. Estimates of natural variability from an AOGCM provide a critical input in deriving, by comparing temperature estimates from the simple model with observations, a likelihood function for the parameters jointly at each possible combination of parameter settings (and in one or two cases AOGCMs provide surrogates for some of the observational data). A joint PDF in the parameters is then obtained by multiplying the derived likelihood function by the assumed prior for each parameter (usually uniform), and a marginal PDF for S obtained by integrating out the unwanted parameters.

So, the answer to your question is yes: the assumed prior for S appears as part of the post-processing, after the likelihood function has been derived. The IPCC state that all the PDFs in Figure 9.20 were given on the basis of a uniform prior in S. Note that, since the estimate of S will be significantly correlated with the level of forcings and the ocean heat uptake rate assumed, unrealistic priors in those two parameters may also distort (upwards, in practice) the PDF obtained for S.
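The procedure Nic describes – a likelihood over a grid of parameter settings, multiplied by a prior in each parameter, with the nuisance parameter integrated out – can be sketched as follows. The likelihood surface is invented; only the mechanics are meant to match the description:

```python
import numpy as np

s = np.linspace(0.1, 10.0, 200)  # climate sensitivity grid, deg C
k = np.linspace(0.1, 5.0, 100)   # ocean heat uptake parameter grid (arbitrary units)

S, K = np.meshgrid(s, k, indexing="ij")

# Invented likelihood: "warming" data constrain a combination of S and K,
# while "ocean heat" data constrain K itself.
like = (np.exp(-0.5 * ((S - 1.5 - 0.5 * K) / 0.8) ** 2)
        * np.exp(-0.5 * ((K - 1.0) / 0.5) ** 2))

prior_s = np.ones_like(s)  # uniform prior in S -- the choice at issue in the post
prior_k = np.ones_like(k)  # uniform prior in ocean heat uptake

joint = like * prior_s[:, None] * prior_k[None, :]

marginal_s = joint.sum(axis=1)                  # integrate out the uptake parameter
marginal_s /= marginal_s.sum() * (s[1] - s[0])  # normalise to a PDF in S

mode = s[np.argmax(marginal_s)]
print(mode)  # peaks near S = 1.5 + 0.5 * 1.0 = 2.0
```

A non-uniform prior in S would enter at the `prior_s` line; distortions from unrealistic priors on forcing or uptake would propagate into the marginal for S through the S–K correlation in the likelihood, as described above.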

Is this the same as flourishes with the right hand while the left hand palms the pea, and by the left hand while the right hand palms the pea? The two invalid operations seem to be being used to justify each other.

Great Scientific Discoveries with Correct and Irrefutable Evidence Take a Little Time. Politicians Only have Until their Next Election to make a Splash. Government Appointees are Always subject to the Whim of their Patrons and Benefactors who are Always looking for a Splash. Science and Politics are like Oil and Water. Who knew?

Welcome Nic Lewis! You tactfully attributed erroneous manipulation of Forster and Gregory’s data to the IPCC. More specifically, by whom with editorial authority at the IPCC, and why? Why should I believe anything coming out of the IPCC? The authors of the AR4:WG1 report should weigh in on this important criticism of their report. How long will reputable scientists allow their names to be included on IPCC reports with such obviously politically biased editing?

Brilliant job, Nic Lewis. This is a sensational discovery. To see that the only empirical and observed evidence of climate sensitivity was tampered with in IPCC WG1 shows how perverted that organisation has become and how twisted its adherents are.

I also request all serious posters here to keep this issue on topic and not to allow trolls to sidetrack or hijack the thread by diverting it from the main topic. Already one troll has made a couple of attempts to sidetrack it with a Himalayan glaciers comment, and also by taking Nic to task for not asking Forster and Gregory’s permission before posting, etc.

There will be many such attempts to hijack the thread by the usual 6 or so trolls. Don’t feed these trolls and keep the thread on topic.

People interested in the debate during review over this can read the review comments (James Annan) and responses here:

http://pds.lib.harvard.edu/pds/view/7787808?n=92&imagesize=1200&jp2Res=0.25&printThumbnails=no

Thanks gryp, exactly what I was wondering about.

Steve McIntyre has now uploaded a more convenient PDF version, with ‘uniform prior’ the suggested search term.

Nic:

Can you provide some insight into what prompted you to explore the original IPCC figure and the Forster-Gregory article?

You have produced an excellent and clearly articulated analysis by the way.

Bernie:

I attended a Climate Change Question Time event in London late last year, at which many of the important UK players on the subject spoke. Vicky Pope, the head of Climate Change Advice at the UK Met Office, stated that the possibility of climate sensitivity being under 1.5 deg.C had effectively been ruled out. I subsequently emailed her to ask what her basis for saying this was, thinking that there might be some definitive new study of which I was unaware.

Vicky Pope replied simply that “The AR4 concluded that ECS is very likely larger than 1.5C based on the evidence provided in Box 10.2 of AR4”. Box 10.2 reproduces Figure 9.20 so far as observational evidence goes, so I got hold of all the papers whose PDFs featured in Figure 9.20 and analysed them. The Forster/Gregory 06 paper seemed the best of the bunch (most of the others have severe shortcomings, IMO), but then I noticed that the IPCC had changed the basis on which its sensitivity estimates were calculated. Hence this article.

Bravo. That’s worth a book, not just an article, Nic.

It would be interesting to hear what Vicky Pope has to say about your post, Nic.

Nic:

Thanks for your response. It says a lot about the acuity of Vicky Pope. The superficiality of the reasoning around such a central scientific issue is astonishing. James Annan’s review comments and critique of this section of AR4 seem to have been largely ignored. His comments, IMHO, read like a red flag: I still do not understand some of the “notes” made in response to his comments. Have you figured them out?

Richard

Thanks. But too technical for a book, I fear.

RobB

I have emailed Vicky Pope to alert her to the post.

Bernie

Yes, James Annan (along with Hargreaves) has made a number of brave attempts to stand up against the ‘uniform prior in S’ basis – unfortunately they seem to have been pretty much alone in this until recently.

Interestingly, one of Frame’s co-authors, Myles Allen, seems now to have abandoned the Frame 05 advocacy of using a uniform prior in S when estimating S (“Quantifying and communicating uncertainty in climate prediction” lecture at Oslo conference, 2010). He points out the advantages of using reference priors, such as the Jeffreys prior, to maximise the distance between the prior and the posterior (minimising the influence of the prior relative to that of the data). However, he then backs off, saying “Reference priors like Jeffreys’ (conditioned on the likelihood) help reduce arbitrariness, but do not incorporate all the data that subjectivists would like to incorporate.” This is the basic problem that I see: mainstream IPCC-supporting climate scientists do not want to let the data speak for itself, they want to incorporate their own personal beliefs and biases (which are likely to be strongly influenced by the consensus, IPCC, interpretation).

I haven’t studied the Notes giving the responses to Annan’s AR4 review comments in detail, but I do note that without his intervention the IPCC would have totally excluded the Forster/Gregory 06 study from Figure 9.20!

The innocuous-sounding change introduced in AR4 and reported by Nicholas Lewis reflects the more general methodological guidelines under which IPCC reports tend to function: “If reality does not fit the model, ignore reality. If the model does not fit the policy agenda, tinker with the model till it fits. And do not let the suckers notice what you did.”

Truman’s law: If you can’t convince them, confuse them.

In this case I think they confused themselves too.

There is always the old standby: call skeptics names, talk about how they are generally less qualified (linking the bought IPCC consensus and ancient leftist folklore), talk about the many other tons of abstract studies from the pro-warmist community.

Stay off real science discussions while always accusing dissent of being non-science people.

Our friends in the media and government will accept this.

Excellent analysis, Nic. I think your presentation of the radical transformation of the PDF for Y shows that the IPCC was too casual in applying Frame et al to Forster and Gregory. Grypo’s link to the review comments is interesting in this context as well. Hopefully Annan will drop in on the discussion.

“Of the eight studies for which PDFs are shown, only one – Forster/Gregory 06 [Forster and Gregory, 2006] – is based purely on observational evidence, with no dependence on any climate model simulations.”

Why is there only one study which does not rely on computer model simulations?

If climate sensitivity is such a critical issue for CAGW, why is Nic Lewis (not a climate scientist) the first to point this out? After how many years?

I guess if the train is leaving the station, you either get on board or get left behind. And this railroading was moving full speed ahead.

It’s wrong to speculate how the climate scientists (and WG1 editors) Forster and Gregory felt about this at the time – and feel about it now that Nic, an outsider, has published this exposé. But it sure makes for an interesting question.

If anyone involved in climate science had the first clue about quality control, this would have been checked, rechecked, audited, replicated, and argued about on a wide scale. Climate sensitivity is the biggest wild card and the highest trump in the deck of CAGW cards. When normal people are involved in a project which has absolutely massive implications for their organization, their state, their country, the world, they usually realize the importance of testing and checking the critical parts of their work. Apparently climate scientists aren’t normal people. Because it is becoming increasingly clear that nobody ever checks anything, ever.

Thank God for non-scientists with the ability and fortitude to pick up the ball the recklessly irresponsible scientists keep fumbling.

Oh, I think they understand the power of “checking and re-checking” quite well. And it puts the fear of death into them.

Nicholas Lewis says: “This error is highly consequential, since it involves the only instrumental evidence that is climate-model independent ……”

Maybe even pivotal? Thanks for your analysis.

I did an analysis similar to that of Forster and Gregory several years ago using ISCCP satellite data and based on the analysis of Zhang et al., 2004. For the basic slope data I derived a feedback (Y in this notation) of 2.2 +/- 0.54 W/m2/deg C. The central estimate was approximately the same as Forster and Gregory’s, but the uncertainty a little smaller because more data points were available with the ISCCP data. I also made an approximate correction for the effect of correlated noise caused by random fluctuations in cloud cover, atmospheric water vapor, and ocean heat re-distribution. This increased the value of Y to approximately 3.1, leading to an expected value for climate sensitivity of approximately 1.2 deg C with 5 and 95% probability levels of 0.9 and 1.9 deg C. A separate analysis based on observed differential heating rates between the NH and SH gives essentially the same answer. I have little doubt that IPCC estimates of climate sensitivity are far too high.
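
As an illustration of the last step of such an analysis, converting a feedback estimate Y into a sensitivity distribution via S = 3.7/Y is easy to sketch with a Monte Carlo. The sketch below uses the basic-slope figures quoted above (2.2 +/- 0.54 W/m2/deg C) and treats the +/- as a one-sigma Gaussian, which is my assumption since the comment does not state a confidence level; the point is only that a symmetric uncertainty in Y maps to a right-skewed distribution in S.

```python
import numpy as np

# Monte Carlo of S = 3.7/Y for Y = 2.2 +/- 0.54 W/m2/degC, with the
# +/- treated as one standard deviation of a Gaussian (an assumption).
rng = np.random.default_rng(0)
y = rng.normal(2.2, 0.54, size=1_000_000)  # feedback parameter samples
y = y[y > 0]                               # drop the few unphysical draws
s = 3.7 / y                                # climate sensitivity, deg C

lo, med, hi = np.percentile(s, [2.5, 50, 97.5])
print(f"S: 2.5% = {lo:.2f}, median = {med:.2f}, 97.5% = {hi:.2f}")
```

The reciprocal relation stretches the upper tail of S much more than the lower one, which is why choices about how uncertainty in Y is represented (and what prior is applied) have such an outsized effect on high-sensitivity probabilities.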

A link to a publication or web post of your analysis would be appreciated by many.

Craig,

I have a copy of a paper that I wrote on the subject, but which was rejected for publication by JGR Atmospheres. I am unfamiliar with how I might post it on the web but will be happy to do so if I can find out how. I can also send you a copy by e-mail if I can get an address.

cloehle at ncasi dot org

Craig, could you get Judy to post it here? I would love to see Keith’s paper and I doubt I am alone…

erans99-2 at yahoo dot co dot uk will reach me. A copy of the review comments would also be of interest.

Keith,

If you wouldn’t mind: Jeffid1 at gmail dot com

“I have a copy of a paper that I wrote on the subject, but which was rejected for publication by JGR Atmospheres.”

Another bit of perception management that will come back to haunt “them,” now that it’s been exposed.

I also would much appreciate a link to your analysis.

Nic, he can’t provide a link because it’s not on the Web yet.

Keith; have you considered Google Docs? Anyone can make any paper or spreadsheet etc. openly available there.

Brian H,

I was not aware of Google Docs, but I’ll give it a try. Thanks.

A high climate sensitivity is crucial to the CAGW hypothesis; low sensitivity removes at least the “C” (for Catastrophic), and even if the “A” remains, it’s not such a big deal any more. This leads to a point that has not yet been brought out – the choice of regression type. Forster and Gregory provide two objective reasons for choosing OLS, but also add a third one: that OLS produces the highest estimates of climate sensitivity and therefore could not be attacked as possibly biased toward low sensitivity. All other choices of regression produce higher slopes and therefore lower sensitivity. And there may be good objective reasons (such as larger errors in the x-variable than thought by F&G) for choosing orthogonal or reduced major axis regression, or the geometric mean of the two OLS (X/Y and Y/X) approaches. I would be interested in what estimates of sensitivity result from using one or another of these approaches. Would either Nicholas Lewis or one of F&G be able to supply the answer(s)?
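
Pending an answer from someone with the actual series, the direction of the effect is easy to demonstrate on invented numbers. With error in the x-variable, OLS of y on x biases the slope Y low (and hence S = 3.7/Y high), the reverse regression biases it high, and the geometric-mean (reduced major axis) slope lies in between:

```python
import numpy as np

# Hypothetical data: only the qualitative ordering of the slopes matters.
rng = np.random.default_rng(1)
true_slope = 2.3                                        # "true" Y, W/m2/degC
x_true = rng.normal(0.0, 0.15, size=2000)               # temperature signal
x = x_true + rng.normal(0.0, 0.10, size=x_true.size)    # x observed with error
y = true_slope * x_true + rng.normal(0.0, 0.3, size=x_true.size)

def ols_slope(a, b):
    """Slope of an OLS regression of b on a (both centred)."""
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / (a @ a)

b_yx = ols_slope(x, y)            # ordinary OLS: attenuated (biased low)
b_xy = 1.0 / ols_slope(y, x)      # reverse OLS, inverted: biased high
b_gm = np.sign(b_yx) * np.sqrt(b_yx * b_xy)   # geometric-mean / RMA slope

print(b_yx, b_gm, b_xy)           # ordering: b_yx < b_gm < b_xy
```

A lower slope Y means a higher sensitivity S, so the ordinary OLS choice is indeed the one that yields the highest sensitivity estimates; what the Forster/Gregory data themselves would give under the other estimators needs their actual series and error budgets.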

Perhaps it’s worthwhile to say this with emphasis:

The Forster/Gregory 06 results are used in Figure 9.20 of the AR4 WG1 report. The text states that all PDFs in this figure are presented on the basis of a uniform prior distribution of ECS. As this is stated explicitly, there is no error in the WG1 presentation. One may argue that the prior is not well justified, and I hold that opinion, but there is no error.

Oh come on. If the prior is not well justified then the presentation is in error. If Forster/Gregory 06 was the only analysis not based on software models, and its original (and well justified) results are nowhere else presented, but only this unjustified version, then the WG1 presentation of the literature on climate sensitivity is colossally in error.

Differing judgment is not an error. The experimental analysis does not tell us what the correct prior is. The report stated explicitly what prior was used. The fact that Forster and Gregory didn’t explicitly introduce any prior is irrelevant, as a prior shouldn’t be introduced at that stage, but when the results are used. That’s what was done in the WG1 report.

As I wrote in a message further up, the great effect that the selection of prior has in this case should be emphasized. In this respect the WG1 report might be criticized, but even this fault is not really something to be called an error.

“but even this fault is not really something to be called an error.”

Call it what you like: fault, error, boo boo, slip up, analyst ‘judgment’, etc., etc.

Sensitivity is THE MOST important issue. The details behind the calculations need to be as transparent and open as possible. Choices made by analysts need to be clearly defined and defended.

Let’s see… an engineering-grade report on climate sensitivity… where have I heard the request for that before?

Thanks Steve. Boo boo it is, from now on, for me!

Boo boo seems a bit generous to me. Whilst I am no statistician, it does seem to me to be a ‘misrepresentation’ of the original work.

Let’s try “poor” representation.

Let’s try to give the authors the benefit of the doubt. Although the commenters clearly pointed out the problems…..

Let’s not use any emotionally tinged words that impugn the authors, because that will just lead to a fight over personality.

Why do you start here?

When you say “the authors,” do you mean of the IPCC report? Is implying that they misused the data and findings of Forster/Gregory for the purpose of deceiving not impugning them? If Nic is drawing a distinction here – and saying that he doesn’t think the errors he identified were intentional, or even that he isn’t implying that they were intentional – then I stand corrected.

Otherwise, it seems a bit late to stop the impugning. In which case it would be a bit late to worry about the fight over personality.

It seems to me that if the concern is preventing fights over personality – there are better ways to go about this process.

I don’t see Nic saying that the errors were intentional. Read his article again.

quote the part in his article where you think he says CLEARLY

“these mistakes were Intentional”

Here is what I read:

“Returning to the PDFs concerned, can we be sure what exactly the IPCC actually did? Transforming the PDF for S to a uniform 0°C to 18.5°C prior distribution in S, and then truncating at 10°C as in Figure 1 (thereby making the 18.5°C upper limit irrelevant), is a simple enough mathematical operation to perform, on the basis that the original PDF effectively assumed a uniform prior in Y. At each value of S, one multiplies the original PDF value by S², and then rescales so that the CDF is 1 at 10°C. On its own, doing this is not quite sufficient to make the original PDF for S match the IPCC’s version. It results in the dotted green line in Figure 5, which is slightly more peaked than the IPCC’s version, shown dashed in blue. It appears that the IPCC also increased the Forster/Gregory 06 uncertainty range for Y from ± 1.4 to ± 1.51, although there is no mention in AR4:WG1 of doing so.”

see the language of Uncertainty that Nic uses.

he comes a BIT closer to calling the act INTENTIONAL when he writes the following:

“The IPCC did not attempt, in the relevant part of AR4:WG1 (Chapter 9), any justification from statistical theory, or quote authority for, restating the results of Forster/Gregory 06 on the basis of a uniform prior in S. Nor did the IPCC challenge the Forster/Gregory 06 regression model, analysis of uncertainties or error assumptions. The IPCC simply relied on statements [Frame et al. 2005] that ‘advocate’ – without any justification from statistical theory – sampling a flat prior distribution in whatever is the target of the estimate – in this case, S. In fact, even Frame did not advocate use of a prior uniform distribution in S in a case like Forster/Gregory 06. Nevertheless, the IPCC concluded its discussion of the issue by simply stating that “uniform prior distributions for the target of the estimate [the climate sensitivity S] are used unless otherwise specified”.

The transformation effected by the IPCC, by recasting Forster/Gregory 06 in Bayesian terms and then restating its results using a prior distribution that is inconsistent with the regression model and error distributions used in the study, appears unjustifiable. In the circumstances, the transformed climate sensitivity PDF for Forster/Gregory 06 in the IPCC’s Figure 9.20 can only be seen as distorted and misleading.”

However, note that I can call a graph misleading WITHOUT implying that you intentionally misled me.

I don’t see Nic impugning the authors. I see comments about a math procedure that leads to a distorted graphic.
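
For anyone who wants to see the arithmetic of the quoted procedure rather than argue motive, here is a numerical sketch. It takes the PDF of S implied by a symmetric Gaussian estimate of Y (a central value of 2.3 W/m2/°C, as implied by the study’s 1.0 to 4.1°C range, with the ±1.4 95% uncertainty mentioned in the quote), multiplies it by S² to impose a uniform prior in S, and renormalizes over 0 to 10°C:

```python
import numpy as np

# PDF of S = 3.7/Y for Gaussian Y (2.3 +/- 1.4 at 95%), then the
# reweighting described in the quote: multiply by S^2, renormalize.
s = np.linspace(0.05, 10.0, 4000)
ds = s[1] - s[0]
mu, sigma = 2.3, 1.4 / 1.96

# Change of variables: p(S) = N(3.7/S; mu, sigma) * 3.7/S^2
p_orig = np.exp(-0.5 * ((3.7 / s - mu) / sigma) ** 2) * 3.7 / s**2
p_orig /= p_orig.sum() * ds

p_ipcc = p_orig * s**2            # uniform-in-Y prior -> uniform-in-S prior
p_ipcc /= p_ipcc.sum() * ds       # renormalize so the CDF is 1 at 10 C

def pctile(p, q):
    """Grid quantile of a normalized density p over the grid s."""
    cdf = np.cumsum(p) / np.sum(p)
    return s[np.searchsorted(cdf, q)]

print("95% point before:", round(pctile(p_orig, 0.95), 2))   # ~3.2 C
print("95% point after: ", round(pctile(p_ipcc, 0.95), 2))   # ~7 C
```

The reweighting shifts the peak only modestly but drags the 95% point from roughly 3°C to roughly 7°C; that stretching of the upper tail is the distortion at issue, whatever one concludes about intent.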

I’m going to re-post that with an attempt to reformat it to be less confusing:

I find it interesting that there are quite a few other commenters who believe that there was an implication in Nic’s post of deliberate misuse of the data, yet my interpretation of his implications seems to be uniquely incorrect. Here are just a few from among the first group of posts.

There are two points that the article makes:

and

Yes, Joe, Nicholas Lewis has discovered in climate science what I first encountered in space science in 1972 – manipulation of observational data in order to promote a particular point of view.

and

Spot on, Joe.

Day by day, every bit of ugly manipulation by the coterie of few keeps coming to light. This farce has to be exposed and chipped away bit by bit till its very foundations are razed.

and

Let me get this straight, The IPCC fudged the only climate sensitivity study based on observations, and the rest are from tautological models?

Beyond that (less importantly), there is a long line of posts where it isn’t necessarily clear that they think Nic implies deliberate misuse of data, but there is a clear belief that Nic’s article justifies a view that the data was deliberately misused:

Now Nic can’t be held responsible, necessarily, for whether other people (including me) think that his article documents deliberate misuse of data – but if he wished to avoid such an outcome, and to limit the extent to which the arguments become overwhelmed by a “fight over personality,” it would seem that he could easily take steps to do so.

I will repeat a post I wrote to Nic much earlier in the day (which remained unanswered, quite possibly due to the nesting problems with this blog, or simply his interpretation that the questions don’t merit an answer) – with a slight edit for clarity:

Benefit of the doubt is the #1 resource for scam artists and bullies. The unwillingness of the civilized to make a fuss is what they count on.

In “Iterated Prisoner’s Dilemma”, the by-far winningest strategy is “Tit for Tat”. You get the benefit of the doubt ONCE, but from then on each time you get what you gave last time.

The IPCC and its authors and editors have long since used up their “Once”, and routinely accuse skeptics of being “anti-science” and ignorant.

No more Mr. Nice Guy. Tit-for-tat.

Brian, you are exactly right on this. When people put their work out in a public forum, especially peer-reviewed literature, we are under no obligation to take into account subjective issues, like whether the author’s fragile feelings were hurt by our comments.

It is also certainly not necessary, and in some cases, not even appropriate to discuss your criticisms with the author before publishing them—that leads to a nepotistic feedback loop that runs counter to the best interest of the science.

If what Nic has said was wrong and the authors can demonstrate this, I’m sure Judith will give them a forum for their response here. That extending this opportunity runs counter to the way criticism of others’ work gets done on Open Mind, Climate Progress and Real Climate should be noted as well.

Joshua is proposing a set of etiquette rules that simply don’t exist either in principle or in practice. There has been plenty written on the topic of responsible conduct in research; the fact that Joshua was unable, when challenged, to produce a link backing up his overwrought concerns about appropriate behavior speaks volumes.

Differing Judgment? As in the judgment of whether it’s right or wrong to grossly transform the results of peer-reviewed science so AR4 can say what you want it to say? Is that the type of judgment you want to consider? Are you claiming they didn’t realize the implications of what they were doing, or that their judgment told them it was okay to make these types of alterations without regard?

You get the dead parrot award;

“I’ll tell you what’s wrong with it, my lad. ‘E’s dead, that’s what’s wrong with it!

No, no, ‘e’s uh,…he’s resting.

Look, matey, I know a dead parrot when I see one, and I’m looking at one right now.

No no he’s not dead, he’s, he’s restin’! Remarkable bird, the Norwegian Blue, idn’it, ay? Beautiful plumage!

The plumage don’t enter into it. It’s stone dead.

Nononono, no, no! ‘E’s resting! “

Grypo has a long message below referring to the supplementary material of the WG1 report. That text explains the problem. I give my own view in a comment on that message.

As I comment above: There is no way around the fact that judgment is always involved in interpreting empirical results.

The role of the judgment is reduced by additional empirical results, but it can never be fully eliminated.

Yes, empirical evidence requires some judgement.

This is not that; they took a journal article purporting to show one thing, mathematically butchered it (without bothering to get comment from the original author) to get it to say what they needed it to say. They included it by reference in AR4, without mentioning the contortions required to get the result they wanted and then pretended AR4 was okay, because WG1 wasn’t included in the problems of the other groups.

That isn’t judgement, That is criminal intent. If they had committed murder, this evidence would suffice to raise it to 1st degree.

Judgment does not extend to committing fraudulent acts. To characterize this as a question of judgment over how to interpret raw data is akin to questioning whether OJ Simpson should’ve used a sharper knife. It kind of misses the real question.

Except that they didn’t do any of that.

We have already learned from them that the original authors were contacted (and were also contributing authors of the chapter), and that they accepted what was done. They didn’t necessarily have everything represented exactly as they would have preferred, but that’s normal for a review.

The steps are also described briefly in the main text in the paragraph that I copied very recently to this thread.

My judgment of the best prior and of some details of the presentation differs from what the report contains, but that’s also normal.

Given that we are told AGW is the most important thing ever, and that the IPCC claims to present the leading science in the area, is it not only fair that we should expect it to reach the academic standard demanded of an undergraduate writing an assignment?

Yep; we should damned well demand it!!!

;)

Yes, but another case of misrepresentation mired in a statistical morass. Intentional? No one will likely ever know; probably not. This is just another item that gives those who believe (assume?) that the IPCC’s science is not balanced more justification for their thinking.

Another case of this type of actual results manipulation – too likely to fit an agenda – was RPJ’s case of weather-related disaster losses and the “mystery” graph:

http://rogerpielkejr.blogspot.com/2010/02/ipcc-mystery-graph-solved.html

I expect this discussion to get derailed away from the math to the motive.

Pekka,

It is not an error, it is simply a deception.

I don’t think Nic’s article quite makes clear the full extent of the IPCC’s exaggeration here.

Forster and Gregory say that the sensitivity to CO2 doubling is 1.0 – 4.1C.

Now look at the caption to IPCC fig 9.20: “The bars show the respective 5 to 95% ranges,”. The grey bar in fig 9.20 goes from about 1.2 to 7.9C. So the top end of the range has almost been doubled.

Actually, it’s worse than that. The IPCC 1.2 to 7.9C range is only a 90% confidence interval. Forster & Gregory’s stated 1.0 to 4.1C range is a 95% confidence interval. As I wrote in the article about the IPCC’s recomputed PDF:

“At the top of the ‘extremely likely’ range, it gives a 2.5% probability of the sensitivity exceeding 8.6°C, whereas the corresponding figure given in the original study is only 4.1°C.”

Sorry. I should read before commenting!

Systematic misuse of data is a better way to describe it.

Just spotted another error.

In the corresponding table, 9.3, page 722, it says that the range of ECS in Forster and Gregory is 1.2 – 14.2 !

So we have:

F&G: 1.0 – 4.1

IPCC Fig 9.20 F&G: 1.2 – 7.9

IPCC Table 9.3 F&G: 1.2 – 14.2

This is fascinating in terms of the estimated error bars & the huge transform they have been subjected to. Very interesting work!

One side issue that I wondered about, and maybe more knowledgeable commenters can help here, is how the Forster & Gregory work that forms the basis for the climate sensitivity estimate relates to the work by Lindzen and Choi that has been discussed in various places, most recently following the PNAS rejection of their second paper (e.g. here on Judith’s blog). It seems to me that the two papers use at least a similar approach – and one that seems very sensible to me – correlating surface temperature with radiation fluxes at the top of the atmosphere. Can anyone discuss the differences between the Forster & Gregory approach and the Lindzen approach, and maybe also comment on the likely statistical compatibility of the two outcomes in terms of sensitivity? And how about the relation to Dessler (Science 2010, vol. 330, p. 1523) and Roy Spencer’s papers?

And Keith Jackson’s comment above is another example of such an analysis.

Jeremy,

I believe that Lindzen’s analysis was based on tropical data observations, while Forster and Gregory’s and my analyses were based on global data. My analysis, however, also tends to support some of Lindzen’s ideas in that I’m seeing some negative long-wave cloud feedback, which is consistent with Lindzen’s idea of an infra-red “iris” effect. (An advantage of using the ISCCP data is that many of the component contributions to the net feedback can be broken out separately.)

Apart from the difference in domain, Lindzen and Choi attempted to overcome the issue alluded to in the above article:

“The ordinary least squares (OLS) regression approach used will, however, underestimate Y in the presence of fluctuations in surface temperature that do not give rise to changes in net radiative flux fitting the linear model.”

Their method of overcoming this (which is quite an important bias towards high sensitivity; see “Spencer, R. W., and W. D. Braswell (2010), On the diagnosis of radiative feedback in the presence of unknown radiative forcing, J. Geophys. Res., 115, D16109, doi:10.1029/2009JD013371”) was to focus on large changes in sea surface temperature and try to maximize the correlation between those changes and the corresponding changes in radiation flux. Statistically, both methods appear to have slight biases in opposing directions, so I think the real answer may be in between. The regression method used by Forster and Gregory is, however, definitely capable of significantly overestimating sensitivity. As for Dessler’s paper, we might be able to get a better sense of where the differences arise between it and Lindzen’s and Spencer’s work if their responses to it ever make it through the peer review process, which has apparently stalled these responses considerably.

Thanks, time – I thought all these analyses were linked. Let’s not hold our breath re. Lindzen & Spencer’s responses – I sense that ‘stalled’ may be an understatement.

Have The Politburo at RC handed down the Party Line on this one yet?

If not, here’s my prediction:

‘Non peer-reviewed, unknown contributor, unwarranted attack on hard-working climate scientists, failed to grasp the essential points, unworthy of discussion, Big Oil funded denier blogs, already been debunked, no credibility, thousands of volunteer climate scientists, voodoo science……………..’

You missed out “contrarian”

Or as Mann is STILL calling anyone who questions anything, “…dishonest mud slingers”.

I suppose Vicky Pollard, the head of Climate Change Advice at the UK Met Office, will have to admit that this is a more sophisticated subterfuge than splicing together pieces of spaghetti to hide a decline.

So ECS estimates range from 1 to 14.2 do they?

Now that the Science has settled, we can leave it to the politicians to sort out the details.

Can’t we?

I’m not sure, but aren’t the estimates of climate sensitivity based on geological evidence (e.g., ice ages, PETM) also observation-based? (See Knutti, R., and G. C. Hegerl (2008), The equilibrium sensitivity of the Earth’s temperature to radiation changes, Nature Geoscience, 1, 735-743, doi:10.1038/ngeo337.) There are a bunch of those.

Also, FYI, there is an update to the estimated climate sensitivity based on observations in Murphy et al. (JGR, 114, D17107, doi:10.1029/2009JD012105, 2009). Forster is a co-author on that paper.

The problem with sensitivity estimates based on ancient data is the great uncertainty of the input data (solar activity, land albedo, etc), which creates very fuzzy numbers.

Right. But my point was that there are lots of observation-based estimates of climate sensitivity. And given the scatter in the plots in the Forster paper, I wouldn’t be so quick to conclude that Forster’s estimate is better than these other estimates.

I don’t think anyone here is assuming that. (Those with more knowledge than me may have made that judgment but that’s another thing.) What we’re concerned with is whether the AR4 WG1 presentation was true to the original Forster paper. And you’re not disputing I think that it was the only paper selected (rightly or wrongly) that did not rely on climate models for its sensitivity calculation.

Craig:

I agree. Another problem with paleo-climate sensitivity estimates is that the forcings are very different from increasing CO2. My field geologist’s gut instinct tells me that the great forcings of major glacial periods (insolation, albedo, etc.) are regional and radically focused, applying or radiating extra heat directly at the surface, as compared with the well-mixed, relatively homogeneous and isotropic increase in tropospheric CO2.

These radically different forcing mechanisms, in my estimation, would likely produce completely different feedback mechanisms resulting in climate sensitivities unique to the specific forcing.

Our only data on temperature for these geology-based sensitivity studies are proxy-based. And we all know how reliable such proxies are. Plus as one goes back in time dating error becomes a problem, along with signal censoring (e.g. due to erosion of a deposited sediment) and smearing. I don’t think the data are up to the job.

To call relations between proxy data of forcings and proxy data of climate “observational” is to stretch the term “observation” beyond its limits.

Uhm . . . don’t know which side of the debate you’re on; not that it matters; your argument may damage Nic’s ship, but it takes the entire CO2 warming armada to the bottom of the sea. Sorry.

I don’t think I’ve damaged Nick’s ship at all. I was saying that the term “observations” doesn’t apply to studies deriving climate sensitivity from paleo data. There are other problems with such an approach, of course, but whether those are severe or not, it is semantically wrong to call them “observations”.

Agreed. Empirical is a better term, as opposed to climate model based.

I feel the same way about calling the global temperature estimates from the HadCru, GISS and NOAA surface statistical models “observations.”

For studies like Knutti and Hegerl I would probably prefer the term “phenomenological”.

Brilliant work, Saint Judith. Thanks so much.

Very interesting. A few weeks back I worked through reproducing the results of this very paper, and put up the scripts to do so:

http://troyca.wordpress.com/2011/06/22/erbe-pinatubo-and-climate-sensitivity/

My main point was trying to resolve the discrepancy between Dr. Spencer’s estimate for sensitivity during the Pinatubo years and that of FG06 using the same analysis. Using 72-day averages resulted in a sensitivity of ~1.0 C during the Pinatubo years, although there’s a lot of noise and uncertainty around all the estimates. I also got a slightly lower sensitivity than FG06 for the years 1985-1996, but once again there’s a lot of uncertainty (and I’m a bit confused about the exclusion of 1997).
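For anyone who wants to see the mechanics, the FG06-style calculation reduces to an ordinary least-squares slope. The sketch below uses made-up anomaly data (not the actual ERBE or forcing series) purely to illustrate the arithmetic: regress changes in (flux imbalance minus forcing) on surface-temperature changes, take the negated slope as the feedback parameter Y, and convert via S = 3.7/Y.

```python
# Hedged sketch of an FG06-style estimate via ordinary least squares.
def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Hypothetical annual anomalies: surface temperature (deg C) and net
# radiative imbalance minus forcing (W m^-2).  Not the real ERBE data.
dT = [0.05, -0.10, 0.20, 0.15, -0.05, 0.25, 0.10, -0.15]
dQ = [-0.10, 0.25, -0.50, -0.30, 0.15, -0.55, -0.20, 0.35]

Y = -ols_slope(dT, dQ)   # feedback parameter: outgoing W m^-2 per deg C
S = 3.7 / Y              # equilibrium climate sensitivity for 2xCO2, deg C
```

The scatter Troy mentions shows up directly here: with only a handful of noisy points, the slope (and hence S) carries wide error bars.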

Many thanks for commenting and posting this link. I will examine your analysis in detail once things are a bit quieter. It sounds very interesting.

Excuse me, please. Brilliant work, Nicholas Lewis. You will need to be a saint for all the abuse you will receive.

The Forster and Gregory approach to estimating climate sensitivity from observational data is an appealing one. It circumvents errors from paleoclimatologic model parameter fitting or the itemization of feedbacks from more recent data. On the other hand, it is plagued by problems of observational uncertainty stemming from short time intervals and the correspondingly small changes in radiative fluxes, such that small absolute errors can introduce large errors into the estimates that would be avoided by much longer time series. Future improvements in technology should reduce this concern.

Forster and Gregory discussed these potential error sources extensively, and were appropriately tentative in their conclusions. Included among the uncertainties are the extent to which flux imbalances that might primarily reflect ENSO events in the tropics can be extrapolated to long term climate responses to CO2. Their paper is exemplary in this regard, and has not always been matched in later attempts to follow their approach.

The thinking of the authors has also evolved somewhat since the 2006 paper. In their 2008 JGR paper, Gregory and Forster focused on the transient climate response (TCR, representing the climate response to 1% annual CO2 increases up to a CO2 doubling) as a more reliable indicator of the climate response to CO2 than the more uncertain climate sensitivity estimates. Their TCR estimate encompassed the range between 1.3 and 2.3 C, which is not very different from the IPCC AR4 TCR range of 1.5 to 2.8 C, although on the lower side of average. TCR is of course only a fraction of the final climate sensitivity value, with the ratio dependent on estimates of the rate of ocean heat uptake and other variables. A typical TCR/climate sensitivity ratio is about 0.6, with a low probability of values exceeding 0.7 – see, for example, Held 2010. This implies a greater value of climate sensitivity than estimated in the 2006 paper.

In section 4.2 of their 2008 paper, Gregory and Forster elaborate on this point:

“The equilibrium climate sensitivity ΔT2x = F2x/a is the warming in a steady state under 2xCO2, as discussed in section 1. Meehl et al. [2007] report that the 5–95% range of equilibrium climate sensitivity in CMIP3 models is 2.1–4.4 K. Over the last three decades, a lot of attention has been given to ΔT2x but it is still relatively poorly constrained. A number of studies examined by Hegerl et al. [2007] and summarized by Meehl et al. [2007] (Box 10.2) have set observational constraints, but these are fairly weak, especially on the upper bound… F is poorly known… Our observational constraint on the TCR is by contrast rather strong”.

The choice of where to assign priors in a Bayesian analysis is not always obvious, and it is sometimes useful to examine the consequences of different assignments. It is always desirable that the choice be noted in the analysis, as was done in AR4. It appears that Forster and Gregory may have acquiesced in that assignment, while Nic Lewis supports a different choice. In response to a comment above from Joshua, Nic mentioned that he would call his analysis to the attention of Forster and Gregory. In view of the latter’s apparent acquiescence in the IPCC representation of their paper, this will be informative. They may respond, and I hope Nic shares their response with us.

To expand slightly on the change in focus to TCR by Gregory and Forster (2008): if one uses a conservatively high figure for the TCR/climate sensitivity ratio of 0.7, their updated estimates translate into a climate sensitivity range of about 1.9 – 3.3 C per CO2 doubling. A lower ratio would yield a higher climate sensitivity estimate – for a ratio of 0.6, the range would be 2.2 – 3.8 C. TCR involves an interval of about 70 years, and so it is unlikely that a response to doubled CO2 would exceed 70 percent of the equilibrium value in an interval that short.
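As a quick check on the arithmetic in this comment (taking the TCR range and the candidate ratios as given):

```python
# Back-of-envelope conversion: divide a TCR range by an assumed TCR/ECS
# ratio to get the implied equilibrium climate sensitivity (ECS) range.
def ecs_range(tcr_low, tcr_high, ratio):
    """ECS range implied by a TCR range and a TCR/ECS ratio."""
    return tcr_low / ratio, tcr_high / ratio

ecs_07 = ecs_range(1.3, 2.3, 0.7)   # ratio 0.7 -> about (1.9, 3.3) C
ecs_06 = ecs_range(1.3, 2.3, 0.6)   # ratio 0.6 -> about (2.2, 3.8) C
```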

I’d be interested to know what observational (not AOGCM simulation based) evidence, if any, supports your claim that 0.7 is a conservatively high figure for the TCR/Climate Sensitivity ratio. Without any such evidence, I would suggest that a figure of 1.0 as conservatively high would be more appropriate. Bear in mind that some of the observational evidence studies point to effective ocean diffusivity being pretty low, far lower than the ocean models of almost all AOGCMs produce.

Nic,

To suggest that a conservatively high figure is 1.0 is to implicitly state that the deep ocean negative feedback does not exist. What evidence do you have for that?

RB, In my book, “a conservatively high figure of 1.0” means that one thinks the actual value is less than 1 (even if only slightly), which implies that heat is absorbed by the ocean (I wouldn’t really describe it as a feedback).

I just happen to have my own simple ocean heat uptake model loaded. Using an effective ocean diffusivity of 0.65 cm^2/s (which is the central estimate derived in the Forest 06 study), the surface temperature response to a step forcing increase reaches about 90% of its ultimate level within 25 years, if I’ve got everything right. And that is with the base of the diffusive thermocline being quite deep (700m). As I understand it, upwelling effectively puts a limit on the depth to which warming can penetrate. Only a 500m base of thermocline depth is needed per Lindzen and Giannitsis’s 1998 GRL paper ‘On the climatic implications of volcanic cooling’, which would shorten the period until 90% of equilibrium response was reached even further.
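For what it’s worth, a model of this general type can be sketched in a few lines. The 50 m layer grid, the feedback parameter, and the insulated 700 m base are my own assumptions here; only the 0.65 cm^2/s diffusivity and the step-forcing setup follow the comment above, so the numbers are illustrative rather than a reproduction of the figure quoted.

```python
# Minimal sketch of a diffusive ocean-column response to a step forcing.
RHO_CP = 4.1e6       # volumetric heat capacity of seawater, J m^-3 K^-1
KAPPA = 0.65e-4      # effective diffusivity, m^2 s^-1 (= 0.65 cm^2/s)
Y = 2.3              # assumed climate feedback parameter, W m^-2 K^-1
F = 3.7              # step forcing for 2xCO2, W m^-2
DZ, DEPTH = 50.0, 700.0
DT = 86400.0         # one-day step; stable since DT < DZ**2 / (2 * KAPPA)

def surface_temp(years):
    """Surface warming (K) after `years` under the constant forcing F."""
    n = int(DEPTH / DZ)
    T = [0.0] * n                     # layer temperatures, top to bottom
    for _ in range(int(years * 365)):
        # Diffusive flux (K m s^-1) across each interface; bottom insulated.
        flux = [KAPPA * (T[i + 1] - T[i]) / DZ for i in range(n - 1)] + [0.0]
        new = T[:]
        new[0] += DT * ((F - Y * T[0]) / (RHO_CP * DZ) + flux[0] / DZ)
        for i in range(1, n):
            new[i] += DT * (flux[i] - flux[i - 1]) / DZ
        T = new
    return T[0]

T_EQ = F / Y                          # equilibrium warming, about 1.6 K
frac_25 = surface_temp(25) / T_EQ     # fraction of equilibrium at 25 years
```

The fraction reached by year 25 is sensitive to the layer structure and the assumed feedback, which is exactly the kind of dependence the subsequent exchange about two-box versus AOGCM behaviour turns on.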

Nic,

On the one hand, there is a lot of scepticism regarding AOGCMs, and on the other hand, people such as yourself place a lot of confidence in simple two-box models. The approach of modelers such as Held is quite different. It is well known that simple two-box models can have multiple solutions, are less constrained, and have incomplete physics compared with the AOGCMs. Isaac Held here looks at simple two-box models relating the globally averaged energy imbalance at the TOA to the globally averaged surface temperature, and concludes that a linear formulation deviates substantially from the behavior of GCMs. The simple models equilibrate substantially faster than the GCMs. He attributes the apparent nonlinearity to the fact that a more realistic approach would incorporate the surface temperature field, with its spatial variations. Specifically, he pinpoints the slow response to ocean heat uptake in the high-latitude regions, where the oceans are most unstable to vertical mixing. Therefore, when capturing the climatic response using simple models, it becomes a case of “Make things as simple as possible, but no simpler.”

In Winton 2010 the authors attempt to construct a simple model that better matches AOGCM behavior by introducing an efficacy factor. Further, none of the 20+ models has a TCR/ECS ratio anywhere close to 1, and there is a strong physical basis for expecting that. In summary, modelers view the AOGCMs as the more comprehensive models, and they use simple models by trying to approximate the behavior of AOGCMs. Modelers use simple models, but first tune them to AOGCM output to restrict the range of their potential outputs; not doing so would be the equivalent of making things up. On the other hand, sceptics such as yourself view the AOGCMs as suspect (which in many ways they are, as modelers would also admit) but treat simplified models as the more comprehensive word on climate behavior, even though they have only minimal physical constraints.

I believe that your model with a roughly 25-year response time may win a lot of admirers in the sceptic blogosphere, but you would have a hard time gaining acceptance from most professional climate scientists, for the reasons detailed by Isaac Held as I paraphrased above to the best of my understanding.

RB:

“On the one hand, there is a lot of scepticism regarding AOGCM’s and on the other hand, people such as yourself place a lot of confidence in simple two-box models”

The reduced dimensional models are not necessarily worse than the AOGCMs. If you start with an AOGCM, formally there exists an equivalent reduced dimensional model, obtained by integrating over the additional degrees of freedom of the more detailed AOGCM. This is a standard thing to do; I’ve done it in the past (in another field). The main concern in starting with a reduced dimensional model is that the model not violate basic physical principles. The exercise of going from the full model to a reduced dimensional system helps elicit the assumptions necessary for a particular form of a reduced dimensional system to hold true.

On this note, Tamino is also a frequent user of reduced dimensional models. I’ll point out that Lucia was banned from his site because of her criticism over this very issue of the model’s physicality. At the time, Nick Stokes and Arthur Smith came in and vigorously defended Tamino’s work against Lucia’s criticisms. It is interesting, given this history, that you are painting this as a “skeptics only” activity.

On a separate note, none of the AOGCMs, at this point, even if more comprehensive, accurately model the real climate system (that is well beyond their capacity). So it is not clear, if you were to produce a reduced dimensional version of an AOGCM model, that you would necessarily get something that was more likely to be physical, than a simple e.g. two box model with which you’ve taken care for example to ensure that the 2nd law of thermodynamics isn’t violated (apropos in this case).

Carrick,

I’m not aware of the history and in any case, I don’t believe it has anything to do with my criticism of Nic’s model results. My criticism has to do with the dichotomy of treating GCMs as suspect and reduced dimensional models as superior. In this particular case, the multiple timescale issue arising from slow mixing at higher latitudes seems to make any analysis with a linear dependence on global mean surface temperature suspect. Here Isaac Held cautiously states that a 500-year equilibration time is more appropriate while outlining the limitations of simple two-box models. Even Tamino joins in on the discussion later. In this context, the AOGCM model seems superior because it accounts for a relevant physical effect – the slow response arising from high latitudes. As I stated, Held also says that Winton 2010 attempts to incorporate the relevant physical effect in a simple model.

That should be “slow response arising from ocean mixing at high latitudes”

“seems to make any analysis with a linear dependence on global mean surface temperature suspect”

I suppose I should also add: that is for the purpose of analyzing equilibrium climate sensitivity – there may be other questions, relating to short-timescale processes, where such an analysis may still be useful.

RB, there isn’t any “dichotomy” here. The full AOGCM models are known to be unphysical (they have non-physical kludges for the physics that isn’t treated properly). If you started with a smaller order model but constrained its parameters to be physically realizable, that could actually be an improvement with respect to e.g. global parameters.

Are there going to be limits? Sure, but if it takes 500 years to notice them, I’m pretty sure the AOGCM models give meaningless output on that scale anyway.

Carrick,

If I understand you right, I think you are discussing a hypothetical. Let’s grant that the reduced-order model can also yield good results if all of the relevant physics was modeled using physically realistic parameters. I don’t believe that this is the case for the purpose of calculating an ECS using a simple linear model as Nic appears to be using. And, per Held’s comment/question, I don’t believe there are simple theories yet independent of GCM simulations that can predict the strength of the ocean mixing if such a model was created. Therefore, currently we seem to be stuck with the GCMs. I seem to more or less be repeating myself, so I’ll stop here.

RB, what I’m saying is I think you are placing too much faith in the ability of currently-in-existence AOGCMs to accurately predict global phenomena. AFAIK, none of them is reliable at predicting the strength of the ocean mixing (the data I’ve seen suggest they are way off).

It’s very possible that one could generate a reduced dimensional (phenomenological) theory that could do a better job on this sort of thing, than a more detailed (but physically wrong) model. Simply being more complex doesn’t necessarily make the model better.

The Gregory & Forster 2008 paper you cite has no relation to the Forster & Gregory 2006 paper that my article deals with; it uses a completely different method, with entirely different data, and is heavily dependent on AOGCM model simulations. It also estimates a different sensitivity parameter.

The choice of prior that I support for Forster & Gregory’s 2006 study is that effectively used in the original paper, where the authors explained that it arose from their regression model and error assumptions. They explicitly stated that in their view their choice was more appropriate, given their analysis and input data, than using a uniform prior in S. And, as you agree, Forster & Gregory discussed potential error sources extensively.

I’m afraid that I don’t agree with your claim that Forster & Gregory were “appropriately tentative in their conclusions”. On the contrary, the authors stated that to show the robustness of the main conclusion of the paper – a relatively small equilibrium climate sensitivity – they deliberately adopted the regression model that gave the highest climate sensitivity. I know that earlier Forster & Gregory inconsistently describe the conclusion as the ‘suggestion’ of a relatively small climate sensitivity, but in the light of their aforementioned subsequent statement about robustness I see that as likely to have been a sop to a reviewer who was hostile to their conclusion.

Nic – If you look at their 2008 paper, two points seem apparent. First, they conclude that there is less uncertainty in TCR than in Equilibrium Climate Sensitivity (ECS), which I suggest supports the proposition that they found considerable uncertainty in the latter.

Second, their TCR estimates, which they found to be better constrained than ECS, are incompatible with the low range of ECS estimated in the 2006 paper. Rather, they require an ECS range similar to the IPCC estimates. It appears therefore that they have probably revised their climate sensitivity estimates upwards, although we would have to know the figure they would use for the TCR/ECS ratio to judge by how much.

I suggested earlier that a figure of 0.6, or 0.7 at maximum, is realistic. This is particularly true for a long term response to CO2 forcing, because of the major role of the deep ocean (down to 4000 meters and even below) in the long term heat storage needed for equilibrium. Recent data from a number of authors – Purkey and Johnson come to mind but there are several others – have indicated a greater role for the deep ocean than previously surmised. A variety of phenomena, including so-called “ocean fronts”, are important players in heat transfer down to great depths. Because of the vastness of the deep ocean, most of the heat from long term forcing will eventually be stored there, but the process takes centuries to approach equilibrium asymptotically.

Note that the deep ocean plays a lesser role in heat storage or dissipation from short term forcings (e.g., volcanic eruptions), and so equilibrium in these cases can be nearly complete within decades – a major difference from persistent CO2 forcing.

Nic -You’re right that some of this justifies a different venue. I do want to add, though, that RB above cited an Isaac Held item on this, and another informative one is at Held – Transient vs Equilibrium Responses. Some of the same concepts are addressed, and Held gives a TCR/ECS ratio mean of about 0.56, which if applied literally to Gregory and Forster 2008 would imply an ECS in the middle of the IPCC range (although that overstates the level of precision that would be justified). The ensuing discussion refers (without details) to observational data on deep ocean mixing utilizing tracers such as CFCs or C14.

Fred

Thank you. I agree about TCR estimates being better constrained than those of ECS. That is bound to be the case, and I don’t see that anything more should be read into what Forster & Gregory say about it.

I note what you say about deep ocean heat storage, although the amounts reported by Purkey & Johnson are not huge, have large error bounds, and will to some extent already be accounted for in the standard 0-3000m ocean heat storage datasets. And insofar as slightly warmer new bottom water will stay below the surface for on the order of 1000 years, I am not sure how relevant it is.

I’m not sure that this is the right thread for an extended debate on the relative magnitudes of TCR and ECS, so for the present we will just have to differ a little on what an appropriate ‘conservative’ maximum to assume for the ratio of TCR to ECS is. 0.7 may or may not be conservative; 1.0 clearly is, as the ratio must be less than unity!

My comment above yours was intended as a reply to yours and not a further reply to my own.

Another error is that in table 9.3 the 5-95% range for Gregory et al is described as 1.1 – infinity. This makes no sense mathematically.

I haven’t checked, but I’m not sure either of these are actually errors. Remember that the IPCC initially transformed Forster/Gregory 06 to a uniform prior in S over the range 0-18.5C, and then truncated to 0-10C and renormalised. The 14.2C top of the 95% range could be right.

And the Gregory 02 results do technically cover the range up to S=infinity, as this study also is based on estimating Y. With a normal error assumption, mathematically there is a positive probability of Y being zero (or less than zero).

Well, whether you call it an error or nonsensical distortion is debatable.

I don’t see how the 95th percentile of a pdf can be at infinity.

If you look at it in terms of Y being zero, you end up dividing by zero, so still nonsense.
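The division-by-zero point can be made concrete. With an assumed normal distribution for Y (the numbers below are hypothetical, chosen only to make the effect visible – they are not Gregory 02’s actual estimates), any probability mass at Y ≤ 0 corresponds to an unbounded S, so the CDF of S never reaches 1 at any finite value:

```python
from statistics import NormalDist

# Hypothetical normal estimate of the feedback parameter Y (not Gregory
# 02's actual numbers); S = 3.7 / Y, with Y <= 0 mapped to S = infinity.
Y = NormalDist(mu=1.2, sigma=0.75)

def cdf_S(s):
    """P(S <= s) for s > 0: S <= s exactly when Y >= 3.7 / s."""
    return 1.0 - Y.cdf(3.7 / s)

p_runaway = Y.cdf(0.0)   # probability mass with non-positive feedback
# cdf_S can never exceed 1 - p_runaway, so whenever p_runaway >= 0.05
# the 95th percentile of S sits at infinity -- hence a quoted range
# like "1.1 - infinity" is mathematically coherent, if unhelpful.
```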

What is novel about the relatively new ‘science’ of global warming is that it is simply understood a priori that a determination of the truth or falsity of any issue – no matter what it may be – can never be seen to resolve the issue that is central to the validity of AGW theory – i.e., that human CO2 is bad. On that central issue, the faith of the AGW True Believers is simply unchallengeable on scientific grounds. And that is what amuses skeptics like Freeman Dyson.

Nic Lewis,

Very informative analysis; thanks. I sure hope Forster and Gregory offer comments, but I am not going to hold my breath.

Looks like you think that nic’s post puts them in a “difficult situation?”

Maybe a bit. Certainly there is a substantial difference between the assumed prior (they explicitly justified their assumed prior in the original article) and what got used in AR4. As original authors who participated in the IPCC AR4 process, it would seem that they accepted a fundamental change to their original work for incorporation in AR4, with that change based not on a published refutation of their original work but on the internal workings of the AR4 group itself. Did they conclude by the time of AR4 that their original publication was in error? If so, why have they not issued a retraction or correction? I do find this strange. It makes me wonder whether peer pressure is sometimes more important than peer review.

Those are all good questions. That is why I believe it is important to view nic’s analysis in the context of any response that they might have – before drawing conclusions. (It’s not clear to me that you, at least, have drawn conclusions – however, the “I’m not holding my breath” comment suggests that you may have done so.)

You know, Joshua, they teach you all this claptrap about ‘context’, but what you dramatically fail to understand is that you would have a much greater understanding of the context if you had a shred of curiosity about the science and the statistics, instead of vast experience at interpreting and misinterpreting ‘context’.

=====================

depends on their character. why speculate about that?

They’ve been invited, see if they show up.

If they dont, the first question you can ask is why?

lots of answers for that

The reason I’m speculating is that nic stated that he chose the course of action he took in order to avoid putting the authors in a difficult situation.

Seems to me that they are in a difficult situation regardless – and that Steve Fitzpatrick agrees.

http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/#comment-83352

I would guess that the same speculative answers would have all been available if he had asked them to comment on his analysis before creating this post – prior to them giving a response. A response from them would have limited the speculation (and still might – at least for those who are open-minded about a response they might give).

So again, his explanation for his course of action does not seem plausible to me. What, then, would be the reason? I could speculate about answers to that if I were so inclined.

you have not shown yourself to be a good judge of motivations. you cannot even see that Nic has said nothing in his article about the motives of the authors. You repeatedly misrepresent other peoples arguments.

UNTIL you can represent someones argument FAITHFULLY, you should get out of the business of motive hunting. You havent dealt much with Nic over the years. i’ll suggest you dont know shit

Agreed Steve. Joshua has done a very poor job of representing Nic’s paper, and is conflating the original authors being put in a “difficult situation” through what is in fact their own willful behavior, with Nic having committed some unpardonable offense.

I’m reticent to comment here, as I have been admonished for posts that “divert” attention from the technical aspects in question. If you’re interested in reading my response, it’ll be up shortly on the recent “Week in Review” thread.

I think the whole thing would be helped if you quit inventing rules that scientists are supposed to behave by, and spend a bit more time learning and documenting what is standard practice. I can tell you, your claims as to how NicL should have handled this are entirely inconsistent with my own experience as a scientist. (And I have explained above why it is not always a good thing to have too nepotistic of a relationship between the critic and the person being criticized.)

Josh, they put themselves in that spot. Nic is being very gracious to them. Fact is, this is exactly the kind of conflict that the IPCC was forced to address, because people were either too stupid to understand that this isn’t appropriate or their ethical sense has no foundation.

Now, Josh, I believe that’s the candor you were so desperately desirous of, no?

I haven’t read Forster & Gregory ’06 or Gregory ’02. Assuming the Gregory in both studies is one and the same, surely there must have been comment by Forster & Gregory about the divergence from the earlier study? Anyone know?

Not that I know of. The methods are based on entirely different types of observational evidence, and I’m not sure that one would expect the later study to single out the earlier Gregory study for comparison.

From the SI, Appendix 9B, regarding the use of uniform priors:

That’s a valid description of the situation, and confirmation that no error is involved – only a judgment, one which follows the most common choice of prior as far as I know.

My own preference for a relatively flat prior in Y rather than S is based on the physical model, which consists of forcing and feedback. In this physical model there is no natural reason to prefer feedback factors slightly below +1, as the uniform prior in S does. To me it seems much more natural to have a smoothly behaving prior in Y, or equivalently in the feedback factor. Annan and Hargreaves presented similar arguments in their paper.

‘Judgement’ allows all sorts of possible bias to enter the fray. I wish it was an error to be honest.

Yes it does, but there is no way around it.

Pekka,

there is a quite simple way around it that apparently many climate scientists resist. Fully explaining the whys and wherefores of the judgment calls makes a rather dramatic change in the situation. Leaving these issues poorly explained or not explained leaves us evil deniers with much room to talk about their poor or biased judgment.

Having them then come forward in response to these criticisms at a later date with what appear to be excuses makes the situation worse. Fully document what is done up front. It is always the correct thing to do and will be appreciated. It also leaves us evil deniers less room to work.

Yes, that is the most frequently raised argument against Bayesian analysis: choice of prior can bias results.

That is the basic argument being made here (and by Annan and Hargreaves in a series of papers on this topic): the use of the uniform prior biases results to have an unrealistically long tail. Annan argues that the use of an expert prior constrains ECS in a more realistic way than the uniform prior which the IPCC favored in this analysis. It is a good old scientific argument – in the comments on AR4 from Annan linked above, he rips ’em a new one! Oh, and BTW, Annan supports the consensus…

which consensus?

It’s not really a scientific argument. The selection of the prior is hardly what I would call a ‘scientific” argument. It’s an assumption.

Pekka

I agree with various of your comments, and I think that our views on appropriate priors are actually very similar. I’m sorry that you didn’t like my description as an error of the IPCC’s restatement of the Forster/Gregory 06 results using a prior that contradicted the non-informative prior implicit in the study’s regression model and error distribution assumptions. I think that the IPCC’s doing so in this case was inconsistent with its claim that a uniform prior indicates that little is known, a priori, about the parameters of interest except that they are bounded below and above.

Perhaps you would have been happier if I had instead pointed out, as an error, the untruth of the IPCC’s statement that all the PDFs in Figure 9.20 are based on a uniform prior in S? That is no doubt an inadvertent error, but one that further erodes one’s confidence in the reliability of scientific statements by the IPCC.
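For readers who want to see the two priors side by side, here is a minimal numerical sketch. The normal-in-Y parameters are illustrative stand-ins, not Forster/Gregory’s exact figures; the point is only that dropping the Jacobian factor 3.7/S², which is what imposing a uniform prior in S effectively does to a likelihood expressed in Y, fattens the upper tail of the sensitivity PDF:

```python
from math import exp, pi, sqrt

# Illustrative change-of-variables demo.  Y is taken as normal with
# assumed (not FG06's exact) parameters; S = 3.7 / Y.
MU, SIGMA = 2.3, 0.7            # assumed mean and sd of Y, W m^-2 K^-1

def pdf_Y(y):
    """Normal density for the feedback parameter Y."""
    return exp(-0.5 * ((y - MU) / SIGMA) ** 2) / (SIGMA * sqrt(2 * pi))

grid = [0.1 * k for k in range(1, 101)]   # S from 0.1 to 10 C

# Density in S implied by the paper's own normal-in-Y result, including
# the Jacobian |dS/dY|^-1 = 3.7 / S^2:
w_paper = [pdf_Y(3.7 / s) * 3.7 / s ** 2 for s in grid]
# Unnormalised posterior density after imposing a uniform prior in S,
# i.e. the likelihood alone, with the Jacobian factor removed:
w_uniform = [pdf_Y(3.7 / s) for s in grid]

def tail_mass(w):
    """Fraction of the (0-10 C truncated) probability mass above S = 4 C."""
    return sum(v for s, v in zip(grid, w) if s > 4) / sum(w)
```

With these assumed numbers, `tail_mass(w_uniform)` is several times `tail_mass(w_paper)`: the same data, restated under a uniform-in-S prior, put far more weight on high sensitivities.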

Nic,

Error is a strong word with a concrete meaning. I don’t think that the presentation in the WG1 report fulfills the criteria for an error. If they had added some of the strongest sentences of the supplementary material, or something equivalent, to the main text, I would have little to complain about. As it stands the issue is not emphasized enough, and the presentation is therefore biased. The supplementary material proves that the authors were fully aware of the issue. I have no way of judging why their views on the favored prior differ from mine.

The empirical work tells us how the posterior distribution differs from the prior distribution. It’s natural that the results are presented in the variable in which the empirical data are described most simply. In this case that is as a function of Y, but that by itself is only a consequence of the empirical methods used, and does not favor any prior over another. That the variable happens to be the same one I prefer for describing the prior distribution is accidental rather than of any deeper meaning. Therefore I don’t see anything basically wrong in the fact that WG1 uses a different prior than the experimental paper does. My different view on the prior comes from the physical picture that seems most natural to me.

In spite of all reviewers, it’s unavoidable that some inadvertent errors remain. Thus such errors are not a big issue.

The fact remains that the IPCC report altered the conclusions of the original paper, and in a direction that implies a higher and more alarming climate sensitivity, without giving the original results or explaining clearly what they’d done to alter them. You may claim that they had covered themselves with their generic statement about Bayesian priors (which few people fully understand), but the effect is still to mislead in a preferred direction. IMHO it may not be an error, but it’s not honest science either.

Does this not reduce Bayesian estimates to tautology? Selection of priors and expert opinion and weighting of evidence are locked in a circle, and if you choose one you choose all.

All our knowledge is based on adding newly obtained information to what we thought before. In doing so we often find errors in earlier understanding; thus adding new information also corrects errors. The Bayesian approach is just a mathematical formulation of this general principle.

It’s very seldom wise to start from scratch. Instead, the combined evidence of earlier knowledge is usually an essential part of the posterior understanding. The more empirical data we can obtain, the more it determines the posterior knowledge, and the less weight the thinking we had before any of this data was available carries.

All our understanding of the real world has some subjective input, but its role is really small in the best-confirmed knowledge, like the results of Newtonian mechanics in situations where relativity and quantum mechanics can be safely disregarded. In many parts of climate science the role of subjective judgment remains much larger. The case of estimating the climate sensitivity is a prime example.

Choosing the prior is a formalized way of entering subjective judgment into the analysis. Formalizing this step is the central idea of the Bayesian approach. It’s a very useful approach, because formalization makes it possible for people to state exactly what their subjective input is, and to discuss and argue over the choice. Without the Bayesian formalization the input would still be there, but it could not be specified and argued upon.

The cognoscenti have known for some time that the empirical and semi-empirical findings reported in AR4 Chapter 10 are at odds with the Summary for Policy Makers. Even the semi-empirical studies such as Andronova and Schlesinger look very different from the model PDFs, even though they make assumptions about the forcings (e.g., small solar forcing due only to TSI). While the medians are described as in accord with the models, clearly there is appreciable probability of a climate sensitivity below what the SPM announces as highly improbable.

The sham is that the Summary for Policy Makers does not mention these studies, nor does it make any attempt to reconcile them with the models. If all you read is the SPM, you would never know these other results exist.

The SPM is the real purpose of the IPCC. It is, after all, designed to ‘guide’ policy, and that policy is made by those who will only read the SPM and probably don’t have enough knowledge to deal with the more technical content even if they did read it. It’s the real sucker punch, and that is why there are so many things that slip into it which cause problems.

Other empirical estimates of climate sensitivity (CS):

Lubos Motl, http://motls.blogspot.com/2006/05/climate-sensitivity-and-editorial.html

“By thermometers, [280->385 ppm CO2] has led to 0.6 °C of warming, so the full effect of the doubling is simply around 1.2 °C. This is the most solid engineering calculation I can give you now. We will get an extra 0.6 °C from the CO2 greenhouse effect before we reach 560 ppm, probably around 2090.”

— http://www.climateaudit.org/?p=2560#comments , see #3 & 7

Interestingly enough, a number of semi-recent peer-reviewed studies yield similar low numbers for CS, in the 1-2ºC rise/2xCO2 range. For example:

Minschwaner and Dessler, CS estimated as 1.8ºC with H2O feedback: http://www.nasa.gov/centers/goddard/news/topstory/2004/0315humidity.html and http://meteo.lcd.lu/globalwarming/water_vapour/dessler04.pdf

Douglass & Knox, http://arxiv.org/abs/physics/0509166 : they got around 1.1ºC for “full” sensitivity, including all feedbacks? Widely criticized.

Soden et al, Global cooling after Pinatubo eruption, Science 296, 26 Apr 2002 [link?]: 1.7–2.0ºC per 2xCO2 (by my calculation, see http://www.climateaudit.org/?p=1335 , #84).

P. M. de F. Forster and M. Collins, “Quantifying the water vapour feedback associated with post-Pinatubo global cooling”: http://www.springerlink.com/content/37eb1l5mfl20mb7k/ . Using J. Annan’s figure of 3.7 W/m2 forcing for a 1ºC temperature rise ( http://www.climateaudit.org/?p=2528#comment-188894 ), this yields 0.4(±)ºC for H2O forcing, or a 1.4ºC sensitivity (CS) figure for the Pinatubo natural experiment. See http://www.climateaudit.org/?p=1335 , my #94.

Wikipedia ( http://en.wikipedia.org/wiki/Climate_sensitivity#Other_estimates ) lists several other empirical estimates in this same range. So it appears that the IPCC is being disingenuous in asserting they’ve ruled out any possible CS below 1.5 deg.C/doubling CO2.

Peter D. Tillman
Consulting Geologist, Arizona and New Mexico (USA)

There are two kinds of people. The first are the ones proposing theories. The second are the ones proposing those ideas are garbage.

Call the first ones mutations. Call the second ones selective pressures. The end result is evolution. That’s what we all want.

Look at the past 10 thousand years. It is very clear that there is a high negative feedback to temperature. Every time it gets a degree or two above the “normal”, it then gets cooler. Every time it gets a degree or two below the “normal”, it then gets warmer. There is a high climate sensitivity to temperature, with negative, stable feedback that does not respond to the steady increase of CO2.

Does anyone else notice this? Look at the actual temperature data. Temperature data is stable! Only Climate Models are unstable!

Looking at the past four billion years the result’s the same. It’s the most important argument for negative feedback (of some kind) that there is. It’s really not that hard to understand.

Confucius say, “No tree grow to sky.”

:)

Indeed, Herman. In a situation with such a homeostatic mechanism (a “thermostat” or “cruise control” if you will) the entire concept of a “climate sensitivity” is called into question. There is no relationship, for example, between the temperature of my house and the amount of gas used to heat it. There is no relationship between the speed of my car on cruise control and the amount of gas used to power it.

As a result, while I applaud Nic for finding the error / interesting choice of priors, for me the whole debate misses the point. Until we determine the strength and nature of the homeostatic mechanisms, it seems premature and indeed counter-productive to discuss Bayesian priors regarding a measure (climate sensitivity) which may lack physical meaning.

w.

One gets the feeling that if the prior resulted in S being rooted rather than squared it wouldn’t have made the cut.

It seems I don’t trust the IPCC any more.

@ Peter D. Tillman.

Yes, these empirical studies (and others that do not make assumptions or set constraints that necessarily lead to large climate sensitivities) constitute a body of evidence that the climate sensitivity is about 1 deg C per doubling, 1/3 of the IPCC ‘central’ value.

On p798 of AR4 WG1, you will find the models and empirical/semi-empirical studies compared in a figure. The models’ results are captioned “Constrained by climatology.” The empirical/semi-empirical results are captioned “Constrained by past transient temperature evolution.” This says it all. The models ARE climatology. Data are just data.

It is dogma that all forcings are the same. I do not agree. A forcing operating via Svensmark’s cosmic ray hypothesis principally operates in mid to high latitudes and where CCN are in short supply; it explains the polar see-saw phenomenon. This is NOT a uniform forcing and does not operate globally at a uniform rate. It will cause changes in clouds, rain, albedo, temperature and wind in ways that will alter climate patterns differently than solar visible-light effects. CO2 itself does not even operate the same as visible-light forcing changes, because it should have its greatest effect at the poles (due to super-dry air). Finally, changes in ultraviolet should have their biggest effect in the stratosphere. This dogma about all watts being the same interferes with the ability to separate the effects of the different forcings, in the interest of simplicity.

Yes, in fact we know that the paradigm in which only the globally averaged TOA radiative forcing matters must be erroneous, since it fails to explain how Milankovitch forcing (changes in insolation) causes the glacial-interglacial cycles: it is a forcing which is tiny on a global scale (and even completely out of phase between hemispheres!), and yet it clearly does cause them. In Hansen’s “sensitivity” study of the glacial/interglacial cycle, insolation isn’t even mentioned, because the mechanism it must work by (the latitudinal distribution of insolation changes alters equator-to-pole heat fluxes and acts on different ice and cloud regimes) is not something that works well for assessing the “global” sensitivity to “global” radiative forcing.

Craig:

This was the point I tried to make (poorly) in my previous reply to you. I agree that the reliability of paleo-proxies are questionable, but this dogma, as you say above, is a much bigger issue.

Even if the proxies were 100% reliable, the sensitivity derived would be useless for calculating the man-made CO2 sensitivity. That the consensus view requires all of the diverse climate forcing mechanisms to produce the same climate sensitivity does not pass the straight-face test.

Craig,

I know you are a scientist and I am not, so could you also please clear something else up for me? I have worked with linear control systems all my life and have acquired some understanding of what “deterministic chaos” means. In addition to the points you make, isn’t it also folly to believe that the S value is not sensitive to initial conditions? Doesn’t this make these papers on S all subject to unquantifiable errors?

One aspect of Roy Spencer’s work is that internal random oscillations of things like surface ocean temperature spatial patterns (affected by winds), which can affect clouds, could have a forcing effect that could easily be mistaken for climate sensitivity to external forcing. Whether Forster & Gregory’s work falls into this trap you can evaluate by visiting Roy’s site, but the point of this blog post concerns how their conclusions (right or not) were altered by fiddling with the prior for S.

Climate models assume that forcings are linear, predictable and relatively well suited to computer models.

The alternative is that climate is non-linear. That sometimes a large forcing will have no effect, but at other times a few small forcings may have a large effect.

The problem for climate science is that the second alternative, non-linear climate forcings, cannot be solved by current mathematical theory. There is no use spending billions on computers to model climate if the second alternative is true, because there is no computer fast enough to solve the calculations in a human lifetime. Not by a long shot.

Clearly, no climate scientist wishing to work in computer models is going to admit to this, as they would be out of a job. Instead they will continue to insist that climate is linear, and can be predicted, all we need is a computer faster than last years model.

X: Next year when the predictions prove wrong, all that means is we need a faster computer. Go to X:

Fred,

Thanks, that was the conclusion I had come to some time ago. My question to Craig actually originated from an answer to the feedback question given by Gavin Schmidt. In it he stated that the feedback parameter calculated during the last glacial maximum was unlikely to be small or negative. Ignoring any other criticisms, I could not understand why we felt it must be the same at the LGM as it is now. To name one relevant initial condition that has changed: atmospheric water vapor content has changed at every spatial level.

Craig,

Insightful and concise.

Thanks, Craig, for putting this so concisely. I’m tired of the blinkered obsession of orthodox climatology with trying to make everything global and uniform.

Nic wrote this: “The IPCC’s implicit transformation has radically reshaped the observationally-derived PDF for Y, shifting the central estimate of 2.3 to below 1.5, and implying a vastly increased probability of Y being very low (and hence S very high).”

It’s worse than that: it pushes the credible region for the slope of the regression line so far that the slope estimate itself is nowhere near the middle. Put differently, the prior on S overrules most of the information on the slope that is in the data. I don’t think I have ever met even the most subjective Bayesian who would support such a procedure absent a very strong justification for the prior on S.

Note also that the prior on S was chosen “a posteriori” — I would bet (Bayesians do this) that the authors tried a bunch of “priors” in order to get their result.
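The reweighting effect under discussion can be sketched numerically. The numbers below are purely illustrative (an assumed normal likelihood for Y with mean 2.3 and standard deviation 0.7, not Forster & Gregory’s actual distribution); the point is only how a prior uniform in S, expressed in Y-space, drags the median of Y downward:

```python
import numpy as np

# Assumed normal likelihood for the feedback parameter Y (illustrative only).
y = np.linspace(0.1, 10.0, 100_000)
dy = y[1] - y[0]
likelihood = np.exp(-0.5 * ((y - 2.3) / 0.7) ** 2)
likelihood /= likelihood.sum() * dy

# A prior uniform in S = 3.7/Y corresponds, in Y-space, to a density
# proportional to |dS/dY| = 3.7 / Y**2, which heavily weights small Y
# (i.e. large S).
posterior = likelihood / y**2
posterior /= posterior.sum() * dy

def median_of(pdf):
    """Median of a density tabulated on the grid y."""
    return y[np.searchsorted(np.cumsum(pdf) * dy, 0.5)]

print(median_of(likelihood))  # ~2.3: the data alone
print(median_of(posterior))   # well below 2.3 once the uniform-in-S prior is applied
```

The likelihood itself is untouched; all of the shift comes from the choice of prior.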

This is a nice post. The technical details can hardly be simplified because Bayes’ theorem (and other rules of conditional probability) is not intuitively clear.

Jeffreys “priors” have nice features. Have you tried this with Jeffreys’ priors on Y?

I don’t believe that such priors are rare at all. Some of the well known basic examples of Bayesian approach contain such priors. To give just one example: The case of diagnosing a very rare disease.

I think you’re going to have to wrap this up into a single easy-to-understand paragraph if you want people to pick it up and report it!

The downside is the IPCC will argue it’s all about the model sensitivity, as the models are so much more complex than real-life data and the empirical data is most likely wrong because it doesn’t match their superior model results, etc., etc.

It’s shocking to those who understand how science “should” operate, but I am not sure this will appeal to the layman. The IPCC always talks about model projections, and their models ignore observed sensitivity and instead use some made-up one, so they will argue it doesn’t affect their projections, as they ignored any observed data in the first place when building them!

Science is supposed to work like this: you put out an idea (theory) and then anyone can try to undermine it or overthrow it. It doesn’t matter what their bias is. If they find an error it is an error. A scientist does not get to dismiss criticism because the critic is biased–unless he is the IPCC.

Thanks for your comment, MattStat. All expert statistical input is most welcome! You make a good point about the IPCC’s use of a uniform prior in S overruling most of the information on the slope that is in the data. I think that the pink line in my Figure 6 is also quite revealing about this.

I’ve read different claims as to what the Jeffreys prior is for the OLS regression model with errors in the independent variable dominated by those in the dependent variable/ fitting errors. Guojun Wang in “Some Bayesian methods in the estimation of parameters in the measurement error models and crossover trial” derives a formula for the Jeffreys prior for a regression where the ratio of the two sorts of errors is known, which reduces to a uniform prior (in Y) where the ratio is as here. That is also the standard reference prior. But I guess that is not what you had in mind?

De-emphasis, if not actual ignoring, of inconvenient data is the impression it gives. Your Fig. 6 is hard to interpret in any other way.

I don’t understand the need for a “prior distribution” in the quantity that is being experimentally determined (“S”).

Why isn’t S just a straightforward result of the experimental analysis and why should it at all be influenced by a “prior distribution”?

You might well ask, given that physics and other proper sciences made almost all of their experimental progress without worrying about prior distributions.

Note that the quantity that was experimentally determined was Y, not S. S has no direct relation with observable variables. Rather, Y is the slope coefficient for an (approximately) linear dependence of net radiative balance N, minus the change in forcings Q, on changes deltaT in mean surface temperature. So Y is determined from a regression of (Q-N) on deltaT.

Given Forster & Gregory’s regression method and observational error assumptions, the error (and hence probability) distribution for the resulting slope coefficient estimate can be derived from frequentist statistical theory, as used in science for many years. Alternatively the same probability distribution for the estimate of Y can be arrived at using a Bayesian analysis, starting with a prior distribution that is uniform in Y (not S). Such a prior distribution has a vanishingly small influence on the probability distribution for the estimate of Y: it is an uninformative reference prior.

Either way, the resulting probability distribution for Y is the same. It is that shown in my Figure 2. This can be converted into a probability distribution for S using the relation S=3.7/Y and standard rules of probability for parameter transformations (which there is no dispute about). The resulting distribution is that shown by the green line in Figures 3 and 4.
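That change-of-variables step can be sketched in a few lines. The normal distribution for Y below is illustrative (mean 2.3, sd 0.7), not the actual distribution in my Figure 2; the transformation rule itself is standard:

```python
import numpy as np

# Change of variables: if S = 3.7/Y, then
#   p_S(s) = p_Y(3.7/s) * |dY/dS| = p_Y(3.7/s) * 3.7 / s**2.
def p_y(y, mean=2.3, sd=0.7):
    """Assumed normal density for the feedback parameter Y (illustrative)."""
    return np.exp(-0.5 * ((y - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def p_s(s):
    """Implied density for climate sensitivity S = 3.7/Y."""
    return p_y(3.7 / s) * 3.7 / s**2

# Probability is conserved: the transformed density integrates to ~1
# over a wide range of S.
s = np.linspace(0.5, 40.0, 200_000)
total = (p_s(s) * (s[1] - s[0])).sum()
print(total)  # close to 1
```

Note that no prior enters at this stage: the transformation merely re-expresses the same probability distribution in terms of S instead of Y.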

I hope this helps.

Yes it does; thanks for taking the time. I also read the Wiki pages related to Bayesian analysis, those are helpful also.

Moderation note: I am clearing this thread of non substantive and/or off topic comments

Judith

What a good idea, there are too many people trying to invent motives rather than discussing the science behind this intriguing post.

tonyb

Yes. Well, not too surprised to see my comment cleared. It was not related to the science per se. But given this statement from this paper, this topic is about more than just that:

“Here I demonstrate an error in the core scientific report (WGI) that came about through the IPCC’s alteration of a peer-reviewed result.”

Why was it altered?

a product of this less than substantiated

There appears to be a kind of Gresham’s Law for blogs, where cogent comments are driven out by those of little substance, less consequence. And some contributors play the game very well. They are boring. Thanks for stepping in.

Judith – OT, I know, but I am rereading Samuel Johnson’s account of his trip to the Western Isles, and found this, which I thought as Lady Uncertainty, you might rather like:

“To be ignorant is painful. But it is dangerous to quiet our uneasiness with the opiate of hasty persuasion.”

Oops… left a dangling mess of words. Please alter. That would make sense.

Forster and Gregory (2006) derive their sensitivity by using the radiation imbalance versus the temperature change, using mostly annual averaged data. A key part is how this imbalance is defined. It is the difference between forcing and measured top-of-atmosphere radiation. This needs an assumption about forcing, for which they mostly use GHGs and volcanic effects saying other things like aerosols were neglected. This could be important given the previous thread that showed solar radiation to be increasing possibly partly due to aerosol decreases in their period around 1990.

Their results showed a positive shortwave feedback and a more neutral longwave feedback for a net positive feedback. However, in my interpretation, if they had included decreasing aerosol as part of the forcing, which would have reduced the imbalance, Y would have been reduced leading to a larger sensitivity, S. The shortwave seems to correlate with the imbalance, so it seems plausible some of it could contribute to the forcing rather than just being the response.

Their radiation data also excludes areas poleward of 60 degrees, which may also underestimate sensitivity.

Interesting point, although not really bearing on what the IPCC did to Forster & Gregory’s results. However, Forster & Gregory’s regression period was 1985-1996. I wouldn’t expect a dip in the aerosol forcing in the middle of that period to have much effect. They state that tropospheric aerosol forcing in the datasets analysed (Myhre et al 2001) changed very little over 1985-1996, which I would have thought was the most important thing.

Areas poleward of 60 degrees are a fairly small proportion of the total area.

Jim D,

Just for accuracy, they do not say that aerosols were neglected. They drew forcing data from three separate sources and ran sensitivity analyses on them. Their actual comment was:

“The tropospheric aerosol direct and indirect radiative forcings were found not to impact our regression. Despite potentially large absolute errors in these forcings, their impact on our analysis is likely to be small, as the tropospheric aerosol forcing in the datasets analyzed changed very little over 1985–96 (Myhre et al. 2001).”

Paul

This result is very sensitive to how the imbalance is calculated, and that in turn depends on how the forcing is calculated. I am fairly sure their error bars don’t take these a priori uncertainties into account, but they have a lengthy discussion that says aerosols could conceivably have a large effect on their results if correlated with surface temperature.

I also wonder how a one-year response can be regarded as equilibrium climate sensitivity. Surely thermal inertia in the ocean means this will significantly underestimate the sensitivity to long-term forcing. That is, a sustained forcing will have a larger amplitude response than a multi-year-scale oscillating forcing of the same magnitude.

No: that is the beauty of using top-of-atmosphere radiative balance data – it automatically reflects the flow of heat into the ocean, so thermal inertia of the oceans is irrelevant to the estimate of equilibrium climate sensitivity that it provides, unlike with virtually all other instrumental methods.

The surface temperature is very sensitive to thermal inertia, so I don’t see how this method is immune to it. If you looked at daily surface temperature and forcing changes, you would get a very low sensitivity because several hundred W/m2 over a diurnal cycle only corresponds to about a one degree C change in sea surface temperature, making Y very high and sensitivity low. Annual scales are not long enough to completely remove inertial effects like this.

I’m confused. As far as I have read, nobody is disputing the laws of Kirchhoff, Planck, or Stefan-Boltzmann. These really tell us that if the earth is in radiative thermal equilibrium, then it *must* radiate more if there is additional earthbound heat generation (fuel burning). I haven’t seen papers on a time scale for re-equilibration.

“There is preliminary evidence of a neutral or even negative longwave feedback in the observations, suggesting that current climate models may not be representing some processes correctly if they give a net positive longwave feedback.”

This paper suggests there should be a net negative longwave radiative feedback in response to rising temperature, which is perfectly compatible with the known and agreed physical laws.

mrsean2k remarked July 5, 2011, 9:31 am

The analysis by Nicholas Lewis arrives at a mean non-model based 2xCO2 climate sensitivity of 1.6°C (with some caveats that including the impact of clouds would probably lower this value).

This compares with a mean model-based value used by IPCC of 3.2°C (so the comment is spot on).

But let’s do a real rough check, based on the HadCRUT surface temperature record, the Mauna Loa measurement of atmospheric CO2 (after 1958) and the IPCC estimated CO2 level based on the Vostok ice cores (prior to 1958):

The linear increase in surface temperature from 1850 to today was 0.66°C (0.041°C per decade).

Atmospheric CO2 concentration was 290 ppmv in 1850 and is 390 ppmv today.

IPCC AR4 WG1 tells us that the all anthropogenic forcing components except CO2 (aerosols, other GHGs, land use changes, other changes in surface albedo, etc.) have essentially cancelled one another out, so we can use the estimated radiative forcing for CO2 (1.66 W/m^2) to equate with total net anthropogenic forcing (1.6 W/m^2).

IPCC further estimates that total natural forcing (solar) was 0.12 W/m^2, or 7% of the total natural + anthropogenic forcing.

We can now calculate:

C1 (1850) = 290 ppmv

C2 (2011) = 390 ppmv

C2/C1 = 1.345

ln(C2/C1) = 0.2963

dT (1850-2011) = 0.66°C

dT(290-390 ppmv) = 0.93 * 0.66 = 0.61°C

ln2 = 0.6931

dT(2xCO2) = 0.61 * 0.6931 / 0.2963 = 1.4°C

This simple calculation gives a pretty close check.
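Max’s back-of-envelope check can be reproduced directly; the 93% anthropogenic share is 100% minus the 7% solar share he cites from the IPCC:

```python
import math

# Reproducing the comment's rough climate-sensitivity check.
dT_total = 0.66                                # °C warming, 1850 to 2011
dT_co2 = 0.93 * dT_total                       # ~0.61 °C attributed to anthropogenic forcing
ratio = math.log(390.0 / 290.0)                # ln(C2/C1) ~ 0.2963
sensitivity = dT_co2 * math.log(2.0) / ratio   # scale up to a full CO2 doubling
print(round(sensitivity, 1))  # 1.4 °C per doubling, matching the comment
```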

Of course, if we accept the conclusion reached by several solar studies that around half (rather than only 7%) of the observed 20th century warming can be attributed to the unusually high level of solar activity (highest in several thousand years), we arrive at a lower 2xCO2 climate sensitivity. (Shapiro 2011, Scafetta + West 2006, Solanki et al. 2004, Shaviv + Veizer 2003, Lockwood + Stamper 1999, Geerts + Linacre 1997, Gerard + Hauglustaine 1991, among others).

Max

Because the earth has clouds whose behavior varies, atmospheric moisture is not uniform or constant, and surface albedo changes constantly, it is possible to have either amplification or damping of the theoretical CO2 effect (or both, via different processes). This is why the sensitivity is so much in dispute and so hard to measure.

Dr. Loehle,

What of this paper? It suggests that we put about 300 GT of H2O into the atmosphere yearly, mainly from cooling processes. Would this non-natural increase in water vapour not also affect the ability to figure out the sensitivity of our climate?

http://www.timcurtin.com/pdfs/econometrics_science_climate_change_15_May_2011.pdf

I would suggest that the amount of water we put in the atmosphere from irrigation is huge, but the amount NOT put in the atmosphere due to human use and urban impermeable surfaces is also large, and I have not looked into the numbers. These things (plus land use change) would complicate the ability to detect sensitivity ONLY if they change during the data evaluation period.

Wouldn’t human impermeable surfaces ensure more water is eventually evaporated into the atmosphere?

Then there is this multiplied :

“Withdrawals from the Ogallala Aquifer for irrigation amounted to 26 cubic km (21 million acre feet) in 2000. As of 2005, the total depletion since pre-development amounted to 253 million acre-feet (312 cubic km).”

“the amount NOT put in the atmosphere due to human use and urban impermeable surfaces is also large” – what does that mean? Asphalt? Many urban areas have a great deal more H2O than they would have had in the past, especially suburban areas in drier, wealthier countries. One need only look at satellite imagery to see human-made green areas right beside brown natural areas.

Also, doesn’t it complicate sensitivity for the projection period rather than the “data evaluation period”? Or is that what you meant?

Yes, urban areas in dry regions may give off more water. Humans also harvest crops/trees, thereby reducing evapotranspiration. And we alter albedo. None of these things happens or changes much on the time scale of the analysis in question here.

I am from the prairies, where irrigation has made the land drastically greener than it was 100 years ago. When you plant, you till the soil, which also dries it out. When the plants grow, you water them as much as you can. When you harvest, you cut down the plants, and often you leave them for a while to dry out. Land that doesn’t have crops will usually have grass that can be used for grazing or hay, or nothing. Basically, the more you get from the land, the more water it uses. The more crops you have, in general, the more water is being used. Perhaps it is different elsewhere. In forestry, my understanding is that when they remove larger trees this exposes the plants underneath to sun and wind, and the foresters replant with younger seedlings; I don’t think this increases water flow in the forests’ watersheds, but I could be wrong.

re: time scale – So the normal climate time scales of 30 years are not a part of this?

Does anybody know of any “correction” ever applied to climate data in whatever situation by the IPCC authors and/or other mainstream climatologists, that has resulted in an underestimating of the dangers of global warming?

AR4 was riddled with errors because the IPCC went overboard trying to scare people with their (unfounded) assumption about CO2 and temperature increase.

The TAR was far more objective – and a careful reading of it shows that the evidence taken in support of the “relation” between CO2 and “climate” demonstrates the exact opposite. Other conclusions were drawn, of course – principally in the Summaries for Policy Makers, wherein no conclusion presented had any relation to what was presented in the Assessment Reports.

The IPCC revision of science is nothing new – they have no reason not to do to other people what they do to themselves.

Besides Rajendra Choo Choo Pachauri, is there anyone who retains a bit of credibility in the IPCC?

The flat prior PDF is an example of a non-informative prior; it is non-informative about the numerical value of the equilibrium climate sensitivity within the range in which the probability density is constant and not nil. However, it can be proved that infinitely many non-flat priors exist that are equally non-informative. In this way, the argument fails that the flat prior shall be selected for use because of its non-informativeness. The selection of the flat prior is arbitrary, yet the selection of this prior is a premise in the IPCC’s argument for CAGW. This premise is false, by the law of non-contradiction. It may be concluded that the IPCC’s argument is unproved.

More than a century ago, the logician George Boole opined that selection of the flat prior PDF was arbitrary. Boole’s followers went on to discover an alternative to the Bayesian argument that needed no prior PDF. This alternative came to be known as “frequentism.” However, frequentism had the shortcoming of failing to support induction, the process by which one generalizes. Under frequentism, scientists cannot take the crucial step of generalizing from their observational data.

There is a loophole in Boole’s conclusion. This loophole appears in the circumstance that the prior PDF is defined over the limiting relative frequency of statistical events of a particular description. In this circumstance, the set of priors that are non-flat and non-informative can be shown to be empty. Thus, the flat prior is uniquely the uninformative prior. Through the exploitation of this loophole climatologists could, if they wished, help us to avoid basing costly policy decisions on unproved and unprovable conclusions.

When we can define the flat prior in different variables, how could the loophole be of any help?

Attempting to use the loophole is just a way of creating false impressions.

The way the empirical data can be obtained determines the form of the PDF that summarizes the results of the experiment. There is absolutely no general reason to expect that the variable that allows the simplest summary of the experimental data is the same one that is the most natural choice for the flat prior. In this case the empirical data can be summarized by a normal-distribution PDF in Y. This is not a valid argument for saying that a flat or smooth distribution in Y is the most natural or least informative prior.

My view is that it’s indeed the most natural prior, but the reason lies in the theoretical physical view of the related processes.

Thank you Nic Lewis. A good catch and a well written article.

After AR3 was published, the IPCC’s range of ECS came in for quite a lot of criticism, with two principal complaints, IIRC. Firstly, that it was subjective to the point of being arbitrary; and secondly, that the report quoted (many times) that climate sensitivity was likely to be in the range ΔTx2 = 1.5 to 4.5 K, and yet the CMIP3 models spanned a range from 2.1 to 5.

The AR4 acknowledges that the previous range came from expert opinion, and then seeks to show that there is “independent” evidence, outside the GCMs, which supports a revised range of 2.0 to 4.5. I have always presumed that the elimination of the 1.5 to 2.0 interval was seen as critically necessary to avoid (further) embarrassing questions about the mismatch between the cited likely range and the range apparent in the models used to make the AR4 projections.

Against this backdrop, the Forster and Gregory results must have presented the IPCC with a dilemma.

On the one hand the results represented an important plank in the argument that independent observational-based evidence existed, and therefore needed to be included. On the other hand, they gave the wrong answer from the perspective of the IPCC.

“On the one hand the results represented an important plank in the argument that independent observational-based evidence existed, and therefore needed to be included. On the other hand, they gave the wrong answer from the perspective of the IPCC.”

Don’t you just hate it when that happens?

“Pekka Pirilä | July 5, 2011 at 5:11 pm | Reply

I don’t believe that such priors are rare at all. Some of the well known basic examples of Bayesian approach contain such priors. To give just one example: The case of diagnosing a very rare disease.”

As in the estimates for the 100,000 new cases of vCJD, every year, that would result from eating beef from cows infected with BSE?

You do know that such models fail more often than they succeed, don’t you?

Have you ever talked to people at the coal-face (medics, virologists, microbiologists and biochemists) about what utility they place on these models?

Ever been down to the CDC or London School of Hygiene and Tropical Medicine and seen how the professionals do it?

They guess based on previous patterns.

The example is related to the fact that for a very rare disease, a highly selective diagnostic method may appear to confirm the disease even though the probability that the diagnosis is correct remains very low.

If the prior probability is 1/1,000,000 and the false-positive rate of the method 1/1,000, the probability of the disease is still only 1/1,000 when the method gives a positive result.

This is a simplified example, but the same basic phenomenon is commonplace in very many fields of application. It shows why using the Bayesian approach is really essential in some problems. There are also very many problems where the Bayesian approach is of little significance, as it doesn’t change the results at all, or only very little.
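Pekka’s numbers check out under Bayes’ theorem. A minimal sketch (assuming, for simplicity, a test that always detects true cases; the 1/1,000,000 prior and 1/1,000 false-positive rate are taken from the comment above):

```python
# Bayes' theorem for a rare-disease diagnostic test.
# prior: probability a random person has the disease
# fpr:   probability the test is positive for a healthy person
# Assume (simplification) the test always detects true cases.
def posterior(prior, fpr, sensitivity=1.0):
    p_positive = sensitivity * prior + fpr * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(1e-6, 1e-3)
print(p)  # roughly 1/1000, matching the comment
```

The “selectivity” of 1/1000 acts as the false-positive rate, so a positive result only raises the probability from one in a million to roughly one in a thousand: still overwhelmingly likely to be a false alarm.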

Kind of like meterologists who sell their predictions for a living use ocean cycles to predict the weather?

Unless I missed something, Forster and Gregory do nothing to correct for serial correlation in the error. In the appendix, they describe (without using this phrase) a nonparametric bootstrap to estimate standard errors, but the method used is inappropriate in the presence of serial correlation.

It isn’t difficult to correct for serial correlation in the errors, and failing to do so leads to misleadingly precise estimates.
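The effect is easy to demonstrate with a quick simulation sketch (my own illustrative parameters, not Forster and Gregory’s data): fit OLS trends to series with AR(1) errors and compare the actual scatter of the slope estimates with what the i.i.d.-error formula reports.

```python
import random, statistics

random.seed(0)

def ols_slope_and_se(x, y):
    # Classic OLS slope and its standard error under the i.i.d.-error assumption.
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    se = (rss / (n - 2) / sxx) ** 0.5
    return b, se

n, rho, true_slope = 100, 0.8, 1.0
x = list(range(n))
slopes, naive_ses = [], []
for _ in range(2000):
    # Generate AR(1) errors with autocorrelation rho:
    e, eps = 0.0, []
    for _ in range(n):
        e = rho * e + random.gauss(0, 1)
        eps.append(e)
    y = [true_slope * xi + ei for xi, ei in zip(x, eps)]
    b, se = ols_slope_and_se(x, y)
    slopes.append(b)
    naive_ses.append(se)

true_sd = statistics.stdev(slopes)          # actual sampling variability
mean_naive_se = statistics.mean(naive_ses)  # what the i.i.d. formula reports
print(true_sd, mean_naive_se)               # naive SE is substantially too small
```

With rho = 0.8 the naive standard error understates the true variability by roughly a factor of three, which is exactly the “misleadingly precise estimates” problem.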

Interesting that the previous post here “An explanation(?) for lack of warming since 1998” refers to a model that does address serial correlation (being based on Kaufman, A., H. Kauppi, and J. H. Stock, 2006: “Emissions, concentrations and temperature: a time series analysis.” Climatic Change, 77, 248-78.). As I noted on that thread “Polynomial Cointegration Tests of Anthropogenic Impact on Global Warming” Michael Beenstock , Yaniv Reingewertz and Nathan Paldor (not published) criticised Kaufman et al for getting spurious regression in data which are nonstationary to different orders.

Would be interested in your view on that state of the debate here and the applications of these techniques to global climate time series (assuming you have been following it).

Just as an aside I find it curious that all the comment on the previous thread seems to relate to discussion over the conclusions, rather than having a hard look at the assumptions in the statistical modelling that it is all built upon.

Isn’t having a Uniform Prior the same as having no Prior at all?

Or, in other words, isn’t the Uniform Prior the Prior of absolute ignorance- not even hazarding a guess?

Orkneygal:

Perhaps you are sensing (correctly) that the uniform prior is not uniquely maximally uninformative. Thus a prior under which the probability of a low equilibrium climate sensitivity is high is also a possibility.

@Orkneygal

You have to pick an upper and a lower bound for your uniform prior. You can’t pick -infinity to +infinity, since that is not a proper distribution, so what do you pick? If you pick 0 to 18.5, you are assuming that, without any prior knowledge, you believe the median climate sensitivity is 9.25 deg C per doubling.

OT: but this issue is related to a brain teaser…

Suppose you know there are two bills in a hat and that one bill is twice the size of the other bill. You draw one bill from the hat and it is a $10 bill. You are then given the option to exchange your $10 bill for the other bill in the hat…do you make the exchange?

Based on expected values, you should make the exchange. The expected value of the trade is 0.5*5 + 0.5*20 = 12.50, given the other bill is half the time a $5 bill and half the time a $20 bill.

Now play the game again, but this time have two pieces of paper in the hat with numbers written on them…one number twice the other number. You draw out one piece of paper and DO NOT LOOK at the amount written on the paper. (call the amount X) You are given the ability to trade. Do you trade? Why or why not?

Hint — it seems like the same logic would apply. Your expected value is 1.25X, but how could it be right to trade a bill you didn’t even look at?!

When you can figure out this puzzle you will understand the issue with uniform priors :)

Negative numbers?

hmm. interesting clarifying question. If I allow negative numbers, does your answer change? (e.g. you can’t look at the bill, but do you still want to change or not?)

If you want, you can answer assuming negatives are allowed or are ruled out.

-J

Well with negative numbers allowed, there’s certainly no reason to switch (I think) because it’s as likely that your +5 becomes a +10 as a -5 becomes a -10.

Absent negative numbers, I’m having a hard time seeing how the latter case is different from the former. In the former, you make the decision to swap out the bill regardless of its value. It’s not like the Monty Hall problem because you are given no information before being asked to swap.

@Brad

I don’t want to belabor the point, especially since this thread is long and hard to follow, so here is why I brought up the example.

Let’s restrict ourselves to positive numbers since that makes the point.

With an “uninformed prior,” you quickly decide that you want to exchange your bill for the other one in the hat. The problem is that this makes no sense. Why would you exchange your bill sight unseen for the other bill? If you wanted the other one, just take it. If you took the other one, you would want to exchange it for the original…madness!

The problem is you are assuming that any number you pull out of the hat creates an equal chance that the other bill is one half the value or twice the value. This is effectively having a uniform prior distribution between 0 and infinity. However, that is not a proper probability distribution (does not integrate to 1).

If you want to have a uniform distribution, you MUST identify an upper bound on the distribution. With an upper bound specified, seeing the value of the bill now lets you decide if you want to exchange or not.

For example, suppose the upper bound on your prior is $1000. Now if you see a bill that has a value greater than $500, you do not want to exchange. There is no chance the other bill is twice X.

Bottomline: it seems plausible that you can answer the question without a prior, but the answer is nonsense. You must have a prior, and it dramatically impacts your answer.

-J
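A quick Monte Carlo sketch of the bottom line, assuming (as in the $1000 example above) that the smaller amount is drawn uniformly from (0, 500) so no bill exceeds $1000:

```python
import random
random.seed(2)

N = 200000
never = always = threshold = 0.0
for _ in range(N):
    a = random.uniform(0, 500)          # smaller amount; prior bounded so 2a <= 1000
    pair = (a, 2 * a)
    drawn = random.choice((0, 1))
    seen, other = pair[drawn], pair[1 - drawn]
    never += seen
    always += other
    # With the bounded prior, seeing a value above 500 proves you hold the
    # larger bill, so only switch at or below the threshold:
    threshold += other if seen <= 500 else seen

print(never / N, always / N, threshold / N)
```

Always-switch and never-switch come out identical, as symmetry demands; only the strategy that uses the prior’s upper bound does better, which is the whole point about needing a proper prior.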

J,

Thanks for the explanation. I understood the meaning of your paradox from the original message and was quite puzzled as I realized that swapping made no sense at all, even though it seemed like the correct answer (assuming no negative numbers).

Thanks for the explanation, I’ll try to adjust my priors now.

Actually I would think it goes with the fact that not all denominations exist. Not sure for US$, but for Swiss francs, existing denominations are 10, 20, 50, 100, 200, 1000. If we know that one note is double the other, we know no 1000 note can be hidden. So possible denominations are 10, 20, 50, 100, 200. A uniform random choice has an expected value of 76.-

The payoff from switching is:

100% 10.- if having a 20.-

100% 20.- if having a 10.-

100% 100.- if having a 50.-

50% 50.- and 50% 200.- if having a 100.-

100% 100.- if having a 200.-

Assuming a uniform probability for the first draw, the expected value of switching is 71.-

So you should not switch without checking the banknote; and if checking it, switch only if you have a 10.-, a 50.-, or a 100.-

This explanation does not explain anything if you assume you can have any number written on your piece of paper, of course…
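The arithmetic above can be checked directly (denominations as given in the comment; 1000 is excluded because neither a 500 nor a 2000 note exists):

```python
# Check the Swiss-franc arithmetic: only 10..200 can appear if one note
# is double the other.
notes = [10, 20, 50, 100, 200]
keep = sum(notes) / len(notes)
print(keep)  # 76.0

# Value after switching, for each note you might be holding:
# 20 can only pair with 10; 10 with 20; 50 with 100; 200 with 100;
# 100 pairs with either 50 or 200, equally likely.
switch_value = {10: 20, 20: 10, 50: 100, 100: 0.5 * 50 + 0.5 * 200, 200: 100}
switch = sum(switch_value[n] for n in notes) / len(notes)
print(switch)  # 71.0
```

So the 76.- and 71.- figures in the comment are exactly right for a uniform first draw.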

That problem sounds quite like Bertrand’s paradox.

junkink, You said that, “You have to pick an upper and a lower bound for your uniform prior. You can’t pick -infinity to +infinity since that is not a proper distribution, so what do you pick?”

Suppose this was a tactical situation where you have just been told that all the intelligence you had been relying on for operations and security was compromised, and none of it was usable. What would scientists do… debate how bad it really was? Form a new committee to review how we got here? Call in a nuclear physicist who has won the Nobel Peace Prize?

You all, need to secure your base of operations. Now. Find any camp spies and interrogate them… yourselves. Talk about a brain teaser… How much bad data does it take to stop scientists on a quest? At what point will all of science be seen as, a simple laughing-stock? AGW stuff, Drugs=Good, then; Bad & Ugly(Niacin-recent), pick any year; DARPA’s speed of light prize(new)… Orion Too (which Mr. Dyson said himself, ‘it was never really going to be practical’), a few billion on the Atomic Bomber Project, you know ‘The Bomber That Will Never Need to Land’ & powered with atomic engines…give us a break. Please.

You are not like congress; & we can’t vote for programs we think make sense. If you aren’t careful, the Chinese will cut our budget next year.

We don’t have to worry now… the UN will make up the diff.

http://www.un.org/en/development/desa/policy/wess/wess_current/2011wess.pdf

It sounds like you are experiencing the “madness” part of the problem. I hate to inflict this on people because it caused me a lot of brain pain, but it does eventually help you understand the importance of priors!

Your reply is more baffling to me than the brain teaser. :) As far as I can see I have answered the teaser. The problem lies with the lure of the “expected value”, which can be shown to be a meaningless entity (see above).

I still have no idea what a prior is, or what you would do with one, other than to give him or her a priory.

I’m confused. If you drew out a $10 bill and you knew there was a 50% chance the other bill was $5 and a 50% chance the other bill was $20, you would NOT make the trade?

If you do not make the trade, I want to gamble with you :)

If you do make the trade, how are things different when you do not look at the value of the bill?

-J

Then I will gamble with you!

The value on the papers makes no difference, it is still a 50/50 bet. 50% chance to win, 50% chance to lose. Simple as that.

The idea that you should go for the trade because of an “expected” 12.50 is a mirage. It is still 50/50.

If you have a 50% chance each of losing $5 or winning $10 – you wouldn’t trade your bill in?

When does the game start?

Joshua,

Run the numbers. Try it with a random number generator (on your calculator) two pieces of paper and a pencil. There is no “expected value” because X (or Y) is random. I will play this game with you if you like, as defined by Junkirk above. In the long run we will both end up even.

I’m going to respectfully back out of this one.

Actually, the random number generator on a calculator wouldn’t work, cos they choose numbers within a range. Some sort of jiggery pokery would be required.

You’d be awfully easy to chase out of a poker hand. Care to give me your money, er, play?

Rattus,

Not sure what you are replying to as we’ve run out of indents. But as it is only a bit of macho posing, and doesn’t involve any mathematical discussion of the subject, I’ll move on.

I came into this sub thread late and only read a couple of comments upthread, so it appears as though I missed the definition of the bet. As I understood it at this point the bet was you have $5 and you had a 50% chance of losing $5 and a 50% chance of winning $15. If you wouldn’t take that bet then my point holds.

If instead the problem was there is a pot of 3 bills $5, $10 and $20 and you pick one bill at random out of the pot, that changes things. If you

1) Pick the 5 then you have a 100% chance of winning, take the trade.

2) Pick the 10 then you have a 50% chance of losing 5 and a 50% chance of winning 10. Take the bet.

3) Pick the 20 then you have a 100% chance of losing, pass.

The situation in poker (I’m thinking only open face games, stud and hold ’em variants) is more complex. You need to know what the odds of filling your hand are and whether or not this hand will or will not be the best hand (your prior). You then need to know what the odds being offered you by the pot are. If the pot odds are better then you take the bet, if not, then fold.

Of course if your hand is filled and it is a lock, then of course you bet. The point is that you have to take into account the expected gain from the pot vs. your hand filling and being the best. This aspect of the game is what makes it fun and a great game of skill.

This seems to be related to the Monty Hall paradox, and if that is the sort of information you have available, you should always take the choice.

Junkink’s solution to his teaser assumes X has been chosen from a choice of only X/2, X and 2X. (X/2 is not a possible double; neither is 2X a possible half, since there is no X/4 or 4X). According to its defined terms the puzzle is therefore nugatory. The Monty Hall paradox appears rather less so until you imagine it played with 100 doors instead of 3.

Suppose prize is behind door 77. You guess door 39. Quizmaster (who knows where prize is) then opens 98 other doors showing they’re empty. Chances of prize are (blindingly obviously) 1/100 in 39 and 99/100 in 77.
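The 100-door version is easy to simulate. A minimal sketch (it uses the shortcut that, after the host opens the 98 empty doors, switching wins exactly when the first guess was wrong):

```python
import random
random.seed(3)

def play(n_doors=100, switch=True, trials=20000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        guess = random.randrange(n_doors)
        if switch:
            # The host opens 98 empty doors; the one remaining closed door
            # holds the prize unless your first guess was already right.
            wins += prize != guess
        else:
            wins += prize == guess
    return wins / trials

print(play(switch=False))  # ~0.01
print(play(switch=True))   # ~0.99
```

Sticking wins about 1% of the time, switching about 99%, just as the comment says.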

@James Evans

I think you are playing the full game from the beginning and determining that each strategy (exchange or keep) must yield the same result by symmetry. That is correct.

However, repeated play of the game:

I have $10 and I can exchange for a coin flip of $5 or $20

Does indeed yield a positive expected gain of $2.50. If you are offered this choice you should choose to exchange the $10 for a positive expected gain.

The trick is that something happens when going from the full game to the sub-game. The trick has to do with what you assume are the possible sub games that can be played (see my discussion with Brad above).

-J

Junkirk,

But the value of the first paper isn’t always $10. If it was, then you would be right. It’s a no-brainer. But you didn’t pose that teaser. If you know that the only possible values on the papers are 5, 10, 20 – that is a completely different problem. If the number on the paper that you draw from the hat is random (as posed in your teaser) then there is no long term “expected” value of what the other papers say.

P.S. As you see, I am playing with the “random values written on paper” game. The “bills” game is obviously not quite the same, as there is no $2.50 bill, or $40 bill etc. And that obviously changes things.

Play any subgame you want.

If you have the choice between $X with 100% probability or equal chance of $0.5X and $2X. You gain in expectation by taking the 50/50 gamble.

The conundrum is that you know you shouldn’t switch, but every time you select a bill and do the analysis, the analysis suggests you should switch.

The fault lies in the assumption that no matter what value of X you drew, there is a 50% chance of 0.5X and a 50% chance of 2X. This is NOT a valid assumption. To have that outcome, you would need a uniform prior over an infinite domain, which is not possible.

Once you describe the priors, then it matters greatly the bill you get because not all draws result in a 50/50 chance of half or double.

To convince yourself, pick a subgame and play it. If you assume an even chance of half or double, you will find you want to switch. The key issue is why the big game doesn’t turn into the sub-game after selection of the first piece of paper.

Sorry Judith for so much derailment! I’ll bow out now as well :)

-J

Junkirk,

“The fault lies in the assumption that no matter what value of X you drew, there is a 50% chance of 0.5X and a 50% chance of 2X. This is NOT a valid assumption.”

Actually, this is trivially true, given the “two pieces of paper with one random number double the other” teaser that you posed.

Put it this way: If you have a six-sided die, then the “expected” value of a roll is 3.5. If you have an infinite-sided die there is no “expected” value, ever. In the long term the results will never settle down. This is the nature of the brain teaser that you posed. As you said originally – you set no limit to the possible random number on the paper, which screws up some intuitive responses to the problem.

All IMHO.

And come to think of it, that refutes my answer to Joshua. Our game wouldn’t come out even in the long run… It would be a random result.

OK. I’ll shut up now.

classic

P.S. Whether you look at the paper or not makes no difference.

You have only 2 samples so the mode is equal.

The total is of no interest so the mean is not of interest.

The distribution is equal also so the median is equal.

Therefore, it doesn’t matter what you choose.

Nic writes “Given Forster & Gregory’s regression method and observational error assumptions, the error (and hence probability) distribution for the resulting slope coefficient estimate can be derived from frequentist statistical theory, as used in science for many years. ”

Correct and central to the discussion. Why use a more trendy Bayesian approach that includes uncertainties instead of a tried and tested approach, when you do not have to?
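For anyone wanting to see the “tried and tested” approach in action, here is a minimal simulation sketch (illustrative numbers, not F&G’s data): the scatter of the OLS slope estimates matches the textbook frequentist formula sigma/sqrt(Sxx), with no prior anywhere.

```python
import random, statistics
random.seed(4)

# Frequentist theory: for y = a + b*x + N(0, sigma^2) noise, the OLS slope
# estimate is normal with standard error sigma / sqrt(sum((x - xbar)^2)).
x = [i / 10 for i in range(50)]
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)
sigma, b_true = 0.5, 2.3

def ols_slope(y):
    ybar = sum(y) / len(y)
    return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx

slopes = []
for _ in range(5000):
    y = [b_true * xi + random.gauss(0, sigma) for xi in x]
    slopes.append(ols_slope(y))

theory_se = sigma / sxx ** 0.5
print(statistics.mean(slopes), statistics.stdev(slopes), theory_se)
# sample mean of the slopes ~ 2.3; their scatter ~ the theoretical SE
```

The sampling distribution of the slope follows directly from the error assumptions; no choice of prior is needed, which is exactly Nic’s point.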

There is still too much emphasis on CO2, or GHGs in general, and too little attention paid to multi-year lags in the balance of incoming versus outgoing. People have been confused in their treatment of static versus dynamic processes ever since I started reading about global climate around 1993.

Craig Loehl and max,

The figure of 1.6 deg C, for 2x CO2, is, according to Nic Lewis’s article, the median, not the mean or average as you’ve claimed. The median is only the same as the mean if the distribution is symmetrical, which it clearly isn’t. So the average value (defined as the value which is just as likely to be too high as too low) would look to be just over 2 deg C.

It’s slightly lower than the IPCC estimate but not outside their range. It’s interesting that F&G’s empirical result is accepted, or at least its “beauty” is, by hardened deniers like Manacker!

Except that the median represents the data set better here. Isn’t that the gist of it? The mean is not as meaningful.

CL and Manacker agree with you about the use of the word ‘median’; why would they use the terms mean or average?

I would agree with them both that the average is the more appropriate term in this case. It takes into account the asymmetry of the distribution. However, it’s significantly higher than the 1.6 degC claimed.

Tt, it is well known that using the mean without regard to the skewness of distribution is a common mistake of novices. Why do you prefer the mean in this case?

tempterrain

Forget the “hardened deniers” ad hom – otherwise I’ll refer to you as a “gullible believer”.

But back to the 2xCO2 CS.

The observed warming and change in atmospheric CO2 since 1850 confirm a 2xCO2 CS of 1.4C.

The F+G data (observation rather than model-based) showed a median of 1.6C, which IPCC then “adjusted” upward to arrive at a model-based mean value of 3.2C.

“Believe” what you will, tempterrain, but that is what the analysis by Nic Lewis shows.

If you can find errors in the analysis or methods used, please speak up.

Max

You’re still confusing ‘mean’ and ‘median’. It is quite possible for the median of a distribution to be 1.6 but the mean (average) to be 3.2

If you were a betting person and had to choose the right number, on the basis that you’d be just as likely to be too high as too low, then you’d choose the average.

I’m not surprised at you getting this wrong, or more likely, deliberately misrepresenting the information to obtain as low a number as possible, but I am surprised at Craig Loehl doing the same without acknowledging the ‘slip’.

I don’t believe I have even used the words mean or median, but will be glad to clarify if someone can point me to my statement.

I will say I have no opinion on whether the F&G paper is “right” but I am quite sure that uniform priors on S are wrong. If you are estimating Y that is where your confidence limits are computed and if you then divide by Y you can transform those limits in well-known ways. The distortion in the uncertainty bounds from how IPCC altered the F&G result is simply wrong and represents “creative statistics”.
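Those “well-known ways” are just the monotone change of variables: quantiles of Y map directly to quantiles of S = 3.7/Y, with no reweighting or prior needed. A sketch with illustrative numbers in the spirit of Forster/Gregory 06 (Y about 2.3 with a 95% interval of roughly 0.9 to 3.7 W m^-2 K^-1; these are my stand-in values, not quoted from the paper):

```python
# Transforming confidence limits for the feedback parameter Y into limits
# for sensitivity S = 3.7/Y. Because the map is monotone (decreasing),
# quantiles carry over directly; the upper S limit comes from the lower Y limit.
y_median, y_lo, y_hi = 2.3, 0.9, 3.7   # illustrative values

s_median = 3.7 / y_median
s_hi = 3.7 / y_lo
s_lo = 3.7 / y_hi
print(round(s_lo, 2), round(s_median, 2), round(s_hi, 2))
```

This reproduces a median sensitivity of about 1.6°C with a right-skewed interval, entirely from the frequentist limits on Y.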

Craig ,

“Thus the mean 1.6 deg C is probably closer ………”

July 5 at 9.22

No, tempterrain, you are wrong. The distribution is skewed and you don’t understand indices of central tendency. This is how you choose the more meaningful index:

1. If the data is categorical, you choose the mode; otherwise
2. if the total is of interest, you choose the mean; otherwise
3. if the distribution is skewed, you choose the median; otherwise choose the mean.

In this case the distribution is skewed, so you choose the median.
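Rule 3 is exactly the situation here: S = 3.7/Y is right-skewed even when Y is symmetric. A quick sketch (illustrative Y distribution; the parameters are my own choice, not F&G’s):

```python
import random, statistics
random.seed(5)

# A right-skewed sample: the reciprocal of a roughly normal positive variate,
# the same shape issue as S = 3.7/Y.
ys = [random.gauss(2.3, 0.5) for _ in range(100000)]
s = [3.7 / y for y in ys if y > 0.3]   # guard the extreme tail so 1/y stays finite-ish

print(statistics.median(s))  # close to 3.7/2.3 ~ 1.6
print(statistics.mean(s))    # pulled upward by the long right tail
```

The mean sits above the median purely because of the long right tail, which is why the median is the better single summary for such a distribution.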

No, I don’t think I’m wrong. The long tail of the distribution does need to be taken into account, particularly as regards its policy implication. If we are so unlucky that climate sensitivity is at the top end of the range, i.e. in the long tail, the outlook is going to be particularly bleak.

However we can at least recognise the existence of the long tail by taking the average of the whole distribution rather than the median value.

Annan and Hargreaves 2009 noted the impact of the long tail on policy…

tempterrain, I think you’re wrong on this too. That the mean and median are different tells you something about the distribution (that it is skewed, but there are better indicators of skewness than this), but one doesn’t use mean over median to “recognise the existence of the long tail by taking the average of the whole distribution”.

That’s just a completely bollixed up argument.

(Instead, you use confidence levels.)

My bad.. make that confidence interval.

OK. No problem with confidence limits, levels, or intervals. So what would you say these were, according to F & G?

My first thought on reading the FG paper was, how well could they derive equilibrium climate sensitivity from such a short time period (15 years)? I wondered if the IPCC adjustment accounted for that.

FG have published a number of papers since then, including their 2008 joint effort, “Transient climate response estimated from radiative forcing and observed temperature change.” They conclude:

But in FG 06, the independent variable in the regression was changes in surface temperature T, which had a very small uncertainty compared with the uncertainty in the dependent variable. So OLS regression was, as FG concluded, applicable in FG06.
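The point about the independent variable’s uncertainty can be shown with a quick simulation sketch (illustrative numbers only): OLS is fine when the regressor’s error is small, but noise in x attenuates the fitted slope toward zero ("regression dilution"), which is why it mattered that FG checked this before using OLS.

```python
import random, statistics
random.seed(6)

# OLS assumes the regressor is (nearly) error-free. Noise in x biases the
# fitted slope toward zero by the factor Var(x) / (Var(x) + Var(x-error)).
def sim_slope(x_noise_sd, trials=400, n=200, b_true=2.0):
    slopes = []
    for _ in range(trials):
        x_true = [random.gauss(0, 1) for _ in range(n)]
        x_obs = [xt + random.gauss(0, x_noise_sd) for xt in x_true]
        y = [b_true * xt + random.gauss(0, 1) for xt in x_true]
        xbar = sum(x_obs) / n
        ybar = sum(y) / n
        sxx = sum((xi - xbar) ** 2 for xi in x_obs)
        slopes.append(sum((xi - xbar) * (yi - ybar)
                          for xi, yi in zip(x_obs, y)) / sxx)
    return statistics.mean(slopes)

print(sim_slope(0.05))  # x almost error-free: slope ~ 2.0, OLS is fine
print(sim_slope(1.0))   # x-error as large as x itself: slope biased toward ~1.0
```

With a tiny x-error the slope is essentially unbiased; with a large one it is halved, so FG’s conclusion that OLS was applicable rests on the smallness of the temperature uncertainty.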

It may not be relevant but it should be pointed out that the changes in surface temperature during the period in question have a very large uncertainty. I don’t think we know what they are, so we can’t really regress them. But this is a digression in a debate that assumes known values for many unknowns going in, as most climate debates do.

One of the most interesting things about the climate debate is that in one place it involves people arguing about point A, by assuming that B is well known, while not far away people are hotly debating B.

Forster & Gregory did perform quite a careful error/ uncertainty analysis, and concluded that the uncertainties in surface temperature changes were much smaller than the other uncertainties. Bear in mind that it is only changes in temperature over a relatively short period that matters here. The fact that their 1985-1990 long wave regression gave an almost perfect straight line fit supports the accuracy of the temperature record for that period at least.

Nic, I am fairly sure that F&G did not do an uncertainty analysis of the mathematical issues with the surface statistical models, such as HadCru. UAH shows no warming during 1985-90. Bad data matching bad data for 5 years suggests a bad coincidence. But this is not relevant to this thread.

Sorry about the formatting.

Thanks for your very interesting article, Nic.

Many comments have made reference to the IPCC but this raises the question of who or what is the IPCC? The (apparently) inappropriate use of a uniform prior seems to have distorted the findings of a peer-reviewed and published paper. But who is accountable and who should answer for this? Nobody, because there is no single individual at the IPCC who you can point the finger at. It is just a faceless collection of authors and editors with no accountability. When a scientific paper is published under normal circumstances there is a name at the bottom. There are well-established mechanisms for it to be refuted and counter-refuted. But for IPCC failings, there is no point of contact and no mechanism for addressing any published shortcomings. This is a serious weakness when you consider that IPCC reports are meant to represent the best body of scientific opinion. The only time criticisms are answered is when the mainstream media takes an interest and that only seems to be for emotive subjects like glaciers etc.

Nic’s analysis appears to create doubts about a pivotal aspect of the CAGW argument and somebody should explain what he has uncovered. In truth though, are there likely to be any implications? My guess is that unless Forster and Gregory make a public fuss then the matter will be ignored by the MSM and it will die a gradual death in the blogosphere. This is not how it should be IMHO.

Should not the Lead Authors be held responsible for what is in their Chapters, they are Masters of the Vessel/Captain of the Ship are they not?

Hi Orkneygirl

Yes, it would make sense for LA’s to be held accountable for their chapters but AFAIK there is no formal mechanism or process for this to happen.

“somebody should explain what he has uncovered”

Jeremy Harvey on Bishophill has written a great explanation of the Climate sensitivity controversy that I think is worthy of being published alongside Doug Keenan’s article “How Scientific is Climate Science?” in WSJ (easily found in Google).

I had thought that what the IPCC had done would be considered too esoteric to be explained to the masses and therefore unworthy of much attention but he seems to have achieved the impossible.

Comment no. 66 (this link should take you to p2 starting at comment 41 if I am lucky)

http://www.bishop-hill.net/blog/2011/7/5/ipcc-on-climate-sensitivity.html?lastPage=true#comment13552977

matthu

Sorry – I wrote badly and caused confusion. What I meant was that the IPCC should explain why they changed the original work. I didn’t mean that somebody should explain how they changed it although the link you provide provides an excellent technical explanation.

This link takes you straight to Jeremy’s summary.

Correction. This link takes you right to Jeremy’s summary. And always will. (The previous one ceases to work on the moment there’s a page 3. Not the best aspect of Bishop Hill’s blog engine but a small price to pay at a time such as this.)

James Annan has commented on this kerfuffle.

A fair response by Annan.

I will just highlight this:

“These arguments (which as you saw I made during the IPCC review process [here here here]) were basically brushed aside. The IPCC authors exclusively relied on and highlighted the results that had been generated using uniform priors, and downplayed alternative results, which already existed in the literature, that had been generated with different priors.

However, with the passage of time I believe my arguments have now become more widely (if grudgingly) accepted, so I look forward with some interest to see how the IPCC authors deal with the subject this time.”

Steve MacIntyre comments at Bishop Hill:

“My guess is that the changes were done by or under the supervision of Coordinating Lead Authors Hegerl and Zwiers. Figure 9.20 is said to be in the style of Hegerl et al 2006 (coauthor Zwiers), where uniform priors were applied.

IPCC review procedures permits Lead Authors to ignore review comments. Hegerl and Zwiers ignored Annan’s critical review comments.”

SM provides no link to support his assertions but it is interesting nonetheless. It must be very frustrating for reviewers like Annan to be ignored in this way.

Steve had already put up the PDF (portable document format, not probability density function!) of all the comments and responses on Chapter 9, which has the details of the rejection of James Annan’s prescient comments, on Climate Audit. I gave the link earlier but here it is again.

Even in this area the IPCC is much less than helpful, in that it withdrew the PDF, which is much better for searching, and replaced it with a cumbersome multi-page HTML presentation.

Read the comment links in his post. You can read the author responses there. They did make some changes in the text and figures in response to Annan’s criticisms, but did not accept his criticism on the use of uniform priors.

Also, I suggest that you read the paper linked in his post. It explains the issue quite well and shows the effect of choice of prior on the analysis.

“IPCC review procedures permits Lead Authors to ignore review comments. Hegerl and Zwiers ignored Annan’s critical review comments.”

There’s a difference between “ignored” and not being swayed by the arguments at the time. But if Annan is correct, the conservative establishment is coming around to his view on using uniform priors. The arguments had not even been made by him in the first review draft of Chapter 9. What’s interesting is that looking at this gives people an insight into how this all works. It’s not always pretty. For more background you can look through the arguments here in the comment section:

http://www.realclimate.org/index.php/archives/2006/03/climate-sensitivity-plus-a-change/

OR go back to the review comments and search for Stefan Rahmstorf’s name.

OR just go along with the notion that this is all a grand conspiracy OR just assume the IPCC is a bunch of knuckleheads who don’t know anything, make mistakes all the time and “ignore” other ideas. Luckily the internet provides plenty of fodder for whatever reality you feel like meandering in.

I accept that the internet provides plenty of fodder for whatever grabs your fancy and I personally favour ‘cock up’ rather than ‘conspiracy’. However, the fact remains that the IPCC review process lacks transparency and accountability and that this is a significant weakness.

By the way and in case of any trans-Atlantic confusion, ‘cock up’ is one of those quaint English sayings relating to a mistake rather than something that you might find elsewhere on the internet during your meanderings!

What kind of ‘transparency and accountability’ are you looking for?

You’re looking at it right now. Years later people are still pretending these arguments are novel. I’m pretty sure these authors realize that the decisions they make will be Monday-morning quarterbacked. Unfortunately, they don’t have the hindsight we have.

What kind of ‘transparency and accountability’ are you looking for?

Well, given that the IPCC Reports summarize the science and provide advice for policy makers, there ought to be mechanisms in place to address disputes or at least to provide explanations. In this case, the lead authors should explain why they treated the F&G paper in that way. It is not enough to be able to just ignore reviewers’ comments and write what you want particularly on such an important subject.

This particular point has an explanation in the text, and then in more detail in appendix 9B:

http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9-supp-material.pdf

It should also be noted that Piers Forster took the opportunity to comment in the review section of AR4 chapter 9, and he said:

“Great read, I particularly liked your use of appendices and tables”

Your link provides general information surrounding the difficulty in selecting a suitable prior but AFAICT it does not provide an explanation for what was done in this particular case.

Submitting IPCC “reviewer comments” is not like reviewing an article in a journal, where the reviewer has a lot of say over the outcome of the process. IPCC review is more like a semi-public comment. The decision of whether to pay any attention to a review comment is entirely up to the Lead Authors.

It’s also instructive to re-read Annan’s travails in trying to publish a comment on either Frame et al 2005 or Hegerl et al 2006. All his attempts to publish comments were rejected even though his point about uniform priors clearly warranted airing. Some years later, after pretty much giving up, he managed to get something into the literature in 2009.

William Connolley observed early on that one of his problems was that he wasn’t eminent enough in the field for his criticisms to be accepted, despite their validity.

So which is worse? Review comments for the IPCC, or every journal that wouldn’t publish his approach?

‘The decision of whether to pay any attention to a review comment is entirely up to the Lead Authors. ‘

That’s what an editor does.

Sea level has its own curve-stretching issue. One of the Hockey Stick team and RealClimate.org organizers, Rahmstorf, used a very odd mathematical alternative curve shape instead of simply fitting a curve to his data. The difference is shown here:

http://climatesanity.files.wordpress.com/2011/02/chao-addtional-sea-level-rise-rate-with-vr2009-and-moriarty-smooth2.png

Another bizarre oddity in the paper is the fitting of a straight line to scatter-plot data that does not merit a linear fit. He then extrapolates far into the future based on it, as shown here:

http://bp1.blogger.com/_1aOzUxvOj5k/RuTSV3OMjlI/AAAAAAAAAIw/YSziSWYFk_Y/s1600-h/Error+4b.JPG

These odd manipulations (along with added “corrections” to *actual* sea level based on water reservoirs on land, while ignoring ground water pumping) resulted in a sea level curve that shows a recent upswing:

http://www.pik-potsdam.de/~stefan/Publications/Nature/rahmstorf_science_2007.pdf

Tom Moriarty’s blog at http://climatesanity.wordpress.com has several recent entries on this fiasco.

In other news, basic sea level data is not on the pro-AGW side at all:

http://i.min.us/idFxzI.jpg

If you look closely at the satellite sea level data for 1993+ (e.g., at U. Colorado), you will visually see a difference between the earlier Topex/Poseidon data (1993-2002) and the subsequent Jason 1,2 data (2003-present). The fact is that the Jason data give a smaller rate of increase (2.3 mm/yr) than the Topex/Poseidon (3.5 mm/yr). After accounting for the recently imposed “GIA” correction (0.3 mm/yr), the Jason data are in accord with the various tidal gauge studies, which lie in the range 1.5-2 mm/yr. There is as yet no indication of acceleration.

Irrespective of what the IPCC did, the fact remains that Forster and Gregory arrived at a sensitivity range of 1.0 – 4.1°C, which is a reasonable fit for the value covered by many other approaches, i.e. 1.5 – 4.5°C, and a refutation of the 0.5°C posited by AGW sceptics.

Richard, worth re-reading this from Nic about Forster and Gregory:

So refutation of lower sensitivity is I think too strong. But it’s good to have an argument about the real data. I take it you’re admitting that ‘what the IPCC did’ was wrong?

Nic,

Brilliant writeup. Thank you so much. Like others, I hope to see this published in the literature.

Sorry I failed to close the html above. I hope this corrects it (the earlier comment can be deleted).

Because this is a fascinating subject, and equilibrium climate sensitivity (ECS) is contentious, I thought I’d summarize my impressions from the discussion so far. Other abbreviations: FG-2006 = Forster and Gregory 2006; GF-2008 = Gregory and Forster 2008; TCR = transient climate response.

I’m persuaded by Nic Lewis that the IPCC application of an arbitrary Bayesian prior to climate sensitivity tends to distort the results of FG-2006, which don’t justify this type of Bayesian approach. This conclusion, however, is a matter of judgment, and can’t be characterized as either “right” or “wrong”. Like Nic, I tend to disagree with the IPCC judgment. I think, however, that he has overemphasized its scientific importance.

I believe that the consequences of the IPCC (mis)judgment are minor. FG-2006 is only one of many dozens of papers on ECS, and since the IPCC report, several others have used similar methods. These methods have advantages but also disadvantages (see below) in comparison with other ECS approaches. In support of my conclusion, I’ll focus on FG-2006 and not worry about the many other ECS methods.

FG-2006 reports an ECS range of 1.0 – 4.1 C per CO2 doubling, which is lower than IPCC estimates of 2 – 4.5 C, but with obvious overlap. However, GF-2008 concludes that ECS estimates are a less reliable indicator of climate responses to CO2 than TCR (temperature change from a 1% annual CO2 increase up to a doubling). GF-2008 reports a TCR range of 1.3 – 2.3 C.

This is incompatible with the low values for ECS in FG-2006, at least at the low end of the range, because ECS must always be greater than TCR. If GF-2008 is to be accepted, the ECS from FG-2006 must be adjusted upward. The extent of the adjustment depends on the ratio of TCR to ECS, which in turn is a function of the rate of ocean heat uptake as well as other variables. Some of this is discussed above in a long series of comments at Comment 83429. In determining this, Isaac Held has cited a median value of 5.6. In comments above, I suggested a conservatively high 6 or an even more conservative 7. A value of 5.6, applied to GF-2008, would place ECS very much in the center of the IPCC range. A value of 7 would place it near the center. Any value less than 1.0 would raise it above the FG-2006 level. In light of this change between FG-2006 and GF-2008, the IPCC treatment of FG-2006 diminishes in importance.

A separate question is whether the higher ECS means that FG-2006 did something “wrong”. I tend to think not. These appear to be qualified, meticulous investigators, and while I can’t speak to every aspect of their analysis, their overall treatment strikes me as rigorous. What then to make of their 2006 range for ECS?

Probably the main point is that their analysis, like some subsequent energy balance approaches, looks at short term climate fluctuations that emphasize the tropics and exclude latitudes beyond 60 N or S. As a result, the results are heavily influenced by ENSO events. The latter, however, probably do not affect climate in the same manner as long term global forcing from atmospheric changes in CO2 (and probably from solar irradiance changes as well). In particular, anomalously high convection in ENSO and ENSO-related regional cloud changes can lead to negative feedbacks not seen with persistent forcings that operate over longer timescales on a more global basis. The relationship between the two different phenomena is therefore very uncertain. FG-2006 acknowledge this possibility but don’t dwell on it. Nevertheless, it remains a significant limitation on our ability to extrapolate from short term climate fluctuations to ECS for CO2. The same problem applies to later studies by Lindzen/Choi, Spencer/Braswell, and Dessler, studies that disagree among themselves on ECS and feedbacks.

To summarize, FG-2006 should probably not have been subjected to the IPCC adjustment. It probably does a good job of estimating climate sensitivity to a variety of short term climate fluctuations. It probably underestimates ECS to CO2 forcing. Conclusions that go beyond the use of the word “probably” are probably unwarranted.

Bias informs your speculation and probably confirms it.

============

Fred,

That is a good summary. In continuation with my comments at the thread at Comment 83429 that you highlight – to recapitulate, Isaac Held points to an apparent non-linear dependence on global mean surface temperature arising from the slow response due to ocean mixing at high latitudes. I’d like to point to a two-part analysis by PaulK here which I found useful. While I agree with his statement

“To claim superiority over any other estimate of ECS, the GCM would have to demonstrate that its estimate is better constrained by its ability to match other critical data.”

I found it interesting that the higher-order models in delta-T yielded higher sensitivity. My speculation is that the non-linear dependence, if one uses a global mean surface temperature (as described by Held) in the energy balance equation, could indeed yield such a higher-order equation in delta-T. I’m happy to be corrected on this.

Saying it probably does a good job of estimating climate sensitivity may just be wishful thinking.

I notice that I forgot to place a decimal point in front of the values for the TCR/ECS ratio I cited above. They range from 0.56 to 0.7, and must be less than one.
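
For readers who want to check the arithmetic with the decimal points restored, here is a rough sketch of my own construction: it simply divides the GF-2008 TCR range quoted above by the 0.56 and 0.7 ratios from these comments to get the implied ECS range.

```python
# Back-of-envelope check (my own construction, not from GF-2008 itself):
# dividing a TCR range by an assumed TCR/ECS ratio < 1 gives the
# implied ECS range, since ECS = TCR / (TCR/ECS ratio).
def implied_ecs(tcr_lo, tcr_hi, ratio):
    """ECS range implied by a TCR range and a TCR/ECS ratio."""
    return round(tcr_lo / ratio, 1), round(tcr_hi / ratio, 1)

# GF-2008 TCR range 1.3-2.3 C; ratios from the comments above.
print(implied_ecs(1.3, 2.3, 0.56))  # (2.3, 4.1): near the IPCC centre
print(implied_ecs(1.3, 2.3, 0.7))   # (1.9, 3.3): more conservative ratio
```

Either ratio pushes the low end of the FG-2006 ECS range upward, which is the point being made above.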

Fred, if you are presuming that ENSO etc. will lead to biased derivations of ECS, why don’t you put in your request with Nic, Jeff, and Craig for an email copy of Keith Jackson’s paper?

Layman and Fred,

In my paper I did look at the effect of the 1998 El Nino. I found out that leaving 1998 out of the analysis actually reduced the (non-corrected for short-term cloud, etc. noise) sensitivity from 1.66 to 1.24 deg. C. I looked at the effect of similarly deleting other years, but this was by far the largest effect. However, since El Ninos are a part of the climate system, I felt it was most appropriate to keep it in. A better long-term analysis might well want to consider the statistical frequency of such events, though that will require more years of data.

Thanks for your reply, Keith. When I read the passage I quoted from your comment, I thought perhaps you had looked at ENSO as well.

Fred,

Your comments are thoughtful, but may I point out a misunderstanding?

You write “FG-2006 reports an ECS range of 1.0 – 4.1 C per CO2 doubling, which is lower than IPCC estimates of 2 – 4.5 C, but with obvious overlap.”

The Forster/Gregory 06 1.0 – 4.1 C range for S is a 2.5 – 97.5% confidence interval, whereas the IPCC 2 – 4.5 C range is a 16.7 – 83.3% confidence interval. Not the same at all!

The 16.7 – 83.3% confidence interval for the FG-2006 results is only 1.2 – 2.3 C: almost no overlap with the IPCC 2 – 4.5 C range.

More relevant to the risk of high climate sensitivity, the 5 – 95% confidence interval for FG-2006 is 1.1 – 3.3 C. The upper limits of the 5 – 95% confidence intervals for the studies with PDFs in IPCC Figure 9.20 are given (Table 9.3) in C as: 8.9, 9.3, 9.2, infinity, 11.8 and (for FG-2006) 14.2. [These use the full ranges of the prior distributions used, not truncated at 10 C as for Figure 9.20.] That is a huge difference.
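
For anyone wanting to reproduce these interval conversions, here is a minimal sketch of my own, assuming the FG-2006 uncertainty in Y (2.3 ± 1.4 W m^-2 °C^-1) is Gaussian with the ±1.4 quoted at the 2.5 – 97.5% level, and using S = 3.7/Y as in the head post.

```python
from statistics import NormalDist

# Assumed interpretation: Y = 2.3 +/- 1.4 W m^-2 C^-1, with +/-1.4 read
# as a 2.5-97.5% (i.e. +/-1.96 sigma) Gaussian uncertainty.
Y_MEAN, Y_HALFWIDTH = 2.3, 1.4
SIGMA = Y_HALFWIDTH / NormalDist().inv_cdf(0.975)   # ~0.714

def s_interval(lo, hi, f2x=3.7):
    """Percentile interval for S = f2x / Y; S falls as Y rises, so the
    upper percentile of Y maps to the lower bound of S."""
    y = NormalDist(Y_MEAN, SIGMA)
    return round(f2x / y.inv_cdf(hi), 1), round(f2x / y.inv_cdf(lo), 1)

print(s_interval(0.025, 0.975))  # (1.0, 4.1) -- FG-2006 headline range
print(s_interval(0.167, 0.833))  # (1.2, 2.3) -- IPCC-style "likely" band
print(s_interval(0.05, 0.95))    # (1.1, 3.3) -- 5-95% range
```

The three printed ranges match the 1.0 – 4.1 C, 1.2 – 2.3 C and 1.1 – 3.3 C intervals quoted in the comment above, which is why comparing the FG-2006 2.5 – 97.5% interval directly with the IPCC 16.7 – 83.3% interval is misleading.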

Nic – You’re right that the 2.5 – 97.5 % confidence interval for FG-2006 is stricter than in IPCC assessments, which vary from “likely” for a 2 – 4.5 C interval to a 5 – 95 % interval for 2.1 – 4.4 C from the Chapter 8 data. My larger point, though, would be that these differences are superseded by the change from FG-2006 to GF-2008, implying that the earlier estimates from these authors needed to be adjusted upward to conform to their later evidence. Once that is done, the overlap with IPCC will be considerable – or complete, depending on the magnitude of the adjustment as described above.

Fred, I may not fully understand the stats, but I do understand that GF-2008 was not published at the time of the 4th assessment. Therefore I cannot understand what it has to do with Nic’s discussion of what happened to FG-2006’s peer-reviewed paper in the final 4th assessment.

It has nothing to do with it. I agree with Nic that the IPCC treatment of FG-2006 was questionable, although that’s a matter on which people can disagree. What GF-2008 does is render some of this moot, because it implies that FG-2006 estimates of long term climate sensitivity to CO2 were too low. All of this is addressed in some detail in the long exchange of commentary above.

The IPCC changing the range in FG-2006 is a problem, but it doesn’t matter because the main result (increasing the climate sensitivity) is found to be robust with later research.

Very Mannian.

-J

It should, however, be noted that the WG1 report contains the following comment in the chapter containing Figure 9.20:

“Forster and Gregory (2006) estimate ECS based on radiation budget data from the ERBE combined with surface temperature observations based on a regression approach, using the observation that there was little change in aerosol forcing over that time. They find a climate feedback parameter of 2.3 ± 1.4 W m^-2 °C^-1, which corresponds to a 5 to 95% ECS range of 1.0°C to 4.1°C if using a prior distribution that puts more emphasis on lower sensitivities as discussed above, and a wider range if the prior distribution is reformulated so that it is uniform in sensitivity (Table 9.3).”

This is the character of most of the shocking “discoveries” about the IPCC; they’re usually stuff that has been cheerfully and openly disclosed. The “skeptic,” whoever that may be, in the course of reading the report, finds something they don’t like, such as an author who also works for Greenpeace, or a different set of statistical assumptions applied to a result.

They then trumpet their “discovery” and the “research” they have done, courageously “auditing” climate science. I guess that makes them sound more heroic than saying “I was reading this one report, and in the footnotes, they said that they made a choice, and I disagree with that choice and think they should have done something else.” Doesn’t have quite the same air of “J’accuse!” that pseudoskeptics like to affect.

Robert, who are you trying to convince with that rubbish?

julio, @ lucia’s is trying to use Forster’s and Gregory’s acquiescence, and WG ’08, as rationales for diminishing Nic’s work.

We should hear from Frame and Gregory, and the sooner the better.

==============

Chapter 8 of course relates to the climate sensitivity exhibited by AOGCMs, not observational evidence. There are good reasons for placing very little trust in the range of sensitivity estimates exhibited by AOGCMs, in my considered opinion.

Nic – That’s certainly a topic for a blog of its own, but I do disagree that Chapter 8 ECS is unrelated to observational data. One can’t arbitrarily choose feedbacks for water vapor, ice/albedo, clouds, etc., without looking to see how these phenomena are actually behaving – e.g., what are the radiative properties of water vapor, how is relative humidity changing, what is happening to low cloud cover, high cloud cover, and the high/low cloud ratios, etc.?. Obviously, there’s a lot more to it than that, but I’m merely pointing out that observational data enter into the picture. I hope I haven’t started an entirely new line for debate here, because we need a larger venue than the middle of a thread for that.

Nic,

There is a sauce-for-the-goose argument here. I agree that there is little reason for trust in the range of sensitivity estimates from AOGCMs.

But Fred Moolten’s analysis of the F&G paper is essentially correct. The paper does not and cannot estimate equilibrium climate sensitivity (despite the arguments put forward in the paper). The paper confirms that real-world observations can be matched by a linear feedback model with a climate sensitivity of something less than 1.6 deg K/doubling of CO2. No problem.

Extrapolation of this result to equilibrium conditions is problematical for a number of reasons.

So I think that it is a good idea to keep separate the issue of whether the IPCC was justified in modifying and reporting the results in the way that it did.

Fred,

Coming back a little belatedly on your argument that the Forster/Gregory 06 results are quite likely to have been significantly biased downwards (in sensitivity terms) by ENSO type events, I see that as unlikely. Their results for the 1985-1990 period were much the same as for the longer 1985-1996 period. And, in general, cloud changes caused by internal climate dynamics seem much more likely to lead to the Forster/Gregory 06 method over-estimating sensitivity than under-estimating it, as pointed out in the article.

Nic – You’ll have to argue with Forster and Gregory on that point, because their later data (2008) are only compatible with a higher climate sensitivity than reported in the 2006 paper, and are closer to most of the other estimates that have led to typical ranges of 1.5-4.5 or 2.0-4.5 C.

Gabi Hegerl, joint coordinating lead author of AR4:WG1 Chapter 9, has asked that it be mentioned on this blog that the authors of Forster and Gregory were part of the author team and not unhappy about the presentation of their result (in Figure 9.20). I hereby do so. That firms up my tentative earlier comment that F&G were Contributing authors for chapter 9 of AR4:WG1 and, presumably, accepted (at least tacitly) the IPCC’s treatment of their results.

Piers Forster has also confirmed that when their paper was published, he tried to invert the results (convert them from Y to S) and got a range of sensitivity much like Figure 3 in this post. However, he remembers being persuaded by the Oxford group (Frame, Allen, Stainforth, etc, I assume) and other statisticians that by doing this simple inversion F&G were inadvertently assuming a very skewed and unrealistic prior themselves.

Of course, whether or not the authors of a paper agree with presenting its results on a different, inconsistent basis in no way shows that doing so was valid, nor that there was anything wrong with the basis on which they were originally presented. I am not sure that Piers Forster realised that doing anything other than a simple inversion would produce a PDF for S that implied the results for Y presented in their paper were wrong. Perhaps David Frame told him that no such implication arose. Certainly, Frame doesn’t seem to have been very concerned about the inconsistencies between the various approaches he advocated. As climate scientist James Annan wrote, commenting on the approaches advocated in Frame 05: “Basically, you have thrown away the standard axioms and interpretation of Bayesian probability, and you have not explained what you have put in their place.”

If the IPCC is supposed to be an assessment of the peer-reviewed literature, then why is the writing team allowed to create and insert new work? This new work may be valid and useful, but it has not been peer reviewed for publication or answered in the literature. Why is it given weight and substance by appearing in an IPCC AR?

Indeed, when Nic passes on that message, are we to take it that F&G are happy about every rule in the WG1 book being broken to completely change their own work, without prior publishing and peer review?

Why is it at all pertinent that F&G were part of the author team? Their work is being referenced as part of the peer reviewed literature and not because they have a high or low opinion of any aspect of their own work or any response to it.

If this is novel work then it should be submitted for peer review and face the critical response following publication.

In case of doubt, I totally agree!

Which leaves you, just to be clear, claiming to understand the results and methods of Forster and Gregory better than Forster and Gregory do.

On this thin reed you have chosen to rest the accusation that the IPCC made a clear-cut mistake and altered methods to overstate climate sensitivity.

Color me skeptical.

Say what?! Sorry, but that is not at all what he is claiming.

CONCLUSION

Modern ground based and satellite measurements, climate history data and geological data all point to the fact that when it becomes warmer, the high latitudes rise much more in temperature than the tropics. This can only be the result of increased heat transfer from the tropics pole ward. Established physical transport phenomena science lets us quantify this heat transfer and its dependence on surface temperature. The result is a much larger negative feedback than the positive sum of feedbacks incorporated in the known climate models. This large negative feedback should be incorporated into these models. The result would be that the climate sensitivity is reduced tenfold. A doubling of the CO2 concentration has such a small temperature effect, that this is indiscernible from all other effects. ~Dr. Noor van Andel

Fear of AGW is alien to reason. Global warming alarmism has always been a matter of choosing the least significant contributor to global warming — for reasons that the natural sciences can never explain — and then creating mystical properties not observed in nature by applying magical magnification formulae.

As a purely theoretical point in applied mathematical statistics: if a uniform prior is used to represent “ignorance”, you cannot be equally “ignorant” about a parameter and its inverse.

Personally, I think Bayesians introduce more problems than they solve.

Another idea, due to Kass and Wasserman, is to give the prior density about as much weight as a single observation. If you do that, and if you have at least 20 observations, then it does not make much practical difference what prior you choose or whether you put the prior on the slope or its inverse. Given that I already said I think that Bayesians introduce more problems than they solve, such downweighting of the prior strikes me as a more “accurate” representation of “ignorance”.
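
A toy illustration of that downweighting idea, with my own numbers: in a conjugate normal model with known unit variance, giving the prior the weight of one observation makes the posterior mean (n·x̄ + prior_mean)/(n + 1), so with 20 observations even wildly different priors land in nearly the same place.

```python
# Sketch of the unit-information-prior idea (illustrative numbers only):
# conjugate normal model, known sigma = 1, prior given the weight of a
# single observation, so posterior mean = (n*xbar + prior_mean)/(n + 1).
def posterior_mean(xbar, n, prior_mean):
    """Posterior mean when the prior counts as one extra observation."""
    return (n * xbar + prior_mean) / (n + 1)

xbar, n = 2.3, 20                 # 20 observations averaging 2.3
for m0 in (0.0, 5.0):             # two very different unit-weight priors
    print(round(posterior_mean(xbar, n, m0), 2))
# prints 2.19 then 2.43: a 5-unit disagreement in the prior moves the
# answer by only ~0.24
```

The same data with a conventionally weighted prior would be pulled much further, which is the sense in which the downweighted prior is a more modest representation of “ignorance”.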

After thinking for 40+ years about these problems in the philosophies of statistical inference, I have concluded that, in practically addressing problems in the shared world with experimental evidence, nothing actually beats Fisher’s “fiducial” inference.

You wrote a concordant opinion above when you directed attention to how much scientists had learned before considering priors and Bayesian inference.

This is true, if you think that the problem is not there, when you don’t recognize it.

For finite discrete PDFs it’s still possible to define a truly uninformative distribution (the same probability for each alternative), but for any distribution that allows an infinite number of values that’s not possible. That applies to every continuous PDF, because every flat continuous distribution can be transformed to any other form by a suitable change of variable. Conversely, all continuous distributions are flat in some variable. For this reason it’s not possible to get rid of the problem. Hiding it is just cheating.
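
A small numerical sketch of this point, using the Y and S of this thread (the interval endpoints are my own illustrative choice): the same interval of values expresses very different beliefs depending on which variable the flat prior is placed on.

```python
# Illustrative only: compare a flat prior on the feedback Y with a flat
# prior on sensitivity S = 3.7/Y, over the same interval of values.
F2X = 3.7
Y_LO, Y_HI = 0.9, 3.7                 # flat-in-Y interval (my choice)
S_LO, S_HI = F2X / Y_HI, F2X / Y_LO   # same interval in S: [1.0, ~4.11]

def p_s_gt(s, flat_in):
    """P(S > s) under a flat prior on Y or on S over the interval above."""
    if flat_in == "Y":                # S > s  <=>  Y < F2X / s
        return (F2X / s - Y_LO) / (Y_HI - Y_LO)
    return (S_HI - s) / (S_HI - S_LO)

print(round(p_s_gt(2.0, "Y"), 2))   # 0.34: flat in Y downweights high S
print(round(p_s_gt(2.0, "S"), 2))   # 0.68: flat in S doubles that mass
```

Both priors are “uninformative” in their own variable, yet one assigns twice the probability to S above 2 C, which is exactly the hidden problem being described.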

Pekka, you wrote: This is true, if you think that the problem is not there, when you don’t recognize it.

It is also true if the Bayesians have not actually solved any problems in applied math; or if the claimed solution creates problems in higher-dimensional spaces; or if the proposed solution cannot in principle be tested against experimental data; or if the proposed solution cannot be computed without simplifying assumptions that introduce new sources of inaccuracy.

You have not clearly stated whatever problem it is that you think I am ignoring.

I am not ignoring a problem: my claim that Bayesian solutions can not in fact be shown to outperform Fisherian fiducial distributions in actual research is a claim that Bayesians have not solved a problem that they claim to have solved.

–> “This is true, if you think that the problem is not there, when you don’t recognize it.”

We are not dealing with the physical world at all. By simply naming it, the Left created the needed crisis to begin with. And, thus the crisis of AGW — fear created for the purposes of arguing against capitalism – was created out of whole cloth and relies on magic formulae – like ‘climate sensitivity’ — to support it.

After naming it then the argument follows as night follows day. And, so too follow: claims based first on belief by a select group of experts; and, after that simply what are deemed to be obvious facts as a matter of common sense; to an unassailable logical deduction of an expert culture; to the unquestioned objective reality of a class of effete snobs in Western civilization who in their hubris anoint their own thoughts as representative of a worldwide view of all things related to climate change.

What we see, according to Habermas, is what ‘follows from the structure of speech itself,’ from which grows their belief in the object of their argument such that, ‘expressions of subjectivity are liberated from social restraints.’ What we see is the birth of, ‘the autonomous logic,’ of the climate specialists – implying some expert competence – going about the business of global warming alarmism and the politics of climate change catastrophists.

So too, reason becomes a matter of affiliation and a consequence of consensus among a self-reinforcing group of experts and specialists such that, “communicative reason finds its criteria in the argumentative procedures for directly or indirectly redeeming claims to propositional truth, normative rightness, subjective authenticity and aesthetic harmony” (Habermas, 1987a: 314).

That analysis should be submitted to **Social Text**.

The IPCC charter claims “objective and complete assessment of current information,” and to be “policy-relevant and yet policy-neutral, never policy-prescriptive.” http://www.ipcc.ch/organization/organization.shtml

Many of us who have studied and analyzed these assessments see something very different: a more biased and proactive role, to advise governments on how to counter (likely catastrophic) human-induced global warming. This noble enterprise has resulted in fame, a Nobel Prize, moral and political power, and funding for scientists and science departments worldwide whose work supports the thesis of high climate sensitivity and likely catastrophic global warming. It appears more and more likely that this noble enterprise has also resulted in noble-cause corruption, if not worse. Without high climate sensitivity there is no catastrophic warming, and the IPCC loses its moral and political power and most of its reason for being. Academia loses a lot of funding. Do I smell the mother of all science/policy fights?

Judith,

I know it must be a tricky thing to police a thread so that it stays on topic, but you have deleted all my comments that show that a piece of mathematics posted above is inaccurate.

The misleading post still remains but the rebuttal is no more! Oh the horror!

As you have clearly come down on the other side of the argument, I will write to Oxford University immediately and demand a refund for the three apparently wasted years that I spent there studying mathematics. C’est la vie.

Au revoir.

I found that whole line of argument incomprehensible; I thought it was off topic. I will restore your comments and the other related ones, but I still have no idea what this is about.

@JamesEvans, I too am having trouble relating the math to the paper. Please summarize your mathematical argument in a short post and let Judith delete the stuff she restored.

More from Annan and Hargreaves on this here. This is regarding a paper which was published in 2006 (link is in the article).

He’s been riding this horse for quite a while and has published at least 2 or 3 papers making the argument now, in addition to papers making similar arguments about model evaluation. He seems to think that his horse is a winner now, I suppose we’ll have to wait and see…

I would also note his comment (no. 3) in the original post.

http://julesandjames.blogspot.com/2011/07/priors-and-climate-sensitivity-again.html

Rattus,

I am inclined to believe A&H’s horse is a winner. It is not possible not to have some prior knowledge of expectations in this case, so the expert prior including the most reasonable range makes sense. The difference between the IPCC interpretation of F&G and the original just illustrates the point. The “sensitivity is 3” article concentrates more on sensitivity being unlikely to be greater than 4, but the range is 1.3 to 4, which appears to be the most likely range given the data at the time. It de-emphasizes the fat tail, which is good for making decisions. Results outside that range are still possible, just more likely to require some unknown unknown kicking in, which would likely not be CO2-related. Sensitivity is defined as CO2-related, right?

If you wanted all anthropogenic impacts less natural, then a different range of priors would be in order. All anthropogenic and natural would require another.

So Nic’s post just illustrates the bias, intentional or not, of the IPCC toward higher sensitivity, which does not meet the intent of the definition, IMO.

A thread concentrating specifically on the choice of prior might lead to interesting discussion. The earlier discussion on the basis of the Annan and Hargreaves paper didn’t answer the main questions, and this thread is by now too complex to allow efficient discussion.

To get the discussion going we would also need someone willing to defend the uniform prior in S. Finding anyone for that task may be a problem, but perhaps someone is ready for that as well. For a most useful discussion we would need the best and most up-to-date arguments from both sides.

Pekka,

I would, given the strength to do so, but based on some narrow points that you might not feel to be the argument you want.

That ultimately we do assume a uniform prior when the strength of the evidence seems to justify that choice.

That it is the simplest of all such deluded assumptions.

That all such assumptions are either unhelpful or downright dangerous.

That it matters little.

That given that S has a value it is not meaningful to consider it to be drawn from some prior distribution by some all knowing but uncommunicative agent.

That the lack of evidence for the precise value of the parameter S is not at all the same thing as some inherent uncertainty in S. E.g., that we have little evidence to suggest that S < x°C does not imply that S could be greater than x°C, as it either is or it isn’t.

That the jump we make from likelihoods to uncertainties is always subjective and almost always based on a uniform prior made almost redundant by overwhelming evidence, so why use any other prior.

That what people commonly dress up as a prior is merely another strand of evidence, a statement about relative likelihoods, and better considered as such. It can be combined into the evidential mix, still leaving the question of the prior, the giver of density, open.

I think some or all of these points I could argue.

I could also argue that the value 2.7182… °C is the value for S in the almost certain knowledge that I will never live to see sufficient evidence to settle the matter. What we lack is strong evidence and I see little chance of this changing. It does help that the definition of S renders its determination by observation moot.

I doubt whether these be the kind of arguments you were hoping for. They are but a step away from suggesting that we leave the nature of the prior totally undecided and hence rendering all confidence intervals impotent undecidable absurdities. Given the current paucity of evidence I think that is my position. And it bothers me little in comparison with more imminent and observable issues. I neither know nor care as to the value of the ECS. I am yet to be persuaded that attempts to put bounds on it that are not based on evidence are useful.

Alex

Alex,

Do I interpret and summarize you correctly, when I formulate the conclusion as follows:

Figure 9.20 tells what the empirical evidence is on each particular value of S. It doesn’t tell what the PDF of S is, as that depends on the prior, but it does tell the empirical evidence.

(All that assuming that the curves represent without errors the data they are based on.)

Pekka,

Thank you for your thoughts below; it is encouraging that I still get some things mostly correct. I envy your clarity and find your English usage worryingly superior.

Right now I have written much and find that I am tired and that irks me. I am tempted not to answer you directly for risk of being wrong from lack of consideration. I would prefer to work through an example to show where the warm familiarities of the mathematical manipulations end and the difficult logical chains of inference start but I will spare us both that.

I think you would be broadly correct. Whether I am is a different matter. Without the introduction of some function that has the characteristic of density I do not see how a cdf can be formed. A prior can perform that function. It seems natural but not necessary to include such a density function at the bottom of the stack and hence “prior” to the addition of empirical evidence. Whether or not it is still possible to obtain a prior in the sense of knowledge prior to, and untainted by, the evidence seems doubtful. I would be happy to accept expert evidence (opinion) into the mix but shun the notion of an expert prior. I regard Bayesian methodology as sound but perilous. Not from its inherent weakness but from our own foibles. Far from idiot proof, it be idiot friendly.

I am no longer shy of being wrong on such matters. It would have pained me once back when it didn’t happen.

This thread is long and my computer is close to its limits. Be not offended by a failure to reply.

Alex

Pekka,

That would be interesting. I am not into statistics per se, but it is not too difficult to approximate probabilities. To me, the range of prior estimates should decrease with more information; otherwise the significance is unknown, i.e. we haven't learned anything significant. Basing an analysis on ignorance would be a first step; then, if any following tests or evidence indicate some level of increased confidence, a more knowledgeable range should replace the range based on ignorance. To me that is the logic behind an expert prior.

For example, there is a big difference between the 60-90 degree North and South rates of warming. The sensitivity to CO2 would logically be within the range of the two regions. That information is well known, so ignorance should not be used for determining a range of priors. In the early 80s, fine, the data were not sufficient to be used. Now we have an estimated range of sensitivity based on more complete information about what is purely due to CO2 change. Other factors may come into play, but warming at the North pole cannot be due only to CO2, since that warming is four times the warming of the South pole.

You can suppose the difference is due to ozone, natural climate patterns, the sun or little green Martians, but not CO2 or other anthropogenic gases.


[IMG]http://i122.photobucket.com/albums/o252/captdallas2/climate%20stuff/nstropicsglobalto2100.png[/IMG]

I don’t know if this image thing is right, but it should be the plot of UAH northern extent, southern extent, tropics and global. The linear regression for each extends to the year 2100. While it may be overly simple, the range is pretty consistent with most analysis using data not models. What part is CO2 and what natural or other, dunno, but it is interesting. The expert range Annan used is a bit liberal based on most of the data.

I bet on the Gregory.

I bet on the Piers;

Had I bet on ol’ Annan,

I would be free, today.

============

Nic, I think this is a very interesting and important post. I have gone back to the 2006 paper and find that it is a gem of nicely presented argument. The last part of that paper writes of the likelihood that CS sits nearer to the lower values they give. One impression I have from this thread and the work of Lindzen and Choi is that there is information out there on climate sensitivity that could be combined in a meta-analysis, much as the epidemiologists do with multiple studies.

Is anyone keeping a scorecard or register of instances where scientific papers are modified after the event or source data is adjusted without notification or reasonable cause? If not, I believe that someone with the competence and outlet (blog/website) who did so would be doing everyone else a very great service indeed.

IIRC this is not the first time a paper has been altered. I believe that recently data on sea levels has been adjusted. There have, I believe, been at least one if not two occasions where sharp eyes have spotted changes in historical temperature data sets without notification. I would cite the great dying of thermometers c1990 worthy of inclusion in such a register because of the impact it seems to have had – a real case of man made warming if ever there was one.

I suspect that such a register would shock.

The paper itself (at the end of section 2) noted the effect on the analysis of having a uniform prior on S. Since Forster & Gregory participated in the recasting of their results with a uniform prior, one would have to assume, since their comment did not object, that the recasting did not offend them. Showing the result on the same scale as the other results (some of which were also changed, because all were standardized on a U[0,10] prior) made the comparison possible.

One can argue about the appropriateness of a uniform (ignorant or naive) prior, and I tend to agree with those who argue against the use of an ignorant prior, but this is a debatable point. As Annan has pointed out in comment 3 here, this was quite simply the majority view of how to do such analysis at the time. He lost the argument in 2006, but holds hope for winning in 2012.

But of course, a similar graph in AR5 will probably need to have some results recast to the chosen prior, because not all will choose the same prior.

A few people above referenced the idea that F&G 06 studied a short period and therefore must have found only the “transient climate response”, not the equilibrium sensitivity. I believe this is a red herring. First of all, if one waits for the temperature response to equilibrate the system, the imbalance will go to zero and your diagnosed sensitivity will be infinite. Secondly, the argument being made seems to be that the full temperature response to forcing scales up with time. But F&G were measuring not the temperature response to forcing, but the ~feedback~ response to temperature change! This should only vary in time for situations where the time period is too short for the timescales of, say, cumulus convection (weeks) or for ~very~ slow feedbacks like CO2 outgassing. Arguably for our concerns the latter can be neglected, as it is on the order of several centuries. The temperature change, while not the full response to the forcing, should scale about the same with the ~feedback~ as a full response would. Held's “double the TCR to get ECS” applies to forcing fits to temperature responses in EBMs, if that, but has nothing to do with F&G 06.

The above comment seems to be well justified concerning the delay in warming due to the heat capacity and slow warming of the oceans. There are, however, also slow feedbacks, like the change in surface albedo from the reduction of snow cover, that contribute to TCS/ECS. Other changes in the relative temperatures and differences of climate between oceans and continents or other subdivisions of the surface cannot be taken into account in the analysis either. Thus the full ratio of TCS/ECS as discussed by Held and others cannot be applied to the results of F&G 06, but part of the difference between TCS and ECS does apply to the results.

The problem that seasonal variations and even year-to-year variations are fast even in comparison with what is often defined as TCS adds to the problems, but the direction of the related error is not obvious. We don't know how the restriction to the short time scale influences the variation of average surface temperature in relation to its influence on the variations in the TOA energy balance, because the variations in the surface temperature are not uniform and the geographic distributions of all contributing factors are different at different time scales. The influence of ENSO might be particularly severe, and is also emphasized in the original article.

I agree that reduction in snow or ice cover resulting from warming constitutes a likely slow positive feedback, but its magnitude may be quite small, at least for the modest changes in surface temperature that can be expected to arise if sensitivity is in fact fairly low, so the Forster/Gregory 06 results may nevertheless be a close approximation to a measurement of equilibrium climate sensitivity. Certainly, what the F/G 06 method is estimating seems to me likely to be much closer to ECS than to TCR.

What the IPCC did aside, F&G use GISS & HadCru data to derive their moderate sensitivity estimate. If they had used UAH instead they would have found a sensitivity of roughly zero, as there was no temperature change during the period analyzed. Why has no one done this?

One of the most interesting things about the climate debate is that in one place it involves people arguing about point A (in this case sensitivity), by assuming that B is well known (in this case temperature change), while not far away people are hotly debating B. Most of AGW science, including F&G, is based on assuming that the surface statistical model means are facts. They are not. Far from it.

“F&G use GISS & HadCru data to derive their moderate sensitivity estimate. If they had used UAH instead they would have found a sensitivity of roughly zero, as there was no temperature change during the period analyzed.”

David – the type of analysis performed by F&G does not require a temperature change averaged over the period analyzed to yield a climate sensitivity >0, because it looks at each up and down fluctuation in temperature in terms of its relationship to changes in forcing. The only observations that would dictate a zero sensitivity would be ones in which temperature was a completely flat line – this would be very different from their Figure 1 data.

I doubt that very much Fred. If the forcing increased, but the temperature did not, the conclusion must be that the sensitivity is zero. (Conversely, if you are saying that the net forcing over the period was zero then the HadCru temperature increase over the period must have been unforced.) In other words, if the forcing profile over this period generates a specific sensitivity for the HadCru temperature increase profile it cannot possibly generate the same sensitivity for the UAH non-increase profile.

The sensitivity has to be dependent on what the temperature does cumulatively, not just its qualitative ups and downs. After all, the cumulative effect is just the sum of the ups and downs. So the warming HadCru ups and downs cannot be the same as the UAH non-warming ups and downs, can they?

Let’s suppose the ups and downs trended downward sharply, while forcing increased. Would you still get a positive sensitivity? How is that possible? It would mean that the sensitivity was positive so long as temperature oscillated, even though it was cooling rapidly. That is an absurd result.

David – You didn’t understand the paper. If you review it, including Figure 1, you will see that what I said was correct – its conclusions are based on the fluctuations over time and not the net temperature change over the entire interval. Attempts to relate net temperature changes to forcing over an extended time interval would represent a different study, and one that is beset with problems in correctly quantifying the forcings unless the interval is very long (and even then there are problems). In that type of study, you wouldn’t need to look at TOA flux changes but only at the forcing/temperature relationship. FG06 is not that type of study.

Fred, let me put it another way. There may be a confusion here between correlation and sensitivity. Suppose you have an upward trending oscillating parameter A. You also have a flat trending oscillating parameter B. A and B may in fact be closely correlated, but it would be false to conclude that doubling A would increase B, which is what sensitivity is all about.

David –

“What the IPCC did aside, F&G use GISS & HadCru data to derive their moderate sensitivity estimate. If they has used UAH instead they would have found a sensitivity of roughly zero, as there was no temperature change during the period analyzed. Why has no one done this?”

This is not the case. Fortunately, I reproduced some of the results of FG06 recently:

http://troyca.wordpress.com/2011/06/22/erbe-pinatubo-and-climate-sensitivity/

It was easy enough to sub in the UAH anomalies, and the results can be seen here:

http://dl.dropbox.com/u/9160367/Climate/FG06_withUAH.png

Using UAH yields an estimate of Y=2.89, or a sensitivity of around 1.28 C per CO2 doubling. The FG06 study does not simply compare forcing changes to temperature changes. It uses TOA imbalances as well. As Dr. Spencer explains, “Temperature changes lag the radiative forcing, but radiative feedback is simultaneous with temperature change.”
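
For anyone wanting to try another temperature series the same way, the core FG06-style calculation is small enough to sketch. Everything below is synthetic and illustrative, with an assumed feedback value true_Y standing in for the real ERBE/surface records; only the mechanics (regress Q − N on the temperature anomaly, take the slope as Y, then S = 3.7/Y) mirror the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly anomalies standing in for the forcing, TOA-imbalance
# and surface-temperature series (illustrative placeholders, not real data).
true_Y = 2.3                                     # assumed feedback, W m^-2 K^-1
dT = rng.normal(0.0, 0.2, 120)                   # temperature anomalies, K
Q = rng.normal(0.0, 0.3, 120)                    # radiative forcing anomalies
N = Q - true_Y * dT + rng.normal(0.0, 0.5, 120)  # TOA imbalance plus noise

# FG06-style step: regress (Q - N) on dT; the slope estimates Y
Y_hat, intercept = np.polyfit(dT, Q - N, 1)
S_hat = 3.7 / Y_hat                              # sensitivity via S = 3.7 / Y

print(f"estimated Y = {Y_hat:.2f} W m^-2 K^-1, implied S = {S_hat:.2f} C")
```

Subbing in a different temperature record only changes dT; as long as the fluctuations in Q − N track the fluctuations in dT, the slope, and hence the sensitivity, stays well away from zero even if dT has no trend over the period.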

Just to clarify a point I made above: F&G use GISS & HadCru data to derive their moderate sensitivity estimate. If they had used UAH instead they would have found a sensitivity of roughly zero.

Oh Dear!

Oh Dear, oh Dear!

Oh Dear, oh Dear, oh Dear!

What can the matter be?

If we go down this path it doesn’t lead anywhere pleasant.

Arguably the problem also lies in the original paper and possibly almost every occurrence where someone uses an analysis of experimental data to infer error bars.

Here the authors (Forster & Gregory) have tried to be clear concerning priors:

{Abstract}

“(assuming Gaussian errors in observable parameters, which is approximately equivalent to a uniform “prior” in feedback parameter)”

{Method}

“For the uncertainty analysis we assume, like Gregory et al. (2002), that errors in the observable parameters (N, Q, and [delta]Ts) all have Gaussian distributions. In a Bayesian statistics framework, this is equivalent to assuming Gaussian observational errors and a uniform “prior” in each of the observables. Since the uncertainties in Q and N are much larger than in [delta]Ts (a factor influencing our choice of regression model; see appendix), uncertainty in Q-N is linearly related to uncertainty in Y, so our assumption is also approximately equivalent to assuming a uniform prior in Y. Other studies (Forest et al. 2002; Knutti et al. 2003) have assumed a prior that is uniformly distributed in equilibrium climate sensitivity, which is proportional to 1/Y. Compared with our prior, theirs emphasizes the higher values of equilibrium climate sensitivity. We believe our choice is more appropriate for our analysis and input data, but it has the result that our range of climate sensitivity lies lower. Differences in priors should be taken into account when comparing quoted ranges of possible temperature change.”

This might seem to be intuitive. They may be seen to imply that they have justified the use of a uniform prior in Y. But is either correct?

To my mind, knowledge of the distribution of errors does inform one as to whether a particular method of analysis produces a likelihood function. In this case OLS with gaussian errors in the fluxes is compatible with a maximum likelihood result and gives a likelihood function that can be further interpreted.

To my mind that knowledge (assumption of the distribution of the parameters) does not inform us, in any way, about the nature of priors for the unobservable parameter Y.

Priors are prior! They are that part of the bigger picture about which the experiment is uninformative.

If we could construct experiments that yielded information about both likelihood and prior probabilities then the world would be a much simpler place.

In this case the most informative prior would be the value for Y (or S), lacking that perhaps some theoretical result that gave a distribution from which our Y was an occurrence. Failing that some bounds for Y (or S). It is the last case that holds. We have theoretical reasons for believing Y>=0, but no strong indication as to its form beyond that.

As I read it, their analysis results in a likelihood function that has the same form as a gaussian distribution, but it is not a gaussian distribution. Likelihoods lack for density (or mass); integrating a likelihood may be possible but is not meaningful. Integration does not result in a cdf. From a cdf one can produce error bars or uncertainty values. From a likelihood function one cannot. A likelihood does inform one about the relative likelihoods of two point values but can say nothing about ranges of values.

They, and many many others, desired to produce error bars to give some sense of the uncertainty of their result. To do so requires a knowledge of the density function to allow for the integration step to produce a cdf. Lacking knowledge of which pdf to use they assume a uniform prior.

That they assume gaussian distributions is useful, in that it allows them to choose a model with an unobservable parameter Y (and also sigma) whose likelihood functions, given the evidence, can be calculated. At this point that assumption is used up. It informed them as to the distribution of the errors in Y but not the nature of the prior distribution of Y.

A choice of prior in the absence of prior knowledge can only be an assumption. It is a very common assumption and it is questionable. But if we question it too hard we would need to either strip out every error bar and uncertainty limit from every paper without a known prior, or put large horror warnings against each one stating that these error limits are the result of a subjective judgement of the authors.

The amount of coverage that is given to priors in this paper shows that they are drawing attention to this issue. The amount of coverage that WGI give to this shows that they are drawing attention to this issue.

Now I am critical of the WGI graphic (but not really the text): if they had chosen the label “Likelihood” and not “Probability Density” (they indicate in the text that these are the same), that would have been more objectively true. If the ranges in the bottom part had warned that they were the result of integrating the likelihood functions with a uniform prior in S, that would have been clearer, and perhaps worth the cost of running to a separate graphic. Perhaps they should note that people draw conclusions from visual images that are not supported by captions or accompanying text.

That they have extended the Probability/Likelihood downwards to negative values is a hoot! Well it causes me to giggle.

Presentation in terms of a truncated likelihood function (which has the same form as the pdf assuming a truncated uniform prior) does allow for the further application of pet priors in a simple way. It would also tempt the combination of the data with other independent strands of evidence should they exist amongst the examples given or elsewhere.

Lacking knowledge of the prior requires assumptions be made before the information from evidence (likelihood functions) can be turned into uncertainty intervals. Such intervals are hence subjective and this is commonplace.

Should we go without such uncertainty intervals and argue purely from the likelihood of certain point value choices for candidate values for the unobservable parameter?

Well we can, but it is restricted to point values. If we wish to reason about certain questions on the basis of uncertainty ranges we need cdfs and hence knowledge of pdfs, and for these we need some function, in this case a prior, to add that important aspect of density that cannot be derived from the likelihood function that results from the application of a statistical model to the experimental evidence.
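
The density step can be made concrete with a small numerical sketch. The Gaussian-shaped likelihood below (centred at an assumed Y = 2.3 with width 0.7) is an illustrative placeholder, not the actual FG06 fit; the point is only that the same likelihood yields very different uncertainty ranges for S under a uniform prior in Y versus a uniform prior in S.

```python
import numpy as np

# Gaussian-shaped likelihood on a grid of feedback values Y
# (2.3 +/- 0.7 are illustrative placeholder numbers).
Y = np.linspace(0.2, 10.0, 5000)
likelihood = np.exp(-0.5 * ((Y - 2.3) / 0.7) ** 2)
S = 3.7 / Y  # sensitivity corresponding to each grid point

def s_quantile(post, q):
    """Quantile of S implied by posterior mass spread over the Y grid."""
    order = np.argsort(S)
    w = post[order] / post.sum()
    cdf = np.cumsum(w)
    return float(np.interp(q, cdf, S[order]))

post_unif_Y = likelihood                # uniform prior in Y
post_unif_S = likelihood * 3.7 / Y**2   # uniform prior in S (Jacobian |dS/dY|)

s95_Y = s_quantile(post_unif_Y, 0.95)
s95_S = s_quantile(post_unif_S, 0.95)
print(f"95th percentile of S: {s95_Y:.2f} C (uniform in Y) "
      f"vs {s95_S:.2f} C (uniform in S)")
```

The uniform-in-S prior enters as the Jacobian factor 3.7/Y**2, which piles posterior weight onto small Y and hence fattens the high-S tail; that is the mechanism behind the recasting discussed in the post.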

I would strongly disagree with the authors if they imply that knowledge of the distribution of the errors is in some way equivalent to knowledge about the distribution of the prior. But do they do this, as might be assumed?

Repeating from the quotes above, they say:

“uncertainty in Q-N is linearly related to uncertainty in Y, so our assumption is also approximately equivalent to assuming a uniform prior in Y.”

From the use of “in Y” not “for Y” should I infer that they are not justifying any choice for a prior for the parameter Y?

I do not know. But it is what I do infer from that phrase. What they are saying is not entirely clear to me.

If they refer to their basis for choosing a gaussian error model, based on some “prior” assumption that the “distribution of the errors in Y” is gaussian, I think that is correct, but it is either a bit opaque or rather trivial to state that gaussian errors have the form of a “gaussian shaped” likelihood and a uniform prior “for the errors”.

It is a very different thing to argue about a prior that gives the distribution of the “errors” in Y than to argue about a prior that gives the distribution of Y itself.

If they or anybody were suggesting that knowledge of the distribution of the observations of a variable informs one as to the prior distribution of the corresponding model parameter, I think they be wrong.

What they actually think is not clear to me. But I do think that this may be the inference that has been seized on.

In the absence of knowledge all choices of priors are subjective: choosing a uniform prior for Y>0 is subjective; choosing a uniform prior for S<= some arbitrary value is subjective. Most of the error bars and uncertainty intervals one sees are the result of the subjective choice of a prior (nearly always the implicit choice of a uniform prior).

Alex

Alex,

I agree on everything in your post (unless something minor was not noticed).

Just some additional comments in the same spirit.

The experiment produces some quantitative results with some uncertainties that are dependent on the experimental procedure. The results are what they are; there are no probabilities or likelihoods in that. When the results are used to infer knowledge on some physical parameter, the likelihood enters. The likelihood is a conditional probability that tells what the estimated probability of the observed result is in case the parameter has some particular value. In the case of a continuous distribution of possible empirical values, the probability is calculated for a narrow range of values close to the one actually observed. The absolute values of the likelihoods do not influence the outcome of the Bayesian inference, only the relative likelihoods that correspond to the physical parameter that we wish to learn about. The likelihoods need not sum to one, as the sum will cancel out in the Bayesian analysis.

The likelihood is independent of the variable being used to describe the possible states of the real world. Thus the likelihood is the same for Y and S when their values correspond to the same physics. The result of the empirical analysis is given by the likelihoods. In this sense the likelihood as a function of Y in the original paper and the likelihood presented as a function of S in the WG1 report tell exactly the same thing about the empirical results. It would not be necessary to refer at all to Bayesian inference or to the prior and posterior probabilities to present the empirical result fully, and as I already wrote, it doesn't matter whether that is done using Y or S on the x-axis.
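
This invariance can be checked in a couple of lines: relabelling the parameter leaves likelihood ratios untouched, while a probability density picks up a Jacobian factor. The Gaussian shape below is a toy assumption, not the real FG06 likelihood.

```python
import numpy as np

def like_Y(Y):
    # Toy Gaussian-shaped likelihood in the feedback parameter Y
    return np.exp(-0.5 * ((Y - 2.3) / 0.7) ** 2)

def like_S(S):
    # Same physical states relabelled via S = 3.7 / Y
    return like_Y(3.7 / S)

Y0, Y1 = 1.5, 3.0
S0, S1 = 3.7 / Y0, 3.7 / Y1

# Likelihood ratios are identical in either parameterization
r_Y = like_Y(Y0) / like_Y(Y1)
r_S = like_S(S0) / like_S(S1)

# A *density* uniform in Y, however, pushed forward to S, is not uniform:
# it transforms with the Jacobian |dY/dS| = 3.7 / S**2
def pushforward_density(S):
    return 3.7 / S**2  # image in S of a unit uniform density in Y

print(r_Y, r_S, pushforward_density(S0), pushforward_density(S1))
```

So the curve labelled “likelihood” carries the same information whichever axis is used, but the moment it is read as a density, the choice of variable (Y or S) changes where the mass sits.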

After the empirical results have been obtained, they can be used in Bayesian inference to learn something about the real world parameter. Everybody doing that has an equal right to choose the prior he considers most applicable, the authors of the empirical analysis are in that respect not more privileged than any other scientist. Their choice can be valued based on the trust that others have in their prior knowledge and understanding of the atmospheric physics, not based on the fact that they have done the empirical work.

You might enjoy the book “A Comparison of Bayesian and Frequentist Approaches to Estimation” by Francisco J. Samaniego. A self-proclaimed Bayesian, Prof. Samaniego substantiates the point that in order for Bayesian inference to improve upon likelihood-based inference (and Fisherian fiducial confidence intervals), the prior density has to be at least accurate enough; it must be on the correct side of what he calls a “threshold”. Thus, for the making of assertions about the shared world (the world in which we collect the data), priors are not all equal.

Priors are, however, (almost) always untestable. There is almost never a shared body of data that scientists can examine in order to determine which, if any, of the priors under consideration might be accurate enough to improve upon likelihood inference.

Professor Samaniego is on the statistics faculty at UC Davis. The presentation in the book is technical and referenced. I recommend it to all who discuss Bayesian vs. frequentist inference.

MattStat,

What you write above is pretty obvious. It doesn't, however, help at all in situations where the prior is thought to have a major influence on the inference. When there is reasonable justification for such an expectation, we have two alternatives: use a prior and introduce all the problems created by that, or use the likelihoods alone and expect that the conclusions are significantly wrong.

All is easy and simple, when the empirical data leads to a narrow enough distribution on the basis of the likelihood to make the influence of all plausible priors small, but in the opposite case we have the dilemma of the first paragraph.

You wrote this:

“When there is reasonable justification for such an expectation, we have two alternatives: use a prior and introduce all problems created by that, or use the likelihoods and have the expectation that the conclusions are significantly wrong.”

Is this a case where there is a reasonable justification for a prior? Can it be shown to be accurate? Is there shareable evidence in its support?

Unless the prior is accurate, the Bayesian point estimate will have a higher mean square error than the maximum likelihood point estimate. So unless you can substantiate the accuracy of the prior with some evidence, it is likely that your Bayesian estimate will be more inaccurate than your MLE.

Indeed if there are a bunch of Bayesians with different priors, and a single MLE, then at most one of the Bayesian point estimates can have a smaller asymptotic MSE than the MLE, and maybe none. The effect of the Bayesian estimation is to slow the convergence of the estimate to the true value.

Introducing some repetition here, in order for the Bayesian estimate to be closer than the MLE, the mode of the prior has to be closer to the true value than the MLE is, and there has to be an actual mode, not uniformity. You can almost never demonstrate that such is true of the prior with any evidence.
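
This threshold behaviour is easy to check by simulation in the textbook normal-mean case (known sigma, conjugate normal prior). All numbers below are hypothetical illustrations; nothing here models the climate problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate a normal mean from n = 10 observations, comparing the MLE
# (sample mean) with the Bayesian posterior mean under two priors.
true_mu, sigma, n, reps = 2.0, 1.0, 10, 20000
tau = 1.0  # prior standard deviation

def mse_pair(prior_mean):
    x = rng.normal(true_mu, sigma, (reps, n))
    xbar = x.mean(axis=1)                        # MLE
    w = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
    post_mean = w * xbar + (1 - w) * prior_mean  # conjugate posterior mean
    return np.mean((xbar - true_mu) ** 2), np.mean((post_mean - true_mu) ** 2)

mle_mse, bayes_good = mse_pair(prior_mean=2.0)   # prior mode at the truth
_, bayes_bad = mse_pair(prior_mean=6.0)          # prior mode far from the truth

print(f"MLE: {mle_mse:.3f}  Bayes(accurate prior): {bayes_good:.3f}  "
      f"Bayes(inaccurate prior): {bayes_bad:.3f}")
```

With the prior centred on the truth, the shrinkage helps and the posterior mean beats the MLE; centred far away, the shrinkage bias dominates and the Bayesian estimate is worse, just as the threshold argument says.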

Mattstat,

Forgive me, as I am statistically challenged, but isn't that the case? There is more information, so expert priors can be used to narrow the range. With more than one data set, the results of the selected priors could be compared. To me, most of the statistical issues boil down to a lack of curiosity. There is more than one way to skin a catfish; compare methods.

Dallas wrote: With more than one data set, the results of the selected priors could be compared.

In the present instance, there is only 1 data set. Even with more than one data set you have trouble showing that one of the Bayesian estimates will be closer to the true value than the MLE.

MattStat,

In the case of continuous variables, we can replace choosing priors for the Bayesian inference by choosing the parameter that we use in the determination of the maximum likelihood estimate, from among parameters that have a unique monotonic functional dependence on each other. Each of the choices gives its own maximum likelihood point, and the points are usually not such that they correspond to the same physical system.

That has been my point in many comments of this chain. In case of continuous variables there is no way around the fact that our largely arbitrary choices influence the outcome of our analysis, when our goal is to make inferences on the statistical properties of variables that can be used to describe the real world.

Dr Curry, you’re famous at last….

http://www.bbc.co.uk/news/science-environment-14054650?postId=109614557#comment_109614557

Global warming is not caused by too much CO2; it is caused by too little Ice Age.

There would appear to be no valid statistical reason to assume uniform priors. That assumption is used when you know nothing about the distribution.

In this case we know something about the distribution, based on the paleo reconstructions. As CO2 levels have gone up and down, earth’s average temperature has kept within a narrow band of 11 – 22 C.

With current temperatures at 14.5 C, if you are going to use a uniform prior, then it should be in the range of -3.5 to 7.5, not 0 to 18.5 as chosen by the IPCC. That would have the effect of skewing the estimate towards an unrealistically high value.

In any case, using the paleo data, one can build a distribution for CO2 and temperature that makes no assumption about distribution. Once that is established, the appropriate statistical treatment should be obvious.

“the IPCC have imposed a starting assumption, before any data is gathered, that all climate sensitivities between 0°C and 18.5°C are equally likely”

Thanks for the great analysis Nic.

I’d have been inclined to replace the word ‘startling’ with ‘ludicrous’, but I think you made a wise choice of wording, given the inevitable dissembling, accusations and apologist ramblings which have followed.

The observed warming of the sea surface was ~0.7 +/- 0.3 C over the 20th century. Lagging shortly behind was an increase in CO2 of 120 ppm over the same period.

Considering that the proposed relationship of CO2 increase to temperature increase is logarithmic, it seems extremely unlikely that the next ~170 ppm increase in CO2 would cause an ~18 C increase in temperature, even assuming the AGW hypothesis has some merit.

If climate scientists really are saying with a straight face that they believe a climate sensitivity of 18.5 C is just as likely as a sensitivity of 1 C, then they are clearly in need of oversight from engineers who understand how to make simple estimates of the constraints that would bound the sensitivity figure.

As a mechanical engineer, I would have thought that the ‘uniform priors’ themselves need to be considered in terms of a probability distribution.

Tallbloke:

It sounds as though you’ve confused the idea of “probability” with the idea of “probability density” for the IPCC isn’t saying that a sensitivity of 18.5 C is as probable as a sensitivity of 1 C but rather that the two sensitivities have equal probability densities. Under the IPCC’s model, the probability of either sensitivity is nil.
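
A minimal numeric illustration of the density/probability distinction, assuming the U[0, 18.5] C prior discussed in the post:

```python
# Under a uniform prior *density* on [0, 18.5] C, every exact value of S
# has probability zero; only intervals carry probability mass.
width = 18.5
p_band_high = (18.5 - 17.5) / width  # P(17.5 < S < 18.5)
p_band_low = (1.5 - 0.5) / width     # P(0.5 < S < 1.5)

# Equal densities mean equal mass for equal-width bands, high or low
print(p_band_high, p_band_low)
```

So the uniform prior asserts that a 1-degree band near 18.5 C is exactly as probable, a priori, as a 1-degree band near 1 C, while any single exact value has probability nil.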

Thanks Terry, I knew something didn’t sound right. In this case, my understanding of the stats methods.

We are really getting worried now. This was recently published by I believe people committed to AGW

http://tetontectonics.org/Climate/SO2InitiatesClimateChange.pdf

SO2 causes global warming

Re previous, the sentence in the Abstract is “Thus CO2, a greenhouse gas, is contributing to global warming and should be reduced”. Of course this is in light of the recent publication stating that SO2 emissions (in China) cause COOLING. I think we are about to lose complete credibility in the training of Climate Scientists

Apologies, it was the wrong quote; it's this one, from the Abstract of the paper above:

Keywords: Volcanic eruption; Mass extinction; Tipping point; Methane

“Large volumes of SO2 erupted frequently appear to overdrive the oxidizing capacity of the atmosphere resulting in very rapid warming. Such warming and associated acid rain becomes extreme when millions of cubic kilometers of basalt are erupted in much less than one million years. These are the times of the greatest mass extinctions. When major volcanic eruptions do not occur for decades to hundreds of years, the atmosphere can oxidize all pollutants, leading to a very thin atmosphere, global cooling and decadal drought. Prior to the 20th century, increases in atmospheric carbon dioxide (CO2) followed increases in temperature initiated by changes in SO2.

By 1962, man burning fossil fuels was adding SO2 to the atmosphere at a rate equivalent to one “large” volcanic eruption each 1.7 years. Global temperatures increased slowly from 1890 to 1950 as anthropogenic sulfur increased slowly. Global temperatures increased more rapidly after 1950 as the rate of anthropogenic sulfur emissions increased.”

So there you go

Exact quote “Global temperatures increased more rapidly after 1950 as the rate of anthropogenic sulfur emissions increased.”

Global warming is underestimated. It’s a MAN-BEAR-PIG!

Firstly, thanks for the interest in our paper and much useful discussion.

I think the blog analysis is roughly correct, but I disagree completely with its interpretation. I disagree strongly that the IPCC authors were at fault for changing the priors, and I disagree strongly that the uniform feedback prior is the best choice. Thirdly, I disagree on the significance of all of this. I’ll deal with each in turn.

1) I was fully aware of the use of our data and the choice of priors by the IPCC. However, my paper was published, so I didn’t need to approve of its use by the IPCC, as it was in the public domain. In fact, to keep the chapter as objective as possible I did not get involved in the assessment of my own paper and was happy to let my expert colleagues decide on its merit or otherwise.

I also believe that the work put into assessing our paper by the IPCC authors is in fact a great testament to their professionalism. Rather than take our published numbers at face value, they looked very carefully at our paper and took the deliberate step, using their statistical and climate expertise, of transforming our results to a uniform prior in sensitivity. They plainly state this in the report and are not trying to hide it. They also did this for sound scientific reasons based on the literature at the time, and they acknowledge the problem of choosing the right prior in the chapter.

Like Nic, I personally really like non-GCM ways of estimating sensitivity, but I quite understand why the IPCC didn’t give it any more weight than any other study. The major problems with FG06 are the short time period of observations, regression errors and satellite data errors. This was again a perfectly sound example of expert assessment.

2) Nic keeps mentioning OLS regression as the physics-based correct method, and that using this means one should choose a uniform prior in feedback. If you read FG06, we in fact spend a lot of time worrying about the choice of regression model. In the end we went for OLS, but this choice is not clear-cut in complex systems where everything is interacting and isolating cause and effect becomes hard. All this means is that, as Annan points out in his Climatic Change paper, the choice of prior isn’t an easy call and is very subjective. There is no one right answer for the best prior choice.

3) My third point is that in the meantime, just as Fred Moolten points out above, the science moves on. The work by James Annan shows clearly the effects of different prior choices and that there is not one “correct” answer. Gregory and Forster 2008 and other papers show how high sensitivities don’t really make any difference to the current rate of climate change, as high sensitivities slow down the system response. Finally, we have repeated FG06 with updated observations from ERBE and CERES in Murphy et al. (2010), JGR D17107, and got a median climate sensitivity estimate much higher, around 3.0 °C. Note that the Murphy paper quotes a feedback value of 1.25 watts per square metre per K. We deliberately did not invert for sensitivity, as we are unclear how representative these values are of the longer-term climate response.

Piers, thanks very much for making these comments.

Dr. Forster,

You do not directly address Nic’s main point, which is that the imposition of a uniform prior in S is questionable at best (though you do acknowledge Annan . . . you state it as if Annan’s work can be used to justify this prior). However, there are some strong arguments why this prior is not justifiable, and comparatively weak ones for using it.

Priors on the posterior probability distribution of the regression result can be expressed as an equivalent prior on the data. In this case, the prior in S is hardly uninformative in Y, and if the regression residuals were consistent with the uniform prior in Y, then they are certainly inconsistent with the prior in S. There is information readily available to indicate that the uniform prior in S is inconsistent with the available information.

Furthermore, I cannot think of a good reason for the IPCC to use a UNIFORM prior in S to transform your result. There is no result in any of the literature (and probably no result in anyone’s imagination, either) that would result in a uniform probability distribution for S from 0 to 18.5 . . . in which the 50% point of the CDF is 9.25. There is no reasonable argument that can be advanced for utilizing a prior on the answer with a median value that is rejected by all previous analyses.

Given that priors on the result can impose strong conditions on the data, one could potentially see using the PDFs from previous analyses to establish an expert prior in S. Since your original analyses gave values for S that were somewhat lower than previous studies, this would still tend to move your results a bit to the right, but would not have a significant impact and would translate into a prior in Y that is much more likely to be consistent with the residuals.
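The effect described above can be sketched numerically. A minimal sketch, assuming an illustrative Gaussian likelihood in the feedback Y (the mean and standard deviation below are stand-ins, not FG06’s published values), compares the posterior median of S under the two priors:

```python
import numpy as np

# Illustrative Gaussian likelihood for the feedback parameter Y (W m^-2 K^-1).
# These numbers are stand-ins for sketching, not FG06's published values.
Y_MEAN, Y_SD = 2.3, 0.7
F2X = 3.7  # forcing for doubled CO2 (W m^-2), so S = F2X / Y

def likelihood_Y(y):
    return np.exp(-0.5 * ((y - Y_MEAN) / Y_SD) ** 2)

s = np.linspace(0.01, 18.5, 20000)  # S grid, truncated as in AR4 Fig. 9.20
y = F2X / s
dx = s[1] - s[0]

# Uniform prior in Y: the PDF of Y transformed to S picks up the
# Jacobian |dY/dS| = F2X / S**2.
post_y = likelihood_Y(y) * F2X / s**2
post_y /= post_y.sum() * dx

# Uniform prior in S: the likelihood re-expressed in S with no Jacobian,
# which is what fattens the high-S tail.
post_s = likelihood_Y(y)
post_s /= post_s.sum() * dx

def median(pdf):
    cdf = np.cumsum(pdf) * dx
    return s[np.searchsorted(cdf, 0.5)]

print(median(post_y))  # close to F2X / Y_MEAN, about 1.6
print(median(post_s))  # noticeably higher, with a fattened high-S tail
```

Even with these made-up inputs, switching from a uniform prior in Y to a uniform prior in S visibly shifts the posterior median of S upward, and the shift in the upper percentiles is larger still.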

This was the salient point. The rest of the discussion was ancillary.

If you read Nic’s first paragraph, his main point is that this is an IPCC error, just like the WGii errors. This is what I disagree with.

I would not use a uniform sensitivity prior today, and I agree with Nic, you and Annan that this skews results towards high sensitivities. But in 2006, when the IPCC report was being written, I was not that concerned that my result was transformed. I believe it was done for good scientific reasons at the time: firstly, to make results comparable on the same figure, and secondly, to be as cautious as possible about claiming an improved understanding over the Third Assessment Report in constraining the top end of sensitivity. This skepticism about the significance and breakthrough nature of one’s own results is good, not bad, science.

The story of how our paper was published is a case in point. We submitted it to Nature or Science, I can’t remember which now. We were saying we had made a breakthrough in constraining high sensitivity. The reviewers pointed out that we had done no such thing; we had simply used a different (constant feedback) prior compared to previous studies. Nature rejected it and it ended up in the Journal of Climate!

Piers,

Can I first of all thank you for commenting here. Such engagement is very helpful.

You say that you disagree with my characterization of the IPCC’s transformation of your 2006 results onto a uniform prior in sensitivity basis as an error. Strictly, you may be right on that, although I don’t see how a transformation that had the effect on the PDF for Y that it did (as per my Fig. 6) can reasonably be regarded as a valid alternative to the original, approximately uniform prior in Y basis. That basis was implicit in the error analysis and assumptions, as you wrote originally, and I don’t think that use of any other justifiable regression method would have changed it.

I would concur with the response of a statistics professor to whom I sent the blog post:

” I would observe that your use of the word “error” for the apparent changes made to that paper’s results for inclusion in the IPCC report is, to put it mildly, too kind – a “distortion” of the empirical results is probably more appropriate.”

Finally, two queries re Murphy et al, JGR D17107 (2009, not 2010?), if I may:

1. Has the data and code used to analyse it been archived anywhere? If not, then would you mind encouraging the lead author to do so? Without archived code, it is often impossible to work out how exactly the data was processed. And external data sets are often updated or revised, with the version used for the study ceasing to be available. This was not so much an issue for your 2006 paper, as you provided good graphs of all the data used and the method was described in some detail.

2. The 1985-1998 interannual results for the ERBE non-scanner data shown in Table 1 and Figure A1(a) of Murphy et al, showing zero net feedback, seem totally at variance with the comparable results in your 2006 paper (over 1985-1996) of strong net feedback (Y=2.5 using the HadCRU data used in Murphy et al). Perhaps you can explain this discrepancy? I should add that the lack of any explanation of this in the Murphy paper dented my confidence in it.

Choice of prior doesn’t influence determination of the likelihood function. Its Gaussian form and width are not influenced by that, and that’s where the empirical analysis and its error analysis end.

The prior enters only when this functional form is interpreted as a PDF and used to determine the confidence range. It’s not an error, or contrary to the experimental work, to do this part using a different prior, because priors are subjective and must reflect the thinking of the one who is using it in the analysis.

Piers Forster:

James Annan’s observations about the effects of different priors were not novel at the time. On the contrary, it had been established for decades that the posterior distribution is sensitive to the choice of prior distribution when observational evidence is scarce.

It’s not so much a case of the science moving on, but of the IPCC authors failing to take on board basic and standard instruction in the application of Bayesian analysis, even when it was pointed out to them in Annan’s review comments.

(The two quotations above are from two separate comments of Dr. Forster – the appearance of contiguity is accidental, apologies).

Dear Dr Forster,

Nic writes of the error arising from an alteration of a peer-reviewed result. This was carried out by the IPCC, wasn’t it?

Furthermore, the comparability that outsiders would like to see on the 2xCO2 question is between models and data, not between models and a transformation of results based on actual data that makes them look more like the models, is it not?

Lastly, as a climate change skeptic, I would offer to you – those of us who have examined the IPCC’s claims know that the errors in the IPCC are systematic.

You know the IPCC as a group of atmospheric scientists and physicists, but we all know (including you) that it is a transnational organization devoted to providing evidence for the UNFCCC’s mission objectives.

As an outsider, I read your “cautious as possible about showing an improved understanding over the third assessment report about constraining the top end sensitivity” as purely an expression of the precautionary principle and not as being for “good science reasons”.

You accept that it was an error, but I don’t, and as far as I understand, Piers Forster agrees with my view.

What the IPCC did was not to change the peer-reviewed result but to use the result as scientific results are supposed to be used. The report also explained in some detail how the way they used the result differed from the corresponding part of the original paper.

You may legitimately think that the way the IPCC used the result is not necessarily the one that gives the most balanced view of what has been learned about climate sensitivity. Both Piers Forster and I seem to hold that opinion, but we both nevertheless take the view that the IPCC used the result in a fully acceptable way, making a different judgment on other related factors while using the empirical results as given by the paper.

Sure, no error; just a surreptitious sneaking of the ball which has now blown up in their faces.

The Gang That Couldn’t Shoot Straight now faces withering fire. No error, no, none at all.

====================

Pekka,

In a sane world, people would be changing models to match reality, and not the other way around.

The data and results from any piece of work, Forster’s included, ‘belong’ to the authors. After it is published, the interpretation of the results is in the common domain. The author’s interpretation is just one of the many that are possible, and even he has to reason out its validity. He or she has no special privilege such that his view has to be accepted preferentially. Nic Lewis has provided a reason as to why the spin the IPCC put on the F&G results is not valid. Merely stating, ‘I, being the author, was comfortable with it myself’ is not enough.

What the IPCC publishes has public consequences.

You must be aware that the fat-tailed PDF graph of the IPCC (Fig. 9.20) featured in a submission to the Garnaut Review in Australia, where the Garnaut reports have become the proclaimed basis for a ‘tax on carbon’.

Saying ‘I’m ok with that interpretation’ is good enough for intra-departmental conferences, where you voice disagreements just a bit but not too much, but is not good enough for the outside world.

Shub,

As Forster wrote explicitly, published results are common property of the scientific community. They do not belong to anybody. No property rights or intellectual rights limit their use, except that the source has to be declared.

The usage in the IPCC report does not in any way conflict with the empirical work, the difference concerns only the choice of additional input that’s needed to transform the empirical findings to probabilities. Because the input is totally independent of the empirical work (it would not be a correct prior without being totally independent), it’s not conceivable that the authors of the empirical analysis would somehow limit the choices that others make.

In this particular case the IPCC report is perhaps even exceptionally complete in describing the difference in its usage compared to the original paper.

All the above has also been said or implied by Piers Forster in his comments in this thread.

Pekka,

My response to your post has appeared below. Just FYI.

Piers Forster answered already, but I submit the following as it was written, before I saw his answer.

I think that Piers Forster did answer that point, saying that the prior can be chosen by everyone using the information provided by the empirical study in their subsequent analysis. The authors of the original study have published their results, and that’s the point where responsibility transfers to the users of the data. That includes the IPCC, when the IPCC is doing something in addition to telling what the original paper did.

He also acknowledged that there is no unique answer for the choice of prior.

The IPCC has one strong and valid reason for displaying the curves as they are displayed in Fig. 9.20: climate sensitivity is a parameter deemed important, and the one about which more information is sought. It’s right and natural to show information on the parameter that is of most interest.

Some problems remain, however, in the details of the IPCC representation. When Figure 9.20 is interpreted solely as a summary of analyses that tell what we have learned from empirical observations, combined with the modeling needed in their interpretation, the information is given by the values of each curve at each value of S relative to the values of the same curve at other values of S. These values tell what the empirical data tells, but the curves are not probability density functions; they are likelihood functions (or conditional probability functions, where the word “conditional” is essential). Therefore the areas under the curves are not meaningful.

There is no informational significance in the fact that the scale of the S-axis is chosen as uniform until a prior that is uniform in S is entered into the analysis. That may be misleading unless the prior is really used and justified. Without the prior, the figure is a correct description of the empirical evidence, but perhaps too easy to misinterpret.
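The caution above about reading areas off Figure 9.20 can be illustrated with a small sketch (purely illustrative numbers, not FG06’s): the same likelihood curve, plotted against Y or against S = 3.7/Y, encloses different areas over corresponding intervals, so area only becomes probability once a prior, with its Jacobian, is supplied.

```python
import math

# Illustrative numbers only. A Gaussian likelihood in the feedback Y,
# re-plotted against S = F2X / Y, is the same curve point-for-point,
# but areas over corresponding intervals differ: without a prior,
# area under a likelihood curve is not probability.
Y_MEAN, Y_SD, F2X = 2.3, 0.7, 3.7

def lik(y):
    return math.exp(-0.5 * ((y - Y_MEAN) / Y_SD) ** 2)

def integral(f, a, b, n=10000):
    # midpoint-rule numerical integration
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

area_over_Y = integral(lik, 2.0, 3.0)  # Y in [2, 3]
# the matching S interval is [3.7/3, 3.7/2]
area_over_S = integral(lambda s: lik(F2X / s), F2X / 3.0, F2X / 2.0)

print(area_over_Y, area_over_S)  # the two areas are not equal
```

The discrepancy is exactly the missing Jacobian factor; that is why the curves in the figure only acquire probabilistic meaning relative to a stated prior.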

None of these problems is a problem of Piers Forster; the issue of an error in the representation of Gregory 02 is something that should concern his co-author, but not him directly.

Thank you, Piers. In the hustle and bustle of the blogs, I think many have missed Annan’s points, and your methodological point, that high sensitivities slow down the system response, has been missed as well. Not to argue one way or the other, but advocates on either side propose methodologies that ensure we end up arguing in terms of assumptions more than results. Your post brings a bit of balance.

Your statement

“In fact, to keep the chapter as objective as possible I did not get involved in the assessment of my own paper and was happy to let my expert colleagues decide on its merit or otherwise.”

was priceless.

Though I do find the conversation on your paper riveting, since I am interested. Hope you find some merit in it as well.

John, thanks. I’m all for this debate and openness of the science. Reading this makes for a fun coffee break! Bests