Overconfidence in IPCC’s detection and attribution: Part III

by Judith Curry

The focus of this series on detection and attribution is the following statement in the IPCC AR4:

“Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”

Part I addressed the IPCC’s detection strategy and raised issues regarding the IPCC’s inferences about the relative importance of the multi-decadal modes of natural internal variability (e.g. AMO, PDO).  Part II addressed uncertainties in external forcing data sets used in the attribution studies and the relevant climate model structural uncertainties.  Part III addresses deficiencies in the overall logic of the IPCC’s attribution argument.

Summary. Consilience of evidence, degree of consistency, and expert judgment, spiced with some Bayesian reasoning, are at the heart of the IPCC’s judgment regarding the confidence level of its detection and attribution statement.  To attempt to find a justification for their confidence level, I formulate the apparent underlying argument for their detection and attribution statement.  The uncertainty of each of the premises in the argument is assessed.  Different logics for drawing conclusions from these premises are considered.  Expert judgment based on a consensus approach seems to be the only way to arrive at such a high confidence level.  It is concluded that the IPCC needs logic police, in addition to uncertainty cops and statistics sentries. Some grounds for cautious optimism regarding the AR5’s detection and attribution assessment are presented.

Reasoning about causality

At the heart of the IPCC’s attribution argument is causality: the relationship between an event (the cause) and a second event (the effect), whereby the second event is a consequence of the first.  Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B.  Probabilistic causation means that A probabilistically causes B if A’s occurrence increases the probability of B; this can reflect imperfect knowledge of a deterministic system, or it can mean that the causal system under study has an inherently chancy nature. Causal calculus infers probabilities from conditional probabilities in causal Bayesian networks with unmeasured variables, where a Bayesian network represents a set of variables and their conditional dependencies. Causal calculus enables characterization of confounding variables, i.e. variables that must be adjusted for to recover the correct causal effect between the variables of interest.

I am no expert in logic.  My only formal exposure was a course in freshman logic nearly 40 years ago.  In the past few years, I’ve wandered randomly through Wikipedia and the Stanford Encyclopedia of Philosophy.  I’ve read a few journal articles on the philosophy of science.  I can understand most Bayesian arguments that I’ve encountered, although I’ve never attempted to make one on my own (I am a coauthor on a paper that does).  My point is that I think there are some glaring logical errors in the IPCC’s detection and attribution argument that it doesn’t take an expert in logic to identify.  I look forward to input from the logicians, Bayesians, and lawyers in terms of assessing the IPCC’s argument.

The IPCC’s strategy for assigning a confidence level

The IPCC’s conclusion on detection and attribution is reached using probabilistic causation and counterfactual reasoning, whereby ensembles of simulations conducted with and without anthropogenic forcing are evaluated for their agreement with observations.

Formal Bayesian reasoning is used to some extent by the IPCC in analyzing detection and attribution.  The reasoning process used by the IPCC in assessing confidence in its attribution statement is described by this statement from the AR4:

“The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgement is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change. The assessment approach used in this chapter is to consider results from multiple studies using a variety of observational data sets, models, forcings and analysis techniques. The assessment based on these results typically takes into account the number of studies, the extent to which there is consensus among studies on the significance of detection results, the extent to which there is consensus on the consistency between the observed change and the change expected from forcing, the degree of consistency with other types of evidence, the extent to which known uncertainties are accounted for in and between studies, and whether there might be other physically plausible explanations for the given climate change. Having determined a particular likelihood assessment, this was then further downweighted to take into account any remaining uncertainties, such as, for example, structural uncertainties or a limited exploration of possible forcing histories of uncertain forcings. The overall assessment also considers whether several independent lines of evidence strengthen a result.” (IPCC AR4)

From this statement, I infer that their objective analysis produced a very high level of confidence based upon multiple lines of evidence (which is referred to as a consilience of evidence), which they then downweighted to account for remaining uncertainties.

The underlying detection and attribution argument

As far as I can tell there has been relatively little discussion of the logic underlying the IPCC’s detection and attribution.  Richard Lindzen has stated that he is not impressed by the IPCC’s logic:

“However, with global warming the line of argument is even sillier. It generally amounts to something like if A kicked up some dirt, leaving an indentation in the ground into which a rock fell and B tripped on this rock and bumped into C who was carrying a carton of eggs which fell and broke, then if some broken eggs were found it showed that A had kicked up some dirt.”

Consider the following argument, which I think must underlie the IPCC’s assessment of attribution and their high confidence in this assessment.  Uncertainty in each of the premises is characterized qualitatively by the Italian flag analysis described in Doubt, whereby evidence for a hypothesis is represented as green, evidence against is represented as red, and the white area reflects uncommitted belief that can be associated with uncertainty in evidence or unknowns.

Here is the argument:

1.    Historical surface temperature observations over the 20th century show a clear signal of increasing surface temperatures. Italian flag:  Green 70%, White 30%, Red 0%. (Note: nobody is claiming that the temperatures have NOT increased.)

2.  Climate models are fit for the purpose of accurately simulating forced and internal climate variability on the time scale of a century.  This implies an accurate sensitivity to external forcing and an accurate simulation of the statistics of natural internal modes of variability on multi-decadal time scales. Italian flag:  Green 40%, White 50%, Red 10%.  (Note: the biggest issue is climate sensitivity, with a secondary issue being the magnitude of modes of natural internal variability on multi-decadal time scales, and tertiary issues associated with model inadequacies in dealing with aerosol-cloud processes and solar indirect effects.)

3.  Time series data to force climate models are available and adequate for the required forcing input: long-lived greenhouse gases, solar fluxes, volcanic aerosols, anthropogenic aerosols, etc.  Italian flag: Green 30%, White 60%, Red 10%. (Note: the biggest uncertainties having the greatest impact are solar forcing and aerosol forcing, with red associated with alternate interpretations of solar forcing.)

4.  Natural internal variability is small relative to forced variability, and nonlinear interactions between forcings and responses are small; hence 20th century climate variability is explained by external forcing. Italian flag: Green 25%, White 50%, Red 25%.  (Note: the combination of natural internal variability plus solar variability remains a plausible explanation for most of the 20th century variability; the issue of nonlinear interactions between forcings and responses has not been convincingly explored.)

5.  Global climate model simulations that include anthropogenic forcing (greenhouse gases and pollution aerosol) provide better agreement with historical observations in the second half of the 20th century than do simulations with only natural forcing (solar and volcanoes). Italian flag: Green 30%, White 50%, Red 20%. (JC Note: all climate models produce this result in spite of different sensitivities and different forcing data sets; the models do not agree on the causes of the early 20th century warming and the mid-century cooling, and do not reproduce the mid-century cooling.)

6.  Confidence in premises 1-5 is enhanced by the agreement between the simulations and observations of the 20th century surface temperature.

7. Thus:  Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.

Question A: Is my characterization of the argument correct?

Question B:  Is my assignment of the Italian flag % values correct?   Assignment of % values in the Italian flag analysis is necessarily subjective since the size of the “white” area is by definition unknown.  Other assignments of % would be plausible, but my assignment is not inconsistent with uncertainties stated by the IPCC itself as well as my analysis on previous threads.

Question C:  Assuming that the answers to A and B are “yes”,  how should we assess confidence in the conclusion (#7) based upon the 6 premises?

It seems that different logics could be applied here to assess the level of confidence in the conclusion.  I will provide a few simple examples of reasoning to compare (I know I will need help here from the logicians, Bayesians, and lawyers in the group, but this is a start).

I.  Reasoning from contingency.

Assume that the conclusion (#7) is contingent on each of premises #2-#5.  It seems that the confidence in the conclusion should not exceed the green % value of any of those premises.  Therefore the confidence in #7 should not exceed 40%, and possibly not exceed 25%, which is the green value of the premise with the lowest confidence level.  With a confidence level in the range 25-40%, the IPCC’s conclusion would be plausible (not “very likely”).
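
The contingency bound can be made concrete with a minimal sketch, using the green fractions assigned to the premises above (the numbers are my subjective flag values, nothing more):

```python
# Contingency reasoning: if the conclusion depends on every one of premises 2-5,
# confidence in the conclusion cannot exceed the support for the weakest premise.
# The green fractions below are the (subjective) flag values assigned above.
green = {1: 0.70, 2: 0.40, 3: 0.30, 4: 0.25, 5: 0.30}

contingent = [green[p] for p in (2, 3, 4, 5)]  # the conclusion rests on premises 2-5
print("bound from each premise:", [f"{g:.0%}" for g in contingent])
print(f"overall upper bound: {min(contingent):.0%}")  # 25%, the weakest premise
```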

II.  Reasoning from consensus

Decide what is really important here.  Ignore the white part of the Italian flag and just consider the green and the red; this means that each of the premises has green of 50% or higher.  Focus on premise #5, which is the punchline, and focus only on the period since 1970, where the models agree (ignoring the model disagreement over attribution causes in the earlier part of the 20th century).  This puts you in 90% “very likely” confidence territory.

III.  Reasoning from a consilience of evidence

This is a case based on circumstantial evidence, which is often bolstered by Bayesian reasoning.  A large number of independent lines of evidence increases the confidence.  Problems with this kind of reasoning are highlighted by Nullius in Verba.  Justifying the argument would require conducting the same analysis for the opposing argument (i.e. natural variability).

IV.  I’m sure there are others.  Can anyone do a formal Bayesian analysis of this?
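
As a gesture toward IV, here is a toy Bayesian sketch, with invented numbers, purely to show the mechanics: multiplying supposedly independent lines of evidence mechanically drives the posterior into “very likely” territory, which is exactly why dependence among the lines, and downweighting, matter so much.

```python
from math import prod

# Toy Bayesian update: start from even prior odds on the attribution claim,
# then multiply by one likelihood ratio per line of evidence.
# The ratios are invented for illustration, NOT estimates, and the
# independence of the lines of evidence is itself a contested premise.
prior_odds = 1.0  # 50/50 before considering the evidence

likelihood_ratios = {
    "surface temperature rise": 2.0,
    "model runs with vs. without anthropogenic forcing": 3.0,
    "consistency with other lines of evidence": 1.5,
}

posterior_odds = prior_odds * prod(likelihood_ratios.values())
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior probability: {posterior_prob:.0%}")  # 90% under these invented ratios
```

If the lines of evidence are positively correlated, the effective likelihood ratios shrink and the posterior drops well below 90%; this is the sense in which the consilience argument leans on an independence premise.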

My own confidence assessment (provided on the Doubt thread in the context of the Italian flag) was derived from contingent reasoning.

Circularity in the argument

Apart from the issue of the actual logic used for reasoning, there is a circularity in the argument that is endemic to whatever reasoning logic is used. Circular reasoning is a logical fallacy whereby the proposition to be proved is assumed in one of the premises.

The most serious circularity enters into the determination of the forcing data. Given the large uncertainties in forcings and model inadequacies (including a factor of 2 difference in CO2 sensitivity), how is it that each model does a credible job of tracking the 20th century global surface temperature anomalies (AR4 Figure 9.5)? This agreement is accomplished through each modeling group selecting the forcing data set that produces the best agreement with observations, along with model kludges that include adjusting the aerosol forcing to produce good agreement with the surface temperature observations. If a model’s sensitivity is high, it is likely to require greater aerosol forcing to counter the greenhouse warming, and vice versa for a low model sensitivity.  The proposition to be proved (#7) is assumed in premise #3 by virtue of kludging of the model parameters and the aerosol forcing to agree with the 20th century observations of surface temperature.  Any climate model that uses inverse modeling to determine any aspect of the forcing substantially weakens the attribution argument owing to the introduction of circular reasoning.

Now consider premise #6.  The striking consistency between the time series of observed global average temperature and the simulated values with both natural and anthropogenic forcing (Figure 9.5) was instrumental in convincing me (and presumably others) of the IPCC’s attribution argument.  The high confidence level ascribed by the IPCC provides bootstrapped plausibility to the uncertain temperature observations, uncertain forcing, and uncertain model sensitivity, each of which has been demonstrated in the previous sections to have large uncertainties that were not accounted for in the conclusion. I first encountered the marvelous phrase “bootstrapped plausibility” in an essay by Jerome Ravetz, which motivated me to learn more about this. Bootstrapped plausibility (Agassi 1974) occurs when a proposition that has been rendered plausible in turn lends plausibility to some of its more uncertain premises (e.g. premises 1-5), introducing further circularity into the argument.

From this analysis, it seems that the AR4’s assessment of confidence at the very likely (90-99%) level cannot be objectively justified, even if the word “most” is interpreted to imply a number that is only slightly greater than 50%.  A heavy dose of expert judgment is required to come up with “very likely” confidence.

The ambiguity of “most”

The word “most” introduces some interesting conundrums into the problem. The meaning of most is ambiguous; the dictionary provides a meaning of “a great majority of; nearly all.”  For some reason, I have been assuming that IPCC’s use of the word “most” implied greater than 50%, but now I don’t know where that came from (would appreciate some help with this).  For the sake of argument, let’s assume that the word most is associated with a range (maybe as large as 51-95%).  It seems that it would be more useful to provide probabilities for a range of states within the state space (help, I might not be using the appropriate terminology).

Consider the following collection of states: 0%, 10%, 20% . . . 100%, where the percentage denotes the amount of warming that is attributed to anthropogenic causes.  In terms of confidence levels, there would be virtually zero confidence that the correct state is either 0% or 100%.  There would probably be higher confidence across the range 30-70%.

Presenting confidence in the attribution as a function of different possible states would eliminate the ambiguity associated with the word “most” and would give a more realistic view of the confidence and uncertainties associated with the detection and attribution argument.

An alternative way would be to present the confidence level for each state, whereby the % denotes that anthropogenic causes account for at least the % of warming reflected by that state.  So the 0% state would have a 100% confidence level and the 100% state would have a zero confidence level.
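
This “at least” presentation is just the exceedance (complementary cumulative) form of the per-state distribution. A small sketch, using a purely illustrative distribution over the states (not an estimate):

```python
# Convert a per-state confidence distribution (probability that anthropogenic
# causes account for exactly X% of the warming) into the "at least X%" form.
# The distribution below is illustrative only, not an estimate.
states = list(range(0, 101, 10))  # 0%, 10%, ..., 100%
probs  = [0.00, 0.02, 0.06, 0.12, 0.18, 0.24, 0.18, 0.12, 0.06, 0.02, 0.00]
assert abs(sum(probs) - 1.0) < 1e-9

# P(anthropogenic share >= s) for each state s
exceedance = {s: sum(p for t, p in zip(states, probs) if t >= s) for s in states}

print(f"P(at least 0%)   = {exceedance[0]:.0%}")    # 100% by construction
print(f"P(at least 50%)  = {exceedance[50]:.0%}")
print(f"P(at least 100%) = {exceedance[100]:.0%}")  # 0% for this distribution
```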

Looking forward

Based upon my analysis, the IPCC has presented a weak case for its high level of confidence in its detection and attribution statement.  With the materials it has in hand (data, models, theory), it seems that it could make a much more robust case for a statement on detection and attribution with better logic and a better experimental design for its model simulations (see Part II).

I am cautiously hopeful that the AR5 might do a better job.  The IPCC Expert Meeting on Detection and Attribution Related to Anthropogenic Climate Change (2009) states the following:

“Where models are used in attribution, a model’s ability to properly represent the relevant causal link should be assessed. This should include an assessment of model biases and the model’s ability to capture the relevant processes and scales of interest. Confidence in attribution will also be influenced by the extent to which the study considers . . . confounding factors and also observational data limitations.”

“Confounding factors may lead to false conclusions within attribution studies if not properly considered or controlled for. Examples of possible confounding factors for attribution studies include pervasive biases and errors in instrumental records; model errors and uncertainties; improper or missing representation of forcings in climate and impact models; structural differences in methodological techniques; uncertain or unaccounted for internal variability; and nonlinear interactions between forcings and responses.”

“Confounding factors (or influences) should be explicitly identified and evaluated where possible. Such influences, when left unexamined, could undermine conclusions of climate and impact studies, particularly for factors that may have a large influence on the outcome.”

“For transparency and reproducibility it is essential that all steps taken in attribution approaches are documented. This includes full information on sources of data, steps and methods of data processing, and sources and processing of model results.”

“Estimates of the variability internally generated within the climate system or climate impact system are needed to establish if observed changes are detectable. It is ideal if the observational record is of sufficient length to estimate internal variability of the system that is being considered (note, however, that in most cases observations will contain both response to forcing/drivers and variability). Further estimates of internal variability can be produced from long control simulations with climate models . . . Expert judgments or multi-model techniques may be used to incorporate as far as possible the range of variability in climate models and to assign uncertainty levels, confidence in which will need to be assessed.”

These recommendations reflect a much greater awareness of the challenges and uncertainties associated with attribution.  Dare we hope for improvements in the AR5 assessment?

208 responses to “Overconfidence in IPCC’s detection and attribution: Part III”

  1. David L. Hagen

    Great summary of major issues.

    What Phase?
    May I suggest another implicit/hidden and commonly overlooked critical factor: the assumption of cause vs. effect, or of phase and feedback sign, in models.
    Which is the causal factor? Which the consequence?

    e.g. Does Increasing CO2 increase temperature? OR
    Does increasing ocean temperature increase atmospheric CO2?

    Does reducing clouds increase temperature? OR
    Does increasing temperature reduce clouds?

    Is feedback positive or negative?

    I understand most current models assume one without testing the other.
    Roy Spencer addresses this in a number of posts and articles, e.g.:

    From what I can tell reading the paper, their claim is that, since our primary greenhouse gas water vapor (and clouds, which constitute a portion of the greenhouse effect) respond quickly to temperature change, vapor and clouds should only be considered “feedbacks” upon temperature change — not “forcings” that cause the average surface temperature of the atmosphere to be what it is in the first place.

    Though not obvious, this claim is central to the tenet of the paper, and is an example of the cause-versus-effect issue I repeatedly refer to in the past when discussing some of the most fundamental errors made in the scientific ‘consensus’ on climate change.

    Does CO2 Drive the Earth’s Climate System? Comments on the Latest NASA GISS Paper
    Global Warming as a Natural Response to Cloud Changes Associated with the Pacific Decadal Oscillation (PDO)

    Recommend adding another point or subpoint and flag, addressing this causality issue of phase and feedback.

  2. Very interesting, Judith! I’ve enjoyed this 3-part series and I feel you make a compelling case for reappraising the validity of the IPCC’s assertions.

    On the ambiguity of “most” I would agree with a potential 51% lower value attribution (“Gore won the most votes”) and a reasonable 95% upper. However, I would argue for a tentative but potential upper value of 99% in the case of a superlative adjective combination (particularly if “most” is used as an abbreviation for “almost”, e.g. “almost completely” as “most all”), and even “by far the most complete” can reasonably mark a value as much as 99%. Context is everything, and each instance requires careful consideration on its own merits.

  3. Judith,

    What a great essay! I have only one qualification.

    Where you set out ‘the argument’, your #1 talks of ‘a clear signal of increasing surface temperatures’, and goes on to say that ‘nobody is claiming that the temperatures have NOT increased’. I can accept both. But,

    I would have preferred you to say (1), that there have been periods of warming and cooling in the 20th century, and that the net outcome has been a higher temperature at the end of the century than at the beginning; and (2), that it is not clear exactly how much that increase has been, because of measurement problems.

    I have always had difficulties with both the concept and the measurement of a ‘global temperature anomaly’, in part because of its inapplicability to anywhere in particular, in part because it is an average of averages of averages, and in part because of measurement problems (UHI, bad siting, instrumentation and so on). It may be (some think it is) that the error around the GTA is much greater than any change in it. I would be happy to see a thread on all this. I accept that it is only marginally relevant to your current thread.

  4. Judy- Your posts are providing a forum for a much needed debate on the IPCC assessment process. This is very much appreciated by many of your colleagues. Keep up this valuable contribution!

    Roger Sr.

    I have posted on the Scientific American article that was written about you – see http://pielkeclimatesci.wordpress.com/2010/10/24/misleading-reporting-by-the-scientific-american-that-judy-curry-is-a-climate-heretic/

  5. Frankly, I’m perplexed about all this talk about LOGIC. Generally, as I understand it, degrees up to the PhD level in the natural sciences don’t require courses in logic. The logic of natural science is, as I understand it, built into the mathematics (e.g. calculus, real analysis, etc.).
    What’s more, this discussion is often not about logic per se (e.g. rules of deduction and the fallacies that result when they are violated) but about the use of analogies, such as Lindzen’s dirt-kick and rock scenario.
    So why do the atmospheric and climate sciences require logical analysis? This suggests to me that the science of climate is a science in name only.
    I am not being critical or rhetorical. I have a degree in philosophy so I’m not denigrating logic – I’m just intrigued.
    Also, what I consider illogical or un-logical is time spent analyzing the logic of IPCC. It seems clear that IPCC is an ideological organization that is primarily interested in promoting a political agenda. One cannot find logic in subjective values. The IPCC is grounded in its value system and will say or do whatever it takes to promote its values. It’s like analyzing the logic of Modern Art or Italian food.

    • I’d agree with you on this logic stuff, Tom. I’m OK with Bayesian approaches as an adjunct to standard statistical analyses in scientific subject matter, but I’m at a loss to understand what general logicians or lawyers could usefully bring to the table. (That’s not to denigrate the astounding skills of philosophers, but the criticisms of numbers and models that abound in relation to climate science would become howls of outrage when faced with the elaborate and obscure equations that logicians use.)

      If we’re looking for other disciplines to review and report on IPCC processes and outcomes, I’d think that full transparency of the *initial* scientific conclusions and recommendations and the *final* qualifications and adjustments introduced as a result of international negotiations would be a rich seam for political scientists and economists to mine.
      A simple table of scientists’ results versus final report conclusions could be fascinating (as well as tediously boring in many respects).

    • Hmmm . . . well I guess the problem is the uncertainty monster. When you are trying to test complex hypotheses with numerous premises, each of which is uncertain, how should you reason about this? Reasoning and hypothesis testing are key elements of science. Science is much messier than mathematics, and some sort of probabilistic reasoning is often the only way to approach a problem. Forget the IPCC for a moment, think purely of the question of how we should reason about the detection and attribution of 20th century climate change. This is a complex problem that cannot be solved only with mathematics, and it is much messier than problems typically encountered in physics and chemistry owing to the complexity of the system being analyzed.

      • Math and Climate Science
        I disagree (respectfully). Certainly the science of sub-atomic particles or genomes is very “complex”. Consider the crisis in physics at the famous 1927 Solvay conference dealing with ‘waves’ and/or ‘particles’, which gave rise to the Uncertainty Principle, quantum mechanics, etc. In short, the logic of mathematics is well equipped to deal with large numbers of variables and the ‘uncertainty’ associated with their relations, e.g. multiple regression analysis. I believe it was Galileo who said: “Mathematics is the language of the universe”. I submit climate is in the universe.

        The problem with climate science was clearly indicated recently by (I think it was) Wegman in his report criticizing Mann et al. He pointed out that one could get a PhD in climate-related sciences, from Ivy League universities no less, without taking a single course in statistics. To my mind that is the ‘scandal’ of climate science.

        Professor Curry, I submit that if you want to reform your ‘science’, the place to start is with math education. There cannot be a science without math, and the math of large numbers of uncontrolled variables is calculus-based probability theory.

        To reform climate science, in my opinion, reform the math education of climate scientists.

        Again – respectfully.

      • Tom, atmospheric dynamicists are pretty good with math of the applied mathematics variety that is encompassed by mathematical physics. Many (not all) are very poor at statistics. I definitely agree with your point about statistics. But reasoning about uncertainty and the logic of scientific arguments is essential also.

        Re complexity, how many sub-atomic particles and genomes comprise the subsystems that comprise the earth system?
        The number of degrees of freedom in any model or theory of the climate system far swamps that of sub-atomic particles or genomes. And the climate system cannot be controlled or analyzed in a laboratory.

      • Logic is the right word but it is neither simple inductive nor deductive logic. We are dealing with what is called decision making under uncertainty, which, alas, is not a well developed field. It has mostly been developed for, as the name suggests, making decisions, not judging complex scientific claims. Its workhorse is the decision tree.

        In the case of the climate change debate the basic structure is the “issue tree.” (See http://www.stemed.info/reports/Wojick_Issue_Analysis_txt.pdf) It goes like this. There is an initial simple claim, such as “humans are causing potentially dangerous global warming.” This sentence is the top node in the tree. It can be questioned and challenged in several simple ways, asking for evidence or elaboration, citing contrary evidence, etc. Each question or challenge is a node in the second layer of the tree, linked to the top node.

        But each question or challenge can have several simple responses, adding another layer of linked sentences. Then each response can again be challenged or questioned and so it goes, a tree structure.

        Issue trees can grow large quickly, and the climate debate certainly is very large. If every node has just 3 linked nodes in the next lower layer then the 10th layer has over 50,000 nodes. This is the incredibly complex, yet simple, structure of the debate. It is not a simple argument in either the inductive or the deductive sense. The weight of evidence, or degree of uncertainty, of the top node proposition potentially depends on everything that is said in the tree. No wonder the debate seems endless and there is lots of room for rational disagreement.
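
The growth arithmetic is easy to check with a short sketch, counting the top claim as layer 0 and assuming the constant branching factor of 3 used above:

```python
# Size of an issue tree with a constant branching factor.
# Layer 0 is the single top-node claim; each node has `branching` children.
branching = 3

def layer_size(layer: int) -> int:
    """Number of nodes in a given layer."""
    return branching ** layer

def total_nodes(depth: int) -> int:
    """Total nodes in layers 0 through depth."""
    return sum(layer_size(k) for k in range(depth + 1))

print(layer_size(10))   # 59049 nodes in the 10th layer alone (over 50,000)
print(total_nodes(10))  # 88573 nodes in the whole 10-layer tree
```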

      • David L. Hagen

        David Wojick
        Thanks for the link.
        See “Public Policy Forecasting” and the Global Warming Audit.
        How do these methods apply/compare with your complex issue tree analysis?
        Do you have a link to your thesis or summary papers on it?

      • I will look at your stuff, David. Unfortunately I have never had the opportunity to do an issue tree of the climate change debate, except in my mind’s eye. It requires making both sides of the debate clear and neither side wants to pay for that. I have mostly done regulatory debates. See http://www.bydesign.com/powervision/Mathematics_Philosophy_Science/ENR_cover_story.doc

      • David L. Hagen

        Great line:

        “that the ideas are held together by unspoken questions. What point is this sentence making? What point is it responding to?”

        I would strongly vote for applying your method to alternative fuel supply and climate change. Should be worth an NSF/DOE/EPA grant. See Robert Hirsch’s book. He exposes the real challenge behind all the climate “hot air.”

      • I have not seen any RFP directed toward elucidating the climate debate. AGW is federal policy at this point, although that may change. I did approach NSF about this several years ago but was told that the science is settled! (How times change.) Research is focused on issues internal to AGW, like the carbon cycle, bigger models, and aerosols, instead of natural variability and uncertainty. However the lack of warming over the last decade has forced some attention on natural factors. It will be interesting to see how the rise of skepticism affects the global change research program.

      • I am tackling DMUU (decision making under uncertainty) next week, will need help!

      • Happy to help, on-line or off. You will find nothing about issue trees as I don’t publish. My clients are federal agencies and industrial firms, whose issues are private. I think the intelligence community is doing a lot on issue analysis and visualization, but some of it is classified.

      • Dr. Curry (Judith?), your premise #1 (or a supplemental premise) needs to be that the historical global temperatures are in fact the same as those used in premise #5 and following. Mere warming is not enough. Any such specific temperature premise will get a large red area. This specific temperature profile is after all not an observation. It is the output of a complex, controversial “Jonesian” statistical model (as I like to call it), with many premises of its own.

        If this temperature profile is incorrect, then the models’ ability to fit it is, if anything, evidence against AGW. The models will have explained the wrong thing. (Rats!) For example, suppose we take the actual temperature profile of the earth since 1978 to be the satellite record, not the Jonesian estimates. The steady warming trend which GHG increases are purported to explain is not there. In fact the profile average is flat prior to the 1987 ENSO cycle, and flat thereafter, albeit at a higher level. This does not look at all like GHG warming.

        In short, the validity of the argument depends critically on the validity of the temperature profile used, and this specific profile is highly contentious. Up the red, as it were.

      • Dr. Strangelove


        I propose a different method of hypothesis testing that is independent of subjective opinions and uses only hard data – regression analysis using analytic geometry and matrix algebra.

        It is simple and accurate. Take global temp. and CO2 data, say 100 yrs. of the last century. Plot it in a graph: y-axis is temp., x-axis is CO2. Look at the graph: it has many fluctuations, up and down along the y-axis. What types of mathematical equation can produce that pattern? There are three:
        1) a linear random function
        2) a multiple variable linear equation
        3) a single variable polynomial equation

        Find the equation that best describes the graph; that defines the relationship of temp. and CO2.

        Why is this method applicable and effective? Global temp. is not a random event. It’s a physical phenomenon obeying the laws of mathematical physics. Therefore there exists an exact solution that determines global temp. from a set of physical variables. We do not expect to find the exact solution because the equation may be too complex and unsolvable. However, it is possible to find an approximate solution, one that yields a small error, if we correctly identify the significant variables. A small error is an error that does not exceed the observed effect of a hypothesis.

        First, formulate a hypothesis: CO2 is the most significant variable that determines global temp.
        Second, find the best possible solution of the hypothesis. From the 3 types of possible equations, we know the polynomial equation applies in this case. The best possible solution is an (n-1) degree general polynomial equation, where n is the no. of data pairs (temp, CO2). It is the highest degree polynomial that is solvable from the data set and yields the least error.

        Third, test the hypothesis. Compute the error by comparing the predicted values of the polynomial equation vs. actual temp. data. The observed effect of the hypothesis is 0.6 C in 100 yrs. of the last century. The predicted effect of CO2 on global temp. expressed in percentage is:
        1 – Ea/OE
        where Ea is the absolute ave. error and OE is the absolute observed effect. The effect may be 0 to 100%. If it yields a negative predicted effect, it means the error is larger than the observed effect. This falsifies the hypothesis because the best possible solution using CO2 is not an approximate solution. Therefore CO2 is insignificant. The approximate solution contains other more significant variables. It is impossible for CO2 alone to determine global temp.

        The (n-1) degree general polynomial equation can be solved using (n x n) matrix algebra. I have not done the calculations, as they are tedious, but it is definitely solvable. I respectfully suggest you write a paper on this, as it may finally settle the issue of how much is the effect of CO2 on global temp.
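        The proposed fit and error metric can be sketched in a few lines (a minimal illustration using synthetic, made-up data rather than real observations; a modest polynomial degree is used, since an (n-1)-degree polynomial through n points interpolates exactly and the error test becomes vacuous):

```python
import numpy as np

# Synthetic stand-ins for a century of data (hypothetical values only).
rng = np.random.default_rng(0)
co2 = np.linspace(295.0, 370.0, 100)                      # ppm
temp = 0.008 * (co2 - 295.0) + rng.normal(0.0, 0.1, 100)  # anomaly, deg C

# Fit a polynomial T = p(CO2) by least squares (centered for conditioning).
x = co2 - co2.mean()
coeffs = np.polyfit(x, temp, deg=3)
predicted = np.polyval(coeffs, x)

# The commenter's metric: predicted effect = 1 - Ea/OE, where Ea is the
# mean absolute error and OE the observed effect (0.6 C per century).
Ea = float(np.mean(np.abs(predicted - temp)))
OE = 0.6
predicted_effect = 1.0 - Ea / OE
print(f"Ea = {Ea:.3f} C, predicted effect = {predicted_effect:.1%}")
```

        On this test a negative predicted effect would falsify the hypothesis that CO2 alone determines global temp.; the synthetic data above fits well only because it was constructed that way.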

      • DS, just to highlight the complexity of the uncertainties, there are at least two big problems with your “simple” approach. First, there is no hard data. As I explain immediately above, the surface temps usually used are the output of complex statistical models which have lots of problems. These are not measurements or hard data by any means. Given their inconsistency with satellite temps the surface models are probably very wrong. There are similar problems with CO2 levels, especially in the first half century. We have not had a climate observing system so all of our “data” is really guesswork. This is a fundamental problem.

        Second, you are only finding a correlation, not a cause-effect relation. For example, there is an interesting argument to the effect that the CO2 rise is caused by the temp rise. This argument notes that the correlation is too good to support CO2 causing temp because CO2 is only one of many purported temperature drivers under AGW, so it supports the (reverse) temp causes CO2 hypothesis, in which case AGW is completely false.

        Note that I am not arguing that any of these arguments is correct. As a skeptic I merely note that they are unresolved.

      • Dr. Strangelove


        I fully understand the two points you raised even before I proposed this method of hypothesis testing. I agree with you. The errors in global temp. data from 1850-2005 are so large that one cannot ascertain global warming until 1990 onwards. 100,000 yrs. of proxy data show that increase in temp. precedes increase in CO2 by 800 yrs. on ave.

        My point is, for all its shortcomings, the proposed method is still superior to the IPCC logic, which is qualitative and subjective (double guesswork). We have to keep trying to find better ways of assessing CO2 impact on global temp. given the constraints (inaccurate data). The alternative is surrender: “we don’t know, let’s not try, it’s futile.” Social sciences like economics are equally messy and uncertain, but that doesn’t stop economists from making econometric models and trying to improve their quantitative techniques.

        Yes, we are looking for correlation. If there is strong correlation, we have to determine which is cause and which is effect. If there is no correlation, then CO2 is not the cause. Crude, but better than Bayesian logic.

      • I agree with your overall thesis. For example, the AGW modelers are fond of saying that their results are reasonably good when in fact they are not good at all and this lack of fit can be measured. As for Bayesian logic, it merely attempts to measure the strength of belief, in questionably probabilistic terms. I have no use for it. The weight of evidence should not depend on who is thinking about it.

    • The reason logic is needed is that experimentation is not possible. We cannot wait 100 years to see which model is correct. The arguments in the IPCC are based on a chain of argument that is logically based: X + Y + Z implies K. But with incorrect data/premises, the logical chain is falsified.

  6. David L. Hagen


    Where in your analysis do you consider evaluation of climate model projections against subsequent climatic evidence? e.g. see
    Roger Pielke’s post: Very Important New Paper “A Comparison Of Local And Aggregated Climate Model Outputs With Observed Data” By Anagnostopoulos Et Al 2010 commenting on:

    Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094–1110.

    . . .we found that local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale.

    Wilby, R. L. (2010) Evaluating climate model outputs for hydrological applications – Opinion. Hydrol. Sci. J. 55(7), 1090–1093.

    there appears to be no immediate prospect of reducing uncertainty in the risk information supplied to decision makers.

    Kundzewicz, Z. W. & Stakhiv, E. Z. (2010) Are climate models “ready for prime time” in water resources management applications, or is more research needed? Editorial. Hydrol. Sci. J. 55(7), 1085–1089

    the current suite of climate models were not developed to provide the level of accuracy required for adaptation-type analysis.

  7. “Historical surface temperature observations over the 20th century show a clear signal of increasing surface temperatures. Italian flag: Green 70%, White 30%, Red 0%. (Note: nobody is claiming that the temperatures have NOT increased.)”

    I do. The AVERAGE has increased, but that is not a measurement. It is a calculation. There are lots of different ways an average can increase without temperatures getting hotter. You can have a DECREASE

  8. Sorry hit wrong button.

    You can have a DECREASE in the TMax and an increase in the TMin, which would show up as an increase in the average. But it’s not getting hotter. This is exactly what I’m seeing with Canadian surface temperatures. Summers are cooling, while winters are getting less cold. So someone will have to explain how CO2 would cool summers. We have 1/3 fewer days above 30C than we did in the 1920s.

    Thus the range in temperatures is converging. Taken to the extreme the summer and winter temps would be the same in some 700 years at 18C. Since it is impossible for that to happen, at some time in the future, regardless of the level of CO2, the TMax and TMin must start to diverge. Hence a natural cycle. Nothing to do with CO2.
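    The arithmetic behind this point is simple to demonstrate with made-up numbers (illustrative only, not actual Canadian data):

```python
# Hypothetical daily extremes (deg C) for two eras -- illustrative only.
tmax_then, tmin_then = 28.0, 2.0   # wider diurnal/seasonal range
tmax_now,  tmin_now  = 26.0, 8.0   # max is DOWN, min is up

mean_then = (tmax_then + tmin_then) / 2.0
mean_now  = (tmax_now  + tmin_now)  / 2.0

# The average rises by 2 C even though the maximum fell by 2 C:
# the entire increase comes from milder minima, and the range converges.
print(mean_then, mean_now, tmax_then - tmin_then, tmax_now - tmin_now)
```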

  9. You want to get into logic, then I have two logical problems for AGW.

    1) In all other sciences, skepticism is relished as a way to ensure the science is right. In climate science, skepticism and questioning the science are met with ad hominem attacks. It’s a disgrace.

    2) What is the default position in climate science? Are all climate and weather events a priori assumed to be caused by our CO2? Or are all climate and weather events a priori assumed to have a natural causation? All other sciences assume the latter. All events are assumed to have a natural cause unless shown otherwise. But in climate science the opposite is the basic premise. Our CO2 emissions are causing everything until a natural explanation is found. If anything, this is climate science’s greatest logical fallacy. This is why AGW is considered a religion by many skeptics. It just replaces God with humans. God-of-the-gaps is alive and well practiced in climate science.

  10. I think you need more accurate formulations. To begin with #1, terms like “surface temperature observations” and “increasing surface temperatures” create an impression that you might be talking about an increase in all individual temperature records. This is not true; there are many met stations whose records show 100-year-long downtrends. What you should say is that the GLOBAL AVERAGE index of temperatures is increasing, or that the average low-troposphere temperature is increasing (a 5 km layer of air, as per the satellite interpretation of “brightness temperatures”). There is a substantial difference.

  11. Very interesting and thoughtful post. I too have pondered exactly what was meant by “most”, and similarly concluded it can only mean the IPCC has >90% confidence that > 50% of the warming since the mid 20th century is the result of anthropogenic greenhouse gas emissions.

    You’ve tried to understand the IPCC’s train of logic in arriving at this canonical statement. I have always found it instructive to look at the corollary, which is:

    The IPCC has a confidence level >90% that less than 50% of the observed increase in global average temperatures since the mid-20th century is the result of non-anthropogenic external forcings and internal natural variability within the climate system.

    Since, without free parameters and parameterizations calibrated (or fudged, if you like) to match observed data (such as it is), models (the principal means of attribution) are unable to replicate real-world observations, the statement above is patent nonsense.

  12. “…means that A probabilistically causes B if A’s occurrence increases the probability of B…..”

    This is 1st-yr stats, at least as the IPCC seems to be applying it.

    One can make the well-known observation that an increase in ice cream sales leads to increased crime. Ice cream sales correlate with crime very strongly.

    Total junk, of course, as ice cream sales and crime both correlate with temp, not with each other.
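    The confounding can be simulated directly (a sketch with invented coefficients): generate ice cream sales and crime that each depend only on temperature, observe a strong raw correlation, and watch it vanish once temperature is controlled for.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
temp = rng.normal(20.0, 8.0, n)                    # the common cause
ice_cream = 2.0 * temp + rng.normal(0.0, 5.0, n)   # driven by temp only
crime = 1.5 * temp + rng.normal(0.0, 5.0, n)       # driven by temp only

# Raw correlation is high, though neither variable causes the other.
r_raw = np.corrcoef(ice_cream, crime)[0, 1]

# Partial correlation: regress each on temp and correlate the residuals.
res_ic = ice_cream - np.polyval(np.polyfit(temp, ice_cream, 1), temp)
res_cr = crime - np.polyval(np.polyfit(temp, crime, 1), temp)
r_partial = np.corrcoef(res_ic, res_cr)[0, 1]

print(f"raw r = {r_raw:.2f}, partial r (given temp) = {r_partial:.2f}")
```

    Conditioning on the confounder is exactly the adjustment that the causal-calculus machinery described in the post is meant to formalize.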

  13. Phillip Bratby

    The IPCC Expert Meeting on Detection and Attribution Related to Anthropogenic Climate Change (2009) states the following:

    “For transparency and reproducibility it is essential that all steps taken in attribution approaches are documented. This includes full information on sources of data, steps and methods of data processing, and sources and processing of model results.”

    Having to make this type of statement shows what a dreadful state “climate science” is in. This should be normal practice in science and should not need to be given as instructions to “climate scientists” and those writing AR5.

    • Phillip, have you ever seen military SOP documents? Every minute step is made explicit, even for tasks involving experienced qualified operatives.

      A Standard Operating Procedure document is *always* a good idea for large cooperative enterprises. Seeing as the scientists doing the work are volunteers, I think it’s a good idea to give a firm indication that un-paid does not mean not-as-good-as-paid work. (For all I know there may be different conventions, or at least different emphases, in the dozens of disciplines involved – so making the requirements explicit is a way of overriding any of those differences.)

      • Phillip Bratby

        No I haven’t, but the military is not science. I have seen SOPs in industry. I suggest an SOP is similar to a set of QA procedures, but SOPs are for performing tasks whereas QA procedures are about methods and processes. Use of QA procedures should be second nature, but they are necessary as a reference manual for the detail.

    • Well, it will be interesting to see if the AR5 turns out any different from the AR4.

  14. it doesn’t take an expert in logic to identify. I look forward to the input from logicians, bayesians, and lawyers in terms of assessing the IPCC’s argument.

    I am none of those. Your path of reasoning does suggest a simple way to frame things.

    The problem is largely a ‘perceptual’ fallacy and the Monty Hall problem is an example of how a very ordinary situation can be a perceptual boondoggle.

    You use the term ‘overconfidence’; I would suggest that the IPCC is ‘begging the question’.

    I suppose you can try to analyze the situation from the perspective of ‘propositional calculus’ (that course in logic that you might have taken way back when), but I’m not sure that the situation can be cleanly disassembled and framed in such a way.

    For me, ‘begging the question’ has a ‘Monty Hall problem’ quality about it, as both examples involve undeclared and changing Bayesian probabilities.

    Ok,I promised a simple way of framing the problem. Here it is:

    ‘Begging the Question’:

    Should Bill Clinton remain as President because of his philandering with Ms. Lewinsky?

    I call this type of ‘Begging the Question’ a perceptual fallacy because the confusion is formed the instant that one agrees to entertain the question.

    The president of a country must perform many tasks. There is a very very large list of performance criteria type ‘questions’ which a person who is to serve as president needs to meet.

    In fact there are so many ‘questions’ to be asked that it is dubious that any single person can fill 100% of what is expected. In pragmatic terms, a person is suitable to serve in the presidency if they pass 80% – 95% of the expected standard.

    The ruse in asking the question and expecting an answer is to cherry-pick just one question from the very large list and ignore the vast set of other questions.

    As soon as one agrees to partake of considering based on just one question … the overview picture is lost and forgotten.

    The IPCC is floating a single question and cherry picking whatever supportive evidence it can muster in support of a causal proof.

    As soon as one agrees to partake of considering readily provided causal explanations … the overview picture of other causal paths and consequences is lost and forgotten.

    The IPCC controls the focal awareness and can choose to steer ‘consideration’ wherever and however it desires to bolster its assertion.

    As soon as one agrees to partake in ‘joining the chase’ one blinds oneself to a much larger overview and agrees to abide by and focus intensely on the narrow set agenda.

    • Raving, begging the question is a subset of circular reasoning, I think. Seems to be far too much of that in the IPCC.

      • RECURSION:

        The ‘question’ is rhetorical and begged:
        Is the earth getting warmer? (Implied and presumed YES)

        The ‘answer’ is rhetorical and begged:
        Is AGW the causal explanation? (Implied and presumed YES)

        The ‘hypothesized answer’ is rhetorical and begged:
        Does anthropogenic activity produce warming on a global scale? (Implied and presumed YES)

        Recurse on that above. …

        This type of logical fallacy is called the “Unmoved mover” paradox although I prefer to call it ‘Forcing the question’. Therein resides the deeper meaning of the forcing term.

        However reality might be, the situation is a superb ‘Beggars Banquet’ of motivation and funding.

      • interesting, the unmoved mover paradox is a new one for me, sounds much better than chicken and egg

    • David L. Hagen

      Willis Eschenbach brings out how the IPCC is begging the question with a circular argument on the linearity of climate sensitivity:
      Climate models assume linearity.
      Testing climate sensitivity with models shows linear sensitivity.
      Therefore IPCC concludes that climate sensitivity is linear!
      See Nature hates straight lines

  15. Phillip Bratby

    The message from that statement was something I instilled in new graduates. It should not need to be repeated to experienced scientists or “experts”, unless something is seriously wrong – which Climategate has revealed to be clearly the case.

  16. Richard S Courtney

    Dr Curry:

    I think the problem with the models is more profound than you state when you write:

    “The most serious circularity enters into the determination of the forcing data. Given the large uncertainties in forcings and model inadequacies (including a factor of 2 difference in CO2 sensitivity), how is it that each model does a credible job of tracking the 20th century global surface temperature anomalies (AR4 Figure 9.5)? This agreement is accomplished through each modeling group selecting the forcing data set that produces the best agreement with observations, along with model kludges that include adjusting the aerosol forcing to produce good agreement with the surface temperature observations. ”

    I stated my assessment on a previous thread of your blog and I take the liberty of copying it here because I think it goes to the heart of the issue of “Overconfidence”.

    My comment was in the thread titled “What can we learn from climate models” that is at

    It was as follows:

    “Richard S Courtney | October 6, 2010 at 6:07 am | Reply Ms Curry:
    Dr Curry:

    Thank you for your thoughtful and informative post.

    In my opinion, your most cogent point is:

    “Particularly for a model of a complex system, the notion of a correct or incorrect model is not well defined, and falsification is not a relevant issue. The relevant issue is how well the model reproduces reality, i.e. whether the model “works” and is fit for its intended purpose.”

    However, in the case of climate models it is certain that they do not reproduce reality and are totally unsuitable for the purposes of future prediction (or “projection”) and attribution of the causes of climate change.

    All the global climate models and energy balance models are known to provide indications that depend on the assumed degree of forcing resulting from human activity, with anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature. This ‘fiddle factor’ is wrongly asserted to be parametrisation.

    A decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.

    And my paper demonstrated that the assumption of anthropogenic aerosol effects being responsible for the model’s failure was incorrect.

    (ref. Courtney RS ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’ Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).

    More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.

    (ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).

    Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model.

    He says in his paper:

    ”One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

    The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.

    Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange ) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.”

    And Kiehl’s paper says:

    ”These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.”

    And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.

    Kiehl’s Figure 2 can be seen at http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
    Please note that it is for 9 GCMs and 2 energy balance models, and its title is:
    ”Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.”

    The graph shows that the anthropogenic forcings used by the models span a large range of total anthropogenic forcing, from 1.22 W/m^2 to 2.02 W/m^2, with each of these values compensated to agree with observations by use of assumed anthropogenic aerosol forcing in the range -0.6 W/m^2 to -1.42 W/m^2. In other words, the total anthropogenic forcings used by the models vary by a factor of almost 2, and this difference is compensated by assuming values of anthropogenic aerosol forcing that vary by a factor of almost 2.4.

    Anything can be adjusted to hindcast observations by permitting that range of assumptions. But there is only one Earth, so at most only one of the models can approximate the climate system which exists in reality.
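    The compensation Courtney and Kiehl describe can be sketched with a zero-dimensional energy balance, delta_T ≈ sensitivity × total forcing (all numbers below are illustrative assumptions, not values from any actual model): the higher a model’s sensitivity, the more negative the aerosol forcing it must assume to hindcast the same observed warming.

```python
# Toy illustration of the Kiehl (2007) trade-off; numbers are invented.
observed_warming = 0.6   # deg C over the 20th century
f_ghg = 2.6              # assumed greenhouse-gas forcing, W/m^2

required_aerosol = {}
for sensitivity in (0.3, 0.6):   # deg C per (W/m^2); a factor-of-2 spread
    # Total forcing each "model" needs to hindcast the observed warming,
    # and the aerosol forcing it must therefore assume:
    f_total = observed_warming / sensitivity
    required_aerosol[sensitivity] = f_total - f_ghg

for s, f_aer in required_aerosol.items():
    print(f"sensitivity {s}: assumed aerosol forcing {f_aer:+.2f} W/m^2")
```

    The more sensitive toy model must assume a stronger aerosol offset, which is the pattern Kiehl’s Figure 2 displays across actual models.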

    The underlying problem is that the modellers assume additional energy content in the atmosphere will result in an increase of temperature, but that assumption is very, very unlikely to be true.

    Radiation physics tells us that additional greenhouse gases will increase the energy content of the atmosphere. But energy content is not necessarily sensible heat.

    An adequate climate physics (n.b. not radiation physics) would tell us how that increased energy content will be distributed among all the climate modes of the Earth. Additional atmospheric greenhouse gases may heat the atmosphere, they may have an undetectable effect on heat content, or they may cause the atmosphere to cool.

    The latter could happen, for example, if the extra energy went into a more vigorous hydrological cycle with resulting increase to low cloudiness. Low clouds reflect incoming solar energy (as every sunbather has noticed when a cloud passed in front of the Sun) and have a negative feedback on surface temperature.

    Alternatively, there could be an oscillation in cloudiness (in a feedback cycle) between atmospheric energy and hydrology: as the energy content cycles up and down with cloudiness, then the cloudiness cycles up and down with energy with their cycles not quite 180 degrees out of phase (this is analogous to the observed phase relationship of insolation and atmospheric temperature). The net result of such an oscillation process could be no detectable change in sensible heat, but a marginally observable change in cloud dynamics.

    However, nobody understands cloud dynamics so the reality of climate response to increased GHGs cannot be known.

    So, the climate models are known to be wrong, and it is known why they are wrong: i.e.

    1. they each emulate a different climate system and are each differently adjusted by use of ‘fiddle factors’ to get them to match past climate change,

    2. and the ‘fiddle factors’ are assumed (n.b. not “estimated”) forcings resulting from human activity,

    3. but there is only one climate system of the Earth so at most only one of the models can be right,

    4. and there is no reason to suppose any one of them is right,

    5. but there is good reason to suppose that they are all wrong because they cannot emulate cloud processes which are not understood.

    Hence, use of the models is very, very likely to provide misleading indications of future prediction (or “projection”) of climate change and is not appropriate for attribution of the causes of climate change.



    • Richard thanks for reminding me of your earlier post, I sometimes get overwhelmed and miss things the first time around. I’ve downloaded the papers you refer to so I can take a closer look.

    • Kiehl’s paper has been out as long as IPCC AR4 and has been widely discussed. One problem is that many discussants speak as if climate sensitivity were an input to the climate models which is traded off against anthropogenic aerosol forcing, which is another input, to get the “right answer.” I detected that tone (if not those exact words – memory fails) at a talk given by Dr. Spencer some time ago. Modelers do not input climate sensitivity. The models contain as much physics as is computationally reasonable. They include parametrizations and more or less arbitrary inputs when the physics is unknown, the abundances are uncertain or the calculation of the physics requires too many gridboxes or too much computer speed. So what happens is that the aerosol ‘kludges’ (as styled by Dr. Curry) are input and varied to produce a reasonable 20th century climate. The wonderful squabbles over tree rings indicate the difficulties associated with reconstructing past temperature series. Past aerosol series are much less constrained, it turns out. The spatial variability of abundance and properties is daunting in today’s atmosphere. Historical reconstructions of abundance and properties are, well, really difficult to evaluate.
      Kiehl’s paper showed us that the sensitivity of a given model (which is the result of the physics, the parameterizations and aerosol inputs – not the result of being input) trades off against the aerosol-cloud input. This is not shocking. It suggests that the physics in the models all agree pretty well. (There are, of course, modeling olympics where the physics is shown to be consistent.) Since one twiddles or tunes certain poorly constrained historical parameters and poorly understood microphysics (doesn’t ‘tune’ sound more harmonic than kludge? in a music of the spheres kind of way?) to get a reasonable 20th century climate, it is not surprising that the sensitivity and anthropogenic forcings trade off. It is actually reassuring that the trade offs can be so easily characterized as in Kiehl’s figure 1. There is a lot we do not know about cloud-aerosol interaction and that impacts what we do not know about climate sensitivity. The order in the graph suggests to me that much of the rest of it goes pretty well as it should. The IPCC did not hide that result – the sensitivity range 2C to 4.5C for a doubling of CO2 has been well advertised. Shrinking that range may not be possible in the short term. And in the long term, we may have put in too much CO2 for our posterity to cope with. So that is the dilemma. I doubt that the policy makers will have the time or the calm needed to apply Bayesian analysis to the uncertainties. The tone of the discussion is tending away from words with that many syllables. (If you haven’t noticed, 20 of the 21 Republican Senate candidates deny that climate is warming or that humans are doing any of it. – Dr. Lindzen, on the other hand, says that those results are trivial. For him, the question is sensitivity and he is headed for the lower end of IPCC’s range.)

      As for your thermodynamic analysis, you should know that the computer models do a rather good job on the radiation balance. Of course the differences between incoming and outgoing in our current warming state are smaller than can be measured accurately by the satellites currently flying. But their precision permits analysis of the trends (Murphy et al. 2009 – JGR). That analysis has been done without using climate models and it shows that the added CO2 adds lots and lots of energy to the earth/ocean/climate system. It adds much more than is needed to explain the observed warming. Ten times more goes into oceans than into warming the surface.

      We do not know exactly how the earth/atmosphere/ocean system will deal with the increases in internal energy that result from the emissions of long-lived greenhouse gases. This was the content of Trenberth’s famous lament. The usual suspects on a planet made up of gases, water and rocks are that temperatures will go up, ice will melt and circulations will change. (I suppose that Al Capp’s Shmoos could show up and deploy that extra energy to remove the pollution from the air, but I doubt it.) The actual question of interest to the policy maker is “How will these changes impact the price of corn, soybeans and beachfront property?” Lots of careful thinkers think that they are all going up anyway as population increases, but that adding climate change to the mix is unlikely to make things better.

      So reading Kiehl and noticing that historical necessity forces some of Dr. Curry’s circularity, one could ask: did the models encompass a reasonable range of anthropogenic forcings (aerosol–cloud interactions)? If so, then the range of sensitivities coming out of the models is probably pretty good.

      And the adults on the planet will have to live with the uncertainty as they decide how much CO2 to burden our posterity with.

      Chuck Wilson

      • Richard S Courtney

        Chuck Wilson:

        Thank you for your response to my post. You make some good points but you do not address the issue of my post.

        Indeed, you handwave my point away when you say:

        “Kiehl’s paper showed us that the sensitivity of a given model (which emerges from the physics, the parameterizations and the aerosol inputs – it is not itself an input) trades off against the aerosol-cloud input. This is not shocking. It suggests that the physics in the models all agree pretty well.”

        Perhaps, but much more importantly, it proves beyond any question of doubt that “the physics in the models” is not adequate for the models to be useful.

        As you say:
        “The order in the graph suggests to me that much of the rest of it goes pretty well as it should. The IPCC did not hide that result – the sensitivity range 2C to 4.5C for a doubling of CO2 has been well advertised. Shrinking that range may not be possible in the short term.”

        I agree. But the implication of that is being ignored.

        The climate sensitivity derived from a variety of understandings of the “physics” varies by a factor of between 2 and 3. This demonstrates beyond doubt that the “physics” is inadequate.

        What is the value of climate sensitivity?
        Differences of a factor of more than two between the climate sensitivities used in the models demonstrate that the models are each modeling a different climate system, even though – as you say – they each use much the same “physics”.

        There is only one Earth and a model can emulate anything.
        Which one – if any – of the models emulates the climate system which exists in reality?

        And please do not say that an average of model outputs can overcome this problem.
        Average wrong is wrong.

        Also, the inadequacy of the models is greater than you assert when you say:
        “So reading Kiehl and noticing that historical necessity forces some of Dr. Curry’s circularity, one could ask: did the models encompass a reasonable range of anthropogenic forcings (aerosol–cloud interactions)? If so, then the range of sensitivities coming out of the models is probably pretty good.”

        Actually, no, it is not “pretty good”.

        Several people assess the climate sensitivity to be larger than the top of the range used in the models, and others have used empirical studies to determine it as being much lower than the bottom of the range. For example, each of Idso’s 8 “natural experiments” indicates that climate sensitivity is an order of magnitude lower than the bottom of the range used in the models.

        In other words, the climate sensitivity values published in the refereed literature differ by more than an order of magnitude.

        But the outputs of the models depend on climate sensitivity.

        And, therefore, the models are not fit for purpose.

        Also, you make a political point when you say;
        “And the adults on the planet will have to live with the uncertainty as they decide how much CO2 to burden our posterity with.”

        Ok. I will bite. We need to “burden” the atmosphere with as much of this essential plant food as possible for the benefit of future adults. But that has no relevance of any kind to what I wrote.


      • Craig Goodrich

        Dr. Spencer argues that mishandling cloud feedback causes gross oversensitivity in the models. Discussed with satellite data at

        And by the way, the Schmoos are Al Capp’s.

  17. Harold Pierce Jr

    Hello Judy!

    Below is a comment that I posted yesterday over at Sci Am in the comments of their article about you. Nobody commented on it.

    The IPCC should be checking out websites by amateur scientists. There is lots of good work on the web.

    RE: Temperature Data from Death Valley Falsifies the Enhanced AGW Hypothesis.

    Please do the following:

    1. Go to the late John Daly’s website “Still Waiting for Greenhouse” at http://www.John-Daly.com.

    2. On the homepage, scroll down and click on “Station Temperature Data.”

    3. On the world map, click on USA.

    4. Under section “Pacific”, click on “Death Valley.”

    The chart is a plot of the temperature data from the weather station at Furnace Creek and is empirical data that falsifies the enhanced AGW hypothesis in the following way.

    A desert is an arid region of low humidity, a low biomass of plants and animals, little or no free standing or running water and mostly cloudless skies.

    After sunrise most of the sunlight is absorbed by the surface because there is little vegetation to block it. The absorbed sunlight is converted to heat, and the air heats rapidly by conduction and convection. Some of the heat escapes from the surface as outgoing long-wavelength infrared radiation (OLR).

    After sunset the surface cools rapidly as most of the heat is removed by conduction and convection. The air cools because there are no clouds to block the rising warm air and there is little water vapor to absorb the OLR. If CO2 causes any warming of the air near the weather station, we would anticipate a slight but discernible increase in the annual mean temperature that should correlate with the increasing concentration of CO2. We do not know the actual atmospheric concentration of CO2 in Death Valley. We know only that it increases over time, as indicated by the data from the Mauna Loa Observatory.

    The chart shows the temperature-time plots are essentially flat. Thus we conclude that CO2 causes little or no warming of the air in Death Valley, which is the hottest and driest region in North America.

    In winter the air is denser and has a higher concentration of CO2 than in summer. Thus we might anticipate a greater slope of the winter plot compared to the summer plot if CO2 caused any warming of the air. Since the plots are parallel, we can further conclude that CO2 causes no warming of the air.

    Many of the temperature-time plots of weather stations in desert and arid regions are similar to that from Death Valley, such as those in Tombstone, Dodge City and Utah.
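The kind of check being described – fitting a linear trend to a station's annual means and comparing the slope with the year-to-year scatter – can be sketched as follows. The temperature series below is synthetic, not the Furnace Creek record:

```python
# Fit a least-squares linear trend to annual mean temperatures and
# compare the slope with the year-to-year scatter. The series below is
# synthetic (a flat climate plus noise), NOT actual Death Valley data.
import random
import statistics

random.seed(0)
years = list(range(1911, 2011))
temps = [25.0 + random.gauss(0.0, 0.5) for _ in years]  # 0.5 K "noise"

# ordinary least-squares slope in K per year
mean_x = statistics.fmean(years)
mean_y = statistics.fmean(temps)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, temps))
         / sum((x - mean_x) ** 2 for x in years))

print(f"trend: {slope * 100:+.3f} K per century")
print(f"scatter about the mean: {statistics.stdev(temps):.2f} K")
```

For a flat series like this, the fitted trend is small compared with the scatter, which is the commenter's "essentially flat" criterion made quantitative.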

    Here is what the reader should know. The climate scientists are well aware of John Daly and his website. They ignore his work because it is not “peer reviewed.”

    I add these comments and suggestions.

    A plot of the annual mean temperature often shows large excursions from the mean, which are attributable to “weather noise” and are in part due to the variation of sunlight over the course of the year. We can eliminate the effects of sunlight by analyzing the temperature data for several selected days of the year, such as the equinoxes and solstices. The Tmax and Tmin metrics should be analyzed separately, as recommended by Roger Sr. We can estimate the value of the weather noise by computing the standard deviation from the mean.

    We would anticipate that the Tmin trend line should have a slightly greater slope than that of Tmax, since at comparable pressure the air at Tmin is denser than the air at Tmax, if CO2 has any effect on warming the air.

    I have analyzed the temperature data of Sept 21 from the weather station at Quatsino, BC for the 1895–2009 interval and have obtained a value of +/- 1.5 K of weather noise for both Tmax and Tmin. I also used an 11-day sample interval centered on Sept 21 and obtained the same value. This value of the weather noise is only valid for this weather station.
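The weather-noise estimate described here – the standard deviation of a single calendar day's reading across many years – can be sketched like this. The values are synthetic placeholders, not the Quatsino record:

```python
# Estimate "weather noise" for one calendar day as the standard
# deviation of that day's Tmax across many years. Synthetic placeholder
# values, not the Quatsino, BC record.
import random
import statistics

random.seed(1)
years = range(1895, 2010)
# Sept 21 Tmax for each year: a fixed climatology plus ~1.5 K of noise
tmax_sep21 = [16.0 + random.gauss(0.0, 1.5) for _ in years]

noise = statistics.stdev(tmax_sep21)
print(f"weather-noise estimate: +/- {noise:.1f} K")
```

With enough years, the sample standard deviation recovers the underlying noise level; the same calculation repeated for Tmin gives the second estimate the commenter mentions.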

    If this value is similar for other weather stations, then there has not been any global warming for the last 100 or so years.

    Would you like a hard copy of my results?

    BTW, if you want to learn about any climate change, go and talk to the old folks in the rural areas. What do the guys in New York City really know about climate change?

  18. Another thought-provoking article, though I would echo the assertion…

    ” that there have been periods of warming and cooling in the 20th century, and that the net outcome has been a higher temperature at the end of the century than at the beginning; and (2), that it is not clear exactly how much that increase has been, because of measurement problems.”

    …made by Don above, and I’d also add a few points of my own.

    I’d suggest that the attribution of cause to CO2 (wrt temp) has not been sufficiently ‘proven’ in this situation, due to the lack of real progress with the theory and the models. Of course the word ‘proof’ is a subjective one in itself (a matter of the degree to which anything can be proven), but I think it’s apt for this discussion, especially when commenting on the IPCC!

    To my mind, the level of understanding and uncertainty of the climate completely undermines ANY claims of confidence. The casual disregard of the internal variations/cycles and the ‘elimination’ of any external factors (galactic radiation effects are very interesting to me at the moment, especially given the inverse relationship with solar output) disguises the level of ignorance that we have about the climate as a whole. We are discovering new facets and processes at a terrific rate, so for these ‘experts’ to calmly consign everything barring CO2 to irrelevance gives me cause for concern.

    This is not to say, of course, that CO2 may not act in the way they prescribe (though I find that increasingly unlikely at present), but more to try to highlight the political aspects of this debate for what they are.

    If you do not fully understand a system (and for climate we’re not even close), you cannot make any predictions/claims about its manipulations, natural or not, with anything approaching the levels of certainty that the IPCC et al. suggest without reams of evidence to back them up.

    What do we have? Models.

    It’s this reliance on models that I just can’t follow. In tightly defined, well understood situations, yes, models can be invaluable. The climate is neither, so I really have to question the validity of anything these climate models provide, ESPECIALLY given the limitations of the models they are based upon (and this is even before you get into the other issues well known about climate models).

  19. Hi Judith, sorry to be a little off topic. Over at the Science of Doom article on CO2 I was asked a question about your views.

    They were your views as reported by Scientific American, so I didn’t know exactly what you thought:

    . . . Curry asserts that scientists haven’t adequately dealt with the uncertainty in their calculations and don’t even know with precision what’s arguably the most basic number in the field: the climate forcing from CO2—that is, the amount of warming a doubling of CO2 alone would cause without any amplifying or mitigating effects from melting ice, increased water vapor or any of a dozen other factors . . .

    Can you clarify what you believe about the uncertainty in the CO2 forcing (as defined in the IPCC) – in % for example. And is this because of issues with the radiative transfer equations (including questions about the absorption/emission calculations) or about other aspects of climate unknowns?

    Thanks very much.

    • Hi scienceofdoom, I plan to take on this issue in a few months; it defies simple sound bites. BTW, I really like your site and refer people there frequently.

    • An excellent discussion. On the uncertainty of CO2 forcing, my experience has been that the biggest uncertainty is not in the radiation models themselves, but in the effect of clouds. Clouds will reduce the forcing of any GHG by reducing the amount of absorber between the “surface” (the cloud top) and space. In extreme cases very high clouds may even locally reverse the sign of the forcing. CO2 forcing is also a significant function of temperature, so spatial and temporal variations of temperature have a significant effect. Just a guess, but I’d say CO2 forcing uncertainties at the global level of 30% would not be surprising. Thanks for the good work on “The Science of Doom”.
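For reference, the CO2 forcing under discussion is usually approximated by the simplified expression of Myhre et al. (1998), ΔF ≈ 5.35 ln(C/C0) W/m². A rough 30% band around the doubling value (the commenter's guess above, not an assessed number) can be sketched as:

```python
# Simplified CO2 radiative forcing (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)  W/m^2
# The +/-30% band is a rough guess at cloud- and temperature-related
# uncertainty, not an assessed value.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Forcing in W/m^2 relative to a reference concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling = co2_forcing(560.0)            # about 3.7 W/m^2
lo, hi = 0.7 * doubling, 1.3 * doubling
print(f"doubling forcing: {doubling:.2f} W/m^2 "
      f"(30% band: {lo:.2f} to {hi:.2f} W/m^2)")
```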

  20. Judith, you write ” Dare we hope for improvements in the AR5 assessment?”

    As I understand what you have written, it goes something like this. CAGW is real, but AR4 has not established this. Let us hope AR5 does a better job.

    I, and other skeptics, put it a different way. Despite having the best minds available, in AR4 the IPCC could not establish that CAGW is real. This is because CAGW cannot be shown to be real using the “scientific method”. Let us, therefore, abandon AR5, thereby saving the world billions of dollars in unnecessary expenditures.

  21. Judith,

    Is the evidential reasoning you want to criticize made explicit somewhere in the IPCC report? The closest thing I could find is this:


    • Willard, yes, I would say this is their consilience-of-evidence argument. Think of a court case with only circumstantial evidence. More is needed. Especially in the absence of a smoking gun, a strong argument is needed to make the circumstantial evidence convincing.

  22. Dr Curry,
    The hypothesis that CO2 causes the planet to warm has not been demonstrated by the data.

    However, there has been much input from many scientists, mathematicians and statisticians that throws doubt on the proxies and methods used by the IPCC. There is even concern that the temperature data has been enthusiastically over-homogenised in a way that always increases the ‘unnaturally linear’ temperature trend. And then ‘tuning’ the models to that modified data doesn’t appear to be very scientific.

    There is significant doubt amongst the scientific community about the IPCC process, and the hypothesis must, therefore, remain steadfastly a hypothesis.

    It seems to an outsider that the IPCC modelling warriors have got their blinkers on, have lost objectivity and will not stop flogging the dead horse called CO2. I thought that the whole scientific process revolved around evidence confirming a hypothesis; if the confirmation isn’t there, then a new hypothesis is needed. The more the climate scientists try to squeeze their desired result from an apparently flawed process, the more their credibility goes downhill. Except, of course, for those with a vested interest.

    • The CO2 obsession – from the IPCC to 10:10’s infamous movie, to the billions in subsidies to windmill power – is one of the most interesting aspects of the social movement.

  23. I am an expert in logic, especially the logic of complex issues. My Ph.D. thesis was on the problem raised by Thomas Kuhn, namely that scientists with different theories seem to talk past each other. The climate change debate has been my informal case study for many years. Unfortunately I will be out today but I look forward to discussing your lengthy essay on the IPCC’s argument.

    One basic point however is that it is an argument, and that in itself is a problem. The IPCC reports have the logic of advocacy, not assessment. Well known counter arguments are presented poorly, if at all. A true assessment considers the pros and cons in equal depth. If a judgement is rendered it is the opposing view that gets the most attention. We see none of this logic in the IPCC reports. The deepest logical fault is what is not there.

    • David, I am delighted to have someone with your expertise participate on this thread. On the no consensus thread, I mention this issue in terms of analogy to a legal brief.

    • Phillip Bratby

      “A true assessment considers the pros and cons in equal depth.”

      I agree entirely and that is precisely what Einstein and Feynman would have said. The cons are not there in the IPCC reports, thus as you say, it is advocacy not science.

    • David L. Hagen

      David Wojick

      A true assessment considers the pros and cons in equal depth. If a judgment is rendered it is the opposing view that gets the most attention.

      The “What is NOT there” problem was addressed in
      Climate Change Reconsidered
      the 880 page 2009 report by the
      Non-governmental International Panel on Climate Change.

      The true assessment you raise would need to address all the issues and science raised in this Red Team effort. It would appear that as much if not more resources would need to be put into testing the “null hypothesis” and opposing publications to evaluate the “catastrophic anthropogenic global warming” argument and the possible adaptation vs mitigation issues.

  24. “The deepest logical fault is what is not there.”

    Well, I’d lay the responsibility for that on the Expert Reviewers. There were 2500 people who had the chance to comment on the drafts of the various bits of the last report. Some of those reviewers were nominated for the actual sorting and presentation of suggestions. The rest only needed the qualification that they were prepared to maintain confidentiality.

    Any halfway diligent person with good knowledge of the scientific field in question could have put together an argument for inclusion and consideration of papers that had been omitted (in their view) from the overview of the science.

    • Regarding what is not there, this is a key point. Especially in the AR4, they made a conscious decision to include only the things they had a significant level of certainty about. So they left out the “white” part of the Italian flag. An interesting consequence of leaving out uncertain aspects is that they neglected to include the impact of glacier melting on sea level rise.

      • I don’t believe it’s right to say they left out the impact of glacier melting on sea level rise; they merely went with the older models where a consensus could be reached. My understanding is they left out increased sea level rise based upon dynamic ice flow but included this caveat in the discussion:

        “Dynamical processes related to ice flow not included in current models but suggested by recent observations could increase the vulnerability of the ice sheets to warming, increasing future sea level rise. Understanding of these processes is limited and there is no consensus on their magnitude.”

    • Have a look at Donna Laframboise’s blog, where she reveals some interesting facts about authors and editors.

      • and we mustn’t forget the other “kid”, Michael Mann (PhD 1998), who was appointed as a lead author in the IPCC TAR (published in 2001).

    • Richard S Courtney


      You assert:

      ““The deepest logical fault is what is not there.”

      Well, I’d lay the responsibility for that on the Expert Reviewers. ”

      With respect, I think that is grossly unfair to the reviewers.

      I was a reviewer for the IPCC AR4 and a contributor to the NIPCC report.

      My review comments for the AR4 were ignored. Other reviewers report the same.
      The NIPCC report attempts to put the pertinent arguments and evidence that were ignored in the IPCC AR4.

      So, some semblance of true balance is achieved by reading both reports in a critical manner.

      In my opinion, this need to assess both the IPCC and the NIPCC reports demonstrates the poverty of the IPCC process, because science is best conducted by weighing all the evidence (on the one hand but on the other hand) and not as a consideration of two ‘sides’.

      Assessment of most research (not only climate research) is much more complex than two ‘sides’ but that is what consideration of the AGW hypothesis has devolved to become and, in my opinion, the IPCC Charter is responsible for this.



  25. Just to get this past the usual, I’d like to point out a couple of things in the IPCC.

    Detection does not imply attribution of the detected change to the assumed cause. ‘Attribution’ of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence (see Glossary). As noted in the SAR (IPCC, 1996) and the TAR (IPCC, 2001), unequivocal attribution would require controlled experimentation with the climate system. Since that is not possible, in practice attribution of anthropogenic climate change is understood to mean demonstration that a detected change is ‘consistent with the estimated responses to the given combination of anthropogenic and natural forcing’ and ‘not consistent with alternative, physically plausible explanations of recent climate change that exclude important elements of the given combination of forcings’ (IPCC, 2001).


    The approaches used in detection and attribution research described above cannot fully account for all uncertainties, and thus ultimately expert judgement is required to give a calibrated assessment of whether a specific cause is responsible for a given climate change. The assessment approach used in this chapter is to consider results from multiple studies using a variety of observational data sets, models, forcings and analysis techniques.

    The IPCC is straightforward in its introduction to attribution and doesn’t claim anything other than that attribution needs some kind of modelling (because we can’t put the climate in a bottle) and that this method relies on a number of different tactics, including expert consensus on what those tactics show.

    Also, I see no mention of the process of elimination in your seven steps re-stating the IPCC argument (I notice you do not question the ‘detection’). Do you take this into account? What other forcing do you consider to be attributable, based on current data? You mention that solar forcing is one of the larger uncertainties. What do you mean by that? Do you mean paleo-data or recent observation?

    Also, I don’t understand why you think the IPCC is using circular logic. At most, you have built an argument in which one of the premises is wrong (#3, and I’m not sure if your characterization of aerosol inverse modelling is specific enough to help people understand), and therefore it casts doubt on the conclusion. So I’d like to hear you or some other expert give a more detailed account of how aerosols are dealt with in the IPCC AR4 and how that is different now.

    It’s not that I think the IPCC doesn’t have problems in this area, but I think this post is a little too generalized to attack specific uncertainties.

    • Gryposaurus, I discussed most of the points you raise in Parts I and II.

      • So I went back to see what you had said about aerosols, and this is what I found:

        The 20th century aerosol forcing used in most of the AR4 model simulations (Section relies on inverse calculations of aerosol optical properties to match climate model simulations with observations.  The inverse method effectively makes aerosol forcing a tunable parameter (kludge) for the model, particularly in the pre-satellite era. In trying to sort out which models use what for aerosol forcing, I ran into a dead end (rather, a dead link) referenced in Table S9.1.  Sorting this out requires reading 13 different journal articles cited in Table S9.1: an uncertainty monster taming strategy of “make the evidence difficult to find and sort out.”

        This didn’t seem correct to me. I checked the NASA website with models run pre-AR4 and this is not the method they used. While this method may be one used in some of the many models run for redundancy, it certainly isn’t the norm. As I looked for info on your ideas to get more depth, I ran into this conversation from KK’s blog where you make the same argument you do here:

        The aerosol forcing is the biggest tuning in this regard in many climate models, although the GISS model uses published forward calculations (the right thing to do, but still fraught with great uncertainty), whereas many climate models use an inverse method to get aerosol forcing that matches. See the IPCC for a summary of how this is done.

        And Gavin S, who runs the models, responded, including an explanation and put the aerosol argument in context:

        Nobody is arguing that aerosol forcings are not uncertain. But let’s think about what that uncertainty means for the attribution issue, noting that the net effect of aerosols is almost certainly cooling. Imagine that the aerosols have a minimal effect (ie. forcing much less than generally assumed), then you would get a good match with obs using models with a lower-end sensitivity. But you would end up with an attribution to greenhouse gases that would match or exceed the recent trend (i.e. no effect on the AR4 statement). Now, let’s take a case such that the aerosols have a big effect, so that the net forcings are much smaller than the median estimate. In that case, only the models with high sensitivity will match obs, and again, the impact of GHGs will be strong and positive. The reason why the aerosol uncertainties don’t have much of an impact is because they are cooling in the aggregate. Only if there was a significant chance of the anthropogenic aerosols actually having a net warming effect would this change the conclusion and evidence for this is sorely lacking.
        Finally, I cannot speak for what other modelling groups have done, but I can absolutely assure you that your description for how the modelling is done at GISS is not ‘how it works’. Our model’s sensitivity has varied over the years as a function of a number of issues – and we have not adjusted our aerosol forcings to match. For AR5 we will have two variations on the forcings (a parameterised indirect effect and a calculated version). The parameterised effect is exactly the same as what we used in AR4, despite the fact that the model sensitivity is different (slightly higher this time). The calculated indirect effect is what comes out of the code we are using and again, was set independently from the sensitivity calculations (which we have not yet done in any case).
        There may be, as Kiehl (2007) pointed out, some reasons why the AR4 runs as a whole did not span the whole possible space of sensitivity/aerosol forcings. But again, think about what this means for the attribution issue – we are essentially missing the high sensitivity/low aerosol cases (which will have much warmer temperature rises than observed), or the low sensitivity/high aerosol cases (which will have much less temperature rise than observed). In either case, what information can you glean from this in terms of attributing the actual trends? They just won’t be relevant. On the contrary, it is by looking at the simulations that have reasonable matches to observed trends (constructed in any plausible fashion) that allows you to say something robust about attribution.

        This was back in August, and it doesn’t seem you’ve adjusted your premise. Do you have a problem with this explanation, and if so, why?

        And BTW, John NG’s correction to your argument is correct. This was also explained in that KK thread.

        This is not true, and it is not a conclusion of Hansen et al (2007). Indeed, we point out specifically that the 1940s peak in the temperature record is not matched in the forced component of our simulations regardless of the solar forcing used. We suggest instead that is likely to be a feature of the internal variability since individual runs do show excursions of this magnitude (a result also noted in GFDL papers on the subject Delworth et al, and is seen in figure 9.5 in AR4 too).
        Your next line is illogical. “If they can’t explain this earlier warming, why should we be highly confident in their explanation of warming in the second half of the 20th century, which is approximately the same duration and magnitude of the warming during 1910-1940?”
        The reason why there is a difference in our ability to confidently attribute changes in 1910-1940 compared to post-1950 (and for reference 50 years is longer than 30 years), is precisely because of the well-known issues you have mentioned. Confidence in forcings is less, confidence in the temperature record is also less (cf Thompson et al (2007)), and there is not the same contrast between possible solar changes and GHGs that allows for a more confident attribution in the later 20th Century. Are we to assume that you think that any climate change in any period of the past must be attributed with the same confidence as the most recent period before that can be credible? That would certainly be a novel argument.

      • NASA GISS does one of the better jobs with aerosol forcing; for many models it is straight tuning. Much of the text I wrote about the aerosol issue was taken straight from the IPCC report.

        Regarding the 1910-1940 warming. To justify high confidence in the warming since 1970, you should be able to demonstrate that this was not caused by the same mechanisms that caused the 1910-1940 warming. The models don’t agree even with the same forcing. This does not inspire confidence.

      • While I can’t speak as an expert, I did do a little more digging, this time into the IPCC AR4 WG1 and found a lot of information on aerosol modelling and the newest developments. The text that you took from the IPCC was a particular section called “Summary of ‘Inverse’ Estimates of Net Aerosol Forcing”. This section described just one method of modelling aerosols and was direct in discussing why it is good and why it isn’t. This only accounts for a small part of what is done, as far as I can see. In chapter 2.4.3 “Advances in Modelling the Aerosol Direct Effect” it states:

        Since the TAR, more complete aerosol modules in a larger number of global atmospheric models now provide estimates of the direct RF. Several models have resolutions better than 2° by 2° in the horizontal and more than 20 to 30 vertical levels; this represents a considerable enhancement over the models used in the TAR. Such models now include the most important anthropogenic and natural species. Tables 2.4, 2.5 and 2.6 summarise studies published since the TAR. Some of the more complex models now account explicitly for the dynamics of the aerosol size distribution throughout the aerosol atmospheric lifetime and also parametrize the internal/external mixing of the various aerosol components in a more physically realistic way than in the TAR (e.g., Adams and Seinfeld, 2002; Easter et al., 2004; Stier et al., 2005). Because the most important aerosol species are now included, a comparison of key model output parameters, such as the total τaer, against satellite retrievals and surface-based sun photometer and lidar observations is possible (see Sections 2.4.2 and 2.4.4). Progress with respect to modelling the indirect effects due to aerosol-cloud interactions is detailed in Section 2.4.5 and Section 7.5. Several studies have explored the sensitivity of aerosol direct RF to current parametrization uncertainties. These are assessed in the following sections.

        This section points to quite a bit of information that people can look into to get a clearer picture.
        and chapter 8.2.5″Aerosol Modelling and Atmospheric Chemistry” discusses new methods for projecting into the future:

        Climate simulations including atmospheric aerosols with chemical transport have greatly improved since the TAR. Simulated global aerosol distributions are better compared with observations, especially satellite data (e.g., Advanced Very High Resolution Radar (AVHRR), Moderate Resolution Imaging Spectroradiometer (MODIS), Multi-angle Imaging Spectroradiometer (MISR), Polarization and Directionality of the Earth’s Reflectance (POLDER), Total Ozone Mapping Spectrometer (TOMS)), the ground-based network (Aerosol Robotic Network; AERONET) and many measurement campaigns (e.g., Chin et al., 2002; Takemura et al., 2002). The global Aerosol Model Intercomparison project, AeroCom, has also been initiated in order to improve understanding of uncertainties of model estimates, and to reduce them (Kinne et al., 2003). These comparisons, combined with cloud observations, should result in improved confidence in the estimation of the aerosol direct and indirect radiative forcing (e.g., Ghan et al., 2001a,b; Lohmann and Lesins, 2002; Takemura et al., 2005). Interactive aerosol sub-component models have been incorporated in some of the climate models used in Chapter 10 (HadGEM1 and MIROC). Some models also include indirect aerosol effects (e.g., Takemura et al., 2005); however, the formulation of these processes is still the subject of much research.

        In short, I think that although the uncertainties are there, as you correctly point out, the limited information you give can be misleading, and the “kludge” argument (that the IPCC’s use of indirect modelling makes its argument “circular”) doesn’t have evidence to back it up.

        Your point about 1910-1940 is well taken and gets back to the solar or “other” element, yet to be discovered. I’ll take a look what the IPCC says about solar soon.

      • the “kludge” argument that the IPCC using indirect modelling makes their argument “circular” doesn’t have evidence to back it up.

        The “kludge argument” originates from observations of the STS researchers (I think this paper covers it fairly well; see also Serendipity; those guys have some interesting things to say about model inter-comparisons too). This is a separate issue from “inverse modeling” (there are entire journals on this subject). For instance, inverse modeling is how we use models (not necessarily GCMs) in diagnostic techniques of many sorts.

      • I understand the problems, but this is only one element. Judith’s argument is taken from the section on “inverse modelling”. I’m pointing out that merely looking at that one section and using that to make an argument about how the IPCC models treat the overall aerosol issue is incomplete. She uses this argument as one of the premises that the IPCC uses to make its argument, and that, to me, is very misleading. And it is especially important to know how the models are evolving, considering this entire 3-part series is about improving the AR5.

      • Regarding the 1910-1940 warming. To justify high confidence in the warming since 1970, you should be able to demonstrate that this was not caused by the same mechanisms that caused the 1910-1940 warming. The models don’t agree even with the same forcing. This does not inspire confidence.

        As Gavin pointed out in the KK thread in August:

        Indeed, we point out specifically that the 1940s peak in the temperature record is not matched in the forced component of our simulations regardless of the solar forcing used. We suggest instead that it is likely to be a feature of the internal variability, since individual runs do show excursions of this magnitude (a result also noted in GFDL papers on the subject by Delworth et al., and seen in figure 9.5 in AR4 too).

        This is also covered in the IPCC 3.6.6 “Atlantic Multi-decadal Oscillation”

        Over the instrumental period (since the 1850s), North Atlantic SSTs show a 65 to 75 year variation (0.4°C range), with a warm phase during 1930 to 1960 and cool phases during 1905 to 1925 and 1970 to 1990 (Schlesinger and Ramankutty, 1994), and this feature has been termed the AMO (Kerr, 2000), as shown in Figure 3.33. Evidence (e.g., Enfield et al., 2001; Knight et al., 2005) of a warm phase in the AMO from 1870 to 1900 is revealed as an artefact of the de-trending used (Trenberth and Shea, 2006). The cycle appears to have returned to a warm phase beginning in the mid-1990s, and tropical Atlantic SSTs were at record high levels in 2005. Instrumental observations capture only two full cycles of the AMO, so the robustness of the signal has been addressed using proxies. Similar oscillations in a 60- to 110-year band are seen in North Atlantic palaeoclimatic reconstructions through the last four centuries (Delworth and Mann, 2000; Gray et al., 2004). Both observations and model simulations implicate changes in the strength of the THC as the primary source of the multi-decadal variability, and suggest a possible oscillatory component to its behaviour (Delworth and Mann, 2000; Latif, 2001; Sutton and Hodson, 2003; Knight et al., 2005). Trenberth and Shea (2006) proposed a revised AMO index, subtracting the global mean SST from the North Atlantic SST. The revised index is about 0.35°C lower than the original after 2000, highlighting the fact that most of the recent warming is global in scale.

        Still, I don’t see how this adds to the already stated uncertainty in the IPCC’s conclusion about post-1950 warming (most [>50%] of the warming due to AGW). The ability to collect data on solar and the AMO/PDO has increased significantly, and that should be noted as a positive, IMHO, not as something that lends belief that there is some unknown element contributing to current warming, aside from the 5-10% uncertainty that there is an undiscovered element or a theory that presently lacks evidence.

  26. If I may refer to Popper here: when evaluating a theory (a GCM), it is sufficient to show that the model gets something wrong to conclude it is false, but the asymmetry of proof means that good fit with data does not prove it true. By “wrong” I mean something major. All the models, as Judith noted, fail to explain the warming pre-1940 and the cooling from 1940 to 1978. They also get absolute temperatures wrong in many cases. If a model (based on radiative physics, in which black-body radiation goes as the 4th power of temperature) is off by 4 deg C in global mean, that is a serious error. Also, as JC notes, the models differ in their parameters and have been tuned to work well post-1978. By Popperian logic one must note a serious deficiency that prevents us from having “high confidence” in any sense in these models.
    I would also argue by logical analysis that the IPCC takes the following logic:
    Assume the model. If CO2, then good fit. If NOT CO2, bad fit. Good fit; therefore CO2. But this implicitly assumes that the universe of options is covered by the propositions. In tossing a coin the only options are H and T, but for this problem there are other options, including 1) the models were tuned to get this answer, 2) natural internal variability, and 3) forcing variability (e.g., solar, cosmic rays, etc.). The IPCC would need to REJECT 1, 2 and 3 for its logic to be valid, not just ignore them.
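    A schematic way to see the coin-toss point above: the inference “good fit, therefore CO2” only goes through once every alternative that could also produce a good fit has been examined and rejected. A minimal sketch (the candidate list is illustrative, not exhaustive):

```python
# Toy sketch (hypothetical candidates, no climate data): "good fit,
# therefore CO2" is valid only if every alternative that also predicts
# a good fit has been rejected, not merely ignored.

def surviving_explanations(candidates, rejected):
    """Candidates consistent with the observed good fit and not yet rejected."""
    return [name for name, predicts_fit in candidates.items()
            if predicts_fit and name not in rejected]

# Each candidate is tagged with whether it, too, could yield a good fit.
candidates = {
    "CO2 forcing": True,
    "models tuned to get this answer": True,          # option 1 in the comment
    "natural internal variability": True,             # option 2
    "forcing variability (solar, cosmic rays)": True, # option 3
}

# Ignoring the alternatives leaves the conclusion underdetermined:
print(surviving_explanations(candidates, rejected=set()))

# Only after rejecting options 1-3 does "therefore CO2" follow:
print(surviving_explanations(candidates, rejected={
    "models tuned to get this answer",
    "natural internal variability",
    "forcing variability (solar, cosmic rays)",
}))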

    • Craig in terms of the models getting something wrong, it is important to consider the location of the error; it isn’t always the fault of the model itself. For example, the problem with attribution of the early 20th century temperature variations may be more associated with uncertainties/errors in external forcing plus poor experimental design and logic in elucidating the contribution of natural internal variability.

      Your assessment of the underlying logic undoubtedly reflects the reasoning process of many of the experts making the judgments in the IPCC assessment.

      • If you cannot match a climate or weather event to CO2 with empirical measurements (not computer modeling, which can be altered to do anything you want), then there is no basis for the AGW “theory”. Since nothing is happening beyond normal variation in the climate or weather, not even trends (with 1000-year-plus cycles, a short phase will look like a trend), then there is no measurable basis for claiming CO2 is changing the climate. With such short time frames, computer modeling is a meaningless effort.

        I have asked, but got no answer as to what is the default position in climate science. Since no one is willing to (ever) answer that question for obvious reasons, then I’m going to ask a different question.

        The claim is that the planet is heating. The evidence shows that the summer temps have not been increasing; only the winter temps have moved to being “less cold”. That’s a big distinction that needs to be addressed.

        The claim is then made that even if the winters are warmer, that’s still getting warmer. But is it? It depends on how you look at it.

        What is the normal winter temp for the planet? If the normal winter conditions are short, mild winters, then the deep, long cold winters were abnormal, and the planet is just swinging back to its normal state. That is, we are not SUPPOSED to have deep, long winters; short, mild winters are where the planet is supposed to normally be.

        So looking at it from that view, this “warm” trend is returning us to the normal state. The Medieval Warm Period wasn’t hotter than today. Summer temps would have been in the same range as today. What made the MWP “warmer” was short, mild winters, and thus longer growing seasons. The LIA likely also had summers just as hot as today’s, but the winters were long and deeply cold. Hence shorter growing seasons and the crop losses that implies.

        The fear mongering of the AGW must have a political motive because the scientific one doesn’t exist. Warm is good, cold kills.

    • Popper may also not be the best source for evaluating models – GCMs can be related to the sorts of theories that Popper was interested in, but the two are not the same. After all, what does it mean that a model is “false”? Clearly no model is going to be a reproduction of reality, so they will all be false in certain ways. You set the bar at something being “seriously wrong” – fair enough, but this still doesn’t warrant the same outright rejection of a model as a good falsification experiment. Are all the failures of GCMs enough to “prevent us from having “high confidence” in any sense in these models”? Maybe, but I’m not sure if you can mix Popperian logic with levels of confidence.

  27. I have published and am trying to publish some attribution studies. To avoid circular reasoning, I have been trying to characterize natural variability, subtract that, and see if the residual might be anthropogenic. Two responses from many reviewers are consistent: 1) They believe there CANNOT BE any natural variability beyond a few years (like ENSO), and 2) They assert that any attribution study MUST use climate models. I think these views are revealing of poor logic.

    • Agreed.

    • Alexander Harvey


      “1) They believe there CANNOT BE any natural variability beyond a few years (like ENSO)”

      I am not sure what they might mean by that. Do they mean deterministic (chaotic, periodic) variation or stochastic variation or both?

      The evidence of the period 1900-1970 suggests that there is something unexplained by the models, it could be due to either of the above variations or to some unknown forcing.

      FWIW I have looked at the case for random fluctuations in terms of the implied forcing noise, and I believe that alone could account for the missing variance. I also find that I cannot make it account for the recent (1970-2010) warming. I suspect that if the IPCC did not have the AOGCMs and relied on simpler models they might well come to the same conclusion. What it would cost them is certainty during the post-1970 period, e.g. bigger error bars on the gradient; strangely, they seem to have bigger error bars in the summary than the AOGCMs suggest. From what I know, pure reliance on the models would lead to a statement that it is virtually impossible for the warming not to be mostly due to AGW.

      The case for deterministic variance is a real loose cannon, because you either believe the models (which your commentators indicate predict a small effect at periods greater than a decade), or you speculate that the unexplained variance prior to 1970 is deterministic but with no known cause, or you characterise some aspect of the variance as the cause, e.g. PDO etc.

      Personally I doubt that it is currently possible to attribute the unexplained variance to any particular mix of random and deterministic terms.

      “2) They assert that any attribution study MUST use climate models. ”

      By that I presume you mean AOGCMs. I think it is true that attribution studies must use appropriate climate models, but that does not to my mind imply AOGCMs to the exclusion of all others. Provided you can sufficiently constrain a more primitive model, i.e. ensure that all its parameters can be determined/estimated from observables, I do not see the problem. It is quite a strong test, and one that the AOGCMs are alleged not to meet (that they are tuned to the historic record but that tuning is not a unique solution). It is my understanding that if you teased apart the models, the attribution between the various forcings would vary considerably from model to model, possibly due to overfitting, e.g. a desire to reproduce the post-WWII cooling, an observation that may have no real significance, i.e. just a stochastic fluctuation.

      If there is significant circularity due to fitting the models (AOGCMs) to the record, it could be argued that they are unfit for the purpose of attribution, as they have been trained to explain variance and hence may have unjustifiably minimised the unexplained variance post-1970. However, there remains the unexplained variance prior to 1970, which they sometimes seem to characterise as inexplicable or even impossible (hence an apparent desire to seek out plausible and scientifically legitimate ways by which it could be removed from the record) or else simply ignore. Viewed by eye, the IPCC model error ranges in the attribution graphics indicate that a) the post-1970 warming is extremely likely to be due mostly to AGW and b) it is almost impossible for the pre-1970 variance to have occurred.

      I think there is merit in the use of simplified but not simplistic models, but I will say that they have almost no traction in the general argument. In order to do any attribution study I think it is essential to have some model that at a minimum allows for a plausible mapping between the temperature and the flux domains. I think that the burden on a simplified model is higher than on AOGCMs: to get academic traction not only must it not have any loose ends (free parameters), but probably it must explain at least one additional observable to justify the implicit free parameter of model choice. If one could construct a simplified model that accounted for one more observable than it possesses free parameters, then people might have to sit up and listen. That is a very high burden indeed. Perhaps it shouldn’t be that way, but people are very wary of accepting the verdicts of simplified models, as they are commonly ad hoc. AOGCMs are not considered to be ad hoc; they are inherently not so, but if they are trained to the historic record they do, in a sense, become ad hoc. In an ideal world only the most sophisticated models should get a look in; currently this is not an ideal world, and other more simplified approaches may be useful, but do not anticipate that they or their results will be seen as significant.


    • You are looking, as we say in Texas, for a penny in the corner of a round room.
      Your overall post is telling, but your item 1) is worth expanding on a bit more. Could you please?

  28. “no one asserts that it isn’t warming”
    but let us consider that there may be some bias in the climate data. If UHI has not been properly factored out, and there has been a big urbanization trend since WWII, then there may be an upward bias in recent decades’ warming. The fact that GISS shows more warming since 1990 than any other dataset points to this possibility (and the others are not pure as the driven snow here either). So let’s say that 25% of the warming since 1978 is data error, and suppose the IPCC admits that 45% is natural. This means that the amount of warming that is unnatural is only 30% of recent warming. Then the model-estimated sensitivity is way too high. The problem here is also about quantities, not yes/no logic. The quantities matter.
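    For what it’s worth, the arithmetic above can be written out explicitly; the shares are the commenter’s hypothetical numbers, not measured values:

```python
# Worked version of the comment's illustrative (not measured) shares of
# post-1978 warming, and what they would imply for model sensitivity.
observed = 1.0        # normalize recent warming to 1
data_error = 0.25     # hypothesised UHI/data-bias share
natural = 0.45        # hypothesised natural share

unnatural = round(observed - data_error - natural, 2)
print(unnatural)      # 0.3 -> only 30% of recent warming is unnatural

# If the models were tuned so that forcing explains essentially all of the
# observed warming, the implied sensitivity scales down by the same factor.
overstatement = round(observed / unnatural, 1)
print(overstatement)  # 3.3 -> sensitivity overstated by a factor of ~3.3
```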

    • Don’t look at the average temperature for this “warming”. Average temps are not a measurement; they are a calculation. Temps swing with the seasons. The change in average temperature is completely meaningless unless taken in the context of what is going on with TMax and TMin in the seasons, particularly in the summer and winter. (Fall and spring are just transition months: have a warm winter and the spring also has to be warmer, because it is starting from a higher temperature.)

    • Craig, I agree with your statement here. I will be taking on the temperature record soon.

    • Alexander Harvey


      I think that in the big scheme of things it is the ocean data that is overwhelmingly significant; the ocean surface constitutes most of the globe’s surface and overlies its most susceptible heat sink. I think that any argument based on global temperatures must at least be supported when argued from ocean temperatures. These doubtless have their instrumental problems, but they appear to be quite different problems.

      The question of retained heat is overwhelmingly an oceanic issue, and I think that any argument is strengthened when based on both temperature and retained-heat anomalies. It may also be of interest that over the oceans the climatic sensitivity is, I think, unlikely to be the prevailing factor in assessing the attribution of flux anomalies for periods shorter than about a century. This is a question of the oceanic susceptance, which can be argued to be of the order of 10 W/m^2/K for harmonics with a period of a year, falling to only ~1 W/m^2/K for periods of 100 years. Were this not so, we should, and I think would, be in a position to better characterise the climatic sensitivity directly from the historic record of surface temperatures and retained heat.

      I am not sure that this helps but one never knows.
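      The quoted susceptance figures are roughly what an idealized semi-infinite diffusive ocean would give: the surface thermal admittance of such a half-space is |Y| = rho_c*sqrt(omega*kappa), so it falls by a factor of 10 as the forcing period lengthens by a factor of 100. A minimal sketch; the heat capacity and effective diffusivity values are assumed order-of-magnitude choices, not measurements:

```python
import math

# Sketch: surface thermal admittance ("susceptance") of a semi-infinite
# diffusive ocean driven sinusoidally at the surface:
#   |Y| = rho_c * sqrt(omega * kappa)
# rho_c and kappa below are assumptions, order-of-magnitude only.
RHO_C = 4.0e6    # volumetric heat capacity of seawater, J/m^3/K
KAPPA = 1.0e-4   # effective vertical (eddy) diffusivity, m^2/s (assumed)
YEAR = 3.156e7   # seconds per year

def susceptance(period_years):
    """|Y| in W/m^2/K for a sinusoidal forcing of the given period."""
    omega = 2.0 * math.pi / (period_years * YEAR)
    return RHO_C * math.sqrt(omega * KAPPA)

print(round(susceptance(1.0), 1))    # order 10 W/m^2/K at a 1-year period
print(round(susceptance(100.0), 1))  # order 1 W/m^2/K at a 100-year period
```

Because the admittance goes as the square root of frequency, a hundredfold increase in period always gives exactly a tenfold drop, whatever values are assumed for rho_c and kappa.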


  29. Re: Scientific American. Congrats on your public vilification in SA. Remember that what brought Joe McCarthy down was when he started claiming that generals and people in the State Department were commies. Pielke Sr. and Lindzen really need the company.

  30. Off topic, but this is not to be missed. Scientific American is now taking an online survey as to whether I am a dupe or a peacemaker.


    • Neither. You are a scientist who asks scientific questions. Asking penetrating questions does not “bring peace” but it brings truth.

      • A truly remarkable ‘leading the participant’ survey.

        There was one good question, acknowledging that climate sensitivity is unknown. One or two options on that question were just ridiculous, though…

        It did go on about the ‘fossil fuel industry’, which implies a very political situation in the USA.

        Blaming the fossil fuel industry seems ridiculous in the UK/EU, because all the main political parties have been fully signed up to an IPCC AGW consensus for 25 years or more; thus businesses, including energy, followed long ago.

        I am going to a meeting on Wednesday in the House of Commons (MP Graham Stringer, from the Science and Technology Committee, will be attending) which is ‘celebrating’ 2 years since the Climate Change Bill was passed. The estimated cost for the UK is hundreds of billions of dollars over the next 40 years, with no idea how its carbon targets will be met. All bar 3 MPs voted for it.

        So this is possibly why US and UK bloggers/commenters talk past each other occasionally. It seems a very political issue still in the USA, with large green taxes yet to be implemented.

        Whereas in the UK, the coalition just announced a £1,000,000,000 stealth tax on high energy using businesses, forcing them to pay by the tonne of CO2. I believe £12 at the moment, to be ramped up of course, whenever the politicians feel like it, all in the name of saving the planet.

        The money will just be poured into the budget deficit, and the costs passed on to the consumer, possibly forcing some energy-intensive businesses abroad.

        Would US consumers/businesses go along with this type of tax, or will there be political action?

      • Craig,

        Yes, that’s interesting. They consider different hypotheses or “story lines” about Judith’s motivations, but fail to mention the most straightforward and obvious one.

    • Phillip Bratby

      What a ridiculously unscientific set of questions. I suspect they will analyse the answers to judge the responders.

  31. Hi Judy- Regarding the Scientific American survey, his questions are poorly (and incompletely) posed. For example, under #3 “What is causing climate change?”, a more inclusive survey could list the three hypotheses that we present in our EOS paper

    Pielke Sr., R., K. Beven, G. Brasseur, J. Calvert, M. Chahine, R. Dickerson, D. Entekhabi, E. Foufoula-Georgiou, H. Gupta, V. Gupta, W. Krajewski, E. Philip Krider, W. K.M. Lau, J. McDonnell, W. Rossow, J. Schaake, J. Smith, S. Sorooshian, and E. Wood, 2009: Climate change: The need to consider human forcings besides greenhouse gases. Eos, Vol. 90, No. 45, 10 November 2009, 413. Copyright (2009) American Geophysical Union. http://pielkeclimatesci.wordpress.com/files/2009/12/r-354.pdf

    Hypothesis 1: Human influence on climate variability and change is of minimal importance, and natural causes dominate climate variations and changes on all time scales. In coming decades, the human influence will continue to be minimal.

    Hypothesis 2a: Although the natural causes of climate variations and changes are undoubtedly important, the human influences are significant and involve a diverse range of first-order climate forcings, including, but not limited to, the human input of carbon dioxide (CO2). Most, if not all, of these human influences on regional and global climate will continue to be of concern during the coming decades.

    Hypothesis 2b: Although the natural causes of climate variations and changes are undoubtedly important, the human influences are significant and are dominated by the emissions into the atmosphere of greenhouse gases, the most important of which is CO2. The adverse impact of these gases on regional and global climate constitutes the primary climate issue for the coming decades.

    As his survey question is presented, where would one include the diverse effects of aerosols and of land use/land cover change? Also, he should define what he means by “climate change” in terms of time and space scales, and of what specific societally and environmentally relevant metrics (such as hydrologic drought; agricultural drought; large basin river floods, etc) he is focusing on.

    • Dr. Pielke,
      “Coming Decades” is too short a time horizon due to the long lifetime of CO2. To effectively comply with the moral imperative of intergenerational justice, we must think in longer terms as Reagan did when protecting the ozone layer.
      Try this:
      Hypothesis 3. Although the natural causes of climate variations and changes are undoubtedly important, the human influences are significant and becoming greater. A fundamental human influence is the emission of long-lived greenhouse gases, the most important of which is CO2. These gases have long lifetimes, act cumulatively to warm, and are difficult to remove or counter. Since temperature sets the boundary conditions for many activities on the planet, we are morally obliged to those who follow us to reduce those emissions to a level which will not dangerously alter climate.
      Chuck Wilson

      • David L. Hagen

        Chuck Wilson
        Re: “These gasses have long lifetimes”
        See Robert H. Essenhigh
        Potential Dependence of Global Warming on the Residence Time (RT) in the Atmosphere of Anthropogenically Sourced Carbon Dioxide

        In this study, using the combustion/chemical-engineering perfectly stirred reactor (PSR) mixing structure or 0D box for the model basis, as an alternative to the more commonly used global circulation models (GCMs), to define and determine the RT in the atmosphere and then using data from the IPCC and other sources for model validation and numerical determination, the data (1) support the validity of the PSR model application in this context and, (2) from the analysis, provide (quasi-equilibrium) RTs for CO2 of 5 years carrying C12 and 16 years carrying C14, with both values essentially in agreement with the IPCC short-term (4 year) value and, separately, in agreement with most other data sources, notably, a 1998 listing by Segalstad of 36 other published values, also in the range of 5−15 years.

  32. Lemonick’s characterization is inaccurate and needlessly antagonistic. Why is he falling over himself explaining his motives on his blog and trying to mend things with ill-framed questions?

    “gone off the scientific deep end”? WTF? (Sorry). If Dr C has a blog where she interacts with professionals and researchers from other sciences, ‘Scientific’ American sees that as “going off the deep end”?

    Can we point out Scientific American’s response to the revelations of Climategate?


    Look at the above article (an uncritical reproduction of Michael Mann and others’ views about the emails) and decide for yourself.

  33. Overconfidence III

    Just one simple comment:

    “2. Climate models are fit for the purpose of accurately simulating forced and internal climate variability on the time scale of a century.
    … Now consider premise #6. The striking consistency between the time series of observed average global temperature observations and simulated values with both natural and anthropogenic forcing (Figure 9.5) was instrumental in convincing me (and presumably others) of the IPCC’s attribution argument.”

    A major argument made is that since the models accurately mimic the effects of four volcanic eruptions, they are working and, in accordance with premise 2 above, are fit to simulate climate change on a century time scale.

    A simple examination of IPCC Figure 9.5 shows at least 12 drops in temperature of ~0.3-0.4 deg C, four of which are associated with volcanic eruptions. No explanation is even proposed for the other 8 or more drops in temperature, which are not modeled at all (the heavy red line in the graph). That shows me that the large majority of the climate variability is not being modeled over the course of a century. Statement 2 above is convincingly wrong.

    A realistic test of a climate model would be to initialize it to conditions around 1850-1880 (which would mean making multiple runs with random starting data) and see if the average model output follows the measured trend from 1900 onwards. If the model could predict the measured trend within the margin of error (for the model runs), that would give me some confidence in it.
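    The proposed experiment can be sketched in miniature. The “model” below is deliberately a toy (a linear trend plus AR(1) noise, with randomized initial states standing in for the 1850-1880 spin-up conditions), not a GCM; it only illustrates the design of comparing the ensemble-mean trend and its spread against an observed trend:

```python
import random
import statistics

# Toy sketch of the proposed hindcast test: start an ensemble from
# randomized initial states, run forward, and ask whether the ensemble-mean
# trend (with its spread) brackets the observed trend. All numbers are
# illustrative; the "model" is trend + AR(1) noise, not a GCM.
random.seed(0)
N_YEARS = 130
TRUE_TREND = 0.007   # deg C per year, the toy "forced" signal

def toy_run():
    initial = random.gauss(0.0, 0.2)   # randomized 1850-1880 starting state
    anomaly = 0.0
    series = []
    for year in range(N_YEARS):
        anomaly = 0.7 * anomaly + random.gauss(0.0, 0.1)  # AR(1) variability
        series.append(TRUE_TREND * year + anomaly + initial * 0.7 ** year)
    return series

ensemble = [toy_run() for _ in range(50)]
member_trends = [(run[-1] - run[0]) / (N_YEARS - 1) for run in ensemble]
mean_trend = statistics.mean(member_trends)
spread = statistics.stdev(member_trends)
print(f"ensemble trend {mean_trend:.4f} +/- {spread:.4f} deg C/yr "
      f"(truth {TRUE_TREND})")
```

The design point is that the comparison is made against the ensemble mean and spread, not against any single realization, which is exactly what the comment asks of a real hindcast.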

    • I agree, there are much better ways to design these experiments

      • I think you are overlooking the point of the post. The response of the models to a true forcing of volcanic eruption appears to be correct. BUT the models have no explanation for multiple instances with similar drops in temperature. The un-modeled drops in the temperature record demonstrate that the models are not correctly modeling the climate and cannot be used to extrapolate future trends in the climate. There simply is no consistency between the models and the observation record.

  34. Alexander Harvey

    Re: Scientific American survey

    “1. Should climate scientists discuss scientific uncertainty in mainstream forums?”

    I find this question fascinating and significant in that it is asked.

    Read one way, it is requesting a moral judgment, a question about ethics. Another interpretation would be as to the wisdom of the act, also interesting but in a different way. What other interpretations are there beyond:

    Is it ethical for a climate scientist to discuss scientific uncertainty in mainstream forums?

    Is it wise for a climate scientist to discuss scientific uncertainty in mainstream forums?


    Is it treasonable for a climate scientist to discuss scientific uncertainty in mainstream forums?

    So is it a matter of turpitude, folly, or treachery, or am I missing the sane intent of the question?

    I repeat myself in saying that this is an issue that needs bringing out into the open. If one believes that certain actions constitute an existential hazard, is there a moral imperative to prevent such activities? Is that the tribal ethical stance that not only justifies some of the activities apparently disclosed by Climategate but raises them to a moral duty? Would that justify the stance that the disclosure of unhelpful scientific opinion and fact constitutes a treasonable act?

    I am glad it was question number (1). It gets to the heart of the matter.


    • I also find this question totally bizarre; it will be interesting to see what the survey responses show. I don’t know what Lemonick’s background is, but it doesn’t seem that he is a scientist.

    • When I was young I looked forward to Scientific American monthly. I got so much out of it.
      And then they engaged in ‘ends justifying the means’ advocacy, publishing obvious political tripe more and more, until my interest in what they had to say, along with my subscription, expired.

  35. Nullius in Verba

    It may be a bit simplistic, but in thinking about the circularity of the attribution studies (a point I agree with), I tend to think in terms of two of the classical logical fallacies.

    Argument from ignorance:
    Paraphrased as “We can’t think of anything else it could be, therefore there is nothing else it could be.”

    Affirming the consequent:
    “If it is X then we would observe Y; we have observed Y, therefore X.”

    You have already covered these, but I thought this might be a more succinct way of putting it.

    I also like von Neumann’s comment on the limitations of models: “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk!”
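    Von Neumann’s quip is easy to demonstrate: a polynomial with as many parameters as data points fits the sample exactly, yet says nothing outside it. A minimal sketch with synthetic data (the numbers are arbitrary, chosen only for illustration):

```python
import random

# Von Neumann's elephant: five parameters fit five noisy points exactly,
# but the exact fit extrapolates wildly. Synthetic, illustrative data only.
random.seed(1)
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.5 * x + random.gauss(0.0, 0.3) for x in xs]  # noisy linear "truth"

def lagrange(x):
    """Degree-4 polynomial through all five points (five free parameters)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# In-sample: the five-parameter fit reproduces every observation exactly.
max_resid = max(abs(lagrange(x) - y) for x, y in zip(xs, ys))
print(max_resid < 1e-9)  # True

# Out-of-sample: the same fit diverges from the underlying line (0.5*x).
extrapolation_error = abs(lagrange(8.0) - 0.5 * 8.0)
print(round(extrapolation_error, 1))
```

Perfect in-sample agreement is therefore no evidence of out-of-sample skill, which is the circularity worry about models tuned to the historic record.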

    • Yes, thanks. I was going to use affirming the consequent but in the end didn’t. Both of the fallacies you describe are very apt.

    • David L. Hagen

      Whatever happened to the “null hypothesis”?
      Assuming nature will continue on as before unless proven otherwise?
      Then again,

      it is important to recognize that an inherent difficulty of testing null hypotheses is that one cannot confirm (statistically) the hypothesis of no effect. While robustness checks (reported in the appendix), as well as p values that never approach standard levels of statistical significance, provide some confidence that the results do not depend on model specification or overly strict requirements for statistical significance, one cannot entirely dismiss the possibility of a Type II error.

      Jana von Stein (2008)
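      The caveat quoted above about Type II errors is easy to illustrate: a small real effect, tested with little data, usually fails to reach significance, so “not rejected” is far from “confirmed”. A toy simulation (all numbers illustrative):

```python
import random
import statistics

# Toy power calculation: a real 0.3-standard-deviation effect, tested with
# n = 20 observations, usually fails a two-sided z-test at the 5% level.
# Failing to reject the null therefore does not confirm "no effect".
random.seed(42)

def rejects_null(true_effect, n=20, sd=1.0, z_crit=1.96):
    """One simulated experiment: does it reject the no-effect null?"""
    sample = [random.gauss(true_effect, sd) for _ in range(n)]
    z = statistics.mean(sample) / (sd / n ** 0.5)
    return abs(z) > z_crit

trials = 2000
missed = sum(not rejects_null(0.3) for _ in range(trials))
type2_rate = missed / trials
print(f"Type II rate with a real 0.3-sd effect: {type2_rate:.0%}")
```

With these settings roughly three-quarters of the simulated experiments miss a genuinely present effect, which is von Stein’s point about the asymmetry of null-hypothesis tests.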

      • Finally, another null hypothesis nut! I was beginning to think it was just me. Judith, the misgivings I have about your Italian flag, and about the whole calculus of uncertainty, include the question David asks. It seems that viewing every hypothesis solely through the prism of uncertainty allows the null hypothesis to fade from view like the Cheshire cat.

        I’ve seen plenty of evidence that climate scientists may not even properly understand (or at least share my understanding of) the term, in that they confuse it with the consensus. One sees talk by climate scientists of x “becoming the new null hypothesis”, where what is meant is that x has attained the status of “the consensus view” and thus deserves to be thought of as the default position.

        Elsewhere one sees the null hypothesis (on the rare occasions when it makes its unwelcome appearance) adduced as a sort of box to be ticked only at certain milestone junctures in the experimental process, rather than as an ever-present, if often irksome, companion to the scientist, to be consulted at every step.

        So in climate science it seems to me that the null hypothesis is poorly understood, and honoured more in the breach than in the observance.

        This underlies my earlier plea for a post on The Scientific Method.

      • Richard S Courtney

        Tom FP:

        The null hypothesis is a fundamental principle of the scientific method. And, because it is a fundamental principle, it must always be recognised so cannot make “an unwelcome appearance”.

        The null hypothesis is the assumption that nothing has changed unless there is empirical evidence of a change.

        And the null hypothesis is the governing assumption in the absence of empirical evidence of a change. Hence, in any scientific assessment it is the responsibility of those who assert a change has occurred to provide evidence that the change exists. Until that evidence is provided, the ONLY scientific assumption is that no change has happened.

        This goes to the heart of much of climate science pertaining to AGW.

        For example, the MBH ‘hockey stick’ was an instant poster-child of AGW-proponents because it seemed to be evidence that recent climate behaviour differed from previous climate behaviour in the Holocene. So, the MBH ‘hockey stick’ seemed to be the evidence needed to disprove the null hypothesis.

        But there is no evidence – none, not any – which disproves the null hypothesis which says the causes of recent global climate change are the same as the causes of previous global climate change.

        And the fact that the null hypothesis cannot be disproved at present is why “attribution studies” have great importance.


      • Thanks Richard – it may not have been clear, but my “unwelcome appearance” was an ironic reference to the way I believe climate scientists tend to view the null hypothesis – one member of the “climate establishment” even pointed out on this site that it is disregarded because it is not a helpful device for gaining further funding.

        Staggering, I know, but I think it points to the need for a post on the Scientific Method.

        And I described David Hagen and myself as “null hypothesis nuts” because the issue seems to receive less attention than it is due, here and elsewhere. JC’s Italian Flag seems to have no place for it. Your post supports this view.

        A small point – as a non-scientist I would, if asked to define the term, describe the null hypothesis as being “that our experiment shows no causal relationship between the variables being studied”. Is this wrong, right, right but incomplete, or what?

      • A guest post on the scientific method is forthcoming. Re the null hypothesis, I use it a little (I used it in my paper on mixing politics and science), but I prefer argument justification for complex arguments.

        The theory of justification (Betz 2009) invokes counterfactual reasoning to ask the question “What would have to be the case such that this thesis were false?” The general idea is that the fewer positions supporting the idea that the thesis is false, the higher its degree of justification. The relationship among rational belief, well justified partial positions, and robustness exists because believing a partial position with a higher degree of justification provides a thesis that can be extended flexibly in many different ways when constructing a complete position and is more immune to falsification. As a caveat, it should be noted that it is not unreasonable to adopt a position with a low degree of justification, in the same sense as adopting an incoherent and contradictory position.

        I am trying to figure all this out, so I esp appreciate this kind of discussion. Somehow i lost the comments between Tim C and Nullius in Verba on bayesian analysis of multiple lines of evidence, trying to sort this out is hugely important in my mind. We need a thread devoted to this kind of stuff. Not sure what is coming down the pike on the guest scientific method thread, but should be interesting.

      • Nullius in Verba

        “Somehow i lost the comments between Tim C and Nullius in Verba on bayesian analysis of multiple lines of evidence, trying to sort this out is hugely important in my mind. We need a thread devoted to this kind of stuff.”

        Do you mean the ones at the bottom of part I?

        Tim and I agree that positive evidence accumulates, and Tim has proposed that if you start with a very low prior expectation, then even strongly positive evidence can result in a weak assessment based on a single line, which multiple lines of evidence can overcome. It’s a true statement, but it only highlights the importance of distinguishing the strength of evidence and the strength of confidence.

        Confidence = Prior + Evidence + Evidence + …

        I thought at first he was saying that starting with Evidence corresponding to a probability of 15% (which is actually negative evidence) one could accumulate it to get 95%, but what it seems he intended was that Prior + Evidence could correspond to 15% while Prior + 5*Evidence corresponded to 95%.

        The problem is: saying Prior + Evidence corresponds to 15% doesn’t tell you how much is Prior and how much is Evidence, without which you can’t say how it might accumulate.

        The bigger problem is that there’s no indication the IPCC did this, or anything like it. It’s not even clear that such analysis is possible, in the current state of knowledge. Certainly they haven’t shown it, which means that other scientists shouldn’t take it on trust. As Tom Wigley once said: “No scientist who wishes to maintain respect in the community should ever endorse any statement unless they have examined the issue fully themselves.” In itself an excellent comment, and one that was made in an interesting context…
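
        The arithmetic behind this point can be made concrete in log-odds form (all numbers hypothetical; this is a toy sketch of the accumulation, not anything the IPCC actually did). Taking Tim’s two stated endpoints, 15% confidence after one line of evidence and 95% after five equal, independent lines, gives two linear equations in two unknowns, so the implied prior and per-line evidence can be solved for:

```python
import math

def logit(p):
    # probability -> log-odds
    return math.log(p / (1.0 - p))

def prob(x):
    # log-odds -> probability
    return 1.0 / (1.0 + math.exp(-x))

# Tim's two endpoints: 15% after one line of evidence, 95% after five
# equal, independent lines.  In log-odds these are two linear equations
# in two unknowns (prior, per-line evidence).
target1, target5 = logit(0.15), logit(0.95)
evidence = (target5 - target1) / 4.0
prior = target1 - evidence

print(f"implied prior : {prob(prior):.3f}")                 # roughly 0.05
print(f"after 1 line  : {prob(prior + evidence):.3f}")      # 0.150
print(f"after 5 lines : {prob(prior + 5 * evidence):.3f}")  # 0.950
```

        The implied prior works out to roughly 5%, which illustrates the point above: the 15% figure alone does not say how much is Prior and how much is Evidence; only the extra assumption of equal, independent lines pins it down.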

      • got it thanks, bottom of part I. I’m trying to square this with my intuition of what is a useful line of evidence in trying to decide what should be included in an argument, i guess that is the prior part. I assigned a prior of 0 value to something that Tim thought should be included, is that a correct interpretation of Tim’s original point (which i now understand). Seems like this is a really key issue.

      • Nullius in Verba

      • OK, the ‘Prior’ expresses your belief in the hypothesis before you have any evidence. (The philosophy involved in this gets quite turgid, but a lot of people follow the indifference principle and assign equal probabilities to each possibility – assuming you know how many possibilities there actually are. This arguably requires some evidence to determine, so things can get very muddled. But with a simple true/false question, a default probability of 1/2 each is often suitable, which corresponds to ‘zero’ confidence.)

        Then each piece of evidence results from an observation, and adds to (or subtracts from) your confidence. To be evidence, the observation has to have a different probability depending on whether the hypothesis is true or not, and the more different it is, the more confidence it adds or subtracts.

        So in making their assessment, the IPCC must first decide where to start off, and then separately decide how much each line of evidence adds to it. Evidence is stronger if it has a higher probability of happening when CO2 is causing most of the late 20th century warming than it does if something else is causing it.
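
        In code, one Bayes step per line of evidence looks like this (the likelihood numbers are invented purely for illustration):

```python
def update(p, p_obs_if_true, p_obs_if_false):
    # One Bayes step: posterior odds = prior odds * likelihood ratio.
    odds = p / (1.0 - p) * (p_obs_if_true / p_obs_if_false)
    return odds / (1.0 + odds)

# Indifferent 50/50 prior on a true/false hypothesis.
p = 0.5
# Invented likelihood pairs: probability of seeing each observation if
# the hypothesis is true vs. if it is false.
for lt, lf in [(0.9, 0.3), (0.8, 0.4), (0.6, 0.5)]:
    p = update(p, lt, lf)
print(f"posterior = {p:.3f}")
```

        With these made-up numbers the posterior comes out near 0.88. An observation that is equally probable either way (likelihood ratio 1) leaves the confidence unchanged, matching the definition of evidence above.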

      • this is very helpful, thank you again. i want to revisit this in some detail in a near future thread

      • In reference to Bayes, there is a “Bayes theorem” and a “Bayesian method” for construction of a model. As it follows deductively from the precepts of probability theory, Bayes theorem is logically correct. However, the Bayesian method has a logical Achilles heel. This is the necessity for a so-called “prior” probability distribution function (PPDF).

        In building models, Bayes and Laplace favored PPDFs that were constant in probability. The contention that they were constant in probability became known as the “indifference principle.”

        In the 19th century, the logician John Venn made an argument that proved persuasive to many of his fellow logicians. Venn argued that the choice of a PPDF that was constant in probability was arbitrary. If it was arbitrary, the indifference principle violated the law of non-contradiction. The law of non-contradiction was a principle of logic.

        The apparent violation of non-contradiction led logicians to seek a method of construction that required no PPDF. The method which they found became known as “frequentism.” In the 20th century, frequentism grew to a position of dominance over the Bayesian method. However, frequentism had logical shortcomings of its own. One was that constructing a model required selecting a parameterized model from a large number of possibilities by an essentially arbitrary process. This method of selection violated the law of non-contradiction.

        The Bayesian method and frequentism both violated the law of non-contradiction! Was there a method of construction that did not violate the law of non-contradiction? In the period 1963-1975, this question was answered in the affirmative by the engineer-physicist Ronald Christensen. Christensen described a method of construction that replaced the arbitrary decision making of the Bayesian method and frequentism by logical principles. Under his method of construction, arbitrariness was eliminated by making selections by optimization.

        While the Bayesian method and frequentism both provide means by which a model can be constructed, neither provides logical means by which a model can be constructed. Both violate the law of non-contradiction. The only method of construction that does not violate this law is Christensen’s.

      • Terry, thanks. can you point me again to Christensen, I’m losing track, and all this is ratcheting up in my head in importance (i promise to mark it this time). Thanks much.

      • Judy:

        My original post on Christensen was on Oct. 26 at 10:02 PM. You ought to be able to find it via a search of the comments by your Web browser on the word “Christensen.” Additionally, there is a tutorial and bibliography at http://www.knowledgetothemax.com . To try to go beyond the tutorial by reading the literature would be far less cost effective for you than to engage Christensen and/or me to conduct a day long seminar at your campus.

        I’m glad you’re paying attention. As it is uniquely logical, this is really the direction in which Climatological modeling should go.


      • ok, now i remember, i actually did a read through of knowledgetothemax 2 days ago.

      • David L. Hagen

        Link not working. Do you mean:
        Betz, Gregor (2009) On Degrees of justification

        Re: “Null Hypothesis”
        “Climate realists” typically accept that the Null Hypothesis includes numerous natural and anthropogenic causes that affect climate and that these will continue based on historic and prehistoric trends.

        Thus the key issue is NOT
        1) Is there “anthropogenic climate change”?
        2) Is there “anthropogenic global warming”?
        i.e., burning forests and converting forests to tilled fields cause “warming”.

        The great challenge is:
        3) Will anthropogenic causes result in catastrophic global warming?
        To determine that, we need
        4) QUANTITATIVE evaluation and reliable climate modeling to determine HOW MUCH do EACH of the natural AND anthropogenic factors influence climate, AND
        5) Validated climate models that can reliably forecast future changes.

        To date, it does not appear that either 4 or 5 have been done to a satisfactory degree of scientific certainty.

        On top of that, we have Climategate, and then the sleight of hand of changing “catastrophic anthropogenic global warming” to “climate change”.
        Trying to pull such shenanigans immediately raises numerous red flags that neither 4 nor 5 have been demonstrated, and that they likely cannot be demonstrated.

        The challenge is NOT a bipolar Yes/No question of whether anthropogenic warming exists, or even whether it is measurable.
        The challenge is QUANTITATIVE evaluation and ACCURATE modeling of ALL factors and models.

      • yes this is the correct link

  36. Judy – Your posts are thoughtful and provocative, but I would challenge some of the Italian flag values you assign to the seven arguments you cite. First, a general point – sustaining a conclusion need not require 100 percent accuracy, and so the issue regarding IPCC conclusions is not whether estimates of forcing, internal variability, etc. are “accurate”, but whether they are accurate enough to justify IPCC assessments of twentieth century warming.
    I believe they are, but rather than consume multiple column pages, I won’t apply that concept to all the arguments. Rather, let me illustrate by citing arguments number 5, 3, and 4 in perhaps a descending order of importance.

    5. I’m puzzled by your assignment of only a 30 percent probability to the proposition that “Global climate model simulations that include anthropogenic forcing (greenhouse gases and pollution aerosol) provide better agreement with historical observations in the second half of the 20th century than do simulations with only natural forcing (solar and volcanoes).” I realize that one can always quarrel with the accuracy of both observations and forcings, but the evidentiary basis for these occasional divergent claims must be assessed before affording them equal weight with the larger body of evidence that estimates negligible solar forcing over the past 40 years and only very temporary effects from volcanoes (e.g., the negative forcing from Pinatubo). If you claim that everybody’s conclusions must be averaged, including very divergent conclusions about solar forcing, then the 30 percent figure might be warranted, but based on the weight of an enormous quantity of data, I would argue that the figure must be far higher.

    3. Again, adequacy of data is an issue for you in regard to GHG, solar, and aerosol forcings, but again, absolute accuracy is less critical than accuracy adequate to justify conclusions about the relative role of anthropogenic GHGs. With some exceptions, most observers see the greatest uncertainty as residing in anthropogenic aerosols. If their contribution has been overestimated, the role of CO2 will also have been overestimated, but not by an extraordinary degree. Conversely, however, if aerosol effects have been underestimated (not at all implausible), climate sensitivity to CO2 must also have been underestimated, perhaps substantially. It is hard to contrive a plausible set of inaccuracies that would reverse the conclusion that anthropogenic emissions account for most warming, although the quantitation of how much is “most” would be affected.

    4. Here the issue is natural variability vs forced variability, but the justification for the low score you assign assumes some comparability of timescales. Diurnal temperature variation exceeds the 0.74 C twentieth century warming trend by orders of magnitude, but these variations obviously even out over long intervals. How about natural variability? In part 1 of your series, you discussed PDO and AMO variations, and so the question arises as to their influence on the century-long trend. Here, respectively, are links to PDO and AMO datasets – PDO and AMO. To my eye, the PDO data exhibit variations that very much even out over the century. The AMO does exhibit substantial variations over decadal intervals, but averaged over the global oceans, would appear to contribute relatively little to the long term trend. In particular, a profound impact of the substantial positive AMO variation during the 1940s to 1960s is hard to reconcile with the flat or slightly declining overall or SST data for that interval.

    It is fortunate that the IPCC did not claim 100 percent certainty for the dominance of anthropogenic emissions, nor claim that they accounted for all warming rather than “most”. The conclusion that they were responsible for most still seems rather robust despite the challenges.

    • Fred,

      re #5, it’s about agreeing for the right reason. The disagreement among the models in terms of causes for the 1910-1940 warming and the mid century cooling is the rationale for my low assessment.

      re #3, without accurate forcing data, it is difficult to rule out other modes of external forcing and also natural internal modes of variability.

      re #4 the AMO and PDO have received insufficient attention in this regard, and it is the combination of a variety of natural factors that could potentially provide the explanation.

      • re #4 the AMO and PDO have received insufficient attention in this regard, and it is the combination of a variety of natural factors that could potentially provide the explanation.

        The most recent models incorporate the movement of energy through the ocean oscillation and show a small offset of anthropogenic forcing in the present years, but predict an increase starting this year and moving forward, which looks, at the moment, to be correct. Also, wouldn’t there need to be data showing much more ocean cooling if the AMO PDO were a significant factor in recent warming? It is an internal factor. The energy must come from somewhere.

      • People keep insisting that the AMO and PDO are “internal” oscillations–ie just the redistribution of heat. There are various mechanisms that could make this not true. Roy Spencer argues that local inhomogeneities in temperature could create cloud differences that drive PDO etc. systems by changing albedo, which is NOT just redistributing heat. Other mechanisms include tidal and solar effects that are not accounted for by TSI. Careful about unproven assumptions.

      • I’m usually careful about unproven assumptions, which is why I phrased my comment as a question. But that gets back to the reason that the IPCC doesn’t say with 100% certainty that it is all anthropogenic warming. Spencer’s theory is just one argument at this point, without a long term series of data that supports it, as far as I am aware. I would never say that these aren’t possibilities, but I’m wondering why betting on these theories should put a large dent in the IPCC consensus conclusions. Is there a compelling reason I should?

  37. Question A: Is my characterization of the argument correct?

    I don’t see that anyone has specifically answered Question A yet. My answer is No. Here, as an alternative, is my characterization, at least with respect to the core argument. Additions are in bold.

    #1: Historical surface temperature observations since the middle of the 20th century show a clear signal of increasing surface temperatures. Italian flag: Green 95%, White 5%, Red 0%.

    #2: Climate models are fit for the purpose of accurately simulating greenhouse-gas-driven climate variability on the time scale of half a century. Italian flag: Green 60%, White 30%, Red 10%.

    #3: Time series data to force climate models are available and adequate for the required forcing input: long lived greenhouse gases. Italian flag: Green 95%, White 5%, Red 0%.

    #4: Not required.

    #5: Global climate model simulations that include greenhouse gases indicate that the magnitude of warming that would be expected from greenhouse gas increases is at least as large as the observed warming. Italian flag: Green 60%, White 30%, Red 10%.

    #6: Replace with: No other known forcings (except for natural variability) had the proper sign and magnitude to produce a significant fraction of the observed warming since the mid 20th century. Italian flag: Green 80%, White 15%, Red 5%.

    #7: Confidence in premises 1-6 is enhanced by the agreement between the simulations and observations of the 20th century surface temperature and the distinct signals produced by the various forcing mechanisms.

    #8: Thus: Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.

    Note that this argument does not require the models to simulate natural variability on a multi-decade time scale with any fidelity whatsoever, which is a good thing since there’s no evidence that they can do so.

    I’ll make a separate point in my next comment.

    • David L. Hagen

        “since the middle of the 20th century” may better be stated “over the last part of the 20th century”. Strong warming has not been evident since 2002.

  38. I should have added “in climate models” to point #7 above.

    My separate point is that the phrase “most of the observed increase in global average temperatures” is ridiculous as part of a statement of attribution. The word “most” implies some sort of positive-definite apportioning: 10% here, 20% there, 50% here, 15% here, and 5% there, that together adds up to 100%.

    However, in the case of global temperatures, there are both positive and negative effects, and they can happen simultaneously. Consider the following very real possibility:

    Since the middle of the 20th century, global mean temperatures have increased 0.5 K. All causes of climate change had a negligible impact, except for the following three: anthropogenic greenhouse gases +0.6 K, anthropogenic aerosols -0.9 K, and natural variability +0.8 K.

    In that case, it wouldn’t seem correct to say that greenhouse gases caused most of the observed warming. But a decent argument could be made that they caused some of the observed warming, and another decent argument could be made that they caused all of the observed warming.

    The IPCC should stick to stating a specific range of temperature perturbation that is supposed to have been caused by a particular mechanism, and avoid statements that imply a fractional contribution.
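
    The arithmetic of the hypothetical above can be spelled out directly (the numbers come from the comment itself, nothing more):

```python
# John N-G's hypothetical decomposition of 0.5 K observed warming.
contributions = {
    "anthropogenic GHGs":     +0.6,   # K
    "anthropogenic aerosols": -0.9,   # K
    "natural variability":    +0.8,   # K
}
observed = sum(contributions.values())   # 0.5 K

# The naive "fraction of observed warming" attributed to each cause:
for cause, dt in contributions.items():
    print(f"{cause}: {dt:+.1f} K = {dt / observed:+.0%} of observed")
```

    The three “fractions” (+120%, -180%, +160%) sum to 100% yet each exceeds it in magnitude, which is why “most” has no clean meaning once contributions of both signs are in play.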

    • Very interesting.
      But Pielke, Sr. gets condemned for trying to point out just that.
      Now what about things like clouds and the big ocean oscillations?
      And how do you get from there to predicting Texas will have big droughts in a climatological way?

  39. This is a very interesting article and a breath of fresh air in the rather turgid debates on the internet. Could I make a rather naive point from the point of view of a complete non-specialist in climate, but having been involved in experimental medicine and mathematical modelling of (of all things) cardiac arrhythmias? There seem to be parallels in the logic in both areas.

    A model can be calibrated on the basis of inputs (forcings) and outputs (temperature) without the system it represents being observable. Any model that has multiple parameters is likely to be ill-conditioned if deductions about the parameters are made from a limited number of outputs. This has been the case in “mathematical cardiology” where the overall behaviour of the heart may be determined by great sensitivity to parameters that are fitted to imperfect models of individual components of the system (ion channel dynamics). Despite enthusiasm for the model outputs among mathematicians, the models in general produce predictions that simply do not conform to a wide range of experimental results.
    The key to deciding whether these models are useful (they do produce something that might appear similar to a cardiac arrhythmia, and so could be said to produce a useful result) is whether they can be falsified on the basis of specific predictions, such as implying a range of conduction velocities within the heart that have never been measured in real life, or that an arrhythmia can persist in a small piece of isolated tissue when all the experimental evidence suggests that a minimum volume of tissue is required to support an arrhythmia.

    The problem seems to be that the GCMs can be modified and recalibrated to conform to temperature without making a definite prediction that can be falsified. In particular, if some parameters are established, this may lead to changes in assumed values of other parameters to make the model conform to observation. I understood that one of the features of GCMs was that they predicted an increase in tropospheric temperature in the equatorial regions that has not been confirmed by direct measurement. This would seem to be a major difficulty and, while models could presumably be modified to create this feature and global warming, would one have any confidence in the result of the model, as this is post-hoc calibration as opposed to a priori prediction? Alternatively, can one argue that failure to predict a specific observed climatic feature with a climate model is sufficient to falsify that model?

    It seems to me that the key issue is not to say whether current models account for 40% or 60% of the observed variance in temperature but to generate falsifiable hypotheses from the models. I realise that this is more easily stated than achieved, but if it can be done, it is a more rigorous test of a model’s validity than comparing a temperature time series with a model output.
    (p.s. The fact that cardiac activation models produce astonishingly unrealistic results does not deter many of the mathematicians involved. However, as physicians we are not going to subject patients to treatments based on these models!)

    • interesting analogy, thanks

    • There is great resistance to making it clear what are predictions of the climate models. Modelers want to hand wave over the big picture, and any time one points to the lack of tropical troposphere warming or any other particular failure/discrepancy, it is like stepping in a fire ant nest, you get swarmed and stung.

    • Richard – Most GCMs are tuned so as to reproduce the current climate (temperature, latitudinal gradients, seasonality, etc.), or for paleoclimatology, they are tuned to an earlier climate. Once tuned, they are then subjected to forcings (e.g., a rising CO2 concentration), and the modeler must then accept the results – he or she cannot retune the model to make the results conform to expectations. Because model projections play out over decades, few have been testable (falsifiable) in a forward direction, but Jim Hansen’s models were an exception and performed skillfully. More often, models have been tested by hindcasting – they are forced with a known change starting at a past known climate state, and asked whether they can accurately project the output (e.g., a temperature change resulting from a change in CO2, solar forcing, etc.). Again, they have performed fairly skillfully on global and multidecadal scales, but less well over short intervals or regionally. They are far from perfect, but remain an enormously important tool for understanding climate variations and anticipating how the climate will vary in response to perturbations.

      The issue of tropospheric temperature amplification remains to be completely resolved, but disparities between predictions and observations have diminished as instrument biases have been corrected, and it is not unreasonable to expect that further improvements in instrumentation accuracy will largely eliminate the remaining disparities. This remains to be seen, of course, but it’s important to point out that the tropospheric amplification prediction does not originate in the models but in the basic physics of radiative transfer in combination with the Clausius-Clapeyron relationship describing the change in atmospheric water vapor as a function of temperature. Even if models did not exist, that prediction could be made, and the models merely lend a degree of quantitation to the prediction.

      On an historical note, it is interesting that Arrhenius in 1896 predicted the temperature response to increases in atmospheric CO2 with surprising accuracy in the absence of any type of sophisticated model. He overestimated climate sensitivity but not by a huge extent.

      • Jim Hansen’s models were an exception and performed skillfully.

        Less impressive in context. Hey, all three scenarios are skillful! That should be worth some extra credit, right?

      • Note that Hargreaves et al. evaluated Scenario B for skill because its applied forcing was the closest to what actually occurred. The projection that was actually closest to the temps was Scenario C though (that was the sensitivity study with a drastic reduction of CO2 emissions), so we accomplished by doing nothing what Hansen’s 1988 GCM claimed we’d accomplish by huge reductions of CO2 emissions…

      • It would be worthwhile for readers to review the Hargreaves analysis to form their own conclusions. It appears that the Hansen Scenario B performed fairly well, with an overestimated trend consistent with its estimate of climate sensitivity at what is now considered to be toward the high end of the likely range (although of course, Hansen continues to estimate climate sensitivity at higher levels than most other observers). Hopefully, models have improved somewhat since then, but Hargreaves et al suggest that the results to date are not unpromising.

        It’s also not surprising that an off-the-cuff prediction that the future will look like the past is not a bad way to predict the future, but the models were not given the benefit of that shortcut, and so what was at issue was their ability to yield reasonable output on the basis of inputted physical principles, initial and boundary values, parametrizations, and appropriate choice of simplifying assumptions. At some point, the future won’t look like the past, and we will need something other than reliance on past experience to anticipate how the deviations will play out.

      • Fred:

        The tradition in meteorology and climatology of emphasizing skill as the measure of performance can have the effect of obscuring the falsification of the model by the evidence. Though it may have skill, Hansen’s model is falsified by the evidence, for this model claims to predict the average global temperature, yet the predicted temperature differs from the observed temperature.

      • Terry – What was “falsified” was only a strawman – a supposition (advanced by no-one) that a model can achieve 100 percent accuracy. Fortunately, that unachievable level is unnecessary for models to provide useful predictions. Hansen’s models did that fairly well, and models have improved since then. The critical question will never be whether a model is always right – that can’t happen – but whether it can yield a reasonably accurate answer often enough to provide valuable information, in conjunction with other evidence, that informs our actions better than would be possible without the model. The answer appears to be affirmative for some scenarios such as long term global temperature predictions, but the models do less well on short term or regional scales. Climate predictions do not depend exclusively on models, but the models reduce the levels of quantitative uncertainty.

      • Fred:
        By all of the evidence I’ve ever seen, by his model Hansen represents that the temperature has a specific value. By the evidence, this representation is repeatedly and consistently false.

        I would be guilty of having erected a strawman if and only if Hansen did not make this representation. If he did not make it, where is the evidence? What, in fact, is the representation that Hansen did make?

      • Your interpretation is incorrect. Model results are never represented as an unerringly accurate portrait of reality, although if you quote Hansen as claiming they are, I will retract that assertion. That said, the issue is whether they are sufficiently close to be valuable. Rather than repeat myself, I would urge you to see my earlier comment as well as the Hargreaves article for further elaboration.

      • An additional small point. Models don’t attempt to project estimates of global temperature but rather temperature anomaly, which, despite its own challenges, is easier to quantify. I realize that you already understand that, but readers with no knowledge of the subject might come away with the wrong impression on seeing the word temperature.

      • Fred:
        You’ve misrepresented the scientific method of inquiry. Under this method, each claim that is made by a model must be susceptible to being found false by reference to an instrument reading. If Hansen’s model makes a falsifiable claim, what is it?

  40. Judith,
    A great deal of arrogance has slipped in: no matter how badly the models perform, they are held to be correct. Having the backing of many governments gives them the ultimate power that they cannot be wrong unless made to look absolutely foolish.

  41. The term “most” only means more than 50% when there are two choices. When there are more than two choices, “most” simply means the one with the highest share. Since there are more than two positive temperature forcings, the one with the “most” can have < 50%.
    An IPCC method to deliberately mislead, as John N-G implied.

  42. I’ve been struggling to understand the IPCC’s thinking on the detection and attribution issue. Their thinking on “detection” seems to be that there is such a thing as “natural” variability in the average global surface temperature (agst) and such a thing as “unnatural” variability.

    They seem to imagine that the observed agst is the sum of two terms. One is the agst of the natural variability. The other is the agst of the unnatural variability. The three agst’s have agst vs time waveforms. The waveform of the agst of the natural variability adds “noise” to the observed waveform. The waveform of the agst of the unnatural variability adds “signal” to the observed waveform. If a signal stands out above the noise, the IPCC reasons that something unnatural is happening.

    If I understand the thinking, the hockey-stick-like waveform of the observed agst leads the IPCC to the conclusion that a signal has been detected; the shaft of the hockey stick is pure noise while the blade is pure signal. This signal having been detected, the IPCC wonders about the cause. In making an investigation, it runs the climate models without anthropogenic CO2 emissions. Under this condition the computed agst looks like the hockey stick without the blade. Thus, they attribute the cause to CO2.

    What if anything is wrong with this line of thinking? Several defects come to mind. Of these, I think the most noteworthy is that the only observable in this picture is the signal + noise. In science, we are forced by the requirement for the falsifiability of our hypotheses to limit these hypotheses to making claims about quantities that are observable. In view of this limit, we can state hypotheses about the signal + noise. We cannot state hypotheses about the signal or about the noise for neither the signal nor the noise is observable.

    The job of forming hypotheses about the signal + noise is filled by climate models. I don’t believe there to be an additional role for IPCC-style detection and attribution. For policy making, it is necessary and sufficient to have a statistically validated, highly informative and logically constructed model that is robust against failure. This we do not yet have.
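    The observability point above can be made concrete with a toy sketch (an editorial illustration, not anything the IPCC computes): two different signal/noise decompositions reproduce the identical observed record, so observation alone cannot choose between them.

```python
import random

random.seed(0)

# Only the sum (the "observed" series) is ever measurable.
signal_a = [0.01 * t for t in range(100)]             # decomposition A: slow trend
noise_a = [random.gauss(0, 0.1) for _ in range(100)]
observed = [s + n for s, n in zip(signal_a, noise_a)]

# Decomposition B of the SAME observations: half the trend moved into "noise".
signal_b = [0.005 * t for t in range(100)]
noise_b = [o - s for o, s in zip(observed, signal_b)]

# Both splits reproduce the observations exactly, so no hypothesis about the
# unobservable signal or noise alone is testable against the record.
for s, n, o in zip(signal_b, noise_b, observed):
    assert abs((s + n) - o) < 1e-12
```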

  43. Wm. Briggs, a Bayesian statistician with experience in the climate field, says he is “happy to help” with these kinds of issues:


    • David L. Hagen

      Briggs observes in part:

      . . .There are an infinite number of hypotheses that might account for the historic observations: the CO2 hypothesis (coupled with other hypotheses about how the atmosphere works) is just one of these. . . .We can whittle down the set of explanatory hypotheses by asking how each of them fit in with true or highly likely non-climate hypotheses, such as the theories of thermodynamics, etc. . . .Thus far, this is the only evidence we have for the CO2 hypothesis. The true test of climate models, hence of the CO2 hypothesis, will be in how well they predict data not yet seen.

  44. Dear Judge Curry,

    Your summing-up in this trial – The UN vs. Carbon Dioxide – was detailed and thorough, and has been generally acclaimed.

    Your verdict, however, that the prosecution failed to make a case against the defendant but will be given another go, is woeful and way outside the law. Much worse, your hope that they will make a better case next time shows blatant judicial bias.

    From your summing-up, only one verdict is possible : Not Guilty.


    Mike Jonas.

    • Nope, science gets an infinite number of chances to refine and improve their arguments. But if an alternative argument gets more traction, then the original one will eventually go away unless reinvigorated by something.

      • Well, no. You don’t need an alternative argument if the original one is wrong, you simply ditch the original one. This original argument – as demonstrated by your post – is wrong. As for an infinite number of refinements, it feels like they have used up half that number already. This is shonky computer modelling based on circular logic, masquerading as science, and exaggerated to the point of absurdity. You have just demonstrated the exaggeration. As a scientist, you should also be paying attention to the lack of successful testing and the existence of tests which show the hypothesis to be false.

        Q. How many tests showing a hypothesis to be false are needed in order for the hypothesis to be ditched?
        A. One.

        Unless the hypothesis is AGW???

  45. Ah…. one of my pet peeves – the misuse of the word ‘most.’

    The most common use of the word most in journalism today is, unfortunately, the >50% one. Most is properly synonymous with ‘almost all.’ When used as ‘the most,’ it becomes ‘greater than any other,’ without absolute value. But for some reason, ‘most voters’ has come to mean ‘more than 50%,’ when what they actually mean is ‘a majority.’

    When you ask your daughter if she has finished her homework, and she says ‘most of it,’ you expect her to have nine out of ten math problems finished. That is, you would LIKE to expect her to have nine out of ten problems done. If she has finished number six and stopped, you send her back to her desk.

    Funny how a small thing like this can be so important. One would think quantitative scientists would be careful with their language.

    • Note that the word “most” may have been introduced by the IPCC government reps rather than the scientists. The scientists would only need to have agreed that their understanding fit within that terminology.

      • Here’s a coincidence: just today, Andy Revkin posed a passage from a chapter he wrote a few years ago. In it appears this sentence:

        “For the first time, nearly all of the caveats were gone, and there was a firm statement that ‘most’ (meaning more than half) of the warming trend since 1950 was probably due to the human-caused buildup of greenhouse gases.”

        So ‘most’ is officially anything more than half, including 50% plus a smudge. I can only assume that this misuse of the word is deliberate. If you ask 10 people on the street what ‘most’ means, you’d get answers in the range of 80-98%.

      • I don’t think you can say it’s “misuse.” The word doesn’t have a quantitative definition:

        1. a. Greatest in number: won the most votes.
           b. Greatest in amount, extent, or degree: has the most compassion.
        2. In the greatest number of instances: Most fish have fins.

        So there’s some ambiguity in the term, but I’m used to most = majority de minimis. It’s a fairly common usage.

  46. In her essay, Judy requests input from logicians on the IPCC’s detection and attribution argument. This is to respond to her request.

    Whether or not the IPCC’s argument is “logical” depends upon what one means by “logic.” Logic is the science of the principles of reasoning. By these principles, one may distinguish inferences that are correct from inferences that are incorrect.

    What are the principles of reasoning? One is the law of non-contradiction. It states that a proposition is false if self-contradictory.

    What are the remaining principles of reasoning? Many philosophers say they do not know. However, at http://www.knowledgetothemax.com, I argue that these principles were described for the first time in 1963 by Ronald Christensen. Christensen noted that an inference had a unique measure that was called its “conditional entropy.” The conditional entropy reduced to the more widely known “entropy” under certain circumstances. Thus, Christensen reasoned, it should be possible to distinguish correct from incorrect inferences by optimization. In particular, in a group of inferences that were candidates for being made, the correct inference was the one which (depending upon the type of inference) minimized the conditional entropy or maximized the entropy. In the case of entropy maximization, the process of maximization was constrained by the available information including observational data. Thus, Christensen concluded, the principles of reasoning were entropy maximization and conditional entropy minimization.
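    The entropy-maximization principle, in its simplest textbook form (this sketch is generic information theory, not Christensen’s own method), says that among candidate distributions consistent with the available constraints, the correct inference is the one with maximum entropy:

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

# With no constraints beyond summing to 1, the maximum-entropy inference
# over three outcomes is the uniform distribution -- the least presumptuous
# assignment given the available (here, absent) information.
candidates = [
    [1/3, 1/3, 1/3],
    [0.5, 0.3, 0.2],
    [0.8, 0.1, 0.1],
]
best = max(candidates, key=entropy)
assert best == [1/3, 1/3, 1/3]
```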

    A huge body of evidence supports Christensen’s contention. No body of evidence falsifies his contention.

    Several elements of the supporting evidence are published in the literature of meteorology. In 1978, Christensen and some colleagues undertook construction of the first weather forecasting model to be built under Christensen’s principles of reasoning. In 1980, the American Meteorological Society published the results of this work. By ensuring that every inference that was made by the model was logically correct, Christensen and his colleagues increased the span of time over which precipitation could be predicted in the central Sierra Nevada by a factor of 12 to 36! In this way, the first long range weather forecasting model was created.

    An alternative to identification of the one correct inference by the principles of reasoning is to select it by one of the intuitive rules of thumb that are called “heuristics.” However, each time a particular heuristic identifies a particular inference as the one correct inference, a different heuristic identifies a different inference as the one correct inference. In this way, the method of heuristics violates the law of non-contradiction. Accompanying this violation are violations of one or both of Christensen’s principles of reasoning. When heuristics replace Christensen’s principles of reasoning in the construction of a model, then, the method of construction merits the descriptor “illogical.”

    Thus far, Christensen’s fate has been for his work to be ignored by nearly all scientists, statisticians, philosophers and academics. A result has been for virtually all model builders to continue in the tradition of employing the method of heuristics. One consequence is for virtually all models to be highly prone to being falsified if tested. Often, falsification is avoided only because the model is never tested. Another consequence can be for the models to provide less information to the decision maker than could have been provided.

    So far as I’ve been able to determine, all climatological models that have already been built were built under the method of heuristics. Thus, we can be sure that every single one of them makes a logically fallacious argument.

    The IPCC climate models are among the climatological models that make logically fallacious arguments. It is clear from the structure of these models that every one of them would be falsified if tested. However, the IPCC ensures that neither the IPCC itself nor anyone else can discover this feature of the models by not only failing to try to statistically validate them but also making it impossible for anyone else to try to statistically validate them. The IPCC makes it impossible for anyone to try to statistically validate them by persistently failing to identify the statistical population that would be sampled in making this attempt.

    In an attempt at statistically validating a model, it is necessary for a sample to be available that is of sufficient size for statistical significance of the conclusion. The global temperature record extends backward in time about 150 years. If the period of time over which the weather is averaged in producing the climate, then, the sample that is available for model building and validation consists of 15 events. The number of events that are available for validation is 15 less the number used in model building tasks such as assignment of numerical values to parameters. Thus, we can be sure that the level of statistical significance would be abysmal if the climate models were tested.

    Now to address the argument which the IPCC makes regarding detection of an anthropogenic signal, this argument seems to be rooted in signal detection theory. In communications engineering, signal detection theory has been supplanted by information theory. Information theory identifies the one correct inference by Christensen’s principles of reasoning. Signal detection theory identifies the one correct inference by the method of heuristics.

    By use of the method of heuristics, it is possible for the model builder to fabricate information that he/she does not have by violation of the principle of entropy maximization. With the availability of 15 observed events for the combined purposes of constructing and testing its anthropogenic signal detector, the IPCC can have reached its high level of confidence only by fabricating information. I’m insufficiently up to speed on the IPCC’s attribution argument to venture an opinion on it.

    • Oops! I erred.

      What I said was: “If the period of time over which the weather is averaged in producing the climate, then, the sample that is available for model building and validation consists of 15 events”
      What I meant to say is: “If the period of time over which the weather is averaged in producing the climate IS TEN YEARS then, the sample that is available for model building and validation consists of 15 events.”

      I apologize for my mistake.
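      The corrected arithmetic is simple enough to spell out (the ten-year averaging period is the commenter’s assumption, and the number of events consumed by parameter fitting is a hypothetical figure for illustration):

```python
# Sample-size arithmetic for the comment above.
record_years = 150        # approximate length of the global temperature record
averaging_years = 10      # assumed period that turns weather into one climate "event"

events = record_years // averaging_years   # 15 events in total
params_fit = 10                            # hypothetical events consumed by model tuning
validation_events = events - params_fit    # what is left over for validation

print(events, validation_events)           # 15 5
```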

  47. I just spotted this at Michael Tobis’ blog, about the Italian flag


    The reason I don’t have Tobis’ blog on the blog roll is because I either can’t understand his articles or don’t find them relevant (which I find surprising, since his comments at other blogs are well written). Well, this one is definitely relevant, if you can get past the part where he tries to use the Italian flag argument to demonstrate that I am an idiot.

    Does anyone understand the point he is trying to make? The point of the Italian flag is as a visual to demonstrate that considering the red and green (arguments for, arguments against) isn’t sufficient: the white area is very important.

  48. Looks like a case of talking past. The flag is a 3 valued epistemic model. Tobis is trying to force it into a probabilistic decision model. It does not work so he blames you! But the white area needs to be better understood. For example, evidence against a hypothesis is normally a major form of uncertainty but in this case it goes in the red area, not the white.

  49. Although I’ve tried to use the Italian Flag syntax in my comment above, I must confess that I don’t understand it.

    The figure presented in “Doubt” has three labeled categories: “evidence for”, “uncertainty unknowns”, and “evidence against”. For or against what? A proposition, assertion, or hypothesis, I assume.

    Then your first example parses the IPCC statement at issue here, and seems to assign colors as follows: “due to AGGC”, “attribution unknown”, and “due to other than AGGC” (67, 5, 28). But the IPCC statement IS the proposition, and the flag as applied here describes the attribution of causes rather than the certainty or uncertainty of the proposition. So I’m immediately at a loss: in the definition it expresses the balance of evidence, but in the first application it expresses an apportionment of causality.

    You provide your colors: (30, 40, 30) and express them as an apportionment of causality, saying it means that the possible range of CO2 attribution is 30%-70%. But if you translated your apportionment of the IPCC statement the same way, you would be asserting that the IPCC’s possible range of CO2 attribution is 62%-72%, which it plainly is not. If it were, the IPCC would be able to say that most of the warming is unequivocally due to greenhouse gases. The IPCC’s statement, in apportionment percentages, probably is something like (45, 30, 25), which admits the slim possibility that most of the warming is not due to greenhouse gases.

    Then one can apply the Italian Flag to the IPCC’s statement in a different way: what’s the evidence in favor of or against the IPCC’s statement, and how big are the uncertainties/unknowns? One might imagine the IPCC being supremely confident that their statement is correct (80,15,5). This would mean that 80% of the evidence says that most of the warming was probably anthropogenic, 5% of the evidence says that most of the warming was probably not anthropogenic, and the remaining 15% of the evidence is ambiguous.

    Whether you treat the Flag as representing confidence or attribution makes a big difference, and I can’t tell which use you’re advocating. The issue of “balance of evidence” versus “apportionment of causality” is, I think, what Michael Tobis is concerned about too.

    Your next example, your “litmus test” question, is a question! Is it meant to be a statement for which the balance of evidence is to be weighted, or a demand for apportionment of causality? Your answer (25, 50, 25) is followed by the incomprehensible statement “This assessment allows for a greater level of uncertainty in the 21st century than in the 20th century, retaining the 50% mean score albeit with a greater level of overall certainty.” What scares me is that it might not be a typo.
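    The “apportionment of causality” reading discussed above can be sketched as arithmetic (the [green, green + white] range rule is an assumption inferred from the 30%–70% example in the post, not an official definition of the flag):

```python
def flag_range(green, white, red):
    """Read an Italian-flag triple (evidence for, unknown, evidence against),
    in percent, as a possible attribution range: the green share plus
    whatever fraction of the white area turns out to favor the hypothesis."""
    assert green + white + red == 100
    return (green, green + white)

# Dr. Curry's (30, 40, 30), read this way, yields the 30%-70% range she states.
assert flag_range(30, 40, 30) == (30, 70)
```

Applying the same rule to the IPCC parse (67, 5, 28) gives a much narrower range, which is exactly the mismatch between “balance of evidence” and “apportionment of causality” complained of here.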

  50. I prefer to think of Ms. Curry as a dissident–A dissident, broadly defined, is a person who actively challenges an established doctrine, policy, or institution.

    My hat is off to Ms. Curry.

  51. If anyone is interested in discussing Michael Tobis’s reaction to Dr. Curry’s posts on uncertainty, I’ve posted an edited version of his response on my personal blog. There were objections to Dr. Tobis’s tone and characterization of Dr. Curry, so I’ve removed those to encourage more direct and constructive dialogue.

    I’ve turned off moderation so as to permit a freer exchange of ideas, and will avoid editing anything short of egregious direct attacks on other commenters. I’d ask that people self-moderate, and in particular request that you respect that some of the other posts on my blog are of a pretty personal nature. Your comments are welcome, but your sensitivity would be appreciated.

  52. Many thanks Judith for this post and the quality of exchange which you obtain on your site

    May I add a few observations from France ?

    1. there is indeed limited argument regarding recent warming as long as the discussion is based on ‘official’ global databases ; as soon as you consider a) all ‘comments’ on data quality and significance, temperature data as included in GHCN, Crutem, Giss are heavily contested by experts from more than 15 countries, including ‘smaller’ countries like the US, Canada, the entire Northern Europe, Russia….and b) the urban heat effect (see recent paper by McKitrick), the consensus on a warming trend would look quite different ; add to that the fact that some analyses show that the 75-95 higher temperatures may be partly due to effectiveness of anti-pollution policies, and you may realise that a consensus on higher temperatures may be based on sand rather than anything else. As the French Academie des Sciences put it recently, we only have high quality temperature data from satellites since the 70’s.

    2. there is a highly interesting discussion of how a ‘very likely’ level of confidence as to attribution was obtained in AR4 in a McKitrick paper in 2007 :
    after having a look on it, you may conclude that IPCC expert knowledge may be closer to alchemy than to science

    3. In the same paper, McKitrick mentions long-term persistence as a key factor not taken into account in any analysis, not to mention models ; this gives me the opportunity to pay tribute to the (late) great Benoit Mandelbrot, who passed away a few days ago ; climatologists may be desperately looking for scientific laws where just chance is governing the Earth.

    4. It is always amazing how little attention is paid on blogs run by US or British citizens to ideas from central and eastern European scientists. You should also have on your radar screen ideas from the likes of Miskolczi, Gerlich and Tscheuschner, the Russians, etc.

    Many thanks

    • Daniel, thanks for your comment. Miskolczi has one interesting point that i want to follow up on. But these papers have many flawed arguments that perhaps have not been adequately discussed.

    • Regarding Mandelbrot, this brings up what I call the chaotic climate hypothesis. The climate system’s nonlinear feedbacks make it chaotic. So-called internal variability is assumed to be small but may be quite large, such as that seen in abrupt events. If so then all the oscillations we see, including those in 20th century warming, may simply be the chaotic result of constant solar input. Chaotic systems oscillate under constant forcing. This is little discussed, but it explains why the data keep contradicting the models.

      PS: Regarding France the National Academy just announced that the science is settled. So sorry.

  53. Chance that we’ve seen warming over the past 50 years: nearly 100%. Based on surface temperature measurements, satellites, glacial retreat, sea level rise, decrease in snow-cover in spring & fall, etc. etc.

    Best estimate of natural external forcings in the past 50 years: Cooling, due to solar and volcanic measurements. Probability that net natural forcing influence is cooling: maybe 80%. So the probability that net natural influence is warming: maybe 20%. But what is the probability that the net natural external influence is greater than half the warming over the past 50 years? I would estimate a couple of percent at best.

    Chance that increased GHGs are expected to lead to warming: Nearly 100%. Chance that increased GHGs are expected to lead to warming on the scale of the observed warming: this requires estimates of climate sensitivity, ocean heat uptake, etc., but I would think that basic theory suggests that increased GHGs could be responsible for much more than the observed warming. So, maybe 90%?

    Other influences: aerosols, likely cooling, though the temporal variation is key (eg, they are net cooling overall, but their trend in the past 20 years may actually be net warming because of sulfate pollution control in the industrialized nations). Land use change albedo: again, net cooling, but the trend is not as clear – but not large. Chance that these other forcings lead to net cooling? Greater than 50%.

    Finally, natural variability: not clear what direction natural variability would be: say, 50% chance of warming, 50% cooling, magnitude of long term change likely small but might not be.

    So: expected best guess: We have warming, we have a bunch of cooling influences (natural external influences, aerosols, land use change), one warming influence (GHGs), and some uncertain (internal natural variability). Based on this, the expected contribution of GHGs might actually be GREATER than 100% of the observed warming.

    Then the question becomes: how likely is it that we have simultaneously overestimated the effect of GHGs by a factor of more than 2, and that some combination of natural internal variability and errors in our estimates of the other external forcings can combine to make up the difference? Don’t forget that stratospheric cooling, and other geographic/temporal patterns and fingerprints are reasonably consistent with GHG warming + NH aerosol cooling…
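    The subjective bookkeeping above can be written down explicitly (every number below is a guess copied from, or patterned on, the comment itself — none is a measured value):

```python
# Subjective probabilities from the comment above, not data.
p_observed_warming = 0.99   # warming occurred over the past 50 years
p_natural_gt_half = 0.02    # natural external influence exceeds half the warming
p_natural_net_cool = 0.80   # net natural external forcing is cooling (context)
p_other_net_cool = 0.60     # aerosols + land use change net cooling (context)

# If the known non-GHG influences are probably cooling, "most of the warming
# is anthropogenic" fails mainly when natural influence exceeds half of it.
p_most_warming_anthro = p_observed_warming * (1 - p_natural_gt_half)
print(round(p_most_warming_anthro, 4))   # 0.9702
```

On this rough accounting the conclusion lands well above the IPCC’s “very likely” (>90%) threshold, which is the sense in which the comment calls the IPCC statement conservative.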

    In all, I think the IPCC conclusion is quite conservative. And that the Italian Flag approach is not the right way to frame this problem, in addition to suffering from some serious definition problems (as pointed out by MT and James Annan, among others).


    ps. Miskolczi had a semi-reasonable point in his paper? I’m surprised…

    • Yours are of course subjective probabilities and mine are quite different. As I pointed out above, if we take the satellite readings as the best available data then the only warming in the last 30+ years is a step up during the big 1997+ ENSO cycle. Temps were flat both before and after, but flat at a higher level after. And if we accept the surface statistical models before that then this single step function is the only warming since 1940 or so. So while yes there has been some warming there is nothing here for GHGs to explain.

      Moreover, confining the argument to the last century, as Curry does, is incorrect. If we look further back we find natural oscillations as large and larger than the small warmings found in the 20th century (if we even know what the latter are, which is not likely). The simplest explanation for these purported warmings is that we are emerging from the little ice age. The problem is that we cannot explain these historical natural oscillations, and the pro-AGW research program refuses to try to do so. We are thus left with a competing possibility that we do not know how to explain. This is an extraordinary epistemic situation. It is not two competing hypotheses but rather one hypothesis (AGW) competing with something we do not understand, but which we know exists. Natural variability is a “highly likely unexplained possibility,” an interesting logical object indeed.

      Given that (1) there is no evidence of GHG induced warming in the best available temperature record and (2) that natural variability (to the extent that we understand it) easily explains what little warming we see , my estimate of the probability that increasing GHGs have, or will, increase temperature is close to zero.

      Clearly we are at odds, looking at the same data. But rational people of good will can do that, and often do. The obvious result is great uncertainty, and a great debate, and that is the precise state of the science. Your certainty merely frames one end of the distribution of belief. The great breadth of that distribution measures the degree of uncertainty.

      • p.s. I agree that confining the argument to the last century is far from optimal; I have mainly been critiquing the IPCC’s argument (which is focused on the 20th century and mainly on the latter half of it). Natural internal variability can be pretty large on decadal to century time scales, and minimization of this variability by the hockey team has been very damaging to the science and to the identification and interpretation of natural variability.

      • Indeed, the defining mark of an advocacy document is that it systematically ignores well known counter arguments, in this case the natural history of global temperatures. This is why the geologists tend to be skeptics. But critiquing such a document requires going beyond its frame, to say what is not said. If judgments were passed solely on the prosecution’s case we would not need trials, as everyone would be convicted.

      • “As I pointed out above, if we take the satellite readings as the best available data then the only warming in the last 30+ years is a step up during the big 1997+ ENSO cycle. Temps were flat both before and after, but flat at a higher level after. And if we accept the surface statistical models before that then this single step function is the only warming since 1940 or so.”

        Um. If a rational person looked cross-eyed at the satellite data and totally ignored all other data, then maybe they might believe that there was a chance that the warming only occurred in 1997. But even then, a rational person would more likely come to the conclusion that there is a warming trend superimposed onto interannual variability (mainly explained by ENSO and volcanic eruptions). Combine the satellite trend with the surface observations and the umpteen non-temperature based records that reflect temperature change (from glaciers to phenology to lake freeze dates to snow-cover extent in spring & fall to sea level rise to stratospheric temps) and the evidence for recent gradual warming is, well, unequivocal.

        Previous large natural oscillations are important to examine: however, 1) our data isn’t as good with regards to external forcings or to historical temperatures, making attribution more difficult, 2) to the extent that we have solar and volcanic data, and paleoclimate temperature records, they are indeed fairly consistent with each other within their respective uncertainties, and 3) most mechanisms of internal variability would have different fingerprints: eg, shifting of warmth from the oceans to the atmosphere (but we see warming in both), or simultaneous warming of the troposphere and stratosphere, or shifts in global temperature associated with major ocean current shifts which for the most part haven’t been seen. Perhaps, perhaps, a cloud cover shift ala Spencer could explain some changes under some very specific circumstances: but not only is there no evidence for such a cloud cover shift, but there would _still_ need to be an explanation for why GHGs were NOT causing warming as predicted by our best theoretical understanding.


        ps. “The great breadth of that distribution measures the degree of uncertainty.” This logic implies that we are uncertain as to whether people ever landed on the Moon, whether evolution happened, whether the Earth is more than 6000 years old, whether megadoses of Vitamin C can cure all ills… plenty of smart people subscribe/subscribed to all these beliefs, but that does not actually mean there’s any (reasonable) uncertainty that those beliefs are wrong. Ditto to unequivocal warming in the past 50 years.

      • Take the sat data before the 1998 El Nino spike and again from 2002, after the La Nina half of the ENSO, and run linear trends on both segments. Each is flat. That is the data. There is nothing to support a hypothesis of GHG warming.

        Complex, mathematically questionable surface statistical models of the Jones type do not count, nor do selected glacier studies. (In fact there are studies showing no change in global ice balance, but it does not matter.) The satellites are our instrument, the only true instrument we have. All the rest is guesswork. If the guesswork contradicts the satellites then the guesswork is wrong.

        The rest, about natural variability, is conjectural hand waving until we can say why the little ice age occurred, as well as the earlier oscillations, abrupt events, etc., such that we are not emerging from the LIA, or otherwise affected by these unknown mechanisms.

        The climate change debate is a scientific debate. Likening it to the special creation or moon landing debate is just silly. You are at one extreme of scientific opinion and I am at the other. The entire distribution is well populated by scientists. It may even be bipolar, nobody knows. The differences measure the uncertainty.

      • Ah, yes. The old, “let’s chop this data up into carefully chosen segments”. I note your first segment ends with the years post-Pinatubo running into a La Nina, making it almost flat, and your second of course starts at the top of an el Nino. I could totally take a linear trend plus a sine wave and claim that all the warming happened in that little segment where both are rising at the same time.

        And even trying to follow your instructions, I still get rising trends for both the pre-1997 and post-2002 periods – I could probably pick a specific month which would avoid that trend, but that would be cherry-picking ala mode.
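        The segment-trend dispute is easy to reproduce with synthetic data (a toy series, not the actual satellite record): a steady underlying trend plus a single El Niño-like spike still shows the positive slope on both the pre-spike and post-spike segments.

```python
# Toy monthly anomaly series: a steady trend plus a 1997/98-style spike.
n = 360                                    # 30 years of months
trend_per_month = 0.0015                   # ~0.18 C per decade, assumed for illustration
series = [trend_per_month * t for t in range(n)]
for t in range(200, 212):                  # a one-year El Nino-like excursion
    series[t] += 0.6

def slope(y):
    """Ordinary least-squares slope of y against its index."""
    m = len(y)
    xbar = (m - 1) / 2
    ybar = sum(y) / m
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(m))
    return num / den

# Both segments recover the underlying slope; neither is flat, even though
# the spike dominates a casual look at the plot.
pre, post = slope(series[:200]), slope(series[220:])
assert abs(pre - trend_per_month) < 1e-9 and abs(post - trend_per_month) < 1e-9
```

Picking segment endpoints on the spike itself, of course, can manufacture the opposite impression, which is the cherry-picking point being made here.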

        And what study do you have that shows a constant or growing glacial ice balance? I give you NCDC: http://www1.ncdc.noaa.gov/pub/data/cmb/images/indicators/glacial-decrease.gif, NSIDC: http://nsidc.org/sotc/glacier_balance.html, UNEP/WGMS: http://www.grid.unep.ch/glaciers/pdfs/5.pdf.

        While you’re at it, check out the NOAA State of the Climate Report in BAMS, and the convenient FAQ “How do we know the world has warmed?” at http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/2009/bams-sotc-2009-chapter2-global-climate-lo-rez.pdf#page=8

        So, to conclude: 1) The satellite data supports the interpretation that is obvious when looking at all the available data, e.g., the world has warmed. 2) Even if the satellite data contradicted the remainder of the data, is it more likely that one data set depending on the calibration of one instrument is correct, or that dozens of data sets using dozens of methods, all showing the same pattern of warming, are? If you recall, it was only a few years ago that the satellite data had to be corrected after years in which it showed cooling in contrast to the rest of the world’s temperature datasets. Remember that before you call it the “only true instrument”.

        This is a well populated distribution only inasmuch as 95% of the scientists who study this area agree that the world is warming, and there is a set of crackpots and cranks who jump up and down using numerology and ignoring 90% of the available data to conclude the opposite. The sort of logic that can deny the obvious warming of the planet is the kind of logic that ignores the forest for the rotting tree trunk in the corner.


        ps. There are plenty of legitimate uncertainties in the realm of climate science, and I agree that the field could continue to improve the manner in which it includes, addresses, and communicates these uncertainties, and that in many cases the uncertainty is broader than the “consensus” statements in the IPCC: but the fact that the world has warmed fairly steadily over the past several decades is not one of those uncertainties.

      • Mikel Mariñelarena

        You sound like a very knowledgeable person. Perhaps you could be the person to take me out of my misery and explain how the two aerosol paradoxes that I mentioned below can be reconciled with the IPCC attribution position. And, since you’ve mentioned the cooling of the stratosphere several times as a fingerprint of GHG warming, could you also explain why it hasn’t cooled since the mid-nineties?

      • I can try to explain, but without nearly as much confidence:

        1) The particular spike you are looking at covers a shorter time period than is ideal for climate signals, and only one hemisphere. It is therefore harder to separate natural variability from the anthropogenic signal (in this case, heat being shifted around, versus a global ocean + atmosphere heating over the past 50 years).

        2) It would be nice to see an ENSO+volcano corrected time series

        3) The timing of that spike coincides with the known issue that sea-surface temperature readings may have been biased high during/immediately post WWII (the Wigley “blip”).

        4) The majority of the studies on the hemispheric aerosol signal actually look at longer, post WWII trends.

        Having said all that, there is a recent Thompson et al. (2010) study that suggests that more of the post-WWII hemispheric trend difference that had been attributed to aerosols might be a result of Northern Hemisphere sea surface temperature drop in 1970 (presumably natural), which muddies the picture up again…

        That probably doesn’t help, does it?
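On point (2), an “ENSO+volcano corrected” time series is usually built by regression: regress temperature on an ENSO index and a volcanic index, then subtract the fitted contributions. A minimal sketch on synthetic data – the indices and coefficients below are invented for illustration, not real observations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600                                     # 50 years of monthly data
enso = rng.normal(0, 1, n)                  # stand-in for an ENSO index (e.g. MEI)
volc = np.zeros(n)
volc[200:230] = 1.0                         # stand-in volcanic aerosol pulse
temp = 0.001 * np.arange(n) + 0.08 * enso - 0.3 * volc + rng.normal(0, 0.05, n)

# Regress temperature on [constant, time, ENSO, volcanic], then subtract
# the fitted ENSO and volcanic terms to leave trend + residual
X = np.column_stack([np.ones(n), np.arange(n), enso, volc])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
corrected = temp - beta[2] * enso - beta[3] * volc

print(f"estimated ENSO coefficient: {beta[2]:.3f} (true value used: 0.08)")
print(f"estimated volcanic coefficient: {beta[3]:.3f} (true value used: -0.30)")
```

The same multiple-regression idea, applied to real indices, is what produces the “corrected” curves one sometimes sees; the correction is only as good as the indices and the assumed linearity.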


      • Mikel Mariñelarena

        Thanks for your explanation, M. But it only addresses the NH-SH anomaly difference in the mid-century. In fact, if the mid-century cooling had been “largely originated by anthropogenic aerosols” (I don’t have the exact wording handy now, but that’s the IPCC claim), I wouldn’t even expect to see much of a cooling in the little-industrialized SH. However, HadCRUT shows a pronounced cooling around 1944 that lasted until the late 60s.

        Still, I am prepared to accept that part of that cooling is due to measurement artifacts. But if the aerosol forcing is so strong, why are we still left with the first paradox? This is what the world sulfate aerosol distribution actually looks like: http://upload.wikimedia.org/wikipedia/commons/6/67/Gocart_sulfate_optical_thickness.png Not surprisingly, eastern China (by far) and Eastern Europe show the highest concentrations. Why are we not seeing any cooling in those regions? This is all quite amazing to me, since if anthro aerosols don’t really cool much, then the climate sensitivity must be very low (Hansen himself proclaimed last year in his Venus-runaway paper that the climate sensitivity figure hinges entirely on the value of the aerosol forcing). Much though I’ve tried, I haven’t managed to have these two paradoxes explained to me by any AGW supporter.

        Anyway, shouldn’t the MSU measurements of the stratosphere continue to show cooling since the mid-nineties?

      • Your hypothesis-saving fanaticism is showing. If you want to claim that glaciers are better instruments than satellites for measuring the heat content of the atmosphere, good luck with that. It sounds crackpot to me, to use your term.

        As for the sat record, there is nothing unscientific in identifying a significant pattern just because it consists of segments. The segments (before, during and after the big ENSO) have clear physical meaning. The results are important, and that is what science is about. If you have an important segmented pattern, let’s see it, but it will not make my pattern go away.

        You just don’t like the pattern I am pointing to because it does not fit your theory. You prefer GISS or HadCRU but these surface statistical models have a host of known problems. Mind you we could compromise and say we just do not know what the temperature behavior has been, that it is simply a huge uncertainty, but I see no reason to do that.

        The fact is that the models have explained the wrong temperature behavior. The best data we have shows no GHG warming, just a mysterious step function. This is progress for science, just not for AGW.

        Anyway we are just arguing about examples of uncertainty. The point is simply that the scientific debate is both real and extensive, as Curry points out. I rest my case.

  54. Mikel Mariñelarena

    I am intrigued by this statement made by Fred Moolten above:

    “most observers see the greatest uncertainty as residing in anthropogenic aerosols. If their contribution has been overestimated, the role of CO2 will also have been overestimated, but not by an extraordinary degree.”

    I guess that this is based on the assumption that the range of total aerosol forcing in the IPCC assessment is correct: -1.2 W/m2 [-2.7 to -0.4] http://www.skepticalscience.com/despite-uncertainty-CO2-drives-the-climate.html That central value is almost on par with the CO2 forcing (+1.6 W/m2), thus canceling most of it. But the first issue here is: where is the logic in assigning such a specific value and uncertainty range to something that the IPCC declares to be associated with a *low* level of scientific understanding?

    More importantly for me, I see good evidence that the aerosol negative forcing has been greatly overestimated:

    1) Due to the short atmospheric lifetime of tropospheric sulfates, if their cooling effect was so large we would observe cooling or, at the very least, less warming over the emitting areas and downwind from them, especially China and some Eastern European regions. This is not observed.

    2) If the mid-century cooling had been caused by aerosols, as explicitly proclaimed by the IPCC, it would have been more intense in the NH. In fact, the cooling was more intense in the SH, although it lasted a bit longer in the NH: http://hadobs.metoffice.com/hadcrut3/diagnostics/hemispheric/northern/ and http://hadobs.metoffice.com/hadcrut3/diagnostics/hemispheric/southern/

    But I’m not so smart. The IPCC people must have seen all this and still somehow concluded that anthropogenic aerosols do have a strong cooling effect. If anyone knows their rationale for concluding that, in spite of the two pieces of evidence to the contrary that I have presented, I’d greatly appreciate it.

  55. When discussing attribution to various possible causes, especially when the relation is non-linear and the interactions are complex, there are imho only three possible approaches:

    1-Perform a lot of carefully controlled, repeatable experiments where the possible causes are changed independently and the change is measured.

    2-Do a lot of experiments on a model of the phenomenon. The model can be physical (a scaled model, or a model based on analogy – an electrical model of a mechanical setup, for example) or mathematical. With simple mathematical models, experiments can be bypassed; mathematics alone can do the trick by inversion, simplification, etc. Otherwise, approximations (nowadays often based on numerical discretisation and big computers) have to be used.

    3-Trust some experts who have been shown to be right before on similar subjects, even if they cannot explain mathematically or algorithmically why.

    (1) is much better than (2), and (2) is also much better than (3).

    Imho, given the importance of the issue, the collision with philosophical/political preconceptions, and the fact that there is no set of sufficiently similar problems on which experts could prove themselves, (3) is not an option in this case.

    (1) is probably not an option either, although the use of “natural experiments” should be encouraged. Natural variations of forcings and the resulting temperatures are apparently not accurate enough to get much out of them, but they are still worth pursuing.

    (2) is the method that was selected, rightfully imho, by the IPCC.
    The problem with (2) is that the overwhelming factor in attribution uncertainties is usually the quality of the model itself. Building a model that can be trusted is something incredibly difficult*, especially without the safety net of (1) for testing the model, if not thoroughly, at least along its main inputs and a few combinations of them.
    Even if the individual phenomena are well known, the final numerical model is not easily trusted: there are simplifications at multiple levels, and you are never sure there is no phenomenon that you forgot or mis-modelled.

    Given that, the overwhelming source of uncertainty in the attribution is the models themselves. Can the models be trusted for the purpose, or not? With so many inputs and complex interactions, and such large numerical models (which means A LOT of numerical approximations, even on supercomputers), the fact that they more or less fit the 20th-century temperature record is clearly not enough. Fitting one mostly monotonic curve does not make me trust a model – even a simpler one.

    There is more to it than whether the model fits, of course. But in the IPCC report the validation is far from extensive, and from what I have gathered on blogs, the whole set of validations is still not very convincing. The amount of experimental validation for GCMs is far from the norm for engineering FE models; it is in line with some academic models used for pioneering new classes of simulation. Such models are interesting, of course, and often shed light on the inner workings of complex phenomena. When used for prediction, they are also often wrong…

    So, imho, the main thing that would reduce uncertainties in attribution would be for AR5 to focus on model validation: not global temperature fitting over 100 years, but fine-scale regional fits, as many outputs as are measurable, and global fits over long periods against a complex, non-monotonic temperature curve.
    And establishing an experimental dataset that can be used for validation, i.e. one that is not itself in dispute…

    *I should know; I do this for a living – not for climate, but in the same line: continuum mechanics, simplifications, FE discretisation, numerical implementation, experimental comparisons.
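kai’s point that fitting one mostly monotonic curve under-constrains a model can be made concrete with a toy fit (synthetic data, nothing resembling an actual GCM): two structurally different models can match the same record about equally well in-sample and still diverge out of sample.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)                # "historical" period, rescaled
y = 0.8 * t + rng.normal(0, 0.03, t.size)    # a mostly monotonic record

# Two competing "models" calibrated to the same record
lin = np.polyfit(t, y, 1)                    # model A: straight line
cub = np.polyfit(t, y, 3)                    # model B: cubic

rms_a = np.sqrt(np.mean((y - np.polyval(lin, t)) ** 2))
rms_b = np.sqrt(np.mean((y - np.polyval(cub, t)) ** 2))
print(f"in-sample RMS error: linear {rms_a:.3f}, cubic {rms_b:.3f}")
print(f"extrapolation to t=2: linear {np.polyval(lin, 2.0):.2f}, cubic {np.polyval(cub, 2.0):.2f}")
```

The cubic can never fit worse in-sample than the nested straight line, yet that tells you nothing about which to trust outside the calibration period – hence kai’s call for fine-scale, multi-output, non-monotonic validation targets.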

    • Thank you for your analysis.

    • Alex Heyworth

      kai, you left out (4): Trust some experts who have been shown to be wrong before on similar subjects and have still claimed they were right – it was the data that was faulty (e.g. the missing hotspot).

  56. curryja | October 25, 2010 at 7:34 am

    Harold, I agree that there is a lot of good stuff on blogs that is not peer reviewed in the conventional sense (I hope my essays at Climate Etc. are regarded in this way). But there is also much dubious stuff. I have heard of John Daly but have never visited his website, I will take a look.

    John Daly was the original source of ‘dubious stuff’. You can safely disregard his website.

    The IPCC was recently admonished for referring to ‘grey sources’, and, indeed, it was the ‘grey sources’ that led to it making the error about the Himalayan glaciers.

    • Nullius in Verba

      “The IPCC was recently admonished for referring to ‘grey sources’, and, indeed, it was the ‘grey sources’ that led to it making the error about the Himalayan glaciers.”

      Not quite.

      The IPCC rules are that peer-reviewed sources should be used where available, and where they are not that alternative sources should be carefully checked for reliability, and when cited they should be clearly marked as not being peer-reviewed. The IPCC were criticised by the IAC for not checking, for using grey sources that were quite clearly not reliable, and for not indicating their status.

      As a matter of principle, allowing grey sources is necessary to avoid the charge of Argument from Authority. Science requires arguments and evidence to be assessed on their actual content, not on the basis of who has given them a stamp of approval.

      The reason that sceptics make a fuss about the IPCC using grey literature per se is that a lot of the science and information contrary to climate orthodoxy is not peer-reviewed and the IPCC has long used this as an excuse to ignore or reject it. They have tried hard to give the impression that the IPCC only uses peer-reviewed input, as with Raj Pachauri’s statements to that effect.

      It’s a neat bit of circular logic: sceptics can’t get published in peer-reviewed journals, so journals that sceptics get published in therefore cannot be properly peer-reviewed, and since only peer-reviewed material is allowed into the IPCC reports, scepticism can and must be entirely excluded from them, by definition.

      “This was the danger of always criticising the skeptics for not publishing in the “peer-reviewed literature”. Obviously, they found a solution to that–take over a journal! So what do we do about this? I think we have to stop considering “Climate Research” as a legitimate peer-reviewed journal.” Mann, 2003.

      Fairly obviously, if the IPCC were allowed to use non-peer-reviewed sources, then it would have to pay attention to the likes of ClimateAudit and SurfaceStations, or find some other excuse for not including them. That’s why a lot of people are now trying to sell the idea that the IAC review has called for an outright ban on grey literature. It would retrospectively make a fact of their earlier fiction.

  57. Judith Curry.

    I am a little puzzled by your apparently novel interpretations of probabilism and of statistical confidence.

    Perhaps you might humour me and offer your analysis of the probability that the 2010 land/ocean index, as determined by GISS, say, will be greater than the index for 2005. As we currently await only the October to December results, perhaps you could also describe to us how one might determine the confidence in the previous analysis given that three quarters of the year’s data are already in.

    More interestingly, I would be most intrigued to know of your interpretation of what the 2010 result will mean in a climate change context, and of what confidence (and how derived) one can ascribe to said interpretation given all significant factors involved.

    I’d be especially delighted to see the working.

    • Bernard, I will be taking on this issue (surface temperature measurements) in a few months. The temperature for a single month or a single year isn’t all that meaningful in the context of climate; year to year variations are dominated by things like El Nino, volcanic eruptions, etc.

    • Bernard, I am more than a little puzzled by your post. In fact I find it incoherent, so perhaps you can amplify or clarify it. For example, you ask Dr. Curry for an “analysis of the probability.” What sort of analysis do you have in mind, and for that matter what sort of probability? Are you asking her to guess a probability? I personally think that situations like this do not have mathematical probabilities. The use of probabilities is metaphorical at best.

      Likewise for your “interpretation of what the 2010 result will mean.” What do you mean by “mean”? Do you mean how many newspaper articles there will be about it? Or how it will affect her thinking? Or what?

    • David Wojick.

      It seems that you completely missed my point, which is also essentially sidestepped by Judith Curry’s intention to do a post hoc analysis of this year’s temperature record rather than to attempt a trivial statistical prediction of the short-term trend, together with a covering explanation of how to interpret it. That type of analysis might have helped to shed light on the underlying approach to dealing with parameters taken at the top of this thread.

      And Judith, I understand very well the meaning, or lack thereof, of a single year in isolation. I’m sure that you understand that I was pressing for a clear explanation of the use of probabilities and confidence intervals, and of their application to making blanket statements about trends and about the physical phenomena that underlie them.

      Nevertheless I will await with keen interest your take on these matters when it appears next year. It will be interesting to tie the commentary then to recent expositions here by various people, and to see what people learn about statistics in the interim.

      • Dr. Curry’s post analyzes the basic IPCC arguments for attribution of warming to humans over the last 50 years. What the GISS statistical model does this year is irrelevant to that analysis, so I still don’t understand what you are asking for. Do you want an Italian flag estimate for the uncertainties in the GISS model? Many skeptics think that GISS is cooking the books, so the green percentage would be quite small. What Curry thinks I do not know but she says she will address this issue in a later post, which it deserves.

        I myself think that all of the statistical surface models are unreliable – on mathematical grounds, not to mention data quality problems. One piece of evidence for this is that there are no confidence intervals for these estimates, because the models do not use the math of probabilistic statistics. They use area averaging (of unrepresentative and questionable data). With area averaging, different thermometers carry very different weights, which violates a basic principle of statistical sampling theory. Does this help?
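The area-averaging point can be made concrete with a schematic sketch (made-up stations and cosine-latitude cell areas; this illustrates the weighting issue generically, not how any particular product such as GISTEMP actually works):

```python
import numpy as np

# Made-up stations: one lone station in a data-sparse tropical cell,
# and four clustered stations sharing one mid-latitude cell.
lats = np.array([10.0, 45.0, 45.5, 46.0, 46.5])
vals = np.array([2.0, 0.1, 0.2, 0.1, 0.2])   # anomaly readings

# Simple mean: every thermometer counts equally
simple = vals.mean()

# Area-style weighting (schematic): each grid cell gets weight ~ cos(latitude),
# split among the stations that fall inside it
cell = np.array([0, 1, 1, 1, 1])               # cell index for each station
cell_area = np.cos(np.radians([10.0, 45.0]))   # relative area of the two cells
w = cell_area[cell] / np.bincount(cell)[cell]  # split each cell's area among its stations
weighted = np.average(vals, weights=w)

print(f"simple mean: {simple:.2f}  area-weighted mean: {weighted:.2f}")
print(f"per-station weight shares: {np.round(w / w.sum(), 2)}")
```

Here the lone station inherits more than half the total weight simply because it sits alone in a large cell, while each clustered station counts for a small fraction – exactly the unequal-weights effect described above.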

  58. Thaddeus Blayda

    You believe that the heart of the IPCC’s claim is causality. I disagree. The claim of causality comes most directly and specifically from the climate models they use.

    The climate models that show global warming seem to be derived, in part or in whole, from the model at Hadley. When that upload became available I downloaded a copy with some online friends. As computer programmers, we went through the program with a fine-tooth comb, with an eye on the programmer’s notes.

    Upon dissection, it was realized that the “Hockey Stick” is based upon a vector that has no legitimate explanation for its existence. The purpose of that vector is as simplistic and heavy-handed as it is arbitrary. Any data set used, including any random data sets, will give you IPCC’s “global warming”.

    Thus, the heart of the IPCC’s claim is not causality so much as it is a claim of fraud. We’ve taken to calling it the “Mann Correction Vector”, presuming that this was the item referred to in the e-mails as his “trick” to “hide the decline”.
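Whatever one makes of the claim above, the arithmetic behind an additive adjustment vector is easy to demonstrate: adding a fixed, late-rising vector to any series imposes an uptick on its late trend, regardless of what the series does on its own. A toy demonstration – the adjustment values below are invented, not taken from any CRU file:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2000)

# Invented additive adjustment: zero before 1960, then ramping upward
adjustment = np.where(years < 1960, 0.0, 0.02 * (years - 1960))

for label, series in [("random walk", np.cumsum(rng.normal(0, 0.1, years.size))),
                      ("steady decline", -0.01 * (years - 1900.0))]:
    raw_trend = np.polyfit(years[-40:], series[-40:], 1)[0]
    adj_trend = np.polyfit(years[-40:], (series + adjustment)[-40:], 1)[0]
    print(f"{label}: post-1960 trend {raw_trend:+.4f} -> {adj_trend:+.4f} per year")
```

Because least-squares slopes are linear in the data, the late-period slope is shifted up by exactly the adjustment’s own slope (+0.02 per year here), for random and declining series alike.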

  59. Thaddeus Blayda : It could be very helpful if you were able to publish more details of your findings. I imagine they would be complementary to work by Steve McIntyre, eg.

    On “hide the decline”, I think my understanding is somewhat different. The following is cut-and-pasted from an email dialogue I had with a climate scientist a little while ago, but hopefully explains my understanding:

    MJ : The trick was to substitute measured data for proxy data over part of the total period. Of itself, and if done openly, that is not necessarily a crime. But the reason it was done was to hide the fact that the proxy data had a different trend to the measured data over the substituted period. Why did they want to hide that fact? Because it showed that the proxy data was unreliable. The whole point of a proxy for temperature is that it gives a guide to temperature when measured temperatures are not available, eg. well into the past. But if proxy and measured temperatures both exist for a given period, then it is possible to check how accurate the proxy is. If, as in this case, the proxy trend is in the opposite direction to the measured trend, then clearly the proxy is completely useless. In that case, the only proper scientific thing to do is to discard all of that proxy and everything derived from it. Did they do that? No, they substituted measured data in the overlap period, because people would otherwise have noticed the discrepancy, and left the demonstrably unreliable proxy data in place for the rest of the period.

    To put it bluntly, that is scientific fraud.


    MJ : ….. you said “there was simply no hiding. Just read the papers where the material was published.” OK, here is the IPCC 2001 WG1 Fig.2.21:
    You can see that three of the proxy series end in around 1980, and the fourth (Briffa 2000) ends around 1960. Some later proxy data was available, but was omitted. It did not follow the path of the instrumental (ie. measured) data. To any respectable scientist, the implication is clear : the proxies are unreliable and should not have been used at all, and the scientists involved must have known that.

    In this use of the “trick”, it wasn’t very well hidden. Note that the Climategate “hide the decline” email says that the “trick” was used again at least twice – though I think I am right in saying that they then took steps to hide it better. That they did hide something is of course not in dispute; they themselves said so:
    I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) and from 1961 for Keith’s to hide the decline.
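The overlap test MJ describes can be written down directly: where a proxy and the instrumental record coexist, measure their agreement before trusting the proxy outside the overlap. A schematic sketch with purely synthetic series (no real proxy data involved):

```python
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1880, 2001)
instrumental = 0.005 * (years - 1880) + rng.normal(0, 0.05, years.size)

# Synthetic proxy: tracks temperature until 1960, then diverges downward
proxy = instrumental + rng.normal(0, 0.05, years.size)
proxy[years >= 1960] -= 0.02 * (years[years >= 1960] - 1960)

def overlap_check(start, end):
    """Correlation between proxy and instrumental record over [start, end)."""
    m = (years >= start) & (years < end)
    return np.corrcoef(proxy[m], instrumental[m])[0, 1]

print(f"pre-divergence overlap (1880-1960): r = {overlap_check(1880, 1960):.2f}")
print(f"divergence era (1960-2000):         r = {overlap_check(1960, 2000):.2f}")
```

A high calibration-era correlation alongside a low or negative divergence-era one is precisely the pattern that makes substituting instrumental data over the overlap misleading: the overlap is the only place the proxy can be checked at all.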

  60. Thaddeus Blayda

    I agree, insofar as you go. The proxy data, which came from Briffa, was worthless. It amounts to a single Siberian tree that matched what Briffa wanted, more or less.

    In and of itself, however, that does not make their science completely and totally worthless crap. The vector in the program does that.

    It has been quite some time since the CRU upload. I am not a climatologist; I am a computer programmer/electrical engineer. I am under the impression that you need someone with a PhD to be taken seriously. It seems apparent to me that the obvious problems I encountered with that program would have been writ large by programmers and climatologists with the requisite sheepskin. I am disappointed that the “CRU e-mails” remain a focus for anything when the crimes committed against science are expressly overt in the programming code and in the misuse of the database. Apparently nobody will dare step forward, or there are too few Scientists with knowledge of both climatology and programming. I capitalize the word “Scientists” because the title of “scientist” is being applied to those acting as high priests for a cult of Anthropogenic Global Warming.

    As to the database: If you ask Phil Jones, he will tell you that the MET has the raw data, but there is no connection between that raw data and the numbers claimed by his entire branch of climate study. They threw it out. The numbers they use may as well *be* random numbers.

    Climatology can only begin when the sludge of politics is removed, and it must begin from scratch for the damage that sludge has done to the field.

    • Thaddeus Blayda : IMHO, what we need is a definitive analysis of the code that proves (from the code itself, not just from the comments in it) that it generates seriously incorrect results, and that it was actually used. Are you able to do that? If so, then I think your findings (written up to ‘proof’ standard) should be promoted to the widest possible audience.

      The topic was covered in earlier posts at WUWT, eg.
      which said:

      Here are the four most frequent concerns dealing with the CRU’s source code:

      1. The source code that actually printed the graph was commented out and, therefore, is not valid proof.
      2. No proof exists that shows this code was used in publishing results.
      3. Interpolation is a normal part of dealing with large data sets; this is no different.
      4. You need the raw climate data to prove that foul play occurred.

      If anyone can think of something I missed, please let me know.

  61. Thaddeus Blayda

    Did anyone run the program?

    Did anyone *read* the program?

  62. Thaddeus Blayda

    I knew this had been gone over before. THIS is the “smoking gun”. My friends and I now refer to “valadj” as the Mann Correction Vector.

    Full thread:

    Specific commentary:

    From the programming file called “briffa_sep98_d.pro”:

    ; Apply a VERY ARTIFICAL correction for decline!!
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor
    if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'

    November 24, 2009 | mark
    From the programming file “combined_wavelet.pro”:

    ; Remove missing data from start & end (end in 1960 due to decline)
    kl=where((yrmxd ge 1402) and (yrmxd le 1960),n)

    From the programming file “testeof.pro”:

    ; Computes EOFs of infilled calibrated MXD gridded dataset.
    ; Can use corrected or uncorrected MXD data (i.e., corrected for the decline).
    ; Do not usually rotate, since this loses the common volcanic and global
    ; warming signal, and results in regional-mean series instead.
    ; Generally use the correlation matrix EOFs.

    From the programming file: “pl_decline.pro”:

    ; Now apply a completely artificial adjustment for the decline
    ; (only where coefficient is positive!)

    From the programming file “olat_stp_modes.pro”:

    ; nele=n_elements(onets)
    ; onets=randomn(seed,nele)
    ; for iele = 1 , nele-1 do onets(iele)=onets(iele)+0.35*onets(iele-1)
    if ivar eq 0 then begin
    if iretain eq 0 then modets=fltarr(mxdnyr,nretain)
    ; Leading mode is contaminated by decline, so pre-filter it (but not
    ; the gridded datasets!)

    From the programming file “data4alps.pro”:

    printf,1,'IMPORTANT NOTE:'
    printf,1,'The data after 1960 should not be used. The tree-ring density'
    printf,1,'records tend to show a decline after 1960 relative to the summer'
    printf,1,'temperature in many high-latitude locations. In this data set'
    printf,1,'this "decline" has been artificially removed in an ad-hoc way, and'
    printf,1,'this means that data after 1960 no longer represent tree-ring'
    printf,1,'density variations, but have been modified to look more like the'
    printf,1,'observed temperatures.'

    From the programming file “mxd_pcr_localtemp.pro”

    ; Tries to reconstruct Apr-Sep temperatures, on a box-by-box basis, from the
    ; EOFs of the MXD data set. This is PCR, although PCs are used as predictors
    ; but not as predictands. This PCR-infilling must be done for a number of
    ; periods, with different EOFs for each period (due to different spatial
    ; coverage). *BUT* don’t do special PCR for the modern period (post-1976),
    ; since they won’t be used due to the decline/correction problem.
    ; Certain boxes that appear to reconstruct well are “manually” removed because
    ; they are isolated and away from any trees.

    From the programming file “calibrate_mxd.pro”:

    ; Due to the decline, all time series are first high-pass filter with a
    ; 40-yr filter, although the calibration equation is then applied to raw
    ; data.

    From the programming file “calibrate_correctmxd.pro”:

    ; We have previously (calibrate_mxd.pro) calibrated the high-pass filtered
    ; MXD over 1911-1990, applied the calibration to unfiltered MXD data (which
    ; gives a zero mean over 1881-1960) after extending the calibration to boxes
    ; without temperature data (pl_calibmxd1.pro). We have identified and
    ; artificially removed (i.e. corrected) the decline in this calibrated
    ; data set. We now recalibrate this corrected calibrated dataset against
    ; the unfiltered 1911-1990 temperature data, and apply the same calibration
    ; to the corrected and uncorrected calibrated MXD data.

    From the programming file “mxdgrid2ascii.pro”:

    printf,1,'NOTE: recent decline in tree-ring density has been ARTIFICIALLY'
    printf,1,'REMOVED to facilitate calibration. THEREFORE, post-1960 values'
    printf,1,'will be much closer to observed temperatures then they should be,'
    printf,1,'which will incorrectly imply the reconstruction is more skilful'
    printf,1,'than it actually is. See Osborn et al. (2004).'
    printf,1,'Osborn TJ, Briffa KR, Schweingruber FH and Jones PD (2004)'
    printf,1,'Annually resolved patterns of summer temperature over the Northern'
    printf,1,'Hemisphere since AD 1400 from a tree-ring-density network.'
    printf,1,'Submitted to Global and Planetary Change.'

    From the programming file “maps24.pro”:

    ; Plots 24 yearly maps of calibrated (PCR-infilled or not) MXD reconstructions
    ; of growing season temperatures. Uses “corrected” MXD – but shouldn’t usually
    ; plot past 1960 because these will be artificially adjusted to look closer to
    ; the real temperatures.
    if n_elements(yrstart) eq 0 then yrstart=1800
    if n_elements(doinfill) eq 0 then doinfill=0
    if yrstart gt 1937 then message,’Plotting into the decline period!’
    ; Now prepare for plotting

    From the programming file “calibrate_correctmxd.pro”:

    ; Now verify on a grid-box basis
    ; No need to verify the correct and uncorrected versions, since these
    ; should be identical prior to 1920 or 1930 or whenever the decline
    ; was corrected onwards from.

    From the programming file “recon1.pro”:

    ; Computes regressions on full, high and low pass MEAN timeseries of MXD
    ; anomalies against full NH temperatures.
    ; Specify period over which to compute the regressions (stop in 1940 to avoid
    ; the decline

    From the programming file “calibrate_nhrecon.pro”:

    ; Calibrates, usually via regression, various NH and quasi-NH records
    ; against NH or quasi-NH seasonal or annual temperatures.
    ; Specify period over which to compute the regressions (stop in 1960 to avoid
    ; the decline that affects tree-ring density records)

    From the programming file “briffa_sep98_e.pro”:

    ; PLOTS ‘ALL’ REGION MXD timeseries from age banded and from hugershoff
    ; standardised datasets.
    ; Reads Harry’s regional timeseries and outputs the 1600-1992 portion
    ; with missing values set appropriately. Uses mxd, and just the
    ; “all band” timeseries